#
d5a08a60 |
| 12-Feb-2001 |
Jake Burkholder <jake@FreeBSD.org> |
Implement a unified run queue and adjust priority levels accordingly.
- All processes go into the same array of queues, with different scheduling classes using different portions of the array. This allows user processes to have their priorities propagated up into the interrupt thread range if need be.
- I chose 64 run queues as an arbitrary number that is greater than 32. We used to have 4 separate arrays of 32 queues each, so this may not be optimal. The new run queue code was written with this in mind; changing the number of run queues only requires changing constants in runq.h and adjusting the priority levels.
- The new run queue code takes the run queue as a parameter. This is intended to be used to create per-cpu run queues. Implement wrappers for compatibility with the old interface which pass in the global run queue structure.
- Group the priority level, user priority, native priority (before propagation) and the scheduling class into a struct priority.
- Change any hard-coded priority levels that I found to use symbolic constants (TTIPRI and TTOPRI).
- Remove the curpriority global variable and use that of curproc. This was used to detect when a process' priority had lowered and it should yield. We now effectively yield on every interrupt.
- Activate propagate_priority(). It should now have the desired effect without needing to also propagate the scheduling class.
- Temporarily comment out the call to vm_page_zero_idle() in the idle loop. It interfered with propagate_priority() because the idle process needed to do a non-blocking acquire of Giant and then other processes would try to propagate their priority onto it. The idle process should not do anything except idle. vm_page_zero_idle() will return in the form of an idle-priority kernel thread which is woken up at appropriate times by the vm system.
- Update struct kinfo_proc to the new priority interface. Deliberately change its size by adjusting the spare fields. It remained the same size, but the layout has changed, so userland processes that use it would parse the data incorrectly. The size constraint should really be changed to an arbitrary version number. Also add a debug.sizeof sysctl node for struct kinfo_proc.
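The bitmap-indexed structure this describes can be sketched in a few lines of C. The layout below is an assumption for illustration, not the actual runq.h: one queue array shared by every scheduling class, with a status word so the best non-empty queue falls out of a single find-first-set.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Minimal sketch of a unified run queue.  The names follow the commit's
 * vocabulary, but the layout is assumed for illustration.
 */
#define RQ_NQS  64                      /* arbitrary, > the old 32 */
#define RQ_PPQ  (256 / RQ_NQS)          /* priority levels per queue */

struct proc {
        struct proc *p_link;
        int p_pri;                      /* 0..255, lower is better */
};

struct runq {
        uint64_t rq_status;             /* bit i set => rq_queues[i] non-empty */
        struct proc *rq_queues[RQ_NQS];
};

static void
runq_add(struct runq *rq, struct proc *p)
{
        int i = p->p_pri / RQ_PPQ;      /* map priority level to queue */

        p->p_link = rq->rq_queues[i];
        rq->rq_queues[i] = p;
        rq->rq_status |= (uint64_t)1 << i;
}

static struct proc *
runq_choose(struct runq *rq)
{
        struct proc *p;
        int i;

        if (rq->rq_status == 0)
                return (NULL);          /* nothing runnable */
        i = __builtin_ctzll(rq->rq_status);     /* lowest set bit: best queue */
        p = rq->rq_queues[i];
        if ((rq->rq_queues[i] = p->p_link) == NULL)
                rq->rq_status &= ~((uint64_t)1 << i);
        return (p);
}

int
main(void)
{
        struct runq rq = { 0 };
        struct proc a = { NULL, 200 }, b = { NULL, 16 };

        runq_add(&rq, &a);
        runq_add(&rq, &b);
        printf("chose priority %d\n", runq_choose(&rq)->p_pri);  /* 16 */
        return (0);
}
```

With this shape, the commit's tunability point is visible: changing RQ_NQS only shifts the priority-to-queue mapping; nothing else in the add/choose paths cares.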
|
#
ef73ae4b |
| 10-Jan-2001 |
Jake Burkholder <jake@FreeBSD.org> |
Use PCPU_GET, PCPU_PTR and PCPU_SET to access all per-cpu variables other than curproc.
|
Revision tags: release/4.2.0 |
|
#
35e0e5b3 |
| 20-Oct-2000 |
John Baldwin <jhb@FreeBSD.org> |
Catch up to moving headers: - machine/ipl.h -> sys/ipl.h - machine/mutex.h -> sys/mutex.h
|
Revision tags: release/4.1.1_cvs |
|
#
f6a0af80 |
| 15-Sep-2000 |
John Baldwin <jhb@FreeBSD.org> |
Idle processes are always runnable, so let them stay at SRUN.
|
#
4a6404df |
| 12-Sep-2000 |
John Baldwin <jhb@FreeBSD.org> |
Fix some printf format string warnings due to sizeof(int) != sizeof(long) on the alpha.
|
#
0384fff8 |
| 07-Sep-2000 |
Jason Evans <jasone@FreeBSD.org> |
Major update to the way synchronization is done in the kernel. Highlights include:
* Mutual exclusion is used instead of spl*(). See mutex(9). (Note: The alpha port is still in transition and currently uses both.)
* Per-CPU idle processes.
* Interrupts are run in their own separate kernel threads and can be preempted (i386 only).
Partially contributed by: BSDi (BSD/OS)
Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh
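As a hedged before/after sketch of the first highlight (hypothetical foo_mtx/foo_count, written against the mutex(9) API in its modern spelling): state once guarded by raising the interrupt priority level is now guarded by a mutex.

```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/* Hypothetical shared state, now mutex-protected instead of spl-protected. */
static struct mtx foo_mtx;
static int foo_count;

MTX_SYSINIT(foo, &foo_mtx, "foo lock", MTX_DEF);

static void
foo_bump(void)
{
        mtx_lock(&foo_mtx);     /* was: s = splhigh(); */
        foo_count++;
        mtx_unlock(&foo_mtx);   /* was: splx(s); */
}
```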
|
Revision tags: release/4.1.0, release/3.5.0_cvs |
|
#
36e9f877 |
| 28-Mar-2000 |
Matthew Dillon <dillon@FreeBSD.org> |
Commit major SMP cleanups and move the BGL (big giant lock) in the syscall path inward. A system call may select whether it needs the MP lock or not (the default being that it does need it).
A great deal of conditional SMP code for various dead-ended experiments has been removed. 'cil' and 'cml' have been removed entirely, and the locking around the cpl has been removed. The conditional separately-locked fast-interrupt code has been removed, meaning that interrupts must hold the CPL now (but they pretty much had to anyway). Another reason for doing this is that the original separate lock for interrupts just doesn't apply to the interrupt thread mechanism being contemplated.
Modifications to the cpl may now ONLY occur while holding the MP lock. For example, if an otherwise MP safe syscall needs to mess with the cpl, it must hold the MP lock for the duration and must (as usual) save/restore the cpl in a nested fashion.
This is precursor work for the real meat coming later: avoiding having to hold the MP lock for common syscalls and I/O's and interrupt threads. It is expected that the spl mechanisms and new interrupt threading mechanisms will be able to run in tandem, allowing a slow piecemeal transition to occur.
This patch should result in a moderate performance improvement due to the considerable amount of code that has been removed from the critical path, especially the simplification of the spl*() calls. The real performance gains will come later.
Approved by: jkh
Reviewed by: current, bde (exception.s)
Some work taken from: luoqi's patch
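A minimal sketch of the cpl rule stated above, assuming the 4.x-era get_mplock()/rel_mplock() and spl*() interfaces (the function itself is hypothetical):

```c
/* Hypothetical MP-safe path that must still touch the cpl: it holds the
 * MP lock for the duration and saves/restores the cpl in nested fashion. */
static void
example_touch_cpl(void)
{
        int s;

        get_mplock();           /* cpl may only change under the MP lock */
        s = splnet();           /* nested save ... */
        /* ... manipulate legacy spl-protected state ... */
        splx(s);                /* ... nested restore */
        rel_mplock();
}
```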
|
Revision tags: release/4.0.0_cvs, release/3.4.0_cvs, release/3.3.0_cvs |
|
#
42cef09b |
| 19-Aug-1999 |
Peter Wemm <peter@FreeBSD.org> |
Fix a typo and a bug.
- One RTP_PRIO_REALTIME was meant to be RTP_PRIO_IDLE.
- RTP_PRIO_FIFO was not handled.
- Move the usual case first for setrunqueue() etc.
|
#
dba6c5a6 |
| 19-Aug-1999 |
Peter Wemm <peter@FreeBSD.org> |
Extract the next runnable process selection out of cpu_switch() into a fairly machine independent C routine. gcc actually does a pretty good job of this.
Reviewed by: msmith (in principle)
|
#
09c817ba |
| 03-Jul-2009 |
Oleksandr Tymoshenko <gonzo@FreeBSD.org> |
- MFC
|
#
791d9a6e |
| 25-Jun-2009 |
Jeff Roberson <jeff@FreeBSD.org> |
- Use DPCPU for SCHED_STATS. This is somewhat awkward because the offset of the stat is not known until link time, so we must emit a function to call SYSCTL_ADD_PROC rather than using SYSCTL_PROC directly.
- Eliminate the atomic from SCHED_STAT_INC now that it's using per-cpu variables. Sched stats are always incremented while we're holding a spinlock, so no further protection is required.
Reviewed by: sam
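The pattern reads roughly like the sketch below. The counter and handler names are hypothetical stand-ins for the real SCHED_STATS machinery, but DPCPU_DEFINE/DPCPU_PTR and the link-time-offset caveat are what the commit message describes.

```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/pcpu.h>
#include <sys/smp.h>
#include <sys/sysctl.h>

/* One copy of the counter per CPU; no atomics needed because each CPU
 * only touches its own copy, and increments occur under a spinlock. */
DPCPU_DEFINE(unsigned long, example_switches);

static void
example_stat_inc(void)
{
        (*DPCPU_PTR(example_switches))++;
}

/* Read side: sum the per-CPU copies.  Registered from a function run at
 * boot (e.g. via SYSINIT and SYSCTL_ADD_PROC) because the DPCPU offset
 * is not known until link time, as the commit message notes. */
static int
sysctl_example_switches(SYSCTL_HANDLER_ARGS)
{
        unsigned long total = 0;
        int cpu;

        CPU_FOREACH(cpu)
                total += *DPCPU_ID_PTR(cpu, example_switches);
        return (sysctl_handle_long(oidp, &total, 0, req));
}
```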
|
Revision tags: release/7.2.0_cvs, release/7.2.0, release/7.1.0_cvs, release/7.1.0, release/6.4.0_cvs, release/6.4.0 |
|
#
681e4062 |
| 12-May-2008 |
Julian Elischer <julian@FreeBSD.org> |
Fix typo in runq_fuzz.
Noticed by: Elijah Buck
|
#
8df78c41 |
| 17-Apr-2008 |
Jeff Roberson <jeff@FreeBSD.org> |
- Make SCHED_STATS more generic by adding a wrapper to create the variables and sysctl nodes.
- In reset, walk the children of kern_sched_stats and reset the counters via the oid_arg1 pointer. This allows us to add arbitrary counters to the tree and still reset them properly.
- Define a set of switch types to be passed with flags to mi_switch(). These types are named SWT_*. They correspond to SCHED_STATS counters and are automatically handled in this way.
- Make the new SWT_ types more specific than the older switch stats. There are now stats for idle switches, remote idle wakeups, remote preemption, ithreads idling, etc.
- Add switch statistics for ULE's pickcpu algorithm. These stats include how much migration there is, how often affinity was successful, how often threads were migrated to the local cpu on wakeup, etc.
Sponsored by: Nokia
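One way to picture the SWT_ plumbing, with illustrative constants rather than the real sched.h values: the switch type travels in the low bits of the mi_switch() flags word, so the stats counter to bump can be derived mechanically from the flags.

```c
/* Illustrative values only; the real SW_ and SWT_ constants live in
 * <sys/sched.h> and differ.  The type occupies the low byte of the
 * flags passed to mi_switch(). */
#define SWT_NONE        0
#define SWT_PREEMPT     1       /* preempted by a higher-priority thread */
#define SWT_OWEPREEMPT  2       /* deferred preemption at critical_exit() */
#define SWT_IDLE        3       /* switching to the idle thread */
#define SWT_COUNT       4

#define SW_TYPE_MASK    0x00ff  /* SWT_* type */
#define SW_VOL          0x0100  /* voluntary switch */
#define SW_INVOL        0x0200  /* involuntary switch */

static unsigned long swt_stats[SWT_COUNT];

/* A call such as mi_switch(SW_INVOL | SWT_PREEMPT, NULL) would then be
 * counted automatically as a preemption. */
static void
count_switch(int flags)
{
        swt_stats[flags & SW_TYPE_MASK]++;
}
```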
|
#
9727e637 |
| 20-Mar-2008 |
Jeff Roberson <jeff@FreeBSD.org> |
- Restore runq to manipulating threads directly by putting runq links and rqindex back in struct thread.
- Compile kern_switch.c independently again and stop #include'ing it from schedulers.
- Remove the ts_thread backpointers and convert most code to go from struct thread to struct td_sched.
- Clean up the ts_flags #define garbage that was causing us to sometimes do things that expanded to td->td_sched->ts_thread->td_flags in 4BSD.
- Export the kern.sched sysctl node in sysctl.h
|
#
52e95411 |
| 20-Mar-2008 |
Jeff Roberson <jeff@FreeBSD.org> |
- Remove the unused and redundant sched_newproc() function.
- Remove the unused and redundant sched_newthread(), which peeks into scheduler-private structures.
|
#
a90f3f25 |
| 20-Mar-2008 |
Jeff Roberson <jeff@FreeBSD.org> |
- Move maybe_preempt() from kern_switch.c to sched_4bsd.c. This function is only used by 4BSD.
- Create a new runq_choose_fuzz() function rather than polluting runq_choose() with 4BSD-specific code.
- Move the fuzz sysctl into sched_4bsd.c.
- Remove some dead code from kern_switch.c.
|
#
237fdd78 |
| 16-Mar-2008 |
Robert Watson <rwatson@FreeBSD.org> |
In keeping with style(9)'s recommendations on macros, use a ';' after each SYSINIT() macro invocation. This makes a number of lightweight C parsers much happier with the FreeBSD kernel source, including cflow's prcc and lxr.
MFC after: 1 month
Discussed with: imp, rink
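Concretely, the recommended style looks like this (hypothetical init function; SYSINIT()'s argument shape is the kernel's own):

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>

static void
example_init(void *arg __unused)
{
        printf("example subsystem initialized\n");
}
/* The trailing ';' is the point: it leaves a complete C statement for
 * lightweight parsers that don't expand the macro. */
SYSINIT(example, SI_SUB_LAST, SI_ORDER_ANY, example_init, NULL);
```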
|
#
6617724c |
| 12-Mar-2008 |
Jeff Roberson <jeff@FreeBSD.org> |
Remove kernel support for M:N threading.
While the KSE project was quite successful in bringing threading to FreeBSD, the M:N approach taken by the kse library was never developed to its full potential. Backwards compatibility will be provided via libmap.conf for dynamically linked binaries, and static binaries will be broken.
|
Revision tags: release/7.0.0_cvs, release/7.0.0, release/6.3.0_cvs, release/6.3.0 |
|
#
431f8906 |
| 14-Nov-2007 |
Julian Elischer <julian@FreeBSD.org> |
Generally we are interested in what thread did something as opposed to what process. Since threads by default have the name of the process unless overwritten with more useful information, just print the thread name instead.
|
#
5bce4ae3 |
| 09-Oct-2007 |
Jeff Roberson <jeff@FreeBSD.org> |
- Fix ULE in kernels without PREEMPTION compiled in by always enabling the critical_exit() owepreempt check. ULE will always use owepreempt to preempt the idle thread. This change does not affect 4BSD since it will never set owepreempt without PREEMPTION enabled.
- Remove some unused code from choosethread().
Discussed with: jhb
Approved by: re
|
#
c8790f5d |
| 20-Sep-2007 |
Attilio Rao <attilio@FreeBSD.org> |
Fix some entries in witness's static table of locks. In particular:
- smp_tlb_mtx is no longer used, so it is axed.
- The smp rendezvous lock isn't really a leaf spin-mutex. Its bad placement in the table, however, has been the source of a false-positive LOR report involving the dt_lock. In any case, it would have had sched_lock below it on older kernels, so it was never really a leaf lock.
- allpmaps is only used on the ia32 architecture, so it is moved into the appropriate stub.
Additionally:
- kse_zombie_lock is no longer present, so its definition is axed.
- zombie_lock doesn't need an exported symbol, so just let it be declared static.
Tested by: kris
Approved by: jeff (mentor)
Approved by: re
|
#
b61ce5b0 |
| 17-Sep-2007 |
Jeff Roberson <jeff@FreeBSD.org> |
- Move all of the PS_ flags into either p_flag or td_flags.
- p_sflag was mostly protected by PROC_LOCK rather than the PROC_SLOCK or, previously, the sched_lock. These bugs have existed for some time.
- Allow swapout to try each thread in a process individually and then swapin the whole process if any of these fail. This allows us to move most scheduler-related swap flags into td_flags.
- Keep ki_sflag for backwards compat but change all in-source tools to use the new and more correct location of P_INMEM.
Reported by: pho
Reviewed by: attilio, kib
Approved by: re (kensmith)
|
#
67e20930 |
| 20-Aug-2007 |
Jeff Roberson <jeff@FreeBSD.org> |
- Improve runq_findbit_from(), which is used by ULE's circular queue. Mask off the bits we want to ignore on the first pass rather than doing a linear scan. This puts us within a few instructions of the cost of runq_findbit() and removes this function from the top of profiling output for context-switch-heavy workloads.
Approved by: re
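A small userland sketch of that first pass (hypothetical names; the real routine is runq_findbit_from() in kern_switch.c): mask off the bits below the starting index, try ffs() once, and only wrap around when the masked word is empty.

```c
#include <stdio.h>
#include <strings.h>    /* ffs() */

/* Find the first set bit at or after 'start', wrapping around, using a
 * masked ffs() instead of a linear scan.  32-bit word for simplicity;
 * 'start' must be in 0..31. */
static int
findbit_from(unsigned int word, int start)
{
        unsigned int masked;
        int bit;

        /* First pass: ignore bits below 'start' by masking them off. */
        masked = word & ~((1u << start) - 1);
        if ((bit = ffs(masked)) != 0)
                return (bit - 1);
        /* Wrap around: search the bits we masked off. */
        if ((bit = ffs(word)) != 0)
                return (bit - 1);
        return (-1);                    /* no runnable queue */
}

int
main(void)
{
        unsigned int runqbits = 0x00000412;     /* queues 1, 4, 10 non-empty */

        printf("%d\n", findbit_from(runqbits, 5));      /* -> 10 */
        printf("%d\n", findbit_from(runqbits, 11));     /* wraps -> 1 */
        return (0);
}
```

Two ffs() calls bound the whole circular search, which is what brings it within a few instructions of runq_findbit().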
|
#
413ea6f5 |
| 04-Aug-2007 |
Jeff Roberson <jeff@FreeBSD.org> |
- Set SW_PREEMPT when we preempt in critical_exit().
Approved by: re
|
#
56696bd1 |
| 19-Jul-2007 |
Jeff Roberson <jeff@FreeBSD.org> |
- Remove explicit references to sched_lock. A simpler assert will do.
Approved by: re
|