# 0502fe2e | 04-Apr-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Allow static_boost to specify no boost with '0', traditional kernel fixed pri boost with '1', or any priority less than the current thread's priority with a value greater than two. Default the boost to PRI_MIN_TIMESHARE to prevent regular user-space threads from starving threads in the kernel. This prevents these user threads from also being scheduled as if they are high fixed-priority kernel threads.
- Restore the setting of lowpri in tdq_choose(). It has to be either here or in sched_switch(). I accidentally removed it from both places.
Tested by: kris
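As a rough illustration of the boost policy this commit describes, here is a hypothetical userland sketch; the function name, the constant's value, and the exact branch order are assumptions, not the kernel's actual code:

```c
/* Hypothetical sketch of the static_boost policy described above.
 * PRI_MIN_TIMESHARE's value and the function name are illustrative;
 * a lower numeric priority means a higher scheduling priority. */
#define PRI_MIN_TIMESHARE 88            /* assumed example value */

static int static_boost = PRI_MIN_TIMESHARE;    /* the new default */

/* Priority a thread should sleep at, given its current priority. */
int
sleep_boost(int td_priority)
{
	if (static_boost == 0)
		return (td_priority);           /* '0': no boost at all */
	if (static_boost == 1)
		return (PRI_MIN_TIMESHARE);     /* '1': traditional fixed boost */
	/* >= 2: boost only if it actually raises the priority. */
	if (static_boost < td_priority)
		return (static_boost);
	return (td_priority);
}
```

With the default, a user thread at priority 120 sleeps boosted to PRI_MIN_TIMESHARE, while a kernel thread already above that level is left alone.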
|
# 03d17db7 | 04-Apr-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Don't check for the ITHD pri class in tdq_load_add and rem. 4BSD doesn't do this either. Simply check P_NOLOAD. It'd be nice if this was in a thread flag so we didn't have an extra cache miss every time we add and remove a thread from the run-queue.
|
# 9727e637 | 20-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Restore runq to manipulating threads directly by putting runq links and rqindex back in struct thread.
- Compile kern_switch.c independently again and stop #include'ing it from schedulers.
- Remove the ts_thread backpointers and convert most code to go from struct thread to struct td_sched.
- Clean up the ts_flags #define garbage that was causing us to sometimes do things that expanded to td->td_sched->ts_thread->td_flags in 4BSD.
- Export the kern.sched sysctl node in sysctl.h.
|
# 8b16c208 | 20-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- ULE and 4BSD share only one line of code from sched_newthread(), so implement the required pieces in sched_fork_thread(). The td_sched pointer is already set up by thread_init anyway.
|
# 6d55b3ec | 19-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Remove some dead code and comments related to KSE.
- Don't set tdq_lowpri on every switch; it should be precisely maintained now.
- Add some comments to sched_thread_priority().
|
# 374ae2a3 | 19-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Relax requirements for p_numthreads, p_threads, p_swtick, and p_nice from requiring the per-process spinlock to only requiring the process lock.
- Reflect these changes in the proc.h documentation and consumers throughout the kernel. This is a substantial reduction in locking cost for these fields and was made possible by recent changes to threading support.
|
# 237fdd78 | 16-Mar-2008 | Robert Watson <rwatson@FreeBSD.org> |
In keeping with style(9)'s recommendations on macros, use a ';' after each SYSINIT() macro invocation. This makes a number of lightweight C parsers much happier with the FreeBSD kernel source, including cflow's prcc and lxr.
MFC after: 1 month
Discussed with: imp, rink
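The convention is easy to see in a tiny standalone example. The macro below is a simplified stand-in for the real SYSINIT() (which registers the function in a linker set), kept only to show that the invocation itself ends in a ';' rather than relying on one hidden inside the macro body:

```c
#include <stdio.h>

/* Simplified stand-in for SYSINIT(): here we merely record the
 * function pointer instead of registering it in a linker set. */
#define SYSINIT(name, sub, order, func, arg) \
	static void (*name##_sysinit_fn)(void *) = (func)

static void
example_init(void *arg)
{
	(void)arg;
	printf("example_init run\n");
}

/* Note the trailing ';' on the invocation itself, per style(9). */
SYSINIT(example, 0, 0, example_init, NULL);
```

Because the macro body does not end in a semicolon, a parser that cannot expand it still sees a well-formed declaration-like statement.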
|
# d628fbfa | 14-Mar-2008 | John Baldwin <jhb@FreeBSD.org> |
Make the function prototype for cpu_search() match the declaration so that this still compiles with gcc3.
|
# 6617724c | 12-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
Remove kernel support for M:N threading.

While the KSE project was quite successful in bringing threading to FreeBSD, the M:N approach taken by the kse library was never developed to its full potential. Backwards compatibility will be provided via libmap.conf for dynamically linked binaries; static binaries will be broken.
|
# c5aa6b58 | 12-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Pass the priority argument from *sleep() into sleepq and down into sched_sleep(). This removes extra thread_lock() acquisition and allows the scheduler to decide what to do with the static boost.
- Change the priority arguments to cv_* to match sleepq/msleep/etc., where 0 means no priority change. Catch -1 in cv_broadcastpri() and convert it to 0 for now.
- Set a flag when sleeping in a way that is compatible with swapping, since direct priority comparisons are meaningless now.
- Add a sysctl to ule, kern.sched.static_boost, that defaults to on, which controls the boost behavior. Turning it off gives better performance in some workloads but needs more investigation.
- While we're modifying sleepq, change signal and broadcast to both return with the lock held, as the lock was held on enter.
Reviewed by: jhb, peter
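The cv_broadcastpri() compatibility catch described above amounts to a one-line normalization; a hedged sketch, with a helper name invented purely for illustration:

```c
/* Invented helper illustrating the -1 -> 0 shim: 0 now means
 * "no priority change", so the legacy -1 is mapped onto it. */
int
normalize_sleep_pri(int pri)
{
	return (pri == -1 ? 0 : pri);
}
```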
|
# c143ac21 | 10-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Fix the invalid priority panics people are seeing by forcing tdq_runq_add to select the runq rather than hoping we set it properly when we adjusted the priority. This involves the same number of branches as before, so it should perform identically without the extra fragility.
Tested by: bz
Reviewed by: bz
|
# 7217d8d1 | 10-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Don't rely on a side effect of sched_prio() to set the initial ts_runq for thread0. Set it directly in sched_setup(). This fixes traps on boot seen on some machines.
Reported by: phk
|
# 73daf66f | 10-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
Reduce ULE context switch time by over 25%.

- Only calculate timeshare priorities once per tick or when a thread is woken from sleeping.
- Keep the ts_runq pointer valid after all priority changes.
- Call tdq_runq_add() directly from sched_switch() without passing in via tdq_add(). We don't need to adjust loads or runqs anymore.
- Sort tdq and ts_sched according to utilization to improve cache behavior.
Sponsored by: Nokia
|
# ff256d9c | 10-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Add an implementation of sched_preempt() that avoids excessive IPIs.
- Normalize the preemption/ipi setting code by introducing sched_shouldpreempt() so the logic is identical and not repeated between tdq_notify() and sched_setpreempt().
- In tdq_notify() don't set NEEDRESCHED, as we may not actually own the thread lock; this could have caused us to lose td_flags settings.
- Garbage collect some tunables that are no longer relevant.
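A plausible shape for such a sched_shouldpreempt() decision, sketched in userland; the threshold value, the idle-priority constant, and the branch order are assumptions rather than the committed code:

```c
/* Userland sketch of a sched_shouldpreempt()-style decision.
 * Lower numeric priority is higher; the constants are assumed. */
#define PRI_IDLE 255                    /* assumed idle priority */

static int preempt_thresh = 80;         /* example tunable setting */

int
should_preempt(int pri, int cpri)
{
	if (pri >= cpri)
		return (0);     /* new thread isn't higher priority */
	if (cpri >= PRI_IDLE)
		return (1);     /* always preempt an idle cpu */
	if (preempt_thresh == 0)
		return (0);     /* preemption disabled by the tunable */
	if (pri <= preempt_thresh)
		return (1);     /* important enough to warrant an IPI */
	return (0);
}
```

Centralizing the test this way is what lets tdq_notify() and sched_setpreempt() stay in agreement instead of duplicating the branches.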
|
# 62fa74d9 | 02-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
Add support for the new cpu topology api:
- When searching for affinity, search backwards in the tree from the last cpu we ran on while the thread still has affinity for the group. This can take advantage of knowledge of shared L2 or L3 caches among a group of cores.
- When searching for the least loaded cpu, find it via the least loaded path through the tree. This load balances system bus links, individual cache levels, and hyper-threaded/SMT cores.
- Make the periodic balancer recursively balance the highest and lowest loaded cpu across each link.

Add support for cpusets:
- Convert the cpuset to a simple native cpumask_t while the kernel still only supports cpumask.
- Pass the derived cpumask down through the cpu_search functions to restrict the result cpus.
- Make the various steal functions resilient to failure, since all threads can no longer run on all cpus.

General improvements:
- Precisely track the lowest priority thread on every runq with tdq_setlowpri(). Before, it was more advisory, but this ended up having pathological behaviors.
- Remove many #ifdef SMP conditions to simplify the code.
- Get rid of the old cumbersome tdq_group. This is more naturally expressed via the cpu_group tree.
Sponsored by: Nokia
Testing by: kris
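The "least loaded path through the tree" search can be pictured with a toy model; the struct and field names below are simplified stand-ins for the kernel's cpu_group, not its real layout:

```c
/* Toy model of descending a topology tree along the least loaded
 * link at every level until a leaf cpu is reached. */
struct cg {
	int load;           /* aggregate load of this group */
	int cpu;            /* leaf cpu id, or -1 for an internal node */
	int nchildren;
	struct cg *child;   /* array of child groups */
};

int
lowest_loaded_cpu(struct cg *g)
{
	struct cg *best;
	int i;

	while (g->cpu == -1) {
		best = &g->child[0];
		for (i = 1; i < g->nchildren; i++)
			if (g->child[i].load < best->load)
				best = &g->child[i];
		g = best;       /* follow the least loaded link down */
	}
	return (g->cpu);
}
```

Because each level of the tree corresponds to a shared resource (bus link, cache level, SMT core), picking the least loaded child at every level balances all of those resources at once.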
|
# 81aa7175 | 02-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Replace the old smp cpu topology specification with a new, more flexible tree structure that encodes the level of cache sharing and other properties.
- Provide several convenience functions for creating one and two level cpu trees as well as a default flat topology. The system now always has some topology.
- On i386 and amd64 create a separate level in the hierarchy for HTT and multi-core cpus. This will allow the scheduler to intelligently load balance non-uniform cores. Presently we don't detect what level of the cache hierarchy is shared at each level in the topology.
- Add a mechanism for testing common topologies that have more information than the MD code is able to provide via the kern.smp.topology tunable. This should be considered a debugging tool only and not a stable api.
Sponsored by: Nokia
|
# 885d51a3 | 02-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Add a new sched_affinity() api to be used in the upcoming cpuset implementation.
- Add empty implementations of sched_affinity() to 4BSD and ULE.
Sponsored by: Nokia
|
Revision tags: release/7.0.0_cvs, release/7.0.0 |
|
# 317da705 | 23-Jan-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- sched_prio() should only adjust tdq_lowpri if the thread is running or on a run-queue. If the priority is numerically raised, only change lowpri if we're certain it will be correct. Some slop is allowed; however, previously we could erroneously raise lowpri for an idle cpu that a thread had recently run on, which led to errors in load balancing decisions.
|
Revision tags: release/6.3.0_cvs, release/6.3.0 |
|
# a755f214 | 15-Jan-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- When executing the 'tryself' branch in sched_pickcpu(), look at the lowest priority on the queue for the current cpu vs curthread's priority. In the case that curthread is waking up many threads of a lower priority, as would happen with a turnstile_broadcast() or wakeup() of many threads, this prevents them from all ending up on the current cpu.
- In sched_add() make the relationship between a scheduled ithread and the current cpu advisory rather than strict. Only give the ithread affinity for the current cpu if it's actually being scheduled from a hardware interrupt. This prevents it from migrating when it simply blocks on a lock.
Sponsored by: Nokia
|
# fd0b8c78 | 05-Jan-2008 | Jeff Roberson <jeff@FreeBSD.org> |
- Restore timeslicing code for all but SCHED_FIFO priority classes.
Reported by: Peter Jeremy <peterjeremy@optushome.com.au>
|
# 731016fe | 22-Dec-2007 | Wojciech A. Koszek <wkoszek@FreeBSD.org> |
Make SCHED_ULE buildable with gcc3.
Reviewed by: cognet (mentor), jeffr
Approved by: cognet (mentor), jeffr
|
# eea4f254 | 16-Dec-2007 | Jeff Roberson <jeff@FreeBSD.org> |
- Re-implement lock profiling in such a way that it no longer breaks the ABI when enabled. There is no longer an embedded lock_profile_object in each lock. Instead, a list of lock_profile_objects is kept per-thread for each lock it may own. The cnt_hold statistic is now always 0 to facilitate this.
- Support shared locking by tracking individual lock instances and statistics in the per-thread per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a per-cpu static lock_prof allocator. This removes the need for an array of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a critical_enter() is required, and not a spinlock_enter(), to modify the per-cpu tables.
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers since we track owners now.
In collaboration with: Kip Macy
Sponsored by: Nokia
|
# 435806d3 | 11-Dec-2007 | David Xu <davidxu@FreeBSD.org> |
Fix LOR of thread lock and umtx's priority propagation mutex due to the reworking of scheduler lock.
MFC: after 3 days
|
# 431f8906 | 14-Nov-2007 | Julian Elischer <julian@FreeBSD.org> |
Generally we are interested in what thread did something as opposed to what process. Since threads by default have the name of the process unless overwritten with more useful information, just print the thread name instead.
|
# cbdd62ad | 23-Oct-2007 | Peter Grehan <grehan@FreeBSD.org> |
Cut over to ULE on PowerPC

kern/sched_ule.c - Add __powerpc__ to the list of supported architectures
powerpc/conf/GENERIC - Swap SCHED_4BSD with SCHED_ULE
powerpc/powerpc/genassym.c - Export TD_LOCK field of thread struct
powerpc/powerpc/swtch.S - Handle new 3rd parameter to cpu_switch() by updating the old thread's lock. Note: uniprocessor-only, will require modification for MP support.
powerpc/powerpc/vm_machdep.c - Set 3rd param of cpu_switch to mutex of old thread's lock, making the call a no-op.
Reviewed by: marcel, jeffr (slightly older version)
|