#
3db720fd |
| 25-Aug-2006 |
David Xu <davidxu@FreeBSD.org> |
Add user priority loaning code to support priority propagation for 1:1 threading's POSIX priority mutexes; the code is a no-op until the priority-aware umtx code is committed.
|
#
36ec198b |
| 15-Jun-2006 |
David Xu <davidxu@FreeBSD.org> |
Add scheduler API sched_relinquish(); the API is used to implement the yield() and sched_yield() syscalls. Every scheduler has its own way to relinquish the CPU. The ULE and CORE schedulers have two internal run-queues; a timesharing thread that calls the yield() syscall should be moved to the inactive queue.
|
#
b41f1452 |
| 13-Jun-2006 |
David Xu <davidxu@FreeBSD.org> |
Add scheduler CORE, work I did half a year ago and recently picked up again. The scheduler is forked from ULE, but the algorithm for detecting an interactive process is almost completely different from ULE's; it comes from the Linux paper "Understanding the Linux 2.6.8.1 CPU Scheduler", although I still use the same word "score" as the priority boost in the ULE scheduler.
Briefly, the scheduler has the following characteristics:
1. A timesharing process's nice value is seriously respected; the timeslice and interactivity-detection algorithms are based on the nice value.
2. Per-CPU scheduling queues and load balancing.
3. O(1) scheduling.
4. Some CPU-affinity code in the wakeup path.
5. Support for POSIX SCHED_FIFO and SCHED_RR.
Unlike the 4BSD and ULE schedulers, which use the fuzzy RQ_PPQ grouping, this scheduler uses 256 priority queues. Unlike ULE, which uses both pull and push, this scheduler uses only the pull method; the main reason is to let the relatively idle CPU do the work. However, the whole scheduler is currently protected by the big sched_lock, so the benefit is not visible; it can actually be worse than nothing, because all other CPUs are locked out while we do the balancing work, a problem the 4BSD scheduler does not have. The scheduler does not support hyperthreading very well; in fact, it makes no distinction between physical and logical CPUs. This should be improved in the future. The scheduler also has a priority-inversion problem on MP machines; it is not good for realtime scheduling and can starve realtime processes. That said, MySQL super-smack seems to run better on my Pentium-D machine when using libthr, on both UP and SMP kernels.
|
#
0ae716e5 |
| 06-Jun-2006 |
David Xu <davidxu@FreeBSD.org> |
Make ke_rqindex unsigned.
|
Revision tags: release/5.5.0_cvs, release/5.5.0, release/6.1.0_cvs, release/6.1.0 |
|
#
9f8eb3cb |
| 27-Dec-2005 |
David Xu <davidxu@FreeBSD.org> |
Use variable i instead of variable cpus as the index to get the correct kseq.
|
#
a1d4fe69 |
| 19-Dec-2005 |
David Xu <davidxu@FreeBSD.org> |
Fix a bug in the slice calculation code: the current code uses hz, but sched_clock() is driven by the stat clock (stathz).
Submitted by: taku at tackymt dot homeip dot net
|
Revision tags: release/6.0.0_cvs, release/6.0.0 |
|
#
a8615740 |
| 22-Sep-2005 |
David Xu <davidxu@FreeBSD.org> |
Temporarily disable the nice threshold detection code, as it can starve a thread holding a critical resource, e.g. a mutex or other implicit synchronization flags. Give a thread that exceeds the nice threshold a minimum time slice.
PR: kern/86087
|
#
f8ec133e |
| 19-Aug-2005 |
David Xu <davidxu@FreeBSD.org> |
Move up the code testing KEF_HOLD to avoid ke_cpu being changed unexpectedly for PRI_ITHD and PRI_REALTIME threads.
|
#
1278181c |
| 08-Aug-2005 |
David Xu <davidxu@FreeBSD.org> |
Try our best to keep a preempted thread at the front of the run queue; this seems to improve performance a bit for some workloads, but interactive lag is still seen unless the cpu idling race is fixed.
|
#
3d16f519 |
| 31-Jul-2005 |
David Xu <davidxu@FreeBSD.org> |
If a thread was removed from the system run queue, kse_assign() shouldn't add it again.
|
#
05a6b7ad |
| 25-Jul-2005 |
Xin LI <delphij@FreeBSD.org> |
Cast to uintptr_t when the compiler complains. This fixes the ULE scheduler breakage accompanying the recent atomic_ptr() change.
|
#
4da0d332 |
| 24-Jun-2005 |
Peter Wemm <peter@FreeBSD.org> |
Move HWPMC_HOOKS into its own opt_hwpmc_hooks.h file. It doesn't merit being in opt_global.h and forcing a global recompile when only a few files reference it.
Approved by: re
|
#
6680bbd5 |
| 07-Jun-2005 |
Jeff Roberson <jeff@FreeBSD.org> |
- Fix the case where we're not preempting but there is already a newtd, as happens via thread_switchout(). I don't particularly like the structure of the code here. We call out to thread code twice when a thread voluntarily switches: once to thread_switchout() and once to slot_fill(), while sched_4BSD does even more redundant work to select another thread to use our remaining slice. This should be simplified in the future, but for now I'm only going to fix the bug, not the bad design.
|
#
9fe02f7e |
| 04-Jun-2005 |
Jeff Roberson <jeff@FreeBSD.org> |
- It's 2005 already, I've been working on this for three years.
|
#
21381d1b |
| 04-Jun-2005 |
Jeff Roberson <jeff@FreeBSD.org> |
- Don't SLOT_USE() in the preempt case; sched_add() has already taken the slot for us. Previously, we would take two slots on every preempt, and setrunqueue() would fix it up for us in the non-threaded case. The threaded case was simply broken.
- Clean up flags, prototypes, and comments.
|
Revision tags: release/5.4.0_cvs, release/5.4.0 |
|
#
ebccf1e3 |
| 19-Apr-2005 |
Joseph Koshy <jkoshy@FreeBSD.org> |
Bring a working snapshot of hwpmc(4), its associated libraries, userland utilities and documentation into -CURRENT.
Bump FreeBSD_version.
Reviewed by: alc, jhb (kernel changes)
|
#
77918643 |
| 08-Apr-2005 |
Stephan Uphoff <ups@FreeBSD.org> |
Sprinkle some volatile magic and rearrange things a bit to avoid race conditions in critical_exit now that it no longer blocks interrupts.
Reviewed by: jhb
|
#
7a9507b6 |
| 23-Feb-2005 |
Jeff Roberson <jeff@FreeBSD.org> |
- A test in sched_switch() is no longer necessary and it is incorrect when td0 is preempted before it voluntarily switches.
Discovered by: Arjan Van Leeuwen <avleeuwen@gmail.com>
|
#
42a29039 |
| 04-Feb-2005 |
Jeff Roberson <jeff@FreeBSD.org> |
- Add ke_runq == NULL to the conditions that will cause us to abort adjusting timeshare loads in sched_class(). This is only important if the thread has never run; otherwise the state checks should work as expected.
|
Revision tags: release/4.11.0_cvs, release/4.11.0 |
|
#
50aaa791 |
| 30-Dec-2004 |
John Baldwin <jhb@FreeBSD.org> |
Fix a typo and two whitespace nits.
|
#
f5c157d9 |
| 30-Dec-2004 |
John Baldwin <jhb@FreeBSD.org> |
Rework the interface between priority propagation (lending) and the schedulers a bit to ensure more correct handling of priorities and fewer priority inversions:
- Add two functions to the sched(9) API to handle priority lending: sched_lend_prio() and sched_unlend_prio(). The turnstile code uses these functions to ask the scheduler to lend a thread a set priority and to tell the scheduler when it thinks it is ok for a thread to stop borrowing priority. The unlend case is slightly complex in that the turnstile code tells the scheduler what the minimum priority of the thread needs to be to satisfy the requirements of any other threads blocked on locks owned by the thread in question. The scheduler then decides whether the thread can go back to normal mode (if its normal priority is high enough to satisfy the pending lock requests) or if it should continue to use the priority specified in the sched_unlend_prio() call. This involves adding a new per-thread flag, TDF_BORROWING, which replaces the ULE-only kse flag for priority elevation.
- Schedulers now refuse to lower the priority of a thread that is currently borrowing another thread's priority.
- If a scheduler changes the priority of a thread that is currently sitting on a turnstile, it will call a new function, turnstile_adjust(), to inform the turnstile code of the change. This function resorts the thread on the priority list of the turnstile if needed, and if the thread ends up at the head of the list (due to having the highest priority) and its priority was raised, it will propagate that new priority to the owner of the lock it is blocked on.
Some additional fixes specific to the 4BSD scheduler:
- Common code for updating the priority of a thread when the user priority of its associated kse group changes has been consolidated into a new static function, resetpriority_thread(). One change to this function is that it will now only adjust the priority of a thread if it already has a time-sharing priority, thus preserving any boosts from a tsleep() until the thread returns to userland. Also, resetpriority() no longer calls maybe_resched() on each thread in the group. Instead, the code calling resetpriority() is responsible for calling resetpriority_thread() on any threads that need to be updated.
- schedcpu() now uses resetpriority_thread() instead of just calling sched_prio() directly after it updates a kse group's user priority.
- sched_clock() now uses resetpriority_thread() rather than writing directly to td_priority.
- sched_nice() now updates all the priorities of the threads after the group priority has been adjusted.
Discussed with: bde
Reviewed by: ups, jeffr
Tested on: 4bsd, ule
Tested on: i386, alpha, sparc64
|
#
2ebf8eb1 |
| 27-Dec-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- Unintentionally checked in a debugging panic. Remove that.
|
#
598b368d |
| 26-Dec-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- Fix a long-standing problem where an ithread would not honor sched_pin().
- Remove the sched_add() wrapper that used sched_add_internal() as a backend. Its only purpose was to interpret one flag and turn it into an int. Do the right thing and interpret the flag in sched_add() instead.
- Pass the flag argument of sched_add() to kseq_runq_add() so that we can get the SRQ_PREEMPT optimization too.
- Add a KEF_INTERNAL flag. If KEF_INTERNAL is set we don't adjust the SLOT counts; otherwise the slot counts are adjusted as soon as we enter sched_add() or sched_rem(), rather than when the thread is actually placed on the run queue. This greatly simplifies the handling of slots.
- Remove the explicit prevention of migration for ithreads on non-x86 platforms. This was never shown to have any real benefit.
- Remove the unused class argument to KSE_CAN_MIGRATE().
- Add ktr points for thread migration events.
- Fix a long-standing bug on platforms which don't initialize the cpu topology. The ksg_maxid variable was never correctly set on these platforms, which caused the long-term load balancer to never inspect more than the first group or processor.
- Fix another bug which prevented the long-term load balancer from working properly. If stathz != hz we can't expect sched_clock() to be called on the exact tick count that we're anticipating.
- Rearrange sched_switch() a bit to reduce indentation levels.
|
#
81d47d3f |
| 26-Dec-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- Remove earlier KTR_ULE tracepoints.
- Define new KTR_SCHED points so that we can graph the operation of the scheduler.
|
#
7842f65e |
| 14-Dec-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- Garbage collect several unused members of struct kse and struct ksegrp. As best as I can tell, some of these were never used.
|