#
6f96710c |
| 28-Aug-2004 |
Peter Wemm <peter@FreeBSD.org> |
Back out the previous backout (with scott's ok). sched_ule.c:1.122 is believed to fix the problem with ULE that this change triggered.
|
#
2384290c |
| 20-Aug-2004 |
Scott Long <scottl@FreeBSD.org> |
Revert the previous change. It works great for 4BSD but causes major problems for ULE. The reason is quite unknown and worrisome.
|
#
2c86298c |
| 20-Aug-2004 |
Scott Long <scottl@FreeBSD.org> |
In maybe_preempt(), ignore threads that are in an inconsistent state. This is an effective band-aid for at least some of the scheduler corruption seen recently. The real fix will involve protecting threads while they are inconsistent, and will come later.
Submitted by: julian
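A minimal sketch of the kind of guard this describes (the specific state tests are assumptions, not the committed diff):

    /*
     * Hypothetical band-aid in maybe_preempt(): if the candidate
     * thread's state is in flux, decline to preempt and let it be
     * queued normally instead.
     */
    if (TD_IS_INHIBITED(td) || td->td_kse == NULL)
            return (0);     /* inconsistent state; fall back to queueing */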
|
#
0f4ad918 |
| 10-Aug-2004 |
Scott Long <scottl@FreeBSD.org> |
Add a temporary debugging hack to detect a deadlock in setrunqueue(). This is here so that we can gather stats on the nature of the recent rash of hard lockups, and in this particular case panic the machine instead of letting it deadlock forever.
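A temporary hack of that shape might look roughly like this (entirely illustrative; the predicate, the bound, and the panic string are all assumptions):

    /*
     * Hypothetical bounded spin: rather than letting a corrupted run
     * queue hang the machine silently, give up after many iterations
     * and panic so the state can be examined in a crash dump.
     */
    int spins = 0;

    while (runqueue_looks_wedged(kg)) {     /* assumed predicate */
            if (++spins > 100000000)
                    panic("setrunqueue: run queue deadlocked");
            cpu_spinwait();
    }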
|
#
1a5cd27b |
| 09-Aug-2004 |
Julian Elischer <julian@FreeBSD.org> |
Make kg->kg_runnable actually count runnable threads in the ksegrp run queue instead of only doing it sometimes. This is not used outside of debugging code in the current code, but that will probably change.
|
#
732d9528 |
| 09-Aug-2004 |
Julian Elischer <julian@FreeBSD.org> |
Increase the amount of data exported by KTR in the KTR_RUNQ setting. This extra data is needed to really follow what is going on in the threaded case.
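For reference, a KTR_RUNQ trace point carrying this sort of extra per-thread detail would look something like the following (an assumed example, not the committed trace points):

    /* Assumed example: trace thread, priority, ksegrp, and run count. */
    CTR4(KTR_RUNQ, "setrunqueue: td=%p pri=%d kg=%p runnable=%d",
        td, td->td_priority, kg, kg->kg_runnable);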
|
#
44fe3c1f |
| 06-Aug-2004 |
John Baldwin <jhb@FreeBSD.org> |
Don't scare users with a warning about preemption being off when it isn't yet safe to have on by default.
|
#
1a8cfbc4 |
| 27-Jul-2004 |
Robert Watson <rwatson@FreeBSD.org> |
Pass a thread argument into cpu_critical_{enter,exit}() rather than dereference curthread. It is called only from critical_{enter,exit}(), which already dereferences curthread. This doesn't seem to affect SMP performance in my benchmarks, but improves MySQL transaction throughput by about 1% on UP on my Xeon.
Head nodding: jhb, bmilekic
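A rough sketch of the calling pattern this describes (the body is an assumption; only the signature change comes from the log):

    void
    critical_enter(void)
    {
            struct thread *td;

            td = curthread;                 /* already dereferenced here... */
            if (td->td_critnest == 0)
                    cpu_critical_enter(td); /* ...so hand it down directly */
            td->td_critnest++;
    }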
|
#
18f480f8 |
| 23-Jul-2004 |
Scott Long <scottl@FreeBSD.org> |
Remove the previous hack since it doesn't make a difference and is getting in the way of debugging.
|
#
9493183e |
| 22-Jul-2004 |
Scott Long <scottl@FreeBSD.org> |
Disable the PREEMPTION-enabled code in critical_exit() that encourages switching to a different thread. This is just a hack to try to improve stability some more, but likely points closer to the real culprit.
|
#
52eb8464 |
| 16-Jul-2004 |
John Baldwin <jhb@FreeBSD.org> |
- Move TDF_OWEPREEMPT, TDF_OWEUPC, and TDF_USTATCLOCK over to td_pflags since they are only accessed by curthread and thus do not need any locking.
- Move pr_addr and pr_ticks out of struct uprof (which is per-process) and directly into struct thread as td_profil_addr and td_profil_ticks as these variables are really per-thread. (They are used to defer an addupc_intr() that was too "hard" until ast()).
|
#
2d50560a |
| 10-Jul-2004 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Update for the KDB framework:
o Make debugging code conditional upon KDB instead of DDB.
o Call kdb_enter() instead of Debugger().
o Call kdb_backtrace() instead of db_print_backtrace() or backtrace().
kern_mutex.c:
o Replace checks for db_active with checks for kdb_active and make them unconditional.

kern_shutdown.c:
o s/DDB_UNATTENDED/KDB_UNATTENDED/g
o s/DDB_TRACE/KDB_TRACE/g
o Save the TID of the thread doing the kernel dump so the debugger knows which thread to select as the current when debugging the kernel core file.
o Clear kdb_active instead of db_active and do so unconditionally.
o Remove backtrace() implementation.

kern_synch.c:
o Call kdb_reenter() instead of db_error().
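An illustrative before/after for the substitutions listed above (a hypothetical call site, not a quoted diff):

    #ifdef KDB                              /* was: #ifdef DDB */
            if (debugger_on_panic)
                    kdb_enter("panic");     /* was: Debugger("panic"); */
            kdb_backtrace();                /* was: backtrace(); */
    #endif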
|
#
8b44a2e2 |
| 03-Jul-2004 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Unbreak build for the the !PREEMPTION case: don't define variables that aren't used in that case.
|
#
0c0b25ae |
| 02-Jul-2004 |
John Baldwin <jhb@FreeBSD.org> |
Implement preemption of kernel threads natively in the scheduler rather than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to determine if a thread about to be added to a run queue should be preempted to directly. If it is not safe to preempt or if the new thread does not have a high enough priority, then the function returns false and sched_add() adds the thread to the run queue. If the thread should be preempted to but the current thread is in a nested critical section, then the flag TDF_OWEPREEMPT is set and the thread is added to the run queue. Otherwise, mi_switch() is called immediately and the thread is never added to the run queue since it is switched to directly. When exiting an outermost critical section, if TDF_OWEPREEMPT is set, then clear it and call mi_switch() to perform the deferred preemption.
- Remove explicit preemption from ithread_schedule() as calling setrunqueue() now does all the correct work. This also removes the do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a chance to run if the architecture supports native preemption since the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread preemption, namely alpha, i386, and amd64.
This change should largely be a NOP for the default case as committed except that we will do fewer context switches in a few cases and will avoid the run queues completely when preempting.
Approved by: scottl (with his re@ hat)
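A condensed sketch of the decision tree described above (assembled from the commit text; the bodies, the two-argument mi_switch() call, and other details are assumptions rather than the committed code):

    int
    maybe_preempt(struct thread *td)
    {
            struct thread *ctd = curthread;

            /* Not safe to preempt, or not high enough priority: queue it.
             * (An additional "safe to preempt" check would go here.) */
            if (td->td_priority >= ctd->td_priority)
                    return (0);

            /* Nested critical section: record the debt and queue it. */
            if (ctd->td_critnest > 1) {
                    ctd->td_flags |= TDF_OWEPREEMPT;
                    return (0);
            }

            /* Otherwise switch immediately; td never touches the run queue. */
            mi_switch(SW_INVOL, td);
            return (1);
    }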
|
Revision tags: release/4.10.0_cvs, release/4.10.0, release/5.2.1_cvs, release/5.2.1 |
|
#
b209e5e3 |
| 02-Feb-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- style fixes to the critical_exit() KASSERT().
Submitted by: bde
|
#
fca542bc |
| 01-Feb-2004 |
Robert Watson <rwatson@FreeBSD.org> |
Move the KASSERT regarding td_critnest to after the value of td is set to curthread, to avoid a warning and incorrect behavior.
Hoped not to mind: jeff
|
#
6767c654 |
| 01-Feb-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- Assert that td_critnest > 0 in critical_exit() to catch cases of unbalanced uses of the critical_* api.
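Together with the reordering fix in the entry above, the guard presumably ends up shaped like this (a sketch; the panic message and surrounding body are assumptions):

    void
    critical_exit(void)
    {
            struct thread *td;

            td = curthread;                 /* set td before asserting on it */
            KASSERT(td->td_critnest > 0,
                ("critical_exit: td_critnest <= 0"));
            td->td_critnest--;
            /* ... remainder of the exit path ... */
    }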
|
Revision tags: release/5.2.0_cvs, release/5.2.0 |
|
#
09a4a69c |
| 12-Dec-2003 |
Robert Watson <rwatson@FreeBSD.org> |
Although sometimes to the uninitiated, it may seem like goup, KSEGOUP is actually spelt KSEGROUP. Go figure.
Reported by: samy@kerneled.com
|
#
0d2a2989 |
| 17-Nov-2003 |
Peter Wemm <peter@FreeBSD.org> |
Initial landing of SMP support for FreeBSD/amd64.
- This is heavily derived from John Baldwin's apic/pci cleanup on i386.
- I have completely rewritten or drastically cleaned up some other parts. (in particular, bootstrap)
- This is still a WIP. It seems that there are some highly bogus BIOSes on nVidia nForce3-150 boards. I can't stress how broken these boards are. I have a workaround in mind, but right now the Asus SK8N is broken. The Gigabyte K8NPro (nVidia based) is also mind-numbingly hosed.
- Most of my testing has been with SCHED_ULE. SCHED_4BSD works.
- the apic and acpi components are 'standard'.
- If you have an nVidia nForce3-150 board, you are stuck with 'device atpic' in addition, because they somehow managed to forget to connect the 8254 timer to the apic, even though it's in the same silicon! ARGH! This directly violates the ACPI spec.
|
Revision tags: release/4.9.0_cvs, release/4.9.0 |
|
#
94816f6d |
| 17-Oct-2003 |
Jeff Roberson <jeff@FreeBSD.org> |
- Remove the correct thread from the run queue in setrunqueue(). This fixes ULE + KSE.
|
#
7cf90fb3 |
| 16-Oct-2003 |
Jeff Roberson <jeff@FreeBSD.org> |
- Update the sched api. sched_{add,rem,clock,pctcpu} now all accept a td argument rather than a kse.
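In prototype form the renaming amounts to roughly this (signatures assumed from the log, not checked against the tree):

    void    sched_add(struct thread *td);   /* was: struct kse *ke */
    void    sched_rem(struct thread *td);
    void    sched_clock(struct thread *td);
    fixpt_t sched_pctcpu(struct thread *td);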
|
#
0e2a4d3a |
| 15-Jun-2003 |
David Xu <davidxu@FreeBSD.org> |
Rename P_THREADED to P_SA. P_SA means a process is using scheduler activations.
|
#
677b542e |
| 11-Jun-2003 |
David E. O'Brien <obrien@FreeBSD.org> |
Use __FBSDID().
|
Revision tags: release/5.1.0_cvs, release/5.1.0 |
|
#
faaa20f6 |
| 21-May-2003 |
Julian Elischer <julian@FreeBSD.org> |
When we are spilling threads out of the run queue during panic, make sure we keep the thread state variable consistent with its real state. i.e. Don't say it's on the run queue when it isn't.
Also clarify the associated comment.
Turns a double panic back to a single panic :-/
Approved by: re@ (jhb)
|
Revision tags: release/4.8.0_cvs, release/4.8.0 |
|
#
cc66ebe2 |
| 03-Apr-2003 |
Peter Wemm <peter@FreeBSD.org> |
Commit a partial lazy thread switch mechanism for i386. It isn't as lazy as it could be and can do with some more cleanup. Currently it's under options LAZY_SWITCH. What this does is avoid %cr3 reloads for short context switches that do not involve another user process, i.e. we can take an interrupt, switch to a kthread and return to the user without explicitly flushing the TLB. However, this isn't as exciting as it could be: the interrupt overhead is still high and too much still blocks on Giant. There are some debug sysctls, for stats and for an on/off switch.
The main problem with doing this has been "what if the process that you're running on exits while we're borrowing its address space?" - in this case we use an IPI to give it a kick when we're about to reclaim the pmap.
It's not compiled in unless you add the LAZY_SWITCH option. I want to fix a few more things and get some more feedback before turning it on by default.
This is NOT a replacement for Bosko's lazy interrupt stuff. This was more meant for the kthread case, while his was for interrupts. Mine helps a little for interrupts, but his helps a lot more.
The stats are enabled with options SWTCH_OPTIM_STATS - this has been a pseudo-option for years, I just added a bunch of stuff to it.
One non-trivial change was to select a new thread before calling cpu_switch() in the first place. This allows us to catch the silly case of doing a cpu_switch() to the current process. This happens uncomfortably often. This simplifies a bit of the asm code in cpu_switch (no longer have to call choosethread() in the middle). This has been implemented on i386 and (thanks to jake) sparc64. The others will come soon. This is actually separate from the lazy switch stuff.
Glanced at by: jake, jhb
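The core of the trick can be sketched like this (illustrative only; the helper name and the pm_cr3 field are assumptions):

    /*
     * Skip the %cr3 reload when the incoming thread can run on the
     * page tables already loaded, e.g. a kernel thread borrowing the
     * outgoing user process's address space.
     */
    static void
    maybe_reload_cr3(struct pmap *oldpm, struct pmap *newpm)
    {
            if (newpm == oldpm || newpm == kernel_pmap)
                    return;                 /* short switch: keep the TLB */
            load_cr3(newpm->pm_cr3);        /* full switch: reload & flush */
    }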
|