#
44692526 |
| 03-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
remove unused code
MFC after: 2 days
|
#
9923b511 |
| 02-Sep-2004 |
Scott Long <scottl@FreeBSD.org> |
Turn PREEMPTION into a kernel option. Make sure that it's defined if FULL_PREEMPTION is defined. Add a runtime warning to ULE if PREEMPTION is enabled (code inspired by the PREEMPTION warning in kern_switch.c). This is a possible MT5 candidate.
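A rough sketch of the pattern described here; the implied define and the warning text are assumptions for illustration, not the committed code.

/* Sketch only: models "options PREEMPTION" with preprocessor defines. */
#include <stdio.h>

/* FULL_PREEMPTION implies PREEMPTION (the commit ensures it is defined). */
#if defined(FULL_PREEMPTION) && !defined(PREEMPTION)
#define PREEMPTION
#endif

static void
sched_ule_setup(void)
{
#ifdef PREEMPTION
    /* Hypothetical wording; the commit only says a warning is printed. */
    printf("WARNING: PREEMPTION with SCHED_ULE is experimental\n");
#endif
}

int
main(void)
{
    sched_ule_setup();
    return (0);
}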
|
#
6804a3ab |
| 01-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
Give the 4bsd scheduler the ability to wake up idle processors when there is new work to be done.
MFC after: 5 days
|
#
2630e4c9 |
| 01-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
Give setrunqueue() and sched_add() more of a clue as to where they are coming from and what is expected from them.
MFC after: 2 days
|
#
6f96710c |
| 28-Aug-2004 |
Peter Wemm <peter@FreeBSD.org> |
Backout the previous backout (with scott's ok). sched_ule.c:1.122 is believed to fix the problem with ULE that this change triggered.
|
#
2384290c |
| 20-Aug-2004 |
Scott Long <scottl@FreeBSD.org> |
Revert the previous change. It works great for 4BSD but causes major problems for ULE. The reason is quite unknown and worrisome.
|
#
2c86298c |
| 20-Aug-2004 |
Scott Long <scottl@FreeBSD.org> |
In maybe_preempt(), ignore threads that are in an inconsistent state. This is an effective band-aid for at least some of the scheduler corruption seen recently. The real fix will involve protecting threads while they are inconsistent, and will come later.
Submitted by: julian
|
#
0f4ad918 |
| 10-Aug-2004 |
Scott Long <scottl@FreeBSD.org> |
Add a temporary debugging hack to detect a deadlock in setrunqueue(). This is here so that we can gather stats on the nature of the recent rash of hard lockups, and in this particular case panic the machine instead of letting it deadlock forever.
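A generic sketch of the idea: bound the retry loop and fail loudly once the bound is exceeded. The limit, helper, and message are made up; the kernel would panic() rather than abort().

/* Sketch: a bounded spin that reports a suspected deadlock instead of
 * looping forever. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define RUNQ_SPIN_LIMIT 1000000         /* arbitrary bound */

static bool
runq_insert_attempt(void)
{
    /* Stand-in for the real insertion attempt; pretend it keeps failing. */
    return (false);
}

void
setrunqueue_checked(void)
{
    long spins;

    for (spins = 0; !runq_insert_attempt(); spins++) {
        if (spins > RUNQ_SPIN_LIMIT) {
            fprintf(stderr, "setrunqueue: suspected deadlock\n");
            abort();                    /* kernel code would panic() */
        }
    }
}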
|
#
1a5cd27b |
| 09-Aug-2004 |
Julian Elischer <julian@FreeBSD.org> |
Make kg->kg_runnable actually count runnable threads in the ksegrp run queue instead of only doing it sometimes. This is not used outside of debugging code in the current code, but that will probably change.
|
#
732d9528 |
| 09-Aug-2004 |
Julian Elischer <julian@FreeBSD.org> |
Increase the amount of data exported by KTR in the KTR_RUNQ setting. This extra data is needed to really follow what is going on in the threaded case.
|
#
44fe3c1f |
| 06-Aug-2004 |
John Baldwin <jhb@FreeBSD.org> |
Don't scare users with a warning about preemption being off when it isn't yet safe to have on by default.
|
#
1a8cfbc4 |
| 27-Jul-2004 |
Robert Watson <rwatson@FreeBSD.org> |
Pass a thread argument into cpu_critical_{enter,exit}() rather than dereference curthread. It is called only from critical_{enter,exit}(), which already dereferences curthread. This doesn't seem to affect SMP performance in my benchmarks, but improves MySQL transaction throughput by about 1% on UP on my Xeon.
Head nodding: jhb, bmilekic
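An illustrative model of the calling-convention change: critical_enter() resolves curthread once and hands the pointer to the MD hook. Everything apart from the two function names is a simplified stand-in, not the kernel code.

/* Sketch: pass the thread pointer the caller already resolved instead of
 * re-deriving it in the MD layer. */
struct thread {
    int td_critnest;            /* critical-section nesting level */
};

/* Stand-in for curthread. */
static struct thread thread0;
#define curthread (&thread0)

/* MD hook now receives the thread instead of dereferencing curthread. */
static void
cpu_critical_enter(struct thread *td)
{
    (void)td;                   /* arch-specific work, e.g. disable interrupts */
}

void
critical_enter(void)
{
    struct thread *td;

    td = curthread;             /* resolved once, here */
    if (td->td_critnest == 0)
        cpu_critical_enter(td); /* reuse it instead of curthread */
    td->td_critnest++;
}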
|
#
18f480f8 |
| 23-Jul-2004 |
Scott Long <scottl@FreeBSD.org> |
Remove the previous hack since it doesn't make a difference and is getting in the way of debugging.
|
#
9493183e |
| 22-Jul-2004 |
Scott Long <scottl@FreeBSD.org> |
Disable the PREEMPTION-enabled code in critical_exit() that encourages switching to a different thread. This is just a hack to try to improve stability some more, but likely points closer to the real culprit.
|
#
52eb8464 |
| 16-Jul-2004 |
John Baldwin <jhb@FreeBSD.org> |
- Move TDF_OWEPREEMPT, TDF_OWEUPC, and TDF_USTATCLOCK over to td_pflags since they are only accessed by curthread and thus do not need any locking.
- Move pr_addr and pr_ticks out of struct uprof (which is per-process) and directly into struct thread as td_profil_addr and td_profil_ticks as these variables are really per-thread. (They are used to defer an addupc_intr() that was too "hard" until ast()).
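A small illustration of why the move helps: flags touched only by curthread can live in a private word and be set without a lock, while shared flags still need serialization. Field and flag names below are loosely modeled on the commit, not the real definitions.

/* Sketch: private (curthread-only) flags need no lock; shared flags do. */
#include <pthread.h>

#define TDP_OWEUPC      0x0001  /* illustrative private flag */
#define TDF_NEEDRESCHED 0x0001  /* illustrative shared flag */

struct thread {
    int             td_pflags;  /* written only by the thread itself */
    int             td_flags;   /* written by other CPUs/threads too */
    pthread_mutex_t td_lock;    /* protects td_flags; assumed initialized */
};

void
mark_owed_profile_tick(struct thread *td)
{
    /* curthread-only: plain store, no locking required. */
    td->td_pflags |= TDP_OWEUPC;
}

void
request_resched(struct thread *td)
{
    /* cross-thread: must serialize access to the shared flag word. */
    pthread_mutex_lock(&td->td_lock);
    td->td_flags |= TDF_NEEDRESCHED;
    pthread_mutex_unlock(&td->td_lock);
}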
|
#
2d50560a |
| 10-Jul-2004 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Update for the KDB framework:
o Make debugging code conditional upon KDB instead of DDB.
o Call kdb_enter() instead of Debugger().
o Call kdb_backtrace() instead of db_print_backtrace() or backtrace().
kern_mutex.c:
o Replace checks for db_active with checks for kdb_active and make them unconditional.
kern_shutdown.c:
o s/DDB_UNATTENDED/KDB_UNATTENDED/g
o s/DDB_TRACE/KDB_TRACE/g
o Save the TID of the thread doing the kernel dump so the debugger knows which thread to select as the current when debugging the kernel core file.
o Clear kdb_active instead of db_active and do so unconditionally.
o Remove backtrace() implementation.
kern_synch.c:
o Call kdb_reenter() instead of db_error().
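The items above follow one pattern: gate debugger hooks on KDB rather than DDB and call the kdb_* entry points. A schematic example, with the prototypes stubbed for illustration (in the kernel they come from <sys/kdb.h>) and the surrounding function invented:

/* Sketch of the KDB-era pattern: compile under KDB, call kdb_* hooks. */
#ifdef KDB
void    kdb_enter(const char *msg);
void    kdb_backtrace(void);
#endif

void
report_anomaly(const char *why)
{
#ifdef KDB
    kdb_backtrace();    /* replaces db_print_backtrace()/backtrace() */
    kdb_enter(why);     /* replaces Debugger(why) */
#else
    (void)why;
#endif
}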
|
#
8b44a2e2 |
| 03-Jul-2004 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Unbreak build for the !PREEMPTION case: don't define variables that aren't used in that case.
|
#
0c0b25ae |
| 02-Jul-2004 |
John Baldwin <jhb@FreeBSD.org> |
Implement preemption of kernel threads natively in the scheduler rather than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to determine if a thread about to be added to a run queue should be preempted to directly. If it is not safe to preempt or if the new thread does not have a high enough priority, then the function returns false and sched_add() adds the thread to the run queue. If the thread should be preempted to but the current thread is in a nested critical section, then the flag TDF_OWEPREEMPT is set and the thread is added to the run queue. Otherwise, mi_switch() is called immediately and the thread is never added to the run queue since it is switched to directly. When exiting an outermost critical section, if TDF_OWEPREEMPT is set, then clear it and call mi_switch() to perform the deferred preemption.
- Remove explicit preemption from ithread_schedule() as calling setrunqueue() now does all the correct work. This also removes the do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a chance to run if the architecture supports native preemption since the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread preemption, namely alpha, i386, and amd64.
This change should largely be a NOP for the default case as committed except that we will do fewer context switches in a few cases and will avoid the run queues completely when preempting.
Approved by: scottl (with his re@ hat)
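The decision flow in the first bullet, condensed into a sketch. The names maybe_preempt(), TDF_OWEPREEMPT, and mi_switch() come from the commit text; the types, the priority test, and the helpers are simplified assumptions, not the committed implementation.

/* Sketch of the sched_add()/maybe_preempt() decision described above:
 * switch to the new thread immediately, defer the preemption via
 * TDF_OWEPREEMPT, or fall back to an ordinary run-queue insertion. */
#include <stdbool.h>
#include <stddef.h>

#define TDF_OWEPREEMPT  0x0001

struct thread {
    int td_priority;            /* lower value = higher priority */
    int td_critnest;            /* critical-section nesting depth */
    int td_flags;
};

static void
runq_add(struct thread *td)
{
    (void)td;                   /* stub: link td onto a run queue */
}

static void
mi_switch_to(struct thread *td)
{
    (void)td;                   /* stub: context-switch; NULL means "pick one" */
}

static bool
maybe_preempt(struct thread *ctd, struct thread *td)
{
    /* Not a high enough priority: let sched_add() queue it. */
    if (td->td_priority >= ctd->td_priority)
        return (false);

    /* Preemption wanted, but we are inside a critical section: defer. */
    if (ctd->td_critnest > 0) {
        ctd->td_flags |= TDF_OWEPREEMPT;
        return (false);
    }

    /* Safe to preempt: switch now; td never touches the run queue. */
    mi_switch_to(td);
    return (true);
}

void
sched_add(struct thread *ctd, struct thread *td)
{
    if (!maybe_preempt(ctd, td))
        runq_add(td);
}

void
critical_exit(struct thread *ctd)
{
    if (--ctd->td_critnest == 0 &&
        (ctd->td_flags & TDF_OWEPREEMPT) != 0) {
        ctd->td_flags &= ~TDF_OWEPREEMPT;
        mi_switch_to(NULL);     /* perform the deferred preemption */
    }
}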
|
Revision tags: release/4.10.0_cvs, release/4.10.0, release/5.2.1_cvs, release/5.2.1 |
|
#
b209e5e3 |
| 02-Feb-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- style fixes to the critical_exit() KASSERT().
Submitted by: bde
|
#
fca542bc |
| 01-Feb-2004 |
Robert Watson <rwatson@FreeBSD.org> |
Move the KASSERT regarding td_critnest to after the value of td is set to curthread, to avoid a warning and incorrect behavior.
Hoped not to mind: jeff
|
#
6767c654 |
| 01-Feb-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- Assert that td_critnest > 0 in critical_exit() to catch cases of unbalanced uses of the critical_* api.
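In spirit the assertion guards against an exit without a matching enter, e.g. (simplified sketch, with a userspace assert() standing in for KASSERT):

/* Sketch: balanced critical_enter()/critical_exit() with a sanity check
 * that catches an exit without a matching enter. */
#include <assert.h>

struct thread {
    int td_critnest;
};

void
critical_enter(struct thread *td)
{
    td->td_critnest++;
}

void
critical_exit(struct thread *td)
{
    /* KASSERT(td->td_critnest > 0, ...) in the kernel. */
    assert(td->td_critnest > 0 && "unbalanced critical_exit()");
    td->td_critnest--;
}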
|
Revision tags: release/5.2.0_cvs, release/5.2.0 |
|
#
09a4a69c |
| 12-Dec-2003 |
Robert Watson <rwatson@FreeBSD.org> |
Although sometimes to the uninitiated, it may seem like goup, KSEGOUP is actually spelt KSEGROUP. Go figure.
Reported by: samy@kerneled.com
|
#
0d2a2989 |
| 17-Nov-2003 |
Peter Wemm <peter@FreeBSD.org> |
Initial landing of SMP support for FreeBSD/amd64.
- This is heavily derived from John Baldwin's apic/pci cleanup on i386.
- I have completely rewritten or drastically cleaned up some other parts (in particular, bootstrap).
- This is still a WIP. It seems that there are some highly bogus BIOSes on nVidia nForce3-150 boards. I can't stress how broken these boards are. I have a workaround in mind, but right now the Asus SK8N is broken. The Gigabyte K8NPro (nVidia based) is also mind-numbingly hosed.
- Most of my testing has been with SCHED_ULE. SCHED_4BSD works.
- The apic and acpi components are 'standard'.
- If you have an nVidia nForce3-150 board, you are stuck with 'device atpic' in addition, because they somehow managed to forget to connect the 8254 timer to the apic, even though it's in the same silicon! ARGH! This directly violates the ACPI spec.
|
Revision tags: release/4.9.0_cvs, release/4.9.0 |
|
#
94816f6d |
| 17-Oct-2003 |
Jeff Roberson <jeff@FreeBSD.org> |
- Remove the correct thread from the run queue in setrunqueue(). This fixes ULE + KSE.
|
#
7cf90fb3 |
| 16-Oct-2003 |
Jeff Roberson <jeff@FreeBSD.org> |
- Update the sched api. sched_{add,rem,clock,pctcpu} now all accept a td argument rather than a kse.
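Schematically, the signature change amounts to the following; the struct names are placeholders and the argument lists are reduced to the essentials.

/* Sketch of the API change: scheduler entry points now take a thread. */
struct kse;             /* old unit of scheduling (placeholder) */
struct thread;          /* new argument type (placeholder) */

/* Before:
 * void sched_add(struct kse *ke);
 * void sched_rem(struct kse *ke);
 */

/* After: */
void    sched_add(struct thread *td);
void    sched_rem(struct thread *td);
void    sched_clock(struct thread *td);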
|