Revision tags: release/4.6.2_cvs, release/4.6.2 |
|
#
9eb881f8 |
| 30-Jul-2002 |
Seigo Tanimura <tanimura@FreeBSD.org> |
- Optimize wakeup() and its friends; if a thread that is woken up is already being swapped in, we do not have to ask the scheduler thread to do that.
- Assert that a process is not swapped out in runq functions and swapout().
- Introduce thread_safetoswapout() for readability.
- In swapout_procs(), perform the test that may block (checking whether a thread is working on its vm map) first. This lets us call swapout() with the sched_lock held, providing better atomicity.
|
Revision tags: release/4.6.1 |
|
#
fe799533 |
| 17-Jul-2002 |
Andrew Gallatin <gallatin@FreeBSD.org> |
Allow alphas to do crashdumps: Refuse to run anything in choosethread() after a panic that is not an interrupt thread or the thread that caused the panic. Also, remove the panicstr checks from msleep() and from cv_wait() in order to allow threads to go to sleep and yield the cpu to the panicking thread, or to an interrupt thread which might be doing the crashdump.
Reviewed by: jhb (and it was mostly his idea too)
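A minimal sketch of the policy described above, expressed as a predicate; thread_runnable_after_panic(), TD_IS_ITHREAD(), and TDF_INPANIC are hypothetical names used only for this illustration:

    /*
     * Sketch of the post-panic policy: may this thread be chosen to run
     * once panicstr is set?  TD_IS_ITHREAD() and TDF_INPANIC are assumed names.
     */
    static __inline int
    thread_runnable_after_panic(struct thread *td)
    {
            if (panicstr == NULL)
                    return (1);             /* no panic: anything may run */
            return (TD_IS_ITHREAD(td) || (td->td_flags & TDF_INPANIC) != 0);
    }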
|
#
c3b98db0 |
| 14-Jul-2002 |
Julian Elischer <julian@FreeBSD.org> |
Thinking about it, I came to the conclusion that the KSE states were incorrectly formulated. The correct states should be:
    IDLE: On the idle KSE list for that KSEG.
    RUNQ: Linked onto the system run queue.
    THREAD: Attached to a thread and slaved to whatever state the thread is in.
This means that most places where we were adjusting KSE state can go away, as the state just moves around because the thread does. The only places we need to adjust the KSE state are in the transitions to and from the idle and run queues.
Reviewed by: jhb@freebsd.org
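A small sketch of the three states described in this entry; the identifiers are illustrative, not necessarily those used in the tree:

    enum kse_state {
            KES_IDLE,       /* on the idle KSE list of its KSE group (KSEG) */
            KES_ONRUNQ,     /* linked onto the system run queue */
            KES_THREAD      /* attached to a thread; slaved to that thread's state */
    };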
|
#
40e55026 |
| 12-Jul-2002 |
Julian Elischer <julian@FreeBSD.org> |
Also set the KSE state for the idle KSE/thread case.
|
#
33d7ad1a |
| 12-Jul-2002 |
John Baldwin <jhb@FreeBSD.org> |
Set the thread state of the newly chosen-to-run thread to TDS_RUNNING in choosethread() in MI C code instead of doing it in assembly in all the various cpu_switch() functions. This fixes problems on ia64 and sparc64.
Reviewed by: julian, peter, benno
Tested on: i386, alpha, sparc64
|
#
5e3da64e |
| 12-Jul-2002 |
Julian Elischer <julian@FreeBSD.org> |
Remove debugging code that I originally only wanted to be there for a couple of days after merge.
Reminded with pointy stick by: jhb
|
Revision tags: release/4.6.0_cvs |
|
#
e602ba25 |
| 29-Jun-2002 |
Julian Elischer <julian@FreeBSD.org> |
Part 1 of KSE-III
The ability to schedule multiple threads per process (on one cpu) by making ALL system calls optionally asynchronous. Still to come: ia64 and power-pc patches, patches for gdb, test program (in tools).
Reviewed by: Almost everyone who counts (at various times, peter, jhb, matt, alfred, mini, bernd, and a cast of thousands)
NOTE: this is still beta code and contains lots of debugging stuff. Expect slight instability in signals.
|
#
2f9267ec |
| 20-Jun-2002 |
Peter Wemm <peter@FreeBSD.org> |
Move the "- 1" into the RQB_FFS(mask) macro itself so that implementations can provide a base zero ffs function if they wish. This changes #define RQB_FFS(mask) (ffs64(mask)) foo = RQB_FFS(mask)
Move the "- 1" into the RQB_FFS(mask) macro itself so that implementations can provide a base zero ffs function if they wish. This changes #define RQB_FFS(mask) (ffs64(mask)) foo = RQB_FFS(mask) - 1; to #define RQB_FFS(mask) (ffs64(mask) - 1) foo = RQB_FFS(mask); On some platforms we can get the "- 1" for free, eg: those that use the C code for ffs64().
Reviewed by: jake (in principle)
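A hedged illustration of how the "- 1" can come for free: a C find-first-set that counts from zero simply returns its loop counter, so no adjustment is needed. The name ffs64_zerobased() is made up for this sketch:

    /* Zero-based find-first-set; caller guarantees mask != 0. */
    static __inline int
    ffs64_zerobased(u_int64_t mask)
    {
            int bit;

            for (bit = 0; (mask & 1) == 0; bit++)
                    mask >>= 1;
            return (bit);
    }

    #define RQB_FFS(mask)   (ffs64_zerobased(mask))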
|
#
d2ac2316 |
| 25-May-2002 |
Jake Burkholder <jake@FreeBSD.org> |
Make the run queue parameters machine dependent. Optimize 64 bit architectures by using a 64 bit word for the bit array which keeps track of non-empty queues.
Reviewed by: peter
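A rough sketch of what this enables on 64-bit machines, with illustrative names and types: one machine word of status bits covers all run queues, so the highest-priority non-empty queue (lowest index) falls out of a single find-first-set on that word:

    #define RQ_NQS          64                      /* number of run queues */

    struct runq_sketch {
            u_int64_t               rq_status;      /* bit i set => rq_queues[i] non-empty */
            TAILQ_HEAD(, thread)    rq_queues[RQ_NQS];
    };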
|
#
0cce52f8 |
| 08-May-2002 |
Jake Burkholder <jake@FreeBSD.org> |
Remove runq_findproc. This never worked right in the first place and can be prohibitively expensive.
|
#
182da820 |
| 02-Apr-2002 |
Matthew Dillon <dillon@FreeBSD.org> |
Stage-2 commit of the critical*() code. This re-inlines cpu_critical_enter() and cpu_critical_exit() and moves associated critical prototypes into their own header file, <arch>/<arch>/critical.h, which is only included by the three MI source files that need it.
Back out and re-apply improperly committed syntactical cleanups made to files that were still under active development. Back out improperly committed program structure changes that moved localized declarations to the top of two procedures. Partially re-apply one of the program structure changes to move 'mask' into an intermediate block rather than in three separate sub-blocks to make the code more readable. Re-integrate bug fixes that Jake made to the sparc64 code.
Note: In general, developers should not gratuitously move declarations out of sub-blocks. They are where they are for reasons of structure, grouping, readability, compiler-localizability, and to avoid developer-introduced bugs similar to several found in recent years in the VFS and VM code.
Reviewed by: jake
|
#
d74ac681 |
| 27-Mar-2002 |
Matthew Dillon <dillon@FreeBSD.org> |
Compromise for critical*()/cpu_critical*() recommit. Clean up the interrupt disablement assumptions in kern_fork.c by adding another API call, cpu_critical_fork_exit(). Clean up the td_savecrit field by moving it from MI to MD. Temporarily move cpu_critical*() from <arch>/include/cpufunc.h to <arch>/<arch>/critical.c (stage-2 will clean this up).
Implement interrupt deferral for i386 that allows interrupts to remain enabled inside critical sections. This also fixes an IPI interlock bug, and requires uses of icu_lock to be enclosed in a true interrupt disablement.
This is the stage-1 commit. Stage-2 will occur after stage-1 has stabilized, and will move cpu_critical*() into its own header file(s) + other things. This commit may break non-i386 architectures in trivial ways. This should be temporary.
Reviewed by: core
Approved by: core
|
#
e97c3e3d |
| 06-Mar-2002 |
Dag-Erling Smørgrav <des@FreeBSD.org> |
Rename runq_find() to runq_findproc(), and hide it behind #ifdef DIAGNOSTIC, as it can have a severe impact on performance under high load, and the bug it was meant to catch was fixed ages ago.
|
#
181df8c9 |
| 26-Feb-2002 |
Matthew Dillon <dillon@FreeBSD.org> |
revert last commit temporarily due to whining on the lists.
|
#
f96ad4c2 |
| 26-Feb-2002 |
Matthew Dillon <dillon@FreeBSD.org> |
STAGE-1 of 3 commit - allow (but do not require) interrupts to remain enabled in critical sections and streamline critical_enter() and critical_exit().
This commit allows an architecture to leave interrupts enabled inside critical sections if it so wishes. Architectures that do not wish to do this are not affected by this change.
This commit implements the feature for the I386 architecture and provides a sysctl, debug.critical_mode, which defaults to 1 (use the feature). For now you can turn the sysctl on and off at any time in order to test the architectural changes or track down bugs.
This commit is just the first stage. Some areas of the code, specifically the MACHINE_CRITICAL_ENTER #ifdef'd code, are strictly temporary and will be cleaned up in the STAGE-2 commit when the critical_*() functions are moved entirely into MD files.
The following changes have been made:
* critical_enter() and critical_exit() for I386 now simply increment and decrement curthread->td_critnest. They no longer disable hard interrupts. When critical_exit() decrements the counter to 0 it effectively calls a routine to deal with whatever interrupts were deferred during the time the code was operating in a critical section.
Other architectures are unaffected.
* fork_exit() has been conditionalized to remove MD assumptions for the new code. Old code will still use the old MD assumptions in regards to hard interrupt disablement. In STAGE-2 this will be turned into a subroutine call into MD code rather than hardcoded in MI code.
The new code places the burden of entering the critical section in the trampoline code where it belongs.
* I386: interrupts are now enabled while we are in a critical section. The interrupt vector code has been adjusted to deal with the fact. If it detects that we are in a critical section it currently defers the interrupt by adding the appropriate bit to an interrupt mask.
* In order to accomplish the deferral, icu_lock is required. This is i386-specific. Thus icu_lock can only be obtained by mainline i386 code while interrupts are hard disabled. This change has been made.
* Because interrupts may or may not be hard disabled during a context switch, cpu_switch() can no longer simply assume that PSL_I will be in a consistent state. Therefore, it now saves and restores eflags.
* FAST INTERRUPT PROVISION. Fast interrupts are currently deferred. The intention is to eventually allow them to operate either while we are in a critical section or, if we are able to restrict the use of sched_lock, while we are not holding the sched_lock.
* ICU and APIC vector assembly for I386 cleaned up. The ICU code has been cleaned up to match the APIC code in regards to format and macro availability. Additionally, the code has been adjusted to deal with deferred interrupts.
* Deferred interrupts use a per-cpu boolean int_pending, and masks ipending, spending, and fpending. Being per-cpu variables, it is not currently necessary to use locked bus cycles when modifying them.
Note that the same mechanism will enable preemption to be incorporated as a true software interrupt without having to further hack up the critical nesting code.
* Note: the old critical_enter() code in kern/kern_switch.c is currently #ifdef to be compatible with both the old and new methodology. In STAGE-2 it will be moved entirely to MD code.
Performance issues:
One of the purposes of this commit is to enhance critical section performance, specifically to greatly reduce bus overhead to allow the critical section code to be used to protect per-cpu caches. These caches, such as Jeff's slab allocator work, can potentially operate very quickly making the effective savings of the new critical section code's performance very significant.
The second purpose of this commit is to allow architectures to enable certain interrupts while in a critical section. Specifically, the intention is to eventually allow certain FAST interrupts to operate rather than defer.
The third purpose of this commit is to begin to clean up the critical_enter()/critical_exit()/cpu_critical_enter()/ cpu_critical_exit() API which currently has serious cross pollution in MI code (in fork_exit() and ast() for example).
The fourth purpose of this commit is to provide a framework that allows kernel-preempting software interrupts to be implemented cleanly. This is currently used for two forward interrupts in I386. Other architectures will have the choice of using this infrastructure or building the functionality directly into critical_enter()/ critical_exit().
Finally, this commit is designed to greatly improve the flexibility of various architectures to manage critical section handling, software interrupts, preemption, and other highly integrated architecture-specific details.
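A hedged sketch of the i386 critical_exit() path under the deferral scheme described in this entry; int_pending is named in the commit message, while unpend() and the surrounding details are assumptions for illustration only:

    void
    critical_exit(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 1) {
                    td->td_critnest = 0;
                    if (PCPU_GET(int_pending))
                            unpend();       /* run interrupts deferred via ipending/spending/fpending */
            } else
                    td->td_critnest--;
    }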
|
#
2c100766 |
| 11-Feb-2002 |
Julian Elischer <julian@FreeBSD.org> |
In a threaded world, different priorities become properties of different entities. Make it so.
Reviewed by: jhb@freebsd.org (john baldwin)
|
Revision tags: release/4.5.0_cvs, release/4.4.0_cvs |
|
#
7e1f6dfe |
| 18-Dec-2001 |
John Baldwin <jhb@FreeBSD.org> |
Modify the critical section API as follows:
- The MD functions critical_enter/exit are renamed to start with a cpu_ prefix.
- MI wrapper functions critical_enter/exit maintain a per-thread nesting count and a per-thread critical section saved state, set when entering a critical section while at nesting level 0 and restored when exiting to nesting level 0. This moves the saved state out of spin mutexes so that interlocking spin mutexes works properly.
- Most low-level MD code that used critical_enter/exit now uses cpu_critical_enter/exit. MI code such as device drivers and spin mutexes uses the MI wrappers. Note that since the MI wrappers store the state in the current thread, they do not have any return values or arguments.
- mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is assigned to curthread->td_savecrit during fork_exit().
Tested on: i386, alpha
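A minimal sketch of the MI wrappers described above, assuming cpu_critical_enter() returns an opaque saved-state value that cpu_critical_exit() later restores:

    void
    critical_enter(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 0)
                    td->td_savecrit = cpu_critical_enter();  /* MD: save state, enter */
            td->td_critnest++;
    }

    void
    critical_exit(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 1) {
                    td->td_critnest = 0;
                    cpu_critical_exit(td->td_savecrit);      /* MD: restore saved state */
            } else
                    td->td_critnest--;
    }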
|
#
6a494eeb |
| 18-Sep-2001 |
Jonathan Lemon <jlemon@FreeBSD.org> |
Change p into ke->ke_proc; this was hidden behind INVARIANTS.
|
#
b40ce416 |
| 12-Sep-2001 |
Julian Elischer <julian@FreeBSD.org> |
KSE Milestone 2. Note: ALL MODULES MUST BE RECOMPILED. Make the kernel aware that there are smaller units of scheduling than the process (but only allow one thread per process at this time). This is functionally equivalent to the previous -current except that there is a thread associated with each process.
Sorry john! (your next MFC will be a doosie!)
Reviewed by: peter@freebsd.org, dillon@freebsd.org
X-MFC after: ha ha ha ha
|
#
f583b1d9 |
| 04-Jul-2001 |
John Baldwin <jhb@FreeBSD.org> |
Spelling fix in a KASSERT: runq_chose -> runq_choose.
|
Revision tags: release/4.3.0_cvs, release/4.3.0 |
|
#
f34fa851 |
| 28-Mar-2001 |
John Baldwin <jhb@FreeBSD.org> |
Catch up to header include changes:
- <sys/mutex.h> now requires <sys/systm.h>
- <sys/mutex.h> and <sys/sx.h> now require <sys/lock.h>
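For illustration, a consumer of these headers now needs an include order along these lines:

    #include <sys/param.h>
    #include <sys/systm.h>          /* now required before <sys/mutex.h> */
    #include <sys/lock.h>           /* now required by <sys/mutex.h> and <sys/sx.h> */
    #include <sys/mutex.h>
    #include <sys/sx.h>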
|
#
6fe01250 |
| 15-Mar-2001 |
Peter Wemm <peter@FreeBSD.org> |
Jake essentially rewrote this. It is not by any stretch of the imagination a derivative of what I did before.
|
#
9cbd0393 |
| 11-Mar-2001 |
Dag-Erling Smørgrav <des@FreeBSD.org> |
Assert that the process we're trying to enqueue isn't already there.
|
#
3a3f6082 |
| 09-Mar-2001 |
John Baldwin <jhb@FreeBSD.org> |
Add a new informative KASSERT to ensure that a process is in the SRUN state before we return it to cpu_switch().
|
#
f32ded2f |
| 24-Feb-2001 |
Jake Burkholder <jake@FreeBSD.org> |
- Assert that the proc to return is not NULL in runq_choose, the same as runq_remove.
- bzero the whole struct runq in runq_init just in case it's not statically allocated.
|