History log of /freebsd/sys/kern/kern_switch.c (Results 326 – 330 of 330)
Revision  Date  Author  Comments
# 4a6404df 12-Sep-2000 John Baldwin <jhb@FreeBSD.org>

Fix some printf format string warnings due to sizeof(int) != sizeof(long) on
the alpha.


# 0384fff8 07-Sep-2000 Jason Evans <jasone@FreeBSD.org>

Major update to the way synchronization is done in the kernel. Highlights
include:

* Mutual exclusion is used instead of spl*(). See mutex(9). (Note: The
alpha port is still in transition and currently uses both.)

* Per-CPU idle processes.

* Interrupts are run in their own separate kernel threads and can be
preempted (i386 only).

Partially contributed by: BSDi (BSD/OS)
Submissions by (at least): cp, dfr, dillon, grog, jake, jhb, sheldonh


Revision tags: release/4.1.0, release/3.5.0_cvs
# 36e9f877 28-Mar-2000 Matthew Dillon <dillon@FreeBSD.org>

Commit major SMP cleanups and move the BGL (big giant lock) in the
syscall path inward. A system call may select whether it needs the MP
lock or not (the default being that it does need it).

A great deal of conditional SMP code for various deadended experiments
has been removed. 'cil' and 'cml' have been removed entirely, and the
locking around the cpl has been removed. The conditional
separately-locked fast-interrupt code has been removed, meaning that
interrupts must hold the CPL now (but they pretty much had to anyway).
Another reason for doing this is that the original separate-lock for
interrupts just doesn't apply to the interrupt thread mechanism being
contemplated.

Modifications to the cpl may now ONLY occur while holding the MP
lock. For example, if an otherwise MP safe syscall needs to mess with
the cpl, it must hold the MP lock for the duration and must (as usual)
save/restore the cpl in a nested fashion.

This is precursor work for the real meat coming later: avoiding having
to hold the MP lock for common syscalls and I/O's and interrupt threads.
It is expected that the spl mechanisms and new interrupt threading
mechanisms will be able to run in tandem, allowing a slow piecemeal
transition to occur.

This patch should result in a moderate performance improvement due to
the considerable amount of code that has been removed from the critical
path, especially the simplification of the spl*() calls. The real
performance gains will come later.

Approved by: jkh
Reviewed by: current, bde (exception.s)
Some work taken from: luoqi's patch


Revision tags: release/4.0.0_cvs, release/3.4.0_cvs, release/3.3.0_cvs
# 42cef09b 19-Aug-1999 Peter Wemm <peter@FreeBSD.org>

Fix a typo and a bug.
- One RTP_PRIO_REALTIME was meant to be RTP_PRIO_IDLE.
- RTP_PRIO_FIFO was not handled.
- Move the usual case first for setrunqueue() etc.


# dba6c5a6 19-Aug-1999 Peter Wemm <peter@FreeBSD.org>

Extract the next runnable process selection out of cpu_switch() into a
fairly machine independent C routine. gcc actually does a pretty good
job of this.

Reviewed by: msmith (in principle)

