#
486b8ac0 |
| 28-Mar-2001 |
John Baldwin <jhb@FreeBSD.org> |
Don't explicitly zero p_intr_nesting_level and p_aioinfo in fork.
|
#
35a47246 |
| 28-Mar-2001 |
John Baldwin <jhb@FreeBSD.org> |
Use mtx_intr_enable() on sched_lock to ensure child processes always start with interrupts enabled rather than calling the no-longer MI function enable_intr(). This is bogus anyways and in theory shouldn't even be needed.
|
#
5db078a9 |
| 09-Mar-2001 |
John Baldwin <jhb@FreeBSD.org> |
Fix mtx_legal2block. The only time that it is bad to block on a mutex is if we hold a spin mutex, since we can trivially get into deadlocks if we start switching out of processes that hold spinlocks. Checking to see if interrupts were disabled was a sort of cheap way of doing this since most of the time interrupts were only disabled when holding a spin lock. At least on the i386. To fix this properly, use a per-process counter p_spinlocks that counts the number of spin locks currently held, and instead of checking to see if interrupts are disabled in the witness code, check to see if we hold any spin locks. Since child processes always start up with the sched lock magically held in fork_exit(), we initialize p_spinlocks to 1 for child processes. Note that proc0 doesn't go through fork_exit(), so it starts with no spin locks held.
Consulting from: cp
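A minimal sketch of the idea, not the committed diff (the macro body and the child-process initialization are assumptions):

    /*
     * Hypothetical: blocking on a sleep mutex is legal only while the
     * current process holds no spin locks.
     */
    #define mtx_legal2block()       (curproc->p_spinlocks == 0)

    /* In fork1(): the child starts life with sched_lock held. */
    p2->p_spinlocks = 1;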
|
#
5641ae5d |
| 07-Mar-2001 |
John Baldwin <jhb@FreeBSD.org> |
- Don't hold the proc lock across VREF and the fd* functions to avoid lock order reversals.
- Add some preliminary locking in the !RFPROC case.
- Protect p_estcpu with sched_lock.
|
#
57934cd3 |
| 07-Mar-2001 |
John Baldwin <jhb@FreeBSD.org> |
- Lock the forklist with an sx lock.
- Add proc locking to fork1(). Always lock the child process (new process) first when both processes need to be locked at the same time.
- Remove unneeded spl()'s as the data they protected is now locked.
- Ensure that the proctree is exclusively locked and the new process is locked when setting up the parent process pointer.
- Lock the check for P_KTHREAD in p_flag in fork_exit().
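A rough sketch of the sx usage described above; the lock variable name is an assumption:

    #include <sys/sx.h>

    struct sx fork_list_lock;               /* hypothetical name */

    sx_init(&fork_list_lock, "fork list");  /* once, at initialization */

    sx_xlock(&fork_list_lock);              /* exclusive, to modify the list */
    /* ... add or remove a fork handler ... */
    sx_xunlock(&fork_list_lock);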
|
#
5b270b2a |
| 28-Feb-2001 |
Jake Burkholder <jake@FreeBSD.org> |
Sigh. Try to get priorities sorted out. Don't bother trying to update native priority, it is difficult to get right and likely to end up horribly wrong. Use an honestly wrong fixed value that seems to work; PUSER for user threads, and the interrupt priority for ithreads. Set it once when the process is created and forget about it.
Suggested by: bde
Pointy hat: me
|
#
be15bfc0 |
| 27-Feb-2001 |
Jake Burkholder <jake@FreeBSD.org> |
Initialize native priority to PRI_MAX. It was usually 0 which made a process's priority go through the roof when it released a (contested) mutex. Only set the native priority in mtx_lock if it hasn't already been set.
Reviewed by: jhb
|
#
9764c9d3 |
| 22-Feb-2001 |
John Baldwin <jhb@FreeBSD.org> |
Quiet a warning with a uintptr_t cast.
Noticed by: bde
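The usual idiom for quieting such a warning, as a hypothetical example (uintptr_t comes from <sys/types.h> in the kernel):

    /* Cast through uintptr_t so the pointer-to-integer narrowing is explicit. */
    value = (int)(uintptr_t)ptr;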
|
#
91421ba2 |
| 21-Feb-2001 |
Robert Watson <rwatson@FreeBSD.org> |
o Move per-process jail pointer (p->pr_prison) to inside of the subject credential structure, ucred (cr->cr_prison).
o Allow jail inheritance to be a function of credential inheritance.
o Abstract prison structure reference counting behind pr_hold() and pr_free(), invoked by the similarly named credential reference management functions, removing this code from per-ABI fork/exit code.
o Modify various jail() functions to use struct ucred arguments instead of struct proc arguments.
o Introduce jailed() function to determine if a credential is jailed, rather than directly checking pointers all over the place.
o Convert PRISON_CHECK() macro to prison_check() function.
o Move jail() function prototypes to jail.h.
o Emulate the P_JAILED flag in fill_kinfo_proc() and no longer set the flag in the process flags field itself.
o Eliminate that "const" qualifier from suser/p_can/etc to reflect mutex use.
Notes:
o Some further cleanup of the linux/jail code is still required.
o It's now possible to consider resolving some of the process vs credential based permission checking confusion in the socket code.
o Mutex protection of struct prison is still not present, and is required to protect the reference count plus some fields in the structure.
Reviewed by: freebsd-arch
Obtained from: TrustedBSD Project
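The jailed() predicate introduced above amounts to a null check on the credential's prison pointer; a sketch, with the body assumed rather than quoted:

    int
    jailed(struct ucred *cred)
    {

            return (cred->cr_prison != NULL);
    }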
|
#
5813dc03 |
| 20-Feb-2001 |
John Baldwin <jhb@FreeBSD.org> |
- Don't call clear_resched() in userret(), instead, clear the resched flag in mi_switch() just before calling cpu_switch() so that the first switch after a resched request will satisfy the request.
- While I'm at it, move a few things into mi_switch() and out of cpu_switch(), specifically set the p_oncpu and p_lastcpu members of proc in mi_switch(), and handle the sched_lock state change across a context switch in mi_switch().
- Since cpu_switch() no longer handles the sched_lock state change, we have to setup an initial state for sched_lock in fork_exit() before we release it.
|
#
d941d475 |
| 12-Feb-2001 |
Robert Watson <rwatson@FreeBSD.org> |
o Export the nextpid variable via SYSCTL as kern.lastpid, decreasing by one the number of variables needed for top and other setgid kmem utilities that could only be accessed via /dev/kmem previously.
Submitted by: Thomas Moestl <tmoestl@gmx.net>
Reviewed by: freebsd-audit
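A sketch of the kernel side of such an export; the description string is invented:

    #include <sys/types.h>
    #include <sys/sysctl.h>

    extern int nextpid;     /* the existing PID allocator variable */

    /* Read-only sysctl publishing nextpid as kern.lastpid. */
    SYSCTL_INT(_kern, OID_AUTO, lastpid, CTLFLAG_RD,
        &nextpid, 0, "Last used PID");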
|
#
9ed346ba |
| 09-Feb-2001 |
Bosko Milekic <bmilekic@FreeBSD.org> |
Change and clean the mutex lock interface.
mtx_enter(lock, type) becomes:
mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)

similarly, for releasing a lock, we now have:

mtx_unlock(lock) for MTX_DEF
mtx_unlock_spin(lock) for MTX_SPIN

We change the caller interface for the two different types of locks because the semantics are entirely different for each case, and this makes it explicitly clear and, at the same time, it rids us of the extra `type' argument.
The enter->lock and exit->unlock change has been made with the idea that we're "locking data" and not "entering locked code" in mind.
Further, remove all additional "flags" previously passed to the lock acquire/release routines with the exception of two:
MTX_QUIET and MTX_NOSWITCH
The functionality of these flags is preserved and they can be passed to the lock/unlock routines by calling the corresponding wrappers:
mtx_{lock, unlock}_flags(lock, flag(s)) and mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN locks, respectively.
Re-inline some lock acq/rel code; in the sleep lock case, we only inline the _obtain_lock()s in order to ensure that the inlined code fits into a cache line. In the spin lock case, we inline recursion and actually only perform a function call if we need to spin. This change has been made with the idea that we generally tend to avoid spin locks and that also the spin locks that we do have and are heavily used (i.e. sched_lock) do recurse, and therefore in an effort to reduce function call overhead for some architectures (such as alpha), we inline recursion for this case.
Create a new malloc type for the witness code and retire from using the M_DEV type. The new type is called M_WITNESS and is only declared if WITNESS is enabled.
Begin cleaning up some machdep/mutex.h code - specifically updated the "optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN and MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently need those.
Finally, caught up to the interface changes in all sys code.
Contributors: jake, jhb, jasone (in no particular order)
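A before/after sketch of the caller interface; the foo mutexes and the code around them are hypothetical:

    struct mtx foo_mtx;             /* sleep mutex */
    struct mtx foo_spin_mtx;        /* spin mutex */

    mtx_init(&foo_mtx, "foo", MTX_DEF);
    mtx_init(&foo_spin_mtx, "foo spin", MTX_SPIN);

    /* Old interface: */
    mtx_enter(&foo_mtx, MTX_DEF);
    mtx_exit(&foo_mtx, MTX_DEF);

    /* New interface: */
    mtx_lock(&foo_mtx);
    mtx_unlock(&foo_mtx);

    mtx_lock_spin(&foo_spin_mtx);
    mtx_unlock_spin(&foo_spin_mtx);

    /* The two surviving flags go through the explicit wrappers: */
    mtx_lock_flags(&foo_mtx, MTX_QUIET);
    mtx_unlock_flags(&foo_mtx, MTX_NOSWITCH);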
|
#
8865286b |
| 27-Jan-2001 |
John Baldwin <jhb@FreeBSD.org> |
Fix fork_exit() to take a pointer to a function that returns void as its first argument rather than a function that returns a void *.
Noticed by: jake
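The corrected shape, with parameter names assumed:

    void    fork_exit(void (*callout)(void *, struct trapframe *),
                void *arg, struct trapframe *frame);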
|
#
2a36ec35 |
| 24-Jan-2001 |
John Baldwin <jhb@FreeBSD.org> |
- Change fork_exit() to take a pointer to a trapframe as its 3rd argument instead of a trapframe directly. (Requested by bde.)
- Convert the alpha switch_trampoline to call fork_exit() and use the MI fork_return() instead of child_return().
- Axe child_return().
|
#
a7b124c3 |
| 24-Jan-2001 |
John Baldwin <jhb@FreeBSD.org> |
- Catch up to proc flag changes.
- Add new fork_exit() and fork_return() MI C functions.
|
#
5d22597f |
| 23-Jan-2001 |
Hajimu UMEMOTO <ume@FreeBSD.org> |
Add mibs to hold the number of forks since boot. New mibs are:

vm.stats.vm.v_forks
vm.stats.vm.v_vforks
vm.stats.vm.v_rforks
vm.stats.vm.v_kthreads
vm.stats.vm.v_forkpages
vm.stats.vm.v_vforkpages
vm.stats.vm.v_rforkpages
vm.stats.vm.v_kthreadpages
Submitted by: Paul Herman <pherman@frenchfries.net>
Reviewed by: alfred
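A hypothetical userland reader for one of the new mibs:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            u_int forks;
            size_t len = sizeof(forks);

            /* Query the counter added by this commit. */
            if (sysctlbyname("vm.stats.vm.v_forks", &forks, &len,
                NULL, 0) == 0)
                    printf("forks since boot: %u\n", forks);
            return (0);
    }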
|
#
a448b62a |
| 21-Jan-2001 |
Jake Burkholder <jake@FreeBSD.org> |
Make intr_nesting_level per-process, rather than per-cpu. Setup interrupt threads to run with it always >= 1, so that malloc can detect M_WAITOK from "interrupt" context. This is also necessary in order to context switch from sched_ithd() directly.
Reviewed By: peter
|
#
98f03f90 |
| 23-Dec-2000 |
Jake Burkholder <jake@FreeBSD.org> |
Protect proc.p_pptr and proc.p_children/p_sibling with the proctree_lock.
linprocfs not locked pending response from informal maintainer.
Reviewed by: jhb, -smp@
|
#
c0c25570 |
| 13-Dec-2000 |
Jake Burkholder <jake@FreeBSD.org> |
- Change the allproc_lock to use a macro, ALLPROC_LOCK(how), instead of explicit calls to lockmgr. Also provides macros for the flags passed to specify shared, exclusive or release which map to the lockmgr flags. This is so that the use of lockmgr can be easily replaced with optimized reader-writer locks.
- Add some locking that I missed the first time.
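A sketch of what such a macro wrapper can look like; the exact lockmgr arguments are assumptions:

    #define ALLPROC_LOCK(how)       \
            lockmgr(&allproc_lock, (how), NULL, curproc)

    ALLPROC_LOCK(LK_SHARED);        /* shared (reader) hold */
    ALLPROC_LOCK(LK_RELEASE);       /* release */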
|
#
8dd431fc |
| 04-Dec-2000 |
Jake Burkholder <jake@FreeBSD.org> |
Whitespace. Fix indentation, align comments.
|
#
4971f62a |
| 03-Dec-2000 |
John Baldwin <jhb@FreeBSD.org> |
- Add a mutex to the proc structure p_mtx that will be used to lock accesses to each individual proc.
- Initialize the lock during fork1(), and destroy it in wait1().
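A sketch of per-process lock macros built on p_mtx, shown with the later mtx_lock() names (at this date the calls were still mtx_enter()/mtx_exit()):

    #define PROC_LOCK(p)    mtx_lock(&(p)->p_mtx)
    #define PROC_UNLOCK(p)  mtx_unlock(&(p)->p_mtx)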
|
#
86360fee |
| 02-Dec-2000 |
Jake Burkholder <jake@FreeBSD.org> |
Remove thr_sleep and thr_wakeup. Remove fields p_nthread and p_wakeup from struct proc, which are now unused (p_nthread already was). Remove process flag P_KTHREADP which was untested and only set in vfs_aio.c (it should use kthread_create). Move the yield system call to kern_synch.c as kern_threads.c has been removed completely.
moral support from: alfred, jhb
|
#
1512b5d6 |
| 01-Dec-2000 |
Jake Burkholder <jake@FreeBSD.org> |
Use an mp-safe callout for endtsleep.
|
#
4f559836 |
| 27-Nov-2000 |
Jake Burkholder <jake@FreeBSD.org> |
Use callout_reset instead of timeout(9). Most callouts are statically allocated, 2 have been added to struct proc for setitimer and sleep.
Reviewed by: jhb, jlemon
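A sketch of the statically allocated callout pattern; the field name and wiring are assumptions based on this and the entry above:

    struct callout p_slpcallout;    /* e.g. embedded in struct proc */

    callout_init(&p_slpcallout);                        /* once, at setup */
    callout_reset(&p_slpcallout, hz, endtsleep, p);     /* arm or re-arm */
    callout_stop(&p_slpcallout);                        /* cancel if pending */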
|
#
553629eb |
| 22-Nov-2000 |
Jake Burkholder <jake@FreeBSD.org> |
Protect the following with a lockmgr lock:
allproc
zombproc
pidhashtbl
proc.p_list
proc.p_hash
nextpid
Reviewed by: jhb
Obtained from: BSD/OS and netbsd
|