#
cf2c39e7 |
| 11-Nov-2006 |
Kip Macy <kmacy@FreeBSD.org> |
remove lingering call to rd(tick)
|
#
83b72e3e |
| 11-Nov-2006 |
Kip Macy <kmacy@FreeBSD.org> |
missed nits replacing mutex with lock
|
#
7c0435b9 |
| 11-Nov-2006 |
Kip Macy <kmacy@FreeBSD.org> |
MUTEX_PROFILING has been generalized to LOCK_PROFILING. We now profile wait (time waited to acquire) and hold times for *all* kernel locks. If the architecture has a system-synchronized TSC, the profiling code will use that, thereby minimizing profiling overhead. Large chunks of profiling code have been moved out of line; the overhead measured on the T1 when it is compiled in but not enabled is < 1%.
Approved by: scottl (standing in for mentor rwatson) Reviewed by: des and jhb
|
Revision tags: release/5.5.0_cvs, release/5.5.0, release/6.1.0_cvs, release/6.1.0 |
|
#
3f08bd8b |
| 28-Jan-2006 |
John Baldwin <jhb@FreeBSD.org> |
Add a basic reader/writer lock implementation to the kernel. This implementation is by no means perfect as far as some of the algorithms it uses, and it is missing some functionality (try locks and upgrades/downgrades are not there yet); however, it does seem to work in my local testing. There is more detail in the comments in the code, but the short version follows.
A reader/writer lock is very much like a regular mutex: it cannot be held across a voluntary sleep; it can be acquired in an interrupt thread; if the lock is held by a writer then the priority of any threads that block on the lock will be lent to the owner; the simple-case lock operations are all done in a single atomic op. It also shares some similarities with sx locks: it supports reader/writer semantics (multiple readers, but single writers); readers are allowed to recurse, but writers are not.
We can extend this implementation further by either improving algorithms or adding new functionality, but this should at least give us a base to work with now.
Reviewed by: arch (in theory) Tested on: i386 (4-cpu box with a kernel module that used 4 threads randomly choosing between read locks and write locks; it ran without panicking for over a day solid. It usually panicked within a few seconds when there were bugs during testing. :) The kernel module source is available on request.)
|
#
25e498b4 |
| 18-Jan-2006 |
John Baldwin <jhb@FreeBSD.org> |
Always include the lock_classes[] array in the kernel. The "is it a spinlock" test in mtx_destroy() needs it even in non-debug kernels.
Reported by: danfe
|
#
6ef970a9 |
| 17-Jan-2006 |
John Baldwin <jhb@FreeBSD.org> |
Bah. Fix 'show lock' to actually be compiled in. I had just fixed this in p4 but had an older subr_lock.c on the machine I committed to CVS from.
|
#
83a81bcb |
| 17-Jan-2006 |
John Baldwin <jhb@FreeBSD.org> |
Add a new file (kern/subr_lock.c) for holding code related to struct lock_object objects:
- Add new lock_init() and lock_destroy() functions to set up and tear down lock_object objects, including KTR logging and registering with WITNESS.
- Move all the handling of LO_INITIALIZED out of witness and the various lock init functions into lock_init() and lock_destroy().
- Remove the constants for static indices into the lock_classes[] array and change the code outside of subr_lock.c to use LOCK_CLASS to compare against a known lock class.
- Move the 'show lock' ddb function and the lock_classes[] array out of kern_mutex.c over to subr_lock.c.
|
Revision tags: release/7.2.0_cvs, release/7.2.0 |
|
#
9c797940 |
| 13-Apr-2009 |
Oleksandr Tymoshenko <gonzo@FreeBSD.org> |
- Merge from HEAD
|
#
2e6b8de4 |
| 15-Mar-2009 |
Jeff Roberson <jeff@FreeBSD.org> |
- Implement a new mechanism for resetting lock profiling. We now guarantee that all cpus have acknowledged the cleared enable int by scheduling the resetting thread on each cpu in succession. Since all lock profiling happens within a critical section, this guarantees that all cpus have left lock profiling before we clear the data structures.
- Assert that the per-thread queue of locks that lock profiling is aware of is clear on thread exit. There were several cases where this was not true; that slows lock profiling and leaks information.
- Remove all objects from all lists before clearing any per-cpu information in reset. Lock profiling objects can migrate between per-cpu caches, and previously these migrated objects could be zeroed before they had been removed.
Discussed with: attilio Sponsored by: Nokia
|
Revision tags: release/7.1.0_cvs, release/7.1.0, release/6.4.0_cvs, release/6.4.0 |
|
#
947265b6 |
| 27-Jul-2008 |
Kip Macy <kmacy@FreeBSD.org> |
- track maximum wait time
- resize columns based on actual observed numerical values
MFC after: 3 days
|
#
90356491 |
| 15-May-2008 |
Attilio Rao <attilio@FreeBSD.org> |
- Embed the recursion counter for any locking primitive directly in the lock_object, using a unified field called lo_data.
- Replace lo_type usage with w_name usage and, at init time, pass the lock "type" directly to witness_init() from the parent lock init function. Handle delayed initialization before witness_initialize() is called through the witness_pendhelp structure.
- Axe out LO_ENROLLPEND as it is not really needed. The case where a mutex whose initialization was delayed is destroyed can't happen, because witness_destroy() checks for witness_cold and panics in that case.
- In enroll(), if we cannot allocate a new object from the freelist, report it to userspace through a printf().
- Modify the depart function to return nothing, as in the current CVS version it always returns true, and adjust callers accordingly.
- Fix the argument name in the witness_addgraph() prototype.
- Remove useless code from itismychild().
This commit shrinks struct lock_object and so yields smaller locks, in particular on amd64, where 2 uintptr_t (16 bytes per primitive) are saved.
Reviewed by: jhb
|
Revision tags: release/7.0.0_cvs, release/7.0.0 |
|
#
13ddf72d |
| 06-Feb-2008 |
Attilio Rao <attilio@FreeBSD.org> |
Really, no explicit checks against lock_class_* objects should be done in consumer code: using lock properties is much more appropriate. Fix current code doing these bogus checks.
Note: callouts are really not usable by all !(LC_SPINLOCK | LC_SLEEPABLE) primitives; rmlocks, for example, don't implement the generic lock layer functions, but they can be equipped for this, so the check is still valid.
Tested by: matteo, kris (earlier version) Reviewed by: jhb
|
Revision tags: release/6.3.0_cvs, release/6.3.0 |
|
#
357911ce |
| 08-Jan-2008 |
Kris Kennaway <kris@FreeBSD.org> |
Fix logic in skipcount handling (used to sample only 1 in every N lock operations, to reduce profiling overhead)
|
#
0c66dc67 |
| 31-Dec-2007 |
Jeff Roberson <jeff@FreeBSD.org> |
- Pause a while after disabling lock profiling and before resetting it, to be sure that all participating CPUs have stopped updating it.
- Restore the behavior of printing the name of the lock type in the output.
|
#
eea4f254 |
| 16-Dec-2007 |
Jeff Roberson <jeff@FreeBSD.org> |
- Re-implement lock profiling in such a way that it no longer breaks the ABI when enabled. There is no longer an embedded lock_profile_object in each lock. Instead, a list of lock_profile_objects is kept per-thread for each lock it may own. The cnt_hold statistic is now always 0 to facilitate this.
- Support shared locking by tracking individual lock instances and statistics in the per-thread, per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a per-cpu static lock_prof allocator. This removes the need for an array of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a critical_enter() is required, and not a spinlock_enter(), to modify the per-cpu tables.
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers since we track owners now.
In collaboration with: Kip Macy Sponsored by: Nokia
|
#
f53d15fe |
| 08-Nov-2007 |
Stephan Uphoff <ups@FreeBSD.org> |
Initial checkin for rmlock (read-mostly lock), a multi-reader, single-writer lock optimized for almost exclusive reader access (see also rmlock.9).
TODO: Convert to a per-cpu-variables linker set as soon as it is available. Optimize the UP (single processor) case.
|
#
4486adc5 |
| 14-Sep-2007 |
Attilio Rao <attilio@FreeBSD.org> |
Currently the LO_NOPROFILE flag (which is masked at the upper level by per-primitive macros like MTX_NOPROFILE, SX_NOPROFILE or RW_NOPROFILE) is not really honoured. In particular, lock_profile_obtain_lock_failure() and lock_profile_obtain_lock_success() ignore this flag. The bug leads to locks marked as no-profiling being profiled anyway. In the case of clock_lock, used by the i8254 timer, this leads to unpredictable behaviour on both amd64 and ia32 (double-fault panics, sudden reboots, etc.). The amd64 clock_lock is also not marked as non-profilable, as it should be. Fix these bugs by adding proper checks in the lock profiling code and at clock_lock initialization time.
i8254 bug pointed out by: kris Tested by: matteo, Giuseppe Cocomazzi <sbudella at libero dot it> Approved by: jeff (mentor) Approved by: re
|
#
cdcc788a |
| 03-Jun-2007 |
Kris Kennaway <kris@FreeBSD.org> |
Revert some debugging KTRs that were added during development.
|
#
c91fcee7 |
| 18-May-2007 |
John Baldwin <jhb@FreeBSD.org> |
Move lock_profile_object_{init,destroy}() into lock_{init,destroy}().
|
#
8289600c |
| 03-Apr-2007 |
Kip Macy <kmacy@FreeBSD.org> |
skip call to _lock_profile_obtain_lock_success entirely if acquisition time is non-zero (i.e. recursing or adding sharers)
|
#
fe68a916 |
| 26-Feb-2007 |
Kip Macy <kmacy@FreeBSD.org> |
general LOCK_PROFILING cleanup
- only collect timestamps when a lock is contested - this reduces the overhead of collecting profiles from 20x to 5x
- remove unused function from subr_lock.c
- generalize cnt_hold and cnt_lock statistics to be kept for all locks
- NOTE: rwlock profiling generates invalid statistics (and most likely always has); someone familiar with that code should review it
|
Revision tags: release/6.2.0_cvs, release/6.2.0 |
|
#
aa077979 |
| 04-Dec-2006 |
Kip Macy <kmacy@FreeBSD.org> |
Bug fix for obscenely large wait times on uncontested locks:
if waittime was zero (the lock was uncontested), l->lpo_waittime in the hash table would not get initialized.
Inspection prompted by questions from: Attilio Rao
|
#
61bd5e21 |
| 13-Nov-2006 |
Kip Macy <kmacy@FreeBSD.org> |
track lock class name in a way that doesn't break WITNESS
|
#
44a96b46 |
| 13-Nov-2006 |
Kip Macy <kmacy@FreeBSD.org> |
Unbreak witness
|
#
54e57f76 |
| 12-Nov-2006 |
Kip Macy <kmacy@FreeBSD.org> |
Show the lock class in profiling output for the default case, where the type is not specified when initializing the lock
Approved by: scottl (standing in for mentor rwatson)
|