# c5aa6b58 | 12-Mar-2008 | Jeff Roberson <jeff@FreeBSD.org>
- Pass the priority argument from *sleep() into sleepq and down into sched_sleep(). This removes an extra thread_lock() acquisition and allows the scheduler to decide what to do with the static boost.
- Change the priority arguments to cv_* to match sleepq/msleep/etc., where 0 means no priority change. Catch -1 in cv_broadcastpri() and convert it to 0 for now.
- Set a flag when sleeping in a way that is compatible with swapping, since direct priority comparisons are meaningless now.
- Add a sysctl to ULE, kern.sched.static_boost, that defaults to on and controls the boost behavior. Turning it off gives better performance in some workloads but needs more investigation.
- While we're modifying sleepq, change signal and broadcast to both return with the lock held, as the lock was held on enter.
Reviewed by: jhb, peter
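A minimal sketch of the convention this change standardizes, under hypothetical names (example_mtx, example_ready, example_wait, and example_signal are illustrative, not from the commit): a priority argument of 0 means "leave the sleeping thread's priority alone", while a value such as PRIBIO would request the static boost that the new kern.sched.static_boost knob controls.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/* Hypothetical state used only for this sketch. */
static struct mtx example_mtx;
static int example_ready;
MTX_SYSINIT(example_mtx, &example_mtx, "example", MTX_DEF);

static int
example_wait(void)
{
	int error = 0;

	mtx_lock(&example_mtx);
	while (example_ready == 0) {
		/*
		 * The priority portion of the third argument is 0: sleep
		 * without changing this thread's priority.  Passing PRIBIO
		 * (for example) would ask for the static boost instead.
		 * PCATCH only makes the sleep signal-interruptible.
		 */
		error = msleep(&example_ready, &example_mtx, PCATCH,
		    "exwait", 0);
		if (error != 0)
			break;
	}
	mtx_unlock(&example_mtx);
	return (error);
}

static void
example_signal(void)
{
	mtx_lock(&example_mtx);
	example_ready = 1;
	wakeup(&example_ready);
	mtx_unlock(&example_mtx);
}
```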

Revision tags: release/7.0.0_cvs, release/7.0.0, release/6.3.0_cvs, release/6.3.0

# eea4f254 | 16-Dec-2007 | Jeff Roberson <jeff@FreeBSD.org>
- Re-implement lock profiling in such a way that it no longer breaks the ABI when enabled. There is no longer an embedded lock_profile_object in each lock. Instead a list of lock_profile_objects is kept per-thread for each lock it may own. The cnt_hold statistic is now always 0 to facilitate this.
- Support shared locking by tracking individual lock instances and statistics in the per-thread per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a per-cpu static lock_prof allocator. This removes the need for an array of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a critical_enter() is required and not a spinlock_enter() to modify the per-cpu tables.
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers since we track owners now.
In collaboration with: Kip Macy
Sponsored by: Nokia
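A rough, hedged sketch of the per-CPU table idea described above (the example_prof structure and all names are illustrative, not the committed lock_prof layout): because each CPU owns its own bucket list, pinning the thread with critical_enter() is sufficient protection for the non-spinlock hash, and the interrupt-disabling spinlock_enter() is needed only for the separate spinlock hash.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pcpu.h>
#include <sys/proc.h>
#include <sys/queue.h>

/* Illustrative per-CPU profiling bucket; not the committed lock_prof code. */
struct example_prof {
	const char			*ep_name;
	u_long				 ep_cnt;
	SLIST_ENTRY(example_prof)	 ep_link;
};
SLIST_HEAD(example_prof_head, example_prof);

/* One list per CPU, so no CPU ever writes to another CPU's list. */
static struct example_prof_head example_table[MAXCPU];

static void
example_prof_record(const char *name)
{
	struct example_prof *ep;

	/*
	 * critical_enter() prevents preemption and migration, which is all
	 * a strictly per-CPU list needs; no spinlock (and therefore no
	 * spinlock_enter() / interrupt disabling) is required here.
	 * Entries are assumed to be inserted elsewhere, also while in a
	 * critical section on the owning CPU.
	 */
	critical_enter();
	SLIST_FOREACH(ep, &example_table[curcpu], ep_link) {
		if (ep->ep_name == name) {
			ep->ep_cnt++;
			break;
		}
	}
	critical_exit();
}
```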

# f9721b43 | 18-Nov-2007 | Attilio Rao <attilio@FreeBSD.org>
Expand lock class with the "virtual" function lc_assert, which will offer a unified way for all the lock primitives to express lock assertions. Currently, lockmgrs and rmlocks don't have assertions, so just panic in that case. This will be a base for more callout improvements.
Ok'ed by: jhb, jeff

# 431f8906 | 14-Nov-2007 | Julian Elischer <julian@FreeBSD.org>
Generally we are interested in what thread did something as opposed to what process. Since threads by default have the name of the process unless overwritten with more useful information, just print the thread name instead.

# 764a938b | 02-Oct-2007 | Pawel Jakub Dawidek <pjd@FreeBSD.org>
Fix sx_try_slock(), so it only fails when there is an exclusive owner. Before this fix, it was possible for the function to fail if the number of sharers changed between the 'x = sx->sx_lock' step and the atomic_cmpset_acq_ptr() call.

This fixes a ZFS problem where ZFS returned strange EIO errors under load. There is code in ZFS that depends on the fact that sx_try_slock() can only fail if there is an exclusive owner.
Discussed with: attilio
Reviewed by: jhb
Approved by: re (kensmith)
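A hedged sketch of the idea behind the fix, not the committed code (example_try_slock is a simplified stand-in; SX_LOCK_SHARED and SX_ONE_SHARER are the real lock-cookie encodings from <sys/sx.h>): the compare-and-swap is retried when it merely lost a race with another sharer, and failure is reported only when the exclusive bit is set.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/sx.h>
#include <machine/atomic.h>

/* Simplified stand-in for sx_try_slock(); omits WITNESS/assertion details. */
static int
example_try_slock(struct sx *sx)
{
	uintptr_t x;

	for (;;) {
		x = sx->sx_lock;
		if ((x & SX_LOCK_SHARED) == 0)
			return (0);	/* exclusive owner: the only real failure */
		if (atomic_cmpset_acq_ptr(&sx->sx_lock, x,
		    x + SX_ONE_SHARER))
			return (1);	/* successfully became a sharer */
		/* Lost a race with another sharer; retry instead of failing. */
	}
}
```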

# c1a6d9fa | 06-Jul-2007 | Attilio Rao <attilio@FreeBSD.org>
Fix some problems with lock_profiling in sx locks:
- Adjust the lock_profiling stubs semantic in the hard functions in order to be more accurate and trustworthy.
- Disable shared paths for lock_profiling. Currently, lock_profiling has a subtle race which makes results coming from shared paths not completely trustworthy. A macro stub (LOCK_PROFILING_SHARED) can be used to re-enable these paths, but it is currently intended for development use only.
- Use homogeneous names for automatic variables in hard functions regarding lock_profiling.
- Style fixes.
- Add a CTASSERT for some flags building.
Discussed with: kmacy, kris
Approved by: jeff (mentor)
Approved by: re

# f9819486 | 31-May-2007 | Attilio Rao <attilio@FreeBSD.org>
Add functions sx_xlock_sig() and sx_slock_sig(). These functions are intended to do the same actions as sx_xlock() and sx_slock(), but with the difference of performing an interruptible sleep, so that the sleep can be interrupted by external events. In order to support these new features some code restructuring is needed, but the external API won't be affected at all.

Note: use a "void" cast for "int"-returning functions in order to keep tools like Coverity Prevent from whining.
Requested by: rwatson
Tested by: rwatson
Reviewed by: jhb
Approved by: jeff (mentor)
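A short usage sketch under the interface described above (example_sx and example_locked_op are hypothetical): the _sig variants return 0 on success and an errno value such as EINTR or ERESTART when the sleep is interrupted, so the caller must not assume the lock was acquired.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/sx.h>

/* Hypothetical lock used only for this sketch. */
static struct sx example_sx;
SX_SYSINIT(example_sx, &example_sx, "example sx");

static int
example_locked_op(void)
{
	int error;

	/* Unlike sx_xlock(), this sleep can be interrupted by a signal. */
	error = sx_xlock_sig(&example_sx);
	if (error != 0)
		return (error);		/* e.g. EINTR/ERESTART; lock not held */

	/* ... work carried out while holding the lock exclusively ... */

	sx_xunlock(&example_sx);
	return (0);
}
```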

# 2c7289cb | 29-May-2007 | Attilio Rao <attilio@FreeBSD.org>
style(9) fixes for sx locks.
Approved by: jeff (mentor)

# acf840c4 | 29-May-2007 | Attilio Rao <attilio@FreeBSD.org>
Add a small fix for lock profiling in sx locks. "0" cannot be a correct value since, when the function is entered, at least one shared holder must be present, and since we want the last one, "1" is the correct value.

Note that lock_profiling for sx locks is far from perfect. Expect further fixes for that.
Approved by: jeff (mentor)

# 7ec137e5 | 19-May-2007 | John Baldwin <jhb@FreeBSD.org>
Rename the macros for assertion flags passed to sx_assert() from SX_* to SA_* to match mutexes and rwlocks. The old flags still exist for backwards compatibility.
Requested by: attilio

# 71a95881 | 19-May-2007 | John Baldwin <jhb@FreeBSD.org>
Expose sx_xholder() as a public macro. It returns a pointer to the thread that holds the current exclusive lock, or NULL if no thread holds an exclusive lock.
Requested by: pjd
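A small hedged sketch of how the now-public macro might be used (the reporting helper is hypothetical):

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/proc.h>
#include <sys/sx.h>

/* Hypothetical helper: report who, if anyone, owns a lock exclusively. */
static void
example_report_owner(struct sx *sx)
{
	struct thread *td;

	td = sx_xholder(sx);	/* NULL when there is no exclusive owner */
	if (td != NULL)
		printf("%s: exclusively owned by tid %d\n",
		    sx->lock_object.lo_name, td->td_tid);
	else
		printf("%s: no exclusive owner\n", sx->lock_object.lo_name);
}
```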

# 46a8b9cb | 19-May-2007 | John Baldwin <jhb@FreeBSD.org>
Oops, didn't include SX_ADAPTIVESPIN in the list of valid flags for the assert in sx_init_flags().
Submitted by: attilio

# b0d67325 | 19-May-2007 | John Baldwin <jhb@FreeBSD.org>
Add a new SX_RECURSE flag to make support for recursive exclusive locks conditional. By default, sx(9) locks are back to not supporting recursive exclusive locks.
Submitted by: attilio
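A minimal hedged sketch of what the flag changes (the lock here is hypothetical): recursively acquiring an exclusive sx lock is only permitted when the lock was created with SX_RECURSE.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/sx.h>

/* Hypothetical lock used only for this sketch. */
static struct sx example_rec_sx;

static void
example_recurse(void)
{
	sx_init_flags(&example_rec_sx, "example recursive sx", SX_RECURSE);

	sx_xlock(&example_rec_sx);
	sx_xlock(&example_rec_sx);	/* allowed only because of SX_RECURSE */
	sx_xunlock(&example_rec_sx);
	sx_xunlock(&example_rec_sx);

	sx_destroy(&example_rec_sx);
}
```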

# da7d0d1e | 18-May-2007 | John Baldwin <jhb@FreeBSD.org>
Fix a comment.

# c91fcee7 | 18-May-2007 | John Baldwin <jhb@FreeBSD.org>
Move lock_profile_object_{init,destroy}() into lock_{init,destroy}().

# 0026c92c | 08-May-2007 | John Baldwin <jhb@FreeBSD.org>
Add destroyed cookie values for sx locks and rwlocks as well as extra KASSERTs so that any lock operations on a destroyed lock will panic or hang.

# 59a31e6a | 04-Apr-2007 | Kip Macy <kmacy@FreeBSD.org>
fix typo

# e2bc1066 | 04-Apr-2007 | Kip Macy <kmacy@FreeBSD.org>
Style fixes, and make sure that the lock is treated as released in the sharers == 0 case. Note that this is somewhat racy because a new sharer can come in while we're updating stats.

# afc0bfbd | 04-Apr-2007 | Kip Macy <kmacy@FreeBSD.org>
Fixes to sx for newsx - fix recursed case and move out of inline
Submitted by: Attilio Rao <attilio@freebsd.org>

# 4e7f640d | 01-Apr-2007 | John Baldwin <jhb@FreeBSD.org>
Optimize sx locks to use simple atomic operations for the common cases of obtaining and releasing shared and exclusive locks. The algorithms for manipulating the lock cookie are very similar to those of rwlocks. This patch also adds support for exclusive locks using the same algorithm as mutexes.

A new sx_init_flags() function has been added so that optional flags can be specified to alter a given lock's behavior. The flags include SX_DUPOK, SX_NOWITNESS, SX_NOPROFILE, and SX_QUIET, which are all identical in nature to the similar flags for mutexes.

Adaptive spinning on select locks may be enabled by enabling the ADAPTIVE_SX kernel option. Only locks initialized with the SX_ADAPTIVESPIN flag via sx_init_flags() will adaptively spin.

The common cases for sx_slock(), sx_sunlock(), sx_xlock(), and sx_xunlock() are now performed inline in non-debug kernels. As a result, <sys/sx.h> now requires <sys/lock.h> to be included prior to <sys/sx.h>.

The new kernel option SX_NOINLINE can be used to disable the aforementioned inlining in non-debug kernels.

The size of struct sx has changed, so the kernel ABI is probably greatly disturbed.
MFC after: 1 month
Submitted by: attilio
Tested by: kris, pjd
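A hedged sketch of the new initialization interface (the locks and setup routine are hypothetical; flag spellings as listed above):

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/sx.h>	/* <sys/lock.h> must come first, as noted above */

static struct sx example_plain_sx;
static struct sx example_tuned_sx;

static void
example_sx_setup(void)
{
	/* sx_init() keeps its old behavior and default flags. */
	sx_init(&example_plain_sx, "example plain sx");

	/*
	 * sx_init_flags() takes optional behavior flags; SX_ADAPTIVESPIN
	 * only has an effect in kernels built with "options ADAPTIVE_SX".
	 */
	sx_init_flags(&example_tuned_sx, "example tuned sx",
	    SX_DUPOK | SX_NOPROFILE | SX_ADAPTIVESPIN);
}
```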

# aa89d8cd | 21-Mar-2007 | John Baldwin <jhb@FreeBSD.org>
Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes, rwlocks, and sx locks to 'lock_object'.

# 6e21afd4 | 09-Mar-2007 | John Baldwin <jhb@FreeBSD.org>
Add two new function pointers 'lc_lock' and 'lc_unlock' to lock classes. These functions are intended to be used to drop a lock and then reacquire it when doing a sleep such as msleep(9). Both functions accept a 'struct lock_object *' as their first parameter. The 'lc_unlock' function returns an integer that is then passed as the second parameter to the subsequent 'lc_lock' function. This can be used to communicate state. For example, sx locks and rwlocks use this to indicate if the lock was share/read locked vs. exclusive/write locked.

Currently, spin mutexes and lockmgr locks do not provide working lc_lock and lc_unlock functions.
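A hedged sketch of the intended calling pattern (the helper and the blocking callback are hypothetical, and how the caller obtains the struct lock_class pointer is left out): the integer returned by lc_unlock carries whatever state the class needs, for example whether an sx or rw lock was held shared or exclusive, and is handed back to lc_lock to reacquire the lock in the same mode.

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>

/* Hypothetical helper: drop a lock around a blocking call, then restore it. */
static void
example_drop_and_reacquire(struct lock_class *class, struct lock_object *lock,
    void (*example_blocking_op)(void))
{
	int how;

	how = class->lc_unlock(lock);	/* drop; remember shared vs. exclusive */

	example_blocking_op();		/* safe to sleep; the lock is not held */

	class->lc_lock(lock, how);	/* reacquire in the original mode */
}
```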

# ae8dde30 | 09-Mar-2007 | John Baldwin <jhb@FreeBSD.org>
Use C99-style struct member initialization for lock classes.

# c66d7606 | 02-Mar-2007 | Kip Macy <kmacy@FreeBSD.org>
lock stats updates need to be protected by the lock

# a5bceb77 | 01-Mar-2007 | Kip Macy <kmacy@FreeBSD.org>
Evidently I've overestimated gcc's ability to peek inside inline functions and optimize away unused stack values. The 48 bytes that the lock_profile_object adds to the stack evidently have a measurable performance impact on certain workloads.