History log of /freebsd/sys/kern/kern_rwlock.c (Results 201 – 225 of 299)
Revision   Date   Author   Comments
Revision tags: release/6.4.0_cvs, release/6.4.0
# 41313430 10-Sep-2008 John Baldwin <jhb@FreeBSD.org>

Teach WITNESS about the interlocks used with lockmgr. This removes a bunch
of spurious witness warnings since lockmgr grew witness support. Before
this, every time you passed an interlock to a lockmgr lock, WITNESS treated
it as a LOR.

Reviewed by: attilio

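For reference, the interlock pattern this refers to typically looks like the
sketch below; the "obj" structure and its members are purely illustrative.
LK_INTERLOCK tells lockmgr to drop the interlock once the lockmgr lock has
been acquired.

	mtx_lock(&obj->o_interlock);
	/* ... examine state protected by the interlock ... */
	lockmgr(&obj->o_lock, LK_EXCLUSIVE | LK_INTERLOCK, &obj->o_interlock);
	/* The interlock has been released by lockmgr at this point. */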


# bf9c6c31 10-Sep-2008 John Baldwin <jhb@FreeBSD.org>

Various whitespace fixes.


# 48972152 27-May-2008 Attilio Rao <attilio@FreeBSD.org>

Improve a comment which, in the current CVS revision, doesn't completely
explain the logic of the code chunk.


# 00ca0944 04-Apr-2008 Jeff Roberson <jeff@FreeBSD.org>

- Add sysctls at debug.rwlock to control the behavior of the speculative
spinning when readers hold a lock. This spinning is speculative because,
unlike the write case, we cannot test whether the owners are running.
- Add speculative read spinning for readers who are blocked by pending
writers while a read lock is still held. This allows the thread to
spin until the write lock succeeds, after which it may spin until the
writer has released the lock. This prevents excessive context switches
when readers and writers both hold the lock for brief periods.

Sponsored by: Nokia

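As a rough sketch of how such tunables are exposed (the variable and sysctl
leaf names here are illustrative, not necessarily the ones this commit adds):

	static int rw_spin_loops = 10000;	/* hypothetical knob */
	SYSCTL_NODE(_debug, OID_AUTO, rwlock, CTLFLAG_RD, NULL,
	    "rwlock debugging");
	SYSCTL_INT(_debug_rwlock, OID_AUTO, loops, CTLFLAG_RW,
	    &rw_spin_loops, 0, "Spin iterations while readers hold the lock");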


# b31a149b 01-Apr-2008 Attilio Rao <attilio@FreeBSD.org>

Add rw_try_rlock() and rw_try_wlock() to rwlocks.
These functions try the specified operation (rlocking or wlocking) and
return true if the operation completes, false otherwise.

The KPI is enriched by this commit, so __FreeBSD_version bumping and
manpage updating will happen soon.

Requested by: jeff, kris

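Usage is analogous to mtx_trylock(); a brief hedged example (the lock name
is illustrative):

	if (rw_try_wlock(&obj_rwlock)) {
		/* Got the write lock without blocking; do the work. */
		rw_wunlock(&obj_rwlock);
	} else {
		/* Contended; fall back to blocking or try again later. */
	}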


Revision tags: release/7.0.0_cvs, release/7.0.0
# 0fef2c50 07-Feb-2008 Jeff Roberson <jeff@FreeBSD.org>

- In rw_wunlock_hard prefer to wakeup writers if there are both readers
and writers available. Doing otherwise can cause deadlocks as no
read locks can proceed while there are write waiters.

Sponsored by: Nokia



# 5dff04c3 06-Feb-2008 Jeff Roberson <jeff@FreeBSD.org>

Adaptive spinning in write path with readers and writer starvation avoidance.
- Move recursion checking into rwlock inlines to free a bit for use with
adaptive spinners.
- Clear the RW_LOCK_WRITE_SPINNERS flag whenever the lock state changes
causing write spinners to restart their loop.
- Write spinners are limited by a count while readers hold the lock, as
there is no way to know for certain whether readers are still running.
- In the read path block if there are write waiters or spinners to avoid
starving writers. Use a new per-thread count, td_rw_rlocks, to skip
starvation avoidance if it might cause a deadlock.
- Remove or change invalid assertions in turnstiles.

Reviewed by: attilio (developed parts of the patch as well)
Sponsored by: Nokia

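A minimal sketch of the bounded spin while readers hold the lock (the loop
bound and local variable names are illustrative; rw_lock, RW_LOCK_READ and
cpu_spinwait() are the existing interfaces):

	/* No single owner to watch, so spin only a limited number of times. */
	for (i = 0; i < spin_limit; i++) {
		if ((rw->rw_lock & RW_LOCK_READ) == 0)
			break;		/* readers are gone; retry the acquire */
		cpu_spinwait();		/* CPU pause hint */
	}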


# cff3c4fd 17-Jan-2008 John Baldwin <jhb@FreeBSD.org>

Remove a conditional that is always true.

MFC after: 2 weeks


Revision tags: release/6.3.0_cvs, release/6.3.0
# eea4f254 16-Dec-2007 Jeff Roberson <jeff@FreeBSD.org>

- Re-implement lock profiling in such a way that it no longer breaks
the ABI when enabled. There is no longer an embedded lock_profile_object
in each lock. Instead a list of lock_profile_objects is kept per-thread
for each lock it may own. The cnt_hold statistic is now always 0 to
facilitate this.
- Support shared locking by tracking individual lock instances and
statistics in the per-thread per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a
per-cpu static lock_prof allocator. This removes the need for an array
of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a
critical_enter() is required and not a spinlock_enter() to modify the
per-cpu tables.
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers
since we track owners now.

In collaboration with: Kip Macy
Sponsored by: Nokia



# 49aead8a 26-Nov-2007 Attilio Rao <attilio@FreeBSD.org>

Simplify the adaptive spinning algorithm in rwlock and mutex:
currently, before spinning, the turnstile spinlock is acquired and the
waiters flag is set.
This is not strictly necessary, so just spin before acquiring the
spinlock and setting the flags.
This will simplify a lot of other functions too, as now we have the waiters
flag set only if there are actually waiters.
This should make the wakeup/sleep couplet faster under intensive mutex
workload.
This also fixes a bug in rw_try_upgrade() in the adaptive case, where
turnstile_lookup() will recurse on the ts_lock lock that will never be
really released [1].

[1] Reported by: jeff with Nokia help
Tested by: pho, kris (earlier, bugged version of rwlock part)
Discussed with: jhb [2], jeff
MFC after: 1 week

[2] John probably had a similar patch for mutexes against 6.x and/or 7.x



# f9721b43 18-Nov-2007 Attilio Rao <attilio@FreeBSD.org>

Expand lock class with the "virtual" function lc_assert which will offer
a unified way for all the lock primitives to express lock assertions.
Currently, lockmgrs and rmlocks don't have assertions, so just panic in
that case.
This will be a base for more callout improvements.

Ok'ed by: jhb, jeff

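A hedged sketch of the "just panic" fallback for a class without assertion
support (the function name is illustrative):

	static void
	lock_assert_unsupported(struct lock_object *lock, int what)
	{

		panic("%s: lock assertions not supported", lock->lo_name);
	}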


# 6f5c319c 14-Nov-2007 Attilio Rao <attilio@FreeBSD.org>

Remove a bogus KASSERT which prevents an rwlock from being acquired
recursively in exclusive mode on debugging kernels.

Submitted by: kmacy
Approved by: jeff


# 431f8906 14-Nov-2007 Julian Elischer <julian@FreeBSD.org>

Generally we are interested in what thread did something as
opposed to what process. Since threads by default have the name of the
process unless overwritten with more useful information, just print the
thread name instead.



# 6aa294be 20-Jul-2007 Attilio Rao <attilio@FreeBSD.org>

Fix some problems with lock profiling in rw locks:
- Adjust the lock_profiling stubs' semantics in the hard functions in order
to be more accurate and trustworthy
- As for sx locks, disable shared paths for lock_profiling. Actually,
lock_profiling has a subtle race which makes results coming from shared
paths not completely trustworthy. A macro stub (LOCK_PROFILING_SHARED) can
be used to re-enable these paths, but it is currently intended
for development use only.
- style(9) fixes

Approved by: jeff, kmacy, jhb[1]
Approved by: re

[1] Had initial reservations not shared by others, conceded
in the end.



# f08945a7 26-Jun-2007 Attilio Rao <attilio@FreeBSD.org>

Introduce a new rwlocks initialization function: rw_init_flags.
This is very similar to sx_init_flags: it initializes the rwlock using
special flags passed as third argument (RW_DUPOK, RW_NOPROFILE,
RW_NOWITNESS, RW_QUIET, RW_RECURSE).
Among these, the most important new feature is probably that rwlocks
can be acquired recursively now (for both shared and exclusive paths).

Because of the recursion counter, the ABI is changed.

Tested by: Timothy Redaelli <drizzt@gufi.org>
Reviewed by: jhb
Approved by: jeff (mentor)
Approved by: re

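A short hedged example of the new initializer with recursion enabled (the
lock variable and description string are illustrative):

	struct rwlock obj_rwlock;

	rw_init_flags(&obj_rwlock, "obj rwlock", RW_RECURSE);
	rw_wlock(&obj_rwlock);
	rw_wlock(&obj_rwlock);		/* recursion now allowed */
	rw_wunlock(&obj_rwlock);
	rw_wunlock(&obj_rwlock);
	rw_destroy(&obj_rwlock);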


# 2502c107 05-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

Commit 3/14 of sched_lock decomposition.
- Add a per-turnstile spinlock to solve potential priority propagation
deadlocks that are possible with thread_lock().
- The turnstile lock order is defined as the exact opposite of the
lock order used with the sleep locks they represent. This allows us
to walk in reverse order in priority_propagate and this is the only
place we wish to multiply acquire turnstile locks.
- Use the turnstile_chain lock to protect assigning mutexes to turnstiles.
- Change the turnstile interface to pass back turnstile pointers to the
consumers. This allows us to reduce some locking and makes it easier
to cancel turnstile assignment while the turnstile chain lock is held.

Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)



# c91fcee7 18-May-2007 John Baldwin <jhb@FreeBSD.org>

Move lock_profile_object_{init,destroy}() into lock_{init,destroy}().


# 0026c92c 08-May-2007 John Baldwin <jhb@FreeBSD.org>

Add destroyed cookie values for sx locks and rwlocks as well as extra
KASSERTs so that any lock operations on a destroyed lock will panic or
hang.


# b80ad3ee 30-Mar-2007 John Baldwin <jhb@FreeBSD.org>

- Drop memory barriers in rw_try_upgrade(). We don't need an 'acq' memory
barrier here as the earlier rw_rlock() already contained one.
- Comment fix.


# cd6e6e4e 22-Mar-2007 John Baldwin <jhb@FreeBSD.org>

- Simplify the #ifdef's for adaptive mutexes and rwlocks by conditionally
defining a macro earlier in the file.
- Add NO_ADAPTIVE_RWLOCKS option to disable adaptive spinning for rwlocks.
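
The conditional-macro pattern described here is roughly the following
(a hedged sketch built around the option named in the commit):

	#if defined(SMP) && !defined(NO_ADAPTIVE_RWLOCKS)
	#define	ADAPTIVE_RWLOCKS
	#endif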


# aa89d8cd 21-Mar-2007 John Baldwin <jhb@FreeBSD.org>

Rename the 'mtx_object', 'rw_object', and 'sx_object' members of mutexes,
rwlocks, and sx locks to 'lock_object'.


# c1f2a533 13-Mar-2007 John Baldwin <jhb@FreeBSD.org>

Print readers count as unsigned in ddb 'show lock'.

Submitted by: attilio


# 4b493b1a 12-Mar-2007 John Baldwin <jhb@FreeBSD.org>

Fix a typo.


# 6e21afd4 09-Mar-2007 John Baldwin <jhb@FreeBSD.org>

Add two new function pointers 'lc_lock' and 'lc_unlock' to lock classes.
These functions are intended to be used to drop a lock and then reacquire
it when doing a sleep such as msleep(9). Both functions accept a
'struct lock_object *' as their first parameter. The 'lc_unlock' function
returns an integer that is then passed as the second parameter to the
subsequent 'lc_lock' function. This can be used to communicate state.
For example, sx locks and rwlocks use this to indicate if the lock was
share/read locked vs exclusive/write locked.

Currently, spin mutexes and lockmgr locks do not provide working lc_lock
and lc_unlock functions.

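A hedged sketch of how a sleep routine might use the pair (the surrounding
code is illustrative and the exact return type of lc_unlock is glossed over):

	struct lock_class *class = LOCK_CLASS(lock);
	int how;

	how = class->lc_unlock(lock);	/* drop the lock, remember its state */
	/* ... block, e.g. on a sleep queue ... */
	class->lc_lock(lock, how);	/* reacquire in the previous mode */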


# ae8dde30 09-Mar-2007 John Baldwin <jhb@FreeBSD.org>

Use C99-style struct member initialization for lock classes.
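
In kern_rwlock.c the rwlock lock class then takes roughly this form (a hedged
reconstruction, not a verbatim copy of the file):

	struct lock_class lock_class_rw = {
		.lc_name = "rw",
		.lc_flags = LC_SLEEPLOCK | LC_RECURSABLE | LC_UPGRADABLE,
		.lc_lock = lock_rw,
		.lc_unlock = unlock_rw,
	};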

