Wound/Wait Deadlock-Proof Mutex Design
======================================
Please read mutex-design.rst first, as it applies to wait/wound mutexes too.
Motivation for WW-Mutexes
-------------------------
and the deadlock handling approach is called Wait-Die. The name is based on
the actions of a locking thread when it encounters an already locked mutex.
If the transaction holding the lock is younger, the locking transaction waits.
If the transaction holding the lock is older, the locking transaction backs off
and dies. Hence Wait-Die.
There is also another algorithm called Wound-Wait:
If the transaction holding the lock is younger, the locking transaction wounds
the transaction holding the lock, requesting it to die. If the transaction
holding the lock is older, it waits for the other
transaction. Hence Wound-Wait.
However, the Wound-Wait algorithm is typically stated to generate fewer backoffs
compared to Wait-Die, but is, on the other hand, associated with more work than
Wait-Die when recovering from a backoff. Wound-Wait is also a preemptive
algorithm in that transactions are wounded by other transactions, and that
requires a reliable way to pick up the wounded condition and preempt the
running transaction. Note that this is not the same as process preemption. A
Wound-Wait transaction is considered preempted when it dies (returning
-EDEADLK) following a wound.
Concepts
--------
The lock class also specifies what algorithm to use, Wound-Wait or Wait-Die.
From a simple semantics point-of-view the _slow functions are not strictly
required, since simply calling the normal ww_mutex_lock functions on the
contending lock (after having dropped all already acquired locks) also works:
since all other locks have been released at that point, the acquisition cannot
prematurely return -EDEADLK. The advantage of the _slow functions is in
interface safety:

- ww_mutex_lock has a __must_check int return type, whereas ww_mutex_lock_slow
  has a void return type. Note that since ww mutex code needs loops/retries
  anyway the __must_check doesn't result in spurious warnings, even though the
  very first lock operation can never fail.
- When full debugging is enabled ww_mutex_lock_slow checks that all acquired
  ww-mutexes have been released (preventing deadlocks) and makes sure that we
  block on the contending lock (preventing spinning through the -EDEADLK
  slowpath until the contending lock can be acquired).

Of course, all the usual variants for handling wake-ups due to signals are also
provided.
Usage
-----
The algorithm (Wait-Die vs Wound-Wait) is chosen by using either
DEFINE_WW_CLASS() (Wound-Wait) or DEFINE_WD_CLASS() (Wait-Die).
As a rough rule of thumb, use Wound-Wait iff you expect the number of
simultaneous competing transactions to be typically small, and you want to
reduce the number of rollbacks.
Method 1, using a list in execbuf->buffers that's not allowed to be reordered.
This is useful if a list of required objects is already tracked somewhere.
Furthermore the lock helper can propagate the -EALREADY return code back to the
caller as a signal that an object is twice on the list. This is useful if the
list is constructed from userspace input and the ABI requires userspace to not
have duplicate entries (e.g. for a gpu commandbuffer submission ioctl)::

  static int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
  {
          struct obj *res_obj = NULL;
          struct obj_entry *contended_entry = NULL;
          struct obj_entry *entry;
          int ret;

          ww_acquire_init(ctx, &ww_class);

  retry:
          list_for_each_entry (entry, list, head) {
                  if (entry->obj == res_obj) {
                          res_obj = NULL;
                          continue;
                  }
                  ret = ww_mutex_lock(&entry->obj->lock, ctx);
                  if (ret < 0) {
                          contended_entry = entry;
                          goto err;
                  }
          }

          ww_acquire_done(ctx);
          return 0;

  err:
          list_for_each_entry_continue_reverse (entry, list, head)
                  ww_mutex_unlock(&entry->obj->lock);

          if (res_obj)
                  ww_mutex_unlock(&res_obj->lock);

          if (ret == -EDEADLK) {
                  /* we lost out in a seqno race, lock and retry.. */
                  ww_mutex_lock_slow(&contended_entry->obj->lock, ctx);
                  res_obj = contended_entry->obj;
                  goto retry;
          }

          ww_acquire_fini(ctx);
          return ret;
  }
Method 2, using a list in execbuf->buffers that can be reordered. Same semantics
of duplicate entry detection using -EALREADY as method 1 above. But the
list-reordering allows for a bit more idiomatic code::
  static int lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
  {
          struct obj_entry *entry, *entry2;
          int ret;

          ww_acquire_init(ctx, &ww_class);

          list_for_each_entry (entry, list, head) {
                  ret = ww_mutex_lock(&entry->obj->lock, ctx);
                  if (ret < 0) {
                          entry2 = entry;
                          list_for_each_entry_continue_reverse (entry2, list, head)
                                  ww_mutex_unlock(&entry2->obj->lock);

                          if (ret != -EDEADLK) {
                                  ww_acquire_fini(ctx);
                                  return ret;
                          }

                          /* we lost out in a seqno race, lock and retry.. */
                          ww_mutex_lock_slow(&entry->obj->lock, ctx);

                          /*
                           * Move buf to head of the list, this will point
                           * buf->next to the first unlocked entry,
                           * restarting the for loop.
                           */
                          list_del(&entry->head);
                          list_add(&entry->head, list);
                  }
          }

          ww_acquire_done(ctx);
          return 0;
  }

Unlocking works the same way for both methods #1 and #2::

  static void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
  {
          struct obj_entry *entry;

          list_for_each_entry (entry, list, head)
                  ww_mutex_unlock(&entry->obj->lock);

          ww_acquire_fini(ctx);
  }
Method 3 is useful if the list of objects is constructed ad-hoc and not
upfront, e.g. when the objects to lock are only discovered while walking a
graph of objects.
- They can handle lock-acquisition in any order, which allows us to start
  walking a graph of objects and lock them as they are discovered.
- Due to the -EALREADY return code signalling that a given object is already
  held there's no need for additional book-keeping to break cycles in the
  graph.
- Since the list of objects is dynamically constructed (and might very well be
  different when retrying due to hitting the -EDEADLK die condition) there's
  no fixed list that could simply be walked again on retry; the entire graph
  walk needs to be redone.
- On the other hand the dynamic object list construction also means that the
  -EALREADY return code can't be propagated to callers as a duplicate-entry
  signal; the locking helper has to handle it internally as "already locked,
  nothing to do".
Note also that methods #1 and #2 can be combined with method #3, e.g. to first
lock a fixed list of objects and then lock any additional objects discovered
dynamically using method #3 below. The backoff/retry procedure will be a bit
more involved, since when the dynamic locking step hits -EDEADLK we also need
to unlock all the locks acquired with the fixed list before retrying.
Also, method 3 can't fail the lock acquisition step since it doesn't return
-EALREADY. Of course this would be different when using the _interruptible
variants, but that's outside of the scope of these examples here::

  struct obj {
          struct ww_mutex ww_mutex;
          struct list_head locked_list;
  };

  static DEFINE_WW_CLASS(ww_class);

  static void __unlock_objs(struct list_head *list)
  {
          struct obj *entry, *temp;

          list_for_each_entry_safe (entry, temp, list, locked_list) {
                  /* need to do that before unlocking, since only the
                   * current lock holder is allowed to use the object */
                  list_del(&entry->locked_list);
                  ww_mutex_unlock(&entry->ww_mutex);
          }
  }

  static void unlock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
  {
          __unlock_objs(list);
          ww_acquire_fini(ctx);
  }

  static void lock_objs(struct list_head *list, struct ww_acquire_ctx *ctx)
  {
          struct obj *obj;
          int ret;

          ww_acquire_init(ctx, &ww_class);

  retry:
          /* re-init loop start state */
          loop {
                  /* magic code which walks over a graph and decides which
                   * objects to lock */

                  ret = ww_mutex_lock(&obj->ww_mutex, ctx);
                  if (ret == -EALREADY) {
                          /* we have it already, nothing to do */
                          continue;
                  }
                  if (ret == -EDEADLK) {
                          __unlock_objs(list);

                          ww_mutex_lock_slow(&obj->ww_mutex, ctx);
                          list_add(&obj->locked_list, list);
                          goto retry;
                  }

                  /* locked a new object, add it to the list */
                  list_add_tail(&obj->locked_list, list);
          }

          ww_acquire_done(ctx);
  }
Method 4: Only lock one single object. In that case deadlock detection and
prevention are obviously overkill, since with grabbing just one lock you can't
produce a deadlock within just one class. To simplify this case the w/w mutex
api can be used with a NULL context.
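With a NULL context the w/w mutex simply behaves like a normal mutex. A minimal
sketch, assuming the same struct obj with an embedded ww_mutex as in the
examples above::

  ww_mutex_lock(&obj->lock, NULL);
  /* use obj */
  ww_mutex_unlock(&obj->lock);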
Implementation Details
----------------------
(2) For Wait-Die, among waiters with contexts, only the first one can have
    other locks acquired already (ctx->acquired > 0). Note that this waiter
    may come after other waiters without contexts in the list.
The Wound-Wait preemption is implemented with a lazy-preemption scheme: the
wounded status of the transaction is checked only when there is contention for
a new lock and hence a true chance of deadlock. In that situation, if the
transaction is wounded, it backs off, clears the wounded status and retries.
- Forgetting to call ww_acquire_fini or ww_acquire_init.
- Attempting to lock more mutexes after ww_acquire_done.
- Attempting to lock the wrong mutex after -EDEADLK and
  unlocking all mutexes.
- Attempting to lock the right mutex after -EDEADLK,
  but not unlocking all mutexes.
- Calling ww_mutex_lock_slow before -EDEADLK was returned.
- Unlocking mutexes with the wrong unlock function.
- Calling one of the ww_acquire_* twice on the same context.
- Using a different ww_class for the mutex than for the ww_acquire_ctx.
- Normal lockdep errors that can result in deadlocks.
- Calling ww_acquire_init to initialize a second ww_acquire_ctx before
  having called ww_acquire_fini on the first.
- 'normal' deadlocks that can occur.