Lines Matching defs:hb

166  * hb->lock:
168 * hb -> futex_q relation
171 * (cannot be raw because hb can contain an arbitrary amount
192 * hb->lock
218 * We get here with hb->lock held, and having found a
220 * has dropped the hb->lock in between futex_queue() and futex_unqueue_pi(),
383 * This creates pi_state; we have hb->lock held, so nothing can
494 * @hb: the pi futex hash bucket
495 * @key: the futex key associated with uaddr and hb
509 * The hb->lock must be held by the caller.
515 int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb,
549 top_waiter = futex_top_waiter(hb, key);
597 * the kernel and blocked on hb->lock.
786 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
787 * drop hb->lock since the caller owns the hb -> futex_q relation.
865 * acquire the lock. Must be called with the hb lock held.
942 CLASS(hb, hb)(&q.key);
944 futex_q_lock(&q, hb);
946 ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
968 futex_q_unlock(hb);
987 __futex_queue(&q, hb, current);
997 * Caution: releasing @hb in-scope. The hb->lock is still locked
1001 * rt_mutex_pre_schedule() invocation. The hb will remain valid because
1002 * the thread performing the resize will block on hb->lock during
1005 futex_hash_put(no_free_ptr(hb));
1008 * under the hb lock, but that *should* work because it does nothing.
1015 * On PREEMPT_RT, when hb->lock becomes an rt_mutex, we must not
1017 * include hb->lock in the blocking chain, even though we'll not in
1021 * Therefore acquire wait_lock while holding hb->lock, but drop the
1051 * unwind the above; however, we cannot lock hb->lock because
1052 * rt_mutex already has a waiter enqueued and hb->lock can itself try
1055 * Doing the cleanup without holding hb->lock can cause inconsistent
1056 * state between hb and pi_state, but only in the direction of not
1096 CLASS(hb, hb)(&q.key);
1098 futex_hash_put(hb);
1103 futex_q_unlock(hb);
1107 futex_q_unlock(hb);
1155 CLASS(hb, hb)(&key);
1156 spin_lock(&hb->lock);
1164 top_waiter = futex_top_waiter(hb, &key);
1181 * By taking wait_lock while still holding hb->lock, we ensure
1186 * rt_waiter without holding hb->lock, it is possible for
1212 futex_hash_get(hb);
1220 spin_unlock(&hb->lock);
1254 * on hb->lock. So we can safely ignore them. We do neither
1259 spin_unlock(&hb->lock);
1279 spin_unlock(&hb->lock);
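
The lines indexed above belong to the kernel side of the PI-futex operations (FUTEX_LOCK_PI / FUTEX_UNLOCK_PI), where hb is the futex hash bucket protecting the queued waiters. As rough orientation only, the following is a minimal userspace sketch of the protocol those paths serve; the names sys_futex(), pi_mutex_lock()/pi_mutex_unlock() and the demo in main() are illustrative assumptions, and robust-list handling, FUTEX_OWNER_DIED recovery and full error handling are omitted.

/*
 * Sketch of a userspace PI mutex on top of FUTEX_LOCK_PI/FUTEX_UNLOCK_PI.
 * The helper names are illustrative, not kernel or libc API.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

/* 0 == unlocked; otherwise the low bits hold the owner TID, and the kernel
 * may OR in FUTEX_WAITERS once a waiter is queued on the hash bucket. */
static _Atomic uint32_t pi_futex;
static int counter;

static long sys_futex(uint32_t *uaddr, int op)
{
	return syscall(SYS_futex, uaddr, op | FUTEX_PRIVATE_FLAG,
		       0, NULL, NULL, 0);
}

static void pi_mutex_lock(void)
{
	uint32_t unowned = 0;

	/* Fast path: CAS 0 -> TID, no kernel involvement. */
	if (atomic_compare_exchange_strong(&pi_futex, &unowned, gettid()))
		return;
	/* Slow path: the kernel queues us under hb->lock and boosts the
	 * current owner through the pi_state/rt_mutex machinery. */
	while (sys_futex((uint32_t *)&pi_futex, FUTEX_LOCK_PI) == -1 &&
	       errno == EAGAIN)
		;
}

static void pi_mutex_unlock(void)
{
	uint32_t owned = gettid();

	/* Fast path: no waiters recorded, CAS TID -> 0. */
	if (atomic_compare_exchange_strong(&pi_futex, &owned, 0))
		return;
	/* Slow path: FUTEX_WAITERS is set; the kernel finds the top waiter
	 * under hb->lock and hands the lock over. */
	sys_futex((uint32_t *)&pi_futex, FUTEX_UNLOCK_PI);
}

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		pi_mutex_lock();
		counter++;
		pi_mutex_unlock();
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("counter = %d (expected 200000)\n", counter);
	return 0;
}

Build with something like gcc -O2 -pthread. Both fast paths stay entirely in userspace; only contended lock and unlock enter the kernel paths indexed above, where the waiter is queued and woken under hb->lock.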