Lines Matching full:we
114 * We need to check the following states:
134 * [1] Indicates that the kernel can acquire the futex atomically. We
218 * We get here with hb->lock held, and having found a
227 * free pi_state before we can take a reference ourselves.
232 * Now that we have a pi_state, we can acquire wait_lock
240 * still is what we expect it to be, otherwise retry the entire
383 * This creates pi_state, we have hb->lock held, this means nothing can
419 * We are the first waiter - try to look up the real owner and attach
437 * We need to look at the task state to figure out whether the
439 * in futex_exit_release(), we do this protected by p->pi_lock:
445 * FUTEX_STATE_DEAD, we know that the task has finished
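The fragments above describe the attempt to take a PI futex atomically before any kernel state is involved. A minimal userspace sketch of that uncontended path follows; the helper name try_acquire_pi_futex() is hypothetical, while the three constants are the real futex word bits from <linux/futex.h>, redefined here so the sketch stands alone (later sketches reuse them):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define FUTEX_WAITERS    0x80000000u
    #define FUTEX_OWNER_DIED 0x40000000u
    #define FUTEX_TID_MASK   0x3fffffffu

    /* Case [1] above: atomic acquisition only works when the word is
     * 0, i.e. no owner and no waiters; 0 -> TID takes the lock. */
    static bool try_acquire_pi_futex(_Atomic uint32_t *uaddr, uint32_t tid)
    {
            uint32_t expected = 0;

            return atomic_compare_exchange_strong(uaddr, &expected, tid);
    }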
496 * @ps: the pi_state pointer where we store the result of the
527 * Read the user space value first so we can validate a few
554 * No waiter and user TID is 0. We are here because the
561 * We take over the futex. No other waiters and the user space
562 * TID is 0. We preserve the owner died bit.
604 * If the update of the user space value succeeded, we try to
605 * attach to the owner. If that fails, no harm done, we only
627 * We pass it to the next owner. The WAITERS bit is always kept
628 * enabled while there is PI state around. We cleanup the owner
629 * died bit, because we are the owner.
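The two ownership updates described above treat the OWNER_DIED bit differently: a takeover of an unowned futex preserves it, a handoff to the next owner clears it while keeping WAITERS. A sketch of the new-word computation for both cases, with hypothetical helper names and the constants from the first sketch:

    /* user TID was 0: install ourselves, but preserve
     * FUTEX_OWNER_DIED so the robust-death condition stays visible */
    static uint32_t takeover_val(uint32_t uval, uint32_t tid)
    {
            return (uval & FUTEX_OWNER_DIED) | tid;
    }

    /* pass to the next owner: keep FUTEX_WAITERS while PI state
     * exists, clear FUTEX_OWNER_DIED since the new owner is alive */
    static uint32_t handoff_val(uint32_t next_tid)
    {
            return FUTEX_WAITERS | next_tid;
    }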
654 * This is a point of no return; once we modified the uval
682 * We are here because either:
684 * - we stole the lock and pi_state->owner needs updating to reflect
689 * - someone stole our lock and we need to fix things to point to the
692 * Either way, we have to replace the TID in the user space variable.
693 * This must be atomic as we have to preserve the owner died bit here.
695 * Note: We write the user space value _before_ changing the pi_state
696 * because we can fault here. Imagine swapped out pages or a fork
700 * pi_state in an inconsistent state when we fault here, because we
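The TID replacement described above must be a single atomic update so the owner-died bit is never lost, and the user word is written before the pi_state because that write can fault. A sketch of the retry loop, reusing the constants from the first sketch; fix_owner_tid() is a hypothetical name, and the kernel additionally handles faults between attempts:

    /* preserve WAITERS and OWNER_DIED, swap only the TID field;
     * retry when a concurrent update changes the word under us */
    static void fix_owner_tid(_Atomic uint32_t *uaddr, uint32_t new_tid)
    {
            uint32_t uval = atomic_load(uaddr);
            uint32_t newval;

            do {
                    newval = (uval & ~FUTEX_TID_MASK) | new_tid;
            } while (!atomic_compare_exchange_weak(uaddr, &uval, newval));
    }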
708 * We raced against a concurrent self; things are
715 /* We got the lock. pi_state is correct. Tell caller. */
726 * rtmutex then newowner is NULL. We can't return here with
740 * We raced against a concurrent self; things are
770 * We fixed up user space. Now we need to fix the pi_state
778 * In order to reschedule or handle a page fault, we need to drop the
781 * the rtmutex) the chance to try the fixup of the pi_state. So once we
782 * are back from handling the fault we need to check the pi_state after
784 * the fixup has been done already we simply return.
786 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely
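A userspace analogue of the drop-relock-recheck pattern described above, with all names hypothetical: drop both locks so the fault handling may sleep, retake them in the same order, then check whether another task already completed the fixup while the locks were dropped:

    #include <pthread.h>

    static int fixup_or_fault(pthread_mutex_t *hb_lock,
                              pthread_mutex_t *wait_lock,
                              int (*handle_fault)(void),
                              const int *fixup_done)
    {
            pthread_mutex_unlock(wait_lock);
            pthread_mutex_unlock(hb_lock);

            int err = handle_fault();       /* may block */

            pthread_mutex_lock(hb_lock);
            pthread_mutex_lock(wait_lock);

            if (*fixup_done)                /* fixup done already */
                    return 0;

            return err;
    }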
824 * best we can make the kernel state consistent but user state will
876 * Got the lock. We might not be the anticipated owner if we
879 * Speculative pi_state->owner read (we don't hold wait_lock);
880 * since we own the lock pi_state->owner == current is the
889 * If we didn't get the lock; check if anybody stole it from us. In
890 * that case, we need to fix up the uval to point to them instead of
900 * Paranoia check. If we did not take the lock, then we should not be
950 * Atomic work succeeded and we got the lock,
951 * or failed. Either way, we do _not_ block.
955 /* We got the lock. */
964 * - EBUSY: Task is exiting and we just wait for the
1000 * then we might need to wake it. This cannot be done after the
1007 * Must be done before we enqueue the waiter, here is unfortunately
1015 * On PREEMPT_RT, when hb->lock becomes an rt_mutex, we must not
1017 * include hb->lock in the blocking chain, even though we'll not in
1050 * If we failed to acquire the lock (deadlock/signal/timeout), we must
1051 * unwind the above; however, we cannot lock hb->lock because
1061 * There be dragons here, since we must deal with the inconsistency on
1082 * Fixup the pi_state owner and possibly acquire the lock if we
1129 * This is the in-kernel slowpath: we look up the PI state (if any),
1146 * We release only a lock we actually own:
1160 * Check waiters first. We do not trust user space values at
1161 * all and we at least want to know if user space fiddled
1181 * By taking wait_lock while still holding hb->lock, we ensure
1182 * there is no point where we hold neither; and thereby
1192 * complete such that we're guaranteed to observe the
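The unlock slowpath described above first verifies that the caller actually owns the lock, since user space values are not trusted. A sketch of that ownership test, reusing the constants from the first sketch; may_unlock() is a hypothetical name:

    /* unlocking is only legal when the TID bits of the futex word
     * name the caller */
    static bool may_unlock(uint32_t uval, uint32_t caller_tid)
    {
            return (uval & FUTEX_TID_MASK) == caller_tid;
    }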
1228 * Success, we're done! No tricky corner cases.
1252 * We have no kernel internal state, i.e. no waiters in the
1254 * on hb->lock. So we can safely ignore them. We do neither
1255 * preserve the WAITERS bit nor the OWNER_DIED one. We are the
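A sketch of the uncontended unlock described above: with no kernel internal state and no waiters, the whole word is compare-exchanged to 0, deliberately clearing FUTEX_WAITERS and FUTEX_OWNER_DIED along with the TID. unlock_uncontended() is a hypothetical name, and the retry on a failed exchange is omitted:

    static bool unlock_uncontended(_Atomic uint32_t *uaddr, uint32_t uval)
    {
            return atomic_compare_exchange_strong(uaddr, &uval, 0);
    }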