Lines Matching full:we

114 * We need to check the following states:
134 * [1] Indicates that the kernel can acquire the futex atomically. We
218 * We get here with hb->lock held, and having found a in attach_to_pi_state()
227 * free pi_state before we can take a reference ourselves. in attach_to_pi_state()
232 * Now that we have a pi_state, we can acquire wait_lock in attach_to_pi_state()
240 * still is what we expect it to be, otherwise retry the entire in attach_to_pi_state()
383 * This creates pi_state, we have hb->lock held, this means nothing can in __attach_to_pi_owner()
419 * We are the first waiter - try to look up the real owner and attach in attach_to_pi_owner()
437 * We need to look at the task state to figure out whether the in attach_to_pi_owner()
439 * in futex_exit_release(), we do this protected by p->pi_lock: in attach_to_pi_owner()
445 * FUTEX_STATE_DEAD, we know that the task has finished in attach_to_pi_owner()
496 * @ps: the pi_state pointer where we store the result of the
527 * Read the user space value first so we can validate a few in futex_lock_pi_atomic()
554 * No waiter and user TID is 0. We are here because the in futex_lock_pi_atomic()
561 * We take over the futex. No other waiters and the user space in futex_lock_pi_atomic()
562 * TID is 0. We preserve the owner died bit. in futex_lock_pi_atomic()
604 * If the update of the user space value succeeded, we try to in futex_lock_pi_atomic()
605 * attach to the owner. If that fails, no harm done, we only in futex_lock_pi_atomic()
627 * We pass it to the next owner. The WAITERS bit is always kept in wake_futex_pi()
628 * enabled while there is PI state around. We cleanup the owner in wake_futex_pi()
629 * died bit, because we are the owner. in wake_futex_pi()
654 * This is a point of no return; once we modified the uval in wake_futex_pi()
682 * We are here because either: in __fixup_pi_state_owner()
684 * - we stole the lock and pi_state->owner needs updating to reflect in __fixup_pi_state_owner()
689 * - someone stole our lock and we need to fix things to point to the in __fixup_pi_state_owner()
692 * Either way, we have to replace the TID in the user space variable. in __fixup_pi_state_owner()
693 * This must be atomic as we have to preserve the owner died bit here. in __fixup_pi_state_owner()
695 * Note: We write the user space value _before_ changing the pi_state in __fixup_pi_state_owner()
696 * because we can fault here. Imagine swapped out pages or a fork in __fixup_pi_state_owner()
700 * pi_state in an inconsistent state when we fault here, because we in __fixup_pi_state_owner()
708 * We raced against a concurrent self; things are in __fixup_pi_state_owner()
715 /* We got the lock. pi_state is correct. Tell caller. */ in __fixup_pi_state_owner()
726 * rtmutex then newowner is NULL. We can't return here with in __fixup_pi_state_owner()
740 * We raced against a concurrent self; things are in __fixup_pi_state_owner()
770 * We fixed up user space. Now we need to fix the pi_state in __fixup_pi_state_owner()
778 * In order to reschedule or handle a page fault, we need to drop the in __fixup_pi_state_owner()
781 * the rtmutex) the chance to try the fixup of the pi_state. So once we in __fixup_pi_state_owner()
782 * are back from handling the fault we need to check the pi_state after in __fixup_pi_state_owner()
784 * the fixup has been done already we simply return. in __fixup_pi_state_owner()
786 * Note: we hold both hb->lock and pi_mutex->wait_lock. We can safely in __fixup_pi_state_owner()
824 * best we can make the kernel state consistent but user state will in __fixup_pi_state_owner()
876 * Got the lock. We might not be the anticipated owner if we in fixup_pi_owner()
879 * Speculative pi_state->owner read (we don't hold wait_lock); in fixup_pi_owner()
880 * since we own the lock pi_state->owner == current is the in fixup_pi_owner()
889 * If we didn't get the lock; check if anybody stole it from us. In in fixup_pi_owner()
890 * that case, we need to fix up the uval to point to them instead of in fixup_pi_owner()
900 * Paranoia check. If we did not take the lock, then we should not be in fixup_pi_owner()
950 * Atomic work succeeded and we got the lock, in futex_lock_pi()
951 * or failed. Either way, we do _not_ block. in futex_lock_pi()
955 /* We got the lock. */ in futex_lock_pi()
964 * - EBUSY: Task is exiting and we just wait for the in futex_lock_pi()
1000 * then we might need to wake him. This can not be done after the in futex_lock_pi()
1007 * Must be done before we enqueue the waiter, here is unfortunately in futex_lock_pi()
1015 * On PREEMPT_RT, when hb->lock becomes an rt_mutex, we must not in futex_lock_pi()
1017 * include hb->lock in the blocking chain, even though we'll not in in futex_lock_pi()
1050 * If we failed to acquire the lock (deadlock/signal/timeout), we must in futex_lock_pi()
1051 * unwind the above, however we cannot lock hb->lock because in futex_lock_pi()
1061 * There be dragons here, since we must deal with the inconsistency on in futex_lock_pi()
1082 * Fixup the pi_state owner and possibly acquire the lock if we in futex_lock_pi()
1129 * This is the in-kernel slowpath: we look up the PI state (if any),
1146 * We release only a lock we actually own: in futex_unlock_pi()
1160 * Check waiters first. We do not trust user space values at in futex_unlock_pi()
1161 * all and we at least want to know if user space fiddled in futex_unlock_pi()
1181 * By taking wait_lock while still holding hb->lock, we ensure in futex_unlock_pi()
1182 * there is no point where we hold neither; and thereby in futex_unlock_pi()
1192 * complete such that we're guaranteed to observe the in futex_unlock_pi()
1228 * Success, we're done! No tricky corner cases. in futex_unlock_pi()
1252 * We have no kernel internal state, i.e. no waiters in the in futex_unlock_pi()
1254 * on hb->lock. So we can safely ignore them. We do neither in futex_unlock_pi()
1255 * preserve the WAITERS bit nor the OWNER_DIED one. We are the in futex_unlock_pi()