Lines Matching full:we

90 * We should use WRITE_ONCE() here because we can have concurrent reads in __vma_start_write()
92 * We don't really care about the correctness of that early check, but in __vma_start_write()
93 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy. in __vma_start_write()
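The three fragments above come from the comment in __vma_start_write() (mm/mmap_lock.c): the store to vma->vm_lock_seq races with the early lockless check in vma_start_read() (see the 151-153 fragments below), the race is benign, and WRITE_ONCE()/READ_ONCE() exist only to document it and keep KCSAN quiet. Below is a minimal userspace sketch of that pattern, using C11 relaxed atomics in place of the kernel macros; the names are illustrative, not the kernel's:

```c
#include <stdatomic.h>
#include <stdio.h>

/* Stand-in for vma->vm_lock_seq: written by the lock holder, read
 * locklessly by an optimistic early check. */
static _Atomic unsigned int lock_seq;

/* Writer side (cf. __vma_start_write()): publish the new sequence.
 * The marked (atomic) access documents the data race for tools like
 * KCSAN; relaxed ordering is fine because nothing is inferred from
 * the early check. */
static void start_write(unsigned int new_seq)
{
	atomic_store_explicit(&lock_seq, new_seq, memory_order_relaxed);
}

/* Reader side (cf. the early check in vma_start_read()): best-effort
 * only - a stale answer merely steers the caller to the slow,
 * properly locked path. */
static int early_check_locked(unsigned int my_seq)
{
	return atomic_load_explicit(&lock_seq, memory_order_relaxed) == my_seq;
}

int main(void)
{
	start_write(1);
	printf("early check: %s\n", early_check_locked(1) ? "locked" : "unlocked");
	return 0;
}
```

Relaxed ordering is deliberate here: a stale read only sends the checker to the slow path, so paying for stronger ordering buys nothing.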
112 * We are the only writer, so no need to use vma_refcount_put(). in vma_mark_detached()
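The vma_mark_detached() fragment records a single-writer optimization: the detaching thread holds exclusive access, so the heavier vma_refcount_put() path, which also handles waking waiting readers, is unnecessary. A loose userspace analogy follows; this is not the kernel's actual logic, and the struct and helper names are invented for illustration:

```c
#include <stdatomic.h>
#include <stdbool.h>

struct object {
	atomic_int refcnt;
	bool waiters;	/* imaginary: someone may be blocked on us */
};

/* General-purpose put: must decrement-and-test atomically and also
 * handle waking anyone waiting on the object - roughly the extra
 * work the heavier vma_refcount_put() path has to worry about. */
static void object_put(struct object *obj)
{
	if (atomic_fetch_sub(&obj->refcnt, 1) == 1 && obj->waiters) {
		/* wake waiters, then free */
	}
}

/* Detach by the one and only writer: with exclusive access, nobody
 * can race the count or be waiting on it, so a plain decrement
 * suffices and the wakeup machinery is dead weight. */
static void mark_detached(struct object *obj)
{
	atomic_fetch_sub_explicit(&obj->refcnt, 1, memory_order_relaxed);
}

int main(void)
{
	struct object obj = { .refcnt = 1, .waiters = false };

	mark_detached(&obj);
	(void)object_put;
	return 0;
}
```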
132 * locked result to avoid performance overhead, in which case we fall back to
135 * reused and attached to a different mm before we lock it.
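The 132 and 135 fragments state the contract of vma_start_read(): occasional false "locked" results are tolerated and simply push the caller onto the mmap_lock slow path, while a false "unlocked" result is forbidden. A userspace sketch of that one-sided contract with pthreads; the coarse mutex stands in for mmap_lock and the per-object rwlock for the per-VMA lock:

```c
#include <pthread.h>
#include <stdio.h>

/* Coarse fallback lock, playing the role of mmap_lock. */
static pthread_mutex_t coarse_lock = PTHREAD_MUTEX_INITIALIZER;

struct object {
	pthread_rwlock_t lock;	/* per-object lock, like the per-VMA lock */
	int data;
};

static void read_object(struct object *obj)
{
	/* Best-effort per-object read lock. As with vma_start_read(), a
	 * spurious failure ("false locked") is acceptable: it only sends
	 * us to the slower coarse-lock path, never to inconsistent data.
	 * A false success, by contrast, would be a real bug. */
	if (pthread_rwlock_tryrdlock(&obj->lock) == 0) {
		printf("fast path: %d\n", obj->data);
		pthread_rwlock_unlock(&obj->lock);
		return;
	}

	pthread_mutex_lock(&coarse_lock);
	printf("slow path: %d\n", obj->data);
	pthread_mutex_unlock(&coarse_lock);
}

int main(void)
{
	struct object obj = { .data = 42 };

	pthread_rwlock_init(&obj.lock, NULL);
	read_object(&obj);
	return 0;
}
```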
151 * We can use READ_ONCE() for the mm_lock_seq here, and don't need in vma_start_read()
153 * we don't rely on for anything - the mm_lock_seq read against which we in vma_start_read()
181 * False unlocked result is impossible because we modify and check in vma_start_read()
185 * We must use ACQUIRE semantics for the mm_lock_seq so that if we are in vma_start_read()
186 * racing with vma_end_write_all(), we only start reading from the VMA in vma_start_read()
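Fragments 181-186 explain the ordering contract inside vma_start_read(): the mm_lock_seq load must be an ACQUIRE so that a reader observing the sequence bumped by vma_end_write_all() also observes everything written before the paired RELEASE. A self-contained C11 sketch of that acquire/release pairing; the names are stand-ins, not kernel symbols:

```c
#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned int mm_lock_seq;	/* stand-in for mm->mm_lock_seq */
static int vma_field;	/* data the reader must see consistently */

/* Writer, in the spirit of vma_end_write_all(): finish every VMA
 * update, then bump the sequence with RELEASE so the updates become
 * visible no later than the new sequence value. */
static void end_write_all(void)
{
	vma_field = 123;
	atomic_store_explicit(&mm_lock_seq, 1, memory_order_release);
}

/* Reader: ACQUIRE on the sequence guarantees that observing the
 * bumped value implies observing all writes before the RELEASE -
 * i.e. we "only start reading from the VMA after it has been
 * unlocked", as the comment puts it. */
static void reader(void)
{
	if (atomic_load_explicit(&mm_lock_seq, memory_order_acquire) == 1)
		printf("vma_field = %d\n", vma_field);
}

int main(void)
{
	end_write_all();
	reader();
	return 0;
}
```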
208 other_mm = vma->vm_mm; /* use a copy as vma can be freed after we drop vm_refcnt */ in vma_start_read()
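Line 208 shows a classic use-after-put hazard being avoided: vma->vm_mm is copied into a local before the refcount is dropped, because once vm_refcnt is released the VMA may be freed or recycled. A small userspace illustration of the same rule, with a simplified non-atomic refcount and invented names:

```c
#include <stdio.h>
#include <stdlib.h>

struct owner { int id; };	/* plays the role of the mm */

struct object {
	int refcnt;		/* simplified, non-atomic */
	struct owner *owner;	/* plays the role of vma->vm_mm */
};

static void put_object(struct object *obj)
{
	if (--obj->refcnt == 0)
		free(obj);
}

static void use_then_put(struct object *obj)
{
	/* Copy the field first: once the reference is dropped, the
	 * object may be freed or recycled, so obj->owner must never be
	 * dereferenced afterwards - the same reason vma_start_read()
	 * snapshots vma->vm_mm before dropping vm_refcnt. */
	struct owner *owner = obj->owner;

	put_object(obj);
	printf("owner id %d\n", owner->id);	/* safe: uses the copy */
}

int main(void)
{
	static struct owner mm = { .id = 1 };
	struct object *obj = malloc(sizeof(*obj));

	if (!obj)
		return 1;
	obj->refcnt = 1;
	obj->owner = &mm;
	use_then_put(obj);
	return 0;
}
```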
240 /* Check if the VMA got isolated after we found it */ in lock_vma_under_rcu()
251 * At this point, we have a stable reference to a VMA: The VMA is in lock_vma_under_rcu()
252 * locked and we know it hasn't already been isolated. in lock_vma_under_rcu()
253 * From here on, we can access the VMA without worrying about which in lock_vma_under_rcu()
258 /* Check if the vma we locked is the right one. */ in lock_vma_under_rcu()
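Fragments 240-258 describe lock_vma_under_rcu()'s validate-after-lock dance: the lookup is lockless, so after taking the per-VMA lock the function must re-check that the VMA was not isolated in the meantime and that it still covers the faulting address. A pthread-based sketch of that shape; struct range and its fields are illustrative:

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct range {
	pthread_rwlock_t lock;
	unsigned long start, end;	/* [start, end), like a VMA */
	bool detached;			/* "isolated" from the lookup tree */
};

/* After a lockless lookup, lock the object, then re-check both that
 * it was not isolated while unlocked and that it still covers the
 * address - the two validations lock_vma_under_rcu() performs once
 * it holds the per-VMA lock. Returns the locked range or NULL. */
static struct range *lock_range(struct range *r, unsigned long addr)
{
	if (pthread_rwlock_tryrdlock(&r->lock) != 0)
		return NULL;		/* contended: caller falls back */

	if (r->detached || addr < r->start || addr >= r->end) {
		pthread_rwlock_unlock(&r->lock);
		return NULL;		/* isolated, or no longer the right one */
	}
	return r;	/* stable: locked and known to be the right one */
}

int main(void)
{
	struct range r = { .start = 0x1000, .end = 0x2000 };

	pthread_rwlock_init(&r.lock, NULL);
	return lock_range(&r, 0x1800) ? 0 : 1;
}
```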
306 /* Start mmap_lock speculation in case we need to verify the vma later */ in lock_next_vma()
316 * Infinite loop should not happen because the vma we find will in lock_next_vma()
334 * vma can be ahead of the last search position but we need to verify in lock_next_vma()
335 * it was not shrunk after we found it and another vma has not been in lock_next_vma()
336 * installed ahead of it. Otherwise we might observe a gap that should in lock_next_vma()
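Fragments 306-336 are lock_next_vma()'s speculation: remember the mmap_lock sequence up front, do the lockless walk, and if the result could have raced with a shrink or a new insertion, validate against the sequence and redo the work under the real lock. A simplified seqcount-style reader in C11 atomics; the kernel's mmap_lock speculation helpers are more careful about ordering, and the names here are invented:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Even = no writer active, odd = write in progress, bumped on every
 * change - a simplified seqcount standing in for the mmap_lock
 * speculation that lock_next_vma() starts before its lockless walk. */
static _Atomic unsigned int mm_seq;

static bool speculate_begin(unsigned int *seq)
{
	*seq = atomic_load_explicit(&mm_seq, memory_order_acquire);
	return (*seq & 1) == 0;	/* only speculate while no writer runs */
}

static bool speculate_retry(unsigned int seq)
{
	/* Simplified: the kernel inserts read-barrier ordering here. */
	return atomic_load_explicit(&mm_seq, memory_order_acquire) != seq;
}

static int lookup_next(void)
{
	unsigned int seq;

	if (speculate_begin(&seq)) {
		int result = 42;	/* lockless tree walk would go here */
		if (!speculate_retry(seq))
			return result;	/* no writer interfered: result holds */
	}
	return -1;	/* changed underneath us: redo under the real lock */
}

int main(void)
{
	return lookup_next() == 42 ? 0 : 1;
}
```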
383 * We don't have this operation yet. in mmap_upgrade_trylock()
411 * For example, if we have a kernel bug that causes a page
412 * fault, we don't want to just use mmap_read_lock() to get
414 * to happen while we're holding the mm lock for writing.
420 * We can also actually take the mm lock for writing if we
436 * Well, dang. We might still be successful, but only in lock_mm_and_find_vma()
437 * if we can extend a vma to do so. in lock_mm_and_find_vma()
445 * We can try to upgrade the mmap lock atomically, in lock_mm_and_find_vma()
446 * in which case we can continue to use the vma in lock_mm_and_find_vma()
447 * we already looked up. in lock_mm_and_find_vma()
449 * Otherwise we'll have to drop the mmap lock and in lock_mm_and_find_vma()
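Fragments 436-449 sketch the fallback in lock_mm_and_find_vma(): try to upgrade the read lock to a write lock in place and keep the looked-up VMA, otherwise drop the lock, re-take it for writing, and look the VMA up again because the old result may be stale. POSIX rwlocks have no atomic upgrade, so this userspace sketch always takes the drop-and-retake path; find_vma_at() and the rest are invented stand-ins:

```c
#include <pthread.h>
#include <stddef.h>

static pthread_rwlock_t mm_lock = PTHREAD_RWLOCK_INITIALIZER;

struct vma { unsigned long start, end; };
static struct vma the_vma = { 0x1000, 0x2000 };

/* Trivial stand-in for the real vma lookup. */
static struct vma *find_vma_at(unsigned long addr)
{
	return (addr >= the_vma.start && addr < the_vma.end) ? &the_vma : NULL;
}

static struct vma *find_vma_for_write(unsigned long addr)
{
	struct vma *vma;

	pthread_rwlock_rdlock(&mm_lock);
	vma = find_vma_at(addr);
	/* No atomic read->write upgrade in POSIX, so mimic the slow
	 * path: drop the lock and re-take it for writing... */
	pthread_rwlock_unlock(&mm_lock);

	pthread_rwlock_wrlock(&mm_lock);
	/* ...then look the vma up again - the first result may have
	 * gone stale while the lock was dropped. */
	vma = find_vma_at(addr);
	/* ... extend or modify vma under the write lock ... */
	pthread_rwlock_unlock(&mm_lock);
	return vma;
}

int main(void)
{
	return find_vma_for_write(0x1800) ? 0 : 1;
}
```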