/linux/mm/

zpool.c
    101  * the requested module, if needed, but there is no guarantee the module will
    141  * Implementations must guarantee this to be thread-safe.
    192  * Implementations must guarantee this to be thread-safe,
    214  * Implementations must guarantee this to be thread-safe.
    230  * Implementations must guarantee this to be thread-safe.
    251  * Implementations must guarantee this to be thread-safe.
    266  * This frees previously allocated memory. This does not guarantee
    270  * Implementations must guarantee this to be thread-safe,

shrinker.c
    492  * step 1: use rcu_read_lock() to guarantee existence of the    in shrink_slab_memcg()
    497  * step 4: use rcu_read_lock() to guarantee existence of the shrinker.    in shrink_slab_memcg()
    499  * shrinker_try_get() to guarantee existence of the shrinker,    in shrink_slab_memcg()
    506  * need to acquire the RCU lock to guarantee existence of    in shrink_slab_memcg()
    638  * step 1: use rcu_read_lock() to guarantee existence of the shrinker    in shrink_slab()
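The shrinker.c hits above describe a common lifetime pattern: look an object up under rcu_read_lock(), then pin it with a try-get before leaving the read-side critical section. Below is a minimal kernel-style sketch of that pattern; the cache_obj structure, cache_list and cache_obj_get() are invented for illustration and are not taken from shrinker.c, where shrinker_try_get() plays the analogous role of the refcount_inc_not_zero() call shown here.

    #include <linux/rcupdate.h>
    #include <linux/rculist.h>
    #include <linux/refcount.h>
    #include <linux/list.h>

    struct cache_obj {
        struct list_head node;      /* linked with list_add_rcu() */
        refcount_t refcnt;
        int key;
    };

    static LIST_HEAD(cache_list);

    /* Find an entry and take a reference so it outlives the RCU section. */
    static struct cache_obj *cache_obj_get(int key)
    {
        struct cache_obj *obj;

        rcu_read_lock();    /* guarantees entries are not freed while we look */
        list_for_each_entry_rcu(obj, &cache_list, node) {
            if (obj->key == key && refcount_inc_not_zero(&obj->refcnt)) {
                rcu_read_unlock();
                return obj; /* pinned by our reference; RCU no longer needed */
            }
        }
        rcu_read_unlock();
        return NULL;
    }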
/linux/kernel/sched/

membarrier.c
    20   * order to enforce the guarantee that any writes occurring on CPU0 before
    42   * and r2 == 0. This violates the guarantee that membarrier() is
    56   * order to enforce the guarantee that any writes occurring on CPU1 before
    77   * the guarantee that membarrier() is supposed to provide.
    181  * A sync_core() would provide this guarantee, but    in ipi_sync_core()
    214  * guarantee that no memory access following registration is reordered    in ipi_sync_rq_state()
    224  * guarantee that no memory access prior to exec is reordered after    in membarrier_exec_mmap()
    448  * mm and in the current runqueue to guarantee that no memory    in sync_runqueues_membarrier_state()
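The membarrier.c comments quoted above reason about the ordering guarantee that the membarrier() system call provides to user space. As a hedged illustration of how that guarantee is typically consumed, here is a small user-space sketch; the sys_membarrier() wrapper and the error handling are invented, while the command constants come from the UAPI header <linux/membarrier.h>.

    #include <linux/membarrier.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdio.h>

    /* Invoked directly via syscall(2). */
    static int sys_membarrier(int cmd, unsigned int flags, int cpu_id)
    {
        return syscall(__NR_membarrier, cmd, flags, cpu_id);
    }

    int main(void)
    {
        /* The private expedited command must be registered before use. */
        if (sys_membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0, 0))
            perror("register");

        /*
         * After this returns, every running thread of this process has
         * executed a full memory barrier.
         */
        if (sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0, 0))
            perror("membarrier");
        return 0;
    }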
/linux/include/linux/

rbtree_latch.h
    9    * lockless lookups; we cannot guarantee they return a correct result.
    21   * However, while we have the guarantee that there is at all times one stable
    22   * copy, this does not guarantee an iteration will not observe modifications.
    61   * guarantee on which of the elements matching the key is found. See

types.h
    225  * The alignment is required to guarantee that bit 0 of @next will be
    229  * This guarantee is important for few reasons:
    232  * which encode PageTail() in bit 0. The guarantee is needed to avoid

u64_stats_sync.h
    26   * 4) If reader fetches several counters, there is no guarantee the whole values
    47   * snapshot for each variable (but no guarantee on several ones)
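The u64_stats_sync.h lines concern readers that need a consistent snapshot of several 64-bit counters on 32-bit SMP kernels. A minimal sketch of the usual begin/retry loop follows, assuming a made-up port_stats structure whose syncp was set up with u64_stats_init().

    #include <linux/u64_stats_sync.h>

    struct port_stats {
        u64 rx_packets;
        u64 rx_bytes;
        struct u64_stats_sync syncp;    /* initialised with u64_stats_init() */
    };

    /* Writer side, e.g. the datapath. */
    static void port_stats_update(struct port_stats *s, unsigned int len)
    {
        u64_stats_update_begin(&s->syncp);
        s->rx_packets++;
        s->rx_bytes += len;
        u64_stats_update_end(&s->syncp);
    }

    /* Reader side: retry until both counters come from one consistent snapshot. */
    static void port_stats_read(struct port_stats *s, u64 *packets, u64 *bytes)
    {
        unsigned int start;

        do {
            start = u64_stats_fetch_begin(&s->syncp);
            *packets = s->rx_packets;
            *bytes = s->rx_bytes;
        } while (u64_stats_fetch_retry(&s->syncp, start));
    }

On 64-bit kernels the sequence counter compiles away and the loop body runs exactly once; on 32-bit SMP the reader retries whenever a writer interfered.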
/linux/kernel/printk/

printk_ringbuffer.c
    459  * Guarantee the state is loaded before copying the descriptor    in desc_read()
    491  * 1. Guarantee the descriptor content is loaded before re-checking    in desc_read()
    507  * 2. Guarantee the record data is loaded before re-checking the    in desc_read()
    681  * 1. Guarantee the block ID loaded in    in data_push_tail()
    708  * 2. Guarantee the descriptor state loaded in    in data_push_tail()
    748  * Guarantee any descriptor states that have transitioned to    in data_push_tail()
    833  * Guarantee any descriptor states that have transitioned to    in desc_push_tail()
    843  * Guarantee the last state load from desc_read() is before    in desc_push_tail()
    895  * Guarantee the head ID is read before reading the tail ID.    in desc_reserve()
    929  * 1. Guarantee the tail ID is read before validating the    in desc_reserve()
    [all …]
/linux/arch/x86/include/asm/vdso/

gettimeofday.h
    204  * Note: The kernel and hypervisor must guarantee that cpu ID    in vread_pvclock()
    208  * preemption, it cannot guarantee that per-CPU pvclock time    in vread_pvclock()
    214  * guarantee than we get with a normal seqlock.    in vread_pvclock()
    216  * On Xen, we don't appear to have that guarantee, but Xen still    in vread_pvclock()
/linux/Documentation/locking/

spinlocks.rst
    19   spinlock itself will guarantee the global lock, so it will guarantee that
    117  guarantee the same kind of exclusive access, and it will be much faster.
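The spinlocks.rst lines are about the exclusive access a single spinlock guarantees to everything it protects. A minimal sketch, assuming a made-up counter guarded by its own lock:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(counter_lock);
    static unsigned long counter;

    /* Safe from any context, including interrupt handlers on the same CPU. */
    void counter_bump(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&counter_lock, flags);    /* exclusive access guaranteed here */
        counter++;
        spin_unlock_irqrestore(&counter_lock, flags);
    }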
/linux/Documentation/core-api/

refcount-vs-atomic.rst
    84   Memory ordering guarantee changes:
    97   Memory ordering guarantee changes:
    108  Memory ordering guarantee changes:
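Those sections enumerate how the memory-ordering guarantees of refcount_t operations differ from their fully ordered atomic_t counterparts. A small sketch of the put path, where the guarantee matters most; struct session and its helpers are invented.

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct session {
        refcount_t ref;
        /* ... payload ... */
    };

    static inline void session_get(struct session *s)
    {
        refcount_inc(&s->ref);      /* no ordering is needed on the way up */
    }

    static inline void session_put(struct session *s)
    {
        /*
         * refcount_dec_and_test() provides release ordering, plus acquire
         * ordering on the final decrement, so earlier stores to *s are
         * guaranteed to have completed before the object is freed.
         */
        if (refcount_dec_and_test(&s->ref))
            kfree(s);
    }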
this_cpu_ops.rst
    111  surrounding code this_cpu_inc() will only guarantee that one of the
    113  guarantee that the OS will not move the process directly before or
    229  These operations have no guarantee against concurrent interrupts or
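Lines 111-113 stress that this_cpu_inc() guarantees a correct increment of some CPU's counter but does not pin the task to that CPU. A minimal sketch with a made-up per-CPU statistic:

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    static DEFINE_PER_CPU(unsigned long, hit_count);

    void note_hit(void)
    {
        /*
         * The read-modify-write cannot be torn by an interrupt or by
         * preemption on the local CPU, but the task may still migrate
         * just before or after the call.
         */
        this_cpu_inc(hit_count);
    }

    unsigned long total_hits(void)
    {
        unsigned long sum = 0;
        int cpu;

        for_each_possible_cpu(cpu)
            sum += per_cpu(hit_count, cpu);
        return sum;
    }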
/linux/tools/memory-model/Documentation/

ordering.txt
    101  with void return types) do not guarantee any ordering whatsoever. Nor do
    106  operations such as atomic_read() do not guarantee full ordering, and
    130  such as atomic_inc() and atomic_dec() guarantee no ordering whatsoever.
    150  atomic_inc() implementations do not guarantee full ordering, thus
    278  from "x" instead of writing to it. Then an smp_wmb() could not guarantee
    501  and further do not guarantee "atomic" access. For example, the compiler
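The ordering.txt lines point out that non-value-returning RMW operations such as atomic_inc() guarantee no ordering at all. One conventional fix is to add an explicit barrier next to the atomic; a sketch with an invented event_ready flag:

    #include <linux/atomic.h>

    static atomic_t nr_events = ATOMIC_INIT(0);
    static int event_ready;

    void publish_event(void)
    {
        event_ready = 1;

        /*
         * atomic_inc() on its own orders nothing, so an explicit barrier
         * is needed to order the event_ready store before the increment.
         */
        smp_mb__before_atomic();
        atomic_inc(&nr_events);
    }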
/linux/Documentation/driver-api/

reset.rst
    87   Exclusive resets on the other hand guarantee direct control.
    99   is no guarantee that calling reset_control_assert() on a shared reset control
    152  The reset control API does not guarantee the order in which the individual
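Line 87 contrasts shared resets with exclusive ones, which do guarantee direct control of the line. A hedged sketch of exclusive usage; the device, delay and function name are invented.

    #include <linux/delay.h>
    #include <linux/err.h>
    #include <linux/platform_device.h>
    #include <linux/reset.h>

    static int mydev_hw_reset(struct platform_device *pdev)
    {
        struct reset_control *rstc;
        int ret;

        /* Exclusive control: only this consumer may toggle the reset line. */
        rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL);
        if (IS_ERR(rstc))
            return PTR_ERR(rstc);

        ret = reset_control_assert(rstc);
        if (ret)
            return ret;
        usleep_range(10, 20);           /* keep the line asserted briefly */
        return reset_control_deassert(rstc);
    }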
/linux/Documentation/

memory-barriers.txt
    331  of the standard containing this guarantee is Section 3.14, which
    381  A write memory barrier gives a guarantee that all the STORE operations
    444  A read barrier is an address-dependency barrier plus a guarantee that all
    461  A general memory barrier gives a guarantee that all the LOAD and STORE
    532  There are certain things that the Linux kernel memory barriers do not guarantee:
    534  (*) There is no guarantee that any of the memory accesses specified before a
    539  (*) There is no guarantee that issuing a memory barrier on one CPU will have
    544  (*) There is no guarantee that a CPU will see the correct order of effects
    549  (*) There is no guarantee that some intervening piece of off-the-CPU
    897  However, they do -not- guarantee any other sort of ordering:
    [all …]
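Lines 381 and 444 state what write and read barriers do guarantee, and such barriers only work in pairs. A minimal sketch of that pairing, with invented variable names:

    #include <linux/compiler.h>
    #include <asm/barrier.h>

    static int payload;
    static int payload_ready;

    /* Producer */
    void producer(void)
    {
        payload = 42;
        smp_wmb();                  /* order the STOREs: payload, then payload_ready */
        WRITE_ONCE(payload_ready, 1);
    }

    /* Consumer */
    int consumer(void)
    {
        if (READ_ONCE(payload_ready)) {
            smp_rmb();              /* order the LOADs: payload_ready, then payload */
            return payload;
        }
        return -1;
    }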
/linux/rust/kernel/

build_assert.rs
    8    /// If the compiler or optimizer cannot guarantee that `build_error!` can never
    36   /// will panic. If the compiler or optimizer cannot guarantee the condition will
types.rs
    236  // The type invariants guarantee that `unwrap` will succeed.    in deref()
    243  // The type invariants guarantee that `unwrap` will succeed.    in deref_mut()
    392  // INVARIANT: The safety requirements guarantee that the new instance now owns the    in from_raw()
    442  // SAFETY: The type invariants guarantee that the object is valid.    in deref()
    457  // SAFETY: The type invariants guarantee that the `ARef` owns the reference we're about to    in drop()
/linux/drivers/net/ethernet/meta/fbnic/

fbnic_txrx.h
    15   /* Guarantee we have space needed for storing the buffer
    21   * If we cannot guarantee that then we should return TX_BUSY
/linux/net/smc/

smc_cdc.c
    46   /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */    in smc_cdc_tx_handler()
    273  /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */    in smcd_cdc_msg_send()
    349  /* guarantee 0 <= peer_rmbe_space <= peer_rmbe_size */    in smc_cdc_msg_recv_action()
    367  /* guarantee 0 <= sndbuf_space <= sndbuf_desc->len */    in smc_cdc_msg_recv_action()
    384  /* guarantee 0 <= bytes_to_rcv <= rmb_desc->len */    in smc_cdc_msg_recv_action()
/linux/Documentation/driver-api/usb/

anchors.rst
    55   Therefore no guarantee is made that the URBs have been unlinked when
    82   destinations in one anchor you have no guarantee the chronologically
/linux/arch/arc/include/asm/

futex.h
    82   preempt_disable(); /* to guarantee atomic r-m-w of futex op */    in arch_futex_atomic_op_inuser()
    131  preempt_disable(); /* to guarantee atomic r-m-w of futex op */    in futex_atomic_cmpxchg_inatomic()
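Both futex.h hits use preempt_disable() so a plain read-modify-write cannot be preempted part-way through. The same idea in a generic sketch; the per-CPU scratch variable and scratch_add() are invented.

    #include <linux/preempt.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(int, scratch);

    void scratch_add(int val)
    {
        int *p;

        preempt_disable();          /* guarantee the r-m-w completes on one CPU */
        p = this_cpu_ptr(&scratch);
        *p += val;                  /* plain, non-atomic read-modify-write */
        preempt_enable();
    }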
/linux/Documentation/RCU/Design/Requirements/

Requirements.rst
    58   #. `Grace-Period Guarantee`_
    59   #. `Publish/Subscribe Guarantee`_
    64   Grace-Period Guarantee
    67   RCU's grace-period guarantee is unusual in being premeditated: Jack
    68   Slingwine and I had this guarantee firmly in mind when we started work
    71   understanding of this guarantee.
    73   RCU's grace-period guarantee allows updaters to wait for the completion
    83   This guarantee allows ordering to be enforced with extremely low
    174  the synchronize_rcu() in start_recovery() to guarantee that
    196  Although RCU's grace-period guarantee is useful in and of itself, with
    [all …]
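Line 73 states the core grace-period guarantee: an updater can wait until all pre-existing readers have finished. A minimal sketch of the classic remove, wait, free sequence; struct route, route_list and the lock implied on the update side are invented.

    #include <linux/rcupdate.h>
    #include <linux/rculist.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct route {
        struct list_head list;      /* linked with list_add_rcu() */
        int dest;
    };

    static LIST_HEAD(route_list);

    /* Reader: entries cannot be freed while this read-side section runs. */
    bool route_exists(int dest)
    {
        struct route *r;
        bool found = false;

        rcu_read_lock();
        list_for_each_entry_rcu(r, &route_list, list) {
            if (r->dest == dest) {
                found = true;
                break;
            }
        }
        rcu_read_unlock();
        return found;
    }

    /* Updater, called with the update-side lock held. */
    void route_remove(struct route *r)
    {
        list_del_rcu(&r->list);
        synchronize_rcu();          /* every reader that might see r has finished */
        kfree(r);
    }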
/linux/Documentation/filesystems/xfs/

xfs-delayed-logging-design.rst
    17   guarantee forwards progress for long running transactions with finite initial
    51   followed to guarantee forwards progress and prevent deadlocks.
    119  there is no guarantee of how much of the operation reached stale storage. Hence
    121  the high level operation must use intents and deferred operations to guarantee
    130  xfs_trans_commit() does not guarantee that the modification has been committed
    154  provide a forwards progress guarantee so that no modification ever stalls
    160  A transaction reservation provides a guarantee that there is physical log space
    178  for the transaction that is calculated at mount time. We must guarantee that the
    416  It should be noted that this does not change the guarantee that log recovery
    892  guarantee which context the pin count is associated with. This is because of
    [all …]
/linux/Documentation/block/

stat.rst
    15   By having a single file, the kernel can guarantee that the statistics
    18   each, it would be impossible to guarantee that a set of readings
/linux/Documentation/RCU/Design/Memory-Ordering/

Tree-RCU-Memory-Ordering.rst
    13   grace-period memory ordering guarantee is provided.
    15   What Is Tree RCU's Grace Period Memory Ordering Guarantee?
    33   RCU updaters use this guarantee by splitting their updates into
    42   The RCU implementation provides this guarantee using a network
    145  RCU's grace-period memory ordering guarantee to extend to any
    254  Tree RCU's grace-period memory-ordering guarantee is provided by a
    272  If RCU's grace-period guarantee is to mean anything at all, any access
    275  portion of RCU's grace period guarantee is shown in the following
/linux/drivers/clocksource/

timer-pxa.c
    116  * register 0 and the OSCR, to guarantee that we will receive    in pxa_timer_resume()
    118  * to OSCR to guarantee that OSCR is monotonically incrementing.    in pxa_timer_resume()