/linux/Documentation/core-api/ |
H A D | refcount-vs-atomic.rst |
  14  ``atomic_*()`` functions with regards to the memory ordering guarantees.
  17  these memory ordering guarantees.
  33  In the absence of any memory ordering guarantees (i.e. fully unordered)
  35  program order (po) relation (on the same CPU). It guarantees that
  41  A strong (full) memory ordering guarantees that all prior loads and
  44  It also guarantees that all po-earlier stores on the same CPU
  49  A RELEASE memory ordering guarantees that all prior loads and
  51  before the operation. It also guarantees that all po-earlier
  57  An ACQUIRE memory ordering guarantees that all post loads and
  59  completed after the acquire operation. It also guarantees that all
  [all …]
|
/linux/Documentation/block/ |
H A D | bfq-iosched.rst |
  9    - BFQ guarantees a high system and application responsiveness, and a
  70   4-1 Service guarantees provided
  84   Regardless of the actual background workload, BFQ guarantees that, for
  126  Strong fairness, bandwidth and delay guarantees
  132  guarantees, it is possible to compute a tight per-I/O-request delay
  133  guarantees by a simple formula. If not configured for strict service
  134  guarantees, BFQ switches to time-based resource sharing (only) for
  142  possibly heavy workloads are being served, BFQ guarantees:
  193  - With respect to idling for service guarantees, if several
  196  guarantees the expected throughput distribution without ever
  [all …]
|
/linux/rust/kernel/list/ |
H A D | impl_list_item_mod.rs |
  127  // SAFETY: See GUARANTEES comment on each method.
  129  // GUARANTEES:
  135  // SAFETY: The caller guarantees that `me` points at a valid value of type `Self`.
  141  // GUARANTEES:
  158  // GUARANTEES:
  171  // GUARANTEES:
  191  // SAFETY: See GUARANTEES comment on each method.
  193  // GUARANTEES:
  224  // GUARANTEES:
  238  // GUARANTEES: (always)
  [all …]
|
/linux/rust/kernel/ |
H A D | page.rs |
  168  // it has performed a bounds check and guarantees that `src` is  in read_raw()
  171  // The caller guarantees that there is no data race.  in read_raw()
  190  // bounds check and guarantees that `dst` is valid for `len` bytes.  in write_raw()
  192  // The caller guarantees that there is no data race.  in write_raw()
  210  // bounds check and guarantees that `dst` is valid for `len` bytes.  in fill_zero_raw()
  212  // The caller guarantees that there is no data race.  in fill_zero_raw()
  238  // bounds check and guarantees that `dst` is valid for `len` bytes. Furthermore, we have  in copy_from_user_slice_raw()
  239  // exclusive access to the slice since the caller guarantees that there are no races.  in copy_from_user_slice_raw()
|
H A D | list.rs |
  64   /// Implementers must ensure that they provide the guarantees documented on methods provided by
  71   /// # Guarantees
  88   /// # Guarantees
  104  /// # Guarantees
  122  /// # Guarantees
  383  // SAFETY: We just checked that `item` is in a list, so the caller guarantees that it  in remove()
  455  // SAFETY: The above call to `post_remove` guarantees that we can recreate the `ListArc`.  in remove_internal_inner()
|
H A D | firmware.rs |
  69  // SAFETY: `func` not bailing out with a non-zero error code, guarantees that `fw` is a  in request_internal()
  98  // `bindings::firmware` guarantees, if successfully requested, that  in data()
|
/linux/Documentation/ABI/testing/ |
H A D | sysfs-kernel-mm-numa |
  22  the guarantees of cpusets. This should not be enabled
  24  guarantees.
|
/linux/arch/s390/include/asm/ |
H A D | kmsan.h |
  46  * Note, this sacrifices occasionally breaking scheduling guarantees.  in kmsan_virt_addr_valid()
  48  * performance guarantees due to being heavily instrumented.  in kmsan_virt_addr_valid()
|
/linux/tools/memory-model/Documentation/ |
H A D | locking.txt |
  43   The basic rule guarantees that if CPU0() acquires mylock before CPU1(),
  75   This converse to the basic rule guarantees that if CPU0() acquires
  135  The smp_load_acquire() guarantees that its load from "flags" will
  137  problem. The smp_store_release() guarantees that its store will be
|
/linux/arch/parisc/include/asm/ |
H A D | ldcw.h |
  5  /* Because kmalloc only guarantees 8-byte alignment for kmalloc'd data,
  6     and GCC only guarantees 8-byte alignment for stack locals, we can't
|
/linux/arch/riscv/include/asm/ |
H A D | bitops.h |
  266  * Note: there are no guarantees that this function will not be reordered
  268  * make sure not to rely on its reordering guarantees.
  283  * Note: there are no guarantees that this function will not be reordered
  285  * make sure not to rely on its reordering guarantees.
|
/linux/arch/x86/include/asm/ |
H A D | kmsan.h |
  89  * Note, this sacrifices occasionally breaking scheduling guarantees.  in kmsan_virt_addr_valid()
  91  * performance guarantees due to being heavily instrumented.  in kmsan_virt_addr_valid()
|
/linux/kernel/livepatch/ |
H A D | shadow.c |
  189  * This function guarantees that the constructor function is called only when
  218  * This function guarantees that only one shadow variable exists with the given
  219  * @id for the given @obj. It also guarantees that the constructor function
|
/linux/arch/arm/nwfpe/ |
H A D | ChangeLog |
  61  comment in the code beside init_flag state the kernel guarantees
  64  I couldn't even find anything that guarantees it is zeroed when
|
/linux/drivers/gpu/drm/i915/gem/ |
H A D | i915_gem_busy.c |
  26  * The uABI guarantees an active writer is also amongst the read  in __busy_write_id()
  47  * guarantees forward progress. We could rely on the idle worker  in __busy_set_if_active()
|
/linux/kernel/ |
H A D | Kconfig.preempt |
  24  time, but there are no guarantees and occasional longer delays
  85  require real-time guarantees.
|
/linux/lib/ |
H A D | Kconfig.kgdb |
  141  'go', KDB tries to continue. No guarantees that the
  144  No guarantees that the kernel is still usable in this situation.
|
/linux/rust/kernel/block/mq/ |
H A D | request.rs |
  118  // success of the call to `try_set_end` guarantees that there are no  in end_ok()
  149  // valid. The existence of `&self` guarantees that the private data is  in wrapper_ref()
  241  // SAFETY: The type invariant of `Request` guarantees that the private  in dec_ref()
|
/linux/drivers/mmc/core/ |
H A D | sd_ops.c |
  294  /* NOTE: caller guarantees scr is heap-allocated */  in mmc_app_send_scr()
  344  /* NOTE: caller guarantees resp is heap-allocated */  in mmc_sd_switch()
  364  /* NOTE: caller guarantees ssr is heap-allocated */  in mmc_app_sd_status()
|
/linux/include/linux/ |
H A D | dm-bufio.h |
  107  * dm_bufio_write_dirty_buffers guarantees that the buffer is on-disk but
  127  * Write all dirty buffers. Guarantees that all dirty buffers created prior
|
/linux/Documentation/ |
H A D | memory-barriers.txt |
  49   - Guarantees.
  223  GUARANTEES
  226  There are some minimal guarantees that may be expected of a CPU:
  309  And there are anti-guarantees:
  311  (*) These guarantees do not apply to bitfields, because compilers often
  322  (*) These guarantees apply only to properly aligned and sized scalar
  329  guarantees were introduced into the C11 standard, so beware when
  418  the CPU under consideration guarantees that for any load preceding it,
  476  This acts as a one-way permeable barrier. It guarantees that all memory
  491  This also acts as a one-way permeable barrier. It guarantees that all
  [all …]
|
/linux/net/sunrpc/xprtrdma/ |
H A D | frwr_ops.c |
  260  * largest r/wsize NFS will ask for. This guarantees that  in frwr_query_device()
  492  * memory regions. This guarantees that registered MRs are properly fenced
  533  /* Strong send queue ordering guarantees that when the  in frwr_unmap_sync()
  598  * This guarantees that registered MRs are properly fenced from the
  634  /* Strong send queue ordering guarantees that when the  in frwr_unmap_async()
|
/linux/Documentation/RCU/Design/Memory-Ordering/ |
H A D | Tree-RCU-Memory-Ordering.rst |
  18   RCU grace periods provide extremely strong memory-ordering guarantees
  72   Tree RCU uses these two ordering guarantees to form an ordering
  118  | But the chain of rcu_node-structure lock acquisitions guarantees |
  120  | accesses and also guarantees that the updater's post-grace-period |
  138  | RCU guarantees that the outcome r0 == 0 && r1 == 0 will not |
  193  Tree RCU's grace-period memory-ordering guarantees rely most heavily on
  281  provides no ordering guarantees, either for itself or for phase one of
  472  guarantees that the first CPU's quiescent state happens before the
|
/linux/drivers/greybus/ |
H A D | hd.c |
  71  /* Locking: Caller guarantees serialisation */
  95  /* Locking: Caller guarantees serialisation */
|
/linux/tools/testing/selftests/arm64/pauth/ |
H A D | pac_corruptor.S |
  9  * also guarantees no possible collision. TCR_EL1.TBI0 is set by default so no
|