/linux/arch/arm64/mm/cache.S
    20: Ensure that the I and D caches are coherent within specified region.
    48: Ensure that the I and D caches are coherent within specified region.
    64: Ensure that the I and D caches are coherent within specified region.
    87: Ensure that the I cache is invalid within specified region.
    105: Ensure that any D-cache lines for the interval [start, end) …
    120: Ensure that any D-cache lines for the interval [start, end) …
    138: Ensure that any D-cache lines for the interval [start, end) …
    169: Ensure that any D-cache lines for the interval [start, end) …
    184: Ensure that any D-cache lines for the interval [start, end) …
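All of these matches are the same maintenance operation: after new instructions are written to memory, the D and I caches must be made coherent over the affected range before the code is executed. Userspace JITs need the equivalent step; a minimal sketch using the GCC/Clang builtin (the routines in cache.S are the kernel-side counterpart, and the helper name here is hypothetical):

    /* After writing instructions into buf, make the D and I caches
     * coherent over [buf, buf + len) before jumping into the code. */
    static void sync_icache_range(char *buf, unsigned long len)
    {
        __builtin___clear_cache(buf, buf + len);
    }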
/linux/mm/kmsan/kmsan_test.c
    164: Test case: ensure that kmalloc() returns uninitialized memory.
    177: Test case: ensure that kmalloc'ed memory becomes initialized after memset().
    191: Test case: ensure that kzalloc() returns initialized memory.
    203: Test case: ensure that local variables are uninitialized by default.
    214: Test case: ensure that local variables with initializers are initialized.
    271: Test case: ensure that uninitialized values are tracked through function …
    295: Test case: ensure kmsan_check_memory() reports an error when checking …
    344: Test case: ensure that memset() can initialize a buffer allocated via …
    364: Test case: ensure that use-after-free reporting works.
    382: Test case: ensure that uninitialized values are propagated through per-CPU …
    [all …]
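The tests pin down which allocations KMSAN considers initialized. A rough userspace analogue of the same distinction (hypothetical demo code, not from the test file): malloc() is to kmalloc() what calloc() is to kzalloc().

    #include <stdlib.h>
    #include <string.h>

    void demo(void)
    {
        int *a = malloc(sizeof(*a));    /* like kmalloc(): contents indeterminate */
        int *b = calloc(1, sizeof(*b)); /* like kzalloc(): guaranteed zeroed */

        if (!a || !b)
            goto out;
        /* Reading *a before this memset() would be a use of uninitialized
         * memory -- exactly the class of bug KMSAN reports in the kernel. */
        memset(a, 0, sizeof(*a));
    out:
        free(a);
        free(b);
    }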
/linux/arch/arm/include/asm/cacheflush.h
    71: Ensure coherency between the Icache and the Dcache in the …
    79: Ensure coherency between the Icache and the Dcache in the …
    87: Ensure that the data held in page is written back.
    136: Their sole purpose is to ensure that data held in the cache …
    155: Their sole purpose is to ensure that data held in the cache …
    264: flush_icache_user_range is used when we want to ensure that the …
    271: Perform necessary cache operations to ensure that data previously …
    277: Perform necessary cache operations to ensure that the TLB will …
    328: … data, we need to do a full cache flush to ensure that writebacks …
    356: … to always ensure proper cache maintenance to update main memory right …
    [all …]
/linux/arch/arm/include/asm/fncpy.h
    19: … the alignment of functions must be preserved when copying. To ensure this, …
    23: … function to be copied is defined, and ensure that your allocator for the …
    66: Ensure alignment of source and destination addresses, \
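fncpy.h exists because a function image can only be copied to memory that preserves the original alignment (and, on ARM, the Thumb bit encoded in the function address). A hypothetical, simplified version of just the alignment check, leaving out the Thumb-bit handling and cache maintenance the real macro also performs:

    #include <stdint.h>
    #include <string.h>

    #define FNCPY_ALIGN 8 /* alignment the ARM fncpy contract requires */

    /* Copy len bytes of code, refusing misaligned source or destination. */
    static void *fncpy_sketch(void *dst, const void *src, size_t len)
    {
        if (((uintptr_t)dst | (uintptr_t)src) & (FNCPY_ALIGN - 1))
            return NULL;
        return memcpy(dst, src, len);
    }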
/linux/tools/testing/selftests/powerpc/papr_sysparm/papr_sysparm.c
    57: Ensure expected error (in get_bad_parameter())
    61: Ensure the buffer is unchanged (in get_bad_parameter())
    80: Ensure expected error (in check_efault_common())
    111: Ensure expected error (in set_hmc0())
    133: Ensure expected error (in set_with_ro_fd())
    176: .description = "ensure EPERM on attempt to update HMC0",
/linux/include/vdso/helpers.h
    54: Ensure the sequence invalidation is visible before data is modified (in vdso_write_begin_clock())
    60: Ensure the data update is visible before the sequence is set valid again (in vdso_write_end_clock())
    71: Ensure the sequence invalidation is visible before data is modified (in vdso_write_begin())
    79: Ensure the data update is visible before the sequence is set valid again (in vdso_write_end())
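These four comments are the writer half of a seqcount protocol: bump the sequence to an odd value before touching the data, bump it back to even afterwards, with a store barrier on each side so readers never consume a half-written update. A minimal sketch of the protocol in C11 atomics; full fences stand in for the kernel's smp_wmb(), and all names are hypothetical:

    #include <stdatomic.h>

    struct clock_data {
        atomic_uint seq;     /* odd while an update is in flight */
        unsigned long value; /* data protected by the sequence count */
    };

    static void write_begin(struct clock_data *d)
    {
        atomic_fetch_add_explicit(&d->seq, 1, memory_order_relaxed); /* now odd */
        /* ensure the sequence invalidation is visible before data is modified */
        atomic_thread_fence(memory_order_seq_cst);
    }

    static void write_end(struct clock_data *d)
    {
        /* ensure the data update is visible before the sequence is valid again */
        atomic_thread_fence(memory_order_seq_cst);
        atomic_fetch_add_explicit(&d->seq, 1, memory_order_relaxed); /* even again */
    }

    /* Readers retry until they see the same even sequence on both sides of
     * the data read (the plain load of d->value is a simplification). */
    static unsigned long read_value(struct clock_data *d)
    {
        unsigned int s1, s2;
        unsigned long v;

        do {
            s1 = atomic_load_explicit(&d->seq, memory_order_acquire);
            v = d->value;
            atomic_thread_fence(memory_order_acquire);
            s2 = atomic_load_explicit(&d->seq, memory_order_relaxed);
        } while ((s1 & 1) || s1 != s2);
        return v;
    }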
/linux/include/linux/balloon_compaction.h
    19: … ensure following these simple rules: …
    30: … the aforementioned balloon page corner case, as well as to ensure the simple …
    96: Caller must ensure the page is locked and the spin_lock protecting balloon … (in balloon_page_insert())
    122: Caller must ensure that the page is locked.
    136: Caller must ensure the page is private and protect the list.
    148: Caller must ensure the page is private and protect the list. (in balloon_page_delete())
/linux/drivers/iio/accel/mma9551_core.c
    211: Locking is not handled inside the function. Callers should ensure they …
    236: Locking is not handled inside the function. Callers should ensure they …
    261: Locking is not handled inside the function. Callers should ensure they …
    286: Locking is not handled inside the function. Callers should ensure they …
    320: Locking is not handled inside the function. Callers should ensure they …
    347: Locking is not handled inside the function. Callers should ensure they …
    380: Locking is not handled inside the function. Callers should ensure they …
    419: Locking is not handled inside the function. Callers should ensure they …
    458: Locking is not handled inside the function. Callers should ensure they …
    493: Locking is not handled inside the function. Callers should ensure they …
    [all …]
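Every match here is the same documented convention: the mma9551 core helpers perform no locking themselves, and each caller serializes access around them. A generic sketch of that convention with pthreads (the device structure and register names are hypothetical):

    #include <pthread.h>

    struct dev_ctx {
        pthread_mutex_t lock;
        int reg_cache[16];
    };

    /* Unlocked helper: no locking inside, caller must hold ctx->lock. */
    static int dev_read_reg_unlocked(struct dev_ctx *ctx, int reg)
    {
        return ctx->reg_cache[reg];
    }

    /* Callers take the lock around one or more unlocked calls, so a
     * multi-register transaction stays atomic without recursive locking. */
    static int dev_read_pair(struct dev_ctx *ctx, int lo, int hi)
    {
        int v;

        pthread_mutex_lock(&ctx->lock);
        v = dev_read_reg_unlocked(ctx, lo) |
            (dev_read_reg_unlocked(ctx, hi) << 8);
        pthread_mutex_unlock(&ctx->lock);
        return v;
    }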
/linux/fs/ceph/io.c
    38: Declare that a buffered read operation is about to start, and ensure …
    41: … and holds a shared lock on inode->i_rwsem to ensure that the flag …
    83: Declare that a buffered write operation is about to start, and ensure …
    124: Declare that a direct I/O operation is about to start, and ensure …
    127: … and holds a shared lock on inode->i_rwsem to ensure that the flag …
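ceph's io.c (like fs/nfs/io.c and fs/netfs/locking.c below) enforces mutual exclusion between buffered and direct I/O: each mode may run in parallel with itself, but never with the other. The kernel implements this with a per-inode flag plus shared/exclusive use of inode->i_rwsem; the sketch below captures the same invariant with a mutex and condition variable instead (hypothetical and simplified, not the kernel's mechanism):

    #include <pthread.h>
    #include <stdbool.h>

    struct io_state {
        pthread_mutex_t lock;
        pthread_cond_t idle;
        bool direct_mode; /* which I/O mode currently owns the file */
        int in_flight;    /* operations running in the current mode */
    };

    /* Begin an operation in the given mode; switch modes only when idle. */
    static void start_io(struct io_state *s, bool direct)
    {
        pthread_mutex_lock(&s->lock);
        while (s->in_flight > 0 && s->direct_mode != direct)
            pthread_cond_wait(&s->idle, &s->lock); /* drain the other mode */
        s->direct_mode = direct;
        s->in_flight++;
        pthread_mutex_unlock(&s->lock);
    }

    static void end_io(struct io_state *s)
    {
        pthread_mutex_lock(&s->lock);
        if (--s->in_flight == 0)
            pthread_cond_broadcast(&s->idle);
        pthread_mutex_unlock(&s->lock);
    }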
/linux/fs/nfs/io.c
    30: Declare that a buffered read operation is about to start, and ensure …
    33: … and holds a shared lock on inode->i_rwsem to ensure that the flag …
    83: Declare that a buffered read operation is about to start, and ensure …
    123: Declare that a direct I/O operation is about to start, and ensure …
    126: … and holds a shared lock on inode->i_rwsem to ensure that the flag …
/linux/tools/testing/selftests/user_events/abi_test.c
    243: Ensure kernel clears bit after disable (in TEST_F())
    249: Ensure doesn't change after unreg (in TEST_F())
    261: Ensure it exists after close and disable (in TEST_F())
    264: Ensure we can delete it (in TEST_F())
    271: Ensure it does not exist after invalid flags (in TEST_F())
    338: Ensure bit 1 and 2 are tied together, should not delete yet (in TEST_F())
    347: Ensure COW pages get updated after fork (in TEST_F())
    373: Ensure child doesn't disable parent (in TEST_F())
/linux/tools/testing/selftests/user_events/user_events_selftests.h
    28: Ensure tracefs is installed (in tracefs_enabled())
    36: Ensure mounted tracefs (in tracefs_enabled())
    78: Ensure user_events is installed (in user_events_enabled())
/linux/rust/kernel/sync/lock.rs
    29: - Implementers must ensure that only one thread/CPU may access the protected data once the lock …
    31: - Implementers must also ensure that [`relock`] uses the same locking method as the original …
    63: Callers must ensure that [`Backend::init`] has been previously called.
    71: Callers must ensure that [`Backend::init`] has been previously called.
    85: Callers must ensure that `guard_state` comes from a previous call to [`Backend::lock`] (or …
    88: SAFETY: The safety requirements ensure that the lock is initialised. (in relock())
    96: Callers must ensure that [`Backend::init`] has been previously called.
    273: The caller must ensure that it owns the lock.
/linux/arch/x86/kernel/cpu/resctrl/pseudo_lock.c
    43: … pseudo-locking. This includes testing to ensure pseudo-locked regions …
    103: First we ensure that the kernel memory cannot be found in the cache.
    110: Local register variables are utilized to ensure that the memory region …
    126: … variable to ensure we always use a valid pointer, but the cost … (in resctrl_arch_pseudo_lock_fn())
    152: … ensure that we will not be preempted during this critical section. (in resctrl_arch_pseudo_lock_fn())
    373: Use LFENCE to ensure all previous instructions are retired … (in measure_residency_fn())
    380: Use LFENCE to ensure all previous instructions are retired … (in measure_residency_fn())
    396: Use LFENCE to ensure all previous instructions are retired … (in measure_residency_fn())
    403: Use LFENCE to ensure all previous instructions are retired … (in measure_residency_fn())
/linux/fs/netfs/locking.c
    44: Declare that a buffered read operation is about to start, and ensure …
    47: … and holds a shared lock on inode->i_rwsem to ensure that the flag …
    98: Declare that a buffered read operation is about to start, and ensure …
    155: Declare that a direct I/O operation is about to start, and ensure …
    158: … and holds a shared lock on inode->i_rwsem to ensure that the flag …
/linux/arch/arm64/kvm/hyp/nvhe/tlb.c
    34: - ensure that the page table updates are visible to all … (in enter_vmid_context())
    83: … TLB fill. For guests, we ensure that the S1 MMU is … (in enter_vmid_context())
    135: Ensure write of the old VMID (in exit_vmid_context())
    165: We have to ensure completion of the invalidation at Stage-2, … (in __kvm_tlb_flush_vmid_ipa())
    195: We have to ensure completion of the invalidation at Stage-2, … (in __kvm_tlb_flush_vmid_ipa_nsh())
/linux/tools/testing/selftests/bpf/progs/test_global_map_resize.c
    35: … at least one extern is included, to ensure that a specific …
    57: see above; ensure this is not optimized out (in bss_array_sum())
    75: see above; ensure this is not optimized out (in data_array_sum())
/linux/rust/kernel/cpumask.rs
    27: The callers must ensure that the `struct cpumask` is valid for access and …
    56: The caller must ensure that `ptr` is valid for writing and remains valid for the lifetime …
    70: The caller must ensure that `ptr` is valid for reading and remains valid for the lifetime …
    172: The callers must ensure that the `struct cpumask_var_t` is valid for access and remains valid …
    247: The caller must ensure that the returned [`CpumaskVar`] is properly initialized before …
    271: The caller must ensure that `ptr` is valid for writing and remains valid for the lifetime …
    285: The caller must ensure that `ptr` is valid for reading and remains valid for the lifetime …
/linux/rust/kernel/cpu.rs
    54: The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`).
    78: The caller must ensure that `id` is a valid CPU ID (i.e., `0 <= id < nr_cpu_ids()`).
    83: Ensure the `id` fits in an [`i32`] as it's also representable that way. (in from_u32_unchecked())
    138: Callers must ensure that the CPU device is not used after it has been unregistered.
/linux/rust/kernel/page.rs
    171: Callers must ensure that `dst` is valid for writing `len` bytes.
    172: Callers must ensure that this call does not race with a write to the same page that …
    193: Callers must ensure that `src` is valid for reading `len` bytes.
    194: Callers must ensure that this call does not race with a read or write to the same page …
    214: Callers must ensure that this call does not race with a read or write to the same page that …
    237: Callers must ensure that this call does not race with a read or write to the same page that …
/linux/tools/testing/selftests/ftrace/test.d/00basic/snapshot.tc
    15: echo "Ensure keep tracing off"
    24: echo "Ensure keep tracing on"
/linux/arch/arm/mm/cache-v4.S
    71: Ensure coherency between the Icache and the Dcache in the …
    85: Ensure coherency between the Icache and the Dcache in the …
    100: Ensure no D cache aliasing occurs, either with itself or …
/linux/tools/testing/selftests/powerpc/papr_vpd/papr_vpd.c
    57: Ensure EOF (in dev_papr_vpd_get_handle_all())
    289: Ensure EOF (in papr_vpd_system_loc_code())
    312: .description = "ensure EINVAL on unterminated location code",
    316: .description = "ensure EFAULT on bad handle addr",
    332: .description = "ensure re-read yields same results"
/linux/Documentation/networking/device_drivers/cellular/qualcomm/rmnet.rst
    49: … ensure 4 byte alignment.
    75: … ensure 4 byte alignment.
    99: … ensure 4 byte alignment.
    129: … ensure 4 byte alignment.
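Each of these matches is the MAP framing rule that a packet is padded until its length is a multiple of 4 bytes. The padding arithmetic, as a small sketch (the helper name is hypothetical):

    #include <stdint.h>

    /* Number of trailing pad bytes needed to reach a 4-byte boundary. */
    static inline uint32_t map_pad_len(uint32_t payload_len)
    {
        return (4 - (payload_len & 3)) & 3;
    }
    /* e.g. a 13-byte payload needs 3 pad bytes (16 total); 16 needs 0 */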
/linux/arch/riscv/kernel/unaligned_access_speed.c
    68: Ensure the CSR read can't reorder WRT to the copy. (in check_unaligned_access())
    71: Ensure the copy ends before the end time is snapped. (in check_unaligned_access())
    324: Ensure the CSR read can't reorder WRT to the copy. (in check_vector_unaligned_access())
    327: Ensure the copy ends before the end time is snapped. (in check_vector_unaligned_access())
    342: Ensure the CSR read can't reorder WRT to the copy. (in check_vector_unaligned_access())
    345: Ensure the copy ends before the end time is snapped. (in check_vector_unaligned_access())
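These comments describe fencing a timed copy so the timestamp reads cannot be moved across it. A userspace sketch of the same pattern using compiler barriers and clock_gettime(); the kernel version reads a CSR for its timestamps, and the helper name here is hypothetical:

    #include <string.h>
    #include <time.h>

    #define barrier() __asm__ __volatile__("" ::: "memory")

    /* Time one memcpy() of n bytes, keeping the copy inside the window. */
    static long long time_copy_ns(char *dst, const char *src, size_t n)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        barrier(); /* the start timestamp can't reorder past the copy */
        memcpy(dst, src, n);
        barrier(); /* the copy must end before the end time is snapped */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        return (t1.tv_sec - t0.tv_sec) * 1000000000LL +
               (t1.tv_nsec - t0.tv_nsec);
    }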