/linux/fs/ceph/
  io.c
     29  /* ensure that bit state is consistent */  in ceph_block_o_direct()
     34  /* ensure modified bit is visible */  in ceph_block_o_direct()
     47  * Declare that a buffered read operation is about to start, and ensure
     50  * and holds a shared lock on inode->i_rwsem to ensure that the flag
     71  /* ensure that bit state is consistent */  in ceph_start_io_read()
    107  * Declare that a buffered write operation is about to start, and ensure
    139  /* ensure that bit state is consistent */  in ceph_block_buffered()
    144  /* ensure modified bit is visible */  in ceph_block_buffered()
    159  * Declare that a direct I/O operation is about to start, and ensure
    162  * and holds a shared lock on inode->i_rwsem to ensure that the flag
    [all …]
|
/linux/mm/kmsan/
  kmsan_test.c
    164  /* Test case: ensure that kmalloc() returns uninitialized memory. */
    177  * Test case: ensure that kmalloc'ed memory becomes initialized after memset().
    191  /* Test case: ensure that kzalloc() returns initialized memory. */
    203  /* Test case: ensure that local variables are uninitialized by default. */
    214  /* Test case: ensure that local variables with initializers are initialized. */
    271  * Test case: ensure that uninitialized values are tracked through function
    295  * Test case: ensure kmsan_check_memory() reports an error when checking
    344  * Test case: ensure that memset() can initialize a buffer allocated via
    364  /* Test case: ensure that use-after-free reporting works for kmalloc. */
    408  /* Test case: ensure that use-after-free reporting works for a freed page. */
    [all …]
|
/linux/arch/arm/include/asm/
  cacheflush.h
     71  * Ensure coherency between the Icache and the Dcache in the
     79  * Ensure coherency between the Icache and the Dcache in the
     87  * Ensure that the data held in page is written back.
    136  * Their sole purpose is to ensure that data held in the cache
    155  * Their sole purpose is to ensure that data held in the cache
    264  * flush_icache_user_range is used when we want to ensure that the
    271  * Perform necessary cache operations to ensure that data previously
    277  * Perform necessary cache operations to ensure that the TLB will
    328  * data, we need to do a full cache flush to ensure that writebacks
    356  * to always ensure proper cache maintenance to update main memory right
    [all …]
|
  fncpy.h
     19  * the alignment of functions must be preserved when copying. To ensure this,
     23  * function to be copied is defined, and ensure that your allocator for the
     66  * Ensure alignment of source and destination addresses, \
|
/linux/tools/testing/selftests/powerpc/papr_sysparm/
  papr_sysparm.c
     57  // Ensure expected error  in get_bad_parameter()
     61  // Ensure the buffer is unchanged  in get_bad_parameter()
     80  // Ensure expected error  in check_efault_common()
    111  // Ensure expected error  in set_hmc0()
    133  // Ensure expected error  in set_with_ro_fd()
    176  .description = "ensure EPERM on attempt to update HMC0",
|
/linux/fs/nfs/
  io.c
     21  * Declare that a buffered read operation is about to start, and ensure
     24  * and holds a shared lock on inode->i_rwsem to ensure that the flag
     74  * Declare that a buffered read operation is about to start, and ensure
    116  * Declare that a direct I/O operation is about to start, and ensure
    119  * and holds a shared lock on inode->i_rwsem to ensure that the flag
|
/linux/tools/testing/selftests/user_events/
  abi_test.c
    243  /* Ensure kernel clears bit after disable */  in TEST_F()
    249  /* Ensure doesn't change after unreg */  in TEST_F()
    261  /* Ensure it exists after close and disable */  in TEST_F()
    264  /* Ensure we can delete it */  in TEST_F()
    271  /* Ensure it does not exist after invalid flags */  in TEST_F()
    338  /* Ensure bit 1 and 2 are tied together, should not delete yet */  in TEST_F()
    347  /* Ensure COW pages get updated after fork */  in TEST_F()
    373  /* Ensure child doesn't disable parent */  in TEST_F()
|
  user_events_selftests.h
     28  /* Ensure tracefs is installed */  in tracefs_enabled()
     36  /* Ensure mounted tracefs */  in tracefs_enabled()
     78  /* Ensure user_events is installed */  in user_events_enabled()
|
/linux/arch/x86/kernel/cpu/resctrl/
  pseudo_lock.c
     43  * pseudo-locking. This includes testing to ensure pseudo-locked regions
    103  * First we ensure that the kernel memory cannot be found in the cache.
    110  * Local register variables are utilized to ensure that the memory region
    126  * variable to ensure we always use a valid pointer, but the cost  in resctrl_arch_pseudo_lock_fn()
    152  * ensure that we will not be preempted during this critical section.  in resctrl_arch_pseudo_lock_fn()
    373  * Use LFENCE to ensure all previous instructions are retired  in measure_residency_fn()
    380  * Use LFENCE to ensure all previous instructions are retired  in measure_residency_fn()
    396  * Use LFENCE to ensure all previous instructions are retired  in measure_residency_fn()
    403  * Use LFENCE to ensure all previous instructions are retired  in measure_residency_fn()
|
/linux/fs/netfs/
  locking.c
     44  * Declare that a buffered read operation is about to start, and ensure
     47  * and holds a shared lock on inode->i_rwsem to ensure that the flag
     98  * Declare that a buffered read operation is about to start, and ensure
    155  * Declare that a direct I/O operation is about to start, and ensure
    158  * and holds a shared lock on inode->i_rwsem to ensure that the flag
|
/linux/rust/kernel/sync/
  lock.rs
     29  /// - Implementers must ensure that only one thread/CPU may access the protected data once the lock
     31  /// - Implementers must also ensure that [`relock`] uses the same locking method as the original
     63  /// Callers must ensure that [`Backend::init`] has been previously called.
     71  /// Callers must ensure that [`Backend::init`] has been previously called.
     85  /// Callers must ensure that `guard_state` comes from a previous call to [`Backend::lock`] (or
     88  // SAFETY: The safety requirements ensure that the lock is initialised.  in relock()
     96  /// Callers must ensure that [`Backend::init`] has been previously called.
    312  /// The caller must ensure that it owns the lock.  in new()
|
/linux/arch/arm64/kvm/hyp/nvhe/
  tlb.c
     34  * - ensure that the page table updates are visible to all  in enter_vmid_context()
     83  * TLB fill. For guests, we ensure that the S1 MMU is  in enter_vmid_context()
    135  /* Ensure write of the old VMID */  in exit_vmid_context()
    164  * We have to ensure completion of the invalidation at Stage-2,  in __kvm_tlb_flush_vmid_ipa()
    193  * We have to ensure completion of the invalidation at Stage-2,  in __kvm_tlb_flush_vmid_ipa_nsh()
|
/linux/tools/testing/selftests/bpf/progs/
  test_global_map_resize.c
     35  /* at least one extern is included, to ensure that a specific
     57  /* see above; ensure this is not optimized out */  in bss_array_sum()
     75  /* see above; ensure this is not optimized out */  in data_array_sum()
|
/linux/tools/testing/selftests/ftrace/test.d/dynevent/
  add_remove_fprobe_module.tc
     23  :;: "Ensure it is enabled" ;:
     42  :;: "Ensure it is enabled" ;:
     79  :;: "Ensure ftrace is disabled." ;:
|
  enable_disable_tprobe.tc
     16  :;: "Check enable/disable to ensure it works" ;:
     28  :;: "Repeat enable/disable to ensure it works" ;:
|
/linux/rust/kernel/debugfs/
  entry.rs
     42  // SAFETY: The invariants of this function's arguments ensure the safety of this call.  in dynamic_dir()
     64  // SAFETY: The invariants of this function's arguments ensure the safety of this call.  in dynamic_file()
     95  // SAFETY: The invariants of this function's arguments ensure the safety of this call.  in dir()
    115  // SAFETY: The invariants of this function's arguments ensure the safety of this call.  in file()
|
/linux/arch/arm/mm/
  cache-v4.S
     71  * Ensure coherency between the Icache and the Dcache in the
     85  * Ensure coherency between the Icache and the Dcache in the
    100  * Ensure no D cache aliasing occurs, either with itself or
|
  dma.h
     13  * Their sole purpose is to ensure that data held in the cache
     24  * Their sole purpose is to ensure that data held in the cache
|
/linux/tools/testing/selftests/powerpc/papr_vpd/
  papr_vpd.c
     57  /* Ensure EOF */  in dev_papr_vpd_get_handle_all()
    289  /* Ensure EOF */  in papr_vpd_system_loc_code()
    312  .description = "ensure EINVAL on unterminated location code",
    316  .description = "ensure EFAULT on bad handle addr",
    332  .description = "ensure re-read yields same results"
|
/linux/tools/testing/selftests/ftrace/test.d/00basic/
  snapshot.tc
     15  echo "Ensure keep tracing off"
     24  echo "Ensure keep tracing on"
|
/linux/arch/mips/kernel/
  smp-cps.c
    440  /* Ensure the L2 tag writes complete before the state machine starts */  in init_cluster_l2()
    454  /* Ensure the state machine starts before we poll for completion */  in init_cluster_l2()
    485  /* Ensure cluster GCRs are where we expect */  in boot_core()
    510  /* Ensure the core can access the GCRs */  in boot_core()
    517  /* Ensure the core can access the GCRs */  in boot_core()
    532  /* Ensure its coherency is disabled */  in boot_core()
    538  /* Ensure the core can access the GCRs */  in boot_core()
    554  * Ensure that the VP_RUN register is written before the  in boot_core()
    695  * Ensure that our calculation of the VP ID matches up with  in cps_init_secondary()
    751  /* Ensure that the VP_STOP register is written */  in cps_shutdown_this_cpu()
|
/linux/drivers/gpu/drm/vkms/
  vkms_drv.h
     63  * The goal of this structure is to keep enough precision to ensure
    112  * @x_start: X (width) coordinate of the first pixel to copy. The caller must ensure that x_start
    114  * @y_start: Y (height) coordinate of the first pixel to copy. The caller must ensure that y_start
    119  * (included) to @out_pixel[@count] (excluded). The caller must ensure that out_pixel have a
    147  * struct vkms_plane_state must ensure that this pointer is valid
|
/linux/arch/powerpc/lib/
  test_emulate_step_exec_instr.S
     34  * parameter (GPR3) is saved additionally to ensure that the resulting
     44  * Save LR on stack to ensure that the return address is available
     89  * original state, i.e. the pointer to pt_regs, to ensure that the
|
/linux/Documentation/networking/device_drivers/cellular/qualcomm/
  rmnet.rst
     50  ensure 4 byte alignment.
     77  ensure 4 byte alignment.
    101  ensure 4 byte alignment.
    132  ensure 4 byte alignment.
|
/linux/arch/riscv/kernel/
  patch.c
     80  * ensure that it was safe between each cores.  in __patch_insn_set()
    127  * ensure that it was safe between each cores.  in __patch_insn_write()
    131  * does not ensure text_mutex is held by the calling thread. That's  in __patch_insn_write()
    292  * Instead, ensure the lock is held before calling stop_machine(), and  in patch_text()
|