/linux/arch/arm64/mm/
  cache.S
    20: * Ensure that the I and D caches are coherent within specified region.
    48: * Ensure that the I and D caches are coherent within specified region.
    64: * Ensure that the I and D caches are coherent within specified region.
    87: * Ensure that the I cache is invalid within specified region.
    105: * Ensure that any D-cache lines for the interval [start, end)
    120: * Ensure that any D-cache lines for the interval [start, end)
    138: * Ensure that any D-cache lines for the interval [start, end)
    169: * Ensure that any D-cache lines for the interval [start, end)
    184: * Ensure that any D-cache lines for the interval [start, end)
/linux/arch/arm/include/asm/
  cacheflush.h
    71: * Ensure coherency between the Icache and the Dcache in the
    79: * Ensure coherency between the Icache and the Dcache in the
    87: * Ensure that the data held in page is written back.
    136: * Their sole purpose is to ensure that data held in the cache
    155: * Their sole purpose is to ensure that data held in the cache
    264: * flush_icache_user_range is used when we want to ensure that the
    271: * Perform necessary cache operations to ensure that data previously
    277: * Perform necessary cache operations to ensure that the TLB will
    328: * data, we need to do a full cache flush to ensure that writebacks
    356: * to always ensure proper cache maintenance to update main memory right
    [all …]
  fncpy.h
    19: * the alignment of functions must be preserved when copying. To ensure this,
    23: * function to be copied is defined, and ensure that your allocator for the
    66: * Ensure alignment of source and destination addresses, \
/linux/tools/testing/selftests/powerpc/papr_sysparm/
  papr_sysparm.c
    57: // Ensure expected error in get_bad_parameter()
    61: // Ensure the buffer is unchanged in get_bad_parameter()
    80: // Ensure expected error in check_efault_common()
    111: // Ensure expected error in set_hmc0()
    133: // Ensure expected error in set_with_ro_fd()
    176: .description = "ensure EPERM on attempt to update HMC0",
/linux/drivers/crypto/intel/keembay/
  ocs-aes.c
    358: /* Ensure DMA error interrupts are enabled */ in aes_irq_enable()
    379: /* Ensure AES interrupts are disabled */ in aes_irq_enable()
    564: /* Ensure interrupts are disabled and pending interrupts cleared. */ in ocs_aes_init()
    608: /* Ensure cipher, mode and instruction are valid. */ in ocs_aes_validate_inputs()
    642: /* Ensure input length is multiple of block size */ in ocs_aes_validate_inputs()
    646: /* Ensure source and destination linked lists are created */ in ocs_aes_validate_inputs()
    654: /* Ensure input length is multiple of block size */ in ocs_aes_validate_inputs()
    658: /* Ensure source and destination linked lists are created */ in ocs_aes_validate_inputs()
    663: /* Ensure IV is present and block size in length */ in ocs_aes_validate_inputs()
    670: /* Ensure input length of 1 byte or greater */ in ocs_aes_validate_inputs()
    [all …]
/linux/include/linux/
  balloon_compaction.h
    18: * ensure following these simple rules:
    31: * the aforementioned balloon page corner case, as well as to ensure the simple
    88: * Caller must ensure the page is locked and the spin_lock protecting balloon
    105: * Caller must ensure the page is locked and the spin_lock protecting balloon
    162: * Caller must ensure the page is private and protect the list.
    174: * Caller must ensure the page is private and protect the list.
  prime_numbers.h
    17: * @max should be less than ULONG_MAX to ensure termination. To begin with
    32: * @max should be less than ULONG_MAX, and @from less than @max, to ensure
/linux/drivers/iio/accel/
  mma9551_core.c
    211: * Locking is not handled inside the function. Callers should ensure they
    236: * Locking is not handled inside the function. Callers should ensure they
    261: * Locking is not handled inside the function. Callers should ensure they
    286: * Locking is not handled inside the function. Callers should ensure they
    320: * Locking is not handled inside the function. Callers should ensure they
    347: * Locking is not handled inside the function. Callers should ensure they
    380: * Locking is not handled inside the function. Callers should ensure they
    419: * Locking is not handled inside the function. Callers should ensure they
    458: * Locking is not handled inside the function. Callers should ensure they
    493: * Locking is not handled inside the function. Callers should ensure they
    [all …]
/linux/fs/ceph/
  io.c
    38: * Declare that a buffered read operation is about to start, and ensure
    41: * and holds a shared lock on inode->i_rwsem to ensure that the flag
    83: * Declare that a buffered write operation is about to start, and ensure
    124: * Declare that a direct I/O operation is about to start, and ensure
    127: * and holds a shared lock on inode->i_rwsem to ensure that the flag
/linux/fs/nfs/
  io.c
    30: * Declare that a buffered read operation is about to start, and ensure
    33: * and holds a shared lock on inode->i_rwsem to ensure that the flag
    83: * Declare that a buffered read operation is about to start, and ensure
    123: * Declare that a direct I/O operation is about to start, and ensure
    126: * and holds a shared lock on inode->i_rwsem to ensure that the flag
/linux/tools/testing/selftests/user_events/
  abi_test.c
    243: /* Ensure kernel clears bit after disable */ in TEST_F()
    249: /* Ensure doesn't change after unreg */ in TEST_F()
    261: /* Ensure it exists after close and disable */ in TEST_F()
    264: /* Ensure we can delete it */ in TEST_F()
    271: /* Ensure it does not exist after invalid flags */ in TEST_F()
    338: /* Ensure bit 1 and 2 are tied together, should not delete yet */ in TEST_F()
    347: /* Ensure COW pages get updated after fork */ in TEST_F()
    373: /* Ensure child doesn't disable parent */ in TEST_F()
  user_events_selftests.h
    28: /* Ensure tracefs is installed */ in tracefs_enabled()
    36: /* Ensure mounted tracefs */ in tracefs_enabled()
    78: /* Ensure user_events is installed */ in user_events_enabled()
/linux/fs/netfs/
  locking.c
    44: * Declare that a buffered read operation is about to start, and ensure
    47: * and holds a shared lock on inode->i_rwsem to ensure that the flag
    98: * Declare that a buffered read operation is about to start, and ensure
    155: * Declare that a direct I/O operation is about to start, and ensure
    158: * and holds a shared lock on inode->i_rwsem to ensure that the flag
/linux/arch/arm64/kvm/hyp/nvhe/
  tlb.c
    34: * - ensure that the page table updates are visible to all in enter_vmid_context()
    83: * TLB fill. For guests, we ensure that the S1 MMU is in enter_vmid_context()
    135: /* Ensure write of the old VMID */ in exit_vmid_context()
    165: * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa()
    195: * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa_nsh()
/linux/tools/testing/selftests/ftrace/test.d/00basic/
  snapshot.tc
    15: echo "Ensure keep tracing off"
    24: echo "Ensure keep tracing on"
/linux/arch/arm/mm/
  cache-v4.S
    71: * Ensure coherency between the Icache and the Dcache in the
    85: * Ensure coherency between the Icache and the Dcache in the
    100: * Ensure no D cache aliasing occurs, either with itself or
  dma.h
    13: * Their sole purpose is to ensure that data held in the cache
    24: * Their sole purpose is to ensure that data held in the cache
/linux/arch/riscv/kernel/
  unaligned_access_speed.c
    68: /* Ensure the CSR read can't reorder WRT to the copy. */ in check_unaligned_access()
    71: /* Ensure the copy ends before the end time is snapped. */ in check_unaligned_access()
    320: /* Ensure the CSR read can't reorder WRT to the copy. */ in check_vector_unaligned_access()
    323: /* Ensure the copy ends before the end time is snapped. */ in check_vector_unaligned_access()
    338: /* Ensure the CSR read can't reorder WRT to the copy. */ in check_vector_unaligned_access()
    341: /* Ensure the copy ends before the end time is snapped. */ in check_vector_unaligned_access()
/linux/tools/testing/selftests/powerpc/papr_vpd/
  papr_vpd.c
    57: /* Ensure EOF */ in dev_papr_vpd_get_handle_all()
    289: /* Ensure EOF */ in papr_vpd_system_loc_code()
    312: .description = "ensure EINVAL on unterminated location code",
    316: .description = "ensure EFAULT on bad handle addr",
    332: .description = "ensure re-read yields same results"
/linux/Documentation/networking/device_drivers/cellular/qualcomm/
  rmnet.rst
    49: ensure 4 byte alignment.
    75: ensure 4 byte alignment.
    99: ensure 4 byte alignment.
    129: ensure 4 byte alignment.
/linux/arch/x86/kernel/cpu/
  tsx.c
    33: * Ensure TSX support is not enumerated in CPUID. in tsx_disable()
    34: * This is visible to userspace and will ensure they in tsx_disable()
    53: * Ensure TSX support is enumerated in CPUID. in tsx_enable()
    54: * This is visible to userspace and will ensure they in tsx_enable()
/linux/arch/powerpc/lib/
  test_emulate_step_exec_instr.S
    34: * parameter (GPR3) is saved additionally to ensure that the resulting
    44: * Save LR on stack to ensure that the return address is available
    89: * original state, i.e. the pointer to pt_regs, to ensure that the
/linux/arch/arm/kernel/
  reboot.c
    47: /* Push out any further dirty data, and ensure cache is empty */ in __soft_restart()
    128: * provide a HW restart implementation, to ensure that all CPUs reset at once.
    130: * doesn't have to co-ordinate with other CPUs to ensure they aren't still
/linux/Documentation/scheduler/
  sched-util-clamp.rst
    31: These two bounds will ensure a task will operate within this performance range
    44: performance point required by its display pipeline to ensure no frame is
    50: dynamic feedback loop offers a great flexibility to ensure best user experience
    63: background tasks to stay on the little cores which will ensure that:
    79: ensure system resources are used optimally to deliver the best possible user
    90: UCLAMP_MIN=1024 will ensure such tasks will always see the highest performance
    97: User space can form a feedback loop with the thermal subsystem too to ensure
    357: When task @p is running, **the scheduler should try its best to ensure it
    518: to ensure they are performance and power aware. Ideally this knob should be set
    553: to ensure on next wake up it runs at a higher performance point. It should try
    [all …]
/linux/Documentation/livepatch/
  reliable-stacktrace.rst
    33: Principally, the reliable stacktrace function must ensure that either:
    55: To ensure that kernel code can be correctly unwound in all cases,
    87: To ensure that this does not result in functions being omitted from the trace,
    115: To ensure that such cases do not result in functions being omitted from a
    249: Architectures must either ensure that unwinders either reliably unwind