| /linux/drivers/gpu/drm/xe/ |
| H A D | xe_tlb_inval_job.c |
    19  /** struct xe_tlb_inval_job - TLB invalidation job */
    23  /** @tlb_inval: TLB invalidation client */
    27  /** @vm: VM which TLB invalidation is being issued for */
    83  * xe_tlb_inval_job_create() - TLB invalidation job create
    85  * @tlb_inval: TLB invalidation client
    87  * @vm: VM which TLB invalidation is being issued for
    92  * Create a TLB invalidation job and initialize internal fields. The caller is
    95  * Return: TLB invalidation job object on success, ERR_PTR failure
    157  * @job: TLB invalidation job that may trigger reclamation
    201  * xe_tlb_inval_job_alloc_dep() - TLB invalidation job alloc dependency
    [all …]
|
| /linux/Documentation/ABI/testing/ |
| H A D | debugfs-intel-iommu |
    121  This file exports invalidation queue internals of each
    130  Invalidation queue on IOMMU: dmar0
    144  Invalidation queue on IOMMU: dmar1
    168  * 1 - enable sampling IOTLB invalidation latency data
    170  * 2 - enable sampling devTLB invalidation latency data
    172  * 3 - enable sampling intr entry cache invalidation latency data
    185  2) Enable sampling IOTLB invalidation latency data
    207  3) Enable sampling devTLB invalidation latency data
|
| /linux/arch/arm64/include/asm/ |
| H A D | tlbflush.h |
    152  * the level at which the invalidation must take place. If the level is
    153  * wrong, no invalidation may take place. In the case where the level
    155  * a non-hinted invalidation. Any provided level outside the hint range
    156  * will also cause fall-back to non-hinted invalidation.
    158  * For Stage-2 invalidation, use the level values provided to that effect
    312  * TLB Invalidation
    315  * This header file implements the low-level TLB invalidation routines
    318  * Every invalidation operation uses the following template:
    322  * DSB ISH // Ensure the TLB invalidation has completed
    327  * The following functions form part of the "core" TLB invalidation API,
    [all …]
|
| /linux/drivers/gpu/drm/i915/gt/uc/abi/ |
| H A D | guc_actions_abi.h |
    196  * 0: Heavy mode of Invalidation:
    197  * The pipeline of the engine(s) for which the invalidation is targeted to is
    199  * Observed before completing the TLB invalidation
    200  * 1: Lite mode of Invalidation:
    203  * completing TLB invalidation.
    204  * Light Invalidation Mode is to be used only when
    206  * for the in-flight transactions across the TLB invalidation. In other words,
    207  * this mode can be used when the TLB invalidation is intended to clear out the
    208  * stale cached translations that are no longer in use. Light Invalidation Mode
    209  * is much faster than the Heavy Invalidation Mode, as it does not wait for the
|
| /linux/Documentation/driver-api/ |
| H A D | generic_pt.rst |
    111  IOMMU Invalidation Features
    114  Invalidation is how the page table algorithms synchronize with a HW cache of the
    125  single range invalidation for each operation, over-invalidating if there are
    126  gaps of VA that don't need invalidation. This trades off impacted VA for number
    127  of invalidation operations. It does not keep track of what is being invalidated;
|
| /linux/drivers/infiniband/ulp/rtrs/ |
| H A D | README |
    54  The procedure is the default behaviour of the driver. This invalidation and
    165  the user header, flags (specifying if memory invalidation is necessary) and the
    169  attaches an invalidation message if requested and finally an "empty" rdma
    176  or in case client requested invalidation:
    184  the user header, flags (specifying if memory invalidation is necessary) and the
    190  attaches an invalidation message if requested and finally an "empty" rdma
    201  or in case client requested invalidation:
|
| /linux/arch/arm64/kvm/hyp/nvhe/ |
| H A D | tlb.c |
    36  * being either ish or nsh, depending on the invalidation  in enter_vmid_context()
    164  * We have to ensure completion of the invalidation at Stage-2,  in __kvm_tlb_flush_vmid_ipa()
    167  * the Stage-1 invalidation happened first.  in __kvm_tlb_flush_vmid_ipa()
    193  * We have to ensure completion of the invalidation at Stage-2,  in __kvm_tlb_flush_vmid_ipa_nsh()
    196  * the Stage-1 invalidation happened first.  in __kvm_tlb_flush_vmid_ipa_nsh()
|
| /linux/drivers/iommu/intel/ |
| H A D | dmar.c |
    1218  return "Context-cache Invalidation";  in qi_type_string()
    1220  return "IOTLB Invalidation";  in qi_type_string()
    1222  return "Device-TLB Invalidation";  in qi_type_string()
    1224  return "Interrupt Entry Cache Invalidation";  in qi_type_string()
    1226  return "Invalidation Wait";  in qi_type_string()
    1228  return "PASID-based IOTLB Invalidation";  in qi_type_string()
    1230  return "PASID-cache Invalidation";  in qi_type_string()
    1232  return "PASID-based Device-TLB Invalidation";  in qi_type_string()
    1247  pr_err("VT-d detected Invalidation Queue Error: Reason %llx",  in qi_dump_fault()
    1250  pr_err("VT-d detected Invalidation Time-out Error: SID %llx",  in qi_dump_fault()
    [all …]
|
| /linux/arch/arm64/kvm/hyp/vhe/ |
| H A D | tlb.c |
    110  * We have to ensure completion of the invalidation at Stage-2,  in __kvm_tlb_flush_vmid_ipa()
    113  * the Stage-1 invalidation happened first.  in __kvm_tlb_flush_vmid_ipa()
    141  * We have to ensure completion of the invalidation at Stage-2,  in __kvm_tlb_flush_vmid_ipa_nsh()
    144  * the Stage-1 invalidation happened first.  in __kvm_tlb_flush_vmid_ipa_nsh()
    222  * TLB invalidation emulation for NV. For any given instruction, we
|
| /linux/arch/powerpc/include/asm/ |
| H A D | pnv-ocxl.h |
    19  /* Radix Invalidation Control
    28  /* Invalidation Criteria
    35  /* Invalidation Flag */
|
| /linux/Documentation/filesystems/caching/ |
| H A D | netfs-api.rst |
    36  (8) Data file invalidation
    39  (11) Page release and invalidation
    285  The read operation will fail with ESTALE if invalidation occurred whilst the
    302  Data File Invalidation
    319  This increases the invalidation counter in the cookie to cause outstanding
    324  Invalidation runs asynchronously in a worker thread so that it doesn't block
    427  Page Release and Invalidation
    442  Page release and page invalidation should also wait for any mark left on the
|
| /linux/Documentation/gpu/ |
| H A D | drm-vm-bind-locking.rst |
    87  notifier invalidation. This is not a real seqlock but described in
    95  invalidation notifiers.
    103  invalidation. The userptr notifier lock is per gpu_vm.
    406  <Invalidation example>` below). Note that when the core mm decides to
    435  // invalidation notifier running anymore.
    449  // of the MMU invalidation notifier. Hence the
    476  The userptr gpu_vma MMU invalidation notifier might be called from
    495  // invalidation callbacks, the mmu notifier core will flip
    504  When this invalidation notifier returns, the GPU can no longer be
    564  invalidation notifier where zapping happens. Hence, if the
|
| /linux/drivers/misc/sgi-gru/ |
| H A D | grutlbpurge.c |
    32  /* ---------------------------------- TLB Invalidation functions --------
    86  * General purpose TLB invalidation function. This function scans every GRU in
    115  * To help improve the efficiency of TLB invalidation, the GMS data
    120  * provide the callbacks for TLB invalidation. The GMS contains:
    137  * zero to force a full TLB invalidation. This is fast but will
|
| /linux/include/linux/ |
| H A D | memregion.h |
    43  * contents while performing the invalidation. It is only exported for
    61  WARN_ON_ONCE("CPU cache invalidation required");  in cpu_cache_invalidate_memregion()
|
| H A D | mmu_notifier.h |
    43  * a device driver to possibly ignore the invalidation if the
    134  * Invalidation of multiple concurrent ranges may be
    195  * TLB invalidation.
    353  * mmu_interval_set_seq - Save the invalidation sequence
    383  * Returns: true if an invalidation collided with this critical section, and
    406  * Returns: true indicates an invalidation has collided with this critical
|
| H A D | io-pgtable.h |
    30  * @tlb_add_page: Optional callback to queue up leaf TLB invalidation for a
    31  * single page. IOMMUs that cannot batch TLB invalidation
    34  * and defer the invalidation until iommu_iotlb_sync() instead.
|
| /linux/arch/powerpc/kernel/ |
| H A D | l2cr_6xx.S |
    60  - L2I set to perform a global invalidation
    111  /* Before we perform the global invalidation, we must disable dynamic
    207  /* Perform a global invalidation */
    223  /* Wait for the invalidation to complete */
    342  /* Perform a global invalidation */
|
| /linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n1/ |
| H A D | l2_cache.json |
    12  …he L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line …
    44  …"PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by ca…
|
| /linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-v1/ |
| H A D | l2_cache.json |
    12  …he L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line …
    44  …"PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by ca…
|
| /linux/drivers/cxl/ |
| H A D | Kconfig |
    222  to invalidate caches when those events occur. If that invalidation
    224  invalidation failure are due to the CPU not providing a cache
    225  invalidation mechanism. For example usage of wbinvd is restricted to
|
| /linux/lib/ |
| H A D | cache_maint.c |
    6  * iterate over each registered instance to first kick off invalidation and
    130  * Machines that do not support invalidation, e.g. VMs, will not have any
|
| /linux/drivers/gpu/drm/amd/amdgpu/ |
| H A D | amdgpu_hmm.c |
    42  * page table invalidation are completed and we once more see a coherent process
    60  * @range: details on the invalidation
    97  * @range: details on the invalidation
|
| /linux/virt/kvm/ |
| H A D | pfncache.c |
    6  * memory with suitable invalidation mechanisms.
    41  * be modified, and invalidation would no longer be  in gfn_to_pfn_cache_invalidate_start()
    42  * necessary. Hence check again whether invalidation  in gfn_to_pfn_cache_invalidate_start()
    132  * be used here because the invalidation of caches in the  in mmu_notifier_retry_cache()
|
| /linux/drivers/media/pci/intel/ipu6/ |
| H A D | ipu6.h |
    138  * MMU Invalidation HW bug workaround by ZLW mechanism
    140  * Old IPU6 MMUV2 has a bug in the invalidation mechanism which might result in
    142  * corruption. So we cannot directly use the MMU V2 invalidation registers
|
| /linux/arch/arm64/kvm/ |
| H A D | vmid.c |
    65  * flush_pending and issuing a local context invalidation on  in flush_context()
    67  * invalidation over the inner shareable domain on rollover.  in flush_context()
|