
Searched full:invalidation (Results 1 – 25 of 312) sorted by relevance


/linux/Documentation/ABI/testing/
debugfs-intel-iommu
121 This file exports invalidation queue internals of each
130 Invalidation queue on IOMMU: dmar0
144 Invalidation queue on IOMMU: dmar1
168 * 1 - enable sampling IOTLB invalidation latency data
170 * 2 - enable sampling devTLB invalidation latency data
172 * 3 - enable sampling intr entry cache invalidation latency data
185 2) Enable sampling IOTLB invalidation latency data
207 3) Enable sampling devTLB invalidation latency data
/linux/drivers/gpu/drm/xe/abi/
guc_actions_abi.h
228 /* Flush PPC or SMRO caches along with TLB invalidation request */
239 * 0: Heavy mode of Invalidation:
240 * The pipeline of the engine(s) for which the invalidation is targeted to is
242 * Observed before completing the TLB invalidation
243 * 1: Lite mode of Invalidation:
246 * completing TLB invalidation.
247 * Light Invalidation Mode is to be used only when
249 * for the in-flight transactions across the TLB invalidation. In other words,
250 * this mode can be used when the TLB invalidation is intended to clear out the
251 * stale cached translations that are no longer in use. Light Invalidation Mod
[all …]
/linux/drivers/gpu/drm/i915/gt/uc/abi/
guc_actions_abi.h
196 * 0: Heavy mode of Invalidation:
197 * The pipeline of the engine(s) for which the invalidation is targeted to is
199 * Observed before completing the TLB invalidation
200 * 1: Lite mode of Invalidation:
203 * completing TLB invalidation.
204 * Light Invalidation Mode is to be used only when
206 * for the in-flight transactions across the TLB invalidation. In other words,
207 * this mode can be used when the TLB invalidation is intended to clear out the
208 * stale cached translations that are no longer in use. Light Invalidation Mode
209 * is much faster than the Heavy Invalidation Mode, as it does not wait for the
/linux/Documentation/driver-api/
generic_pt.rst
111 IOMMU Invalidation Features
114 Invalidation is how the page table algorithms synchronize with a HW cache of the
125 single range invalidation for each operation, over-invalidating if there are
126 gaps of VA that don't need invalidation. This trades off impacted VA for number
127 of invalidation operations. It does not keep track of what is being invalidated;
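The over-invalidation trade-off the generic_pt.rst snippet describes (one covering range per operation, even when the touched VAs have gaps) can be sketched as below. The `inv_range` type and `inv_range_accumulate()` helper are hypothetical illustration, not the generic_pt API.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch only: rather than issuing one invalidation per
 * touched sub-range, fold every sub-range into a single covering range
 * and flush once, accepting that untouched VA gaps inside the covering
 * range are invalidated unnecessarily.
 */
struct inv_range {
	uint64_t start;
	uint64_t end;	/* exclusive */
};

/* Grow *acc so it also covers r; one final flush then covers everything. */
static void inv_range_accumulate(struct inv_range *acc, struct inv_range r)
{
	if (acc->start == acc->end) {	/* empty accumulator */
		*acc = r;
		return;
	}
	if (r.start < acc->start)
		acc->start = r.start;
	if (r.end > acc->end)
		acc->end = r.end;
}
```

This trades impacted VA for number of invalidation operations, exactly as the documentation snippet says: two unmaps at 0x1000-0x2000 and 0x5000-0x6000 collapse into one flush of 0x1000-0x6000.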
/linux/drivers/iommu/intel/
pasid.c
329 * VT-d spec 5.0 table28 states guides for cache invalidation: in intel_pasid_flush_present()
331 * - PASID-selective-within-Domain PASID-cache invalidation in intel_pasid_flush_present()
332 * - PASID-selective PASID-based IOTLB invalidation in intel_pasid_flush_present()
334 * - Global Device-TLB invalidation to affected functions in intel_pasid_flush_present()
336 * - PASID-based Device-TLB invalidation (with S=1 and in intel_pasid_flush_present()
619 * - PASID-selective-within-Domain PASID-cache invalidation in intel_pasid_setup_dirty_tracking()
621 * - Domain-selective IOTLB invalidation in intel_pasid_setup_dirty_tracking()
623 * - PASID-selective PASID-based IOTLB invalidation in intel_pasid_setup_dirty_tracking()
625 * - Global Device-TLB invalidation to affected functions in intel_pasid_setup_dirty_tracking()
627 * - PASID-based Device-TLB invalidation (with S=1 and in intel_pasid_setup_dirty_tracking()
[all …]
dmar.c
1218 return "Context-cache Invalidation"; in qi_type_string()
1220 return "IOTLB Invalidation"; in qi_type_string()
1222 return "Device-TLB Invalidation"; in qi_type_string()
1224 return "Interrupt Entry Cache Invalidation"; in qi_type_string()
1226 return "Invalidation Wait"; in qi_type_string()
1228 return "PASID-based IOTLB Invalidation"; in qi_type_string()
1230 return "PASID-cache Invalidation"; in qi_type_string()
1232 return "PASID-based Device-TLB Invalidation"; in qi_type_string()
1247 pr_err("VT-d detected Invalidation Queue Error: Reason %llx", in qi_dump_fault()
1250 pr_err("VT-d detected Invalidation Time-out Error: SID %llx", in qi_dump_fault()
[all …]
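The qi_type_string() snippet above maps Queued Invalidation descriptor types to names. A self-contained sketch of that kind of decoder is below; the type codes follow the VT-d invalidation descriptor encodings (context-cache = 0x1, IOTLB = 0x2, and so on), and both the codes and the `qi_type_name()` helper should be read as illustrative rather than as the kernel's own definitions.

```c
#include <string.h>

/*
 * Sketch of a QI descriptor-type decoder in the spirit of the
 * qi_type_string() search hit above.  Type codes are the VT-d
 * invalidation descriptor encodings; treat them as illustrative.
 */
static const char *qi_type_name(unsigned int type)
{
	switch (type) {
	case 0x1: return "Context-cache Invalidation";
	case 0x2: return "IOTLB Invalidation";
	case 0x3: return "Device-TLB Invalidation";
	case 0x4: return "Interrupt Entry Cache Invalidation";
	case 0x5: return "Invalidation Wait";
	case 0x6: return "PASID-based IOTLB Invalidation";
	case 0x7: return "PASID-cache Invalidation";
	case 0x8: return "PASID-based Device-TLB Invalidation";
	default:  return "Unknown";
	}
}
```

A decoder like this is what lets qi_dump_fault() print a readable reason when the hardware reports an Invalidation Queue Error.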
cache.c
3 * cache.c - Intel VT-d cache invalidation
324 * invalidation requests while address remapping hardware is disabled. in qi_batch_add_dev_iotlb()
338 * npages == -1 means a PASID-selective invalidation, otherwise, in qi_batch_add_piotlb()
339 * a positive value for Page-selective-within-PASID invalidation. in qi_batch_add_piotlb()
355 * Device-TLB invalidation requests while address remapping hardware in qi_batch_add_pasid_dev_iotlb()
493 * stage mapping requires explicit invalidation of the caches.
496 * flushing, if cache invalidation is not required.
iommu.h
85 #define DMAR_IQH_REG 0x80 /* Invalidation queue head register */
86 #define DMAR_IQT_REG 0x88 /* Invalidation queue tail register */
87 #define DMAR_IQ_SHIFT 4 /* Invalidation queue head/tail shift */
88 #define DMAR_IQA_REG 0x90 /* Invalidation queue addr register */
89 #define DMAR_ICS_REG 0x9c /* Invalidation complete status register */
90 #define DMAR_IQER_REG 0xb0 /* Invalidation queue error record register */
338 #define DMA_FSTS_IQE (1 << 4) /* Invalidation Queue Error */
339 #define DMA_FSTS_ICE (1 << 5) /* Invalidation Completion Error */
340 #define DMA_FSTS_ITE (1 << 6) /* Invalidation Time-out Error */
430 /* PASID cache invalidation granu */
[all …]
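The DMAR_IQ_SHIFT define in the iommu.h snippet reflects that the hardware head/tail registers hold a byte offset while software tracks a descriptor index: with 16-byte invalidation descriptors, the index is shifted left by 4 when programmed into DMAR_IQT_REG. A minimal sketch of that arithmetic, with the constant mirrored from the snippet and the `qi_tail_reg_value()` helper purely illustrative:

```c
#include <stdint.h>

#define DMAR_IQ_SHIFT	4	/* Invalidation queue head/tail shift */

/*
 * Illustrative only: convert a software descriptor index into the byte
 * offset the hardware expects in the invalidation queue tail register.
 * (16-byte descriptors, hence the shift by 4.)
 */
static uint64_t qi_tail_reg_value(unsigned int desc_index)
{
	return (uint64_t)desc_index << DMAR_IQ_SHIFT;
}
```

So descriptor index 3 corresponds to byte offset 0x30 in the queue.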
/linux/drivers/infiniband/ulp/rtrs/
README
54 The procedure is the default behaviour of the driver. This invalidation and
165 the user header, flags (specifying if memory invalidation is necessary) and the
169 attaches an invalidation message if requested and finally an "empty" rdma
176 or in case client requested invalidation:
184 the user header, flags (specifying if memory invalidation is necessary) and the
190 attaches an invalidation message if requested and finally an "empty" rdma
201 or in case client requested invalidation:
/linux/arch/arm64/kvm/hyp/nvhe/
tlb.c
36 * being either ish or nsh, depending on the invalidation in enter_vmid_context()
165 * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa()
168 * the Stage-1 invalidation happened first. in __kvm_tlb_flush_vmid_ipa()
195 * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa_nsh()
198 * the Stage-1 invalidation happened first. in __kvm_tlb_flush_vmid_ipa_nsh()
/linux/include/uapi/linux/
iommufd.h
803 * enum iommu_hwpt_invalidate_data_type - IOMMU HWPT Cache Invalidation
805 * @IOMMU_HWPT_INVALIDATE_DATA_VTD_S1: Invalidation data for VTD_S1
806 * @IOMMU_VIOMMU_INVALIDATE_DATA_ARM_SMMUV3: Invalidation data for ARM SMMUv3
815 * stage-1 cache invalidation
816 * @IOMMU_VTD_INV_FLAGS_LEAF: Indicates whether the invalidation applies
825 * struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
833 * The Intel VT-d specific invalidation data for user-managed stage-1 cache
834 * invalidation in nested translation. Userspace uses this structure to
850 * struct iommu_viommu_arm_smmuv3_invalidate - ARM SMMUv3 cache invalidation
852 * @cmd: 128-bit cache invalidation command that runs in SMMU CMDQ.
[all …]
/linux/arch/arm64/kvm/hyp/vhe/
tlb.c
111 * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa()
114 * the Stage-1 invalidation happened first. in __kvm_tlb_flush_vmid_ipa()
143 * We have to ensure completion of the invalidation at Stage-2, in __kvm_tlb_flush_vmid_ipa_nsh()
146 * the Stage-1 invalidation happened first. in __kvm_tlb_flush_vmid_ipa_nsh()
224 * TLB invalidation emulation for NV. For any given instruction, we
/linux/arch/powerpc/include/asm/
pnv-ocxl.h
19 /* Radix Invalidation Control
28 /* Invalidation Criteria
35 /* Invalidation Flag */
/linux/Documentation/filesystems/caching/
netfs-api.rst
36 (8) Data file invalidation
39 (11) Page release and invalidation
285 The read operation will fail with ESTALE if invalidation occurred whilst the
302 Data File Invalidation
319 This increases the invalidation counter in the cookie to cause outstanding
324 Invalidation runs asynchronously in a worker thread so that it doesn't block
427 Page Release and Invalidation
442 Page release and page invalidation should also wait for any mark left on the
/linux/Documentation/gpu/
drm-vm-bind-locking.rst
87 notifier invalidation. This is not a real seqlock but described in
95 invalidation notifiers.
103 invalidation. The userptr notifier lock is per gpu_vm.
406 <Invalidation example>` below). Note that when the core mm decides to
435 // invalidation notifier running anymore.
449 // of the MMU invalidation notifier. Hence the
476 The userptr gpu_vma MMU invalidation notifier might be called from
495 // invalidation callbacks, the mmu notifier core will flip
504 When this invalidation notifier returns, the GPU can no longer be
564 invalidation notifier where zapping happens. Hence, if the
/linux/arch/arm64/include/asm/
kvm_pgtable.h
300 * TLB invalidation.
467 * to freeing and therefore no TLB invalidation is performed.
503 * TLB invalidation is performed for each page-table entry cleared during the
565 * to freeing and therefore no TLB invalidation is performed.
576 * to freeing and therefore no TLB invalidation is performed.
596 * freeing and therefore no TLB invalidation is performed.
613 * invalidation or CMOs are performed.
688 * TLB invalidation is performed for each page-table entry cleared during the
700 * without TLB invalidation.
765 * TLB invalidation is performed after updating the entry. Software bits cannot
/linux/drivers/misc/sgi-gru/
grutlbpurge.c
32 /* ---------------------------------- TLB Invalidation functions --------
86 * General purpose TLB invalidation function. This function scans every GRU in
115 * To help improve the efficiency of TLB invalidation, the GMS data
120 * provide the callbacks for TLB invalidation. The GMS contains:
137 * zero to force a full TLB invalidation. This is fast but will
/linux/include/linux/
memregion.h
43 * contents while performing the invalidation. It is only exported for
61 WARN_ON_ONCE("CPU cache invalidation required"); in cpu_cache_invalidate_memregion()
/linux/drivers/gpu/drm/amd/amdgpu/
gmc_v12_0.c
245 * off cycle, add semaphore acquire before invalidation and semaphore in gmc_v12_0_flush_vm_hub()
246 * release after invalidation to avoid entering power gated state in gmc_v12_0_flush_vm_hub()
282 * add semaphore release after invalidation, in gmc_v12_0_flush_vm_hub()
288 /* Issue additional private vm invalidation to MMHUB */ in gmc_v12_0_flush_vm_hub()
295 /* Issue private invalidation */ in gmc_v12_0_flush_vm_hub()
297 /* Read back to ensure invalidation is done*/ in gmc_v12_0_flush_vm_hub()
355 * @inst: is used to select which instance of KIQ to use for the invalidation
412 * off cycle, add semaphore acquire before invalidation and semaphore in gmc_v12_0_emit_flush_gpu_tlb()
413 * release after invalidation to avoid entering power gated state in gmc_v12_0_emit_flush_gpu_tlb()
441 * add semaphore release after invalidation, in gmc_v12_0_emit_flush_gpu_tlb()
gmc_v11_0.c
276 * off cycle, add semaphore acquire before invalidation and semaphore in gmc_v11_0_flush_gpu_tlb()
277 * release after invalidation to avoid entering power gated state in gmc_v11_0_flush_gpu_tlb()
311 /* Issue additional private vm invalidation to MMHUB */ in gmc_v11_0_flush_gpu_tlb()
318 /* Issue private invalidation */ in gmc_v11_0_flush_gpu_tlb()
320 /* Read back to ensure invalidation is done*/ in gmc_v11_0_flush_gpu_tlb()
337 * @inst: is used to select which instance of KIQ to use for the invalidation
378 * off cycle, add semaphore acquire before invalidation and semaphore in gmc_v11_0_emit_flush_gpu_tlb()
379 * release after invalidation to avoid entering power gated state in gmc_v11_0_emit_flush_gpu_tlb()
407 * add semaphore release after invalidation, in gmc_v11_0_emit_flush_gpu_tlb()
/linux/arch/powerpc/kernel/
l2cr_6xx.S
60 - L2I set to perform a global invalidation
111 /* Before we perform the global invalidation, we must disable dynamic
207 /* Perform a global invalidation */
223 /* Wait for the invalidation to complete */
342 /* Perform a global invalidation */
/linux/include/vdso/
helpers.h
54 /* Ensure the sequence invalidation is visible before data is modified */ in vdso_write_begin_clock()
71 /* Ensure the sequence invalidation is visible before data is modified */ in vdso_write_begin()
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n1/
l2_cache.json
12 …he L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line …
44 …"PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by ca…
/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-v1/
l2_cache.json
12 …he L2 (from other CPUs) which return data even if the snoops cause an invalidation. L2 cache line …
44 …"PublicDescription": "Counts each explicit invalidation of a cache line in the level 2 cache by ca…
/linux/drivers/cxl/
Kconfig
221 to invalidate caches when those events occur. If that invalidation
223 invalidation failure are due to the CPU not providing a cache
224 invalidation mechanism. For example usage of wbinvd is restricted to
