.. SPDX-License-Identifier: GPL-2.0+

=======
IOMMUFD
=======

Overview
========

IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
IO page tables from userspace using file descriptors. It intends to be general
and consumable by any driver that wants to expose DMA to userspace. These
drivers are eventually expected to deprecate any internal IOMMU logic
they may already/historically implement (e.g. vfio_iommu_type1.c).

At minimum iommufd provides universal support of managing I/O address spaces and
I/O page tables for all IOMMUs, with room in the design to add non-generic
features to cater to specific hardware functionality.

In this context the capital letter (IOMMUFD) refers to the subsystem while the
small letter (iommufd) refers to the file descriptors created via /dev/iommu for
use by userspace.
Key Concepts
============

User Visible Objects
--------------------

Following IOMMUFD objects are exposed to userspace:
- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS), allowing map/unmap
  of user space memory into ranges of I/O Virtual Address (IOVA).

  The IOAS is a functional replacement for the VFIO container, and like the VFIO
  container it copies an IOVA map to a list of iommu_domains held within it.

- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
  external driver.

- IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING"
  primarily indicates that this type of HWPT should be linked to an IOAS and that
  it is backed by an iommu_domain with the __IOMMU_DOMAIN_PAGING
  feature flag. This can be either an UNMANAGED stage-1 domain for a device
  running in the user space, or a nesting parent stage-2 domain for mappings
  from guest-level physical addresses to host-level physical addresses.
- IOMMUFD_OBJ_HWPT_NESTED, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by user space. "NESTED" indicates
  that this type of HWPT should be linked to an HWPT_PAGING and that it is
  backed by an iommu_domain that has a type of
  IOMMU_DOMAIN_NESTED. This must be a stage-1 domain for a device running in
  the user space (e.g. in a guest VM enabling the IOMMU nested translation
  feature.) As such, it must be created with a given nesting parent stage-2
  domain to associate to. This nested stage-1 page table managed by the user
  space usually has mappings from guest-level I/O virtual addresses to guest-
  level physical addresses.

- IOMMUFD_OBJ_FAULT, representing a software queue for an HWPT reporting IO page
  faults using the IOMMU HW's PRI (Page Request Interface). This queue object
  provides user space an FD to poll and respond to the reported page faults. A
  FAULT object must be created first; the resulting fault_id can then be used to
  allocate a fault-enabled HWPT via the IOMMU_HWPT_ALLOC command by setting the
  IOMMU_HWPT_FAULT_ID_VALID bit in its flags field (see the sketch after this
  list).
- IOMMUFD_OBJ_VIOMMU, representing a slice of the physical IOMMU instance,
  passed to or shared with a VM. It may contain some HW-accelerated
  virtualization features and some SW resources used by the VM, for example:

  * Security namespace for guest owned ID, e.g. guest-controlled cache tags
  * Non-device-affiliated event reporting, e.g. invalidation queue errors
  * Access to a shareable nesting parent pagetable across physical IOMMUs

  Such a vIOMMU object generally has access to a nesting parent pagetable
  to support some HW-accelerated virtualization features. So, a vIOMMU object
  must be created given a nesting parent HWPT_PAGING object, and it then
  encapsulates that HWPT_PAGING object.

  The name "vIOMMU" isn't necessarily identical to a virtualized IOMMU in a
  VM. A VM can have one giant virtualized IOMMU running on a machine having
  multiple physical IOMMUs, in which case the VMM will dispatch the requests
  or configurations from this single virtualized IOMMU instance to multiple
  vIOMMU objects created for individual slices of different physical IOMMUs.
  In other words, a vIOMMU object is always a representation of one physical
  IOMMU, not necessarily of a virtualized IOMMU. For VMMs that want the full
  virtualization features from physical IOMMUs, it is suggested to build the
  same number of virtualized IOMMUs as the number of physical IOMMUs, so the
  passed-through devices would be connected to their own virtualized IOMMUs
  backed by corresponding vIOMMU objects, in which case a guest OS would do
  the "dispatch" naturally instead of the VMM trapping it.
- IOMMUFD_OBJ_VDEVICE, representing a virtual device for an IOMMUFD_OBJ_DEVICE
  against an IOMMUFD_OBJ_VIOMMU. This virtual device holds the device's virtual
  information or attributes (related to the vIOMMU) in a VM. An immediate example
  is the virtual ID of the device on the vIOMMU, which is a unique ID that the
  VMM assigns to the device for a translation channel/port of the vIOMMU,
  e.g. vSID of ARM SMMUv3, vDeviceID of AMD IOMMU, and vRID of Intel VT-d to a
  Context Table.

- IOMMUFD_OBJ_VEVENTQ, representing a software queue for a vIOMMU to report its
  events such as translation faults that occurred on a nested stage-1 (excluding
  I/O page faults, which should go through IOMMUFD_OBJ_FAULT) and HW-specific
  events.

- IOMMUFD_OBJ_HW_QUEUE, representing a hardware-accelerated queue, as a subset
  of the IOMMU's virtualization features, for the IOMMU HW to directly read or
  write the virtual queue memory owned by a guest OS. This HW-acceleration
  feature can allow a VM to work with the IOMMU HW directly without a VM Exit,
  so as to reduce overhead.
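As a rough user space sketch of the IOMMUFD_OBJ_FAULT flow mentioned above
(assuming the IOMMU_FAULT_QUEUE_ALLOC command and the iommu_fault_alloc /
iommu_hwpt_alloc layouts from include/uapi/linux/iommufd.h; error handling
omitted), a fault-capable HWPT could be set up like this::

  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  static int alloc_faultable_hwpt(int iommufd, __u32 dev_id, __u32 ioas_id,
                                  __u32 *out_hwpt_id, int *out_fault_fd)
  {
          struct iommu_fault_alloc fault = { .size = sizeof(fault) };
          struct iommu_hwpt_alloc hwpt = {
                  .size = sizeof(hwpt),
                  .flags = IOMMU_HWPT_FAULT_ID_VALID,
                  .dev_id = dev_id,
                  .pt_id = ioas_id,
          };

          /* Create the software fault queue first to get a fault_id and an FD */
          if (ioctl(iommufd, IOMMU_FAULT_QUEUE_ALLOC, &fault))
                  return -1;

          /* Allocate the HWPT with that fault_id so PRI faults get reported */
          hwpt.fault_id = fault.out_fault_id;
          if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &hwpt))
                  return -1;

          *out_hwpt_id = hwpt.out_hwpt_id;
          *out_fault_fd = fault.out_fault_fd; /* poll()/read() for fault events */
          return 0;
  }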
All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
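For instance, a minimal sketch of the common teardown path (any object ID
previously returned by an allocation command can be passed here)::

  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  /* Destroy any iommufd object (IOAS, HWPT, vIOMMU, ...) by its ID */
  static int destroy_object(int iommufd, __u32 id)
  {
          struct iommu_destroy cmd = {
                  .size = sizeof(cmd),
                  .id = id,
          };

          return ioctl(iommufd, IOMMU_DESTROY, &cmd);
  }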
The relationships between the user-visible objects and the kernel datastructures
behind them (iommu_domain, device) can be summarized as::

  1. IOAS <-- HWPT_PAGING <-- DEVICE                (iommu_domain <-- device)
  2. IOAS <-- HWPT_PAGING <-- HWPT_NESTED <-- DEVICE
  3. vIOMMU <-- vDEVICE
     IOAS <-- (HWPT_PAGING) <-- HWPT_NESTED <-- DEVICE
An IOAS is created via the IOMMU_IOAS_ALLOC uAPI, and an iommufd can hold
multiple IOAS objects. The IOAS is the most generic object and does not
expose interfaces that are specific to single IOMMU drivers. All operations
on the IOAS must operate equally on each of the iommu_domains inside of it.
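For example, a minimal user space sketch of creating an IOAS and mapping a
page-aligned buffer into it (struct layouts per include/uapi/linux/iommufd.h;
error handling omitted)::

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  /* iommufd is an fd obtained via open("/dev/iommu", O_RDWR) */
  static int ioas_alloc_and_map(int iommufd, void *buf, __u64 len,
                                __u32 *out_ioas_id, __u64 *out_iova)
  {
          struct iommu_ioas_alloc alloc = { .size = sizeof(alloc) };
          struct iommu_ioas_map map = {
                  .size = sizeof(map),
                  .flags = IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
                  .user_va = (__u64)(uintptr_t)buf,
                  .length = len,
          };

          if (ioctl(iommufd, IOMMU_IOAS_ALLOC, &alloc))
                  return -1;

          /* Without IOMMU_IOAS_MAP_FIXED_IOVA the kernel picks the IOVA */
          map.ioas_id = alloc.out_ioas_id;
          if (ioctl(iommufd, IOMMU_IOAS_MAP, &map))
                  return -1;

          *out_ioas_id = alloc.out_ioas_id;
          *out_iova = map.iova;
          return 0;
  }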
An HWPT_PAGING is created either automatically when a device is attached to an
IOAS, or manually via the IOMMU_HWPT_ALLOC uAPI, which
allocates a nesting parent HWPT_PAGING if the IOMMU_HWPT_ALLOC_NEST_PARENT
flag is set.

An HWPT_NESTED can only be created manually via the IOMMU_HWPT_ALLOC uAPI,
given a nesting parent HWPT_PAGING via @pt_id to associate the new HWPT_NESTED
object to. The object given by @pt_id
must be a nesting parent manually allocated via the same uAPI previously with
the IOMMU_HWPT_ALLOC_NEST_PARENT flag set, otherwise the allocation fails. The
allocation will be further validated by the IOMMU driver to ensure that the
nesting parent domain and the nested domain being allocated are compatible.
Once a device is attached to an HWPT_NESTED, it can do DMA via a 2-stage
(nested) translation through both iommu_domains. Note that multiple HWPT_NESTED
objects can be allocated by (and then associated to) the same nesting parent.
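As a sketch of the two allocations (Intel VT-d stage-1 data is used purely as an
example of the driver-specific payload; other IOMMUs use their own data_type and
struct, and error handling is omitted)::

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  static int alloc_nested_hwpt(int iommufd, __u32 dev_id, __u32 ioas_id,
                               __u64 guest_s1_pgtbl, __u32 *out_nested_id)
  {
          /* 1) The nesting parent (stage-2) HWPT_PAGING on top of an IOAS */
          struct iommu_hwpt_alloc parent = {
                  .size = sizeof(parent),
                  .flags = IOMMU_HWPT_ALLOC_NEST_PARENT,
                  .dev_id = dev_id,
                  .pt_id = ioas_id,
          };
          /* Driver-specific stage-1 configuration, here VT-d as an example */
          struct iommu_hwpt_vtd_s1 vtd = {
                  .pgtbl_addr = guest_s1_pgtbl,
                  .addr_width = 48,
          };
          struct iommu_hwpt_alloc nested = {
                  .size = sizeof(nested),
                  .dev_id = dev_id,
                  .data_type = IOMMU_HWPT_DATA_VTD_S1,
                  .data_len = sizeof(vtd),
                  .data_uptr = (__u64)(uintptr_t)&vtd,
          };

          if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &parent))
                  return -1;

          /* 2) The user-managed stage-1 HWPT_NESTED against that parent */
          nested.pt_id = parent.out_hwpt_id;
          if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &nested))
                  return -1;

          *out_nested_id = nested.out_hwpt_id;
          return 0;
  }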
A vIOMMU object can only be created manually via the IOMMU_VIOMMU_ALLOC
uAPI, provided a dev_id (for the device's physical IOMMU to back the vIOMMU)
and an hwpt_id (to associate the vIOMMU to a nesting parent HWPT_PAGING). The
iommufd core will link the vIOMMU to the physical IOMMU that the given
struct device is behind. An IOMMU driver can implement a viommu_alloc op
to allocate its own vIOMMU data structure embedding the core-level structure
iommufd_viommu and some driver-specific data. If necessary, the driver can
also initialize HW-specific virtualization resources at this point. The
resulting vIOMMU object can then be used, in place of its encapsulated
HWPT_PAGING, as a nesting parent object to allocate an HWPT_NESTED object
described above.
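A minimal user space sketch of the allocation (the viommu_type value is IOMMU
specific, e.g. IOMMU_VIOMMU_TYPE_ARM_SMMUV3; error handling omitted)::

  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  static int alloc_viommu(int iommufd, __u32 dev_id, __u32 nest_parent_hwpt_id,
                          __u32 viommu_type, __u32 *out_viommu_id)
  {
          struct iommu_viommu_alloc cmd = {
                  .size = sizeof(cmd),
                  .type = viommu_type,            /* IOMMU-specific vIOMMU type */
                  .dev_id = dev_id,               /* device behind the physical IOMMU */
                  .hwpt_id = nest_parent_hwpt_id, /* a NEST_PARENT HWPT_PAGING */
          };

          if (ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &cmd))
                  return -1;

          *out_viommu_id = cmd.out_viommu_id;
          return 0;
  }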
A vDEVICE object can only be created manually via the IOMMU_VDEVICE_ALLOC uAPI,
provided a viommu_id and a dev_id; the new object is the binding between these
two parent objects. A @virt_id will be also set via the uAPI, providing
the iommufd core an index to store the vDEVICE object in an array per
vIOMMU. If necessary, the IOMMU driver may choose to implement a vdevice_alloc
op to initialize its HW for this virtual device.
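A matching user space sketch (virt_id is whatever virtual ID the VMM uses in the
VM, e.g. a vSID for ARM SMMUv3 or a vRID for Intel VT-d; error handling
omitted)::

  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  static int alloc_vdevice(int iommufd, __u32 viommu_id, __u32 dev_id,
                           __u64 virt_id, __u32 *out_vdevice_id)
  {
          struct iommu_vdevice_alloc cmd = {
                  .size = sizeof(cmd),
                  .viommu_id = viommu_id,
                  .dev_id = dev_id,
                  .virt_id = virt_id,   /* the device's virtual ID on the vIOMMU */
          };

          if (ioctl(iommufd, IOMMU_VDEVICE_ALLOC, &cmd))
                  return -1;

          *out_vdevice_id = cmd.out_vdevice_id;
          return 0;
  }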
Kernel Datastructure
--------------------

User visible objects are backed by following datastructures:
- iommufd_ioas for IOMMUFD_OBJ_IOAS.
- iommufd_device for IOMMUFD_OBJ_DEVICE.
- iommufd_hwpt_paging for IOMMUFD_OBJ_HWPT_PAGING.
- iommufd_hwpt_nested for IOMMUFD_OBJ_HWPT_NESTED.
- iommufd_fault for IOMMUFD_OBJ_FAULT.
- iommufd_viommu for IOMMUFD_OBJ_VIOMMU.
- iommufd_vdevice for IOMMUFD_OBJ_VDEVICE.
- iommufd_veventq for IOMMUFD_OBJ_VEVENTQ.
- iommufd_hw_queue for IOMMUFD_OBJ_HW_QUEUE.
Several terminologies when looking at these datastructures:

- Automatic domain - refers to an iommu domain created automatically when
  attaching a device to an IOAS object. This is compatible to the semantics of
  VFIO type1.

- Manual domain - refers to an iommu domain designated by the user as the
  target pagetable to be attached to by a device.
- In-kernel user - refers to something like a VFIO mdev that is using the
  IOAS for access from within the kernel rather than through an attached
  iommu_domain.
Internally, an IOAS is layered as:

- struct io_pagetable holding the IOVA map
- struct iopt_area's representing populated portions of IOVA
- struct iopt_pages representing the storage of PFNs
- struct iommu_domain representing the IO page table in the IOMMU
- struct iopt_pages_access representing in-kernel users of PFNs
- struct xarray pinned_pfns holding a list of pages pinned by in-kernel users
How the underlying pages are pinned and mapped is adjusted based
on what domains are present and what kinds of in-kernel "software access" users
exist.
Multiple io_pagetable-s, through their iopt_area-s, can share a single
iopt_pages which avoids multi-pinning and double accounting of page
consumption.
IOMMUFD User API
================

.. kernel-doc:: include/uapi/linux/iommufd.h
IOMMUFD Kernel API
==================

The IOMMUFD kAPI is device-centric with group-related tricks managed behind the
scene. An external driver using it only needs to provide a
device-centric uAPI for connecting its device to an iommufd, instead of
exposing group semantics to its userspace.
.. kernel-doc:: drivers/iommu/iommufd/device.c
   :export:

.. kernel-doc:: drivers/iommu/iommufd/main.c
   :export:
VFIO and IOMMUFD
----------------

Connecting a VFIO device to iommufd can be done in two ways.

The first is a backward-compatibility mode: the legacy VFIO container IOCTLs are
implemented directly on top of io_pagetable operations, so existing VFIO
applications can run unmodified on an iommufd-backed container.
The second approach directly extends VFIO to support a new set of device-centric
user APIs based on aligned iommufd operations.
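As a user space sketch of that second approach (assuming the VFIO device cdev at
a path like /dev/vfio/devices/vfio0 and the VFIO_DEVICE_BIND_IOMMUFD /
VFIO_DEVICE_ATTACH_IOMMUFD_PT uAPIs from include/uapi/linux/vfio.h; error
handling omitted)::

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>
  #include <linux/iommufd.h>

  static int bind_and_attach(int iommufd, const char *cdev_path, __u32 ioas_id)
  {
          int devfd = open(cdev_path, O_RDWR);  /* e.g. /dev/vfio/devices/vfio0 */
          struct vfio_device_bind_iommufd bind = {
                  .argsz = sizeof(bind),
                  .iommufd = iommufd,
          };
          struct vfio_device_attach_iommufd_pt attach = {
                  .argsz = sizeof(attach),
                  .pt_id = ioas_id,     /* an IOAS or HWPT id from the iommufd */
          };

          /* Bind the VFIO device to the iommufd; creates an IOMMUFD_OBJ_DEVICE */
          if (devfd < 0 || ioctl(devfd, VFIO_DEVICE_BIND_IOMMUFD, &bind))
                  return -1;

          /* Attach the device to the IOAS/HWPT so its DMA goes through it */
          if (ioctl(devfd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach))
                  return -1;

          return devfd;
  }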
Currently both approaches are still work-in-progress.
Future TODOs
============

Currently IOMMUFD supports only kernel-managed I/O page table, similar to VFIO
type1. New features on the roadmap include:
- Binding iommu_domain's to PASID/SSID
- Userspace page tables, for ARM, x86 and S390
- Kernel bypassed invalidation of user page tables
- Re-use of the KVM page table in the IOMMU
- Dirty page tracking in the IOMMU
- Runtime Increase/Decrease of IOPTE size
- PRI support with faults resolved in userspace