.. SPDX-License-Identifier: GPL-2.0+

=======
IOMMUFD
=======

:Author: Jason Gunthorpe
:Author: Kevin Tian

Overview
========

IOMMUFD is the user API to control the IOMMU subsystem as it relates to
managing IO page tables from userspace using file descriptors. It intends to
be general and consumable by any driver that wants to expose DMA to userspace.
These drivers are eventually expected to deprecate any internal IOMMU logic
they may already/historically implement (e.g. vfio_iommu_type1.c).

At minimum iommufd provides universal support for managing I/O address spaces
and I/O page tables for all IOMMUs, with room in the design to add non-generic
features to cater to specific hardware functionality.

In this context the capitalized name (IOMMUFD) refers to the subsystem while
the lowercase name (iommufd) refers to the file descriptors created via
/dev/iommu for use by userspace.

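As a quick orientation, an iommufd is obtained by opening the character
device; every command described below is then an ioctl against that file
descriptor. A minimal sketch (error handling elided)::

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  int iommufd = open("/dev/iommu", O_RDWR);
  /* All objects below are created through, and owned by, this fd */
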
Key Concepts
============

User Visible Objects
--------------------

The following IOMMUFD objects are exposed to userspace:

- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS), allowing map/unmap
  of user space memory into ranges of I/O Virtual Address (IOVA).

  The IOAS is a functional replacement for the VFIO container, and like the VFIO
  container it copies an IOVA map to a list of iommu_domains held within it.

- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
  external driver.

- IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING"
  primarily indicates this type of HWPT should be linked to an IOAS. It also
  indicates that it is backed by an iommu_domain with the __IOMMU_DOMAIN_PAGING
  feature flag. This can be either an UNMANAGED stage-1 domain for a device
  running in user space, or a nesting parent stage-2 domain for mappings
  from guest-level physical addresses to host-level physical addresses.

  The IOAS has a list of HWPT_PAGINGs that share the same IOVA mapping and
  it will synchronize its mapping with each member HWPT_PAGING.

- IOMMUFD_OBJ_HWPT_NESTED, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by user space (e.g. a guest OS).
  "NESTED" indicates that this type of HWPT should be linked to an HWPT_PAGING.
  It also indicates that it is backed by an iommu_domain that has a type of
  IOMMU_DOMAIN_NESTED. This must be a stage-1 domain for a device running in
  user space (e.g. in a guest VM enabling the IOMMU nested translation
  feature). As such, it must be created with a given nesting parent stage-2
  domain to associate to. This nested stage-1 page table managed by the user
  space usually has mappings from guest-level I/O virtual addresses to guest-
  level physical addresses.

- IOMMUFD_OBJ_FAULT, representing a software queue for an HWPT reporting IO
  page faults using the IOMMU HW's PRI (Page Request Interface). This queue
  object provides user space an FD to poll the page fault events and also to
  respond to those events. A FAULT object must be created first to get a
  fault_id that can then be used to allocate a fault-enabled HWPT via the
  IOMMU_HWPT_ALLOC command by setting the IOMMU_HWPT_FAULT_ID_VALID bit in its
  flags field (see the sketch following this list).

- IOMMUFD_OBJ_VIOMMU, representing a slice of the physical IOMMU instance,
  passed to or shared with a VM. It may include some HW-accelerated
  virtualization features and some SW resources used by the VM. For example:

  * Security namespace for guest owned ID, e.g. guest-controlled cache tags
  * Non-device-affiliated event reporting, e.g. invalidation queue errors
  * Access to a shareable nesting parent pagetable across physical IOMMUs
  * Virtualization of various platform IDs, e.g. RIDs and others
  * Delivery of paravirtualized invalidation
  * Direct assigned invalidation queues
  * Direct assigned interrupts

  Such a vIOMMU object generally has access to a nesting parent pagetable
  to support some HW-accelerated virtualization features. So, a vIOMMU object
  must be created given a nesting parent HWPT_PAGING object, which it then
  encapsulates. A vIOMMU object can therefore be used to allocate an
  HWPT_NESTED object in place of the encapsulated HWPT_PAGING.

  .. note::

     The name "vIOMMU" isn't necessarily identical to a virtualized IOMMU in a
     VM. A VM can have one giant virtualized IOMMU running on a machine having
     multiple physical IOMMUs, in which case the VMM will dispatch the requests
     or configurations from this single virtualized IOMMU instance to multiple
     vIOMMU objects created for individual slices of different physical IOMMUs.
     In other words, a vIOMMU object is always a representation of one physical
     IOMMU, not necessarily of a virtualized IOMMU. For VMMs that want the full
     virtualization features from physical IOMMUs, it is suggested to build the
     same number of virtualized IOMMUs as the number of physical IOMMUs, so the
     passed-through devices would be connected to their own virtualized IOMMUs
     backed by corresponding vIOMMU objects, in which case a guest OS would do
     the "dispatch" naturally instead of relying on VMM traps.

- IOMMUFD_OBJ_VDEVICE, representing a virtual device for an IOMMUFD_OBJ_DEVICE
  against an IOMMUFD_OBJ_VIOMMU. This virtual device holds the device's virtual
  information or attributes (related to the vIOMMU) in a VM. An immediate vDATA
  example can be the virtual ID of the device on a vIOMMU, which is a unique ID
  that the VMM assigns to the device for a translation channel/port of the
  vIOMMU, e.g. the vSID of ARM SMMUv3, the vDeviceID of AMD IOMMU, and the vRID
  of Intel VT-d to a Context Table. Some advanced security information could
  potentially be forwarded via this object too, such as security level or realm
  information in a Confidential Compute Architecture. When a VMM connects a
  device to a vIOMMU, it should create a vDEVICE object to forward all of the
  device's information in the VM; this is a separate ioctl call from attaching
  the same device to an HWPT_PAGING that the vIOMMU holds.

- IOMMUFD_OBJ_VEVENTQ, representing a software queue for a vIOMMU to report its
  events, such as translation faults that occurred on a nested stage-1
  (excluding I/O page faults, which should go through IOMMUFD_OBJ_FAULT) and
  HW-specific events. This queue object provides user space an FD to poll/read
  the vIOMMU events. A vIOMMU object must be created first to get its
  viommu_id, which can then be used to allocate a vEVENTQ. Each vIOMMU can
  support multiple types of vEVENTs, but is confined to one vEVENTQ per vEVENTQ
  type.

- IOMMUFD_OBJ_HW_QUEUE, representing a hardware accelerated queue, as a subset
  of the IOMMU's virtualization features, for the IOMMU HW to directly read or
  write the virtual queue memory owned by a guest OS. This HW-acceleration
  feature allows a VM to work with the IOMMU HW directly without a VM Exit,
  reducing hypercall overhead. Along with the HW QUEUE object, iommufd provides
  user space an mmap interface for the VMM to mmap a physical MMIO region from
  the host physical address space to the guest physical address space, allowing
  the guest OS to directly control the allocated HW QUEUE. Thus, when
  allocating a HW QUEUE, the VMM must request a pair of mmap info
  (offset/length) and pass it exactly to an mmap syscall via its offset and
  length arguments.

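As referenced in the FAULT object description above, a fault-enabled HWPT is
allocated by pairing IOMMU_HWPT_ALLOC with a previously created FAULT object.
A minimal sketch, assuming iommufd, dev_id and pt_id exist from earlier steps
(field names follow include/uapi/linux/iommufd.h; consult the header for the
authoritative layout)::

  #include <poll.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  struct iommu_fault_alloc fault = { .size = sizeof(fault) };
  ioctl(iommufd, IOMMU_FAULT_QUEUE_ALLOC, &fault);

  struct iommu_hwpt_alloc hwpt = {
          .size = sizeof(hwpt),
          .flags = IOMMU_HWPT_FAULT_ID_VALID,
          .dev_id = dev_id,               /* from a prior bind */
          .pt_id = pt_id,                 /* IOAS or nesting parent HWPT */
          .fault_id = fault.out_fault_id,
  };
  ioctl(iommufd, IOMMU_HWPT_ALLOC, &hwpt);

  /* Wait for the HW to deliver a page request, then read the event from
   * and write the response to the fault FD; vEVENTQ FDs poll the same way */
  struct pollfd pfd = { .fd = fault.out_fault_fd, .events = POLLIN };
  poll(&pfd, 1, -1);
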
All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.

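Destruction is uniform across all of these object types. A short sketch,
assuming iommufd and the fault-enabled HWPT from the sketch above::

  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  struct iommu_destroy destroy = {
          .size = sizeof(destroy),
          .id = hwpt.out_hwpt_id,  /* any object's ID works the same way */
  };
  ioctl(iommufd, IOMMU_DESTROY, &destroy);
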
The diagrams below show the relationships between user-visible objects and
kernel datastructures (external to iommufd), with the numbers referring to the
operations that create the objects and links::

  _______________________________________________________________________
 |                      iommufd (HWPT_PAGING only)                       |
 |                                                                       |
 |        [1]                  [3]                                [2]    |
 |  ________________      _____________                        ________  |
 | |                |    |             |                      |        | |
 | |      IOAS      |<---| HWPT_PAGING |<---------------------| DEVICE | |
 | |________________|    |_____________|                      |________| |
 |         |                    |                                  |     |
 |_________|____________________|__________________________________|_____|
           |                    |                                  |
           |              ______v_____                          ___v__
           | PFN storage |  (paging)  |                        |struct|
           |------------>|iommu_domain|<-----------------------|device|
                         |____________|                        |______|

  _______________________________________________________________________
 |                      iommufd (with HWPT_NESTED)                       |
 |                                                                       |
 |        [1]                  [3]                [4]             [2]    |
 |  ________________      _____________      _____________     ________  |
 | |                |    |             |    |             |   |        | |
 | |      IOAS      |<---| HWPT_PAGING |<---| HWPT_NESTED |<--| DEVICE | |
 | |________________|    |_____________|    |_____________|   |________| |
 |         |                    |                  |               |     |
 |_________|____________________|__________________|_______________|_____|
           |                    |                  |               |
           |              ______v_____       ______v_____       ___v__
           | PFN storage |  (paging)  |     |  (nested)  |     |struct|
           |------------>|iommu_domain|<----|iommu_domain|<----|device|
                         |____________|     |____________|     |______|

  _______________________________________________________________________
 |                      iommufd (with vIOMMU/vDEVICE)                    |
 |                                                                       |
 |                             [5]                [6]                    |
 |                        _____________      _____________               |
 |                       |             |    |             |              |
 |      |----------------|    vIOMMU   |<---|   vDEVICE   |<----|        |
 |      |                |             |    |_____________|     |        |
 |      |                |             |                        |        |
 |      |      [1]       |             |          [4]           | [2]    |
 |      |     ______     |             |     _____________     _|______  |
 |      |    |      |    |     [3]     |    |             |   |        | |
 |      |    | IOAS |<---|(HWPT_PAGING)|<---| HWPT_NESTED |<--| DEVICE | |
 |      |    |______|    |_____________|    |_____________|   |________| |
 |      |        |              |                  |               |     |
 |______|________|______________|__________________|_______________|_____|
        |        |              |                  |               |
  ______v_____   |        ______v_____       ______v_____       ___v__
 |   struct   |  |  PFN  |  (paging)  |     |  (nested)  |     |struct|
 |iommu_device|  |------>|iommu_domain|<----|iommu_domain|<----|device|
 |____________|   storage|____________|     |____________|     |______|

1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. An iommufd can
   hold multiple IOAS objects. IOAS is the most generic object and does not
   expose interfaces that are specific to single IOMMU drivers. All operations
   on the IOAS must operate equally on each of the iommu_domains inside of it.

2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
   to bind a device to an iommufd. The driver is expected to implement a set of
   ioctls to allow userspace to initiate the binding operation. Successful
   completion of this operation establishes the desired DMA ownership over the
   device. The driver must also set the driver_managed_dma flag and must not
   touch the device until this operation succeeds.

3. IOMMUFD_OBJ_HWPT_PAGING can be created in two ways:

   * IOMMUFD_OBJ_HWPT_PAGING is automatically created when an external driver
     calls the IOMMUFD kAPI to attach a bound device to an IOAS. Similarly the
     external driver uAPI allows userspace to initiate the attaching operation.
     If a compatible member HWPT_PAGING object exists in the IOAS's HWPT_PAGING
     list, then it will be reused. Otherwise a new HWPT_PAGING that represents
     an iommu_domain to userspace will be created, and then added to the list.
     Successful completion of this operation sets up the linkages among IOAS,
     device and iommu_domain. Once this completes the device can do DMA.

   * IOMMUFD_OBJ_HWPT_PAGING can be manually created via the IOMMU_HWPT_ALLOC
     uAPI, provided an ioas_id via @pt_id to associate the new HWPT_PAGING to
     the corresponding IOAS object. The benefit of this manual allocation is to
     allow allocation flags (defined in enum iommufd_hwpt_alloc_flags), e.g. it
     allocates a nesting parent HWPT_PAGING if the IOMMU_HWPT_ALLOC_NEST_PARENT
     flag is set (see the sketch following this list).

4. IOMMUFD_OBJ_HWPT_NESTED can only be manually created via the
   IOMMU_HWPT_ALLOC uAPI, provided an hwpt_id or a viommu_id of a vIOMMU object
   encapsulating a nesting parent HWPT_PAGING via @pt_id to associate the new
   HWPT_NESTED object to the corresponding HWPT_PAGING object. The associated
   HWPT_PAGING object must be a nesting parent manually allocated via the same
   uAPI previously with an IOMMU_HWPT_ALLOC_NEST_PARENT flag, otherwise the
   allocation will fail. The allocation will be further validated by the IOMMU
   driver to ensure that the nesting parent domain and the nested domain being
   allocated are compatible. Successful completion of this operation sets up
   linkages among IOAS, device, and iommu_domains. Once this completes the
   device can do DMA via a 2-stage translation, a.k.a. nested translation. Note
   that multiple HWPT_NESTED objects can be allocated by (and then associated
   to) the same nesting parent.

   .. note::

      Either a manual IOMMUFD_OBJ_HWPT_PAGING or an IOMMUFD_OBJ_HWPT_NESTED is
      created via the same IOMMU_HWPT_ALLOC uAPI. The difference is the type
      of the object passed in via the @pt_id field of struct iommufd_hwpt_alloc.

5. IOMMUFD_OBJ_VIOMMU can only be manually created via the IOMMU_VIOMMU_ALLOC
   uAPI, provided a dev_id (for the device's physical IOMMU to back the vIOMMU)
   and an hwpt_id (to associate the vIOMMU to a nesting parent HWPT_PAGING).
   The iommufd core will link the vIOMMU object to the struct iommu_device that
   the struct device is behind. An IOMMU driver can implement a viommu_alloc op
   to allocate its own vIOMMU data structure embedding the core-level structure
   iommufd_viommu and some driver-specific data. If necessary, the driver can
   also configure its HW virtualization feature for that vIOMMU (and thus for
   the VM). Successful completion of this operation sets up the linkages
   between the vIOMMU object and the HWPT_PAGING, then this vIOMMU object can
   be used as a nesting parent object to allocate an HWPT_NESTED object as
   described above.

6. IOMMUFD_OBJ_VDEVICE can only be manually created via the IOMMU_VDEVICE_ALLOC
   uAPI, provided a viommu_id for an iommufd_viommu object and a dev_id for an
   iommufd_device object. The vDEVICE object will be the binding between these
   two parent objects. Another @virt_id will also be set via the uAPI, providing
   the iommufd core an index to store the vDEVICE object to a vDEVICE array per
   vIOMMU. If necessary, the IOMMU driver may choose to implement a
   vdevice_alloc op to initialize its HW for the virtualization feature related
   to a vDEVICE. Successful completion of this operation sets up the linkages
   between the vIOMMU and the device.

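Putting operations [1]-[3] together, below is a hedged sketch of the
device-centric flow through the VFIO cdev uAPI. The vfio structure names are
from include/uapi/linux/vfio.h; the device path, buffer and buffer_len are
illustrative application state, and error handling is elided::

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>
  #include <linux/vfio.h>

  int iommufd = open("/dev/iommu", O_RDWR);
  int vfio_fd = open("/dev/vfio/devices/vfio0", O_RDWR);  /* example path */

  /* [1] Create an IOAS to hold IOVA mappings */
  struct iommu_ioas_alloc alloc = { .size = sizeof(alloc) };
  ioctl(iommufd, IOMMU_IOAS_ALLOC, &alloc);

  /* [2] Bind the device to the iommufd, claiming DMA ownership */
  struct vfio_device_bind_iommufd bind = {
          .argsz = sizeof(bind),
          .iommufd = iommufd,
  };
  ioctl(vfio_fd, VFIO_DEVICE_BIND_IOMMUFD, &bind);

  /* [3] Attach to the IOAS; a compatible HWPT_PAGING is found or created */
  struct vfio_device_attach_iommufd_pt attach = {
          .argsz = sizeof(attach),
          .pt_id = alloc.out_ioas_id,
  };
  ioctl(vfio_fd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach);

  /* Populate the IOAS; the kernel picks the IOVA and returns it in .iova */
  struct iommu_ioas_map map = {
          .size = sizeof(map),
          .flags = IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
          .ioas_id = alloc.out_ioas_id,
          .user_va = (uintptr_t)buffer,
          .length = buffer_len,
  };
  ioctl(iommufd, IOMMU_IOAS_MAP, &map);
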
A device can bind to only one iommufd, due to the DMA ownership claim, and can
attach to at most one IOAS object (no PASID support yet).

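Similarly for operations [4]-[6], a sketch continuing from the flow above.
Real allocations must pass the IOMMU driver's data_type/data for the nested
HWPT and a driver-chosen type for the vIOMMU; the DEFAULT/NONE values and the
0x99 virt_id below only illustrate the plumbing::

  /* A nesting parent HWPT_PAGING, manually allocated per operation [3] */
  struct iommu_hwpt_alloc parent = {
          .size = sizeof(parent),
          .flags = IOMMU_HWPT_ALLOC_NEST_PARENT,
          .dev_id = bind.out_devid,
          .pt_id = alloc.out_ioas_id,
  };
  ioctl(iommufd, IOMMU_HWPT_ALLOC, &parent);

  /* [5] A vIOMMU slice backed by the device's physical IOMMU */
  struct iommu_viommu_alloc viommu = {
          .size = sizeof(viommu),
          .type = IOMMU_VIOMMU_TYPE_DEFAULT,
          .dev_id = bind.out_devid,
          .hwpt_id = parent.out_hwpt_id,
  };
  ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &viommu);

  /* [4] An HWPT_NESTED allocated with the vIOMMU as the nesting parent */
  struct iommu_hwpt_alloc nested = {
          .size = sizeof(nested),
          .dev_id = bind.out_devid,
          .pt_id = viommu.out_viommu_id,
          .data_type = IOMMU_HWPT_DATA_NONE,  /* driver-specific in practice */
  };
  ioctl(iommufd, IOMMU_HWPT_ALLOC, &nested);

  /* [6] A vDEVICE recording the device's virtual ID on the vIOMMU */
  struct iommu_vdevice_alloc vdev = {
          .size = sizeof(vdev),
          .viommu_id = viommu.out_viommu_id,
          .dev_id = bind.out_devid,
          .virt_id = 0x99,  /* e.g. a vRID/vSID the VMM chose; hypothetical */
  };
  ioctl(iommufd, IOMMU_VDEVICE_ALLOC, &vdev);
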
Kernel Datastructure
--------------------

User visible objects are backed by the following datastructures:

- iommufd_ioas for IOMMUFD_OBJ_IOAS.
- iommufd_device for IOMMUFD_OBJ_DEVICE.
- iommufd_hwpt_paging for IOMMUFD_OBJ_HWPT_PAGING.
- iommufd_hwpt_nested for IOMMUFD_OBJ_HWPT_NESTED.
- iommufd_fault for IOMMUFD_OBJ_FAULT.
- iommufd_viommu for IOMMUFD_OBJ_VIOMMU.
- iommufd_vdevice for IOMMUFD_OBJ_VDEVICE.
- iommufd_veventq for IOMMUFD_OBJ_VEVENTQ.
- iommufd_hw_queue for IOMMUFD_OBJ_HW_QUEUE.

Several terms are used when discussing these datastructures:

- Automatic domain - refers to an iommu domain created automatically when
  attaching a device to an IOAS object. This is compatible with the semantics
  of VFIO type1.

- Manual domain - refers to an iommu domain designated by the user as the
  target pagetable to be attached to by a device. Though currently there are
  no uAPIs to directly create such a domain, the datastructure and algorithms
  are ready for handling that use case.

- In-kernel user - refers to something like a VFIO mdev that is using the
  IOMMUFD access interface to access the IOAS (see the sketch following this
  list). This starts by creating an iommufd_access object that is similar to
  the domain binding a physical device would do. The access object will then
  allow converting IOVA ranges into struct page * lists, or doing direct
  read/write to an IOVA.

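For the in-kernel user case above, a hedged sketch of how an mdev-like driver
might use the access interface. Signatures are abbreviated from the exported
functions in drivers/iommu/iommufd/device.c; ictx, my_data, ioas_id, iova, buf
and len are assumed to come from the driver's own plumbing::

  #include <linux/iommufd.h>

  /* Called when userspace unmaps an IOVA range the access may be using */
  static void my_unmap(void *data, unsigned long iova, unsigned long length)
  {
          /* stop using [iova, iova + length) and unpin any pinned pages */
  }

  static const struct iommufd_access_ops my_ops = {
          .unmap = my_unmap,
  };

  struct iommufd_access *access;
  u32 access_id;

  access = iommufd_access_create(ictx, &my_ops, my_data, &access_id);
  iommufd_access_attach(access, ioas_id);

  /* Direct read/write to an IOVA without a physical IOMMU mapping */
  iommufd_access_rw(access, iova, buf, len, IOMMUFD_ACCESS_RW_WRITE);
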
iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
mapped to memory pages, composed of:

- struct io_pagetable holding the IOVA map
- struct iopt_area's representing populated portions of IOVA
- struct iopt_pages representing the storage of PFNs
- struct iommu_domain representing the IO page table in the IOMMU
- struct iopt_pages_access representing in-kernel users of PFNs
- struct xarray pinned_pfns holding a list of pages pinned by in-kernel users

Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
ultimately derived from userspace VAs via an mm_struct. Once they have been
pinned the PFNs are stored in IOPTEs of an iommu_domain or inside the
pinned_pfns xarray if they have been pinned through an iommufd_access.

PFNs have to be copied between all combinations of storage locations, depending
on what domains are present and what kinds of in-kernel "software access" users
exist. The mechanism ensures that a page is pinned only once.

An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
list of iommu_domains that mirror the IOVA to PFN map.

Multiple io_pagetable-s, through their iopt_area-s, can share a single
iopt_pages which avoids multi-pinning and double accounting of page
consumption.

iommufd_ioas is shareable between subsystems, e.g. VFIO and VDPA, as long as
devices managed by different subsystems are bound to the same iommufd.

IOMMUFD User API
================

.. kernel-doc:: include/uapi/linux/iommufd.h

IOMMUFD Kernel API
==================

The IOMMUFD kAPI is device-centric, with group-related tricks managed behind
the scenes. This allows the external drivers calling such kAPIs to implement a
simple device-centric uAPI for connecting their devices to an iommufd, instead
of explicitly imposing the group semantics in their uAPI as VFIO does.

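A hedged sketch of the calling convention from an external driver's
perspective; the signatures are paraphrased from the exported symbols in
drivers/iommu/iommufd/device.c, and ictx/dev come from the driver's probe
path::

  #include <linux/iommufd.h>

  struct iommufd_device *idev;
  u32 dev_id, pt_id;

  /* Bind: claims DMA ownership of dev's group, returns the uAPI dev_id */
  idev = iommufd_device_bind(ictx, dev, &dev_id);

  /* Attach: pt_id names an IOAS or HWPT chosen by userspace; for an IOAS
   * the core finds or creates a compatible hwpt behind the scenes */
  iommufd_device_attach(idev, &pt_id);

  /* ... device DMA ... */

  iommufd_device_detach(idev);
  iommufd_device_unbind(idev);
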
.. kernel-doc:: drivers/iommu/iommufd/device.c
   :export:

.. kernel-doc:: drivers/iommu/iommufd/main.c
   :export:

VFIO and IOMMUFD
----------------

Connecting a VFIO device to iommufd can be done in two ways.

The first is a VFIO-compatible way, directly implementing the /dev/vfio/vfio
container IOCTLs by mapping them into io_pagetable operations. Doing so allows
the use of iommufd in legacy VFIO applications by symlinking /dev/vfio/vfio to
/dev/iommu, or by extending VFIO to SET_CONTAINER using an iommufd instead of
a container fd (see the sketch below).

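A sketch of the first approach from an application's point of view: the legacy
type1 IOCTLs run unmodified against an iommufd-backed container fd. The
buffer/buffer_len variables are illustrative, and the group handshake is
elided::

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  int container = open("/dev/vfio/vfio", O_RDWR);  /* may resolve to iommufd */

  /* legacy group open + VFIO_GROUP_SET_CONTAINER elided */
  ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1v2_IOMMU);
  ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1v2_IOMMU);

  struct vfio_iommu_type1_dma_map dma_map = {
          .argsz = sizeof(dma_map),
          .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
          .vaddr = (uintptr_t)buffer,
          .iova = 0,
          .size = buffer_len,
  };
  ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);
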
The second approach directly extends VFIO to support a new set of
device-centric user APIs based on the aforementioned IOMMUFD kernel API. It
requires userspace changes but better matches the IOMMUFD API semantics and
makes it easier to support new iommufd features than the first approach.

Currently both approaches are still work-in-progress.

There are still a few gaps to be resolved to catch up with VFIO type1, as
documented in iommufd_vfio_check_extension().

Future TODOs
============

Currently IOMMUFD supports only kernel-managed I/O page tables, similar to
VFIO type1. New features on the radar include:

 - Binding iommu_domain's to PASID/SSID
 - Userspace page tables, for ARM, x86 and S390
 - Kernel-bypassed invalidation of user page tables
 - Re-use of the KVM page table in the IOMMU
 - Dirty page tracking in the IOMMU
 - Runtime increase/decrease of IOPTE size
 - PRI support with faults resolved in userspace