==========================================
I915 VM_BIND feature design and use cases
==========================================

The DRM_I915_GEM_VM_BIND/UNBIND ioctls allow a UMD to bind/unbind GEM buffer
objects (BOs), or sections of them, at specified GPU virtual addresses on a
specified address space (VM). These mappings are persistent across GPU
submissions; the user does not have to provide a list of all required
mappings during each submission (as required by the older execbuf mode).

The user has to opt in to the VM_BIND mode of binding for an address space
(VM) at VM creation time, via the I915_VM_CREATE_FLAGS_USE_VM_BIND extension
(see the sketch after the feature list below).

VM_BIND features include:

* Support for userptr gem objects (no special uapi is required for this).
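
A minimal sketch of the opt-in, assuming the I915_VM_CREATE_FLAGS_USE_VM_BIND
flag proposed by this RFC (the flag value below is a placeholder, not final
uapi); struct drm_i915_gem_vm_control and DRM_IOCTL_I915_GEM_VM_CREATE are
existing uapi::

  #include <stdint.h>
  #include <string.h>
  #include <xf86drm.h>
  #include <i915_drm.h>

  #ifndef I915_VM_CREATE_FLAGS_USE_VM_BIND
  #define I915_VM_CREATE_FLAGS_USE_VM_BIND (1u << 0)  /* placeholder value */
  #endif

  /* Create a VM that is opted in to VM_BIND mode at creation time. */
  static int create_vm_bind_vm(int drm_fd, uint32_t *vm_id)
  {
          struct drm_i915_gem_vm_control ctl;

          memset(&ctl, 0, sizeof(ctl));
          ctl.flags = I915_VM_CREATE_FLAGS_USE_VM_BIND;  /* opt in to VM_BIND */

          if (drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_VM_CREATE, &ctl))
                  return -1;

          /* All VM_BIND mappings and execbuf3 calls then use this VM. */
          *vm_id = ctl.vm_id;
          return 0;
  }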

Execbuf ioctl in VM_BIND mode
-------------------------------
A VM in VM_BIND mode will not support the older execbuf mode of binding.
The execbuf ioctl handling in VM_BIND mode differs significantly from the
older method. Hence, a new execbuf3 ioctl has been added to support VM_BIND
mode (see struct drm_i915_gem_execbuffer3). The execbuf3 ioctl will not have
an execlist; hence, there is no support for implicit sync. It is expected
that the work below will be able to cover object dependency setting in all
use cases:

"dma-buf: Add an API for exporting sync files"

The new execbuf3 ioctl only works in VM_BIND mode, and the VM_BIND mode only
works with the execbuf3 ioctl for submission.

In VM_BIND mode, VA allocation is completely managed by the user instead of
the i915 driver; VA assignment and eviction handling by the driver do not
apply. Also, for determining object activeness, VM_BIND mode will not be
using the i915_vma active reference tracking; it will instead use the
object's dma-resv object (see `VM_BIND dma_resv usage`_).

So, a lot of the existing code supporting the execbuf2 ioctl, like
relocations, VA evictions, the vma lookup table, implicit sync, vma active
reference tracking etc., is not applicable to the execbuf3 ioctl. Hence, all
execbuf3-specific handling should live in a separate file, sharing only the
functionality common to both ioctls.
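
The intended submission flow, in an abbreviated sketch: the two structures
below are illustrative stand-ins for the struct drm_i915_gem_vm_bind and
struct drm_i915_gem_execbuffer3 proposed by this RFC (see the uapi header
referenced at the end of this document for the authoritative definitions),
and the two ioctl wrappers are hypothetical::

  #include <stdint.h>

  struct i915_vm_bind_sketch {
          uint32_t vm_id;      /* VM created with the VM_BIND opt-in */
          uint32_t handle;     /* BO to map */
          uint64_t start;      /* GPU VA, chosen entirely by the UMD */
          uint64_t offset;     /* offset into the BO (partial binding) */
          uint64_t length;     /* length of the mapping */
  };

  struct i915_execbuf3_sketch {
          uint32_t ctx_id;         /* context whose VM is in VM_BIND mode */
          uint64_t batch_address;  /* batch GPU VA; no execlist of handles */
  };

  /* Hypothetical wrappers around the proposed VM_BIND and execbuf3 ioctls. */
  int sketch_vm_bind(int fd, const struct i915_vm_bind_sketch *bind);
  int sketch_execbuf3(int fd, const struct i915_execbuf3_sketch *exec);

  /*
   * Bind once, submit many times: the mapping is persistent, so the
   * per-submission work is only the execbuf3 call; there is no relocation,
   * execlist processing or implicit sync in the kernel.
   */
  int run_frames(int fd, struct i915_vm_bind_sketch *bind,
                 struct i915_execbuf3_sketch *exec, int frames)
  {
          int i, ret;

          ret = sketch_vm_bind(fd, bind);
          if (ret)
                  return ret;

          for (i = 0; i < frames; i++) {
                  ret = sketch_execbuf3(fd, exec);
                  if (ret)
                          return ret;
          }
          return 0;
  }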

VM_PRIVATE objects
-------------------
By default, BOs can be mapped on multiple VMs and can also be dma-buf
exported; such BOs are referred to as Shared BOs. During each execbuf
submission, the request fence must be added to the dma-resv fence list of
every shared BO mapped on the VM.

As an optimization, the user can instead create a BO that is private to a
specific VM, via the I915_GEM_CREATE_EXT_VM_PRIVATE flag at BO creation
time. Unlike Shared BOs, these VM private BOs can only be mapped on the VM
they are private to and can't be dma-buf exported. All private BOs of a VM
share one dma-resv object, so during each execbuf submission only one
dma-resv fence list needs to be updated. Thus, the fast path (where all
required mappings are already bound) has a submission latency that is O(1)
with respect to the number of VM private BOs.
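
A sketch of VM-private BO creation, assuming the I915_GEM_CREATE_EXT_VM_PRIVATE
extension proposed by this RFC; the extension struct layout and name value
below are placeholders, while struct drm_i915_gem_create_ext,
struct i915_user_extension and DRM_IOCTL_I915_GEM_CREATE_EXT are existing
uapi::

  #include <stdint.h>
  #include <string.h>
  #include <xf86drm.h>
  #include <i915_drm.h>

  /* Placeholder mirror of the proposed VM-private create extension. */
  struct create_ext_vm_private_sketch {
          struct i915_user_extension base;
          uint32_t vm_id;  /* the only VM this BO may be mapped on */
          uint32_t pad;
  };
  #define SKETCH_CREATE_EXT_VM_PRIVATE 99  /* placeholder extension name */

  static int create_vm_private_bo(int fd, uint32_t vm_id, uint64_t size,
                                  uint32_t *handle)
  {
          struct create_ext_vm_private_sketch ext;
          struct drm_i915_gem_create_ext create;

          memset(&ext, 0, sizeof(ext));
          ext.base.name = SKETCH_CREATE_EXT_VM_PRIVATE;
          ext.vm_id = vm_id;

          memset(&create, 0, sizeof(create));
          create.size = size;
          create.extensions = (uintptr_t)&ext;  /* chain the extension */

          if (drmIoctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create))
                  return -1;

          /* The new BO shares its VM's dma-resv object. */
          *handle = create.handle;
          return 0;
  }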

VM_BIND locking hierarchy
-------------------------
The locking design here supports the older (execlist based) execbuf mode, the
newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible future
system allocator support.
The older execbuf mode and the newer VM_BIND mode without page faults manage
residency of backing storage using dma_fence. The VM_BIND mode with page
faults and the system allocator support do not use any dma_fence at all.

VM_BIND locking order is as below (a sketch of this ordering is given at the
end of this section).

1) Lock-A: A vm_bind mutex will protect the vm_bind lists. This lock is taken
   in the vm_bind/vm_unbind ioctl calls, in the execbuf path and while
   releasing the mapping.

   In future, when GPU page faults are supported, this can potentially become
   a rwsem, so that the fault handler can take it in read mode while the
   vm_bind and vm_unbind ioctls take it in write mode.

   The older execbuf mode of binding does not need this lock.

2) Lock-B: The object's dma-resv lock will protect i915_vma state and needs
   to be held while binding/unbinding a vma in the async worker and while
   updating the dma-resv fence list of an object. Note that private BOs of a
   VM will all share a single dma-resv object.

3) Lock-C: Spinlock/s to protect some of the VM's lists, like the list of
   invalidated vmas (due to eviction and userptr invalidation) etc.

When GPU page faults are supported, the execbuf path will not take any of
these locks. It will simply smash the new batch buffer address into the ring
and tell the scheduler to run it. Locks are taken only from the page fault
handler, where we take lock-A in read mode, whichever lock-B we need to find
the backing storage (the dma-resv lock for gem objects, hmm/core mm for the
system allocator) and some additional locks (lock-D) for taking care of page
table races. Page fault mode should not need to ever manipulate the vm
lists, so it won't ever need lock-C.
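
The nesting implied by this order, shown with hypothetical structures and
helpers (none of these names exist in the driver; lock-D and the page fault
path are omitted)::

  #include <linux/dma-resv.h>
  #include <linux/list.h>
  #include <linux/mutex.h>
  #include <linux/spinlock.h>

  struct sketch_vma {
          struct list_head bind_link;
          struct list_head invalidated_link;
  };

  struct sketch_vm {
          struct mutex vm_bind_lock;    /* lock-A: protects vm_bind lists */
          spinlock_t invalidated_lock;  /* lock-C: protects invalidated-vma list */
          struct dma_resv *resv;        /* lock-B: shared by all VM-private BOs */
          struct list_head vm_bind_list;
          struct list_head invalidated_list;
  };

  static void sketch_bind_vma(struct sketch_vm *vm, struct sketch_vma *vma)
  {
          mutex_lock(&vm->vm_bind_lock);       /* lock-A is taken first */
          list_add_tail(&vma->bind_link, &vm->vm_bind_list);

          dma_resv_lock(vm->resv, NULL);       /* lock-B nests under lock-A */
          /* ... bind the vma and update the object's dma-resv fence list ... */
          dma_resv_unlock(vm->resv);

          spin_lock(&vm->invalidated_lock);    /* lock-C nests under A and B */
          list_del_init(&vma->invalidated_link);
          spin_unlock(&vm->invalidated_lock);

          mutex_unlock(&vm->vm_bind_lock);
  }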

VM_BIND dma_resv usage
-----------------------
Fences need to be added to all VM_BIND mapped objects. During each execbuf
submission, they are added with DMA_RESV_USAGE_BOOKKEEP usage to prevent
over-sync (see enum dma_resv_usage). This can be overridden with either
DMA_RESV_USAGE_READ or DMA_RESV_USAGE_WRITE usage during explicit object
dependency setting.

Also, in VM_BIND mode, dma-resv apis are used for determining object
activeness, instead of the i915_vma active reference tracking, which is not
supported in VM_BIND mode.
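
A kernel-side sketch of both points, using the existing dma-resv API
(dma_resv_reserve_fences(), dma_resv_add_fence() and dma_resv_test_signaled());
the helper names and the obj_resv/request_fence parameters are placeholders
for the BO's dma-resv object and the execbuf request fence::

  #include <linux/dma-fence.h>
  #include <linux/dma-resv.h>

  /* Add the request fence as BOOKKEEP so it creates no implicit
   * read/write dependency (no over-sync). */
  static int sketch_add_request_fence(struct dma_resv *obj_resv,
                                      struct dma_fence *request_fence)
  {
          int ret;

          dma_resv_lock(obj_resv, NULL);

          ret = dma_resv_reserve_fences(obj_resv, 1);
          if (!ret)
                  dma_resv_add_fence(obj_resv, request_fence,
                                     DMA_RESV_USAGE_BOOKKEEP);

          dma_resv_unlock(obj_resv);
          return ret;
  }

  /* Object activeness in VM_BIND mode: ask the dma-resv object whether any
   * fence up to BOOKKEEP usage is still pending, instead of consulting
   * i915_vma active reference tracking. */
  static bool sketch_object_is_active(struct dma_resv *obj_resv)
  {
          return !dma_resv_test_signaled(obj_resv, DMA_RESV_USAGE_BOOKKEEP);
  }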

Mesa use case
--------------
VM_BIND can potentially reduce the CPU overhead of the execbuf ioctl by
reducing the amount of work done in the binding path during each submission,
hence improving the performance of CPU-bound applications. It also allows us
to implement Vulkan's Sparse Resources.

Long running Compute contexts
------------------------------
Usage of dma-fence expects that they complete in a reasonable amount of time.
Compute, on the other hand, can be long running. Hence it is appropriate for
compute to use user/memory fences, and dma-fence usage must be limited to
in-kernel consumption only.

Where GPU page faults are not available, the kernel driver, upon buffer
invalidation, will initiate a suspend (preemption) of the long running
context, finish the invalidation, revalidate the buffer and then resume the
compute context. This is done by having a per-context preempt fence which is
enabled when someone tries to wait on it; enabling it triggers the context
preemption.
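
A sketch of the per-context preempt fence idea using the existing
struct dma_fence_ops: the fence stays dormant until a waiter shows up, and
enable_signaling() is what kicks off preemption of the long running context.
The context type and the preemption helper are hypothetical::

  #include <linux/dma-fence.h>

  struct sketch_context;                                    /* hypothetical */
  void sketch_context_preempt(struct sketch_context *ctx);  /* hypothetical */

  struct sketch_preempt_fence {
          struct dma_fence base;
          struct sketch_context *ctx;  /* the long running context */
  };

  static const char *sketch_get_driver_name(struct dma_fence *f)
  {
          return "sketch";
  }

  static const char *sketch_get_timeline_name(struct dma_fence *f)
  {
          return "preempt";
  }

  static bool sketch_enable_signaling(struct dma_fence *f)
  {
          struct sketch_preempt_fence *pf =
                  container_of(f, struct sketch_preempt_fence, base);

          /* A waiter showed up: suspend (preempt) the context; the fence
           * is signaled once preemption has completed. */
          sketch_context_preempt(pf->ctx);
          return true;
  }

  static const struct dma_fence_ops sketch_preempt_fence_ops = {
          .get_driver_name   = sketch_get_driver_name,
          .get_timeline_name = sketch_get_timeline_name,
          .enable_signaling  = sketch_enable_signaling,
  };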

GPU page faults
----------------
GPU page faults, when supported (in future), will only be supported in the
VM_BIND mode. While both the older execbuf mode and the newer VM_BIND mode of
binding will require using dma-fence to ensure residency, the GPU page fault
mode, when supported, will not use any dma-fence, as residency is purely
managed by installing and removing/invalidating page table entries.

Page level hints settings
--------------------------
VM_BIND allows hints to be set per mapping instead of per BO. Possible hints
include placement and atomicity. Sub-BO level placement hints will be even
more relevant with upcoming GPU on-demand page fault support.

VM_BIND UAPI
-------------

.. kernel-doc:: Documentation/gpu/rfc/i915_vm_bind.h