
1 /* SPDX-License-Identifier: MIT */
19 * ------------
33 * ----------
35 * DRM_XE_VM_BIND_OP_MAP - Create mapping for a BO
36 * DRM_XE_VM_BIND_OP_UNMAP - Destroy mapping for a BO / userptr
37 * DRM_XE_VM_BIND_OP_MAP_USERPTR - Create mapping for userptr
54 * .. code-block::
56 * bind BO0 0x0-0x1000
62 * bind BO1 0x201000-0x202000
66 * bind BO2 0x1ff000-0x201000
74 * bind can be done immediately (all in-fences satisfied, VM dma-resv kernel
78 * -------------
83 * ----------
95 * ------------------------
105 * -------------------------
118 * XE_VM_BIND_OP_RESTART operation. When VM async bind work is restarted, the
122 * ---------------------
127 * example is a real use case for VK sparse binding. We work around this
141 * ------------------------
151 * ----------------------------
155 * .. code-block::
157 * 0x0000-0x2000 and 0x3000-0x5000 have mappings
158 * Munmap 0x1000-0x4000, results in mappings 0x0000-0x1000 and 0x4000-0x5000
163 * .. code-block::
165 * unbind 0x0000-0x2000
166 * unbind 0x3000-0x5000
167 * rebind 0x0000-0x1000
168 * rebind 0x4000-0x5000
170 * Why not just do a partial unbind of 0x1000-0x2000 and 0x3000-0x4000? This
178 * In this example there is a window of time where 0x0000-0x1000 and
179 * 0x4000-0x5000 are invalid but the user didn't ask for these addresses to be
180 * removed from the mapping. To work around this we treat any munmap style
186 * VM). The caveat is all dma-resv slots must be updated atomically with respect
188 * vm->lock in write mode from the first operation until the last.
191 * ----------------------------
208 * ------------
214 * idle to ensure no faults. This is done by waiting on all of the VM's dma-resv slots.
217 * -------
219 * Either the next exec (non-compute) or rebind worker (compute mode) will
221 * after the VM dma-resv wait if the VM is in compute mode.
235 * --------------
237 * If the kernel decides to move memory around (either userptr invalidate, BO
240 * page tables for the moved memory are no longer valid. To work around this we
248 * dma-resv DMA_RESV_USAGE_PREEMPT_FENCE slot. The same preempt fence, for every
249 * engine using the VM, is also installed into the same dma-resv slot of every
253 * -------------
262 * .. code-block::
264 * <----------------------------------------------------------------------|
268 * Lock VM dma-resv and external BOs dma-resv |
273 * Wait VM's DMA_RESV_USAGE_KERNEL dma-resv slot |
278 * Wait all VM's dma-resv slots |
279 * Retry ----------------------------------------------------------
284 * -----------
296 * work around this we allow a VM's page tables to be shadowed in multiple GTs.
305 * various places plus exporting a composite fence for multi-GT binds to the
313 * A pending page fault can hold up the GPU work which holds up the dma fence
316 * such, dma fences are not allowed when VM is in fault mode. Because dma-fences
321 * ----------------
329 * ------------------
339 * To work around the above issue with processing faults in the G2H worker, we
354 * .. code-block::
361 * <----------------------------------------------------------------------|
363 * Lock VM & BO dma-resv locks |
368 * Drop VM & BO dma-resv locks |
369 * Retry ----------------------------------------------------------
375 * ---------------
389 * .. code-block::
395 * Lock VM & BO dma-resv locks
403 * -------------------------------------------------
410 * held which makes acquiring the VM global lock impossible. To work around this
414 * kernel to move the VMA's memory around. This is a necessary lockless
425 * -----
427 * VM global lock (vm->lock) - rw semaphore lock. Outer most lock which protects
434 * VM dma-resv lock (vm->ttm.base.resv->lock) - WW lock. Protects VM dma-resv
439 * external BO dma-resv lock (bo->ttm.base.resv->lock) - WW lock. Protects
440 * external BO dma-resv slots. Expected to be acquired during VM binds (in
441 * addition to the VM dma-resv lock). All external BO dma-locks within a VM are
442 * expected to be acquired (in addition to the VM dma-resv lock) during execs
447 * -----------------------
450 * time (vm->lock).
453 * executing at the same time (vm->lock).
456 * the same VM is executing (vm->lock).
459 * compute mode rebind worker with the same VM is executing (vm->lock).
462 * executing (dma-resv locks).
465 * with the same VM is executing (dma-resv locks).
467 * dma-resv usage
473 * external BOs dma-resv slots. Let's try to make this as clear as possible.
476 * -----------------
482 * 2. In non-compute mode, jobs from execs install themselves into the
485 * 3. In non-compute mode, jobs from execs install themselves into the
501 * ------------
507 * 2. In non-compute mode, the execution of all jobs from rebinds in execs shall
511 * 3. In non-compute mode, the execution of all jobs from execs shall wait on the
524 * -----------------------
527 * non-compute mode execs
529 * 2. New jobs from non-compute mode execs are blocked behind any existing jobs
538 * Future work
547 * wait on the dma-resv kernel slots of VM or BO, technically we only have to