
/* SPDX-License-Identifier: MIT */
* Scratch page
* ------------
* Operations
* ----------
* DRM_XE_VM_BIND_OP_MAP         - Create mapping for a BO
* DRM_XE_VM_BIND_OP_UNMAP       - Destroy mapping for a BO / userptr
* DRM_XE_VM_BIND_OP_MAP_USERPTR - Create mapping for userptr
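*
* As a rough illustration only, a single MAP operation could be submitted from
* userspace as below. The struct and field names follow the xe_drm.h uapi
* header and are an assumption layered on this document, not text from it:
*
* .. code-block:: c
*
*        struct drm_xe_vm_bind_op op = {
*                .obj = bo_handle,            /* GEM handle of the BO to map */
*                .obj_offset = 0,             /* offset into the BO */
*                .range = 0x1000,             /* size of the mapping */
*                .addr = 0x100000,            /* GPU virtual address */
*                .op = DRM_XE_VM_BIND_OP_MAP,
*        };
*        struct drm_xe_vm_bind bind = {
*                .vm_id = vm_id,
*                .num_binds = 1,
*                .bind = op,
*        };
*
*        ioctl(fd, DRM_IOCTL_XE_VM_BIND, &bind);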
* .. code-block::
*
*        bind BO0 0x0-0x1000
*        bind BO1 0x201000-0x202000
*        bind BO2 0x1ff000-0x201000
* bind can be done immediately (all in-fences satisfied, VM dma-resv kernel
* ----------------------------
* .. code-block::
*
*        0x0000-0x2000 and 0x3000-0x5000 have mappings
*        Munmap 0x1000-0x4000, results in mappings 0x0000-0x1000 and 0x4000-0x5000
*
* This is implemented as the following operations:
*
* .. code-block::
*
*        unbind 0x0000-0x2000
*        unbind 0x3000-0x5000
*        rebind 0x0000-0x1000
*        rebind 0x4000-0x5000
* Why not just do a partial unbind of 0x1000-0x2000 and 0x3000-0x4000?
*
* In this example there is a window of time where 0x0000-0x1000 and
* 0x4000-0x5000 are invalid but the user didn't ask for these addresses to be
* the VM's DMA_RESV_USAGE_KERNEL slot (blocks future jobs / resume of compute
* mode VM). The caveat is that all dma-resv slots must be updated atomically
* with respect to execs and the compute mode rebind worker. To accomplish this,
* hold the vm->lock in write mode from the first operation until the last.
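*
* A minimal sketch of the above, assuming hypothetical xe_vm_unbind_range() /
* xe_vm_bind_range() helpers (not real driver API); the point is only that
* vm->lock is held in write mode across the whole sequence:
*
* .. code-block:: c
*
*        down_write(&vm->lock);
*        xe_vm_unbind_range(vm, 0x0000, 0x2000); /* unbind both old mappings */
*        xe_vm_unbind_range(vm, 0x3000, 0x5000);
*        xe_vm_bind_range(vm, 0x0000, 0x1000);   /* rebind the kept pieces */
*        xe_vm_bind_range(vm, 0x4000, 0x5000);
*        up_write(&vm->lock);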
* Invalidation
* ------------
* until all pending users (jobs or compute mode engines) of the userptr are
* idle to ensure no faults. This is done by waiting on all of the VM's
* dma-resv slots.
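*
* For illustration, such a wait maps onto the dma-resv API roughly as below;
* waiting with DMA_RESV_USAGE_BOOKKEEP, the weakest usage, waits on every
* fence in the object:
*
* .. code-block:: c
*
*        /* Sketch: block until every fence in the VM's dma-resv has signaled */
*        long timeout = dma_resv_wait_timeout(vm->ttm.base.resv,
*                                             DMA_RESV_USAGE_BOOKKEEP, false,
*                                             MAX_SCHEDULE_TIMEOUT);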
* Rebinds
* -------
* Either the next exec (non-compute) or rebind worker (compute mode) will
*
* after the VM dma-resv wait if the VM is in compute mode.
* Compute mode
* ============
* A VM in compute mode enables long running workloads and ultra low latency
*
* are not used when a VM is in compute mode. User fences (TODO: link user fence
* Preempt fences
* --------------
* dma-resv DMA_RESV_USAGE_PREEMPT_FENCE slot. The same preempt fence, for every
* engine using the VM, is also installed into the same dma-resv slot of every
* external BO mapped in the VM.
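*
* A sketch of the installation, assuming a hypothetical list of the VM's
* external BOs; mainline dma-resv has no dedicated PREEMPT_FENCE usage, so
* DMA_RESV_USAGE_BOOKKEEP stands in for the slot this document names:
*
* .. code-block:: c
*
*        /* fence slots are assumed to have been reserved beforehand */
*        dma_resv_add_fence(vm->ttm.base.resv, pfence,
*                           DMA_RESV_USAGE_BOOKKEEP);
*        list_for_each_entry(bo, &vm->external_bos, vm_link) /* hypothetical */
*                dma_resv_add_fence(bo->ttm.base.resv, pfence,
*                                   DMA_RESV_USAGE_BOOKKEEP);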
* Rebind worker
* -------------
* .. code-block::
*
*        <----------------------------------------------------------------------|
*        Lock VM dma-resv and external BOs dma-resv                             |
*        Wait VM's DMA_RESV_USAGE_KERNEL dma-resv slot                          |
*        Wait all VM's dma-resv slots                                           |
*        Retry ----------------------------------------------------------
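*
* A rough C shape of the loop above, with the steps elided from this excerpt
* reduced to a comment; the helpers and the retry condition are hypothetical:
*
* .. code-block:: c
*
*        retry:
*                down_read(&vm->lock);
*                xe_vm_lock_all_resvs(vm, &ww);       /* hypothetical helper */
*                dma_resv_wait_timeout(vm->ttm.base.resv, DMA_RESV_USAGE_KERNEL,
*                                      false, MAX_SCHEDULE_TIMEOUT);
*                /* rebind userptrs / evicted BOs, install preempt fences, resume */
*                if (xe_vm_needs_retry(vm)) {         /* hypothetical check */
*                        xe_vm_unlock_all_resvs(vm, &ww);
*                        up_read(&vm->lock);
*                        goto retry;
*                }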
* -----------
*
* various places plus exporting a composite fence for multi-GT binds to the
*
* such, dma fences are not allowed when VM is in fault mode. Because dma-fences
* .. code-block::
*
*        <----------------------------------------------------------------------|
*        Lock VM & BO dma-resv locks                                            |
*        Drop VM & BO dma-resv locks                                            |
*        Retry ----------------------------------------------------------
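*
* The same retry pattern in rough C form, with the fault-servicing steps
* reduced to a comment and a hypothetical retry condition:
*
* .. code-block:: c
*
*        retry:
*                dma_resv_lock(vm->ttm.base.resv, &ww);
*                dma_resv_lock(bo->ttm.base.resv, &ww);
*                /* validate the BO, issue the rebind, wait on the bind fence */
*                dma_resv_unlock(bo->ttm.base.resv);
*                dma_resv_unlock(vm->ttm.base.resv);
*                if (fault_needs_retry(vm))           /* hypothetical check */
*                        goto retry;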
* Access counters
* ---------------
* .. code-block::
*
*        Lock VM & BO dma-resv locks
* -------------------------------------------------
*
* evictions, and compute mode rebind worker) in XE.
* Locks
* -----
* VM global lock (vm->lock) - rw semaphore lock. Outermost lock which protects
*
* bind path also acquires this lock in write while the exec / compute mode
*
* VM dma-resv lock (vm->ttm.base.resv->lock) - WW lock. Protects VM dma-resv
* slots. Expected to be acquired during VM binds, execs, and the compute mode
* rebind worker.
* external BO dma-resv lock (bo->ttm.base.resv->lock) - WW lock. Protects
* external BO dma-resv slots. Expected to be acquired during VM binds (in
* addition to the VM dma-resv lock). All external BO dma-resv locks within a VM
* are expected to be acquired (in addition to the VM dma-resv lock) during
* execs and the compute mode rebind worker. This lock is also held when an
* external BO is evicted.
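*
* The resulting lock ordering, sketched; ww backoff handling on -EDEADLK is
* omitted for brevity:
*
* .. code-block:: c
*
*        struct ww_acquire_ctx ww;
*
*        down_write(&vm->lock);                  /* outermost */
*        ww_acquire_init(&ww, &reservation_ww_class);
*        dma_resv_lock(vm->ttm.base.resv, &ww);  /* VM dma-resv */
*        dma_resv_lock(bo->ttm.base.resv, &ww);  /* each external BO dma-resv */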
* -----------------------
* time (vm->lock).
*
* 2. A compute mode rebind worker and bind operation with the same VM can't be
* executing at the same time (vm->lock).
*
* the same VM is executing (vm->lock).
*
* compute mode rebind worker with the same VM is executing (vm->lock).
*
* executing (dma-resv locks).
*
* 6. Evictions within a VM can't happen while a compute mode rebind worker
* with the same VM is executing (dma-resv locks).
* dma-resv usage
* ==============
*
* external BOs dma-resv slots. Let's try to make this as clear as possible.
* -----------------
* 2. In non-compute mode, jobs from execs install themselves into the
*
* 3. In non-compute mode, jobs from execs install themselves into the
*
* 6. Every engine using a compute mode VM has a preempt fence installed into
*
* 7. Every engine using a compute mode VM has a preempt fence installed into
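*
* The excerpt truncates which slot each rule names, so purely as an API
* illustration: installing a fence into a dma-resv object looks as below,
* with the usage argument selecting the slot per the rules above:
*
* .. code-block:: c
*
*        ret = dma_resv_reserve_fences(bo->ttm.base.resv, 1);
*        if (!ret)
*                dma_resv_add_fence(bo->ttm.base.resv, fence,
*                                   usage /* slot per the rules above */);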
* ------------
* 2. In non-compute mode, the execution of all jobs from rebinds in execs shall
*
* 3. In non-compute mode, the execution of all jobs from execs shall wait on the
*
* 4. In compute mode, the execution of all jobs from rebinds in the rebind
*
* 5. In compute mode, resumes in rebind worker shall wait on last rebind fence
*
* 6. In compute mode, resumes in rebind worker shall wait on the
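*
* As an illustration of expressing such waits, a scheduler job can pull its
* dependencies from a dma-resv object; the usage argument selects how much is
* included (DMA_RESV_USAGE_KERNEL the fewest fences, DMA_RESV_USAGE_BOOKKEEP
* all of them):
*
* .. code-block:: c
*
*        /* job->base: the job's struct drm_sched_job (member name hypothetical) */
*        ret = drm_sched_job_add_resv_dependencies(&job->base,
*                                                  vm->ttm.base.resv,
*                                                  DMA_RESV_USAGE_KERNEL);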
* Putting it all together
* -----------------------
* non-compute mode execs
*
* 2. New jobs from non-compute mode execs are blocked behind any existing jobs
*
* compute mode
*
* 4. Compute mode engine resumes are blocked behind any existing jobs from
* wait on the dma-resv kernel slots of VM or BO, technically we only have to