Lines Matching refs:GPU
29 * ``gpu_vm``: Abstraction of a virtual GPU address space with
32 * ``gpu_vma``: Abstraction of a GPU address range within a gpu_vm with
45 and which tracks GPU activity. When the GPU activity is finished,
49 to track GPU activity in the form of multiple dma_fences on a
57 affected gpu_vmas, submits a GPU command batch and registers the
58 dma_fence representing the GPU command's activity with all affected
226 for GPU idle or depend on all previous GPU activity. Furthermore, any
227 subsequent attempt by the GPU to access freed memory through the
388 GPU virtual address range, directly maps a CPU mm range of anonymous-
398 pages, dirty them if they are not mapped read-only to the GPU, and
401 pages, we need to stop GPU access to the pages by waiting for VM idle
402 in the MMU notifier and make sure that before the next time the GPU
404 the old pages from the GPU page tables and repeat the process of
408 pages dirty again before the next GPU access. We also get similar MMU
409 notifications for NUMA accounting which the GPU driver doesn't really
504 When this invalidation notifier returns, the GPU can no longer be
506 page-binding before a new GPU submission can succeed.
554 reuse, there should be no remaining GPU mappings and any GPU TLB
558 Since the unmapping (or zapping) of GPU ptes is typically taking place
579 might also require outer level locks, the zapping of GPU ptes
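The hits above repeatedly touch two ideas: a ``gpu_vm`` is a virtual GPU address space containing ``gpu_vma`` address ranges, and an MMU-notifier invalidation must zap the affected GPU page-table mappings so that the next GPU submission has to re-bind before it can succeed. A minimal, self-contained C sketch of that relationship follows; the struct layout, field names, and helper functions are illustrative assumptions, not the actual DRM gpuvm API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical, simplified model of a gpu_vm owning gpu_vma ranges.
 * Names and layout are illustrative only. */
struct gpu_vma {
	unsigned long start;	/* first GPU virtual address of the range */
	unsigned long end;	/* one past the last address of the range */
	bool mapped;		/* cleared when the range is zapped */
	struct gpu_vma *next;
};

struct gpu_vm {
	struct gpu_vma *vmas;	/* singly linked list of ranges */
};

/* Insert a new gpu_vma covering [start, end) into the address space. */
static struct gpu_vma *gpu_vm_insert(struct gpu_vm *vm,
				     unsigned long start, unsigned long end)
{
	struct gpu_vma *vma = malloc(sizeof(*vma));

	if (!vma)
		return NULL;
	vma->start = start;
	vma->end = end;
	vma->mapped = true;
	vma->next = vm->vmas;
	vm->vmas = vma;
	return vma;
}

/* Model of the invalidation path: an MMU-notifier-style zap marks every
 * gpu_vma overlapping [start, end) as unmapped, so a later submission
 * would have to re-bind those pages before GPU access can resume. */
static void gpu_vm_zap_range(struct gpu_vm *vm,
			     unsigned long start, unsigned long end)
{
	struct gpu_vma *vma;

	for (vma = vm->vmas; vma; vma = vma->next)
		if (vma->start < end && start < vma->end)
			vma->mapped = false;
}
```

This deliberately omits what the real code must also do: wait for the dma_fences tracking outstanding GPU activity before the zap completes, and hold the appropriate outer locks while walking the ranges.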