==========================================
I915 VM_BIND feature design and use cases
==========================================

VM_BIND feature
================
DRM_I915_GEM_VM_BIND/UNBIND ioctls allow a UMD to bind/unbind GEM buffer
objects (BOs) or sections of a BO at specified GPU virtual addresses on a
specified address space (VM). These mappings (also referred to as persistent
mappings) will be persistent across multiple GPU submissions (execbuf calls)
issued by the UMD, without the user having to provide a list of all required
mappings during each submission (as required by the older execbuf mode).

The user has to opt in to the VM_BIND mode of binding for an address space
(VM) at VM creation time via the I915_VM_CREATE_FLAGS_USE_VM_BIND extension.
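
As an illustration, the userspace opt-in could look like the sketch below.
It assumes the RFC's proposed uapi, where I915_VM_CREATE_FLAGS_USE_VM_BIND
is a flag on struct drm_i915_gem_vm_control, and ``drm_fd`` is a placeholder
for an open DRM device fd; this is a sketch of the proposal, not merged i915
uapi.

.. code-block:: c

   #include <err.h>
   #include <sys/ioctl.h>
   #include <drm/i915_drm.h>

   /* Create a VM that only accepts VM_BIND-style binding (RFC uapi). */
   struct drm_i915_gem_vm_control ctl = {
           .flags = I915_VM_CREATE_FLAGS_USE_VM_BIND,
   };

   if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_VM_CREATE, &ctl))
           err(1, "VM create failed");
   /* ctl.vm_id now names an address space in VM_BIND mode. */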

VM_BIND features include:

* Multiple Virtual Address (VA) mappings can map to the same physical pages
  of an object (aliasing).
* A VA mapping can map to a partial section of the BO (partial binding; see
  the sketch after this list).
* Support for capturing persistent mappings in the dump upon GPU error.
* Support for userptr gem objects (no special uapi is required for this).
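
The partial-binding case, sketched against the RFC's proposed
struct drm_i915_gem_vm_bind and DRM_IOCTL_I915_GEM_VM_BIND (field names
follow the RFC header; the addresses and sizes here are made up):

.. code-block:: c

   /* Map only the first 64 KiB of the BO at a user-chosen GPU VA. */
   struct drm_i915_gem_vm_bind bind = {
           .vm_id = vm_id,          /* VM created with USE_VM_BIND */
           .handle = bo_handle,     /* GEM BO to map */
           .start = 0x100000,       /* GPU virtual address (user managed) */
           .offset = 0,             /* section start within the BO */
           .length = 0x10000,       /* partial binding: 64 KiB only */
   };

   if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_VM_BIND, &bind))
           err(1, "VM_BIND failed");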

TLB flush consideration
------------------------
The i915 driver flushes the TLB for each submission and when an object's
pages are released. The VM_BIND/UNBIND operations will not do any additional
TLB flushes. Any VM_BIND mapping added will be in the working set for
subsequent submissions on that VM and will not be in the working set of
currently running batches (which would require additional TLB flushes, which
is not supported).

Execbuf ioctl in VM_BIND mode
-------------------------------
A VM in VM_BIND mode will not support the older execbuf mode of binding.
In VM_BIND mode, the execbuf ioctl will not accept any execlist, hence there
is no support for implicit sync. It is expected that the below work will be
able to support the object dependency setting requirements in all use cases:

"dma-buf: Add an API for exporting sync files"
(https://lwn.net/Articles/859290/)

The new execbuf3 ioctl only works in VM_BIND mode, and VM_BIND mode only
works with the execbuf3 ioctl for submission. All BOs mapped on that VM
(through VM_BIND calls) at the time of the execbuf3 call are deemed required
for that submission.

The execbuf3 ioctl directly specifies the batch addresses instead of object
handles as in the execbuf2 ioctl. In VM_BIND mode, VA allocation is
completely managed by the user instead of the i915 driver. Also, for
determining object activeness, VM_BIND mode will not be using the i915_vma
active reference tracking. It will instead use the dma-resv object for that
(see `VM_BIND dma_resv usage`_).
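
A submission in this mode, sketched against the RFC's proposed
struct drm_i915_gem_execbuffer3 (field names per the RFC header; again a
sketch of the proposal, not merged i915 uapi), reduces to passing a batch
GPU VA:

.. code-block:: c

   /* Submit a batch by GPU VA; no execlist, no relocations. */
   struct drm_i915_gem_execbuffer3 exec = {
           .ctx_id = ctx_id,           /* context using the VM_BIND VM */
           .batch_address = batch_va,  /* VA established via VM_BIND */
   };

   if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_EXECBUFFER3, &exec))
           err(1, "execbuf3 failed");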

VM_PRIVATE objects
-------------------
By default, BOs can be mapped on multiple VMs and can also be dma-buf
exported. Hence these BOs are referred to as Shared BOs.
During each execbuf submission, the request fence must be added to the
dma-resv fence list of all Shared BOs mapped on the VM.

The VM_BIND feature introduces an optimization where the user can create a BO
which is private to a specified VM via the I915_GEM_CREATE_EXT_VM_PRIVATE
flag during BO creation. Unlike Shared BOs, these VM private BOs can only be
mapped on the VM they are private to and can't be dma-buf exported.
All private BOs of a VM share the dma-resv object. Hence during each execbuf
submission, they need only one dma-resv fence list update. Thus, the fast
path (where required mappings are already bound) submission latency is O(1)
with respect to the number of VM private BOs.
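
Creating such a BO, assuming the RFC's proposed
struct drm_i915_gem_create_ext_vm_private extension chained through
i915_user_extension (the extension itself is RFC-only, not merged uapi),
might look like:

.. code-block:: c

   #include <err.h>
   #include <stdint.h>
   #include <sys/ioctl.h>
   #include <drm/i915_drm.h>

   /* Tie the new BO to a single VM; it can never be dma-buf exported. */
   struct drm_i915_gem_create_ext_vm_private priv = {
           .base.name = I915_GEM_CREATE_EXT_VM_PRIVATE,
           .vm_id = vm_id,
   };
   struct drm_i915_gem_create_ext create = {
           .size = 0x10000,
           .extensions = (uintptr_t)&priv,
   };

   if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create))
           err(1, "create_ext failed");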

VM_BIND locking hierarchy
-------------------------
The VM_BIND locking order is as below.

1) Lock-A: A vm_bind mutex will protect the vm_bind lists. This lock is taken
   in vm_bind/vm_unbind ioctl calls, in the execbuf path and while releasing
   the mapping.

   In future, when GPU page faults are supported, we can potentially use a
   rwsem instead, so that multiple page fault handlers can take the read-side
   lock to look up the mapping and hence can run in parallel.

2) Lock-B: The object's dma-resv lock will protect i915_vma state and needs to
   be held while binding/unbinding a vma in the async worker and while updating
   the dma-resv fence list of an object. Note that private BOs of a VM will all
   share a dma-resv object.

3) Lock-C: Spinlock(s) to protect some of the VM's lists, like the list of
   invalidated vmas (due to eviction and userptr invalidation) etc.

When GPU page faults are supported, the execbuf path will not take any of the
above locks. Lock taking only happens in the page fault handler, where we take
lock-A in read mode, whichever lock-B we need to find the backing storage
(the dma_resv lock for gem objects, and hmm/core mm locks for the system
allocator) and some additional locks (lock-D) for taking care of page table
races. Page fault mode should not need to ever manipulate the vm lists, so it
won't ever need lock-C.
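
Put together, a bind path under this hierarchy would nest roughly as in the
sketch below (illustrative only; the lock and field names are hypothetical,
not actual i915 symbols):

.. code-block:: c

   mutex_lock(&vm->vm_bind_lock);          /* Lock-A: vm_bind lists */

   dma_resv_lock(obj->base.resv, NULL);    /* Lock-B: i915_vma state */
   /* ... bind/unbind the vma, update the object's dma-resv fences ... */
   dma_resv_unlock(obj->base.resv);

   spin_lock(&vm->invalidated_lock);       /* Lock-C: VM list updates */
   /* ... move the vma on/off the invalidated list ... */
   spin_unlock(&vm->invalidated_lock);

   mutex_unlock(&vm->vm_bind_lock);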

VM_BIND LRU handling
---------------------
We need to ensure that VM_BIND mapped objects are properly LRU tagged to
avoid performance degradation, and we will need support for bulk LRU movement
of VM_BIND objects to avoid additional latencies in the execbuf path.

The page table pages are similar to VM_BIND mapped objects (see
`Evictable page table allocations`_). They are maintained per VM and need to
be pinned in memory when the VM is made active (i.e., upon an execbuf call
with that VM). So, bulk LRU movement of page table pages is also needed.

VM_BIND dma_resv usage
-----------------------
Fences need to be added to all VM_BIND mapped objects. During each execbuf
submission, they are added with DMA_RESV_USAGE_BOOKKEEP usage to prevent
over-sync (see enum dma_resv_usage). One can override it with either
DMA_RESV_USAGE_READ or DMA_RESV_USAGE_WRITE usage during explicit object
dependency setting.

Also, in VM_BIND mode, use the dma-resv APIs for determining object activeness
(see i915_gem_object_get_moving_fence() and
i915_gem_object_wait_moving_fence()) and do not use the i915_vma active
reference tracking, which is not supported in VM_BIND mode.
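
The per-object bookkeeping step at submission reduces to the usual dma-resv
pattern; a minimal sketch with error handling omitted, where ``resv`` and
``fence`` are placeholders for the mapped object's reservation object and the
request fence:

.. code-block:: c

   /* Add the request fence without creating implicit-sync dependencies. */
   dma_resv_lock(resv, NULL);
   if (!dma_resv_reserve_fences(resv, 1))
           dma_resv_add_fence(resv, fence, DMA_RESV_USAGE_BOOKKEEP);
   dma_resv_unlock(resv);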

Mesa use case
--------------
VM_BIND can potentially reduce the CPU overhead in Mesa (both Vulkan and
Iris), hence improving performance of CPU-bound applications. It also allows
us to implement Vulkan's Sparse Resources. With increasing GPU hardware
performance, reducing CPU overhead becomes more impactful.

Other VM_BIND use cases
========================

Long running Compute contexts
------------------------------
Usage of dma-fence expects that fences complete in a reasonable amount of
time. Compute, on the other hand, can be long running. Hence it is
appropriate for compute to use user/memory fences (see `User/Memory Fence`_),
and dma-fence usage must be limited to in-kernel consumption only.

Where GPU page faults are not available, the kernel driver, upon buffer
invalidation, will initiate a suspend (preemption) of the long running
context, finish the invalidation, revalidate the buffer and then resume the
context. This is done by having a per-context preempt fence which is enabled
when someone tries to wait on it, which triggers the context preemption.
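
Conceptually, the preempt fence hooks context suspension into the fence's
enable-signaling path; a rough sketch (hypothetical types, not actual i915
code):

.. code-block:: c

   #include <linux/dma-fence.h>
   #include <linux/workqueue.h>

   struct preempt_fence {
           struct dma_fence base;
           struct work_struct preempt_work; /* suspends the context */
   };

   /* Called only when someone starts waiting on the fence. */
   static bool preempt_fence_enable_signaling(struct dma_fence *fence)
   {
           struct preempt_fence *pf =
                   container_of(fence, struct preempt_fence, base);

           queue_work(system_unbound_wq, &pf->preempt_work);
           return true; /* signaled once the context is suspended */
   }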

Debugger
---------
With a debug event interface, a user space process (the debugger) is able to
keep track of and act upon resources created by another process (the
debuggee) and attached to the GPU via the vm_bind interface.

GPU page faults
----------------
GPU page faults, when supported (in future), will only be supported in the
VM_BIND mode. While both the older execbuf mode and the newer VM_BIND mode of
binding will require using dma-fence to ensure residency, the GPU page faults
mode, when supported, will not use any dma-fence, as residency is purely
managed by installing and removing/invalidating page table entries.

Page level hints settings
--------------------------
VM_BIND allows any hints setting per mapping instead of per BO. Possible
hints include placement and atomicity. Sub-BO level placement hints will be
even more relevant with upcoming GPU on-demand page fault support.

Page level Cache/CLOS settings
-------------------------------
VM_BIND allows cache/CLOS settings per mapping instead of per BO.

Evictable page table allocations
---------------------------------
Make page table allocations evictable and manage them similarly to VM_BIND
mapped objects. Page table pages are similar to persistent mappings of a
VM (the difference being that the page table pages will not have an i915_vma
structure and, after swapping pages back in, the page table structure needs
to be updated).

Shared Virtual Memory (SVM) support
------------------------------------
The VM_BIND interface can be used to map system memory directly (without the
gem BO abstraction) using the HMM interface. SVM is only supported with GPU
page faults enabled.
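
The driver-side lookup for such mappings would follow the usual
hmm_range_fault() pattern; a rough sketch, with the notifier setup, sequence
retry and error handling omitted, and all names placeholders:

.. code-block:: c

   #include <linux/hmm.h>
   #include <linux/mmap_lock.h>

   struct hmm_range range = {
           .notifier = &notifier,       /* mmu_interval_notifier for the VA */
           .start = start,
           .end = end,
           .hmm_pfns = pfns,
           .default_flags = HMM_PFN_REQ_FAULT,
   };

   mmap_read_lock(mm);
   ret = hmm_range_fault(&range);       /* fault in and collect the pages */
   mmap_read_unlock(mm);
   /* On success, program the GPU page tables from range.hmm_pfns[]. */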

VM_BIND UAPI
=============

.. kernel-doc:: Documentation/gpu/rfc/i915_vm_bind.h