=====================
DRM Memory Management
=====================
Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of many of that data, managing graphics memory
efficiently is crucial for the graphics stack and plays a central role
in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table
Manager (TTM) and Graphics Execution Manager (GEM). TTM was the first
DRM memory manager to be developed and tried to be a one-size-fits-them-all
solution. It provides a single userspace API to accommodate the need of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code paths between drivers and created a support
library to share them. GEM has simpler initialization and execution
requirements than TTM, but has no video RAM management capabilities and
is thus limited to UMA devices.

The Translation Table Manager (TTM)
===================================

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_module.c
   :doc: TTM

.. kernel-doc:: include/drm/ttm/ttm_caching.h
   :internal:


TTM device object reference
---------------------------

.. kernel-doc:: include/drm/ttm/ttm_device.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_device.c
   :export:


TTM resource placement reference
--------------------------------

.. kernel-doc:: include/drm/ttm/ttm_placement.h
   :internal:


TTM resource object reference
-----------------------------

.. kernel-doc:: include/drm/ttm/ttm_resource.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_resource.c
   :export:


TTM TT object reference
-----------------------

.. kernel-doc:: include/drm/ttm/ttm_tt.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_tt.c
   :export:


TTM page pool reference
-----------------------

.. kernel-doc:: include/drm/ttm/ttm_pool.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_pool.c
   :export:


The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.
On a fundamental level, GEM involves several operations:

-  Memory allocation and freeing
-  Command execution
-  Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the struct
drm_driver driver_features field. The DRM core will then automatically
initialize the GEM core before calling the load operation. Behind the
scene, this will create a DRM Memory Manager object which provides an
address space pool for object allocation.

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.
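
Setting the feature bit is a one-liner in the driver structure. A
minimal sketch, assuming a hypothetical ``mydrv`` driver:

.. code-block:: c

	static struct drm_driver mydrv_driver = {
		/* DRIVER_GEM enables core GEM initialization. */
		.driver_features = DRIVER_GEM | DRIVER_MODESET,
		/* ... fops, name, ioctl table, etc. ... */
	};
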

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them in two distinct operations.

GEM objects are represented by an instance of struct drm_gem_object.
Drivers usually need to extend GEM objects with private information and
thus create a driver-specific GEM object structure type that embeds an
instance of struct drm_gem_object.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded struct
drm_gem_object with a call to drm_gem_object_init(). The function takes
a pointer to the DRM device, a pointer to the GEM object and the buffer
object size in bytes.
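
As a sketch, again for a hypothetical ``mydrv`` driver, the embedding
and initialization might look like this:

.. code-block:: c

	/* Driver-specific GEM object embedding the core object. */
	struct mydrv_gem_object {
		struct drm_gem_object base;
		/* driver-private bookkeeping goes here */
	};

	static struct mydrv_gem_object *
	mydrv_gem_create(struct drm_device *dev, size_t size)
	{
		struct mydrv_gem_object *obj;
		int ret;

		obj = kzalloc(sizeof(*obj), GFP_KERNEL);
		if (!obj)
			return ERR_PTR(-ENOMEM);

		/* Initializes the embedded object and sets up shmem
		 * backing; pages are only allocated on first use. */
		ret = drm_gem_object_init(dev, &obj->base, size);
		if (ret) {
			kfree(obj);
			return ERR_PTR(ret);
		}

		return obj;
	}
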

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling drm_gem_object_get() and
drm_gem_object_put() respectively.

When the last reference to a GEM object is released the GEM core calls
the struct drm_gem_object_funcs free operation. That operation is
mandatory for GEM-enabled drivers and must free the GEM object and all
associated resources.
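
A minimal sketch of such a free operation, assuming a hypothetical
``mydrv_gem_object`` that embeds the core GEM object:

.. code-block:: c

	/* Called by the GEM core when the last reference is dropped. */
	static void mydrv_gem_free(struct drm_gem_object *gem_obj)
	{
		struct mydrv_gem_object *obj =
			container_of(gem_obj, struct mydrv_gem_object, base);

		/* Releases the backing storage set up by
		 * drm_gem_object_init(). */
		drm_gem_object_release(gem_obj);
		kfree(obj);
	}
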

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.
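
From userspace, the handle/name conversion can be sketched with the
standard UAPI structures. In this fragment ``fd``, ``other_fd`` and
``handle`` are assumed to exist, and error handling is omitted:

.. code-block:: c

	#include <sys/ioctl.h>
	#include <drm/drm.h>

	/* Export a global name for a handle local to fd... */
	struct drm_gem_flink flink = { .handle = handle };
	ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink);
	/* flink.name now holds the global name. */

	/* ...and, typically in another process, turn the name back
	 * into a handle local to its own DRM file. */
	struct drm_gem_open open_arg = { .name = flink.name };
	ioctl(other_fd, DRM_IOCTL_GEM_OPEN, &open_arg);
	/* open_arg.handle is local to other_fd. */
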

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. Since sharing file
descriptors is inherently more secure than the easily guessable and
global GEM names, it is the preferred buffer sharing mechanism. Sharing
buffers through GEM names is only supported for legacy userspace.
Furthermore, PRIME also allows cross-device buffer sharing since it is
based on dma-bufs.
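
The file-descriptor based sharing can be sketched with the PRIME UAPI,
again assuming an existing ``fd`` and ``handle`` and omitting error
handling:

.. code-block:: c

	#include <sys/ioctl.h>
	#include <drm/drm.h>

	/* Export a GEM handle as a dma-buf file descriptor. */
	struct drm_prime_handle prime = {
		.handle = handle,
		.flags = DRM_CLOEXEC,
	};
	ioctl(fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);

	/* prime.fd can now be passed over a UNIX socket and imported
	 * on the receiving side with DRM_IOCTL_PRIME_FD_TO_HANDLE. */
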

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
do_mmap() under the hood. This method is often considered dubious, seems
to be discouraged for new GEM-enabled drivers, and will thus not be
described here.

The second method uses the mmap system call on the DRM file handle, with
a fake offset associated with each GEM object used as the mmap offset
argument. The fake offset is returned to the application
in a driver-specific way and can then be used as the mmap offset
argument. The GEM core provides the drm_gem_mmap() helper to implement
the mmap file operation: it looks up the GEM object from the offset and
sets up the VMA, but doesn't actually map any memory to
userspace, but relies on the driver-provided fault handler to map pages
individually.

To use drm_gem_mmap(), drivers install it as their mmap file operation.
A minimal sketch for a hypothetical ``mydrv`` driver (modern drivers
would typically use the DEFINE_DRM_GEM_FOPS() helper instead):

.. code-block:: c

	static const struct file_operations mydrv_fops = {
		.owner = THIS_MODULE,
		.open = drm_open,
		.release = drm_release,
		.unlocked_ioctl = drm_ioctl,
		/* drm_gem_mmap() resolves the fake offset to a GEM
		 * object and sets up the VMA. */
		.mmap = drm_gem_mmap,
	};

For platforms without MMU the GEM core provides the
drm_gem_dma_get_unmapped_area() helper, which the mmap() code calls to
obtain a proposed address for the mapping. More detailed information
about no-MMU mapping semantics can be found in
Documentation/admin-guide/mm/nommu-mmap.rst.

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).
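
As one concrete instance of such a device-specific ioctl, the i915
driver exposes DRM_IOCTL_I915_GEM_SET_DOMAIN. A userspace sketch,
assuming an open i915 device ``fd`` and an existing ``handle``:

.. code-block:: c

	#include <sys/ioctl.h>
	#include <drm/i915_drm.h>

	/* Move the object to the CPU domain for read and write;
	 * blocks until pending rendering on the object completes. */
	struct drm_i915_gem_set_domain sd = {
		.handle = handle,
		.read_domains = I915_GEM_DOMAIN_CPU,
		.write_domain = I915_GEM_DOMAIN_CPU,
	};
	ioctl(fd, DRM_IOCTL_I915_GEM_SET_DOMAIN, &sd);
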

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT, or GEM will reject them and no rendering will occur. Similarly,
if several objects in the buffer require fence registers to be allocated
for correct rendering (e.g. 2D blits on pre-965 chips), care must be
taken not to require more fence registers than are available to the
client. Such resource management should be abstracted from the client in
libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:


GEM DMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_dma_helper.c
   :doc: dma helpers

.. kernel-doc:: include/drm/drm_gem_dma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_dma_helper.c
   :export:


GEM SHMEM Helper Function Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :export:


GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:


GEM TTM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :doc: overview

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :export:


VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:


PRIME Buffer Sharing
====================

PRIME is the cross-device buffer sharing framework in DRM, originally
created for the OPTIMUS range of multi-GPU platforms. To userspace,
PRIME buffers are dma-buf based file descriptors.

Overview and Lifetime Rules
---------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: overview and lifetime rules


PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers


PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:


DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview


LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster


DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:


DRM GPUVM
=========

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Overview


Split and Merge
---------------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Split and Merge


Locking
-------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Locking


Examples
--------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Examples


DRM GPUVM Function References
-----------------------------

.. kernel-doc:: include/drm/drm_gpuvm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :export:


DRM Buddy Allocator
===================

DRM Buddy Function References
-----------------------------

.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
   :export:


DRM Cache Handling and Fast WC memcpy()
=======================================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:


DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:


DRM Execution context
=====================

.. kernel-doc:: drivers/gpu/drm/drm_exec.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_exec.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_exec.c
   :export:


GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview


Flow Control
------------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Flow Control


Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_entity.c
   :export: