
Searched refs:GPU (Results 1 – 25 of 116) sorted by relevance


/linux/Documentation/gpu/rfc/
gpusvm.rst
4 GPU SVM Section
25 * Eviction is defined as migrating data from the GPU back to the
26 CPU without a virtual address to free up GPU memory.
32 * GPU page table invalidation, which requires a GPU virtual address, is
33 handled via the notifier that has access to the GPU virtual address.
34 * GPU fault side
36 and should strive to take mmap_read lock only in GPU SVM layer.
47 migration policy requiring GPU access to occur in GPU memory.
49 While no current user (Xe) of GPU SVM has such a policy, it is likely
60 * GPU pagetable locking
[all …]
i915_vm_bind.rst
8 objects (BOs) or sections of BOs at specified GPU virtual addresses on a
10 mappings) will be persistent across multiple GPU submissions (execbuf calls)
30 * Support capture of persistent mappings in the dump upon GPU error.
96 newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible future
108 In future, when GPU page faults are supported, we can potentially use a
124 When GPU page faults are supported, the execbuf path does not take any of these
166 implement Vulkan's Sparse Resources. With increasing GPU hardware performance,
180 Where GPU page faults are not available, kernel driver upon buffer invalidation
190 waiting process. User fence can be signaled either by the GPU or kernel async
199 Allows compute UMD to directly submit GPU jobs instead of through execbuf
[all …]
/linux/Documentation/gpu/nova/core/
devinit.rst
7 overview using the Ampere GPU family as an example. The goal is to provide a conceptual
11 that occur after a GPU reset. The devinit sequence is essential for properly configuring
12 the GPU hardware before it can be used.
15 Unit) microcontroller of the GPU. This interpreter executes a "script" of initialization
18 nova-core driver is even loaded. On an Ampere GPU, the devinit ucode is separate from the
33 Upon reset, several microcontrollers on the GPU (such as PMU, SEC2, GSP, etc.) run GPU
34 firmware (gfw) code to set up the GPU and its core parameters. Most of the GPU is
37 These low-level GPU firmware components are typically:
43 - On an Ampere GPU, the FWSEC typically runs on the GSP (GPU System Processor) in
fwsec.rst
7 and its role in the GPU boot sequence. As such, this information is subject to
8 change in the future and is only current as of the Ampere GPU family. However,
14 'Heavy-secure' mode, and performs firmware verification after a GPU reset
15 before loading various ucode images onto other microcontrollers on the GPU,
172 This uses a GA-102 Ampere GPU as an example and could vary for future GPUs.
vbios.rst
7 images in the ROM of the GPU. The VBIOS is mirrored onto the BAR 0 space and is read
8 by both Boot ROM firmware (also known as IFR or init-from-rom firmware) on the GPU to
17 VBIOS of an Ampere GA102 GPU which is supported by the nova-core driver.
33 microcontrollers for gfw initialization after GPU reset and before the driver
163 This diagram is based on a GA-102 Ampere GPU as an example and could
falcon.rst
7 The descriptions are based on the Ampere GPU or earlier designs; however, they
14 NVIDIA GPUs may have multiple such Falcon instances (e.g., GSP (the GPU system
46 root of trust. For example, in the case of an Ampere GPU, the CPU runs the "Booter"
73 On an Ampere GPU, for example, the boot verification flow is:
82 HS ucode. For example, on an Ampere GPU, after the Booter ucode runs on the
/linux/Documentation/driver-api/
edac.rst
116 Several stacks of HBM chips connect to the CPU or GPU through an ultra-fast
196 GPU nodes can be accessed the same way as the data fabric on CPU nodes.
199 and each GPU data fabric contains four Unified Memory Controllers (UMC).
207 Memory controllers on AMD GPU nodes can be represented in EDAC as follows:
209 GPU DF / GPU Node -> EDAC MC
210 GPU UMC -> EDAC CSROW
211 GPU UMC channel -> EDAC CHANNEL
218 - The CPU UMC (Unified Memory Controller) is mostly the same as the GPU UMC.
224 - GPU UMCs use 1 chip select, so UMC = EDAC CSROW.
225 - GPU UMCs use 8 channels, so UMC channel = EDAC channel.
[all …]
/linux/Documentation/driver-api/thermal/
nouveau_thermal.rst
14 This driver allows reading the GPU core temperature, driving the GPU fan and
28 In order to protect the GPU from overheating, Nouveau supports 4 configurable
34 The GPU will be downclocked to reduce its power dissipation;
36 The GPU is put on hold to further lower power dissipation;
38 Shut the computer down to protect your GPU.
44 The default value for these thresholds comes from the GPU's vbios. These
/linux/Documentation/gpu/
msm-crash-dump.rst
7 Following a GPU hang, the MSM driver outputs debugging information via
35 ID of the GPU that generated the crash formatted as
39 The current value of RBBM_STATUS which shows what top level GPU
50 GPU address of the ringbuffer.
76 GPU address of the buffer object.
91 GPU memory region.
nouveau.rst
4 drm/nouveau NVIDIA GPU Driver
15 driver, responsible for managing NVIDIA GPU hardware at the kernel level.
16 NVKM provides a unified interface for handling various GPU architectures.
drm-vm-bind-async.rst
12 * ``gpu_vm``: A virtual GPU address space. Typically per process, but
38 the GPU and CPU. Memory fences are sometimes referred to as
45 which therefore needs to set the gpu_vm or the GPU execution context in
49 affected gpu_vmas, submits a GPU command batch and registers the
50 dma_fence representing the GPU command's activity with all affected
73 out-fences. Synchronous VM_BIND may block and wait for GPU operations;
80 before modifying the GPU page-tables, and signal the out-syncobjs when
142 for a GPU semaphore embedded by UMD in the workload.
214 /* Map (parts of) an object into the GPU virtual address range.
216 /* Unmap a GPU virtual address range */
[all …]
drm-vm-bind-locking.rst
29 * ``gpu_vm``: Abstraction of a virtual GPU address space with
32 * ``gpu_vma``: Abstraction of a GPU address range within a gpu_vm with
45 and which tracks GPU activity. When the GPU activity is finished,
49 to track GPU activity in the form of multiple dma_fences on a
57 affected gpu_vmas, submits a GPU command batch and registers the
58 dma_fence representing the GPU command's activity with all affected
226 for GPU idle or depend on all previous GPU activity. Furthermore, any
227 subsequent attempt by the GPU to access freed memory through the
388 GPU virtual address range, directly maps a CPU mm range of anonymous-
398 pages, dirty them if they are not mapped read-only to the GPU, and
[all …]
v3d.rst
8 GPU buffer object (BO) management
19 GPU Scheduling
drm-compute.rst
29 all GPU memory, and possibly a huge chunk of CPU memory.
40 This creates a third option, using cgroups to prevent eviction. Both GPU and
42 eviction would be made cgroup aware. This allows the GPU to be partitioned
tegra.rst
2 drm/tegra NVIDIA Tegra GPU and display driver
11 supports the built-in GPU, comprised of the gr2d and gr3d engines. Starting
12 with Tegra124 the GPU is based on the NVIDIA desktop GPU architecture and
25 GPU and video engines via host1x.
/linux/drivers/vfio/pci/nvgrace-gpu/
Kconfig
3 tristate "VFIO support for the GPU in the NVIDIA Grace Hopper Superchip"
7 VFIO support for the GPU in the NVIDIA Grace Hopper Superchip is
8 required to assign the GPU device to userspace using KVM/qemu/etc.
/linux/drivers/gpu/drm/amd/amdkfd/
Kconfig
7 bool "HSA kernel driver for AMD GPU devices"
13 Enable this if you want to use HSA features on AMD GPU devices.
29 bool "HSA kernel driver support for peer-to-peer for AMD GPU devices"
33 the PCIe bus. This can improve performance of multi-GPU compute
/linux/arch/arm64/boot/dts/qcom/
msm8996-v3.0.dtsi
13 * This revision seems to have different GPU CPR
14 * parameters, GPU frequencies and some differences
16 * the GPU. Funnily enough, it's simpler to make it an
/linux/drivers/gpu/drm/qxl/
Kconfig
3 tristate "QXL virtual GPU"
12 QXL virtual GPU for Spice virtualization desktop integration.
/linux/drivers/gpu/nova-core/
Kconfig
2 tristate "Nova Core GPU driver"
11 GPUs based on the GPU System Processor (GSP). This is true for Turing
/linux/drivers/gpu/drm/sti/
NOTES
23 GPU >-------------+GDP Main | | +---+ HDMI +--> HDMI
24 GPU >-------------+GDP mixer+---+ | :===========:
25 GPU >-------------+Cursor | | +---+ DVO +--> 24b//
/linux/Documentation/gpu/nova/
index.rst
4 nova NVIDIA GPU drivers
9 the GPU System Processor (GSP).
/linux/Documentation/accel/
introduction.rst
11 These devices can be either stand-alone ASICs or IP blocks inside an SoC/GPU.
24 type of device can be stand-alone or an IP inside a SoC or a GPU. It will
55 drivers can be of use to GPU drivers as well.
61 from trying to use an accelerator as a GPU, the compute accelerators will be
/linux/drivers/gpu/drm/i915/
Kconfig.profile
46 check the health of the GPU and undertake regular house-keeping of
52 May be 0 to disable heartbeats and therefore disable automatic GPU
96 Before sleeping waiting for a request (GPU operation) to complete,
114 the GPU, we allow the innocent contexts also on the system to quiesce.
/linux/Documentation/translations/it_IT/process/
botching-up-ioctls.rst
15 unified for managing the memory and execution units of different GPUs. Hence,
17 send programs to the GPU. Which is fine, since there is no longer an insane
25 GPU. Every GPU driver developer should probably learn these
101 input validation and robustly handle the paths - as GPUs
132 its interruption possible. GPUs will die and your users will not
146 GPUs do almost everything asynchronously, so we must tune the
193 certain GPUs. This means that the driver must expose to the
