==================================
Long running workloads and compute
==================================

Long running workloads (compute) are workloads that will not complete in 10
seconds (the time a user is willing to wait before reaching for the power
button). This means that other techniques need to be used to manage those
workloads, which cannot use fences.

Some hardware may schedule compute jobs and have no way to preempt them or to
have their memory swapped out from under them. Or they may simply want their
workload not to be preempted or swapped out at all.

This means that handling these workloads differs from what is described in
driver-api/dma-buf.rst.

With long running compute jobs, dma-fence may not be used at all, in this
case not even to force preemption. The driver is then simply forced to unmap
a BO from the long running job's address space immediately on unbind, without
even waiting for the workload to complete. Effectively this terminates the
workload when there is no hardware support to recover.
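
As an illustration, such an unbind path could look roughly like the sketch
below. The my_* types and helpers are purely hypothetical and not an existing
driver API; the point is only that nothing waits on a dma-fence before the
mapping is torn down::

  /* Hypothetical sketch of unbinding from a long running (fenceless) job's
   * address space; all my_* names are illustrative only. */
  static void my_vm_unbind(struct my_vm *vm, struct my_vma *vma)
  {
          /* No dma-fence to wait on: tear down the GPU mapping now. */
          my_vm_clear_ptes(vm, vma->start, vma->end);
          my_vm_flush_tlb(vm);

          /*
           * The workload may still be running and will fault on its next
           * access; without recoverable pagefaults this terminates it.
           */
  }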

Since this is undesirable, there need to be mitigations to prevent a workload
from being terminated. There are several possible approaches, each with their
own advantages and drawbacks.

The first approach you will likely try is to pin all buffers used by compute.
This guarantees that the job will run uninterrupted, but it also enables a
trivial denial of service attack: by pinning as much memory as possible, a
job can hog all GPU memory, and possibly a huge chunk of CPU memory.
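
For a TTM-based driver, such pinning would be done with the usual TTM
helpers, roughly as in the sketch below (error handling trimmed). It is shown
only to illustrate how simple the abuse is, not as a recommended pattern::

  /* Sketch: pin a BO so that the eviction logic can never touch it. */
  int ret;

  ret = ttm_bo_reserve(bo, true, false, NULL);
  if (ret)
          return ret;
  ttm_bo_pin(bo);
  ttm_bo_unreserve(bo);

  /* ttm_bo_unpin(), under the same reservation, undoes this on teardown. */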

A second approach, which works slightly better on its own, is adding an
option not to evict when creating a new job (of any kind). If all of
userspace opts in to this flag, it prevents cooperating userspace from
force-terminating older compute jobs in order to start a new one.
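
A hypothetical uAPI for this could be a single opt-in flag on job creation,
along the lines of the sketch below. Neither the flag nor the struct exists
in any driver; they only illustrate the idea::

  /* Hypothetical, illustrative uAPI only. */
  #define MY_JOB_CREATE_NO_EVICT        (1 << 0)

  struct my_drm_job_create {
          __u64 commands;  /* pointer to the command stream */
          __u32 flags;     /* MY_JOB_CREATE_NO_EVICT: do not evict other
                            * jobs' memory to make room for this one */
          __u32 pad;
  };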

If job preemption and recoverable pagefaults are not available, those are the
only possible approaches. But even with those approaches, you want a separate
way of controlling resources. The standard kernel way of doing so is cgroups.

This creates a third option: using cgroups to prevent eviction. Both GPU and
driver-allocated CPU memory would be accounted to the correct cgroup, and
eviction would be made cgroup aware. This allows the GPU to be partitioned
into cgroups, which will allow jobs to run next to each other without
interference.

The interface to the cgroup would be similar to the current CPU memory
interface, with similar semantics for min/low/high/max, if eviction can
be made cgroup aware.
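
Purely as an illustration of those semantics, the control files could mirror
the memory controller's; the names below are made up for this document and do
not describe a merged interface::

  drm.memory.min        # never evict the cgroup below this amount
  drm.memory.low        # best-effort protection against eviction
  drm.memory.high       # evict the cgroup back down above this amount
  drm.memory.max        # hard cap on allocations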

What should be noted is that each memory region (tiled memory, for example)
should have its own accounting.

The key is the region id set by the driver, for example "tile0". For the
value of $card, we use drmGetUnique().
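
Purely as an illustration, a nested-keyed entry in such a file might then
combine the two, with $card taken from drmGetUnique() (for a PCI device, its
bus id) and the region id as the key; the layout and value below are made
up::

  pci:0000:03:00.0/tile0 8589934592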