xref: /linux/drivers/gpu/drm/xe/xe_migrate_doc.h (revision dd08ebf6c3525a7ea2186e636df064ea47281987)
/* SPDX-License-Identifier: MIT */
/*
 * Copyright © 2022 Intel Corporation
 */

#ifndef _XE_MIGRATE_DOC_H_
#define _XE_MIGRATE_DOC_H_

/**
 * DOC: Migrate Layer
 *
 * The XE migrate layer is used to generate jobs which can copy memory
 * (eviction), clear memory, or program tables (binds). This layer exists in
 * every GT, has a migrate engine, and uses a special VM for all generated jobs.
 *
 * Special VM details
 * ==================
 *
 * The special VM is configured with a page structure where we can dynamically
 * map BOs which need to be copied and cleared, dynamically map other VMs' page
 * table BOs for updates, and identity map the entire device's VRAM with 1 GB
 * pages.
 *
 * Currently the page structure consists of 48 physical pages, with 16 reserved
 * for BO mapping during copies and clears, 1 reserved for kernel binds,
 * several needed to set up the identity mappings (the exact number depends on
 * how many bits of address space the device has), and the rest reserved for
 * user bind operations.
 *
 * TODO: Diagram of layout
 *
 * Bind jobs
 * =========
 *
 * A bind job consists of two batches and runs either on the migrate engine
 * (kernel binds) or on the bind engine passed in (user binds). In both cases
 * the VM of the engine is the migrate VM.
 *
 * The first batch is used to update the migration VM page structure to point to
 * the bind VM's page table BOs which need to be updated. A physical page is
 * required for this. If it is a user bind, the page is allocated from the pool
 * of pages reserved for user bind operations, with drm_suballoc managing this
 * pool. If it is a kernel bind, the page reserved for kernel binds is used.
 *
 * The first batch is only required for devices without VRAM, as when the
 * device has VRAM the bind VM's page table BOs are in VRAM and the identity
 * mapping can be used.
 *
 * The second batch is used to program the page table updates in the bind VM.
 * Why not just one batch? The TLBs need to be invalidated between these two
 * batches, and that can only be done from the ring.
 *
 * When the bind job completes, the allocated page is returned to the pool of
 * pages reserved for user bind operations if it was a user bind. There is no
 * need to do this for kernel binds, as the reserved kernel page is used
 * serially by each job.
 *
 * Copy / clear jobs
 * =================
 *
 * A copy or clear job consists of two batches and runs on the migrate engine.
 *
 * Like binds, the first batch is used to update the migration VM page
 * structure. In copy jobs, we need to map both the source and the destination
 * of the BO into the page structure. In clear jobs, we just need one mapping of
 * the BO in the page structure. We use the 16 reserved pages in the migration
 * VM for these mappings, which gives a maximum copy size of 16 MB and a
 * maximum clear size of 32 MB.
 *
 * The second batch is used to do either the copy or the clear. Again, similar
 * to binds, two batches are required as the TLBs need to be invalidated from
 * the ring between the batches.
 *
 * More than one job will be generated if the BO is larger than the maximum
 * copy / clear size.
 *
 * Future work
 * ===========
 *
 * Update the copy and clear code to use the identity-mapped VRAM.
 *
 * Can we rework the use of the pages for async binds to use all the entries in
 * each page?
 *
 * Use large pages for sysmem mappings.
 *
 * Is it possible to identity map sysmem? We should explore this.
 */

#endif