=======================
Display Core Next (DCN)
=======================

To equip our readers with the basic knowledge of how AMD Display Core Next
(DCN) works, we need to start with an overview of the hardware pipeline. Below
you can see a picture that provides a DCN overview; keep in mind that this is
a generic diagram, and there are variations per ASIC.

.. kernel-figure:: dc_pipeline_overview.svg

Based on this diagram, we can walk through each block and briefly describe
it:

* **Display Controller Hub (DCHUB)**: This is the gateway between the Scalable
  Data Port (SDP) and DCN. This component has multiple features, such as memory
  arbitration, rotation, and cursor manipulation.

* **Display Pipe and Plane (DPP)**: This block provides pre-blend pixel
  processing such as color space conversion, linearization of pixel data, tone
  mapping, and gamut mapping.

* **Multiple Pipe/Plane Combined (MPC)**: This component performs blending of
  multiple planes, using global or per-pixel alpha.

* **Output Pixel Processing (OPP)**: Processes and formats pixels to be sent
  to the display.

* **Output Pipe Timing Combiner (OPTC)**: It generates the display timing
  output and provides the ability to combine streams or split a stream across
  multiple pipes. CRC values are generated in this block.

* **Display Output (DIO)**: Encodes the output for the display connected to
  our GPU.

* **Display Writeback (DWB)**: It provides the ability to write the output of
  the display pipe back to memory as video frames.

* **Multi-Media HUB (MMHUBBUB)**: Memory controller interface for DMCUB and
  DWB (note that DWB is not hooked up yet).

* **DCN Management Unit (DMU)**: It provides registers with access control
  and routes display interrupts to the SOC host interrupt unit. This block
  includes the Display Micro-Controller Unit - version B (DMCUB), which is
  handled via firmware.

* **DCN Clock Generator Block (DCCG)**: It provides the clocks and resets
  for all of the display controller clock domains.

* **Azalia (AZ)**: Audio engine.

The above diagram is an architecture generalization of DCN, which means that
every ASIC has variations around this base model. Notice that the display
pipeline is connected to the Scalable Data Port (SDP) via DCHUB; you can see
the SDP as the element from our Data Fabric that feeds the display pipe.

Always approach the DCN architecture as something flexible that can be
configured and reconfigured in multiple ways; in other words, each block can
be set up or bypassed according to userspace demands. For example, if we want
to drive an 8K@60Hz display with DSC enabled, our DCN may require 4 DPPs and 2
OPPs. It is DC's responsibility to drive the best configuration for each
specific scenario. Orchestrating all of these components requires a
sophisticated communication interface, which is highlighted in the diagram by
the edges that connect each block; from the chart, each connection between
these blocks represents:

1. Pixel data interface (red): Represents the pixel data flow;
2. Global sync signals (green): It is a set of synchronization signals composed
   of VStartup, VUpdate, and VReady;
3. Config interface: Responsible for configuring blocks;
4. Sideband signals: All other signals that do not fit the previous categories.

These signals are essential and play an important role in DCN. Nevertheless,
Global Sync deserves an extra level of detail and is covered in a dedicated
section below.

All of these components are represented by a data structure named dc_state.
From DCHUB to MPC, we have a representation called dc_plane; from MPC to OPTC,
we have dc_stream, and the output (DIO) is handled by dc_link. Keep in mind
that HUBP accesses a surface using a specific format read from memory, and our
dc_plane should work to convert all pixels in the plane to something that can
be sent to the display via dc_stream and dc_link.

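The driver's actual objects (struct dc_plane_state, struct dc_stream_state,
struct dc_link, and struct dc_state) live under
drivers/gpu/drm/amd/display/dc/ and carry far more information than shown
here. Purely as an illustrative sketch of how the pieces relate, with made-up
names and fields, one can picture them like this::

 /*
  * Simplified sketch only: every name and field below is illustrative and
  * does not match the driver's real definitions.
  */
 struct example_plane {                  /* DCHUB -> MPC: one surface */
         unsigned long long address;     /* base address that HUBP reads */
         unsigned int width, height;
         unsigned int format;            /* pixel format stored in memory */
 };

 struct example_stream {                 /* MPC -> OPTC: one display timing */
         struct example_plane *planes[4];/* planes blended into this stream */
         int plane_count;
         unsigned int h_active, v_active, refresh_hz;
 };

 struct example_link {                   /* DIO: encoder feeding a connector */
         struct example_stream *stream;
 };

 struct example_state {                  /* snapshot of the whole topology */
         struct example_stream *streams[6];
         int stream_count;
 };
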
Front End and Back End
----------------------

The display pipeline can be broken down into two components that are usually
referred to as **Front End (FE)** and **Back End (BE)**, where FE consists of:

* DCHUB (mainly referring to a subcomponent named HUBP)
* DPP
* MPC

On the other hand, BE consists of:

* OPP
* OPTC
* DIO (DP/HDMI stream encoder and link encoder)

OPP and OPTC are two joining blocks between FE and BE. On a side note, there
is a one-to-one mapping of link encoder to PHY, but we can configure the DCN
to choose which link encoder to connect to which PHY. FE's main responsibility
is to change, blend, and compose pixel data, while BE's job is to frame a
generic pixel stream into a specific display's pixel stream.

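Inside DC, the blocks a given pipe is using are tracked in a per-pipe context
that groups FE and BE resources together. The snippet below is a simplified
model of that idea only, loosely inspired by the driver's struct pipe_ctx but
with made-up names; it is not the actual definition::

 /*
  * Simplified model only: the driver's real per-pipe bookkeeping differs
  * from this sketch in both layout and naming.
  */
 struct example_front_end {      /* FE: fetch, per-plane processing, blending */
         int hubp_inst;          /* which HUBP instance fetches the surface */
         int dpp_inst;           /* which DPP processes it */
         int mpcc_inst;          /* which MPC blender slot it feeds */
 };

 struct example_back_end {       /* BE: formatting, timing, encoding */
         int opp_inst;
         int optc_inst;
         int stream_enc_inst;    /* DIO stream encoder; link encoder to PHY
                                  * assignment is configurable */
 };

 struct example_pipe {
         struct example_front_end fe;
         struct example_back_end be;
 };
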
Data Flow
---------

Initially, data is passed in from VRAM through the Data Fabric (DF) in native
pixel formats. The data stays in its native format until it reaches HUBP in
DCHUB, where HUBP unpacks the different pixel formats and outputs them to DPP
as uniform streams through 4 channels (1 for alpha + 3 for colors).

The Converter and Cursor (CNVC) in DPP then normalizes the data representation
and converts it to a DCN-specific floating-point format (i.e., different from
the IEEE floating-point format). In the process, CNVC also applies a degamma
function to transform the data from non-linear to linear space, which
simplifies the floating-point calculations that follow. The data stays in this
floating-point format from DPP to OPP.

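The exact degamma curve depends on the input's transfer function, and the
arithmetic happens in the DCN-internal floating-point format rather than in C
doubles; purely as an illustration of the non-linear to linear step, an
sRGB-style degamma looks like this::

 #include <math.h>

 /*
  * Illustration only: the standard sRGB EOTF (non-linear to linear), not
  * the actual CNVC/DPP implementation.
  */
 static double srgb_degamma(double v)    /* v is a normalized [0, 1] sample */
 {
         return (v <= 0.04045) ? v / 12.92
                               : pow((v + 0.055) / 1.055, 2.4);
 }
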
Starting at OPP, because color transformation and blending have been completed
(i.e., alpha can be dropped), and because the end sinks do not require the
precision and dynamic range that floating point provides (i.e., all displays
are in integer depth format), bit-depth reduction/dithering kicks in. In OPP,
we also apply a regamma function to reintroduce the gamma that was removed
earlier. Finally, we output data in integer format at DIO.

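As a rough illustration of these last two steps, again using the sRGB curve
rather than whatever the sink actually requires, re-applying gamma and
reducing to an 8-bit integer could look like this (real OPP hardware offers
dithering modes instead of plain rounding to hide banding)::

 #include <math.h>

 /* Illustration only: the standard sRGB OETF (linear back to non-linear). */
 static double srgb_regamma(double v)
 {
         return (v <= 0.0031308) ? v * 12.92
                                 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
 }

 /* Naive bit-depth reduction of a normalized [0, 1] sample to 8 bits. */
 static unsigned int to_8bit(double v)
 {
         return (unsigned int)(srgb_regamma(v) * 255.0 + 0.5);
 }
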
AMD Hardware Pipeline
---------------------

When discussing graphics on Linux, the term **pipeline** can sometimes be
overloaded with multiple meanings, so it is important to define what we mean
when we say **pipeline**. In the DCN driver, we use the term **hardware
pipeline**, **pipeline**, or just **pipe** as an abstraction to indicate a
sequence of DCN blocks instantiated to address some specific configuration. DC
core treats DCN blocks as individual resources, meaning we build a pipeline by
taking one resource from each of the required hardware blocks. In actuality,
we can't connect an arbitrary block from one pipe to a block from another
pipe; they are routed linearly, except for DSC, which can be arbitrarily
assigned as needed. This pipeline concept exists to help optimize bandwidth
utilization.

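As a purely hypothetical sketch of the "blocks as resources" idea (none of
these names exist in the driver, and the real acquisition logic in DC is far
more involved), building a pipe can be pictured as grabbing one free instance
of each required block from a pool::

 #include <stdbool.h>

 #define EXAMPLE_MAX_PIPES 6    /* arbitrary pool size for the sketch */

 struct example_pool {
         bool hubp_busy[EXAMPLE_MAX_PIPES];
         bool dpp_busy[EXAMPLE_MAX_PIPES];
         bool opp_busy[EXAMPLE_MAX_PIPES];
 };

 /* Grab the first free instance of one block type, or -1 if none is left. */
 static int example_acquire(bool *busy, int count)
 {
         for (int i = 0; i < count; i++) {
                 if (!busy[i]) {
                         busy[i] = true;
                         return i;
                 }
         }
         return -1;
 }
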
.. kernel-figure:: pipeline_4k_no_split.svg

Additionally, let's take a look at parts of the DTN log (see
'Documentation/gpu/amdgpu/display/dc-debug.rst' for more information) since
this log can help us to see part of this pipeline behavior in real-time::

 HUBP:  format  addr_hi  width  height ...
 [ 0]:      8h      81h   3840    2160
 [ 1]:      0h       0h      0       0
 [ 2]:      0h       0h      0       0
 [ 3]:      0h       0h      0       0
 [ 4]:      0h       0h      0       0
 ...
 MPCC:  OPP  DPP ...
 [ 0]:   0h   0h ...

The first thing to notice from the diagram and the DTN log is that we have
different clock domains for each part of the DCN blocks. In this example, we
have just a single **pipeline** where the data flows from DCHUB to DIO, as we
intuitively expect. Nonetheless, DCN is flexible, as mentioned before, and we
can split this single pipe differently, as described in the diagram below:

.. kernel-figure:: pipeline_4k_split.svg

Now, if we inspect the DTN log again, we can see some interesting changes::

 HUBP:  format  addr_hi  width  height ...
 [ 0]:      8h      81h   1920    2160 ...
 ...
 [ 4]:      0h       0h      0       0 ...
 [ 5]:      8h      81h   1920    2160 ...
 ...
 MPCC:  OPP  DPP ...
 [ 0]:   0h   0h ...
 [ 5]:   0h   5h ...

In the above example, we split the display pipeline into two vertical parts of
1920x2160 (i.e., 3840x2160 in total), and as a result, we can reduce the clock
frequency in the DPP part. This is not only useful for saving power but also
for better handling of the required throughput. The idea to keep in mind here
is that the pipe configuration can vary a lot according to the display
configuration, and it is the DML's responsibility to set up all the required
configuration parameters for the multiple scenarios supported by our hardware.

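As a back-of-the-envelope illustration of why the split helps (counting active
pixels only and ignoring blanking, scaling, and DSC), halving the width
roughly halves the pixel rate each DPP has to sustain::

 single pipe: 3840 * 2160 * 60 Hz ~= 498 Mpixel/s through one DPP
 split pipe : 1920 * 2160 * 60 Hz ~= 249 Mpixel/s through each DPP
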
Global Sync
-----------

Many DCN registers are double buffered, most importantly the surface address.
This allows us to update DCN hardware atomically for page flips, as well as
for most other updates that don't require enabling or disabling of new pipes.

(Note: There are many scenarios when DC will decide to reserve extra pipes
in order to support outputs that need a very high pixel clock, or for
power saving purposes.)

These atomic register updates are driven by global sync signals in DCN. In
order to understand how atomic updates interact with DCN hardware, and how DCN
signals page flip and vblank events, it is helpful to understand how global
sync is programmed.

Global sync consists of three signals, VSTARTUP, VUPDATE, and VREADY. These are
calculated by the Display Mode Library - DML (drivers/gpu/drm/amd/display/dc/dml)
based on a large number of parameters and ensure our hardware is able to feed
the DCN pipeline without underflows or hangs in any given system configuration.
The global sync signals always happen during VBlank, are independent of the
VSync signal, and do not overlap each other.

VUPDATE is the only signal that is of interest to the rest of the driver stack
or userspace clients, as it signals the point at which hardware latches the
atomically programmed (i.e., double-buffered) registers. Even though it is
independent of the VSync signal, we use VUPDATE to signal the VSync event as it
provides the best indication of how atomic commits and hardware interact.

Since DCN hardware is double-buffered, the DC driver is able to program the
hardware at any point during the frame.

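As a toy model of this double buffering (not actual DCN registers or driver
code), the driver may update the pending copy of a register at any time, while
the active copy, which the hardware actually scans out from, only changes when
the hardware latches at VUPDATE::

 /*
  * Toy model only: writes land in the pending copy at any point in the
  * frame; the active copy changes only when VUPDATE latches it.
  */
 struct example_dbuf_reg {
         unsigned long long pending;     /* written by the driver any time */
         unsigned long long active;      /* used by the hardware for scanout */
 };

 static void example_write(struct example_dbuf_reg *r, unsigned long long v)
 {
         r->pending = v;                 /* e.g. a new surface address */
 }

 static void example_vupdate_latch(struct example_dbuf_reg *r)
 {
         r->active = r->pending;         /* atomic latch at VUPDATE */
 }
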
The picture below illustrates the global sync signals:

.. kernel-figure:: global_sync_vblank.svg

These signals affect core DCN behavior. Programming them incorrectly will lead
to a number of negative consequences, most of them quite catastrophic.

The following picture shows how global sync allows for a mailbox style of
updates, i.e., it allows for multiple re-configurations between VUpdate
events, where only the last configuration programmed before the VUpdate signal
becomes effective.

.. kernel-figure:: config_example.svg
231