=========================================================
NVIDIA Tegra SoC Uncore Performance Monitoring Unit (PMU)
=========================================================

The NVIDIA Tegra SoC includes various system PMUs to measure key performance
metrics like memory bandwidth, latency, and utilization:

* Scalable Coherency Fabric (SCF)
* NVLink-C2C0
* NVLink-C2C1
* CNVLink
* PCIE

PMU Driver
----------

The PMUs in this document are based on the ARM CoreSight PMU Architecture as
described in the ARM IHI 0091 document. Since this is a standard architecture,
the PMUs are managed by a common driver, "arm-cs-arch-pmu". This driver
describes the available events and configuration options of each PMU in sysfs.
Please see the sections below to get the sysfs path of each PMU. Like other
uncore PMU drivers, the driver provides a "cpumask" sysfs attribute to show the
CPU id used to handle the PMU events. There is also an "associated_cpus" sysfs
attribute, which contains the list of CPUs associated with the PMU instance.

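For example, these attributes can be read directly from sysfs (a minimal
sketch, assuming an SCF PMU instance for socket 0 is present; see the sections
below for the instance names)::

   cat /sys/bus/event_source/devices/nvidia_scf_pmu_0/cpumask
   cat /sys/bus/event_source/devices/nvidia_scf_pmu_0/associated_cpus
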
.. _SCF_PMU_Section:

SCF PMU
-------

The SCF PMU monitors system-level cache events, CPU traffic, and
strongly-ordered (SO) PCIE write traffic to local/remote memory. Please see
:ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about the PMU
traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_scf_pmu_<socket-id>.

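The named events and format fields exported by the driver can be listed from
sysfs before picking a raw event id (a minimal sketch, assuming the socket 0
instance exists; the available names vary by chip)::

   ls /sys/bus/event_source/devices/nvidia_scf_pmu_0/events/
   ls /sys/bus/event_source/devices/nvidia_scf_pmu_0/format/
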
Example usage:

* Count event id 0x0 in socket 0::

   perf stat -a -e nvidia_scf_pmu_0/event=0x0/

* Count event id 0x0 in socket 1::

   perf stat -a -e nvidia_scf_pmu_1/event=0x0/

NVLink-C2C0 PMU
--------------------

The NVLink-C2C0 PMU monitors incoming traffic from a GPU/CPU connected with the
NVLink-C2C (Chip-2-Chip) interconnect. The type of traffic captured by this PMU
varies depending on the chip configuration:

* NVIDIA Grace Hopper Superchip: Hopper GPU is connected with Grace SoC.

  In this config, the PMU captures GPU ATS translated or EGM traffic from the GPU.

* NVIDIA Grace CPU Superchip: two Grace CPU SoCs are connected.

  In this config, the PMU captures reads and relaxed ordered (RO) writes from
  PCIE devices of the remote SoC.

Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more info about
the PMU traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_nvlink_c2c0_pmu_<socket-id>.

Example usage:

* Count event id 0x0 from the GPU/CPU connected with socket 0::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/

* Count event id 0x0 from the GPU/CPU connected with socket 1::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_1/event=0x0/

* Count event id 0x0 from the GPU/CPU connected with socket 2::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_2/event=0x0/

* Count event id 0x0 from the GPU/CPU connected with socket 3::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_3/event=0x0/

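The commands above count until interrupted. A fixed-length measurement can be
taken by giving perf a dummy workload that bounds the run, for example (a
minimal sketch, assuming the socket 0 instance and event id 0x0; any command
works in place of sleep)::

   perf stat -a -e nvidia_nvlink_c2c0_pmu_0/event=0x0/ sleep 10
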
NVLink-C2C1 PMU
-------------------

The NVLink-C2C1 PMU monitors incoming traffic from a GPU connected with the
NVLink-C2C (Chip-2-Chip) interconnect. This PMU captures untranslated GPU
traffic, in contrast with the NVLink-C2C0 PMU, which captures ATS translated
traffic. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section` for more
info about the PMU traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_nvlink_c2c1_pmu_<socket-id>.

Example usage:

* Count event id 0x0 from the GPU connected with socket 0::

   perf stat -a -e nvidia_nvlink_c2c1_pmu_0/event=0x0/

* Count event id 0x0 from the GPU connected with socket 1::

   perf stat -a -e nvidia_nvlink_c2c1_pmu_1/event=0x0/

* Count event id 0x0 from the GPU connected with socket 2::

   perf stat -a -e nvidia_nvlink_c2c1_pmu_2/event=0x0/

* Count event id 0x0 from the GPU connected with socket 3::

   perf stat -a -e nvidia_nvlink_c2c1_pmu_3/event=0x0/

CNVLink PMU
---------------

The CNVLink PMU monitors traffic from GPUs and PCIE devices on remote sockets
to local memory. For PCIE traffic, this PMU captures read and relaxed ordered
(RO) write traffic. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section`
for more info about the PMU traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_cnvlink_pmu_<socket-id>.

Each SoC socket can be connected to one or more sockets via CNVLink. The user
can use the "rem_socket" bitmap parameter to select the remote socket(s) to
monitor. Each bit represents a socket number, e.g. "rem_socket=0xE" corresponds
to sockets 1 through 3.
/sys/bus/event_source/devices/nvidia_cnvlink_pmu_<socket-id>/format/rem_socket
shows the valid bits that can be set in the "rem_socket" parameter.

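The bitmap value is simply the bitwise OR of one bit per remote socket to be
monitored. For instance, a sketch of how the "rem_socket=0xE" value used below
is formed for sockets 1, 2, and 3::

   printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xE
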
The PMU cannot distinguish the initiator of the remote traffic, therefore it
does not provide a filter to select the traffic source to monitor. It reports
combined traffic from remote GPU and PCIE devices.

Example usage:

* Count event id 0x0 for the traffic from remote sockets 1, 2, and 3 to socket 0::

   perf stat -a -e nvidia_cnvlink_pmu_0/event=0x0,rem_socket=0xE/

* Count event id 0x0 for the traffic from remote sockets 0, 2, and 3 to socket 1::

   perf stat -a -e nvidia_cnvlink_pmu_1/event=0x0,rem_socket=0xD/

* Count event id 0x0 for the traffic from remote sockets 0, 1, and 3 to socket 2::

   perf stat -a -e nvidia_cnvlink_pmu_2/event=0x0,rem_socket=0xB/

* Count event id 0x0 for the traffic from remote sockets 0, 1, and 2 to socket 3::

   perf stat -a -e nvidia_cnvlink_pmu_3/event=0x0,rem_socket=0x7/


PCIE PMU
------------

The PCIE PMU monitors all read/write traffic from PCIE root ports to
local/remote memory. Please see :ref:`NVIDIA_Uncore_PMU_Traffic_Coverage_Section`
for more info about the PMU traffic coverage.

The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_source/devices/nvidia_pcie_pmu_<socket-id>.

Each SoC socket can support multiple root ports. The user can use the
"root_port" bitmap parameter to select the port(s) to monitor, e.g.
"root_port=0xF" corresponds to root ports 0 through 3.
/sys/bus/event_source/devices/nvidia_pcie_pmu_<socket-id>/format/root_port
shows the valid bits that can be set in the "root_port" parameter.

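The valid bits for the "root_port" parameter can be checked before building the
bitmap (a minimal sketch, assuming the socket 0 instance exists)::

   cat /sys/bus/event_source/devices/nvidia_pcie_pmu_0/format/root_port
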
Example usage:

* Count event id 0x0 from root ports 0 and 1 of socket 0::

   perf stat -a -e nvidia_pcie_pmu_0/event=0x0,root_port=0x3/

* Count event id 0x0 from root ports 0 and 1 of socket 1::

   perf stat -a -e nvidia_pcie_pmu_1/event=0x0,root_port=0x3/

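Several PMU instances can also be counted in a single run by passing multiple
events, e.g. to observe both sockets at once (a sketch assuming a two-socket
system and event id 0x0)::

   perf stat -a -e nvidia_pcie_pmu_0/event=0x0,root_port=0x3/ \
                -e nvidia_pcie_pmu_1/event=0x0,root_port=0x3/
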
.. _NVIDIA_Uncore_PMU_Traffic_Coverage_Section:

Traffic Coverage
----------------

The PMU traffic coverage may vary depending on the chip configuration:

* **NVIDIA Grace Hopper Superchip**: Hopper GPU is connected with Grace SoC.

  Example configuration with two Grace SoCs::

   *********************************          *********************************
   * SOCKET-A                      *          * SOCKET-B                      *
   *                               *          *                               *
   *                     ::::::::  *          *  ::::::::                     *
   *                     : PCIE :  *          *  : PCIE :                     *
   *                     ::::::::  *          *  ::::::::                     *
   *                         |     *          *      |                        *
   *                         |     *          *      |                        *
   *  :::::::            ::::::::: *          *  :::::::::            ::::::: *
   *  :     :            :       : *          *  :       :            :     : *
   *  : GPU :<--NVLink-->: Grace :<---CNVLink--->: Grace :<--NVLink-->: GPU : *
   *  :     :    C2C     :  SoC  : *          *  :  SoC  :    C2C     :     : *
   *  :::::::            ::::::::: *          *  :::::::::            ::::::: *
   *     |                   |     *          *      |                   |    *
   *     |                   |     *          *      |                   |    *
   *  &&&&&&&&           &&&&&&&&  *          *   &&&&&&&&           &&&&&&&& *
   *  & GMEM &           & CMEM &  *          *   & CMEM &           & GMEM & *
   *  &&&&&&&&           &&&&&&&&  *          *   &&&&&&&&           &&&&&&&& *
   *                               *          *                               *
   *********************************          *********************************

   GMEM = GPU Memory (e.g. HBM)
   CMEM = CPU Memory (e.g. LPDDR5X)

  |
  | The following table shows the traffic coverage of the Grace SoC PMUs in socket-A:

  ::

   +--------------+-------+-----------+-----------+-----+----------+----------+
   |              |                        Source                             |
   +              +-------+-----------+-----------+-----+----------+----------+
   | Destination  |       |GPU ATS    |GPU Not-ATS|     | Socket-B | Socket-B |
   |              |PCI R/W|Translated,|Translated | CPU | CPU/PCIE1| GPU/PCIE2|
   |              |       |EGM        |           |     |          |          |
   +==============+=======+===========+===========+=====+==========+==========+
   | Local        | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF | SCF PMU  | CNVLink  |
   | SYSRAM/CMEM  | PMU   |PMU        |PMU        | PMU |          | PMU      |
   +--------------+-------+-----------+-----------+-----+----------+----------+
   | Local GMEM   | PCIE  |    N/A    |NVLink-C2C1| SCF | SCF PMU  | CNVLink  |
   |              | PMU   |           |PMU        | PMU |          | PMU      |
   +--------------+-------+-----------+-----------+-----+----------+----------+
   | Remote       | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF |          |          |
   | SYSRAM/CMEM  | PMU   |PMU        |PMU        | PMU |   N/A    |   N/A    |
   | over CNVLink |       |           |           |     |          |          |
   +--------------+-------+-----------+-----------+-----+----------+----------+
   | Remote GMEM  | PCIE  |NVLink-C2C0|NVLink-C2C1| SCF |          |          |
   | over CNVLink | PMU   |PMU        |PMU        | PMU |   N/A    |   N/A    |
   +--------------+-------+-----------+-----------+-----+----------+----------+

   PCIE1 traffic represents strongly ordered (SO) writes.
   PCIE2 traffic represents reads and relaxed ordered (RO) writes.

* **NVIDIA Grace CPU Superchip**: two Grace CPU SoCs are connected.

  Example configuration with two Grace SoCs::

   *******************             *******************
   * SOCKET-A        *             * SOCKET-B        *
   *                 *             *                 *
   *    ::::::::     *             *    ::::::::     *
   *    : PCIE :     *             *    : PCIE :     *
   *    ::::::::     *             *    ::::::::     *
   *        |        *             *        |        *
   *        |        *             *        |        *
   *    :::::::::    *             *    :::::::::    *
   *    :       :    *             *    :       :    *
   *    : Grace :<--------NVLink------->: Grace :    *
   *    :  SoC  :    *     C2C     *    :  SoC  :    *
   *    :::::::::    *             *    :::::::::    *
   *        |        *             *        |        *
   *        |        *             *        |        *
   *     &&&&&&&&    *             *     &&&&&&&&    *
   *     & CMEM &    *             *     & CMEM &    *
   *     &&&&&&&&    *             *     &&&&&&&&    *
   *                 *             *                 *
   *******************             *******************

   GMEM = GPU Memory (e.g. HBM)
   CMEM = CPU Memory (e.g. LPDDR5X)

  |
  | The following table shows the traffic coverage of the Grace SoC PMUs in socket-A:

  ::

   +-----------------+-----------+---------+----------+-------------+
   |                 |                      Source                  |
   +                 +-----------+---------+----------+-------------+
   | Destination     |           |         | Socket-B | Socket-B    |
   |                 |  PCI R/W  |   CPU   | CPU/PCIE1| PCIE2       |
   |                 |           |         |          |             |
   +=================+===========+=========+==========+=============+
   | Local           |  PCIE PMU | SCF PMU | SCF PMU  | NVLink-C2C0 |
   | SYSRAM/CMEM     |           |         |          | PMU         |
   +-----------------+-----------+---------+----------+-------------+
   | Remote          |           |         |          |             |
   | SYSRAM/CMEM     |  PCIE PMU | SCF PMU |   N/A    |     N/A     |
   | over NVLink-C2C |           |         |          |             |
   +-----------------+-----------+---------+----------+-------------+

   PCIE1 traffic represents strongly ordered (SO) writes.
   PCIE2 traffic represents reads and relaxed ordered (RO) writes.
