xref: /linux/Documentation/admin-guide/perf/hisi-pcie-pmu.rst (revision c532de5a67a70f8533d495f8f2aaa9a0491c3ad0)
================================================
HiSilicon PCIe Performance Monitoring Unit (PMU)
================================================

On Hip09, the HiSilicon PCIe Performance Monitoring Unit (PMU) can monitor
bandwidth, latency, bus utilization and buffer occupancy data of PCIe.

Each PCIe Core has a PMU that monitors the Root Ports of that PCIe Core and
all Endpoints downstream of those Root Ports.


HiSilicon PCIe PMU driver
=========================

The PCIe PMU driver registers a perf PMU with a name built from its SICL id
and PCIe Core id::

  /sys/bus/event_source/devices/hisi_pcie<sicl>_core<core>

The PMU driver provides descriptions of the available events and filter
options in sysfs, see /sys/bus/event_source/devices/hisi_pcie<sicl>_core<core>.

The "format" directory describes the formats of the config (event) and
config1 (filter option) fields of the perf_event_attr structure. The "events"
directory describes all documented events shown in "perf list".

The "identifier" sysfs file allows users to identify the version of the
PMU hardware device.

The "bus" sysfs file allows users to get the bus number of the Root Ports
monitored by the PMU. Furthermore, users can get the Root Port range
[bdf_min, bdf_max] from the "bdf_min" and "bdf_max" sysfs attributes
respectively.

Example usage of perf::

  $# perf list
  hisi_pcie0_core0/rx_mwr_latency/ [kernel PMU event]
  hisi_pcie0_core0/rx_mwr_cnt/ [kernel PMU event]
  ------------------------------------------

  $# perf stat -e hisi_pcie0_core0/rx_mwr_latency,port=0xffff/
  $# perf stat -e hisi_pcie0_core0/rx_mwr_cnt,port=0xffff/

Related events are usually used together to calculate bandwidth, latency or
other metrics. They need to start and stop counting at the same time, so
related events are best placed in the same event group to get the expected
values. There are two ways to tell whether events are related:

a) By event name, such as the latency events "xxx_latency, xxx_cnt" or
   bandwidth events "xxx_flux, xxx_time".
b) By event type, such as "event=0xXXXX, event=0x1XXXX".

Example usage of perf group::

  $# perf stat -e "{hisi_pcie0_core0/rx_mwr_latency,port=0xffff/,hisi_pcie0_core0/rx_mwr_cnt,port=0xffff/}"

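As a sketch of how such related counters combine (the counter values below are
hypothetical, and the formulas simply divide the paired counters as implied by
the event names: total latency over TLP count, and flux over time):

```shell
# Hypothetical counter values read from a grouped "perf stat" run.
rx_mwr_latency=123000   # summed latency of Rx memory-write TLPs
rx_mwr_cnt=1000         # number of Rx memory-write TLPs counted

# Average latency per TLP (integer division in shell).
echo "avg latency: $(( rx_mwr_latency / rx_mwr_cnt ))"
# -> avg latency: 123

rx_mrd_flux=4194304     # hypothetical bandwidth counter value
rx_mrd_time=2           # hypothetical time counter value

# Average bandwidth over the counting period.
echo "avg bandwidth: $(( rx_mrd_flux / rx_mrd_time ))"
# -> avg bandwidth: 2097152
```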
The current driver does not support sampling, so "perf record" is not
supported. Attaching to a task is also unsupported for the PCIe PMU.

Filter options
--------------

1. Target filter

   The PMU can only monitor the performance of traffic downstream of the
   target Root Ports or of a target Endpoint. The PCIe PMU driver supports
   the "port" and "bdf" interfaces for this.
   Note that one of these two interfaces must be set, and they cannot be used
   at the same time. If both are set, only the "port" filter is valid.
   If the "port" filter is not set, or is set explicitly to zero (the
   default), the "bdf" filter takes effect, because "bdf=0" means
   0000:00:00.0.

   - port

     The "port" filter can be used with all PCIe PMU events. The target Root
     Ports are selected by configuring the 16-bit bitmap "port". Multiple
     ports can be selected for AP-layer events, while only one port can be
     selected for TL/DL-layer events.

     For example, if the target Root Port is 0000:00:00.0 (x8 lanes), bit 0
     of the bitmap should be set: port=0x1; if the target Root Port is
     0000:00:04.0 (x4 lanes), bit 8 is set: port=0x100; if both Root Ports
     are monitored, port=0x101.

     Example usage of perf::

       $# perf stat -e hisi_pcie0_core0/rx_mwr_latency,port=0x1/ sleep 5

   - bdf

     The "bdf" filter can only be used with bandwidth events. The target
     Endpoint is selected by writing its BDF value to "bdf". The counter only
     counts the bandwidth of messages requested by the target Endpoint.

     For example, "bdf=0x3900" means the BDF of the target Endpoint is
     0000:39:00.0.

     Example usage of perf::

       $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,bdf=0x3900/ sleep 5

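The filter values above can be derived mechanically. As a sketch (the
port-bit formula "bit = device * 2 + function" is an inference from the two
examples above, not something stated by the driver; the BDF encoding is the
standard bus[15:8]/device[7:3]/function[2:0] layout):

```shell
# ASSUMPTION: port bit index = device * 2 + function, inferred from the
# examples above (00.0 -> bit 0, 04.0 -> bit 8).
port_bit() { printf '0x%x\n' $(( 1 << ($1 * 2 + $2) )); }

# Standard BDF encoding: bus in bits [15:8], device in [7:3], function in [2:0].
bdf() { printf '0x%x\n' $(( ($1 << 8) | ($2 << 3) | $3 )); }

port_bit 0 0     # Root Port 0000:00:00.0 -> 0x1
port_bit 4 0     # Root Port 0000:00:04.0 -> 0x100
bdf 0x39 0 0     # Endpoint  0000:39:00.0 -> 0x3900
```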
2. Trigger filter

   Event counting starts the first time a TLP length is greater/smaller than
   the trigger condition. You can set the trigger condition by writing
   "trig_len", and set the trigger mode by writing "trig_mode". This filter
   can only be used with bandwidth events.

   For example, "trig_len=4" means the trigger condition is 2^4 DW,
   "trig_mode=0" means counting starts when the TLP length > trigger
   condition, and "trig_mode=1" means counting starts when the TLP length <
   trigger condition.

   Example usage of perf::

     $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,port=0xffff,trig_len=0x4,trig_mode=1/ sleep 5

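Since "trig_len" (and likewise "thr_len" below) encodes a power of two in DW,
and one DW is 4 bytes, the byte value of a condition can be computed as, for
example:

```shell
# trig_len/thr_len encode the length as 2^N double words (1 DW = 4 bytes).
trig_len=4
dw=$(( 1 << trig_len ))   # 2^4 = 16 DW
bytes=$(( dw * 4 ))       # 16 DW * 4 bytes/DW = 64 bytes
echo "${dw} DW = ${bytes} bytes"
# -> 16 DW = 64 bytes
```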
3. Threshold filter

   The counter counts when the TLP length is within the specified range. You
   can set the threshold by writing "thr_len", and set the threshold mode by
   writing "thr_mode". This filter can only be used with bandwidth events.

   For example, "thr_len=4" means the threshold is 2^4 DW, "thr_mode=0" means
   the counter counts when the TLP length >= the threshold, and "thr_mode=1"
   means it counts when the TLP length < the threshold.

   Example usage of perf::

     $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,port=0xffff,thr_len=0x4,thr_mode=1/ sleep 5

4. TLP Length filter

   When counting bandwidth, the data can be composed of certain parts of the
   TLP packets. You can specify which through "len_mode":

   - 2'b00: Reserved (do not use this since the behaviour is undefined)
   - 2'b01: Bandwidth of TLP payloads
   - 2'b10: Bandwidth of TLP headers
   - 2'b11: Bandwidth of both TLP payloads and headers

   For example, "len_mode=2" means only the bandwidth of TLP headers is
   counted, and "len_mode=3" means the final bandwidth data is composed of
   both TLP headers and payloads. The default value, if not specified, is
   2'b11.

   Example usage of perf::

     $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,port=0xffff,len_mode=0x1/ sleep 5