================================================
HiSilicon PCIe Performance Monitoring Unit (PMU)
================================================

On Hip09, the HiSilicon PCIe Performance Monitoring Unit (PMU) can monitor
bandwidth, latency, bus utilization and buffer occupancy data of PCIe.

Each PCIe Core has a PMU which monitors the multiple Root Ports of that PCIe
Core and all Endpoints downstream of these Root Ports.


HiSilicon PCIe PMU driver
=========================

The PCIe PMU driver registers a perf PMU named after its SICL ID and PCIe
Core ID::

  /sys/bus/event_source/hisi_pcie<sicl>_core<core>

The PMU driver provides descriptions of the available events and filter
options in sysfs, see
/sys/bus/event_source/devices/hisi_pcie<sicl>_core<core>.

The "format" directory describes all formats of the config (events) and
config1 (filter options) fields of the perf_event_attr structure. The
"events" directory describes all documented events shown in perf list.

The "identifier" sysfs file allows users to identify the version of the
PMU hardware device.

The "bus" sysfs file allows users to get the bus number of the Root Ports
monitored by the PMU.

Example usage of perf::

  $# perf list
  hisi_pcie0_core0/rx_mwr_latency/ [kernel PMU event]
  hisi_pcie0_core0/rx_mwr_cnt/ [kernel PMU event]
  ------------------------------------------

  $# perf stat -e hisi_pcie0_core0/rx_mwr_latency,port=0xffff/
  $# perf stat -e hisi_pcie0_core0/rx_mwr_cnt,port=0xffff/

Related events are usually used together to calculate bandwidth, latency or
other metrics. They need to start and stop counting at the same time, so
related events are best placed in the same event group to get the expected
values. There are two ways to recognize related events:

a) By event name, such as the latency events "xxx_latency, xxx_cnt" or
   bandwidth events "xxx_flux, xxx_time".
b) By event type, such as "event=0xXXXX, event=0x1XXXX".

Example usage of perf group::

  $# perf stat -e "{hisi_pcie0_core0/rx_mwr_latency,port=0xffff/,hisi_pcie0_core0/rx_mwr_cnt,port=0xffff/}"

The current driver does not support sampling, so "perf record" is
unsupported. Attaching to a task is also unsupported for the PCIe PMU.

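As an illustrative sketch only (the unit of each counter is hardware-defined;
it is merely assumed here, following the "xxx_latency, xxx_cnt" pairing
above, that "rx_mwr_latency" accumulates total latency while "rx_mwr_cnt"
counts the corresponding memory-write TLPs), a derived average latency can be
obtained from a single grouped run::

  $# perf stat -e "{hisi_pcie0_core0/rx_mwr_latency,port=0xffff/,hisi_pcie0_core0/rx_mwr_cnt,port=0xffff/}" sleep 5

The average memory-write latency over the run is then the "rx_mwr_latency"
count divided by the "rx_mwr_cnt" count; an "xxx_flux, xxx_time" pair can be
post-processed the same way to obtain a bandwidth figure.
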
Filter options
--------------

1. Target filter

   The PMU can only monitor the performance of traffic downstream of the
   target Root Ports or downstream of the target Endpoint. The PCIe PMU
   driver provides the "port" and "bdf" interfaces for this.
   Note that one of these two interfaces must be set, and they are not
   supported at the same time. If both are set, only the "port" filter is
   valid.
   If the "port" filter is not set or is explicitly set to zero (the
   default), the "bdf" filter takes effect, because "bdf=0" means
   0000:00:00.0.

   - port

   The "port" filter can be used with all PCIe PMU events; the target Root
   Port is selected by configuring the 16-bit bitmap "port". Multiple ports
   can be selected for AP-layer events, while only one port can be selected
   for TL/DL-layer events.

   For example, if the target Root Port is 0000:00:00.0 (x8 lanes), bit 0 of
   the bitmap should be set, i.e. port=0x1; if the target Root Port is
   0000:00:04.0 (x4 lanes), bit 8 is set, i.e. port=0x100; if both Root
   Ports are monitored, port=0x101.

   Example usage of perf::

     $# perf stat -e hisi_pcie0_core0/rx_mwr_latency,port=0x1/ sleep 5

   - bdf

   The "bdf" filter can only be used with bandwidth events. The target
   Endpoint is selected by writing its BDF to "bdf"; the counter then only
   counts the bandwidth of messages requested by that Endpoint.

   For example, "bdf=0x3900" means the BDF of the target Endpoint is
   0000:39:00.0.

   Example usage of perf::

     $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,bdf=0x3900/ sleep 5

2. Trigger filter

   Event counting starts the first time a TLP length is greater/smaller than
   the trigger condition. The trigger condition is set by writing
   "trig_len", and the trigger mode by writing "trig_mode". This filter can
   only be used with bandwidth events.

   For example, "trig_len=4" means the trigger condition is 2^4 DW,
   "trig_mode=0" means counting starts when the TLP length > trigger
   condition, and "trig_mode=1" means it starts when the TLP length <
   condition.

   Example usage of perf::

     $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,port=0xffff,trig_len=0x4,trig_mode=1/ sleep 5

3. Threshold filter

   The counter counts only when the TLP length is within the specified
   range. The threshold is set by writing "thr_len", and the threshold mode
   by writing "thr_mode". This filter can only be used with bandwidth
   events.

   For example, "thr_len=4" means the threshold is 2^4 DW, "thr_mode=0"
   means the counter counts when the TLP length >= threshold, and
   "thr_mode=1" means it counts when the TLP length < threshold.

   Example usage of perf::

     $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,port=0xffff,thr_len=0x4,thr_mode=1/ sleep 5

4. TLP Length filter

   When counting bandwidth, the counted data can be composed of certain
   parts of the TLP packets. This is specified through "len_mode":

   - 2'b00: Reserved (do not use this since the behaviour is undefined)
   - 2'b01: Bandwidth of TLP payloads
   - 2'b10: Bandwidth of TLP headers
   - 2'b11: Bandwidth of both TLP payloads and headers

   For example, "len_mode=2" means only the bandwidth of TLP headers is
   counted and "len_mode=3" means the final bandwidth data is composed of
   both TLP headers and payloads. The default value if not specified is
   2'b11.

   Example usage of perf::

     $# perf stat -e hisi_pcie0_core0/rx_mrd_flux,port=0xffff,len_mode=0x1/ sleep 5

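The filter options above can be combined in a single event specification.
The following is a sketch only: the Root Port bit and length mode are
example values, and it is assumed, following the "xxx_flux, xxx_time" naming
pattern described earlier, that "rx_mrd_time" is the time counterpart of
"rx_mrd_flux"::

  $# perf stat -e "{hisi_pcie0_core0/rx_mrd_flux,port=0x1,len_mode=0x1/,hisi_pcie0_core0/rx_mrd_time,port=0x1/}" sleep 5

With the two events grouped and filtered to the same Root Port, dividing the
"rx_mrd_flux" count by the "rx_mrd_time" count yields a payload read
bandwidth figure for that port, up to the hardware-defined units of the two
counters.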