.. SPDX-License-Identifier: GPL-2.0

====================
mlx5 devlink support
====================

This document describes the devlink features implemented by the ``mlx5``
device driver.

Parameters
==========

.. list-table:: Generic parameters implemented

   * - Name
     - Mode
     - Validation
   * - ``enable_roce``
     - driverinit
     - Type: Boolean

       If the device supports RoCE disablement, the RoCE enablement state
       controls device support for the RoCE capability. Otherwise, the control
       occurs in the driver stack. When RoCE is disabled at the driver level,
       only raw Ethernet QPs are supported.
   * - ``io_eq_size``
     - driverinit
     - The range is between 64 and 4096.
   * - ``event_eq_size``
     - driverinit
     - The range is between 64 and 4096.
   * - ``max_macs``
     - driverinit
     - The range is between 1 and 2^31. Only power of 2 values are supported.

The ``mlx5`` driver also implements the following driver-specific
parameters.

.. list-table:: Driver-specific parameters implemented
   :widths: 5 5 5 85

   * - Name
     - Type
     - Mode
     - Description
   * - ``flow_steering_mode``
     - string
     - runtime
     - Controls the flow steering mode of the driver

       * ``dmfs`` Device managed flow steering. In DMFS mode, the HW
         steering entities are created and managed through firmware.
       * ``smfs`` Software managed flow steering. In SMFS mode, the HW
         steering entities are created and managed through the driver without
         firmware intervention.
       * ``hmfs`` Hardware managed flow steering. In HMFS mode, the driver
         configures steering rules directly in the HW using Work Queues with
         a special new type of WQE (Work Queue Element).

       SMFS mode is faster and provides a better rule insertion rate than the
       default DMFS mode.
   * - ``fdb_large_groups``
     - u32
     - driverinit
     - Control the number of large groups (size > 1) in the FDB table.

       * The default value is 15, and the range is between 1 and 1024.
   * - ``esw_multiport``
     - Boolean
     - runtime
     - Control MultiPort E-Switch shared fdb mode.

       An experimental mode where a single E-Switch is used and all the vports
       and physical ports on the NIC are connected to it.

       An example is to send traffic from a VF that is created on PF0 to an
       uplink that is natively associated with PF1.

       Note: Future devices, ConnectX-8 and onward, will eventually have this
       as the default to allow forwarding between all NIC ports in a single
       E-switch environment, and the dual E-switch mode will likely get
       deprecated.

       Default: disabled
   * - ``esw_port_metadata``
     - Boolean
     - runtime
     - When applicable, disabling eswitch metadata can increase packet rate by
       up to 20%, depending on the use case and packet sizes.

       The eswitch port metadata state controls whether to internally tag
       packets with metadata. Metadata tagging must be enabled for multi-port
       RoCE, failover between representors and stacked devices. By default
       metadata is enabled on the supported devices in E-switch. Metadata is
       applicable only for E-switch in switchdev mode, and users may disable it
       when NONE of the below use cases will be in use:

       1. HCA is in Dual/multi-port RoCE mode.
       2. VF/SF representor bonding (usually used for Live migration).
       3. Stacked devices.

       When metadata is disabled, the above use cases will fail to initialize
       if users try to enable them.

       Note: Setting this parameter does not take effect immediately. The
       setting must happen in legacy mode, and eswitch port metadata takes
       effect after enabling switchdev mode.
   * - ``hairpin_num_queues``
     - u32
     - driverinit
     - We refer to a TC NIC rule that involves forwarding as "hairpin".
       Hairpin queues are an mlx5 hardware-specific implementation for hardware
       forwarding of such packets.

       Control the number of hairpin queues.
   * - ``hairpin_queue_size``
     - u32
     - driverinit
     - Control the size (in packets) of the hairpin queues.

The ``mlx5`` driver supports reloading via ``DEVLINK_CMD_RELOAD``.
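
For illustration only (assuming the standard iproute2 ``devlink`` utility and
reusing the example PCI address from the rest of this document), a
``driverinit`` parameter such as ``enable_roce`` is set with
``devlink dev param set`` and applied by a devlink reload, while a ``runtime``
parameter such as ``flow_steering_mode`` takes effect immediately::

    $ devlink dev param set pci/0000:82:00.0 name enable_roce value false cmode driverinit
    $ devlink dev reload pci/0000:82:00.0
    $ devlink dev param set pci/0000:82:00.0 name flow_steering_mode value smfs cmode runtime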

Info versions
=============

The ``mlx5`` driver reports the following versions:

.. list-table:: devlink info versions implemented
   :widths: 5 5 90

   * - Name
     - Type
     - Description
   * - ``fw.psid``
     - fixed
     - Used to represent the board id of the device.
   * - ``fw.version``
     - stored, running
     - Three digit major.minor.subminor firmware version number.
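
These versions can be queried with the standard ``devlink dev info`` command,
for example (using the example PCI address from this document)::

    $ devlink dev info pci/0000:82:00.0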

Health reporters
================

tx reporter
-----------
The tx reporter is responsible for reporting and recovering from the following
three error scenarios:

- tx timeout
    Report on kernel tx timeout detection.
    Recover by searching for lost interrupts.
- tx error completion
    Report on error tx completion.
    Recover by flushing the tx queue and resetting it.
- tx PTP port timestamping CQ unhealthy
    Report when too many CQEs are never delivered on the port timestamping CQ.
    Recover by flushing and re-creating all PTP channels.

The tx reporter also supports an on-demand diagnose callback, through which it
provides real-time information on the status of its send queues.

User command examples:

- Diagnose send queues status::

    $ devlink health diagnose pci/0000:82:00.0 reporter tx

.. note::
   This command has valid output only when the interface is up; otherwise the
   command has empty output.

- Show the number of tx errors indicated, the number of recover flows that
  ended successfully, whether auto-recover is enabled, and the grace period
  since the last recover::

    $ devlink health show pci/0000:82:00.0 reporter tx

rx reporter
-----------
The rx reporter is responsible for reporting and recovering from the following
two error scenarios:

- rx queues' initialization (population) timeout
    Population of rx queues' descriptors on ring initialization is done in
    napi context via triggering an irq. In case of a failure to get the
    minimum amount of descriptors, a timeout occurs, and descriptors can be
    recovered by polling the EQ (Event Queue).
- rx completions with errors (reported by HW on interrupt context)
    Report on rx completion error.
    Recover (if needed) by flushing the related queue and resetting it.

The rx reporter also supports an on-demand diagnose callback, through which it
provides real-time information on the status of its receive queues.

- Diagnose rx queues' status and corresponding completion queue::

    $ devlink health diagnose pci/0000:82:00.0 reporter rx

.. note::
   This command has valid output only when the interface is up; otherwise, the
   command has empty output.

- Show the number of rx errors indicated, the number of recover flows that
  ended successfully, whether auto-recover is enabled, and the grace period
  since the last recover::

    $ devlink health show pci/0000:82:00.0 reporter rx

fw reporter
-----------
The fw reporter implements `diagnose` and `dump` callbacks.
It follows symptoms of fw errors, such as a fw syndrome, by triggering a fw
core dump and storing it into the dump buffer.
The fw reporter diagnose command can be triggered at any time by the user to
check the current fw status.

User command examples:

- Check fw health status::

    $ devlink health diagnose pci/0000:82:00.0 reporter fw

- Read the FW core dump if one is already stored, or trigger a new one::

    $ devlink health dump show pci/0000:82:00.0 reporter fw

.. note::
   This command can run only on the PF which has fw tracer ownership;
   running it on another PF or any VF will return "Operation not permitted".

fw fatal reporter
-----------------
The fw fatal reporter implements `dump` and `recover` callbacks.
It follows fatal error indications by a CR-space dump and a recover flow.
The CR-space dump uses the vsc interface, which is valid even if the FW command
interface is not functional, as is the case in most FW fatal errors.
The recover function runs the recover flow, which reloads the driver and
triggers a fw reset if needed.
On a firmware error, the health buffer is dumped into dmesg. The log level is
derived from the error's severity (given in the health buffer).

User command examples:

- Run the fw recover flow manually::

    $ devlink health recover pci/0000:82:00.0 reporter fw_fatal

- Read the FW CR-space dump if one is already stored, or trigger a new one::

    $ devlink health dump show pci/0000:82:00.1 reporter fw_fatal

.. note::
   This command can run only on the PF.

vnic reporter
-------------
The vnic reporter implements only the `diagnose` callback.
It is responsible for querying the vnic diagnostic counters from fw and
displaying them in real time.

Description of the vnic counters:

- total_error_queues
        number of queues in an error state due to
        an async error or errored command.
- send_queue_priority_update_flow
        number of QP/SQ priority/SL update events.
- cq_overrun
        number of times the CQ entered an error state due to an overflow.
- async_eq_overrun
        number of times an EQ mapped to async events was overrun.
- comp_eq_overrun
        number of times an EQ mapped to completion events was overrun.
- quota_exceeded_command
        number of commands issued and failed due to quota exceeded.
- invalid_command
        number of commands issued and failed due to any reason other than
        quota exceeded.
- nic_receive_steering_discard
        number of packets that completed RX flow steering but were discarded
        due to a mismatch in the flow table.
- generated_pkt_steering_fail
        number of packets generated by the VNIC experiencing unexpected
        steering failure (at any point in the steering flow).
- handled_pkt_steering_fail
        number of packets handled by the VNIC experiencing unexpected steering
        failure (at any point in the steering flow owned by the VNIC,
        including the FDB for the eswitch owner).

User command examples:

- Diagnose PF/VF vnic counters::

    $ devlink health diagnose pci/0000:82:00.1 reporter vnic

- Diagnose representor vnic counters (performed by supplying the devlink port
  of the representor, which can be obtained via the ``devlink port`` command,
  as shown below)::

    $ devlink health diagnose pci/0000:82:00.1/65537 reporter vnic

.. note::
   This command can run over all interfaces such as PF/VF and representor
   ports.
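
The devlink port index used in the representor example above (``65537`` in
this document's example) can be listed with the standard ``devlink port``
command, for example::

    $ devlink port show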