.\"
.\" SPDX-License-Identifier: BSD-3-Clause
.\"
.\" Copyright (c) 2019-2020, Intel Corporation
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms of the Software, with or
.\" without modification, are permitted provided that the following conditions
.\" are met:
.\"
.\" 1. Redistributions of source code must retain the above copyright notice,
.\"    this list of conditions and the following disclaimer.
.\"
.\" 2. Redistributions in binary form must reproduce the above copyright notice,
.\"    this list of conditions and the following disclaimer in the documentation
.\"    and/or other materials provided with the distribution.
.\"
.\" 3. Neither the name of the Intel Corporation nor the names of its
.\"    contributors may be used to endorse or promote products derived from
.\"    this Software without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
.\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
.\" LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGE.
.\"
.\" * Other names and brands may be claimed as the property of others.
.\"
.Dd October 3, 2025
.Dt ICE 4
.Os
.Sh NAME
.Nm ice
.Nd Intel Ethernet 800 Series Driver
.Sh SYNOPSIS
.Cd device iflib
.Cd device ice
.Pp
In
.Xr loader.conf 5 :
.Cd if_ice_load
.Cd hw.ice.enable_health_events
.Cd hw.ice.irdma
.Cd hw.ice.irdma_max_msix
.Cd hw.ice.debug.enable_tx_fc_filter
.Cd hw.ice.debug.enable_tx_lldp_filter
.Cd hw.ice.debug.ice_tx_balance_en
.Pp
In
.Xr sysctl.conf 5
or
.Xr loader.conf 5 :
.Cd dev.ice.#.current_speed
.Cd dev.ice.#.fw_version
.Cd dev.ice.#.ddp_version
.Cd dev.ice.#.pba_number
.Cd dev.ice.#.hw.mac.*
.Sh DESCRIPTION
.Ss Features
The
.Nm
driver provides support for any PCI Express adapter or LOM
(LAN On Motherboard)
in the Intel\(rg Ethernet 800 Series.
As of this writing, the series includes devices with these model numbers:
.Pp
.Bl -bullet -compact
.It
Intel\(rg Ethernet Controller E810\-C
.It
Intel\(rg Ethernet Controller E810\-XXV
.It
Intel\(rg Ethernet Connection E822\-C
.It
Intel\(rg Ethernet Connection E822\-L
.It
Intel\(rg Ethernet Connection E823\-C
.It
Intel\(rg Ethernet Connection E823\-L
.It
Intel\(rg Ethernet Connection E825\-C
.It
Intel\(rg Ethernet Connection E830\-C
.It
Intel\(rg Ethernet Connection E830\-CC
.It
Intel\(rg Ethernet Connection E830\-L
.It
Intel\(rg Ethernet Connection E830\-XXV
.El
.Pp
For questions related to hardware requirements, refer to the documentation
supplied with the adapter.
.Pp
Support for Jumbo Frames is provided via the interface MTU setting.
Selecting an MTU larger than 1500 bytes with the
.Xr ifconfig 8
utility configures the adapter to receive and transmit Jumbo Frames.
The maximum MTU size for Jumbo Frames is 9706.
For more information, see the
.Sx Jumbo Frames
section.
.Pp
This driver version supports VLANs.
For information on enabling VLANs, see
.Xr vlan 4 .
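.Pp
As a brief illustration (the interface name
.Li ice0
is only an example), a VLAN interface for VLAN ID 100 can be created on top
of a port with:
.Bd -literal -offset indent
ifconfig vlan100 create vlan 100 vlandev ice0
.Ed
.Pp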
For additional information on configuring VLANs, see
.Xr ifconfig 8 Ap s
.Dq VLAN Parameters
section.
.Pp
Offloads are also controlled via the interface; for instance, checksumming
for both IPv4 and IPv6, TSO4 and/or TSO6, and LRO can each be enabled and
disabled.
.Pp
For more information on configuring this device, see
.Xr ifconfig 8 .
.Pp
The associated Virtual Function (VF) driver for this driver is
.Xr iavf 4 .
.Pp
The associated RDMA driver for this driver is
.Xr irdma 4 .
.Ss Dynamic Device Personalization
The DDP package loads during device initialization.
The driver looks for the
.Sy ice_ddp
module and checks that it contains a valid DDP package file.
.Pp
If the driver is unable to load the DDP package, the device will enter Safe
Mode.
Safe Mode disables advanced and performance features and supports only
basic traffic and minimal functionality, such as updating the NVM or
downloading a new driver or DDP package.
Safe Mode only applies to the affected physical function and does not impact
any other PFs.
See the
.Dq Intel\(rg Ethernet Adapters and Devices User Guide
for more details on DDP and Safe Mode.
.Pp
If issues are encountered with the DDP package file, an updated driver or
.Sy ice_ddp
module may need to be downloaded.
See the log messages for more information.
.Pp
The DDP package cannot be updated if any PF drivers are already loaded.
To overwrite a package, unload all PFs and then reload the driver with the
new package.
.Pp
Only one DDP package can be used per driver, even if more than one
installed device uses the driver.
.Pp
Only the first loaded PF per device can download a package for that device.
.Ss Jumbo Frames
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.
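.Pp
For example (the interface name
.Li ice0
and the MTU value are illustrative), jumbo frames could be enabled with:
.Bd -literal -offset indent
ifconfig ice0 mtu 9000 up
.Ed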
.Pp
Use
.Xr ifconfig 8
to increase the MTU size.
.Pp
The maximum MTU setting for jumbo frames is 9706.
This corresponds to the maximum jumbo frame size of 9728 bytes.
.Pp
This driver will attempt to use multiple page-sized buffers to receive
each jumbo packet.
This should help to avoid buffer starvation issues when allocating receive
packets.
.Pp
Packet loss may have a greater impact on throughput when jumbo frames are in
use.
If a drop in performance is observed after enabling jumbo frames, enabling
flow control may mitigate the issue.
.Ss Remote Direct Memory Access
Remote Direct Memory Access, or RDMA, allows a network device to transfer data
directly to and from application memory on another system, increasing
throughput and lowering latency in certain networking environments.
.Pp
The
.Nm
driver supports both the iWARP (Internet Wide Area RDMA Protocol) and
RoCEv2 (RDMA over Converged Ethernet) protocols.
The major difference is that iWARP performs RDMA over TCP, while RoCEv2 uses
UDP.
.Pp
Devices based on the Intel\(rg Ethernet 800 Series do not support RDMA when
operating in multiport mode with more than 4 ports.
.Pp
For detailed installation and configuration information for RDMA, see
.Xr irdma 4 .
.Ss RDMA Monitoring
For debugging/testing purposes, a sysctl can be used to set up a mirroring
interface on a port.
The interface can receive mirrored RDMA traffic for packet
analysis tools like
.Xr tcpdump 1 .
This mirroring may impact performance.
.Pp
To use RDMA monitoring, more MSI\-X interrupts may need to be reserved.
Before the
.Nm
driver loads, configure the following tunable provided by
.Xr iflib 4 :
.Bd -literal -offset indent
dev.ice.<interface #>.iflib.use_extra_msix_vectors=4
.Ed
.Pp
The number of extra MSI\-X interrupt vectors may need to be adjusted.
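.Pp
For example, to reserve four extra vectors for the first port at boot time,
the tunable could be placed in
.Xr loader.conf 5
(the unit number 0 here is only an example):
.Bd -literal -offset indent
dev.ice.0.iflib.use_extra_msix_vectors="4"
.Ed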
.Pp
To create/delete the interface:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.create_interface=1
sysctl dev.ice.<interface #>.delete_interface=1
.Ed
.Pp
The mirrored interface receives both LAN and RDMA traffic.
Additional filters can be configured in
.Xr tcpdump 1 .
.Pp
To differentiate the mirrored interface from the primary interface, the network
interface naming convention is:
.Bd -literal -offset indent
<driver name><port number><modifier><modifier unit number>
.Ed
.Pp
For example,
.Dq Li ice0m0
is the first mirroring interface on
.Dq Li ice0 .
.Ss Data Center Bridging
Data Center Bridging (DCB) is a configuration Quality of Service
implementation in hardware.
It uses the VLAN priority tag (802.1p) to filter traffic.
That means that there are 8 different priorities that traffic can be filtered
into.
It also enables priority flow control (802.1Qbb), which can limit or eliminate
the number of dropped packets during network stress.
Bandwidth can be allocated to each of these priorities, which is enforced at
the hardware level (802.1Qaz).
.Pp
DCB is normally configured on the network using the DCBX protocol (802.1Qaz), a
specialization of LLDP (802.1AB).
The
.Nm
driver supports the following mutually exclusive variants of DCBX:
.Bl -bullet -compact
.It
Firmware\-based LLDP Agent
.It
Software\-based LLDP Agent
.El
.Pp
In firmware\-based mode, firmware intercepts all LLDP traffic and handles DCBX
negotiation transparently for the user.
In this mode, the adapter operates in
.Dq willing
DCBX mode, receiving DCB settings from the link partner (typically a
switch).
The local user can only query the negotiated DCB configuration.
For information on configuring DCBX parameters on a switch, please consult the
switch manufacturer's documentation.
.Pp
In software\-based mode, LLDP traffic is forwarded to the network stack and user
space, where a software agent can handle it.
In this mode, the adapter can operate in
.Dq nonwilling
DCBX mode and DCB configuration can be both queried and set locally.
This mode requires the FW\-based LLDP Agent to be disabled.
.Pp
Firmware\-based mode and software\-based mode are controlled by the
.Va fw_lldp_agent
sysctl.
Refer to the
.Sx Firmware Link Layer Discovery Protocol Agent
section for more information.
.Pp
Link\-level flow control and priority flow control are mutually exclusive.
The
.Nm
driver will disable link flow control when priority flow control
is enabled on any traffic class (TC).
It will disable priority flow control when link flow control is enabled.
.Pp
To enable/disable priority flow control in software\-based DCBX mode:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.pfc=1 (or 0 to disable)
.Ed
.Pp
Enhanced Transmission Selection (ETS) allows bandwidth to be assigned to certain
TCs, to help ensure traffic reliability.
To view the assigned ETS configuration, use the following:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.ets_min_rate
.Ed
.Pp
To set the minimum ETS bandwidth per TC, separate the values by commas.
All values must add up to 100.
For example, to set TCs 0 through 6 to a minimum bandwidth of 10% each and
TC 7 to 30%, use the following:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.ets_min_rate=10,10,10,10,10,10,10,30
.Ed
.Pp
To set the User Priority (UP) to a TC mapping for a port, separate the values
by commas.
For example, to map UP 0 and 1 to TC 0, UP 2 and 3 to TC 1, UP 4 and
5 to TC 2, and UP 6 and 7 to TC 3, use the following:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.up2tc_map=0,0,1,1,2,2,3,3
.Ed
.Ss L3 QoS mode
The
.Nm
driver supports setting DSCP\-based Layer 3 Quality of Service (L3 QoS)
in the PF driver.
The driver initializes in L2 QoS mode; L3 QoS is disabled by
default.
Use the following sysctl to enable or disable L3 QoS:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.pfc_mode=1 (or 0 to disable)
.Ed
.Pp
If L3 QoS mode is disabled, the driver returns to L2 QoS mode.
.Pp
To map a DSCP value to a traffic class, separate the values by commas.
For example, to map DSCPs 0\-3 and DSCP 8 to DCB TCs 0\-3 and 4, respectively:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.dscp2tc_map.0\-7=0,1,2,3,0,0,0,0
sysctl dev.ice.<interface #>.dscp2tc_map.8\-15=4,0,0,0,0,0,0,0
.Ed
.Pp
To change the DSCP mapping back to the default traffic class, set all the
values back to 0.
.Pp
To view the currently configured mappings, use the following:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.dscp2tc_map
.Ed
.Pp
L3 QoS mode is not available when FW\-LLDP is enabled, and FW\-LLDP cannot be
enabled while L3 QoS mode is active.
Disable FW\-LLDP before switching to L3 QoS mode.
.Pp
Refer to the
.Sx Firmware Link Layer Discovery Protocol Agent
section for more information on disabling FW\-LLDP.
.Ss Firmware Link Layer Discovery Protocol Agent
Use sysctl to change FW\-LLDP settings.
The FW\-LLDP setting is per port and persists across boots.
.Pp
To enable the FW\-LLDP Agent:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.fw_lldp_agent=1
.Ed
.Pp
To disable the FW\-LLDP Agent:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.fw_lldp_agent=0
.Ed
.Pp
To check the current LLDP setting:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.fw_lldp_agent
.Ed
.Pp
The UEFI HII LLDP Agent attribute must be enabled for this setting
to take effect.
If the
.Dq LLDP AGENT
attribute is set to disabled, the FW\-LLDP Agent cannot be enabled from the
driver.
.Ss Link\-Level Flow Control (LFC)
Ethernet Flow Control (IEEE 802.3x) can be configured with sysctl to enable
receiving and transmitting pause frames for
.Nm .
When transmit is enabled, pause frames are generated when the receive packet
buffer crosses a predefined threshold.
When receive is enabled, the transmit unit will halt for the time delay
specified in the firmware when a pause frame is received.
.Pp
Flow Control is disabled by default.
.Pp
Use sysctl to change the flow control settings for a single interface without
reloading the driver:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.fc
.Ed
.Pp
The available values for flow control are:
.Bd -literal -offset indent
0 = Disable flow control
1 = Enable Rx pause
2 = Enable Tx pause
3 = Enable Rx and Tx pause
.Ed
.Pp
Verify that link flow control was negotiated on the link by checking the
interface entry in
.Xr ifconfig 8
and looking for the flags
.Dq txpause
and/or
.Dq rxpause
in the
.Dq media
status.
.Pp
The
.Nm
driver requires flow control on both the port and link partner.
If flow control is disabled on one side, the port may appear to
hang on heavy traffic.
.Pp
For more information on priority flow control, refer to the
.Sx Data Center Bridging
section.
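.Pp
As an example of the values above, the following enables both receiving and
transmitting pause frames on the first port (the unit number 0 is only an
example):
.Bd -literal -offset indent
sysctl dev.ice.0.fc=3
.Ed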
.Pp
The VF driver does not have access to flow control.
It must be managed from the host side.
.Ss Forward Error Correction
Forward Error Correction (FEC) improves link stability but increases latency.
Many high-quality optics, direct attach cables, and backplane channels can
provide a stable link without FEC.
.Pp
For devices to benefit from this feature, link partners must have FEC enabled.
.Pp
If the
.Va allow_no_fec_modules_in_auto
sysctl is enabled, Auto FEC negotiation will include
.Dq No FEC
in case the link partner does not have FEC enabled or is not FEC capable:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.allow_no_fec_modules_in_auto=1
.Ed
.Pp
NOTE: This flag is currently not supported on the Intel\(rg Ethernet 830
Series.
.Pp
To show the current FEC settings that are negotiated on the link:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.negotiated_fec
.Ed
.Pp
To view or set the FEC setting that was requested on the link:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.requested_fec
.Ed
.Pp
To see the valid FEC modes for the link:
.Bd -literal -offset indent
sysctl \-d dev.ice.<interface #>.requested_fec
.Ed
.Ss Speed and Duplex Configuration
The speed and duplex settings cannot be hard set.
.Pp
To change the speeds the device will advertise during auto\-negotiation, or to
force link at a specific speed, use:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.advertise_speed=<mask>
.Ed
.Pp
Supported speeds will vary by device.
Depending on the speeds the device supports, valid bits used in a speed mask
could include:
.Bd -literal -offset indent
0x0 \- Auto
0x2 \- 100 Mbps
0x4 \- 1 Gbps
0x8 \- 2.5 Gbps
0x10 \- 5 Gbps
0x20 \- 10 Gbps
0x80 \- 25 Gbps
0x100 \- 40 Gbps
0x200 \- 50 Gbps
0x400 \- 100 Gbps
0x800 \- 200 Gbps
.Ed
.Ss Disabling physical link when the interface is brought down
When the
.Va link_active_on_if_down
sysctl is set to
.Dq 0 ,
the port's link will go down when the interface is brought down.
By default, link will stay up.
.Pp
To disable link when the interface is down:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.link_active_on_if_down=0
.Ed
.Ss Firmware Logging
The
.Nm
driver allows for the generation of firmware logs for supported categories of
events, to help debug issues with Customer Support.
Refer to the
.Dq Intel\(rg Ethernet Adapters and Devices User Guide
for an overview of this feature and additional tips.
.Pp
At a high level, to capture a firmware log:
.Bl -enum -compact
.It
Set the configuration for the firmware log.
.It
Perform the necessary steps to reproduce the issue.
.It
Capture the firmware log.
.It
Stop capturing the firmware log.
.It
Reset the firmware log settings as needed.
.It
Work with Customer Support to debug the issue.
.El
.Pp
NOTE: Firmware logs are generated in a binary format and must be decoded by
Customer Support.
Information collected is related only to firmware and hardware for debug
purposes.
.Pp
Once the driver is loaded, it will create the
.Va fw_log
sysctl node under the debug section of the driver's sysctl list.
The driver groups these events into categories, called
.Dq modules .
Supported modules include:
.Pp
.Bl -tag -offset indent -compact -width "task_dispatch"
.It Va general
General (Bit 0)
.It Va ctrl
Control (Bit 1)
.It Va link
Link Management (Bit 2)
.It Va link_topo
Link Topology Detection (Bit 3)
.It Va dnl
Link Control Technology (Bit 4)
.It Va i2c
I2C (Bit 5)
.It Va sdp
SDP (Bit 6)
.It Va mdio
MDIO (Bit 7)
.It Va adminq
Admin Queue (Bit 8)
.It Va hdma
Host DMA (Bit 9)
.It Va lldp
LLDP (Bit 10)
.It Va dcbx
DCBx (Bit 11)
.It Va dcb
DCB (Bit 12)
.It Va xlr
XLR (function\-level resets; Bit 13)
.It Va nvm
NVM (Bit 14)
.It Va auth
Authentication (Bit 15)
.It Va vpd
Vital Product Data (Bit 16)
.It Va iosf
Intel On\-Chip System Fabric (Bit 17)
.It Va parser
Parser (Bit 18)
.It Va sw
Switch (Bit 19)
.It Va scheduler
Scheduler (Bit 20)
.It Va txq
TX Queue Management (Bit 21)
.It Va acl
ACL (Access Control List; Bit 22)
.It Va post
Post (Bit 23)
.It Va watchdog
Watchdog (Bit 24)
.It Va task_dispatch
Task Dispatcher (Bit 25)
.It Va mng
Manageability (Bit 26)
.It Va synce
SyncE (Bit 27)
.It Va health
Health (Bit 28)
.It Va tsdrv
Time Sync (Bit 29)
.It Va pfreg
PF Registration (Bit 30)
.It Va mdlver
Module Version (Bit 31)
.El
.Pp
The verbosity level of the firmware logs can be modified.
It is possible to set only one log level per module, and each level includes the
verbosity levels lower than it.
For instance, setting the level to
.Dq normal
will also log warning and error messages.
Available verbosity levels are:
.Pp
.Bl -item -offset indent -compact
.It
0 = none
.It
1 = error
.It
2 = warning
.It
3 = normal
.It
4 = verbose
.El
.Pp
To set the desired verbosity level for a module, use the following sysctl
command and then register it:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.debug.fw_log.severity.<module>=<level>
.Ed
.Pp
For example:
.Bd -literal -offset indent
sysctl dev.ice.0.debug.fw_log.severity.link=1
sysctl dev.ice.0.debug.fw_log.severity.link_topo=2
sysctl dev.ice.0.debug.fw_log.register=1
.Ed
.Pp
To log firmware messages after booting, but before the driver initializes, use
.Xr kenv 1
to set the tunable.
The
.Va on_load
setting tells the device to register the variable as soon as possible during
driver load.
For example:
.Bd -literal -offset indent
kenv dev.ice.0.debug.fw_log.severity.link=1
kenv dev.ice.0.debug.fw_log.severity.link_topo=2
kenv dev.ice.0.debug.fw_log.on_load=1
.Ed
.Pp
To view the firmware logs and redirect them to a file, use the following
command:
.Bd -literal -offset indent
dmesg > log_output
.Ed
.Pp
NOTE: Logging a large number of modules or too high of a verbosity level will
add extraneous messages to dmesg and could hinder debug efforts.
.Ss Debug Dump
Intel\(rg Ethernet 800 Series devices support debug dump, which allows
gathering runtime register values from the firmware for
.Dq clusters
of events and writing the results to a single dump file, for debugging
complicated issues in the field.
.Pp
This debug dump contains a snapshot of the device and its existing hardware
configuration, such as switch tables, transmit scheduler tables, and other
information.
Debug dump captures the current state of the specified cluster(s) and is a
stateless snapshot of the whole device.
.Pp
NOTE: As with firmware logs, the contents of the debug dump are not
human\-readable.
Work with Customer Support to decode the file.
.Pp
Debug dump is per device, not per PF.
.Pp
Debug dump writes all information to a single file.
.Pp
To generate a debug dump file in
.Fx ,
do the following:
.Pp
Specify the cluster(s) to include in the dump file, using a bitmask and the
following command:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.debug.dump.clusters=<bitmask>
.Ed
.Pp
To print the complete cluster bitmask and parameter list to the screen,
pass the
.Fl d
argument.
For example:
.Bd -literal -offset indent
sysctl \-d dev.ice.0.debug.dump.clusters
.Ed
.Pp
Possible bitmask values for
.Va clusters
are:
.Bl -bullet -compact
.It
0 \- Dump all clusters (only supported on Intel\(rg Ethernet E810 Series and
Intel\(rg Ethernet E830 Series)
.It
0x1 \- Switch
.It
0x2 \- ACL
.It
0x4 \- Tx Scheduler
.It
0x8 \- Profile Configuration
.It
0x20 \- Link
.It
0x80 \- DCB
.It
0x100 \- L2P
.It
0x400000 \- Manageability Transactions (only supported on Intel\(rg Ethernet
E810 Series)
.El
.Pp
For example, to dump the Switch, DCB, and L2P clusters, use the following:
.Bd -literal -offset indent
sysctl dev.ice.0.debug.dump.clusters=0x181
.Ed
.Pp
To dump all clusters, use the following:
.Bd -literal -offset indent
sysctl dev.ice.0.debug.dump.clusters=0
.Ed
.Pp
NOTE: Using 0 will skip Manageability Transactions data.
.Pp
If a single cluster is not specified, the driver will dump all clusters to a
single file.
Issue the debug dump command, using the following:
.Bd -literal -offset indent
sysctl \-b dev.ice.<interface #>.debug.dump.dump=1 > dump.bin
.Ed
.Pp
NOTE: The driver will not receive the command if the sysctl is not set to
.Dq 1 .
.Pp
Replace
.Dq dump.bin
above with the preferred file name.
.Pp
To clear the
.Va clusters
mask before a subsequent debug dump and then do the dump:
.Bd -literal -offset indent
sysctl dev.ice.0.debug.dump.clusters=0
sysctl dev.ice.0.debug.dump.dump=1
.Ed
.Ss Debugging PHY Statistics
The
.Nm
driver supports the ability to obtain the values of the PHY registers
from Intel\(rg Ethernet 810 Series devices in order to debug link and
connection issues during runtime.
.Pp
The driver provides information about:
.Bl -bullet
.It
Rx and Tx Equalization parameters
.It
RS FEC correctable and uncorrectable block counts
.El
.Pp
Use the following sysctl to read the PHY registers:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.debug.phy_statistics
.Ed
.Pp
NOTE: The contents of the registers are not human\-readable.
As with firmware logs and debug dump, work with Customer Support
to decode the file.
.Ss Transmit Balancing
Some Intel\(rg Ethernet 800 Series devices allow for enabling a transmit
balancing feature to improve transmit performance under certain conditions.
When enabled, the feature should provide more consistent transmit
performance across queues and/or PFs and VFs.
.Pp
By default, transmit balancing is disabled in the NVM.
To enable this feature, use one of the following to persistently change the
setting for the device:
.Bl -bullet
.It
Use the Ethernet Port Configuration Tool (EPCT) to enable the
.Va tx_balancing
option.
Refer to the EPCT readme for more information.
.It
Enable the Transmit Balancing device setting in UEFI HII.
.El
.Pp
When the driver loads, it reads the transmit balancing setting from the NVM and
configures the device accordingly.
.Pp
NOTE: The user selection for transmit balancing in EPCT or HII is persistent
across reboots.
The system must be rebooted for the selected setting to take effect.
.Pp
This setting is device wide.
.Pp
The driver, NVM, and DDP package must all support this functionality to
enable the feature.
.Ss Thermal Monitoring
Intel\(rg Ethernet 810 Series and Intel\(rg Ethernet 830 Series devices can
display temperature data (in degrees Celsius) via:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.temp
.Ed
.Ss Network Memory Buffer Allocation
.Fx
may have a low number of network memory buffers (mbufs) by default.
If the number of mbufs available is too low, it may cause the driver to fail
to initialize and/or cause the system to become unresponsive.
Check to see if the system is mbuf\-starved by running
.Ic netstat Fl m .
Increase the number of mbufs by editing the lines below in
.Pa /etc/sysctl.conf :
.Bd -literal -offset indent
kern.ipc.nmbclusters
kern.ipc.nmbjumbop
kern.ipc.nmbjumbo9
kern.ipc.nmbjumbo16
kern.ipc.nmbufs
.Ed
.Pp
The amount of memory that should be allocated is system specific, and may
require some trial and error.
Also, increasing the following in
.Pa /etc/sysctl.conf
could help increase network performance:
.Bd -literal -offset indent
kern.ipc.maxsockbuf
net.inet.tcp.sendspace
net.inet.tcp.recvspace
net.inet.udp.maxdgram
net.inet.udp.recvspace
.Ed
.Ss Additional Utilities
There are additional tools available from Intel to help configure and update
the adapters covered by this driver.
These tools can be downloaded directly from Intel at
.Lk https://downloadcenter.intel.com ,
by searching for their names:
.Bl -bullet
.It
To change the behavior of the QSFP28 ports on E810\-C adapters, use the Intel
.Sy Ethernet Port Configuration Tool \- FreeBSD .
.It
To update the firmware on an adapter, use the Intel
.Sy Non-Volatile Memory (NVM) Update Utility for Intel Ethernet Network Adapters E810 series \- FreeBSD .
.El
.Sh HARDWARE
The
.Nm
driver supports the Intel Ethernet 800 series.
Some adapters in this series with SFP28/QSFP28 cages
have firmware that requires that Intel qualified modules are used; these
qualified modules are listed below.
This qualification check cannot be disabled by the driver.
.Pp
The
.Nm
driver supports 100Gb Ethernet adapters with these QSFP28 modules:
.Pp
.Bl -bullet -compact
.It
Intel\(rg 100G QSFP28 100GBASE-SR4 E100GQSFPSR28SRX
.It
Intel\(rg 100G QSFP28 100GBASE-SR4 SPTMBP1PMCDF
.It
Intel\(rg 100G QSFP28 100GBASE-CWDM4 SPTSBP3CLCCO
.It
Intel\(rg 100G QSFP28 100GBASE-DR SPTSLP2SLCDF
.El
.Pp
The
.Nm
driver supports 25Gb and 10Gb Ethernet adapters with these SFP28 modules:
.Pp
.Bl -bullet -compact
.It
Intel\(rg 10G/25G SFP28 25GBASE-SR E25GSFP28SR
.It
Intel\(rg 25G SFP28 25GBASE-SR E25GSFP28SRX (Extended Temp)
.It
Intel\(rg 25G SFP28 25GBASE-LR E25GSFP28LRX (Extended Temp)
.El
.Pp
The
.Nm
driver supports 10Gb and 1Gb Ethernet adapters with these SFP+ modules:
.Pp
.Bl -bullet -compact
.It
Intel\(rg 1G/10G SFP+ 10GBASE-SR E10GSFPSR
.It
Intel\(rg 1G/10G SFP+ 10GBASE-SR E10GSFPSRG1P5
.It
Intel\(rg 1G/10G SFP+ 10GBASE-SR E10GSFPSRG2P5
.It
Intel\(rg 10G SFP+ 10GBASE-SR E10GSFPSRX (Extended Temp)
.It
Intel\(rg 1G/10G SFP+ 10GBASE-LR E10GSFPLR
.El
.Pp
Note that adapters also support all passive and active
limiting direct attach cables that comply with SFF-8431 v4.1 and
SFF-8472 v10.4 specifications.
.Pp
This is not an exhaustive list; please consult product documentation for an
up-to-date list of supported media.
.Ss Fiber optics and auto\-negotiation
Modules based on 100GBASE\-SR4, active optical cable (AOC), and active copper
cable (ACC) do not support auto\-negotiation per the IEEE specification.
To obtain link with these modules, auto\-negotiation must be turned off on the
link partner's switch ports.
.Ss PCI-Express Slot Bandwidth
Some PCIe x8 slots are actually configured as x4 slots.
These slots have insufficient bandwidth for full line rate with dual port and
quad port devices.
In addition, if a PCIe v4.0 or v3.0\-capable adapter is placed into a PCIe v2.x
slot, full bandwidth will not be possible.
.Pp
The driver detects this situation and writes the following message in the
system log:
.Bd -literal -offset indent
PCI\-Express bandwidth available for this device may be insufficient for
optimal performance.
Please move the device to a different PCI\-e link with more lanes and/or
higher transfer rate.
.Ed
.Pp
If this error occurs, moving the adapter to a true PCIe x8 or x16 slot will
resolve the issue.
For best performance, install devices in the following PCI slots:
.Bl -bullet
.It
Any 100Gbps\-capable Intel\(rg Ethernet 800 Series device: Install in a
PCIe v4.0 x8 or v3.0 x16 slot
.It
A 200Gbps\-capable Intel\(rg Ethernet 830 Series device: Install in a
PCIe v5.0 x8 or v4.0 x16 slot
.El
.Sh LOADER TUNABLES
Tunables can be set at the
.Xr loader 8
prompt before booting the kernel or stored in
.Xr loader.conf 5 .
See the
.Xr iflib 4
man page for more information on using iflib sysctl variables as tunables.
.Bl -tag -width indent
.It Va hw.ice.enable_health_events
Set to 1 to enable firmware health event reporting across all devices.
Enabled by default.
.Pp
If enabled, when the driver receives a firmware health event message, it will
print out a description of the event to the kernel message buffer and, if
applicable, possible actions to take to remedy it.
.It Va hw.ice.irdma
Set to 1 to enable the RDMA client interface, required by the
.Xr irdma 4
driver.
Enabled by default.
.It Va hw.ice.irdma_max_msix
Set the maximum number of per-device MSI-X vectors that are allocated for use
by the
.Xr irdma 4
driver.
Set to 64 by default.
.It Va hw.ice.debug.enable_tx_fc_filter
Set to 1 to enable the TX Flow Control filter across all devices.
Enabled by default.
.Pp
If enabled, the hardware will drop any transmitted Ethertype 0x8808 control
frames that do not originate from the hardware.
.It Va hw.ice.debug.enable_tx_lldp_filter
Set to 1 to enable the TX LLDP filter across all devices.
Enabled by default.
.Pp
If enabled, the hardware will drop any transmitted Ethertype 0x88cc LLDP frames
that do not originate from the hardware.
This must be disabled in order to use LLDP daemon software such as
.Xr lldpd 8 .
.It Va hw.ice.debug.ice_tx_balance_en
Set to 1 to allow the driver to use the 5-layer Tx Scheduler tree topology if
configured by the DDP package.
Enabled by default.
.El
.Sh SYSCTL VARIABLES
.Bl -tag -width indent
.It Va dev.ice.#.current_speed
Displays the current link speed of the interface.
This is expected to match the speed of the media type in use displayed by
.Xr ifconfig 8 .
.It Va dev.ice.#.fw_version
Displays the current firmware and NVM versions of the adapter.
This information should be submitted along with any support requests.
.It Va dev.ice.#.ddp_version
Displays the current DDP package version downloaded to the adapter.
This information should be submitted along with any support requests.
.It Va dev.ice.#.pba_number
Displays the Product Board Assembly Number.
May be used to help identify the type of adapter in use.
This sysctl may not exist depending on the adapter type.
.It Va dev.ice.#.hw.mac.*
This sysctl tree contains statistics collected by the hardware for the port.
.El
.Sh INTERRUPT STORMS
Note that 100G operation can generate a high number of interrupts, which the
kernel may incorrectly interpret as an interrupt storm condition.
This can be resolved by setting
.Va hw.intr_storm_threshold
to 0.
.Sh IOVCTL OPTIONS
The driver supports additional optional parameters for created VFs
(Virtual Functions) when using
.Xr iovctl 8 :
.Bl -tag -width indent
.It mac-addr Pq unicast-mac
Set the Ethernet MAC address that the VF will use.
If unspecified, the VF will use a randomly generated MAC address and
.Dq allow-set-mac
will be set to true.
.It mac-anti-spoof Pq bool
Prevent the VF from sending Ethernet frames with a source address
that does not match its own.
Enabled by default.
.It allow-set-mac Pq bool
Allow the VF to set its own Ethernet MAC address.
Disallowed by default.
.It allow-promisc Pq bool
Allow the VF to inspect all of the traffic sent to the port that it is created
on.
Disabled by default.
.It num-queues Pq uint16_t
Specify the number of queues the VF will have.
By default, this is set to the number of MSI\-X vectors supported by the VF
minus one.
.It mirror-src-vsi Pq uint16_t
Specify which VSI the VF will mirror traffic from by setting this to a value
other than \-1.
All traffic from that VSI will be mirrored to this VF.
This provides an alternative way to mirror RDMA traffic to another interface
besides the method described in the
.Sx RDMA Monitoring
section.
Not affected by the
.Dq allow-promisc
parameter.
.It max-vlan-allowed Pq uint16_t
Specify the maximum number of VLAN filters that the VF can use.
Receiving traffic on a VLAN requires a hardware filter; these filters are a
finite resource, and this limit prevents a VF from starving other VFs or the
PF of filter resources.
By default, this is set to 16.
.It max-mac-filters Pq uint16_t
Specify the maximum number of MAC address filters that the VF can use.
Each allowed MAC address requires a hardware filter; these filters are a
finite resource, and this limit prevents a VF from starving other VFs or the
PF of filter resources.
The VF's default MAC address does not count towards this limit.
By default, this is set to 64.
.El
.Pp
An up\-to\-date list of parameters and their defaults can be found by using
.Xr iovctl 8
with the
.Fl S
option.
.Pp
For more information on standard and mandatory parameters, see
.Xr iovctl.conf 5 .
.Sh SUPPORT
For general information and support, go to the Intel support website at:
.Lk http://www.intel.com/support/ .
.Pp
If an issue is identified with this driver with a supported adapter,
email all the specific information related to the issue to
.Aq Mt freebsd@intel.com .
.Sh SEE ALSO
.Xr iflib 4 ,
.Xr vlan 4 ,
.Xr ifconfig 8 ,
.Xr sysctl 8
.Sh HISTORY
The
.Nm
device driver first appeared in
.Fx 12.2 .
.Sh AUTHORS
The
.Nm
driver was written by
.An Intel Corporation Aq Mt freebsd@intel.com .
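.Sh EXAMPLES
The following fragments are illustrative sketches only, using the tunable and
parameter names documented above; the values shown are examples, not
recommendations.
To disable the TX LLDP filter so that an LLDP daemon such as
.Xr lldpd 8
can transmit LLDP frames, add the following line to
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.ice.debug.enable_tx_lldp_filter="0"
.Ed
.Pp
An example
.Xr iovctl.conf 5
fragment creating two VFs with the optional parameters described above; the
interface name
.Dq ice0
and the MAC address are assumptions for illustration:
.Bd -literal -offset indent
PF {
	device : "ice0";
	num_vfs : 2;
}

DEFAULT {
	mac-anti-spoof : true;
	max-vlan-allowed : 16;
}

VF-0 {
	mac-addr : "02:56:48:7e:d9:f8";
	allow-set-mac : true;
}
.Ed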