.\"
.\" SPDX-License-Identifier: BSD-3-Clause
.\"
.\" Copyright (c) 2019-2020, Intel Corporation
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms of the Software, with or
.\" without modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright notice,
.\"    this list of conditions and the following disclaimer.
.\"
.\" 2. Redistributions in binary form must reproduce the above copyright notice,
.\"    this list of conditions and the following disclaimer in the documentation
.\"    and/or other materials provided with the distribution.
.\"
.\" 3. Neither the name of the Intel Corporation nor the names of its
.\"    contributors may be used to endorse or promote products derived from
.\"    this Software without specific prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
.\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
.\" IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
.\" ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
.\" LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGE.
.\"
.\" * Other names and brands may be claimed as the property of others.
.\"
.Dd November 5, 2025
.Dt ICE 4
.Os
.Sh NAME
.Nm ice
.Nd Intel Ethernet 800 Series 1GbE to 200GbE driver
.Sh SYNOPSIS
.Cd device iflib
.Cd device ice
.Pp
In
.Xr loader.conf 5 :
.Cd if_ice_load
.Cd hw.ice.enable_health_events
.Cd hw.ice.irdma
.Cd hw.ice.irdma_max_msix
.Cd hw.ice.debug.enable_tx_fc_filter
.Cd hw.ice.debug.enable_tx_lldp_filter
.Cd hw.ice.debug.ice_tx_balance_en
.Pp
In
.Xr sysctl.conf 5
or
.Xr loader.conf 5 :
.Cd dev.ice.#.current_speed
.Cd dev.ice.#.fw_version
.Cd dev.ice.#.ddp_version
.Cd dev.ice.#.pba_number
.Cd dev.ice.#.hw.mac.*
.Sh DESCRIPTION
The
.Nm
driver provides support for any PCI Express adapter or LOM
.Pq LAN On Motherboard
in the Intel Ethernet 800 Series.
.Pp
The following topics are covered in this manual:
.Pp
.Bl -bullet -compact
.It
.Sx Features
.It
.Sx Dynamic Device Personalization
.It
.Sx Jumbo Frames
.It
.Sx Remote Direct Memory Access
.It
.Sx RDMA Monitoring
.It
.Sx Data Center Bridging
.It
.Sx L3 QoS Mode
.It
.Sx Firmware Link Layer Discovery Protocol Agent
.It
.Sx Link-Level Flow Control
.It
.Sx Forward Error Correction
.It
.Sx Speed and Duplex Configuration
.It
.Sx Disabling physical link when the interface is brought down
.It
.Sx Firmware Logging
.It
.Sx Debug Dump
.It
.Sx Debugging PHY Statistics
.It
.Sx Transmit Balancing
.It
.Sx Thermal Monitoring
.It
.Sx Network Memory Buffer Allocation
.It
.Sx Additional Utilities
.It
.Sx Optics and auto-negotiation
.It
.Sx PCI-Express Slot Bandwidth
.It
.Sx HARDWARE
.It
.Sx LOADER TUNABLES
.It
.Sx SYSCTL VARIABLES
.It
.Sx INTERRUPT STORMS
.It
.Sx IOVCTL OPTIONS
.It
.Sx SUPPORT
.It
.Sx SEE ALSO
.It
.Sx HISTORY
.El
.Ss Features
Support for Jumbo Frames is provided via the interface MTU setting.
Selecting an MTU larger than 1500 bytes with the
.Xr ifconfig 8
utility configures the adapter to receive and transmit Jumbo Frames.
The maximum MTU size for Jumbo Frames is 9706.
For more information, see the
.Sx Jumbo Frames
section.
.Pp
This driver version supports VLANs.
For information on enabling VLANs, see
.Xr vlan 4 .
For additional information on configuring VLANs, see
.Xr ifconfig 8 Ap s
.Dq VLAN Parameters
section.
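.Pp
For example, a VLAN interface with tag 100 may be created on top of the
first port as follows; the interface names here are illustrative:
.Bd -literal -offset indent
ifconfig vlan100 create vlan 100 vlandev ice0
.Ed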
.Pp
Offloads are also controlled via the interface; for instance, IPv4 and IPv6
checksum offloads, TSO4 and TSO6, and LRO can each be enabled and disabled.
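.Pp
For example, TSO and LRO may be toggled with
.Xr ifconfig 8 ;
the interface name is illustrative:
.Bd -literal -offset indent
ifconfig ice0 tso4 tso6 lro
ifconfig ice0 -tso4 -tso6 -lro
.Ed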
.Pp
For more information on configuring this device, see
.Xr ifconfig 8 .
.Pp
The associated Virtual Function (VF) driver for this driver is
.Xr iavf 4 .
.Pp
The associated RDMA driver for this driver is
.Xr irdma 4 .
.Ss Dynamic Device Personalization
The DDP package loads during device initialization.
The driver looks for the
.Sy ice_ddp
module and checks that it contains a valid DDP package file.
.Pp
If the driver is unable to load the DDP package, the device will enter Safe
Mode.
Safe Mode disables advanced and performance features and supports only
basic traffic and minimal functionality, such as updating the NVM or
downloading a new driver or DDP package.
Safe Mode only applies to the affected physical function and does not impact
any other PFs.
See the
.Dq Intel Ethernet Adapters and Devices User Guide
for more details on DDP and Safe Mode.
.Pp
If issues are encountered with the DDP package file, an updated driver or
.Sy ice_ddp
module may need to be downloaded.
See the log messages for more information.
.Pp
The DDP package cannot be updated if any PF drivers are already loaded.
To overwrite a package, unload all PFs and then reload the driver with the
new package.
.Pp
Only one DDP package can be used per driver,
even if more than one installed device uses the driver.
.Pp
Only the first loaded PF per device can download a package for that device.
.Ss Jumbo Frames
Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
to a value larger than the default value of 1500.
.Pp
Use
.Xr ifconfig 8
to increase the MTU size.
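.Pp
For example, assuming the first port:
.Bd -literal -offset indent
ifconfig ice0 mtu 9000
.Ed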
.Pp
The maximum MTU setting for jumbo frames is 9706.
This corresponds to the maximum jumbo frame size of 9728 bytes.
.Pp
This driver will attempt to use multiple page sized buffers to receive
each jumbo packet.
This should help to avoid buffer starvation issues when allocating receive
packets.
.Pp
Packet loss may have a greater impact on throughput when jumbo frames are in
use.
If a drop in performance is observed after enabling jumbo frames, enabling
flow control may mitigate the issue.
.Ss Remote Direct Memory Access
Remote Direct Memory Access, or RDMA, allows a network device to transfer data
directly to and from application memory on another system, increasing
throughput and lowering latency in certain networking environments.
.Pp
The ice driver supports both the iWARP (Internet Wide Area RDMA Protocol) and
RoCEv2 (RDMA over Converged Ethernet) protocols.
The major difference is that iWARP performs RDMA over TCP, while RoCEv2 uses
UDP.
.Pp
Devices based on the Intel Ethernet 800 Series do not support RDMA when
operating in multiport mode with more than 4 ports.
.Pp
For detailed installation and configuration information for RDMA, see
.Xr irdma 4 .
.Ss RDMA Monitoring
For debugging/testing purposes, a sysctl can be used to set up a mirroring
interface on a port.
The interface can receive mirrored RDMA traffic for packet
analysis tools like
.Xr tcpdump 1 .
This mirroring may impact performance.
.Pp
To use RDMA monitoring, more MSI-X interrupts may need to be reserved.
Before the
.Nm
driver loads, configure the following tunable provided by
.Xr iflib 4 :
.Bd -literal -offset indent
dev.ice.<interface #>.iflib.use_extra_msix_vectors=4
.Ed
.Pp
The number of extra MSI-X interrupt vectors may need to be adjusted.
.Pp
To create/delete the interface:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.create_interface=1
sysctl dev.ice.<interface #>.delete_interface=1
.Ed
.Pp
The mirrored interface receives both LAN and RDMA traffic.
Additional filters can be configured in tcpdump.
.Pp
To differentiate the mirrored interface from the primary interface, the network
interface naming convention is:
.Bd -literal -offset indent
<driver name><port number><modifier><modifier unit number>
.Ed
.Pp
For example,
.Dq Li ice0m0
is the first mirroring interface on
.Dq Li ice0 .
.Ss Data Center Bridging
Data Center Bridging (DCB) is a configuration Quality of Service
implementation in hardware.
It uses the VLAN priority tag (802.1p) to filter traffic.
That means that there are 8 different priorities that traffic can be filtered
into.
It also enables priority flow control (802.1Qbb) which can limit or eliminate
the number of dropped packets during network stress.
Bandwidth can be allocated to each of these priorities, which is enforced at
the hardware level (802.1Qaz).
.Pp
DCB is normally configured on the network using the DCBX protocol (802.1Qaz), a
specialization of LLDP (802.1AB).
The
.Nm
driver supports the following mutually exclusive variants of DCBX support:
.Pp
.Bl -bullet -compact
.It
Firmware-based LLDP Agent
.It
Software-based LLDP Agent
.El
.Pp
In firmware-based mode, firmware intercepts all LLDP traffic and handles DCBX
negotiation transparently for the user.
In this mode, the adapter operates in
.Dq willing
DCBX mode, receiving DCB settings from the link partner (typically a
switch).
The local user can only query the negotiated DCB configuration.
For information on configuring DCBX parameters on a switch, please consult the
switch manufacturer's documentation.
.Pp
In software-based mode, LLDP traffic is forwarded to the network stack and user
space, where a software agent can handle it.
In this mode, the adapter can operate in
.Dq nonwilling
DCBX mode and DCB configuration can be both queried and set locally.
This mode requires the FW-based LLDP Agent to be disabled.
.Pp
Firmware-based mode and software-based mode are controlled by the
.Dq fw_lldp_agent
sysctl.
Refer to the
.Sx Firmware Link Layer Discovery Protocol Agent
section for more information.
.Pp
Link-level flow control and priority flow control are mutually exclusive.
The ice driver will disable link flow control when priority flow control
is enabled on any traffic class (TC).
It will disable priority flow control when link flow control is enabled.
.Pp
To enable/disable priority flow control in software-based DCBX mode:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.pfc=1 (or 0 to disable)
.Ed
.Pp
Enhanced Transmission Selection (ETS) allows bandwidth to be assigned to certain
TCs, to help ensure traffic reliability.
To view the assigned ETS configuration, use the following:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.ets_min_rate
.Ed
.Pp
To set the minimum ETS bandwidth per TC, separate the values by commas.
All values must add up to 100.
For example, to set all TCs to a minimum bandwidth of 10% and TC 7 to 30%,
use the following:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.ets_min_rate=10,10,10,10,10,10,10,30
.Ed
.Pp
To set the User Priority (UP) to a TC mapping for a port, separate the values
by commas.
For example, to map UP 0 and 1 to TC 0, UP 2 and 3 to TC 1, UP 4 and
5 to TC 2, and UP 6 and 7 to TC 3, use the following:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.up2tc_map=0,0,1,1,2,2,3,3
.Ed
.Ss L3 QoS Mode
The
.Nm
driver supports setting DSCP-based Layer 3 Quality of Service (L3 QoS)
in the PF driver.
The driver initializes in L2 QoS mode by default; L3 QoS is disabled by
default.
Use the following sysctl to enable or disable L3 QoS:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.pfc_mode=1 (or 0 to disable)
.Ed
.Pp
If L3 QoS mode is disabled, it returns to L2 QoS mode.
.Pp
To map a DSCP value to a traffic class, separate the values by commas.
For example, to map DSCPs 0-3 and DSCP 8 to DCB TCs 0-3 and 4, respectively:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.dscp2tc_map.0-7=0,1,2,3,0,0,0,0
sysctl dev.ice.<interface #>.dscp2tc_map.8-15=4,0,0,0,0,0,0,0
.Ed
.Pp
To change the DSCP mapping back to the default traffic class, set all the
values back to 0.
.Pp
To view the currently configured mappings, use the following:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.dscp2tc_map
.Ed
.Pp
L3 QoS mode is not available when FW-LLDP is enabled.
.Pp
FW-LLDP cannot be enabled if L3 QoS mode is active.
.Pp
Disable FW-LLDP before switching to L3 QoS mode.
.Pp
Refer to the
.Sx Firmware Link Layer Discovery Protocol Agent
section for more information on disabling FW-LLDP.
.Ss Firmware Link Layer Discovery Protocol Agent
Use sysctl to change FW-LLDP settings.
The FW-LLDP setting is per port and persists across boots.
.Pp
To enable the FW-LLDP Agent:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.fw_lldp_agent=1
.Ed
.Pp
To disable the FW-LLDP Agent:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.fw_lldp_agent=0
.Ed
.Pp
To check the current LLDP setting:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.fw_lldp_agent
.Ed
.Pp
The UEFI HII LLDP Agent attribute must be enabled for this setting
to take effect.
If the
.Dq LLDP AGENT
attribute is set to disabled, the FW-LLDP Agent cannot be enabled from the
driver.
.Ss Link-Level Flow Control
Ethernet Flow Control
.Pq IEEE 802.3x or LFC
can be configured with
.Xr sysctl 8
to enable receiving and transmitting pause frames for
.Nm .
When transmit is enabled, pause frames are generated when the receive packet
buffer crosses a predefined threshold.
When receive is enabled, the transmit unit will halt for the time delay
specified in the firmware when a pause frame is received.
.Pp
Flow Control is disabled by default.
.Pp
Use sysctl to change the flow control settings for a single interface without
reloading the driver:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.fc
.Ed
.Pp
The available values for flow control are:
.Bd -literal -offset indent
0 = Disable flow control
1 = Enable Rx pause
2 = Enable Tx pause
3 = Enable Rx and Tx pause
.Ed
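.Pp
For example, to enable both receiving and transmitting pause frames on the
first port:
.Bd -literal -offset indent
sysctl dev.ice.0.fc=3
.Ed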
.Pp
Verify that link flow control was negotiated on the link by checking the
interface entry in
.Xr ifconfig 8
and looking for the flags
.Dq txpause
and/or
.Dq rxpause
in the
.Dq media
status.
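.Pp
For example, assuming the first port:
.Bd -literal -offset indent
ifconfig ice0 | grep media
.Ed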
.Pp
The
.Nm
driver requires flow control on both the port and link partner.
If flow control is disabled on one of the sides, the port may appear to
hang on heavy traffic.
.Pp
For more information on priority flow control, refer to the
.Sx Data Center Bridging
section.
.Pp
The VF driver does not have access to flow control.
It must be managed from the host side.
.Ss Forward Error Correction
Forward Error Correction (FEC) improves link stability but increases latency.
Many high quality optics, direct attach cables, and backplane channels can
provide a stable link without FEC.
.Pp
For devices to benefit from this feature, link partners must have FEC enabled.
.Pp
If the
.Va allow_no_fec_modules_in_auto
sysctl is enabled, Auto FEC negotiation will include
.Dq No FEC
in case the link partner does not have FEC enabled or is not FEC capable:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.allow_no_fec_modules_in_auto=1
.Ed
.Pp
NOTE: This flag is currently not supported on the Intel Ethernet 830
Series.
.Pp
To show the current FEC settings that are negotiated on the link:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.negotiated_fec
.Ed
.Pp
To view or set the FEC setting that was requested on the link:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.requested_fec
.Ed
.Pp
To see the valid FEC modes for the link:
.Bd -literal -offset indent
sysctl -d dev.ice.<interface #>.requested_fec
.Ed
.Ss Speed and Duplex Configuration
The speed and duplex settings cannot be hard set.
.Pp
To change the speeds the device will advertise during auto-negotiation, or to
force link at a specific speed, use:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.advertise_speed=<mask>
.Ed
.Pp
Supported speeds will vary by device.
Depending on the speeds the device supports, valid bits used in a speed mask
could include:
.Bd -literal -offset indent
0x0 - Auto
0x2 - 100 Mbps
0x4 - 1 Gbps
0x8 - 2.5 Gbps
0x10 - 5 Gbps
0x20 - 10 Gbps
0x80 - 25 Gbps
0x100 - 40 Gbps
0x200 - 50 Gbps
0x400 - 100 Gbps
0x800 - 200 Gbps
.Ed
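.Pp
The mask is a bitwise OR of the desired speed bits.
For example, to advertise both 25 Gbps (0x80) and 100 Gbps (0x400) on the
first port:
.Bd -literal -offset indent
sysctl dev.ice.0.advertise_speed=0x480
.Ed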
.Ss Disabling physical link when the interface is brought down
When the
.Va link_active_on_if_down
sysctl is set to
.Dq 0 ,
the port's link will go down when the interface is brought down.
By default, link will stay up.
.Pp
To disable link when the interface is down:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.link_active_on_if_down=0
.Ed
.Ss Firmware Logging
The
.Nm
driver allows for the generation of firmware logs for supported categories of
events, to help debug issues with Customer Support.
Refer to the
.Dq Intel Ethernet Adapters and Devices User Guide
for an overview of this feature and additional tips.
.Pp
At a high level, to capture a firmware log:
.Bl -enum -compact
.It
Set the configuration for the firmware log.
.It
Perform the necessary steps to reproduce the issue.
.It
Capture the firmware log.
.It
Stop capturing the firmware log.
.It
Reset the firmware log settings as needed.
.It
Work with Customer Support to debug the issue.
.El
.Pp
NOTE: Firmware logs are generated in a binary format and must be decoded by
Customer Support.
Information collected is related only to firmware and hardware for debug
purposes.
.Pp
Once the driver is loaded, it will create the
.Va fw_log
sysctl node under the debug section of the driver's sysctl list.
The driver groups these events into categories, called
.Dq modules .
Supported modules include:
.Pp
.Bl -tag -offset indent -compact -width "task_dispatch"
.It Va general
General (Bit 0)
.It Va ctrl
Control (Bit 1)
.It Va link
Link Management (Bit 2)
.It Va link_topo
Link Topology Detection (Bit 3)
.It Va dnl
Link Control Technology (Bit 4)
.It Va i2c
I2C (Bit 5)
.It Va sdp
SDP (Bit 6)
.It Va mdio
MDIO (Bit 7)
.It Va adminq
Admin Queue (Bit 8)
.It Va hdma
Host DMA (Bit 9)
.It Va lldp
LLDP (Bit 10)
.It Va dcbx
DCBx (Bit 11)
.It Va dcb
DCB (Bit 12)
.It Va xlr
XLR (function-level resets; Bit 13)
.It Va nvm
NVM (Bit 14)
.It Va auth
Authentication (Bit 15)
.It Va vpd
Vital Product Data (Bit 16)
.It Va iosf
Intel On-Chip System Fabric (Bit 17)
.It Va parser
Parser (Bit 18)
.It Va sw
Switch (Bit 19)
.It Va scheduler
Scheduler (Bit 20)
.It Va txq
TX Queue Management (Bit 21)
.It Va acl
ACL (Access Control List; Bit 22)
.It Va post
Post (Bit 23)
.It Va watchdog
Watchdog (Bit 24)
.It Va task_dispatch
Task Dispatcher (Bit 25)
.It Va mng
Manageability (Bit 26)
.It Va synce
SyncE (Bit 27)
.It Va health
Health (Bit 28)
.It Va tsdrv
Time Sync (Bit 29)
.It Va pfreg
PF Registration (Bit 30)
.It Va mdlver
Module Version (Bit 31)
.El
.Pp
The verbosity level of the firmware logs can be modified.
It is possible to set only one log level per module, and each level includes the
verbosity levels lower than it.
For instance, setting the level to
.Dq normal
will also log warning and error messages.
Available verbosity levels are:
.Pp
.Bl -item -offset indent -compact
.It
0 = none
.It
1 = error
.It
2 = warning
.It
3 = normal
.It
4 = verbose
.El
.Pp
To set the desired verbosity level for a module, use the following sysctl
command and then register it:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.debug.fw_log.severity.<module>=<level>
.Ed
.Pp
For example:
.Bd -literal -offset indent
sysctl dev.ice.0.debug.fw_log.severity.link=1
sysctl dev.ice.0.debug.fw_log.severity.link_topo=2
sysctl dev.ice.0.debug.fw_log.register=1
.Ed
.Pp
To log firmware messages after booting, but before the driver initializes, use
.Xr kenv 1
to set the tunable.
The
.Va on_load
setting tells the device to register the variable as soon as possible during
driver load.
For example:
.Bd -literal -offset indent
kenv dev.ice.0.debug.fw_log.severity.link=1
kenv dev.ice.0.debug.fw_log.severity.link_topo=2
kenv dev.ice.0.debug.fw_log.on_load=1
.Ed
.Pp
To view the firmware logs and redirect them to a file, use the following
command:
.Bd -literal -offset indent
dmesg > log_output
.Ed
.Pp
NOTE: Logging a large number of modules or too high a verbosity level will
add extraneous messages to dmesg and could hinder debug efforts.
.Ss Debug Dump
Intel Ethernet 800 Series devices support debug dump,
which allows gathering of runtime register values from the firmware for
.Dq clusters
of events and then writing the results to a single dump file, for debugging
complicated issues in the field.
.Pp
This debug dump contains a snapshot of the device and its existing hardware
configuration, such as switch tables, transmit scheduler tables, and other
information.
Debug dump captures the current state of the specified cluster(s) and is a
stateless snapshot of the whole device.
.Pp
NOTE: Like with firmware logs, the contents of the debug dump are not
human-readable.
Work with Customer Support to decode the file.
.Pp
Debug dump is per device, not per PF.
.Pp
Debug dump writes all information to a single file.
.Pp
To generate a debug dump file in
.Fx
do the following:
.Pp
Specify the cluster(s) to include in the dump file, using a bitmask and the
following command:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.debug.dump.clusters=<bitmask>
.Ed
.Pp
To print the complete cluster bitmask and parameter list to the screen,
pass the
.Fl d
argument.
For example:
.Bd -literal -offset indent
sysctl -d dev.ice.0.debug.dump.clusters
.Ed
.Pp
Possible bitmask values for
.Va clusters
are:
.Bl -bullet -compact
.It
0 - Dump all clusters (only supported on Intel Ethernet E810 Series and
Intel Ethernet E830 Series)
.It
0x1 - Switch
.It
0x2 - ACL
.It
0x4 - Tx Scheduler
.It
0x8 - Profile Configuration
.It
0x20 - Link
.It
0x80 - DCB
.It
0x100 - L2P
.It
0x400000 - Manageability Transactions (only supported on Intel Ethernet
E810 Series)
.El
.Pp
For example, to dump the Switch, DCB, and L2P clusters, use the following:
.Bd -literal -offset indent
sysctl dev.ice.0.debug.dump.clusters=0x181
.Ed
.Pp
To dump all clusters, use the following:
.Bd -literal -offset indent
sysctl dev.ice.0.debug.dump.clusters=0
.Ed
.Pp
NOTE: Using 0 will skip Manageability Transactions data.
.Pp
If a single cluster is not specified,
the driver will dump all clusters to a single file.
Issue the debug dump command, using the following:
.Bd -literal -offset indent
sysctl -b dev.ice.<interface #>.debug.dump.dump=1 > dump.bin
.Ed
.Pp
NOTE: The driver will not receive the command if the sysctl is not set to
.Dq 1 .
.Pp
Replace
.Dq dump.bin
above with the preferred file name.
.Pp
To clear the
.Va clusters
mask before a subsequent debug dump and then do the dump:
.Bd -literal -offset indent
sysctl dev.ice.0.debug.dump.clusters=0
sysctl dev.ice.0.debug.dump.dump=1
.Ed
.Ss Debugging PHY Statistics
The ice driver supports the ability to obtain the values of the PHY registers
from Intel(R) Ethernet 810 Series devices in order to debug link and
connection issues during runtime.
.Pp
The driver provides information about:
.Bl -bullet
.It
Rx and Tx Equalization parameters
.It
RS FEC correctable and uncorrectable block counts
.El
.Pp
Use the following sysctl to read the PHY registers:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.debug.phy_statistics
.Ed
.Pp
NOTE: The contents of the registers are not human-readable.
Like with firmware logs and debug dump, work with Customer Support
to decode the file.
.Ss Transmit Balancing
Some Intel(R) Ethernet 800 Series devices allow for enabling a transmit
balancing feature to improve transmit performance under certain conditions.
When enabled, this feature should provide more consistent transmit
performance across queues and/or PFs and VFs.
.Pp
By default, transmit balancing is disabled in the NVM.
To enable this feature, use one of the following to persistently change the
setting for the device:
.Bl -bullet
.It
Use the Ethernet Port Configuration Tool (EPCT) to enable the
.Va tx_balancing
option.
Refer to the EPCT readme for more information.
.It
Enable the Transmit Balancing device setting in UEFI HII.
.El
.Pp
When the driver loads, it reads the transmit balancing setting from the NVM and
configures the device accordingly.
.Pp
NOTE: The user selection for transmit balancing in EPCT or HII is persistent
across reboots.
The system must be rebooted for the selected setting to take effect.
.Pp
This setting is device wide.
.Pp
The driver, NVM, and DDP package must all support this functionality to
enable the feature.
.Ss Thermal Monitoring
Intel(R) Ethernet 810 Series and Intel(R) Ethernet 830 Series devices can
display temperature data (in degrees Celsius) via:
.Bd -literal -offset indent
sysctl dev.ice.<interface #>.temp
.Ed
.Ss Network Memory Buffer Allocation
.Fx
may have a low number of network memory buffers (mbufs) by default.
If the number of mbufs available is too low, it may cause the driver to fail
to initialize and/or cause the system to become unresponsive.
Check to see if the system is mbuf-starved by running
.Ic netstat Fl m .
Increase the number of mbufs by editing the lines below in
.Pa /etc/sysctl.conf :
.Bd -literal -offset indent
kern.ipc.nmbclusters
kern.ipc.nmbjumbop
kern.ipc.nmbjumbo9
kern.ipc.nmbjumbo16
kern.ipc.nmbufs
.Ed
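.Pp
For example, the following
.Pa /etc/sysctl.conf
entries raise two of these limits; the values are illustrative starting
points, not recommendations:
.Bd -literal -offset indent
kern.ipc.nmbclusters=262144
kern.ipc.nmbjumbo9=65536
.Ed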
.Pp
The amount of memory that should be allocated is system specific,
and may require some trial and error.
Also, increasing the following in
.Pa /etc/sysctl.conf
could help increase network performance:
.Bd -literal -offset indent
kern.ipc.maxsockbuf
net.inet.tcp.sendspace
net.inet.tcp.recvspace
net.inet.udp.maxdgram
net.inet.udp.recvspace
.Ed
.Ss Additional Utilities
There are additional tools available from Intel to help configure and update
the adapters covered by this driver.
These tools can be downloaded directly from Intel at
.Lk https://downloadcenter.intel.com ,
by searching for their names:
.Bl -bullet
.It
To change the behavior of the QSFP28 ports on E810-C adapters, use the Intel
.Sy Ethernet Port Configuration Tool - FreeBSD .
.It
To update the firmware on an adapter, use the Intel
.Sy Non-Volatile Memory (NVM) Update Utility for Intel Ethernet Network Adapters E810 series - FreeBSD
.El
.Ss Optics and auto-negotiation
Modules based on 100GBASE-SR4,
active optical cable (AOC), and active copper cable (ACC)
do not support auto-negotiation per the IEEE specification.
To obtain link with these modules,
auto-negotiation must be turned off on the link partner's switch ports.
.Pp
Note that adapters also support
all passive and active limiting direct attach cables
that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
.Ss PCI-Express Slot Bandwidth
Some PCIe x8 slots are actually configured as x4 slots.
These slots have insufficient bandwidth
for full line rate with dual port and quad port devices.
In addition,
if a PCIe v4.0 or v3.0-capable adapter is placed into a PCIe v2.x
903slot, full bandwidth will not be possible.
904.Pp
905The driver detects this situation and
906writes the following message in the system log:
907.Bd -ragged -offset indent
908PCI-Express bandwidth available for this device
909may be insufficient for optimal performance.
910Please move the device to a different PCI-e link
911with more lanes and/or higher transfer rate.
912.Ed
913.Pp
914If this error occurs,
915moving the adapter to a true PCIe x8 or x16 slot will resolve the issue.
916For best performance, install devices in the following PCI slots:
917.Bl -bullet
918.It
919Any 100Gbps-capable Intel(R) Ethernet 800 Series device: Install in a
920PCIe v4.0 x8 or v3.0 x16 slot
921.It
922A 200Gbps-capable Intel(R) Ethernet 830 Series device: Install in a
923PCIe v5.0 x8 or v4.0 x16 slot
924.El
925.Pp
926For questions related to hardware requirements,
927refer to the documentation supplied with the adapter.
928.Sh HARDWARE
929The
930.Nm
931driver supports the following
932Intel 800 series 1Gb to 200Gb Ethernet controllers:
933.Pp
934.Bl -bullet -compact
935.It
936Intel Ethernet Controller E810-C
937.It
938Intel Ethernet Controller E810-XXV
939.It
940Intel Ethernet Connection E822-C
941.It
942Intel Ethernet Connection E822-L
943.It
944Intel Ethernet Connection E823-C
945.It
946Intel Ethernet Connection E823-L
947.It
948Intel Ethernet Connection E825-C
949.It
950Intel Ethernet Connection E830-C
951.It
952Intel Ethernet Connection E830-CC
953.It
954Intel Ethernet Connection E830-L
955.It
956Intel Ethernet Connection E830-XXV
957.It
958Intel Ethernet Connection E835-C
959.It
960Intel Ethernet Connection E835-CC
961.It
962Intel Ethernet Connection E835-L
963.It
964Intel Ethernet Connection E835-XXV
965.El
966.Pp
967The
968.Nm
969driver supports some adapters in this series with SFP28/QSFP28 cages
970which have firmware that requires that Intel qualified modules are used;
971these qualified modules are listed below.
972This qualification check cannot be disabled by the driver.
.Pp
The
.Nm
driver supports 100Gb Ethernet adapters with these QSFP28 modules:
.Pp
.Bl -bullet -compact
.It
Intel 100G QSFP28 100GBASE-SR4   E100GQSFPSR28SRX
.It
Intel 100G QSFP28 100GBASE-SR4   SPTMBP1PMCDF
.It
Intel 100G QSFP28 100GBASE-CWDM4 SPTSBP3CLCCO
.It
Intel 100G QSFP28 100GBASE-DR    SPTSLP2SLCDF
.El
.Pp
The
.Nm
driver supports 25Gb and 10Gb Ethernet adapters with these SFP28 modules:
.Pp
.Bl -bullet -compact
.It
Intel 10G/25G SFP28 25GBASE-SR E25GSFP28SR
.It
Intel     25G SFP28 25GBASE-SR E25GSFP28SRX (Extended Temp)
.It
Intel     25G SFP28 25GBASE-LR E25GSFP28LRX (Extended Temp)
.El
.Pp
The
.Nm
driver supports 10Gb and 1Gb Ethernet adapters with these SFP+ modules:
.Pp
.Bl -bullet -compact
.It
Intel 1G/10G SFP+ 10GBASE-SR E10GSFPSR
.It
Intel 1G/10G SFP+ 10GBASE-SR E10GSFPSRG1P5
.It
Intel 1G/10G SFP+ 10GBASE-SR E10GSFPSRG2P5
.It
Intel    10G SFP+ 10GBASE-SR E10GSFPSRX (Extended Temp)
.It
Intel 1G/10G SFP+ 10GBASE-LR E10GSFPLR
.El
.Sh LOADER TUNABLES
Tunables can be set at the
.Xr loader 8
prompt before booting the kernel or stored in
.Xr loader.conf 5 .
See the
.Xr iflib 4
man page for more information on using iflib sysctl variables as tunables.
.Bl -tag -width indent
.It Va hw.ice.enable_health_events
Set to 1 to enable firmware health event reporting across all devices.
Enabled by default.
.Pp
If enabled, when the driver receives a firmware health event message, it
prints a description of the event to the kernel message buffer, along with
any applicable actions that may remedy it.
.It Va hw.ice.irdma
Set to 1 to enable the RDMA client interface, required by the
.Xr irdma 4
driver.
Enabled by default.
.It Va hw.ice.rdma_max_msix
Set the maximum number of per-device MSI-X vectors that are allocated for use
by the
.Xr irdma 4
driver.
Set to 64 by default.
.It Va hw.ice.debug.enable_tx_fc_filter
Set to 1 to enable the TX Flow Control filter across all devices.
Enabled by default.
.Pp
If enabled, the hardware will drop any transmitted Ethertype 0x8808 control
frames that do not originate from the hardware.
.It Va hw.ice.debug.enable_tx_lldp_filter
Set to 1 to enable the TX LLDP filter across all devices.
Enabled by default.
.Pp
If enabled, the hardware will drop any transmitted Ethertype 0x88cc LLDP frames
that do not originate from the hardware.
This must be disabled in order to use LLDP daemon software such as
.Xr lldpd 8 .
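.Pp
For example, the filter can be disabled at boot so that an LLDP daemon can
transmit LLDP frames by adding the following line to
.Xr loader.conf 5 :
.Bd -literal -offset indent
hw.ice.debug.enable_tx_lldp_filter="0"
.Ed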
.It Va hw.ice.debug.ice_tx_balance_en
Set to 1 to allow the driver to use the 5-layer Tx Scheduler tree topology if
configured by the DDP package.
Enabled by default.
.El
.Sh SYSCTL VARIABLES
.Bl -tag -width indent
.It Va dev.ice.#.current_speed
Displays the current link speed of the interface.
This is expected to match the speed of the in-use media type displayed by
.Xr ifconfig 8 .
.It Va dev.ice.#.fw_version
Displays the current firmware and NVM versions of the adapter.
This information should be submitted along with any support requests.
.It Va dev.ice.#.ddp_version
Displays the current DDP package version downloaded to the adapter.
This information should be submitted along with any support requests.
.It Va dev.ice.#.pba_number
Displays the Product Board Assembly Number.
May be used to help identify the type of adapter in use.
This sysctl may not exist depending on the adapter type.
.It Va dev.ice.#.hw.mac.*
This sysctl tree contains statistics collected by the hardware for the port.
.El
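.Pp
For example, the firmware and DDP package versions of the first adapter can
be displayed with
.Xr sysctl 8 :
.Bd -literal -offset indent
# sysctl dev.ice.0.fw_version dev.ice.0.ddp_version
.Ed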
.Sh INTERRUPT STORMS
Note that 100Gb operation can generate a high number of interrupts, which
the kernel may incorrectly interpret as an interrupt storm condition.
The suggested workaround is to set
.Va hw.intr_storm_threshold
to 0, which disables the storm detection.
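.Pp
For example, the threshold can be set persistently by adding the following
line to
.Xr sysctl.conf 5 :
.Bd -literal -offset indent
hw.intr_storm_threshold=0
.Ed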
.Sh IOVCTL OPTIONS
The driver supports additional optional parameters for created VFs
(Virtual Functions) when using
.Xr iovctl 8 :
.Bl -tag -width indent
.It mac-addr Pq unicast-mac
Set the Ethernet MAC address that the VF will use.
If unspecified, the VF will use a randomly generated MAC address and
.Dq allow-set-mac
will be set to true.
.It mac-anti-spoof Pq bool
Prevent the VF from sending Ethernet frames with a source address
that does not match its own.
Enabled by default.
.It allow-set-mac Pq bool
Allow the VF to set its own Ethernet MAC address.
Disallowed by default.
.It allow-promisc Pq bool
Allow the VF to inspect all of the traffic sent to the port that it is created
on.
Disabled by default.
.It num-queues Pq uint16_t
Specify the number of queues the VF will have.
By default, this is set to the number of MSI-X vectors supported by the VF
minus one.
.It mirror-src-vsi Pq uint16_t
Specify which VSI the VF will mirror traffic from by setting this to a value
other than -1.
All traffic from that VSI will be mirrored to this VF.
This can be used as an alternative to the method described in the
.Sx RDMA Monitoring
section for mirroring RDMA traffic to another interface.
It is not affected by the
.Dq allow-promisc
parameter.
.It max-vlan-allowed Pq uint16_t
Specify the maximum number of VLAN filters that the VF can use.
Receiving traffic on a VLAN requires a hardware filter, and these filters are
a finite resource; this limit prevents a VF from starving other VFs or the PF
of filter resources.
By default, this is set to 16.
.It max-mac-filters Pq uint16_t
Specify the maximum number of MAC address filters that the VF can use.
Each allowed MAC address requires a hardware filter, and these filters are a
finite resource; this limit prevents a VF from starving other VFs or the PF
of filter resources.
The VF's default MAC address does not count towards this limit.
By default, this is set to 64.
.El
.Pp
An up-to-date list of parameters and their defaults can be found by using
.Xr iovctl 8
with the
.Fl S
option.
.Pp
For more information on standard and mandatory parameters, see
.Xr iovctl.conf 5 .
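.Pp
A minimal
.Xr iovctl.conf 5
fragment (with hypothetical device name, MAC address, and parameter values)
creating a single VF with some of the parameters above might look like:
.Bd -literal -offset indent
PF {
	device : "ice0";
	num_vfs : 1;
}

VF-0 {
	mac-addr : "02:56:64:00:00:01";
	allow-promisc : true;
	num-queues : 4;
}
.Ed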
.Sh SUPPORT
For general information and support, go to the Intel support website at:
.Lk http://www.intel.com/support/ .
.Pp
If an issue is identified with this driver on a supported adapter,
email all the specific information related to the issue to
.Aq Mt freebsd@intel.com .
.Sh SEE ALSO
.Xr iflib 4 ,
.Xr vlan 4 ,
.Xr ifconfig 8 ,
.Xr sysctl 8
.Sh HISTORY
The
.Nm
device driver first appeared in
.Fx 12.2 .
.Sh AUTHORS
The
.Nm
driver was written by
.An Intel Corporation Aq Mt freebsd@intel.com .