
.. SPDX-License-Identifier: GPL-2.0

This document describes a set of complementary techniques in the Linux
networking stack to increase parallelism and improve performance for
multi-processor systems.

The following technologies are described:

- RSS: Receive Side Scaling
- RPS: Receive Packet Steering
- RFS: Receive Flow Steering
- Accelerated Receive Flow Steering
- XPS: Transmit Packet Steering

Contemporary NICs support multiple receive and transmit descriptor queues
(multi-queue). On reception, a NIC can send different packets to different
queues to distribute processing among CPUs. This mechanism is
generally known as “Receive-side Scaling” (RSS). The goal of RSS and
the other scaling techniques is to increase performance uniformly.
Multi-queue distribution can also be used for traffic prioritization, but
that is not the focus of these techniques.

The filter used in RSS is typically a hash function over the network
and/or transport layer headers-- for example, a 4-tuple hash over the
IP addresses and TCP ports of a packet. The most common hardware
implementation of RSS uses a 128-entry indirection table where each entry
stores a queue number. The receive queue for a packet is determined by
masking out the low-order seven bits of the computed hash (usually a
Toeplitz hash), using this number as a key into the indirection table and
reading the corresponding value.

Some NICs support symmetric RSS hashing, where swapping the IP (source address,
destination address) and TCP/UDP (source port, destination port) tuples
produces the same hash value. This is beneficial for applications that monitor
TCP/IP flows (IDS, firewalls, etc.) and need
both directions of the flow to land on the same Rx queue (and CPU). The
"Symmetric-XOR" and "Symmetric-OR-XOR" are types of RSS algorithms that
achieve this hash symmetry by XORing/ORing the input source and destination
fields of the packet. Specifically, the "Symmetric-XOR" algorithm XORs the
input fields, while the "Symmetric-OR-XOR" algorithm applies a combination
of OR and XOR transforms to the input.

Some advanced NICs allow steering packets to queues based on
programmable filters. For example, webserver bound TCP port 80 packets
can be directed to their own receive queue. Such “n-tuple” filters can
be configured from ethtool (--config-ntuple).
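
As a hedged example of such a filter (the device name and queue number are
illustrative), the webserver case above could be expressed as::

  # steer TCP port 80 traffic to receive queue 2
  ethtool --config-ntuple eth0 flow-type tcp4 dst-port 80 action 2
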

RSS Configuration
-----------------

The driver for a multi-queue capable NIC typically provides a kernel
module parameter for specifying the number of hardware queues to
configure. The
default mapping is to distribute the queues evenly in the table, but the
indirection table can be retrieved and modified at runtime using ethtool
commands (--show-rxfh-indir and --set-rxfh-indir). Modifying the
indirection table can be used to give different queues different relative
weights.
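
As an illustrative example (the device name and weights are assumptions),
the table can be inspected and reweighted as follows::

  # show the current indirection table
  ethtool --show-rxfh-indir eth0
  # spread table entries over queues 0 and 1 with a 3:1 relative weight
  ethtool --set-rxfh-indir eth0 weight 3 1
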
The signaling path for PCIe devices uses message signaled interrupts (MSI-X),
which can route each interrupt to a particular CPU. The active mapping
of queues to IRQs can be determined from /proc/interrupts. By default,
an IRQ may be handled on any CPU. Because a non-negligible part of packet
processing takes place in receive interrupt handling, it is advantageous
to spread receive interrupts between CPUs. To manually adjust the IRQ
affinity of each interrupt see Documentation/core-api/irq/irq-affinity.rst. Some systems
will be running irqbalance, a daemon that dynamically optimizes IRQ
assignments and, as a result, may override any manual settings.
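
For example, pinning one receive queue's interrupt to CPU 2 might look like
this (the device name and IRQ number are illustrative; look up the real IRQ
in /proc/interrupts)::

  # find the IRQs used by the device's receive queues
  grep eth0 /proc/interrupts
  # pin IRQ 30 to CPU 2 (bitmask 0x4)
  echo 4 > /proc/irq/30/smp_affinity
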

For low latency networking, the optimal setting is to allocate as many
queues as there are CPUs in the system (or the NIC maximum, if lower).
The most efficient high-rate configuration is likely the one with the
smallest number of receive queues where no receive queue overflows due
to a saturated CPU.

Per-cpu load can be observed using the mpstat utility, but note that on
processors with hyperthreading (HT), each hyperthread is represented as
a separate CPU.
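
For example, per-CPU utilization can be sampled once per second with::

  mpstat -P ALL 1
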

Modern NICs support creating multiple co-existing RSS configurations
which are selected based on explicit matching rules. This can be very
useful when an application wants to constrain the set of queues receiving
traffic for e.g. a particular destination port or IP address.
The example below shows how to direct all traffic to TCP port 22
to queues 0 and 1.

To create an additional RSS context use::

  # ethtool -X eth0 hfunc toeplitz context new
  New RSS context is 1

The kernel reports back the ID of the allocated context (the default,
always-present RSS context has ID 0). The new context can be queried and
modified using the same APIs as the default context::

  # ethtool -x eth0 context 1
  RX flow hash indirection table for eth0 with 13 RX ring(s):
  [...]
  # ethtool -X eth0 equal 2 context 1
  # ethtool -x eth0 context 1
  RX flow hash indirection table for eth0 with 13 RX ring(s):
  [...]

To make use of the new context, direct traffic to it using an n-tuple
filter::

  # ethtool -N eth0 flow-type tcp6 dst-port 22 context 1
  Added rule with ID 1023

When done, remove the context and the rule::

  # ethtool -N eth0 delete 1023
  # ethtool -X eth0 context 1 delete

it does not increase hardware device interrupt rate, although it does
introduce inter-processor interrupts (IPIs).

The first step in determining the target CPU for RPS is to calculate a
flow hash over the packet’s addresses or ports (2-tuple or 4-tuple hash
depending on the protocol). This hash is saved in
skb->hash and can be used elsewhere in the stack as a hash of the
packet’s flow.

RPS Configuration
-----------------

For each receive queue, the set of CPUs to which RPS may forward traffic
is configured with the sysfs file entry::

  /sys/class/net/<dev>/queues/rx-<n>/rps_cpus

This file implements a bitmap of CPUs. RPS is disabled when it is zero
(the default), in which case packets are processed on the interrupting
CPU. Documentation/core-api/irq/irq-affinity.rst explains how CPUs are assigned to
the bitmap.
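
A minimal example, assuming device eth0, receive queue 0, and CPUs 0-3
(bitmask f)::

  echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
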

For a multi-queue system, if RSS is configured so that a hardware
receive queue is mapped to each CPU, then RPS is probably redundant
and unnecessary.

RPS Flow Limit
--------------

RPS scales kernel receive processing across CPUs without introducing
reordering. The trade-off to sending all packets from the same flow
to the same CPU is CPU load imbalance if flows vary in packet rate.

Once a CPU's input packet queue exceeds half the maximum queue length (as
set by sysctl net.core.netdev_max_backlog), the kernel starts a per-flow
packet count over the last 256 packets.

Per-flow rate is calculated by hashing each packet into a hashtable
bucket and incrementing a per-bucket counter. The hash function is
the same that selects a CPU in RPS, but as the number of buckets can
be much larger than the number of CPUs, flow limit has finer-grained
identification of large flows and fewer false positives.

In such environments, enable the feature on all CPUs that handle
network rx interrupts (as set in /proc/irq/N/smp_affinity).
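
Flow limit is enabled per CPU; as a hedged example (the CPU bitmask and
table size are illustrative), it could be configured as::

  # enable flow limit on CPUs 0-3 (bitmask f)
  echo f > /proc/sys/net/core/flow_limit_cpu_bitmap
  # optionally enlarge the per-CPU flow counting table
  sysctl -w net.core.flow_limit_table_len=8192
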

To avoid out of order packets, RFS switches a flow to a new CPU only if
one of the following is true:

- The current CPU's queue head counter >= the recorded tail counter
  value in rps_dev_flow[i]
- The current CPU is unset (>= nr_cpu_ids)
- The current CPU is offline

RFS Configuration
-----------------

The number of entries in the per-queue flow table is set through::

  /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt

For a multi-queue device, the rps_flow_cnt for each queue might be
configured as rps_sock_flow_entries / N, where N is the number of
configured receive queues.
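
For example, on a hypothetical device with 16 receive queues, using the
commonly suggested value of 32768 global entries::

  echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
  # 32768 / 16 = 2048 entries per queue (repeat for rx-1 .. rx-15)
  echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
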

Accelerated RFS is to RFS what RSS is to RPS: a hardware-accelerated load
balancing mechanism that uses soft state to steer flows based on where
the application thread consuming the packets of each flow is running.
The stack consults a CPU-to-hardware-queue map which
is maintained by the NIC driver. This is an auto-generated reverse map of
the IRQ affinity table shown by /proc/interrupts.

Accelerated RFS Configuration
-----------------------------
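
Accelerated RFS additionally requires that n-tuple filtering is enabled on
the device via ethtool; for example (device name illustrative)::

  ethtool -K eth0 ntuple on
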

Transmit Packet Steering is a mechanism for intelligently selecting
which transmit queue to use when transmitting a packet on a multi-queue
device. This can be accomplished by recording two kinds of maps, either
a mapping of CPU to hardware queue(s) or a mapping of receive queue(s)
to hardware transmit queue(s).

1. XPS using CPUs map

The goal of this mapping is usually to assign queues
exclusively to a subset of CPUs, where the transmit completions for
these queues are processed on a CPU within this set.

2. XPS using receive queues map

This mapping is used to pick the transmit queue based on the receive
queue(s) map configured by the administrator. A set of receive queues
can be mapped to a set of transmit queues (many:many), although
the common use case is a 1:1 mapping. This will enable sending packets
on the same queue associations for transmit and receive. This is useful for
busy polling multi-threaded workloads where there are challenges in
associating a given CPU to a given application thread. Transmit completion
work is then locked into
the same queue-association that a given application is polling on. This
avoids the overhead of triggering an interrupt on another CPU.

XPS is configured per transmit queue by setting a bitmap of
CPUs/receive-queues that may use that queue to transmit. The reverse
mapping, from CPUs to transmit queues or from receive-queues to transmit
queues, is computed and maintained for each network device. When
transmitting the first packet in a flow, the function get_xps_queue() is
called to select a queue. It uses the ID of the receive queue
for the socket connection for a match in the receive queue-to-transmit queue
lookup table. Alternatively, it can also use the ID of the
running CPU as a key into the CPU-to-queue lookup table. If the
ID matches a single queue, that queue is used for transmission.

The queue chosen for a flow can subsequently only be changed if
skb->ooo_okay is set for a packet in the flow. This flag indicates that
there are no outstanding packets in the flow, so the transmit queue can
change without the risk of generating out of order packets.

XPS Configuration
-----------------

It is driver dependent whether, and
how, XPS is configured at device init. The mapping of CPUs/receive-queues
to transmit queue can be inspected and configured using sysfs.

For selection based on CPUs map::

  /sys/class/net/<dev>/queues/tx-<n>/xps_cpus

For selection based on receive-queues map::

  /sys/class/net/<dev>/queues/tx-<n>/xps_rxqs
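
As a sketch, assuming device eth0 and transmit queue 0, CPU 0 or receive
queue 0 could be mapped to that queue with::

  # allow only CPU 0 (bitmask 1) to transmit on tx-0
  echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus
  # or map receive queue 0 (bitmask 1) to tx-0
  echo 1 > /sys/class/net/eth0/queues/tx-0/xps_rxqs
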

For a network device with a single transmission queue, XPS configuration
has no effect, since there is no choice in this case. In a multi-queue
system, XPS is preferably configured so that each CPU maps onto one queue.

For transmit queue selection based on receive queue(s), XPS has to be
explicitly configured, mapping receive-queue(s) to transmit queue(s). If the
user configuration for the receive-queue map does not apply, then the transmit
queue is selected based on the CPUs map.

These are rate-limitation mechanisms implemented by the HW, where currently
only a max-rate attribute is supported, set by writing a Mbps value to::

  /sys/class/net/<dev>/queues/tx-<n>/tx_maxrate

A value of zero means disabled, and this is the default.
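
For example, to cap a hypothetical queue at 1 Gbit/s and later remove the
cap again::

  echo 1000 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
  echo 0 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
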

- Tom Herbert (therbert@google.com)
- Willem de Bruijn (willemb@google.com)