With RSS (Receive Side Scaling), the NIC distributes packets by
applying a filter to each packet that assigns it to one of a small number
of logical flows. Packets for each flow are steered to a separate receive
queue, which in turn can be processed by separate CPUs.
A common hardware implementation of RSS uses a 128-entry indirection
table where each entry stores a queue number; the low order bits of the
packet's flow hash are used as an index into this table to pick the
destination queue.
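
As a rough illustration only (the table size, its contents, and the
function name below are assumptions for this sketch, not driver code),
the lookup amounts to masking the low order bits of the flow hash and
indexing the table::

  #include <stdint.h>

  #define RSS_TABLE_SIZE 128      /* power of two, as in a 128-entry table */

  /* indirection_table[i] holds a receive queue number, filled in by the
   * host; the values here would come from the device configuration. */
  static uint8_t indirection_table[RSS_TABLE_SIZE];

  /* Select the receive queue for a packet from its flow hash.  Only the
   * low order bits index the table, so many hash values share each entry. */
  static unsigned int rss_select_queue(uint32_t flow_hash)
  {
          return indirection_table[flow_hash & (RSS_TABLE_SIZE - 1)];
  }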
A typical RSS configuration allocates one receive queue
for each CPU if the device supports enough queues, or otherwise at least
one for each memory domain, where a memory domain is a set of CPUs that
share a particular memory level (L1, L2, NUMA node, etc.).
Each receive queue has a separate IRQ associated with it, which the NIC
triggers to notify a CPU when new packets arrive on that queue. The
signaling path for PCIe devices uses message signaled interrupts (MSI-X)
that can route each interrupt to a particular CPU; the active mapping of
queues to IRQs can be determined from /proc/interrupts. To manually
adjust the affinity of each interrupt see
Documentation/core-api/irq/irq-affinity.rst. Some systems run irqbalance,
a daemon that dynamically optimizes IRQ assignments and may therefore
override manual settings.
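
For example, the affinity of a queue's IRQ can be pinned by writing a CPU
mask to its smp_affinity file. This is only a minimal sketch: the IRQ
number and mask are placeholders (look the real number up in
/proc/interrupts), and irqbalance may later override the setting::

  #include <stdio.h>

  /* Pin IRQ 42 (placeholder) to CPU 2 by writing the hexadecimal CPU
   * mask 0x4 to its smp_affinity file.  Requires root. */
  int main(void)
  {
          FILE *f = fopen("/proc/irq/42/smp_affinity", "w");

          if (!f) {
                  perror("open smp_affinity");
                  return 1;
          }
          fprintf(f, "4\n");      /* bitmask: bit 2 set -> CPU 2 */
          fclose(f);
          return 0;
  }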
In default mode with interrupt coalescing enabled, the aggregate number
of interrupts (and thus work) grows with each additional queue, so the
most efficient high-rate configuration is usually the one with the
smallest number of receive queues where no queue overflows because its
CPU is saturated. Per-CPU load can be observed with the mpstat utility,
but note that on processors with hyperthreading (HT), each hyperthread is
represented as a separate CPU.
With RPS (Receive Packet Steering), each receive hardware queue has an
associated list of CPUs to which RPS may enqueue packets for processing.
For each received packet, an index into the list is computed from the
flow hash modulo the size of the list; the indexed CPU is the target for
processing the packet, and the packet is queued to the tail of that CPU's
backlog queue.
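
Conceptually, the selection is just an index into the queue's configured
CPU list. This is a simplified sketch with made-up names and an example
CPU list, not the kernel's implementation::

  #include <stddef.h>
  #include <stdint.h>

  /* CPUs that RPS may use for one receive queue, e.g. parsed from the
   * queue's rps_cpus bitmap.  The contents here are only an example. */
  static const unsigned int rps_cpu_list[] = { 0, 1, 2, 3 };

  /* Pick the target CPU for a packet: flow hash modulo the number of
   * configured CPUs, so all packets of a flow land on the same backlog. */
  static unsigned int rps_select_cpu(uint32_t flow_hash)
  {
          size_t n = sizeof(rps_cpu_list) / sizeof(rps_cpu_list[0]);

          return rps_cpu_list[flow_hash % n];
  }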
The list of CPUs to which RPS may forward traffic can be configured for
each receive queue using a sysfs file entry::

  /sys/class/net/<dev>/queues/rx-<n>/rps_cpus

This file implements a bitmap of CPUs; RPS is disabled when it is zero
(the default).
For a multi-queue system, if RSS is configured so that a hardware
receive queue is mapped to each CPU, then RPS is probably redundant
and unnecessary. If there are fewer hardware queues than CPUs, then
RPS might be beneficial if the rps_cpus for each queue are the ones that
share the same memory domain as the interrupting CPU for that queue.
Flow limit is disabled by default and must be explicitly
turned on. It is implemented for each CPU independently (to avoid lock
and cache contention) and is toggled per CPU by setting the relevant bit
in sysctl net.core.flow_limit_cpu_bitmap.
Per-flow rate is calculated by hashing each packet into a hashtable
bucket and incrementing a per-bucket counter. Because the number of
buckets can be much larger than the number of CPUs, this gives
finer-grained identification of large flows than per-CPU accounting
would.
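
A very rough sketch of the per-bucket accounting follows. The window
size, reset strategy, and 50% threshold are illustrative assumptions for
the sketch, not the kernel's exact flow-limit policy::

  #include <stdint.h>
  #include <string.h>

  #define FLOW_BUCKETS 4096   /* assumed bucket count, must be a power of 2 */

  /* One counter per bucket; flows hash into buckets, so a heavy flow
   * shows up as a single hot counter. */
  static uint16_t bucket_count[FLOW_BUCKETS];
  static unsigned int window_packets;

  /* Count a packet and report whether its flow currently looks "large",
   * i.e. it accounts for more than half of the recent-packet window. */
  static int flow_looks_large(uint32_t flow_hash)
  {
          uint16_t *c = &bucket_count[flow_hash & (FLOW_BUCKETS - 1)];

          if (++window_packets >= 256) {          /* restart the window */
                  memset(bucket_count, 0, sizeof(bucket_count));
                  window_packets = 0;
          }
          return ++(*c) > 128;                    /* >50% of the window */
  }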
With RFS (Receive Flow Steering), the flow hash is used as an index into
a flow lookup table that maps flows to the CPUs where those flows are
being processed. The CPU recorded in each entry is the one which last
processed the flow.
rps_sock_flow_table is a global flow table that contains the desired CPU
for each flow: the CPU on which the application is currently processing
the flow. Each table value is a CPU index that is updated during calls to
recvmsg and sendmsg.
To avoid out of order packets when a flow's desired CPU changes, RFS uses
a second table to track outstanding packets for each flow:
rps_dev_flow_table is a table specific to each hardware receive queue of
each device. Each table value stores a CPU index and a counter; the CPU
index is the current CPU onto which packets for the flow are enqueued,
and the counter records the length of that
CPU's backlog when a packet in this flow was last enqueued. Each backlog
queue has a head counter that is incremented on dequeue, and a tail
counter computed as head counter plus queue length.
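
In outline, the steering decision switches to the desired CPU only when
doing so cannot reorder the flow. The structure, array, and function
names below are invented for illustration; this is a sketch of the idea,
not the kernel code::

  #include <stdint.h>

  #define MAX_CPUS  64
  #define CPU_UNSET UINT32_MAX

  /* Per-CPU backlog head counter, incremented as packets are dequeued
   * (a stand-in for the real per-CPU softnet state). */
  static uint32_t backlog_head[MAX_CPUS];

  struct dev_flow {
          uint32_t cur_cpu;     /* CPU this flow is currently enqueued to  */
          uint32_t last_qtail;  /* that CPU's backlog tail at last enqueue */
  };

  /* Switch the flow to the CPU the consuming application runs on only if
   * the old CPU's backlog has drained past this flow's last packet, so
   * the change cannot reorder packets within the flow. */
  static uint32_t rfs_select_cpu(struct dev_flow *f, uint32_t desired_cpu)
  {
          if (f->cur_cpu == CPU_UNSET ||
              (int32_t)(backlog_head[f->cur_cpu] - f->last_qtail) >= 0)
                  f->cur_cpu = desired_cpu;

          return f->cur_cpu;
  }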
For a multi-queue device, the rps_flow_cnt for each queue might be
configured as rps_sock_flow_entries / N, where N is the number of
configured receive queues. So for instance, if rps_sock_flow_entries is
set to 32768 and there
are 16 configured receive queues, rps_flow_cnt for each queue might be
configured as 2048.
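
For instance, the two knobs could be set as below. This is a sketch only:
the device name eth0, the queue count, and the 32768 value are just the
example numbers above, and the program must run as root::

  #include <stdio.h>

  /* Write a single integer value to a procfs/sysfs file. */
  static int write_val(const char *path, unsigned int val)
  {
          FILE *f = fopen(path, "w");

          if (!f) {
                  perror(path);
                  return -1;
          }
          fprintf(f, "%u\n", val);
          return fclose(f);
  }

  int main(void)
  {
          const unsigned int entries = 32768;  /* global table size         */
          const unsigned int queues  = 16;     /* configured receive queues */
          char path[128];
          unsigned int q;

          /* Global flow table shared by all devices. */
          write_val("/proc/sys/net/core/rps_sock_flow_entries", entries);

          /* Per-queue tables: rps_sock_flow_entries / N for N queues,
           * i.e. 32768 / 16 = 2048 per queue in this example. */
          for (q = 0; q < queues; q++) {
                  snprintf(path, sizeof(path),
                           "/sys/class/net/eth0/queues/rx-%u/rps_flow_cnt", q);
                  write_val(path, entries / queues);
          }
          return 0;
  }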
Accelerated RFS is to RFS what RSS is to RPS: a hardware-accelerated load
balancing mechanism that uses soft state to steer flows based on where
the application thread consuming the packets of each flow is running.
Drivers can instead delegate the cpu_rmap (CPU to queue reverse map)
management to the kernel by calling netif_enable_cpu_rmap(); the reverse
map then records, for each CPU, the closest receive queue according to
the configured IRQ affinities.
In the typical case the map of CPU to queues is automatically deduced
from the IRQ affinities
configured for each receive queue by the driver, so no additional
configuration should be necessary.
XPS (Transmit Packet Steering) reduces contention on the device's
transmit queue lock because fewer CPUs contend for the same queue
(contention can be eliminated completely if each CPU has its own
transmit queue), and it reduces the cache miss rate on transmit
completion.
The receive-queue-to-transmit-queue mapping is useful for busy-polling
multi-threaded workloads where it is hard to associate a given CPU with a
given application thread: the application
threads are not pinned to CPUs and each thread handles packets
received on a single queue.
To accomplish this, a mapping from CPUs and receive queues to transmit
queues is computed and maintained for each network device. When
transmitting the first packet in a flow, this map is consulted to select
a transmit queue, and subsequent packets of the flow keep using that
queue to avoid out of order packets.
For a network device with multiple transmit queues on a multi-CPU
system, XPS is preferably configured so that each CPU maps onto one queue.
If there are as many queues as there are CPUs in the system, then each
queue can also map onto one CPU, resulting in exclusive pairings that
experience no contention.
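
As an illustration of such a one-to-one mapping (a sketch with an assumed
device name eth0 and an assumed equal CPU/queue count of 8; run as root),
each tx-<n> queue gets a CPU mask with only bit n set::

  #include <stdio.h>

  int main(void)
  {
          const unsigned int nqueues = 8;   /* assumed: == number of CPUs */
          char path[128];
          unsigned int q;

          /* Map transmit queue n exclusively to CPU n by writing a CPU
           * bitmap with only bit n set to the queue's xps_cpus file. */
          for (q = 0; q < nqueues; q++) {
                  FILE *f;

                  snprintf(path, sizeof(path),
                           "/sys/class/net/eth0/queues/tx-%u/xps_cpus", q);
                  f = fopen(path, "w");
                  if (!f) {
                          perror(path);
                          return 1;
                  }
                  fprintf(f, "%x\n", 1u << q);
                  fclose(f);
          }
          return 0;
  }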