.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)

.. _napi:

====
NAPI
====

NAPI is the event handling mechanism used by the Linux networking stack.
The name NAPI no longer stands for anything in particular [#]_.

In basic operation the device notifies the host about new events
via an interrupt.
The host then schedules a NAPI instance to process the events.
The device may also be polled for events via NAPI without receiving
interrupts first (:ref:`busy polling<poll>`).

NAPI processing usually happens in the software interrupt context,
but there is an option to use :ref:`separate kernel threads<threaded>`
for NAPI processing.

All in all NAPI abstracts away from the drivers the context and configuration
of event (packet Rx and Tx) processing.

Driver API
==========

The two most important elements of NAPI are the struct napi_struct
and the associated poll method. struct napi_struct holds the state
of the NAPI instance while the method is the driver-specific event
handler. The method will typically free Tx packets that have been
transmitted and process newly received packets.

.. _drv_ctrl:

Control API
-----------
netif_napi_add() and netif_napi_del() add/remove a NAPI instance
from the system. The instances are attached to the netdevice passed
as argument (and will be deleted automatically when the netdevice is
unregistered). Instances are added in a disabled state.

napi_enable() and napi_disable() manage the disabled state.
A disabled NAPI can't be scheduled and its poll method is guaranteed
not to be invoked. napi_disable() waits for ownership of the NAPI
instance to be released.

The control APIs are not idempotent. Control API calls are safe against
concurrent use of datapath APIs but an incorrect sequence of control API
calls may result in crashes, deadlocks, or race conditions. For example,
calling napi_disable() multiple times in a row will deadlock.

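A minimal sketch of this lifecycle, assuming a hypothetical ``mydrv``
driver with a single NAPI instance per device (``mydrv_priv`` and
``mydrv_poll`` are illustrative names, not real kernel APIs; recent
kernels take no weight argument in netif_napi_add()):

.. code-block:: c

  static int mydrv_open(struct net_device *dev)
  {
      struct mydrv_priv *priv = netdev_priv(dev);

      /* instances start in a disabled state */
      netif_napi_add(dev, &priv->napi, mydrv_poll);
      napi_enable(&priv->napi);
      return 0;
  }

  static int mydrv_stop(struct net_device *dev)
  {
      struct mydrv_priv *priv = netdev_priv(dev);

      napi_disable(&priv->napi);   /* waits for ownership release */
      netif_napi_del(&priv->napi); /* also done automatically on unregister */
      return 0;
  }
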
Datapath API
------------

napi_schedule() is the basic method of scheduling a NAPI poll.
Drivers should call this function in their interrupt handler
(see :ref:`drv_sched` for more info). A successful call to napi_schedule()
will take ownership of the NAPI instance.

Later, after NAPI is scheduled, the driver's poll method will be
called to process the events/packets. The method takes a ``budget``
argument - drivers can process completions for any number of Tx
packets but should only process up to ``budget`` number of
Rx packets. Rx processing is usually much more expensive.

In other words for Rx processing the ``budget`` argument limits how many
packets the driver can process in a single poll. Rx specific APIs like page
pool or XDP cannot be used at all when ``budget`` is 0.
skb Tx processing should happen regardless of the ``budget``, but if
the argument is 0 the driver cannot call any XDP (or page pool) APIs.

.. warning::

   The ``budget`` argument may be 0 if the core tries to process
   only skb Tx completions and no Rx or XDP packets.

The poll method returns the amount of work done. If the driver still
has outstanding work to do (e.g. ``budget`` was exhausted)
the poll method should return exactly ``budget``. In that case,
the NAPI instance will be serviced/polled again (without the
need to be scheduled).

If event processing has been completed (all outstanding packets
processed) the poll method should call napi_complete_done()
before returning. napi_complete_done() releases the ownership
of the instance.

.. warning::

   The case of finishing all events and using exactly ``budget``
   must be handled carefully. There is no way to report this
   (rare) condition to the stack, so the driver must either
   not call napi_complete_done() and wait to be called again,
   or return ``budget - 1``.

   If the ``budget`` is 0 napi_complete_done() should never be called.

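A sketch of a poll method which follows these rules, again using
hypothetical ``mydrv`` helpers (``mydrv_clean_tx_ring()`` and
``mydrv_clean_rx_ring()`` stand in for the driver's completion handling):

.. code-block:: c

  static int mydrv_poll(struct napi_struct *napi, int budget)
  {
      struct mydrv_vector *v = container_of(napi, struct mydrv_vector,
                                            napi);
      int work_done = 0;

      /* Tx completions are processed regardless of the budget. */
      mydrv_clean_tx_ring(v);

      /* Rx, XDP and page pool processing only when budget is non-zero. */
      if (budget)
          work_done = mydrv_clean_rx_ring(v, budget);

      /* Budget exhausted - return budget and wait to be polled again. */
      if (work_done == budget)
          return budget;

      /* All events processed - release ownership, unmask the IRQ.
       * This branch is never taken when budget is 0.
       */
      if (napi_complete_done(napi, work_done))
          mydrv_unmask_rxtx_irq(v->idx);

      return work_done;
  }
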
Call sequence
-------------

Drivers should not make assumptions about the exact sequencing
of calls. The poll method may be called without the driver scheduling
the instance (unless the instance is disabled). Similarly,
it's not guaranteed that the poll method will be called, even
if napi_schedule() succeeded (e.g. if the instance gets disabled).

As mentioned in the :ref:`drv_ctrl` section, napi_disable() and subsequent
calls to the poll method only wait for the ownership of the instance
to be released, not for the poll method to exit. This means that
drivers should avoid accessing any data structures after calling
napi_complete_done().

.. _drv_sched:

Scheduling and IRQ masking
--------------------------

Drivers should keep the interrupts masked after scheduling
the NAPI instance - until NAPI polling finishes, any further
interrupts are unnecessary.

Drivers which have to mask the interrupts explicitly (as opposed
to the IRQ being auto-masked by the device) should use the
napi_schedule_prep() and __napi_schedule() calls:

.. code-block:: c

  if (napi_schedule_prep(&v->napi)) {
      mydrv_mask_rxtx_irq(v->idx);
      /* schedule after masking to avoid races */
      __napi_schedule(&v->napi);
  }

The IRQ should only be unmasked after a successful call to
napi_complete_done():

.. code-block:: c

  if (budget && napi_complete_done(&v->napi, work_done)) {
      mydrv_unmask_rxtx_irq(v->idx);
      return min(work_done, budget - 1);
  }

napi_schedule_irqoff() is a variant of napi_schedule() which takes advantage
of guarantees given by being invoked in IRQ context (no need to
mask interrupts). napi_schedule_irqoff() will fall back to napi_schedule() if
IRQs are threaded (such as if ``PREEMPT_RT`` is enabled).

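For example, a hard IRQ handler for a device which auto-masks its IRQs
could be as simple as the following sketch (``mydrv_irq()`` and
``mydrv_vector`` are illustrative names):

.. code-block:: c

  /* The device auto-masked the IRQ before raising it, so no explicit
   * masking is needed before scheduling NAPI.
   */
  static irqreturn_t mydrv_irq(int irq, void *data)
  {
      struct mydrv_vector *v = data;

      napi_schedule_irqoff(&v->napi);
      return IRQ_HANDLED;
  }
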
Instance to queue mapping
-------------------------

Modern devices have multiple NAPI instances (struct napi_struct) per
interface. There is no strong requirement on how the instances are
mapped to queues and interrupts. NAPI is primarily a polling/processing
abstraction without specific user-facing semantics. That said, most networking
devices end up using NAPI in fairly similar ways.

NAPI instances most often correspond 1:1:1 to interrupts and queue pairs
(queue pair is a set of a single Rx and single Tx queue).

In less common cases a NAPI instance may be used for multiple queues
or Rx and Tx queues can be serviced by separate NAPI instances on a single
core. Regardless of the queue assignment, however, there is usually still
a 1:1 mapping between NAPI instances and interrupts.

It's worth noting that the ethtool API uses "channel" terminology where
each channel can be either ``rx``, ``tx`` or ``combined``. It's not clear
what constitutes a channel; the recommended interpretation is to understand
a channel as an IRQ/NAPI instance which services queues of a given type. For
example, a configuration of 1 ``rx``, 1 ``tx`` and 1 ``combined`` channel is
expected to utilize 3 interrupts, 2 Rx and 2 Tx queues.

User API
========

User interactions with NAPI depend on NAPI instance ID. The instance IDs
are only visible to the user through the ``SO_INCOMING_NAPI_ID`` socket
option. It's not currently possible to query IDs used by a given device.

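A minimal sketch of reading the ID, assuming the libc headers expose
``SO_INCOMING_NAPI_ID`` (it comes from the kernel's socket UAPI headers;
a returned value of 0 means no NAPI ID has been recorded for the socket
yet):

.. code-block:: c

  #include <sys/socket.h>

  /* Return the NAPI ID of the last packet received on fd, or -1. */
  static int get_napi_id(int fd)
  {
      unsigned int napi_id = 0;
      socklen_t len = sizeof(napi_id);

      if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len))
          return -1;

      return napi_id;
  }
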
Software IRQ coalescing
-----------------------

NAPI does not perform any explicit event coalescing by default.
In most scenarios batching happens due to IRQ coalescing which is done
by the device. There are cases where software coalescing is helpful.

NAPI can be configured to arm a repoll timer instead of unmasking
the hardware interrupts as soon as all packets are processed.
The ``gro_flush_timeout`` sysfs configuration of the netdevice
is reused to control the delay of the timer, while
``napi_defer_hard_irqs`` controls the number of consecutive empty polls
before NAPI gives up and goes back to using hardware IRQs.

The above parameters can also be set on a per-NAPI basis using netlink via
netdev-genl. When used with netlink and configured on a per-NAPI basis, the
parameters mentioned above use hyphens instead of underscores:
``gro-flush-timeout`` and ``napi-defer-hard-irqs``.

Per-NAPI configuration can be done programmatically in a user application
or by using a script included in the kernel source tree:
``tools/net/ynl/cli.py``.

For example, using the script:

.. code-block:: bash

  $ kernel-source/tools/net/ynl/cli.py \
            --spec Documentation/netlink/specs/netdev.yaml \
            --do napi-set \
            --json='{"id": 345,
                     "defer-hard-irqs": 111,
                     "gro-flush-timeout": 11111}'

Similarly, the parameter ``irq-suspend-timeout`` can be set using netlink
via netdev-genl. There is no global sysfs parameter for this value.

``irq-suspend-timeout`` is used to determine how long an application can
completely suspend IRQs. It is used in combination with SO_PREFER_BUSY_POLL,
which can be set on a per-epoll context basis with the ``EPIOCSPARAMS``
ioctl.

.. _poll:

Busy polling
------------

Busy polling allows a user process to check for incoming packets before
the device interrupt fires. As is the case with any busy polling, it trades
off CPU cycles for lower latency (production uses of NAPI busy polling
are not well known).

Busy polling is enabled by either setting ``SO_BUSY_POLL`` on
selected sockets or using the global ``net.core.busy_poll`` and
``net.core.busy_read`` sysctls. An io_uring API for NAPI busy polling
also exists.

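For example, per-socket busy polling could be enabled with a plain
setsockopt() call; a sketch (the microsecond value is purely
illustrative):

.. code-block:: c

  #include <sys/socket.h>

  /* Busy poll for up to 'usecs' when reading from this socket.
   * Older kernels required CAP_NET_ADMIN to set this option.
   */
  static int enable_busy_poll(int fd, int usecs)
  {
      return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &usecs,
                        sizeof(usecs));
  }
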
epoll-based busy polling
------------------------

It is possible to trigger packet processing directly from calls to
``epoll_wait``. In order to use this feature, a user application must ensure
all file descriptors which are added to an epoll context have the same NAPI ID.

If the application uses a dedicated acceptor thread, the application can obtain
the NAPI ID of the incoming connection using SO_INCOMING_NAPI_ID and then
distribute that file descriptor to a worker thread. The worker thread would add
the file descriptor to its epoll context. This would ensure each worker thread
has an epoll context with FDs that have the same NAPI ID.

Alternatively, if the application uses SO_REUSEPORT, a BPF program can
be inserted to distribute incoming connections to threads such that each thread
is only given incoming connections with the same NAPI ID. Care must be taken
to handle cases where a system may have multiple NICs.

In order to enable busy polling, there are two choices:

1. ``/proc/sys/net/core/busy_poll`` can be set with a time in microseconds to
   busy loop waiting for events. This is a system-wide setting and will cause
   all epoll-based applications to busy poll when they call epoll_wait. This
   may not be desirable as many applications may not have the need to busy
   poll.

2. Applications using recent kernels can issue an ioctl on the epoll context
   file descriptor to set (``EPIOCSPARAMS``) or get (``EPIOCGPARAMS``) ``struct
   epoll_params``, which user programs can define as follows:

.. code-block:: c

  struct epoll_params {
      uint32_t busy_poll_usecs;
      uint16_t busy_poll_budget;
      uint8_t prefer_busy_poll;

      /* pad the struct to a multiple of 64bits */
      uint8_t __pad;
  };

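A sketch of using the ioctl, assuming a kernel and libc which provide
``EPIOCSPARAMS`` and ``struct epoll_params`` via ``<sys/epoll.h>``
(otherwise include ``<linux/eventpoll.h>`` or use the definition above);
the values are only an illustration:

.. code-block:: c

  #include <string.h>
  #include <sys/epoll.h>
  #include <sys/ioctl.h>

  static int enable_epoll_busy_poll(int epfd)
  {
      struct epoll_params params;

      memset(&params, 0, sizeof(params));
      params.busy_poll_usecs = 64;  /* busy poll for up to 64us */
      params.busy_poll_budget = 8;  /* packets per poll; values above
                                     * 64 need CAP_NET_ADMIN */
      params.prefer_busy_poll = 0;  /* see IRQ mitigation below */

      return ioctl(epfd, EPIOCSPARAMS, &params);
  }
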
IRQ mitigation
--------------

While busy polling is supposed to be used by low latency applications,
a similar mechanism can be used for IRQ mitigation.

Very high request-per-second applications (especially routing/forwarding
applications and especially applications using AF_XDP sockets) may not
want to be interrupted until they finish processing a request or a batch
of packets.

Such applications can pledge to the kernel that they will perform a busy
polling operation periodically, and the driver should keep the device IRQs
permanently masked. This mode is enabled by using the ``SO_PREFER_BUSY_POLL``
socket option. To avoid system misbehavior the pledge is revoked
if ``gro_flush_timeout`` passes without any busy poll call. For epoll-based
busy polling applications, the ``prefer_busy_poll`` field of ``struct
epoll_params`` can be set to 1 and the ``EPIOCSPARAMS`` ioctl can be issued to
enable this mode. See the above section for more details.

The NAPI budget for busy polling is lower than the default (which makes
sense given the low latency intention of normal busy polling). This is
not the case with IRQ mitigation, however, so the budget can be adjusted
with the ``SO_BUSY_POLL_BUDGET`` socket option. For epoll-based busy polling
applications, the ``busy_poll_budget`` field can be adjusted to the desired
value in ``struct epoll_params`` and set on a specific epoll context using
the ``EPIOCSPARAMS`` ioctl. See the above section for more details.

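For plain socket-based busy polling, the same pledge could be expressed
as in the sketch below (the budget of 64 is illustrative; raising the
budget beyond the default polling weight may require CAP_NET_ADMIN):

.. code-block:: c

  #include <sys/socket.h>

  static int enable_irq_mitigation(int fd)
  {
      int on = 1;
      int budget = 64;

      if (setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL, &on,
                     sizeof(on)))
          return -1;

      return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET, &budget,
                        sizeof(budget));
  }
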
It is important to note that choosing a large value for ``gro_flush_timeout``
will defer IRQs to allow for better batch processing, but will induce latency
when the system is not fully loaded. Choosing a small value for
``gro_flush_timeout`` can cause device IRQs and softirq processing to
interfere with the user application which is attempting to busy poll. This
value should be chosen carefully with these tradeoffs in mind. epoll-based
busy polling applications may be able to mitigate how much user processing
happens by choosing an appropriate value for ``maxevents``.

Users may want to consider an alternate approach, IRQ suspension, to help deal
with these tradeoffs.

IRQ suspension
--------------

IRQ suspension is a mechanism wherein device IRQs are masked while epoll
triggers NAPI packet processing.

While application calls to epoll_wait successfully retrieve events, the
kernel will defer the IRQ suspension timer. If the kernel does not retrieve
any events while busy polling (for example, because network traffic levels
subsided), IRQ suspension is disabled and the IRQ mitigation strategies
described above are engaged.

This allows users to balance CPU consumption with network processing
efficiency.

To use this mechanism:

  1. The per-NAPI config parameter ``irq-suspend-timeout`` should be set to the
     maximum time (in nanoseconds) the application can have its IRQs
     suspended. This is done using netlink, as described above. This timeout
     serves as a safety mechanism to restart IRQ-driven interrupt processing if
     the application has stalled. This value should be chosen so that it covers
     the amount of time the user application needs to process data from its
     call to epoll_wait, noting that applications can control how much data
     they retrieve by setting ``maxevents`` when calling epoll_wait.

  2. The sysfs parameter or per-NAPI config parameters ``gro_flush_timeout``
     and ``napi_defer_hard_irqs`` can be set to low values. They will be used
     to defer IRQs after busy poll has found no data.

  3. The ``prefer_busy_poll`` flag must be set to true. This can be done using
     the ``EPIOCSPARAMS`` ioctl as described above.

  4. The application uses epoll as described above to trigger NAPI packet
     processing, as shown in the sketch after this list.

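A sketch of such a worker loop, assuming the epoll context was configured
as described above (``handle_event()`` is application-defined):

.. code-block:: c

  #include <sys/epoll.h>

  static void worker_loop(int epfd,
                          void (*handle_event)(struct epoll_event *))
  {
      struct epoll_event events[128];
      int i, n;

      for (;;) {
          /* While calls keep returning events, irq-suspend-timeout is
           * deferred and device IRQs stay masked.
           */
          n = epoll_wait(epfd, events, 128, -1);

          for (i = 0; i < n; i++)
              handle_event(&events[i]);

          /* When busy polling finds no events, suspension ends and the
           * gro_flush_timeout / napi_defer_hard_irqs mitigation takes
           * over.
           */
      }
  }
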
As mentioned above, as long as subsequent calls to epoll_wait return events to
userland, the ``irq-suspend-timeout`` is deferred and IRQs are disabled. This
allows the application to process data without interference.

Once a call to epoll_wait results in no events being found, IRQ suspension is
automatically disabled and the ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` mitigation mechanisms take over.

It is expected that ``irq-suspend-timeout`` will be set to a value much larger
than ``gro_flush_timeout`` as ``irq-suspend-timeout`` should suspend IRQs for
the duration of one userland processing cycle.

While it is not strictly necessary to use ``napi_defer_hard_irqs`` and
``gro_flush_timeout`` to use IRQ suspension, their use is strongly
recommended.

IRQ suspension causes the system to alternate between polling mode and
irq-driven packet delivery. During busy periods, ``irq-suspend-timeout``
overrides ``gro_flush_timeout`` and keeps the system busy polling, but when
epoll finds no events, the setting of ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` determines the next step.

There are essentially three possible loops for network processing and
packet delivery:

1) hardirq -> softirq -> napi poll; basic interrupt delivery
2) timer -> softirq -> napi poll; deferred irq processing
3) epoll -> busy-poll -> napi poll; busy looping

Loop 2 can take control from Loop 1, if ``gro_flush_timeout`` and
``napi_defer_hard_irqs`` are set.

If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are set, Loops 2
and 3 "wrestle" with each other for control.

During busy periods, ``irq-suspend-timeout`` is used as the timer in Loop 2,
which essentially tilts network processing in favour of Loop 3.

If ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` are not set, Loop 3
cannot take control from Loop 1.

Therefore, setting ``gro_flush_timeout`` and ``napi_defer_hard_irqs`` is
the recommended usage, because otherwise setting ``irq-suspend-timeout``
might not have any discernible effect.

.. _threaded:

Threaded NAPI
-------------

Threaded NAPI is an operating mode that uses dedicated kernel
threads rather than software IRQ context for NAPI processing.
The configuration is per netdevice and will affect all
NAPI instances of that device. Each NAPI instance will spawn a separate
thread (called ``napi/${ifc-name}-${napi-id}``).

It is recommended to pin each kernel thread to a single CPU, the same
CPU as the one which services the interrupt. Note that the mapping
between IRQs and NAPI instances may not be trivial (and is driver
dependent). The NAPI instance IDs will be assigned in the opposite
order than the process IDs of the kernel threads.

Threaded NAPI is controlled by writing 0/1 to the ``threaded`` file in
netdev's sysfs directory.

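A minimal sketch of flipping that switch from a privileged user
application (the interface name ``eth0`` is only an example):

.. code-block:: c

  #include <fcntl.h>
  #include <unistd.h>

  static int enable_threaded_napi(void)
  {
      int ret, fd;

      fd = open("/sys/class/net/eth0/threaded", O_WRONLY);
      if (fd < 0)
          return -1;

      ret = write(fd, "1", 1) == 1 ? 0 : -1;
      close(fd);
      return ret;
  }
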
.. rubric:: Footnotes

.. [#] NAPI was originally referred to as New API in 2.4 Linux.