#
146e3055 |
| 14-Sep-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add option to run single test
Add a command line option to run a single test. This option (-t) takes a number from the list of tests available with the "-l" option. Here are two examples:
Run test number 2, the "receive single packet" test in all available modes:
./test_xsk.sh -t 2
Run test number 21, the metadata copy test, in skb mode only:
./test_xsk.sh -t 21 -m skb
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-8-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
c53dab7d |
| 14-Sep-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add option that lists all tests
Add a command line option (-l) that lists all the tests. The number before the test will be used in the next commit for specifying a single test to run. Here is an example of the output:
Tests:
0: SEND_RECEIVE
1: SEND_RECEIVE_2K_FRAME
2: SEND_RECEIVE_SINGLE_PKT
3: POLL_RX
4: POLL_TX
5: POLL_RXQ_FULL
6: POLL_TXQ_FULL
7: SEND_RECEIVE_UNALIGNED
:
:
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-7-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
f20fbcd0 |
| 14-Sep-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: declare test names in struct
Declare the test names statically in a struct so that we can refer to them when adding support for executing a single test in the next commit. Before this patch, the names were not declared in a single place, which made it impossible to refer to them.
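As a rough illustration of the idea (struct and field names here are hypothetical, not necessarily what xskxceiver.c uses), keeping the name and the function that runs the test in one table gives both the "-l" listing and the "-t" lookup a single source of truth:

	struct test_spec;				/* assumed to be defined elsewhere */
	typedef int (*test_func_t)(struct test_spec *test);

	struct test {
		const char *name;
		test_func_t func;
	};

	static int testapp_send_receive(struct test_spec *test);
	static int testapp_single_pkt(struct test_spec *test);

	static const struct test tests[] = {
		{"SEND_RECEIVE", testapp_send_receive},
		{"SEND_RECEIVE_SINGLE_PKT", testapp_single_pkt},
	};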
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-6-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
13c341c4 |
| 14-Sep-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: move all tests to separate functions
Prepare for the ability to run a single test by moving each test into its own function. That function can then be called to execute the test in the next commit.
Also, the RUN_TO_COMPLETION_* tests were poorly named, so rename them to SEND_RECEIVE_* since they are just basic send-and-receive tests of 4K packets.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-5-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
3956bc34 |
| 14-Sep-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add option to only run tests in a single mode
Add an option -m on the command line that allows the user to run the tests in a single mode instead of all of them. Valid modes are skb, drv, and zc (zero-copy). An example:
To run the test suite in drv mode only:
./test_xsk.sh -m drv
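Inside the test binary, mapping such a mode string to an enum could look roughly like this (a sketch only; the enum and function names are illustrative, not taken from the actual sources):

	#include <string.h>

	enum test_mode { TEST_MODE_SKB, TEST_MODE_DRV, TEST_MODE_ZC, TEST_MODE_ALL };

	static enum test_mode parse_mode(const char *arg)
	{
		if (!strcmp(arg, "skb"))
			return TEST_MODE_SKB;
		if (!strcmp(arg, "drv"))
			return TEST_MODE_DRV;
		if (!strcmp(arg, "zc"))
			return TEST_MODE_ZC;
		return TEST_MODE_ALL;	/* unknown value: fall back to running all modes */
	}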
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-4-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
64370d7c |
| 14-Sep-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add timeout for Tx thread
Add a timeout for the transmission thread. If, for some reason, packets are not completed properly, the test harness would previously get stuck forever in a while loop. With this patch, the timeout triggers, flags the test as a failure, and the harness continues with the next test.
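A sketch of the general pattern, assuming a monotonic-clock deadline around the completion-wait loop (the constant and helper names are made up for illustration, not the exact selftest code):

	#include <stdbool.h>
	#include <time.h>

	#define TX_TIMEOUT_SEC 3

	static bool tx_deadline_exceeded(const struct timespec *start)
	{
		struct timespec now;

		clock_gettime(CLOCK_MONOTONIC, &now);
		return now.tv_sec - start->tv_sec > TX_TIMEOUT_SEC;
	}

	/* In the Tx thread, instead of spinning forever on outstanding packets:
	 *
	 *	clock_gettime(CLOCK_MONOTONIC, &start);
	 *	while (pkts_in_flight) {
	 *		complete_pkts(xsk);
	 *		if (tx_deadline_exceeded(&start))
	 *			return TEST_FAILURE;	// flag failure, move to next test
	 *	}
	 */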
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-3-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
2d2712ca |
| 14-Sep-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: print per packet info in verbose mode
Print info about every packet in verbose mode, both for Tx and Rx. This is useful when a test fails or to validate that a test is really doing what it was designed to do. Info on what is supposed to be received and sent is also printed for the custom packet streams, since they differ from the baseline. Here is an example:
Tx addr: 37e0 len: 64 options: 0 pkt_nb: 8
Tx addr: 4000 len: 64 options: 0 pkt_nb: 9
Rx: addr: 100 len: 64 options: 0 pkt_nb: 0 valid: 1
Rx: addr: 1100 len: 64 options: 0 pkt_nb: 1 valid: 1
Rx: addr: 2100 len: 64 options: 0 pkt_nb: 4 valid: 1
Rx: addr: 3100 len: 64 options: 0 pkt_nb: 8 valid: 1
Rx: addr: 4100 len: 64 options: 0 pkt_nb: 9 valid: 1
One pointless verbose print statement is also deleted and another one is made clearer.
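The per-packet lines above are the kind of output a verbose-gated helper produces; a minimal sketch of such a helper (the flag and function names are illustrative):

	#include <stdarg.h>
	#include <stdbool.h>
	#include <stdio.h>

	static bool opt_verbose;

	static void print_verbose(const char *fmt, ...)
	{
		va_list args;

		if (!opt_verbose)
			return;

		va_start(args, fmt);
		vfprintf(stdout, fmt, args);
		va_end(args);
	}

	/* e.g. print_verbose("Tx addr: %llx len: %u options: %u pkt_nb: %u\n",
	 *		      addr, len, options, pkt_nb); */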
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230914084900.492-2-magnus.karlsson@gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
c900529f |
| 12-Sep-2023 |
Thomas Zimmermann <tzimmermann@suse.de> |
Merge drm/drm-fixes into drm-misc-fixes
Forwarding to v6.6-rc1.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
|
Revision tags: v6.6-rc1 |
|
#
34069d12 |
| 05-Sep-2023 |
Dmitry Torokhov <dmitry.torokhov@gmail.com> |
Merge tag 'v6.5' into next
Sync up with mainline to bring in updates to the shared infrastructure.
|
#
1ac731c5 |
| 31-Aug-2023 |
Dmitry Torokhov <dmitry.torokhov@gmail.com> |
Merge branch 'next' into for-linus
Prepare input updates for 6.6 merge window.
|
#
bd6c11bc |
| 29-Aug-2023 |
Linus Torvalds <torvalds@linux-foundation.org> |
Merge tag 'net-next-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Paolo Abeni: "Core:
- Increase size limits for to-be-sent skb frag allocations. This allows tun, tap devices and packet sockets to better cope with large write operations
- Store netdevs in an xarray, to simplify iterating over netdevs
- Refactor nexthop selection for multipath routes
- Improve sched class lifetime handling
- Add backup nexthop ID support for bridge
- Implement drop reasons support in openvswitch
- Several data races annotations and fixes
- Constify the sk parameter of routing functions
- Prepend kernel version to netconsole message
Protocols:
- Implement support for TCP probing the peer being under memory pressure
- Remove hard coded limitation on IPv6 specific info placement inside the socket struct
- Get rid of sysctl_tcp_adv_win_scale and use an auto-estimated per socket scaling factor
- Scaling-up the IPv6 expired route GC via a separated list of expiring routes
- In-kernel support for the TLS alert protocol
- Better support for UDP reuseport with connected sockets
- Add NEXT-C-SID support for SRv6 End.X behavior, reducing the SR header size
- Get rid of additional ancillary per MPTCP connection struct socket
- Implement support for BPF-based MPTCP packet schedulers
- Format MPTCP subtests selftests results in TAP
- Several new SMC 2.1 features including unique experimental options, max connections per lgr negotiation, max links per lgr negotiation
BPF:
- Multi-buffer support in AF_XDP
- Add multi uprobe BPF links for attaching multiple uprobes and usdt probes, which is significantly faster and saves extra fds
- Implement an fd-based tc BPF attach API (TCX) and BPF link support on top of it
- Add SO_REUSEPORT support for TC bpf_sk_assign
- Support new instructions from cpu v4 to simplify the generated code and improve feature completeness, for x86, arm64, riscv64
- Support defragmenting IPv(4|6) packets in BPF
- Teach verifier actual bounds of bpf_get_smp_processor_id() and fix perf+libbpf issue related to custom section handling
- Introduce bpf map element count and enable it for all program types
- Add a BPF hook in sys_socket() to change the protocol ID from IPPROTO_TCP to IPPROTO_MPTCP to cover migration for legacy
- Introduce bpf_me_mcache_free_rcu() and fix OOM under stress
- Add uprobe support for the bpf_get_func_ip helper
- Check skb ownership against full socket
- Support for up to 12 arguments in BPF trampoline
- Extend link_info for kprobe_multi and perf_event links
Netfilter:
- Speed-up process exit by aborting ruleset validation if a fatal signal is pending
- Allow NLA_POLICY_MASK to be used with BE16/BE32 types
Driver API:
- Page pool optimizations, to improve data locality and cache usage
- Introduce ndo_hwtstamp_get() and ndo_hwtstamp_set() to avoid the need for raw ioctl() handling in drivers
- Simplify genetlink dump operations (doit/dumpit) providing them the common information already populated in struct genl_info
- Extend and use the yaml devlink specs to [re]generate the split ops
- Introduce devlink selective dumps, to allow filtering SFs based on handle and other attributes
- Add yaml netlink spec for netlink-raw families, allow route, link and address related queries via the ynl tool
- Remove phylink legacy mode support
- Support offload LED blinking to phy
- Add devlink port function attributes for IPsec
New hardware / drivers:
- Ethernet:
  - Broadcom ASP 2.0 (72165) ethernet controller
  - MediaTek MT7988 SoC
  - Texas Instruments AM654 SoC
  - Texas Instruments IEP driver
  - Atheros qca8081 phy
  - Marvell 88Q2110 phy
  - NXP TJA1120 phy
- WiFi:
  - MediaTek mt7981 support
- Can:
  - Kvaser SmartFusion2 PCI Express devices
  - Allwinner T113 controllers
  - Texas Instruments tcan4552/4553 chips
- Bluetooth:
  - Intel Gale Peak
  - Qualcomm WCN3988 and WCN7850
  - NXP AW693 and IW624
  - Mediatek MT2925
Drivers:
- Ethernet NICs:
  - nVidia/Mellanox:
    - mlx5:
      - support UDP encapsulation in packet offload mode
      - IPsec packet offload support in eswitch mode
      - improve aRFS observability by adding new set of counters
      - extends MACsec offload support to cover RoCE traffic
      - dynamic completion EQs
    - mlx4:
      - convert to use auxiliary bus instead of custom interface logic
  - Intel:
    - ice:
      - implement switchdev bridge offload, even for LAG interfaces
      - implement SRIOV support for LAG interfaces
    - igc:
      - add support for multiple in-flight TX timestamps
  - Broadcom:
    - bnxt:
      - use the unified RX page pool buffers for XDP and non-XDP
      - use the NAPI skb allocation cache
  - OcteonTX2:
    - support Round Robin scheduling HTB offload
    - TC flower offload support for SPI field
  - Freescale:
    - add XDP_TX feature support
  - AMD:
    - ionic: add support for PCI FLR event
    - sfc:
      - basic conntrack offload
      - introduce eth, ipv4 and ipv6 pedit offloads
  - ST Microelectronics:
    - stmmac: maximize PTP timestamping resolution
- Virtual NICs:
  - Microsoft vNIC:
    - batch ringing RX queue doorbell on receiving packets
    - add page pool for RX buffers
  - Virtio vNIC:
    - add per queue interrupt coalescing support
  - Google vNIC:
    - add queue-page-list mode support
- Ethernet high-speed switches:
  - nVidia/Mellanox (mlxsw):
    - add port range matching tc-flower offload
    - permit enslavement to netdevices with uppers
- Ethernet embedded switches:
  - Marvell (mv88e6xxx):
    - convert to phylink_pcs
  - Renesas:
    - r8A779fx: add speed change support
    - rzn1: enables vlan support
- Ethernet PHYs:
  - convert mv88e6xxx to phylink_pcs
- WiFi:
  - Qualcomm Wi-Fi 7 (ath12k):
    - extremely High Throughput (EHT) PHY support
  - RealTek (rtl8xxxu):
    - enable AP mode for: RTL8192FU, RTL8710BU (RTL8188GU), RTL8192EU and RTL8723BU
  - RealTek (rtw89):
    - Introduce Time Averaged SAR (TAS) support
- Connector: - support for event filtering"
* tag 'net-next-6.6' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1806 commits) net: ethernet: mtk_wed: minor change in wed_{tx,rx}info_show net: ethernet: mtk_wed: add some more info in wed_txinfo_show handler net: stmmac: clarify difference between "interface" and "phy_interface" r8152: add vendor/device ID pair for D-Link DUB-E250 devlink: move devlink_notify_register/unregister() to dev.c devlink: move small_ops definition into netlink.c devlink: move tracepoint definitions into core.c devlink: push linecard related code into separate file devlink: push rate related code into separate file devlink: push trap related code into separate file devlink: use tracepoint_enabled() helper devlink: push region related code into separate file devlink: push param related code into separate file devlink: push resource related code into separate file devlink: push dpipe related code into separate file devlink: move and rename devlink_dpipe_send_and_alloc_skb() helper devlink: push shared buffer related code into separate file devlink: push port related code into separate file devlink: push object register/unregister notifications into separate helpers inet: fix IP_TRANSPARENT error handling ...
|
Revision tags: v6.5, v6.5-rc7, v6.5-rc6 |
|
#
2612e3bb |
| 07-Aug-2023 |
Rodrigo Vivi <rodrigo.vivi@intel.com> |
Merge drm/drm-next into drm-intel-next
Catching up with drm-next and drm-intel-gt-next. It will unblock a code refactor around the platform definitions (names vs acronyms).
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
#
9f771739 |
| 07-Aug-2023 |
Joonas Lahtinen <joonas.lahtinen@linux.intel.com> |
Merge drm/drm-next into drm-intel-gt-next
Need to pull in b3e4aae612ec ("drm/i915/hdcp: Modify hdcp_gsc_message msg sending mechanism") as a dependency for https://patchwork.freedesktop.org/series/121735/
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
|
Revision tags: v6.5-rc5 |
|
#
d07b7b32 |
| 04-Aug-2023 |
Jakub Kicinski <kuba@kernel.org> |
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Martin KaFai Lau says:
==================== pull-request: bpf-next 2023-08-03
We've added 54 non-merge commits during the last 10 day(s) which contain a total of 84 files changed, 4026 insertions(+), 562 deletions(-).
The main changes are:
1) Add SO_REUSEPORT support for TC bpf_sk_assign from Lorenz Bauer, Daniel Borkmann
2) Support new insns from cpu v4 from Yonghong Song
3) Non-atomically allocate freelist during prefill from YiFei Zhu
4) Support defragmenting IPv(4|6) packets in BPF from Daniel Xu
5) Add tracepoint to xdp attaching failure from Leon Hwang
6) struct netdev_rx_queue and xdp.h reshuffling to reduce rebuild time from Jakub Kicinski
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (54 commits) net: invert the netdevice.h vs xdp.h dependency net: move struct netdev_rx_queue out of netdevice.h eth: add missing xdp.h includes in drivers selftests/bpf: Add testcase for xdp attaching failure tracepoint bpf, xdp: Add tracepoint to xdp attaching failure selftests/bpf: fix static assert compilation issue for test_cls_*.c bpf: fix bpf_probe_read_kernel prototype mismatch riscv, bpf: Adapt bpf trampoline to optimized riscv ftrace framework libbpf: fix typos in Makefile tracing: bpf: use struct trace_entry in struct syscall_tp_t bpf, devmap: Remove unused dtab field from bpf_dtab_netdev bpf, cpumap: Remove unused cmap field from bpf_cpu_map_entry netfilter: bpf: Only define get_proto_defrag_hook() if necessary bpf: Fix an array-index-out-of-bounds issue in disasm.c net: remove duplicate INDIRECT_CALLABLE_DECLARE of udp[6]_ehashfn docs/bpf: Fix malformed documentation bpf: selftests: Add defrag selftests bpf: selftests: Support custom type and proto for client sockets bpf: selftests: Support not connecting client socket netfilter: bpf: Support BPF_F_NETFILTER_IP_DEFRAG in netfilter link ... ====================
Link: https://lore.kernel.org/r/20230803174845.825419-1-martin.lau@linux.dev Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
Revision tags: v6.5-rc4, v6.5-rc3 |
|
#
13fd5e14 |
| 20-Jul-2023 |
Colin Ian King <colin.i.king@gmail.com> |
selftests/xsk: Fix spelling mistake "querrying" -> "querying"
There is a spelling mistake in an error message. Fix it.
Signed-off-by: Colin Ian King <colin.i.king@gmail.com> Acked-by: Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20230720104815.123146-1-colin.i.king@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
|
#
61b73694 |
| 24-Jul-2023 |
Thomas Zimmermann <tzimmermann@suse.de> |
Merge drm/drm-next into drm-misc-next
Backmerging to get v6.5-rc2.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
|
#
e93165d5 |
| 20-Jul-2023 |
Jakub Kicinski <kuba@kernel.org> |
Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:
==================== pull-request: bpf-next 2023-07-19
We've added 45 non-merge commits during the last 3 day(s) which contain a total of 71 files changed, 7808 insertions(+), 592 deletions(-).
The main changes are:
1) multi-buffer support in AF_XDP, from Maciej Fijalkowski, Magnus Karlsson, Tirthendu Sarkar.
2) BPF link support for tc BPF programs, from Daniel Borkmann.
3) Enable bpf_map_sum_elem_count kfunc for all program types, from Anton Protopopov.
4) Add 'owner' field to bpf_rb_node to fix races in shared ownership, Dave Marchevsky.
5) Prevent potential skb_header_pointer() misuse, from Alexei Starovoitov.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (45 commits) bpf, net: Introduce skb_pointer_if_linear(). bpf: sync tools/ uapi header with selftests/bpf: Add mprog API tests for BPF tcx links selftests/bpf: Add mprog API tests for BPF tcx opts bpftool: Extend net dump with tcx progs libbpf: Add helper macro to clear opts structs libbpf: Add link-based API for tcx libbpf: Add opts-based attach/detach/query API for tcx bpf: Add fd-based tcx multi-prog infra with link support bpf: Add generic attach/detach/query API for multi-progs selftests/xsk: reset NIC settings to default after running test suite selftests/xsk: add test for too many frags selftests/xsk: add metadata copy test for multi-buff selftests/xsk: add invalid descriptor test for multi-buffer selftests/xsk: add unaligned mode test for multi-buffer selftests/xsk: add basic multi-buffer test selftests/xsk: transmit and receive multi-buffer packets xsk: add multi-buffer documentation i40e: xsk: add TX multi-buffer support ice: xsk: Tx multi-buffer support ... ====================
Link: https://lore.kernel.org/r/20230719175424.75717-1-alexei.starovoitov@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
#
3226e313 |
| 19-Jul-2023 |
Alexei Starovoitov <ast@kernel.org> |
Merge branch 'xsk-multi-buffer-support'
Maciej Fijalkowski says:
==================== xsk: multi-buffer support
v6->v7:
- rebase...[Alexei]

v5->v6:
- update bpf_xdp_query_opts__last_field in patch 10 [Alexei]

v4->v5:
- align options argument size to match options from xdp_desc [Benjamin]
- cleanup skb from xdp_sock on socket termination [Toke]
- introduce new netlink attribute for letting user space know about Tx frag limit; this substitutes xdp_features flag previously dedicated for setting ZC multi-buffer support [Toke, Jakub]
- include i40e ZC multi-buffer support
- enable TOO_MANY_FRAGS for ZC on xskxceiver; this is now possible due to netlink attribute mentioned two bullets above

v3->v4:
- rely on ynl for adding new xdp_features flag [Jakub]
- move xskb_list to xsk_buff_pool

v2->v3:
- Fix issue with the next valid packet getting dropped after an invalid packet with MAX_SKB_FRAGS + 1 frags [Magnus]
- query NETDEV_XDP_ACT_ZC_SG flag within xskxceiver and act on it
- remove redundant include in xsk.c [kernel test robot]
- s/NETDEV_XDP_ACT_NDO_ZC_SG/NETDEV_XDP_ACT_ZC_SG + kernel doc [Magnus, Simon]

v1->v2:
- fix spelling issues in commit messages [Simon]
- remove XSK_DESC_MAX_FRAGS, use MAX_SKB_FRAGS instead [Stan, Alexei]
- add documentation patch
- fix build error from kernel test robot on patch 10
This series of patches adds multi-buffer support for AF_XDP. XDP and various NIC drivers already have support for multi-buffer packets. With this patch set, programs using AF_XDP sockets can now also receive and transmit multi-buffer packets, both in copy as well as zero-copy mode. The ZC multi-buffer implementation is based on the ice driver.
Some definitions to put us all on the same page:
* A packet consists of one or more frames
* A descriptor in one of the AF_XDP rings always refers to a single frame. In the case the packet consists of a single frame, the descriptor refers to the whole packet.
To represent a packet consisting of multiple frames, we introduce a new flag called XDP_PKT_CONTD in the options field of the Rx and Tx descriptors. If it is true (1) the packet continues with the next descriptor and if it is false (0) it means this is the last descriptor of the packet. Why the reverse logic of end-of-packet (eop) flag found in many NICs? Just to preserve compatibility with non-multi-buffer applications that have this bit set to false for all packets on Rx, and the apps set the options field to zero for Tx, as anything else will be treated as an invalid descriptor.
These are the semantics for producing packets onto XSK Tx ring consisting of multiple frames:
* When an invalid descriptor is found, all the other descriptors/frames of this packet are marked as invalid and not completed. The next descriptor is treated as the start of a new packet, even if this was not the intent (because we cannot guess the intent). As before, if your program is producing invalid descriptors you have a bug that must be fixed.
* Zero length descriptors are treated as invalid descriptors.
* For copy mode, the maximum supported number of frames in a packet is equal to CONFIG_MAX_SKB_FRAGS + 1. If it is exceeded, all descriptors accumulated so far are dropped and treated as invalid. To produce an application that will work on any system regardless of this config setting, limit the number of frags to 18, as the minimum value of the config is 17.
* For zero-copy mode, the limit is up to what the NIC HW supports. User space can discover this via newly introduced NETDEV_A_DEV_XDP_ZC_MAX_SEGS netlink attribute.
Here is an example Tx path pseudo-code (using libxdp interfaces for simplicity) ignoring that the umem is finite in size, and that we eventually will run out of packets to send. Also assumes pkts.addr points to a valid location in the umem.
void tx_packets(struct xsk_socket_info *xsk, struct pkt *pkts, int batch_size)
{
	u32 idx, i, pkt_nb = 0;

	xsk_ring_prod__reserve(&xsk->tx, batch_size, &idx);

	for (i = 0; i < batch_size;) {
		u64 addr = pkts[pkt_nb].addr;
		u32 len = pkts[pkt_nb].size;

		do {
			struct xdp_desc *tx_desc;

			tx_desc = xsk_ring_prod__tx_desc(&xsk->tx, idx + i++);
			tx_desc->addr = addr;

			if (len > xsk_frame_size) {
				tx_desc->len = xsk_frame_size;
				tx_desc->options |= XDP_PKT_CONTD;
			} else {
				tx_desc->len = len;
				tx_desc->options = 0;
				pkt_nb++;
			}
			len -= tx_desc->len;
			addr += xsk_frame_size;

			if (i == batch_size) {
				/* Remember len, addr, pkt_nb for next
				 * iteration. Skipped for simplicity.
				 */
				break;
			}
		} while (len);
	}

	xsk_ring_prod__submit(&xsk->tx, i);
}
On the Rx path in copy mode, the xsk core copies the XDP data into multiple descriptors, if needed, and sets the XDP_PKT_CONTD flag as detailed before. Zero-copy mode in order to avoid the copies has to maintain a chain of xdp_buff_xsk structs that represent whole packet. This is because what actually is redirected is the xdp_buff and we currently have no equivalent mechanism that is used for copy mode (embedded skb_shared_info in xdp_buff) to carry the frags. This means xdp_buff_xsk grows in size but these members are at the end and should not be touched when data path is not dealing with fragmented packets. This solution kept us within assumed performance impact, hence we decided to proceed with it.
When the application gets a descriptor with the XDP_PKT_CONTD flag set to one, it means that the packet consists of multiple buffers and it continues with the next buffer in the following descriptor. When a descriptor with XDP_PKT_CONTD == 0 is received, it means that this is the last buffer of the packet. AF_XDP guarantees that only a complete packet (all frames in the packet) is sent to the application.
If application reads a batch of descriptors, using for example the libxdp interfaces, it is not guaranteed that the batch will end with a full packet. It might end in the middle of a packet and the rest of the buffers of that packet will arrive at the beginning of the next batch, since the libxdp interface does not read the whole ring (unless you have an enormous batch size or a very small ring size).
Here is a simple Rx path pseudo-code example (using libxdp interfaces for simplicity). Error paths have been excluded for simplicity:
void rx_packets(struct xsk_socket_info *xsk)
{
	static bool new_packet = true;
	u32 idx_rx = 0, idx_fq = 0;
	static char *pkt;

	int rcvd = xsk_ring_cons__peek(&xsk->rx, opt_batch_size, &idx_rx);

	xsk_ring_prod__reserve(&xsk->umem->fq, rcvd, &idx_fq);

	for (int i = 0; i < rcvd; i++) {
		struct xdp_desc *desc = xsk_ring_cons__rx_desc(&xsk->rx, idx_rx++);
		char *frag = xsk_umem__get_data(xsk->umem->buffer, desc->addr);
		bool eop = !(desc->options & XDP_PKT_CONTD);

		if (new_packet)
			pkt = frag;
		else
			add_frag_to_pkt(pkt, frag);

		if (eop)
			process_pkt(pkt);

		new_packet = eop;

		*xsk_ring_prod__fill_addr(&xsk->umem->fq, idx_fq++) = desc->addr;
	}

	xsk_ring_prod__submit(&xsk->umem->fq, rcvd);
	xsk_ring_cons__release(&xsk->rx, rcvd);
}
We had to introduce a new bind flag (XDP_USE_SG) on the AF_XDP level to enable multi-buffer support. The reason we need to differentiate between non multi-buffer and multi-buffer is the behaviour when the kernel gets a packet that is larger than the frame size. Without multi-buffer, this packet is dropped and marked in the stats. With multi-buffer on, we want to split it up into multiple frames instead.
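For reference, opting in at bind time looks roughly like the following sketch (not code from this series; it assumes the socket and umem are set up elsewhere, and error handling is omitted):

	#include <linux/if_xdp.h>
	#include <string.h>
	#include <sys/socket.h>

	int bind_xsk_multibuf(int xsk_fd, int ifindex, __u32 queue_id)
	{
		struct sockaddr_xdp sxdp;

		memset(&sxdp, 0, sizeof(sxdp));
		sxdp.sxdp_family = AF_XDP;
		sxdp.sxdp_ifindex = ifindex;
		sxdp.sxdp_queue_id = queue_id;
		sxdp.sxdp_flags = XDP_USE_SG;	/* request multi-buffer (frag) support */

		return bind(xsk_fd, (struct sockaddr *)&sxdp, sizeof(sxdp));
	}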
At the start, we thought that riding on the .frags section name of the XDP program was a good idea. You do not have to introduce yet another flag and all AF_XDP users must load an XDP program anyway to get any traffic up to the socket, so why not just say that the XDP program decides if the AF_XDP socket should get multi-buffer packets or not? The problem is that we can create an AF_XDP socket that is Tx only and that works without having to load an XDP program at all. Another problem is that the XDP program might change during the execution, so we would have to check this for every single packet.
Here is the observed throughput when compared to a codebase without any multi-buffer changes, measured with xdpsock for 64B packets. Apparently ZC Tx takes a hit from the explicit zero-length descriptor validation. Overall, in terms of ZC performance, there is room for improvement, but for now we think this work is in good shape in terms of correctness and functionality. We were targeting up to 5% overhead though. Note that the ZC performance drops come from core + driver support being combined, whereas copy mode already had driver support in place.
Mode     rxdrop  l2fwd  txonly
ice-zc    -4%     -7%    -6%
i40e-zc   -7%     -6%    -7%
drv      -1.2%     0%    +2%
skb      -0.6%    -1%    +2%
Thank you, Tirthendu, Magnus and Maciej ====================
Link: https://lore.kernel.org/r/20230719132421.584801-1-maciej.fijalkowski@intel.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
807bf4da |
| 19-Jul-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add test for too many frags
Add a test that exercises the maximum number of supported fragments. This number depends on the mode of the test - for SKB and DRV it will be 18, whereas for ZC it is defined by the value of the NETDEV_A_DEV_XDP_ZC_MAX_SEGS netlink attribute.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # made use of new netlink attribute Link: https://lore.kernel.org/r/20230719132421.584801-24-maciej.fijalkowski@intel.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
f80ddbec |
| 19-Jul-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add metadata copy test for multi-buff
Enable the already existing metadata copy test to also run in multi-buffer mode with 9K packets.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230719132421.584801-23-maciej.fijalkowski@intel.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
69760449 |
| 19-Jul-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add invalid descriptor test for multi-buffer
Add a test that produces lots of nasty descriptors testing the corner cases of the descriptor validation. Some of these descriptors are valid and some are not as indicated by the valid flag. For a description of all the test combinations, please see the code.
To stress the API, we need to be able to generate combinations of descriptors that make little sense. A new verbatim mode is introduced for the packet_stream to accomplish this. In this mode, all packets in the packet_stream are sent as is. We do not try to chop them up into right-sized frames that we know will work, as we normally would. The packets are just written into the Tx ring, even if we know they make no sense.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> # adjusted valid flags for frags Link: https://lore.kernel.org/r/20230719132421.584801-22-maciej.fijalkowski@intel.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
1005a226 |
| 19-Jul-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add unaligned mode test for multi-buffer
Add a test for multi-buffer AF_XDP when using unaligned mode. The test sends 4096 9K-buffers.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230719132421.584801-21-maciej.fijalkowski@intel.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
f540d44e |
| 19-Jul-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: add basic multi-buffer test
Add the first basic multi-buffer test that sends a stream of 9K packets and validates that they are received at the other end. In order to enable sending and receiving multi-buffer packets, code that sets the MTU is introduced as well as modifications to the XDP programs so that they signal that they are multi-buffer enabled.
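The MTU change itself can be done with a plain SIOCSIFMTU ioctl; a minimal sketch of that step (not the exact selftest code, error handling trimmed):

	#include <net/if.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <unistd.h>

	static int set_mtu(const char *ifname, int mtu)
	{
		struct ifreq ifr;
		int fd, ret;

		fd = socket(AF_INET, SOCK_DGRAM, 0);
		if (fd < 0)
			return -1;

		memset(&ifr, 0, sizeof(ifr));
		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
		ifr.ifr_mtu = mtu;		/* e.g. 9000 for the 9K packet tests */

		ret = ioctl(fd, SIOCSIFMTU, &ifr);
		close(fd);
		return ret;
	}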
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230719132421.584801-20-maciej.fijalkowski@intel.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
17f1034d |
| 19-Jul-2023 |
Magnus Karlsson <magnus.karlsson@intel.com> |
selftests/xsk: transmit and receive multi-buffer packets
Add the ability to send and receive packets that are larger than the size of a umem frame, using the AF_XDP/XDP multi-buffer support. There are three pieces of code that need to be changed to achieve this: the Rx path, the Tx path, and the validation logic.
Both the Rx and Tx paths could previously only deal with a single fragment per packet. The Tx path is extended with a new function called pkt_nb_frags() that can be used to retrieve the number of fragments a packet will consume. We then create that many fragments in a loop, fill the first N-1 of them to the max size limit to use the buffer space efficiently, and fill the Nth one with whatever data is left. This goes on until we have filled in at most BATCH_SIZE worth of descriptors and fragments. If we detect that the next packet would exceed BATCH_SIZE fragments sent, we do not send this packet and finish the batch. This packet is instead sent in the next iteration of BATCH_SIZE fragments.
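The fragment count described above amounts to a ceiling division of the packet length by the frame size; a hedged sketch of what pkt_nb_frags() boils down to (the signature is illustrative, not the selftest's exact one):

	static unsigned int pkt_nb_frags(unsigned int frame_size, unsigned int pkt_len)
	{
		if (!pkt_len || !frame_size)
			return 1;

		return (pkt_len + frame_size - 1) / frame_size;	/* ceil(pkt_len / frame_size) */
	}

A 9000-byte packet with a 4096-byte umem frame, for example, would then occupy three descriptors, the last one carrying the 808-byte remainder.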
For Rx, we loop over all fragments we receive as usual, but for every descriptor that we receive we call a new validation function called is_frag_valid() to validate the consistency of that fragment. The code then checks if the packet continues in the next frame. If so, it loops over the next fragment and performs the same validation. Once we have received the last fragment of the packet we also call the function is_pkt_valid() to validate the packet as a whole. If we get to the end of the batch and we are not at the end of the current packet, we back out the partial packet and end the loop. Once we get into the receive loop next time, we start over from the beginning of that packet. This is so the code becomes simpler, at the cost of some performance.
The validation function is_frag_valid() checks that the sequence and packet numbers are correct at the start and end of each fragment.
Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com> Link: https://lore.kernel.org/r/20230719132421.584801-19-maciej.fijalkowski@intel.com Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
#
50501936 |
| 17-Jul-2023 |
Dmitry Torokhov <dmitry.torokhov@gmail.com> |
Merge tag 'v6.4' into next
Sync up with mainline to bring in updates to shared infrastructure.
|