xref: /linux/Documentation/bpf/map_cpumap.rst (revision dfd5e53dd72113f37663f59a6337fe9a0dfbf0f6)
.. SPDX-License-Identifier: GPL-2.0-only
.. Copyright (C) 2022 Red Hat, Inc.

===================
BPF_MAP_TYPE_CPUMAP
===================

.. note::
   - ``BPF_MAP_TYPE_CPUMAP`` was introduced in kernel version 4.15

.. kernel-doc:: kernel/bpf/cpumap.c
 :doc: cpu map

An example use-case for this map type is software-based Receive Side Scaling (RSS).

The CPUMAP represents the CPUs in the system indexed as the map-key, and the
map-value is the config setting (per CPUMAP entry). Each CPUMAP entry has a
dedicated kernel thread bound to the given CPU to represent the remote CPU
execution unit.

Starting from Linux kernel version 5.9 the CPUMAP can run a second XDP program
on the remote CPU. This allows an XDP program to split its processing across
multiple CPUs. An example is a scenario where the initial CPU (that
sees/receives the packets) needs to do minimal packet processing and the remote
CPU (to which the packet is directed) can afford to spend more cycles
processing the frame. The initial CPU is where the XDP redirect program is
executed, while the remote CPU receives raw ``xdp_frame`` objects.

Usage
=====

Kernel BPF
----------
.. c:function::
     long bpf_redirect_map(struct bpf_map *map, u32 key, u64 flags)

 Redirect the packet to the endpoint referenced by ``map`` at index ``key``.
 For ``BPF_MAP_TYPE_CPUMAP`` this map contains references to CPUs.

 The lower two bits of ``flags`` are used as the return code if the map lookup
 fails. This is so that the return value can be one of the XDP program return
 codes up to ``XDP_TX``, as chosen by the caller.

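 For instance, a redirect program can ask for the packet to be passed up the
 normal stack when the chosen CPU has no CPUMAP entry. A minimal sketch, where
 ``xdp_redirect_or_pass`` is a hypothetical program and ``cpu_map`` is assumed
 to be declared as in the Examples section below:

 .. code-block:: c

    SEC("xdp")
    int xdp_redirect_or_pass(struct xdp_md *ctx)
    {
        /* bpf_redirect_map() only keeps the lower two bits of flags,
         * so on a failed lookup for CPU 2 this returns XDP_PASS (2)
         * and the packet continues up the normal network stack.
         */
        return bpf_redirect_map(&cpu_map, 2, XDP_PASS);
    }
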
Userspace
---------
.. note::
    CPUMAP entries can only be updated/looked up/deleted from user space and not
    from an eBPF program. Trying to call these functions from a kernel eBPF
    program will result in the program failing to load and a verifier warning.

.. c:function::
    int bpf_map_update_elem(int fd, const void *key, const void *value,
                   __u64 flags);

 CPU entries can be added or updated using the ``bpf_map_update_elem()``
 helper. This helper replaces existing elements atomically. The ``value``
 parameter can be ``struct bpf_cpumap_val``.

 .. code-block:: c

    struct bpf_cpumap_val {
        __u32 qsize;  /* queue size to remote target CPU */
        union {
            int   fd; /* prog fd on map write */
            __u32 id; /* prog id on map read */
        } bpf_prog;
    };

 The flags argument can be one of the following:
  - BPF_ANY: Create a new element or update an existing element.
  - BPF_NOEXIST: Create a new element only if it did not exist.
  - BPF_EXIST: Update an existing element.

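 As a sketch, a user-space loader could size the per-CPU queue and optionally
 attach a second XDP program in a single update. ``add_cpu_entry()`` is a
 hypothetical helper; ``cpu_map_fd`` and ``prog_fd`` are assumed to come from
 the surrounding loader code:

 .. code-block:: c

    /* Add CPU 3 to the CPUMAP with a 192-entry queue. A prog fd of 0
     * means no second XDP program runs on the remote CPU.
     */
    int add_cpu_entry(int cpu_map_fd, int prog_fd)
    {
        struct bpf_cpumap_val val = {
            .qsize = 192,
            .bpf_prog.fd = prog_fd,
        };
        __u32 key = 3; /* destination CPU */

        return bpf_map_update_elem(cpu_map_fd, &key, &val, BPF_ANY);
    }
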
.. c:function::
    int bpf_map_lookup_elem(int fd, const void *key, void *value);

 CPU entries can be retrieved using the ``bpf_map_lookup_elem()``
 helper.

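 Note that on lookup the ``bpf_prog`` union of ``struct bpf_cpumap_val`` holds
 the program id, not the fd used on update. A minimal sketch, with
 ``print_cpu_entry()`` as a hypothetical helper and ``cpu_map_fd`` assumed to
 come from the loader:

 .. code-block:: c

    /* Read back the entry for CPU 3 and print its settings. */
    int print_cpu_entry(int cpu_map_fd)
    {
        struct bpf_cpumap_val val;
        __u32 key = 3;

        if (bpf_map_lookup_elem(cpu_map_fd, &key, &val) < 0)
            return -1;

        printf("cpu %u: qsize=%u prog_id=%u\n", key, val.qsize,
               val.bpf_prog.id);
        return 0;
    }
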
.. c:function::
    int bpf_map_delete_elem(int fd, const void *key);

 CPU entries can be deleted using the ``bpf_map_delete_elem()``
 helper. This helper will return 0 on success, or negative error in case of
 failure.

Examples
========
Kernel
------

The following code snippet shows how to declare a ``BPF_MAP_TYPE_CPUMAP`` called
``cpu_map`` and how to redirect packets to a remote CPU using a round-robin
scheme.

.. code-block:: c

    struct {
        __uint(type, BPF_MAP_TYPE_CPUMAP);
        __type(key, __u32);
        __type(value, struct bpf_cpumap_val);
        __uint(max_entries, 12);
    } cpu_map SEC(".maps");

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __type(key, __u32);
        __type(value, __u32);
        __uint(max_entries, 12);
    } cpus_available SEC(".maps");

    struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
        __type(key, __u32);
        __type(value, __u32);
        __uint(max_entries, 1);
    } cpus_iterator SEC(".maps");

    SEC("xdp")
    int xdp_redir_cpu_round_robin(struct xdp_md *ctx)
    {
        __u32 key = 0;
        __u32 cpu_dest = 0;
        __u32 *cpu_selected, *cpu_iterator;
        __u32 cpu_idx;

        cpu_iterator = bpf_map_lookup_elem(&cpus_iterator, &key);
        if (!cpu_iterator)
            return XDP_ABORTED;
        cpu_idx = *cpu_iterator;

        *cpu_iterator += 1;
        if (*cpu_iterator == bpf_num_possible_cpus())
            *cpu_iterator = 0;

        cpu_selected = bpf_map_lookup_elem(&cpus_available, &cpu_idx);
        if (!cpu_selected)
            return XDP_ABORTED;
        cpu_dest = *cpu_selected;

        if (cpu_dest >= bpf_num_possible_cpus())
            return XDP_ABORTED;

        return bpf_redirect_map(&cpu_map, cpu_dest, 0);
    }

Userspace
---------

The following code snippet shows how to dynamically set the ``max_entries`` for
a CPUMAP to the maximum number of CPUs available on the system.

.. code-block:: c

    int set_max_cpu_entries(struct bpf_map *cpu_map)
    {
        if (bpf_map__set_max_entries(cpu_map, libbpf_num_possible_cpus()) < 0) {
            fprintf(stderr, "Failed to set max entries for cpu_map map: %s\n",
                strerror(errno));
            return -1;
        }
        return 0;
    }

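The following sketch shows how user space could populate the ``cpu_map`` and
``cpus_available`` maps from the kernel example above. ``setup_cpumap()`` is a
hypothetical helper; ``cpu_map_fd`` and ``cpus_available_fd`` are assumed to be
the maps' file descriptors obtained from the loaded object:

.. code-block:: c

    /* Make every possible CPU a redirect target, each with a
     * 192-entry queue and no second XDP program on the remote CPU.
     */
    int setup_cpumap(int cpu_map_fd, int cpus_available_fd)
    {
        struct bpf_cpumap_val val = { .qsize = 192 };
        int n = libbpf_num_possible_cpus();

        for (__u32 cpu = 0; cpu < n; cpu++) {
            if (bpf_map_update_elem(cpu_map_fd, &cpu, &val, BPF_ANY) < 0)
                return -1;
            if (bpf_map_update_elem(cpus_available_fd, &cpu, &cpu, BPF_ANY) < 0)
                return -1;
        }
        return 0;
    }
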
References
==========

- https://developers.redhat.com/blog/2021/05/13/receive-side-scaling-rss-with-ebpf-and-cpumap#redirecting_into_a_cpumap
167