.. SPDX-License-Identifier: GPL-2.0

.. _kfuncs-header-label:

=============================
BPF Kernel Functions (kfuncs)
=============================

1. Introduction
===============

BPF Kernel Functions, more commonly known as kfuncs, are functions in the Linux
kernel which are exposed for use by BPF programs. Unlike normal BPF helpers,
kfuncs do not have a stable interface and can change from one kernel release to
another. Hence, BPF programs need to be updated in response to changes in the
kernel. See :ref:`BPF_kfunc_lifecycle_expectations` for more information.

2. Defining a kfunc
===================

There are two ways to expose a kernel function to BPF programs: either make an
existing function in the kernel visible, or add a new wrapper for BPF. In both
cases, care must be taken that BPF programs can only call such functions in a
valid context. To enforce this, visibility of a kfunc can be per program type.

If you are not creating a BPF wrapper for an existing kernel function, skip
ahead to :ref:`BPF_kfunc_nodef`.

2.1 Creating a wrapper kfunc
----------------------------

When defining a wrapper kfunc, the wrapper function should have extern linkage.
This prevents the compiler from optimizing away dead code, as this wrapper kfunc
is not invoked anywhere in the kernel itself. It is not necessary to provide a
prototype in a header for the wrapper kfunc.

An example is given below::

        /* Disables missing prototype warnings */
        __bpf_kfunc_start_defs();

        __bpf_kfunc struct task_struct *bpf_find_get_task_by_vpid(pid_t nr)
        {
                return find_get_task_by_vpid(nr);
        }

        __bpf_kfunc_end_defs();

A wrapper kfunc is often needed when we need to annotate parameters of the
kfunc. Otherwise one may directly make the kfunc visible to the BPF program by
registering it with the BPF subsystem. See :ref:`BPF_kfunc_nodef`.

2.2 Annotating kfunc parameters
-------------------------------

Similar to BPF helpers, the verifier sometimes needs additional context to make
the usage of kernel functions safer and more useful. Hence, we can annotate a
parameter by suffixing the name of the argument of the kfunc with a __tag, where
tag may be one of the supported annotations.

2.2.1 __sz Annotation
---------------------

This annotation is used to indicate a memory and size pair in the argument list.
An example is given below::

        __bpf_kfunc void bpf_memzero(void *mem, int mem__sz)
        {
        ...
        }

Here, the verifier will treat the first argument as a PTR_TO_MEM, and the second
argument as its size. By default, without the __sz annotation, the size of the
type of the pointer is used. Without the __sz annotation, a kfunc cannot accept
a void pointer.
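
On the BPF program side, a kfunc annotated this way is simply called with an
explicit length argument. Below is a minimal sketch of calling the (purely
illustrative) ``bpf_memzero()`` kfunc from above in a tracing program, assuming
it has been declared to the program (e.g. with a ``__ksym`` declaration) and
registered for this program type:

.. code-block:: c

	extern void bpf_memzero(void *mem, int mem__sz) __ksym;

	SEC("tp_btf/task_newtask")
	int BPF_PROG(memzero_example, struct task_struct *task, u64 clone_flags)
	{
		char buf[16] = {};

		/* The verifier checks that buf really points to 16 bytes. */
		bpf_memzero(buf, sizeof(buf));
		return 0;
	}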

2.2.2 __k Annotation
--------------------

This annotation is only understood for scalar arguments. It indicates that the
verifier must check that the scalar argument is a known constant which is not a
size parameter, and whose value is relevant to the safety of the program.

An example is given below::

        __bpf_kfunc void *bpf_obj_new(u32 local_type_id__k, ...)
        {
        ...
        }

Here, bpf_obj_new uses the local_type_id argument to find out the size of that
type ID in the program's BTF and returns a sized pointer to it. Each type ID
will have a distinct size, hence it is crucial to treat each such call as
distinct when values don't match during verifier state pruning checks.

Hence, whenever a constant scalar argument is accepted by a kfunc which is not a
size parameter, and the value of the constant matters for program safety, the
__k suffix should be used.

2.2.3 __uninit Annotation
-------------------------

This annotation is used to indicate that the argument will be treated as
uninitialized.

An example is given below::

        __bpf_kfunc int bpf_dynptr_from_skb(..., struct bpf_dynptr_kern *ptr__uninit)
        {
        ...
        }

Here, the dynptr will be treated as an uninitialized dynptr. Without this
annotation, the verifier will reject the program if the dynptr passed in is
not initialized.
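
On the BPF program side, the dynptr is simply declared on the stack and passed
in without being initialized first. A minimal sketch, assuming the program type
is one for which ``bpf_dynptr_from_skb()`` is registered and that the kfunc is
declared via the usual BPF headers:

.. code-block:: c

	SEC("cgroup_skb/egress")
	int dynptr_uninit_example(struct __sk_buff *skb)
	{
		struct bpf_dynptr ptr;	/* not initialized by the program */

		if (bpf_dynptr_from_skb(skb, 0, &ptr))
			return 1;

		/*
		 * ptr is now an initialized skb dynptr and can be passed to
		 * other dynptr kfuncs and helpers.
		 */
		return 1;
	}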

2.2.4 __opt Annotation
----------------------

This annotation is used to indicate that the buffer associated with an __sz or
__szk argument may be NULL. If the function is passed a NULL pointer in place of
the buffer, the verifier will not check that the length is appropriate for the
buffer. The kfunc is responsible for checking if this buffer is NULL before
using it.

An example is given below::

        __bpf_kfunc void *bpf_dynptr_slice(..., void *buffer__opt, u32 buffer__szk)
        {
        ...
        }

Here, the buffer may be NULL. If the buffer is not NULL, it is at least of size
buffer__szk. Either way, the returned buffer is either NULL, or of size
buffer__szk. Without this annotation, the verifier will reject the program if a
NULL pointer is passed in with a nonzero size.
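
For example, a caller might first try a NULL buffer to get a pointer directly
into the underlying data, and fall back to a stack buffer that the data can be
copied into. A minimal sketch for a TC program, assuming the usual BPF headers
and that the dynptr kfuncs are available to this program type:

.. code-block:: c

	SEC("tc")
	int slice_example(struct __sk_buff *skb)
	{
		struct bpf_dynptr ptr;
		struct ethhdr buf = {};
		struct ethhdr *eth;

		if (bpf_dynptr_from_skb(skb, 0, &ptr))
			return 0;

		/* NULL buffer: only succeeds if a direct pointer can be returned. */
		eth = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(buf));
		if (!eth)
			/* Fall back to a stack buffer the data may be copied into. */
			eth = bpf_dynptr_slice(&ptr, 0, &buf, sizeof(buf));
		if (!eth)
			return 0;

		bpf_printk("h_proto: 0x%x", eth->h_proto);
		return 0;
	}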

2.2.5 __str Annotation
----------------------

This annotation is used to indicate that the argument is a constant string.

An example is given below::

        __bpf_kfunc int bpf_get_file_xattr(..., const char *name__str, ...)
        {
        ...
        }

In this case, ``bpf_get_file_xattr()`` can be called as::

        bpf_get_file_xattr(..., "xattr_name", ...);

Or::

        const char name[] = "xattr_name";  /* This needs to be global */
        int BPF_PROG(...)
        {
                ...
                bpf_get_file_xattr(..., name, ...);
                ...
        }

.. _BPF_kfunc_nodef:

2.3 Using an existing kernel function
-------------------------------------

When an existing function in the kernel is fit for consumption by BPF programs,
it can be directly registered with the BPF subsystem. However, care must still
be taken to review the context in which it will be invoked by the BPF program
and whether it is safe to do so.

2.4 Annotating kfuncs
---------------------

In addition to kfuncs' arguments, the verifier may need more information about
the type of kfunc(s) being registered with the BPF subsystem. To do so, we
define flags on a set of kfuncs as follows::

        BTF_SET8_START(bpf_task_set)
        BTF_ID_FLAGS(func, bpf_get_task_pid, KF_ACQUIRE | KF_RET_NULL)
        BTF_ID_FLAGS(func, bpf_put_pid, KF_RELEASE)
        BTF_SET8_END(bpf_task_set)

This set encodes the BTF ID of each kfunc listed above, and encodes the flags
along with it. Of course, it is also allowed to specify no flags.

kfunc definitions should also always be annotated with the ``__bpf_kfunc``
macro. This prevents issues such as the compiler inlining the kfunc if it's a
static kernel function, or the function being elided in an LTO build as it's
not used in the rest of the kernel. Developers should not manually add
annotations to their kfunc to prevent these issues. If an annotation is
required to prevent such an issue with your kfunc, it is a bug and should be
added to the definition of the macro so that other kfuncs are similarly
protected. An example is given below::

        __bpf_kfunc struct task_struct *bpf_get_task_pid(s32 pid)
        {
        ...
        }

2.4.1 KF_ACQUIRE flag
---------------------

The KF_ACQUIRE flag is used to indicate that the kfunc returns a pointer to a
refcounted object. The verifier will then ensure that the pointer to the object
is eventually released using a release kfunc, or transferred to a map using a
referenced kptr (by invoking bpf_kptr_xchg). If not, the verifier fails the
loading of the BPF program until no lingering references remain in all possible
explored states of the program.

2.4.2 KF_RET_NULL flag
----------------------

The KF_RET_NULL flag is used to indicate that the pointer returned by the kfunc
may be NULL. Hence, it forces the user to do a NULL check on the pointer
returned from the kfunc before making use of it (dereferencing or passing to
another helper). This flag is often used in pairing with the KF_ACQUIRE flag,
but both are orthogonal to each other.
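
For example, the following minimal sketch uses the bpf_task_acquire() and
bpf_task_release() kfuncs described in section 4.1 below. bpf_task_acquire() is
registered as both KF_ACQUIRE and KF_RET_NULL, so the verifier insists on a
NULL check before the acquired pointer may be used or released:

.. code-block:: c

	SEC("tp_btf/task_newtask")
	int BPF_PROG(ret_null_example, struct task_struct *task, u64 clone_flags)
	{
		struct task_struct *acquired;

		acquired = bpf_task_acquire(task);
		if (!acquired)
			return 0;

		bpf_task_release(acquired);
		return 0;
	}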

2.4.3 KF_RELEASE flag
---------------------

The KF_RELEASE flag is used to indicate that the kfunc releases the pointer
passed in to it. There can be only one referenced pointer that can be passed
in. All copies of the pointer being released are invalidated as a result of
invoking a kfunc with this flag. KF_RELEASE kfuncs automatically receive the
protection afforded by the KF_TRUSTED_ARGS flag described below.

2.4.4 KF_TRUSTED_ARGS flag
--------------------------

The KF_TRUSTED_ARGS flag is used for kfuncs taking pointer arguments. It
indicates that all pointer arguments are valid, and that all pointers to
BTF objects have been passed in their unmodified form (that is, at a zero
offset, and without having been obtained from walking another pointer, with one
exception described below).

There are two types of pointers to kernel objects which are considered "valid":

1. Pointers which are passed as tracepoint or struct_ops callback arguments.
2. Pointers which were returned from a KF_ACQUIRE kfunc.

Pointers to non-BTF objects (e.g. scalar pointers) may also be passed to
KF_TRUSTED_ARGS kfuncs, and may have a non-zero offset.

The definition of "valid" pointers is subject to change at any time, and has
absolutely no ABI stability guarantees.

As mentioned above, a nested pointer obtained from walking a trusted pointer is
no longer trusted, with one exception. If a struct type has a field that is
guaranteed to be valid (trusted or RCU, as in the KF_RCU description below) as
long as its parent pointer is valid, the following macros can be used to express
that to the verifier:

* ``BTF_TYPE_SAFE_TRUSTED``
* ``BTF_TYPE_SAFE_RCU``
* ``BTF_TYPE_SAFE_RCU_OR_NULL``

For example,

.. code-block:: c

	BTF_TYPE_SAFE_TRUSTED(struct socket) {
		struct sock *sk;
	};

or

.. code-block:: c

	BTF_TYPE_SAFE_RCU(struct task_struct) {
		const cpumask_t *cpus_ptr;
		struct css_set __rcu *cgroups;
		struct task_struct __rcu *real_parent;
		struct task_struct *group_leader;
	};

In other words, you must:

1. Wrap the valid pointer type in a ``BTF_TYPE_SAFE_*`` macro.

2. Specify the type and name of the valid nested field. This field must match
   the field in the original type definition exactly.

A new type declared by a ``BTF_TYPE_SAFE_*`` macro also needs to be emitted so
that it appears in BTF. For example, ``BTF_TYPE_SAFE_TRUSTED(struct socket)``
is emitted in the ``type_is_trusted()`` function as follows:

.. code-block:: c

	BTF_TYPE_EMIT(BTF_TYPE_SAFE_TRUSTED(struct socket));


2.4.5 KF_SLEEPABLE flag
-----------------------

The KF_SLEEPABLE flag is used for kfuncs that may sleep. Such kfuncs can only
be called by sleepable BPF programs (BPF_F_SLEEPABLE).
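
Like any other kfunc flag, it is specified when the kfunc is registered, for
example (a sketch using a hypothetical kfunc name)::

        BTF_ID_FLAGS(func, bpf_read_from_backing_file, KF_SLEEPABLE)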

2.4.6 KF_DESTRUCTIVE flag
-------------------------

The KF_DESTRUCTIVE flag is used to indicate functions whose invocation is
destructive to the system. For example, such a call can result in the system
rebooting or panicking. Due to this, additional restrictions apply to these
calls. At the moment they only require the CAP_SYS_BOOT capability, but more
may be added later.

2.4.7 KF_RCU flag
-----------------

The KF_RCU flag is a weaker version of KF_TRUSTED_ARGS. The kfuncs marked with
KF_RCU expect either PTR_TRUSTED or MEM_RCU arguments. The verifier guarantees
that the objects are valid and there is no use-after-free. The pointers are not
NULL, but the object's refcount could have reached zero. The kfuncs need to
consider doing a refcnt != 0 check, especially when returning a KF_ACQUIRE
pointer. Note as well that a KF_ACQUIRE kfunc that is KF_RCU should very likely
also be KF_RET_NULL.
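
A sketch of how such a kfunc can handle this, modeled on the in-tree
bpf_task_acquire() kfunc (registered as KF_ACQUIRE | KF_RCU | KF_RET_NULL),
which only takes a reference if the refcount has not already dropped to zero::

        __bpf_kfunc struct task_struct *bpf_task_acquire(struct task_struct *p)
        {
                /* p may be MEM_RCU, so its refcount may already be zero. */
                if (refcount_inc_not_zero(&p->rcu_users))
                        return p;
                return NULL;
        }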

.. _KF_deprecated_flag:

2.4.8 KF_DEPRECATED flag
------------------------

The KF_DEPRECATED flag is used for kfuncs which are scheduled to be
changed or removed in a subsequent kernel release. A kfunc that is
marked with KF_DEPRECATED should also have any relevant information
captured in its kernel doc. Such information typically includes the
kfunc's expected remaining lifespan, a recommendation for new
functionality that can replace it if any is available, and possibly a
rationale for why it is being removed.

Note that while on some occasions, a KF_DEPRECATED kfunc may continue to be
supported and have its KF_DEPRECATED flag removed, it is likely to be far more
difficult to remove a KF_DEPRECATED flag after it's been added than it is to
prevent it from being added in the first place. As described in
:ref:`BPF_kfunc_lifecycle_expectations`, users that rely on specific kfuncs are
encouraged to make their use-cases known as early as possible, and participate
in upstream discussions regarding whether to keep, change, deprecate, or remove
those kfuncs if and when such discussions occur.

2.5 Registering the kfuncs
--------------------------

Once the kfunc is prepared for use, the final step to making it visible is
registering it with the BPF subsystem. Registration is done per BPF program
type. An example is shown below::

        BTF_SET8_START(bpf_task_set)
        BTF_ID_FLAGS(func, bpf_get_task_pid, KF_ACQUIRE | KF_RET_NULL)
        BTF_ID_FLAGS(func, bpf_put_pid, KF_RELEASE)
        BTF_SET8_END(bpf_task_set)

        static const struct btf_kfunc_id_set bpf_task_kfunc_set = {
                .owner = THIS_MODULE,
                .set   = &bpf_task_set,
        };

        static int init_subsystem(void)
        {
                return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_task_kfunc_set);
        }
        late_initcall(init_subsystem);

2.6 Specifying no-cast aliases with ___init
-------------------------------------------

The verifier will always enforce that the BTF type of a pointer passed to a
kfunc by a BPF program matches the type of pointer specified in the kfunc
definition. The verifier does, however, allow types that are equivalent
according to the C standard to be passed to the same kfunc arg, even if their
BTF_IDs differ.

For example, for the following type definition:

.. code-block:: c

	struct bpf_cpumask {
		cpumask_t cpumask;
		refcount_t usage;
	};

The verifier would allow a ``struct bpf_cpumask *`` to be passed to a kfunc
taking a ``cpumask_t *`` (``cpumask_t`` being a typedef of ``struct cpumask``).
For instance, both ``struct cpumask *`` and ``struct bpf_cpumask *`` can be
passed to bpf_cpumask_test_cpu().

In some cases, this type-aliasing behavior is not desired. ``struct
nf_conn___init`` is one such example:

.. code-block:: c

	struct nf_conn___init {
		struct nf_conn ct;
	};

The C standard would consider these types to be equivalent, but it would not
always be safe to pass either type to a trusted kfunc. ``struct
nf_conn___init`` represents an allocated ``struct nf_conn`` object that has
*not yet been initialized*, so it would therefore be unsafe to pass a ``struct
nf_conn___init *`` to a kfunc that's expecting a fully initialized ``struct
nf_conn *`` (e.g. ``bpf_ct_change_timeout()``).

In order to accommodate such requirements, the verifier will enforce strict
PTR_TO_BTF_ID type matching if two types have the exact same name, with one
being suffixed with ``___init``.
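
As a sketch of how this convention is used in practice (signatures abridged),
the allocating kfunc hands out the not-yet-initialized type, and only the kfunc
that completes initialization accepts it and returns the plain type::

        struct nf_conn___init *bpf_skb_ct_alloc(...);  /* allocated, not initialized */
        struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i);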

.. _BPF_kfunc_lifecycle_expectations:

3. kfunc lifecycle expectations
===============================

kfuncs provide a kernel <-> kernel API, and thus are not bound by any of the
strict stability restrictions associated with kernel <-> user UAPIs. This means
they can be thought of as similar to EXPORT_SYMBOL_GPL, and can therefore be
modified or removed by a maintainer of the subsystem they're defined in when
it's deemed necessary.

Like any other change to the kernel, maintainers will not change or remove a
kfunc without having a reasonable justification. Whether or not they'll choose
to change a kfunc will ultimately depend on a variety of factors, such as how
widely used the kfunc is, how long the kfunc has been in the kernel, whether an
alternative kfunc exists, what the norm is in terms of stability for the
subsystem in question, and of course what the technical cost is of continuing
to support the kfunc.

There are several implications of this:

a) kfuncs that are widely used or have been in the kernel for a long time will
   be more difficult to justify being changed or removed by a maintainer. In
   other words, kfuncs that are known to have a lot of users and provide
   significant value provide stronger incentives for maintainers to invest the
   time and complexity in supporting them. It is therefore important for
   developers that are using kfuncs in their BPF programs to communicate and
   explain how and why those kfuncs are being used, and to participate in
   discussions regarding those kfuncs when they occur upstream.

b) Unlike regular kernel symbols marked with EXPORT_SYMBOL_GPL, BPF programs
   that call kfuncs are generally not part of the kernel tree. This means that
   refactoring cannot typically change callers in-place when a kfunc changes,
   as is done for e.g. an upstreamed driver being updated in place when a
   kernel symbol is changed.

   Unlike with regular kernel symbols, this is expected behavior for BPF
   symbols, and out-of-tree BPF programs that use kfuncs should be considered
   relevant to discussions and decisions around modifying and removing those
   kfuncs. The BPF community will take an active role in participating in
   upstream discussions when necessary to ensure that the perspectives of such
   users are taken into account.

c) A kfunc will never have any hard stability guarantees. BPF APIs cannot and
   will not ever hard-block a change in the kernel purely for stability
   reasons. That being said, kfuncs are features that are meant to solve
   problems and provide value to users. The decision of whether to change or
   remove a kfunc is a multivariate technical decision that is made on a
   case-by-case basis, and which is informed by data points such as those
   mentioned above. It is expected that a kfunc being removed or changed with
   no warning will not be a common occurrence or take place without sound
   justification, but it is a possibility that must be accepted if one is to
   use kfuncs.


3.1 kfunc deprecation
---------------------

As described above, while sometimes a maintainer may find that a kfunc must be
changed or removed immediately to accommodate some changes in their subsystem,
usually kfuncs will be able to accommodate a longer and more measured
deprecation process. For example, if a new kfunc comes along which provides
superior functionality to an existing kfunc, the existing kfunc may be
deprecated for some period of time to allow users to migrate their BPF programs
to use the new one. Or, if a kfunc has no known users, a decision may be made
to remove the kfunc (without providing an alternative API) after some
deprecation period so as to provide users with a window to notify the kfunc
maintainer if it turns out that the kfunc is actually being used.

It's expected that the common case will be that kfuncs will go through a
deprecation period rather than being changed or removed without warning. As
described in :ref:`KF_deprecated_flag`, the kfunc framework provides the
KF_DEPRECATED flag to kfunc developers to signal to users that a kfunc has been
deprecated. Once a kfunc has been marked with KF_DEPRECATED, the following
procedure is followed for removal:

1. Any relevant information for deprecated kfuncs is documented in the kfunc's
   kernel docs. This documentation will typically include the kfunc's expected
   remaining lifespan, a recommendation for new functionality that can replace
   the usage of the deprecated function (or an explanation as to why no such
   replacement exists), etc.

2. The deprecated kfunc is kept in the kernel for some period of time after it
   was first marked as deprecated. This time period will be chosen on a
   case-by-case basis, and will typically depend on how widespread the use of
   the kfunc is, how long it has been in the kernel, and how hard it is to move
   to alternatives. This deprecation time period is "best effort", and as
   described :ref:`above<BPF_kfunc_lifecycle_expectations>`, circumstances may
   sometimes dictate that the kfunc be removed before the full intended
   deprecation period has elapsed.

3. After the deprecation period the kfunc will be removed. At this point, BPF
   programs calling the kfunc will be rejected by the verifier.


4. Core kfuncs
==============

The BPF subsystem provides a number of "core" kfuncs that are potentially
applicable to a wide variety of different possible use cases and programs.
Those kfuncs are documented here.

4.1 struct task_struct * kfuncs
-------------------------------

There are a number of kfuncs that allow ``struct task_struct *`` objects to be
used as kptrs:

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_task_acquire bpf_task_release

These kfuncs are useful when you want to acquire or release a reference to a
``struct task_struct *`` that was passed as e.g. a tracepoint arg, or a
struct_ops callback arg. For example:

.. code-block:: c

	/**
	 * A trivial example tracepoint program that shows how to
	 * acquire and release a struct task_struct * pointer.
	 */
	SEC("tp_btf/task_newtask")
	int BPF_PROG(task_acquire_release_example, struct task_struct *task, u64 clone_flags)
	{
		struct task_struct *acquired;

		acquired = bpf_task_acquire(task);
		if (acquired)
			/*
			 * In a typical program you'd do something like store
			 * the task in a map, and the map will automatically
			 * release it later. Here, we release it manually.
			 */
			bpf_task_release(acquired);
		return 0;
	}


References acquired on ``struct task_struct *`` objects are RCU protected.
Therefore, when in an RCU read region, you can obtain a pointer to a task
embedded in a map value without having to acquire a reference:

.. code-block:: c

	#define private(name) SEC(".data." #name) __hidden __attribute__((aligned(8)))
	private(TASK) static struct task_struct *global;

	/**
	 * A trivial example showing how to access a task stored
	 * in a map using RCU.
	 */
	SEC("tp_btf/task_newtask")
	int BPF_PROG(task_rcu_read_example, struct task_struct *task, u64 clone_flags)
	{
		struct task_struct *local_copy;

		bpf_rcu_read_lock();
		local_copy = global;
		if (local_copy)
			/*
			 * We could also pass local_copy to kfuncs or helper functions here,
			 * as we're guaranteed that local_copy will be valid until we exit
			 * the RCU read region below.
			 */
			bpf_printk("Global task %s is valid", local_copy->comm);
		else
			bpf_printk("No global task found");
		bpf_rcu_read_unlock();

		/* At this point we can no longer reference local_copy. */

		return 0;
	}

----

A BPF program can also look up a task from a pid. This can be useful if the
caller doesn't have a trusted pointer to a ``struct task_struct *`` object that
it can acquire a reference on with bpf_task_acquire().

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_task_from_pid

Here is an example of it being used:

.. code-block:: c

	SEC("tp_btf/task_newtask")
	int BPF_PROG(task_get_pid_example, struct task_struct *task, u64 clone_flags)
	{
		struct task_struct *lookup;

		lookup = bpf_task_from_pid(task->pid);
		if (!lookup)
			/* A task should always be found, as %task is a tracepoint arg. */
			return -ENOENT;

		if (lookup->pid != task->pid) {
			/* bpf_task_from_pid() looks up the task via its
			 * globally-unique pid from the init_pid_ns. Thus,
			 * the pid of the lookup task should always be the
			 * same as the input task.
			 */
			bpf_task_release(lookup);
			return -EINVAL;
		}

		/* bpf_task_from_pid() returns an acquired reference,
		 * so it must be dropped before returning from the
		 * tracepoint handler.
		 */
		bpf_task_release(lookup);
		return 0;
	}


4.2 struct cgroup * kfuncs
--------------------------

``struct cgroup *`` objects also have acquire and release functions:

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_cgroup_acquire bpf_cgroup_release

These kfuncs are used in exactly the same manner as bpf_task_acquire() and
bpf_task_release() respectively, so we won't provide examples for them.

----

Other kfuncs available for interacting with ``struct cgroup *`` objects are
bpf_cgroup_ancestor() and bpf_cgroup_from_id(), allowing callers to access
the ancestor of a cgroup and find a cgroup by its ID, respectively. Both
return a cgroup kptr.

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_cgroup_ancestor

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_cgroup_from_id

Eventually, BPF should be updated to allow this to happen with a normal memory
load in the program itself. This is currently not possible without more work in
the verifier. bpf_cgroup_ancestor() can be used as follows:

.. code-block:: c

	/**
	 * Simple tracepoint example that illustrates how a cgroup's
	 * ancestor can be accessed using bpf_cgroup_ancestor().
	 */
	SEC("tp_btf/cgroup_mkdir")
	int BPF_PROG(cgrp_ancestor_example, struct cgroup *cgrp, const char *path)
	{
		struct cgroup *parent;

		/* The parent cgroup resides at the level before the current cgroup's level. */
		parent = bpf_cgroup_ancestor(cgrp, cgrp->level - 1);
		if (!parent)
			return -ENOENT;

		bpf_printk("Parent id is %d", parent->self.id);

		/* Return the parent cgroup that was acquired above. */
		bpf_cgroup_release(parent);
		return 0;
	}
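
bpf_cgroup_from_id() can similarly be used to look a cgroup kptr back up from
its ID. A minimal sketch, assuming the cgroup's 64-bit kernfs ID is read via
``cgrp->kn->id``:

.. code-block:: c

	SEC("tp_btf/cgroup_mkdir")
	int BPF_PROG(cgrp_from_id_example, struct cgroup *cgrp, const char *path)
	{
		struct cgroup *lookup;

		/* Look the cgroup back up from its ID. */
		lookup = bpf_cgroup_from_id(cgrp->kn->id);
		if (!lookup)
			return -ENOENT;

		bpf_printk("Found cgroup with id %llu", lookup->kn->id);

		/* bpf_cgroup_from_id() returns an acquired reference. */
		bpf_cgroup_release(lookup);
		return 0;
	}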

4.3 struct cpumask * kfuncs
---------------------------

BPF provides a set of kfuncs that can be used to query, allocate, mutate, and
destroy struct cpumask * objects. Please refer to :ref:`cpumasks-header-label`
for more details.
