.. SPDX-License-Identifier: GPL-2.0

.. _kfuncs-header-label:

=============================
BPF Kernel Functions (kfuncs)
=============================

1. Introduction
===============

BPF Kernel Functions, more commonly known as kfuncs, are functions in the Linux
kernel which are exposed for use by BPF programs. Unlike normal BPF helpers,
kfuncs do not have a stable interface and can change from one kernel release to
another. Hence, BPF programs need to be updated in response to changes in the
kernel. See :ref:`BPF_kfunc_lifecycle_expectations` for more information.

2. Defining a kfunc
===================

There are two ways to expose a kernel function to BPF programs: either make an
existing function in the kernel visible, or add a new wrapper for BPF. In both
cases, care must be taken that BPF programs can only call such functions in a
valid context. To enforce this, the visibility of a kfunc can be restricted per
program type.

If you are not creating a BPF wrapper for an existing kernel function, skip
ahead to :ref:`BPF_kfunc_nodef`.

2.1 Creating a wrapper kfunc
----------------------------

When defining a wrapper kfunc, the wrapper function should have extern linkage.
This prevents the compiler from optimizing away dead code, as this wrapper kfunc
is not invoked anywhere in the kernel itself. It is not necessary to provide a
prototype in a header for the wrapper kfunc.

An example is given below::

        /* Disables missing prototype warnings */
        __diag_push();
        __diag_ignore_all("-Wmissing-prototypes",
                          "Global kfuncs as their definitions will be in BTF");

        __bpf_kfunc struct task_struct *bpf_find_get_task_by_vpid(pid_t nr)
        {
                return find_get_task_by_vpid(nr);
        }

        __diag_pop();

A wrapper kfunc is often needed when we need to annotate parameters of the
kfunc. Otherwise one may directly make the kfunc visible to the BPF program by
registering it with the BPF subsystem.
See :ref:`BPF_kfunc_nodef`.

2.2 Annotating kfunc parameters
-------------------------------

Similar to BPF helpers, there is sometimes a need for additional context
required by the verifier to make the use of kernel functions safer and more
useful. Hence, we can annotate a parameter by suffixing the name of the
argument of the kfunc with a __tag, where tag may be one of the supported
annotations.

2.2.1 __sz Annotation
---------------------

This annotation is used to indicate a memory and size pair in the argument list.
An example is given below::

        __bpf_kfunc void bpf_memzero(void *mem, int mem__sz)
        {
        ...
        }

Here, the verifier will treat the first argument as a PTR_TO_MEM, and the
second argument as its size. By default, without the __sz annotation, the size
of the type of the pointer is used. Without the __sz annotation, a kfunc cannot
accept a void pointer.

2.2.2 __k Annotation
--------------------

This annotation is only understood for scalar arguments. It indicates that the
verifier must check that the scalar argument is a known constant which is not a
size parameter, and that the value of the constant is relevant to the safety of
the program.

An example is given below::

        __bpf_kfunc void *bpf_obj_new(u32 local_type_id__k, ...)
        {
        ...
        }

Here, bpf_obj_new uses the local_type_id argument to find out the size of that
type ID in the program's BTF and returns a sized pointer to it. Each type ID
will have a distinct size; hence, it is crucial to treat each such call as
distinct when the values don't match during verifier state pruning checks.

Hence, whenever a kfunc accepts a constant scalar argument that is not a size
parameter, and the value of the constant matters for program safety, the __k
suffix should be used.
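
To make the constant requirement concrete, here is a small caller-side sketch
(pseudocode only, written against the ``bpf_obj_new()`` declaration above; the
runtime value shown is hypothetical)::

        /* Sketch: a __k argument must be a constant the verifier can
         * resolve at load time.
         */
        bpf_obj_new(42, ...);           /* OK: known constant */

        u32 id = some_runtime_value();  /* hypothetical runtime value */
        bpf_obj_new(id, ...);           /* rejected: not a known constant */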

2.2.3 __uninit Annotation
-------------------------

This annotation is used to indicate that the argument will be treated as
uninitialized.

An example is given below::

        __bpf_kfunc int bpf_dynptr_from_skb(..., struct bpf_dynptr_kern *ptr__uninit)
        {
        ...
        }

Here, the dynptr will be treated as an uninitialized dynptr. Without this
annotation, the verifier will reject the program if the dynptr passed in is
not initialized.

.. _BPF_kfunc_nodef:

2.3 Using an existing kernel function
-------------------------------------

When an existing function in the kernel is fit for consumption by BPF programs,
it can be directly registered with the BPF subsystem. However, care must still
be taken to review the context in which it will be invoked by the BPF program
and whether it is safe to do so.

2.4 Annotating kfuncs
---------------------

In addition to kfuncs' arguments, the verifier may need more information about
the type of kfunc(s) being registered with the BPF subsystem. To do so, we
define flags on a set of kfuncs as follows::

        BTF_SET8_START(bpf_task_set)
        BTF_ID_FLAGS(func, bpf_get_task_pid, KF_ACQUIRE | KF_RET_NULL)
        BTF_ID_FLAGS(func, bpf_put_pid, KF_RELEASE)
        BTF_SET8_END(bpf_task_set)

This set encodes the BTF ID of each kfunc listed above, and encodes the flags
along with it. Of course, it is also allowed to specify no flags.

kfunc definitions should also always be annotated with the ``__bpf_kfunc``
macro. This prevents issues such as the compiler inlining the kfunc if it's a
static kernel function, or the function being elided in an LTO build as it's
not used in the rest of the kernel. Developers should not manually add
annotations to their kfunc to prevent these issues.
If an annotation is
required to prevent such an issue with your kfunc, it is a bug and should be
added to the definition of the macro so that other kfuncs are similarly
protected. An example is given below::

        __bpf_kfunc struct task_struct *bpf_get_task_pid(s32 pid)
        {
        ...
        }

2.4.1 KF_ACQUIRE flag
---------------------

The KF_ACQUIRE flag is used to indicate that the kfunc returns a pointer to a
refcounted object. The verifier will then ensure that the pointer to the object
is eventually released using a release kfunc, or transferred to a map using a
referenced kptr (by invoking bpf_kptr_xchg). If not, the verifier fails the
loading of the BPF program until no lingering references remain in all possible
explored states of the program.

2.4.2 KF_RET_NULL flag
----------------------

The KF_RET_NULL flag is used to indicate that the pointer returned by the kfunc
may be NULL. Hence, it forces the user to do a NULL check on the pointer
returned from the kfunc before making use of it (dereferencing or passing it to
another helper). This flag is often paired with the KF_ACQUIRE flag, but the
two are orthogonal to each other.

2.4.3 KF_RELEASE flag
---------------------

The KF_RELEASE flag is used to indicate that the kfunc releases the pointer
passed in to it. Only one referenced pointer can be passed in. All copies of
the pointer being released are invalidated as a result of invoking a kfunc
with this flag.

2.4.4 KF_KPTR_GET flag
----------------------

The KF_KPTR_GET flag is used to indicate that the kfunc takes the first
argument as a pointer to a kptr, safely increments the refcount of the object
it points to, and returns a reference to the user. The rest of the arguments
may be normal arguments of a kfunc. The KF_KPTR_GET flag should be used in
conjunction with the KF_ACQUIRE and KF_RET_NULL flags.
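
To tie the acquire and release semantics together, below is a minimal sketch of
a matched kfunc pair. The function names are hypothetical and error handling is
elided; the flags named in the comments are what such a pair would typically be
registered with:

.. code-block:: c

        /* Sketch of a matched acquire/release pair (hypothetical names),
         * registered as KF_ACQUIRE | KF_RET_NULL and KF_RELEASE
         * respectively.
         */
        __bpf_kfunc struct task_struct *bpf_task_acquire_sketch(struct task_struct *p)
        {
                refcount_inc(&p->usage);        /* take a reference */
                return p;
        }

        __bpf_kfunc void bpf_task_release_sketch(struct task_struct *p)
        {
                put_task_struct(p);             /* drop the reference */
        }

With this pairing, the verifier tracks the pointer returned by the acquire
kfunc and refuses to load any program path that does not eventually pass it to
the release kfunc or store it in a map as a referenced kptr.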

2.4.5 KF_TRUSTED_ARGS flag
--------------------------

The KF_TRUSTED_ARGS flag is used for kfuncs taking pointer arguments. It
indicates that all pointer arguments are valid, and that all pointers to
BTF objects have been passed in their unmodified form (that is, at a zero
offset, and without having been obtained from walking another pointer, with one
exception described below).

There are two types of pointers to kernel objects which are considered "valid":

1. Pointers which are passed as tracepoint or struct_ops callback arguments.
2. Pointers which were returned from a KF_ACQUIRE or KF_KPTR_GET kfunc.

Pointers to non-BTF objects (e.g. scalar pointers) may also be passed to
KF_TRUSTED_ARGS kfuncs, and may have a non-zero offset.

The definition of "valid" pointers is subject to change at any time, and has
absolutely no ABI stability guarantees.

As mentioned above, a nested pointer obtained from walking a trusted pointer is
no longer trusted, with one exception. If a struct type has a field that is
guaranteed to be valid as long as its parent pointer is trusted, the
``BTF_TYPE_SAFE_NESTED`` macro can be used to express that to the verifier as
follows:

.. code-block:: c

        BTF_TYPE_SAFE_NESTED(struct task_struct) {
                const cpumask_t *cpus_ptr;
        };

In other words, you must:

1. Wrap the trusted pointer type in the ``BTF_TYPE_SAFE_NESTED`` macro.

2. Specify the type and name of the trusted nested field. This field must match
   the field in the original type definition exactly.

2.4.6 KF_SLEEPABLE flag
-----------------------

The KF_SLEEPABLE flag is used for kfuncs that may sleep. Such kfuncs can only
be called by sleepable BPF programs (BPF_F_SLEEPABLE).

2.4.7 KF_DESTRUCTIVE flag
--------------------------

The KF_DESTRUCTIVE flag is used to indicate functions whose invocation is
destructive to the system.
For example, such a call can result in the system
rebooting or panicking. Due to this, additional restrictions apply to these
calls. At the moment they only require the CAP_SYS_BOOT capability, but more
may be added later.

2.4.8 KF_RCU flag
-----------------

The KF_RCU flag is a weaker version of KF_TRUSTED_ARGS. The kfuncs marked with
KF_RCU expect either PTR_TRUSTED or MEM_RCU arguments. The verifier guarantees
that the objects are valid and there is no use-after-free. The pointers are not
NULL, but an object's refcount could have reached zero. Such kfuncs should
consider doing a refcnt != 0 check, especially when returning a KF_ACQUIRE
pointer. Note as well that a KF_ACQUIRE kfunc that is KF_RCU should very likely
also be KF_RET_NULL.

.. _KF_deprecated_flag:

2.4.9 KF_DEPRECATED flag
------------------------

The KF_DEPRECATED flag is used for kfuncs which are scheduled to be
changed or removed in a subsequent kernel release. A kfunc that is
marked with KF_DEPRECATED should also have any relevant information
captured in its kernel doc. Such information typically includes the
kfunc's expected remaining lifespan, a recommendation for new
functionality that can replace it if any is available, and possibly a
rationale for why it is being removed.

Note that while on some occasions a KF_DEPRECATED kfunc may continue to be
supported and have its KF_DEPRECATED flag removed, it is likely to be far more
difficult to remove a KF_DEPRECATED flag after it's been added than it is to
prevent it from being added in the first place. As described in
:ref:`BPF_kfunc_lifecycle_expectations`, users that rely on specific kfuncs are
encouraged to make their use-cases known as early as possible, and to
participate in upstream discussions regarding whether to keep, change,
deprecate, or remove those kfuncs if and when such discussions occur.
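
The refcount caveat in the KF_RCU description above can be made concrete with a
sketch. The kfunc name is hypothetical; the conditional-increment pattern
mirrors how RCU-protected references are typically taken in the kernel:

.. code-block:: c

        /* Sketch of a KF_RCU | KF_ACQUIRE | KF_RET_NULL kfunc (hypothetical
         * name): the RCU-protected object is valid, but its refcount may
         * already have dropped to zero, so the reference must be taken
         * conditionally.
         */
        __bpf_kfunc struct task_struct *bpf_task_acquire_rcu_sketch(struct task_struct *p)
        {
                if (!refcount_inc_not_zero(&p->rcu_users))
                        return NULL;    /* refcount was zero; caller sees NULL */
                return p;
        }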

2.5 Registering the kfuncs
--------------------------

Once the kfunc is prepared for use, the final step to making it visible is
registering it with the BPF subsystem. Registration is done per BPF program
type. An example is shown below::

        BTF_SET8_START(bpf_task_set)
        BTF_ID_FLAGS(func, bpf_get_task_pid, KF_ACQUIRE | KF_RET_NULL)
        BTF_ID_FLAGS(func, bpf_put_pid, KF_RELEASE)
        BTF_SET8_END(bpf_task_set)

        static const struct btf_kfunc_id_set bpf_task_kfunc_set = {
                .owner = THIS_MODULE,
                .set   = &bpf_task_set,
        };

        static int init_subsystem(void)
        {
                return register_btf_kfunc_id_set(BPF_PROG_TYPE_TRACING, &bpf_task_kfunc_set);
        }
        late_initcall(init_subsystem);

2.6 Specifying no-cast aliases with ___init
--------------------------------------------

The verifier will always enforce that the BTF type of a pointer passed to a
kfunc by a BPF program matches the type of pointer specified in the kfunc
definition. The verifier does, however, allow types that are equivalent
according to the C standard to be passed to the same kfunc arg, even if their
BTF_IDs differ.

For example, for the following type definition:

.. code-block:: c

        struct bpf_cpumask {
                cpumask_t cpumask;
                refcount_t usage;
        };

The verifier would allow a ``struct bpf_cpumask *`` to be passed to a kfunc
taking a ``cpumask_t *`` (``cpumask_t`` being a typedef of ``struct cpumask``).
For instance, both ``struct cpumask *`` and ``struct bpf_cpumask *`` can be
passed to bpf_cpumask_test_cpu().

In some cases, this type-aliasing behavior is not desired. ``struct
nf_conn___init`` is one such example:

.. code-block:: c

        struct nf_conn___init {
                struct nf_conn ct;
        };

The C standard would consider these types to be equivalent, but it would not
always be safe to pass either type to a trusted kfunc.
``struct
nf_conn___init`` represents an allocated ``struct nf_conn`` object that has
*not yet been initialized*, so it would therefore be unsafe to pass a ``struct
nf_conn___init *`` to a kfunc that's expecting a fully initialized ``struct
nf_conn *`` (e.g. ``bpf_ct_change_timeout()``).

In order to accommodate such requirements, the verifier will enforce strict
PTR_TO_BTF_ID type matching if two types have the exact same name, with one
being suffixed with ``___init``.

.. _BPF_kfunc_lifecycle_expectations:

3. kfunc lifecycle expectations
===============================

kfuncs provide a kernel <-> kernel API, and thus are not bound by any of the
strict stability restrictions associated with kernel <-> user UAPIs. This means
they can be thought of as similar to EXPORT_SYMBOL_GPL, and can therefore be
modified or removed by a maintainer of the subsystem they're defined in when
it's deemed necessary.

Like any other change to the kernel, maintainers will not change or remove a
kfunc without having a reasonable justification. Whether or not they'll choose
to change a kfunc will ultimately depend on a variety of factors, such as how
widely used the kfunc is, how long the kfunc has been in the kernel, whether an
alternative kfunc exists, what the norm is in terms of stability for the
subsystem in question, and of course what the technical cost is of continuing
to support the kfunc.

There are several implications of this:

a) kfuncs that are widely used or have been in the kernel for a long time will
   be more difficult to justify being changed or removed by a maintainer. In
   other words, kfuncs that are known to have a lot of users and provide
   significant value provide stronger incentives for maintainers to invest the
   time and complexity in supporting them.
   It is therefore important for
   developers that are using kfuncs in their BPF programs to communicate and
   explain how and why those kfuncs are being used, and to participate in
   discussions regarding those kfuncs when they occur upstream.

b) Unlike regular kernel symbols marked with EXPORT_SYMBOL_GPL, BPF programs
   that call kfuncs are generally not part of the kernel tree. This means that
   refactoring cannot typically change callers in-place when a kfunc changes,
   as is done for e.g. an upstreamed driver being updated in place when a
   kernel symbol is changed.

   Unlike with regular kernel symbols, this is expected behavior for BPF
   symbols, and out-of-tree BPF programs that use kfuncs should be considered
   relevant to discussions and decisions around modifying and removing those
   kfuncs. The BPF community will take an active role in participating in
   upstream discussions when necessary to ensure that the perspectives of such
   users are taken into account.

c) A kfunc will never have any hard stability guarantees. BPF APIs cannot and
   will not ever hard-block a change in the kernel purely for stability
   reasons. That being said, kfuncs are features that are meant to solve
   problems and provide value to users. The decision of whether to change or
   remove a kfunc is a multivariate technical decision that is made on a
   case-by-case basis, and which is informed by data points such as those
   mentioned above. It is expected that a kfunc being removed or changed with
   no warning will not be a common occurrence or take place without sound
   justification, but it is a possibility that must be accepted if one is to
   use kfuncs.

3.1 kfunc deprecation
---------------------

As described above, while sometimes a maintainer may find that a kfunc must be
changed or removed immediately to accommodate some changes in their subsystem,
usually kfuncs will be able to accommodate a longer and more measured
deprecation process. For example, if a new kfunc comes along which provides
superior functionality to an existing kfunc, the existing kfunc may be
deprecated for some period of time to allow users to migrate their BPF programs
to use the new one. Or, if a kfunc has no known users, a decision may be made
to remove the kfunc (without providing an alternative API) after some
deprecation period so as to provide users with a window to notify the kfunc
maintainer if it turns out that the kfunc is actually being used.

It's expected that the common case will be that kfuncs will go through a
deprecation period rather than being changed or removed without warning. As
described in :ref:`KF_deprecated_flag`, the kfunc framework provides the
KF_DEPRECATED flag to kfunc developers to signal to users that a kfunc has been
deprecated. Once a kfunc has been marked with KF_DEPRECATED, the following
procedure is followed for removal:

1. Any relevant information for deprecated kfuncs is documented in the kfunc's
   kernel docs. This documentation will typically include the kfunc's expected
   remaining lifespan, a recommendation for new functionality that can replace
   the usage of the deprecated function (or an explanation as to why no such
   replacement exists), etc.

2. The deprecated kfunc is kept in the kernel for some period of time after it
   was first marked as deprecated. This time period will be chosen on a
   case-by-case basis, and will typically depend on how widespread the use of
   the kfunc is, how long it has been in the kernel, and how hard it is to move
   to alternatives.
   This deprecation time period is "best effort", and as
   described :ref:`above<BPF_kfunc_lifecycle_expectations>`, circumstances may
   sometimes dictate that the kfunc be removed before the full intended
   deprecation period has elapsed.

3. After the deprecation period the kfunc will be removed. At this point, BPF
   programs calling the kfunc will be rejected by the verifier.

4. Core kfuncs
==============

The BPF subsystem provides a number of "core" kfuncs that are potentially
applicable to a wide variety of different possible use cases and programs.
Those kfuncs are documented here.

4.1 struct task_struct * kfuncs
-------------------------------

There are a number of kfuncs that allow ``struct task_struct *`` objects to be
used as kptrs:

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_task_acquire bpf_task_release

These kfuncs are useful when you want to acquire or release a reference to a
``struct task_struct *`` that was passed as e.g. a tracepoint arg, or a
struct_ops callback arg. For example:

.. code-block:: c

        /**
         * A trivial example tracepoint program that shows how to
         * acquire and release a struct task_struct * pointer.
         */
        SEC("tp_btf/task_newtask")
        int BPF_PROG(task_acquire_release_example, struct task_struct *task, u64 clone_flags)
        {
                struct task_struct *acquired;

                acquired = bpf_task_acquire(task);

                /*
                 * In a typical program you'd do something like store
                 * the task in a map, and the map will automatically
                 * release it later. Here, we release it manually.
                 */
                bpf_task_release(acquired);
                return 0;
        }

----

A BPF program can also look up a task from a pid. This can be useful if the
caller doesn't have a trusted pointer to a ``struct task_struct *`` object that
it can acquire a reference on with bpf_task_acquire().

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_task_from_pid

Here is an example of it being used:

.. code-block:: c

        SEC("tp_btf/task_newtask")
        int BPF_PROG(task_get_pid_example, struct task_struct *task, u64 clone_flags)
        {
                struct task_struct *lookup;

                lookup = bpf_task_from_pid(task->pid);
                if (!lookup)
                        /* A task should always be found, as %task is a tracepoint arg. */
                        return -ENOENT;

                if (lookup->pid != task->pid) {
                        /* bpf_task_from_pid() looks up the task via its
                         * globally-unique pid from the init_pid_ns. Thus,
                         * the pid of the lookup task should always be the
                         * same as the input task.
                         */
                        bpf_task_release(lookup);
                        return -EINVAL;
                }

                /* bpf_task_from_pid() returns an acquired reference,
                 * so it must be dropped before returning from the
                 * tracepoint handler.
                 */
                bpf_task_release(lookup);
                return 0;
        }

4.2 struct cgroup * kfuncs
--------------------------

``struct cgroup *`` objects also have acquire and release functions:

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_cgroup_acquire bpf_cgroup_release

These kfuncs are used in exactly the same manner as bpf_task_acquire() and
bpf_task_release() respectively, so we won't provide examples for them.

----

You may also acquire a reference to a ``struct cgroup`` kptr that's already
stored in a map using bpf_cgroup_kptr_get():

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_cgroup_kptr_get

Here's an example of how it can be used:

.. code-block:: c

        /* struct containing the struct cgroup kptr which is actually stored in the map. */
        struct __cgroups_kfunc_map_value {
                struct cgroup __kptr * cgroup;
        };

        /* The map containing struct __cgroups_kfunc_map_value entries. */
        struct {
                __uint(type, BPF_MAP_TYPE_HASH);
                __type(key, int);
                __type(value, struct __cgroups_kfunc_map_value);
                __uint(max_entries, 1);
        } __cgroups_kfunc_map SEC(".maps");

        /* ... */

        /**
         * A simple example tracepoint program showing how a
         * struct cgroup kptr that is stored in a map can
         * be acquired using the bpf_cgroup_kptr_get() kfunc.
         */
        SEC("tp_btf/cgroup_mkdir")
        int BPF_PROG(cgroup_kptr_get_example, struct cgroup *cgrp, const char *path)
        {
                struct cgroup *kptr;
                struct __cgroups_kfunc_map_value *v;
                s32 id = cgrp->self.id;

                /* Assume a cgroup kptr was previously stored in the map. */
                v = bpf_map_lookup_elem(&__cgroups_kfunc_map, &id);
                if (!v)
                        return -ENOENT;

                /* Acquire a reference to the cgroup kptr that's already stored in the map. */
                kptr = bpf_cgroup_kptr_get(&v->cgroup);
                if (!kptr)
                        /* If no cgroup was present in the map, it's because
                         * we're racing with another CPU that removed it with
                         * bpf_kptr_xchg() between the bpf_map_lookup_elem()
                         * above, and our call to bpf_cgroup_kptr_get().
                         * bpf_cgroup_kptr_get() internally safely handles this
                         * race, and will return NULL if the cgroup is no longer
                         * present in the map by the time we invoke the kfunc.
                         */
                        return -EBUSY;

                /* Free the reference we just took above. Note that the
                 * original struct cgroup kptr is still in the map. It will
                 * be freed either at a later time if another context deletes
                 * it from the map, or automatically by the BPF subsystem if
                 * it's still present when the map is destroyed.
                 */
                bpf_cgroup_release(kptr);

                return 0;
        }

----

Other kfuncs available for interacting with ``struct cgroup *`` objects are
bpf_cgroup_ancestor() and bpf_cgroup_from_id(), allowing callers to access
the ancestor of a cgroup and find a cgroup by its ID, respectively. Both
return a cgroup kptr.

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_cgroup_ancestor

.. kernel-doc:: kernel/bpf/helpers.c
   :identifiers: bpf_cgroup_from_id

Eventually, BPF should be updated to allow this to happen with a normal memory
load in the program itself. This is currently not possible without more work in
the verifier. bpf_cgroup_ancestor() can be used as follows:

.. code-block:: c

        /**
         * Simple tracepoint example that illustrates how a cgroup's
         * ancestor can be accessed using bpf_cgroup_ancestor().
         */
        SEC("tp_btf/cgroup_mkdir")
        int BPF_PROG(cgrp_ancestor_example, struct cgroup *cgrp, const char *path)
        {
                struct cgroup *parent;

                /* The parent cgroup resides at the level before the current cgroup's level. */
                parent = bpf_cgroup_ancestor(cgrp, cgrp->level - 1);
                if (!parent)
                        return -ENOENT;

                bpf_printk("Parent id is %d", parent->self.id);

                /* Release the parent cgroup that was acquired above. */
                bpf_cgroup_release(parent);
                return 0;
        }

4.3 struct cpumask * kfuncs
---------------------------

BPF provides a set of kfuncs that can be used to query, allocate, mutate, and
destroy struct cpumask * objects. Please refer to :ref:`cpumasks-header-label`
for more details.
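
As a brief illustration only (a sketch; the cpumasks document referenced above
is the authoritative source for these kfuncs and their exact signatures), a
program might allocate, populate, query, and release a cpumask like so:

.. code-block:: c

        SEC("tp_btf/task_newtask")
        int BPF_PROG(cpumask_example_sketch, struct task_struct *task, u64 clone_flags)
        {
                struct bpf_cpumask *mask;

                /* Allocate a cpumask; an acquire-style kfunc that may return NULL. */
                mask = bpf_cpumask_create();
                if (!mask)
                        return -ENOMEM;

                bpf_cpumask_set_cpu(0, mask);
                if (bpf_cpumask_test_cpu(0, (const struct cpumask *)mask))
                        bpf_printk("CPU 0 is set");

                /* The acquired mask must be released before returning. */
                bpf_cpumask_release(mask);
                return 0;
        }

Note that passing a ``struct bpf_cpumask *`` where a ``struct cpumask *`` is
expected relies on the type-aliasing behavior described in section 2.6.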