History log of /linux/tools/testing/selftests/bpf/progs/struct_ops_assoc.c (Results 1 – 5 of 5)
Revision	Date	Author	Comments
# f17b474e 10-Feb-2026 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'bpf-next-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Pull bpf updates from Alexei Starovoitov:

- Support associating BPF program with struct_ops (Amery Hung)

- Switch BPF local storage to rqspinlock and remove recursion detection
counters which were causing false positives (Amery Hung)

- Fix live registers marking for indirect jumps (Anton Protopopov)

- Introduce execution context detection BPF helpers (Changwoo Min)

- Improve verifier precision for 32-bit sign extension pattern
(Cupertino Miranda)

- Optimize BTF type lookup by sorting vmlinux BTF and doing binary
search (Donglin Peng)

- Allow states pruning for misc/invalid slots in iterator loops (Eduard
Zingerman)

- In preparation for ASAN support in BPF arenas, teach libbpf to move
global BPF variables to the end of the region and enable arena kfuncs
while holding locks (Emil Tsalapatis)

- Introduce support for implicit arguments in kfuncs and migrate a
number of them to new API. This is a prerequisite for cgroup
sub-schedulers in sched-ext (Ihor Solodrai)

- Fix incorrect copied_seq calculation in sockmap (Jiayuan Chen)

- Fix ORC stack unwind from kprobe_multi (Jiri Olsa)

- Speed up fentry attach by using a single ftrace direct ops in BPF
trampolines (Jiri Olsa)

- Require frozen map for calculating map hash (KP Singh)

- Fix lock entry creation in TAS fallback in rqspinlock (Kumar
Kartikeya Dwivedi)

- Allow user space to select cpu in lookup/update operations on per-cpu
array and hash maps (Leon Hwang)

- Make kfuncs return trusted pointers by default (Matt Bobrowski)

- Introduce "fsession" support where a single BPF program is executed
upon entry and exit from a traced kernel function (Menglong Dong)

- Allow bpf_timer and bpf_wq use in all program types (Mykyta
Yatsenko, Andrii Nakryiko, Kumar Kartikeya Dwivedi, Alexei
Starovoitov)

- Make KF_TRUSTED_ARGS the default for all kfuncs and clean up their
definition across the tree (Puranjay Mohan)

- Allow BPF arena calls from non-sleepable context (Puranjay Mohan)

- Improve register id comparison logic in the verifier and extend
linked registers with negative offsets (Puranjay Mohan)

- In preparation for BPF-OOM introduce kfuncs to access memcg events
(Roman Gushchin)

- Use CFI compatible destructor kfunc type (Sami Tolvanen)

- Add bitwise tracking for BPF_END in the verifier (Tianci Cao)

- Add range tracking for BPF_DIV and BPF_MOD in the verifier (Yazhou
Tang)

- Make BPF selftests work with 64k page size (Yonghong Song)

* tag 'bpf-next-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (268 commits)
selftests/bpf: Fix outdated test on storage->smap
selftests/bpf: Choose another percpu variable in bpf for btf_dump test
selftests/bpf: Remove test_task_storage_map_stress_lookup
selftests/bpf: Update task_local_storage/task_storage_nodeadlock test
selftests/bpf: Update task_local_storage/recursion test
selftests/bpf: Update sk_storage_omem_uncharge test
bpf: Switch to bpf_selem_unlink_nofail in bpf_local_storage_{map_free, destroy}
bpf: Support lockless unlink when freeing map or local storage
bpf: Prepare for bpf_selem_unlink_nofail()
bpf: Remove unused percpu counter from bpf_local_storage_map_free
bpf: Remove cgroup local storage percpu counter
bpf: Remove task local storage percpu counter
bpf: Change local_storage->lock and b->lock to rqspinlock
bpf: Convert bpf_selem_unlink to failable
bpf: Convert bpf_selem_link_map to failable
bpf: Convert bpf_selem_unlink_map to failable
bpf: Select bpf_local_storage_map_bucket based on bpf_local_storage
selftests/xsk: fix number of Tx frags in invalid packet
selftests/xsk: properly handle batch ending in the middle of a packet
bpf: Prevent reentrance into call_rcu_tasks_trace()
...



Revision tags: v6.19, v6.19-rc8, v6.19-rc7
# b236134f 21-Jan-2026 Alexei Starovoitov <ast@kernel.org>

Merge branch 'bpf-kernel-functions-with-kf_implicit_args'

Ihor Solodrai says:

====================
bpf: Kernel functions with KF_IMPLICIT_ARGS

This series implements a generic "implicit arguments" feature for BPF
kernel functions. For context see prior work [1][2].

A mechanism is created for kfuncs to have arguments that are not
visible to the BPF programs, and are provided to the kernel function
implementation by the verifier.

This mechanism is then used in the kfuncs that have a parameter with
__prog annotation [3], which is the current way of passing struct
bpf_prog_aux pointer to kfuncs.

A function with implicit arguments is defined by adding the
KF_IMPLICIT_ARGS flag to its BTF_ID_FLAGS entry. In this series, only a
pointer to struct bpf_prog_aux can be implicit, although it is simple
to extend this to more types.

The verifier handles a kfunc with KF_IMPLICIT_ARGS by resolving it to
a different (actual) BTF prototype early in verification (patch #3).

The <kfunc>_impl function generated in BTF for a kfunc with implicit
args has neither a "bpf_kfunc" decl tag nor a kernel address. The
verifier will reject a program trying to call such an _impl kfunc.

Using <kfunc>_impl functions in BPF is only allowed for kfuncs with an
explicit kernel (or kmodule) declaration, i.e. in "legacy" cases. As of
this series there are no legacy kernel functions, as all __prog users
are migrated to KF_IMPLICIT_ARGS. However, the implementation allows
for supporting legacy cases in principle.

The series removes the following BPF kernel functions:
- bpf_stream_vprintk_impl
- bpf_task_work_schedule_resume_impl
- bpf_task_work_schedule_signal_impl
- bpf_wq_set_callback_impl

This will break existing BPF programs calling these functions (the
verifier will not load them) on new kernels.

To mitigate, BPF users are advised to use the following pattern [4]:

if (xxx_impl)
	xxx_impl(..., NULL);
else
	xxx(...);

Which can be wrapped in a macro.

The series consists of the following patches:
- patches #1 and #2 are non-functional refactoring in kernel/bpf
- patch #3 defines KF_IMPLICIT_ARGS flag and teaches the verifier
about it
- patches #4-#5 implement btf2btf transformation in resolve_btfids
- patch #6 adds selftests specific to KF_IMPLICIT_ARGS feature
- patches #7-#11 migrate the current users of __prog argument to
KF_IMPLICIT_ARGS
- patch #12 removes __prog arg suffix support from the kernel
- patch #13 updates the docs

[1] https://lore.kernel.org/bpf/20251029190113.3323406-1-ihor.solodrai@linux.dev/
[2] https://lore.kernel.org/bpf/20250924211716.1287715-1-ihor.solodrai@linux.dev/
[3] https://docs.kernel.org/bpf/kfuncs.html#prog-annotation
[4] https://lore.kernel.org/bpf/CAEf4BzbgPfRm9BX=TsZm-TsHFAHcwhPY4vTt=9OT-uhWqf8tqw@mail.gmail.com/
---

v2->v3:
- resolve_btfids: Use dynamic reallocation for btf2btf_context arrays (Andrii)
- resolve_btfids: Add missing free() for btf2btf_context arrays (AI)
- Other nits in resolve_btfids (Andrii, Eduard)

v2: https://lore.kernel.org/bpf/20260116201700.864797-1-ihor.solodrai@linux.dev/

v1->v2:
- Replace the following kernel functions with KF_IMPLICIT_ARGS version:
- bpf_stream_vprintk_impl -> bpf_stream_vprintk
- bpf_task_work_schedule_resume_impl -> bpf_task_work_schedule_resume
- bpf_task_work_schedule_signal_impl -> bpf_task_work_schedule_signal
- bpf_wq_set_callback_impl -> bpf_wq_set_callback
- Remove __prog arg suffix support from the verifier
- Rework btf2btf implementation in resolve_btfids
- Do distill base and sort before BTF_ids patching
- Collect kfuncs based on BTF decl tags, before BTF_ids are patched
- resolve_btfids: use dynamic memory for intermediate data (Andrii)
- verifier: reset .subreg_def for caller saved registers on kfunc
call (Eduard)
- selftests/hid: remove Makefile changes (Benjamin)
- selftests/bpf: Add a patch (#11) migrating struct_ops_assoc test
to KF_IMPLICIT_ARGS
- Various nits across the series (Alexei, Andrii, Eduard)

v1: https://lore.kernel.org/bpf/20260109184852.1089786-1-ihor.solodrai@linux.dev/

---
====================

Link: https://patch.msgid.link/20260120222638.3976562-1-ihor.solodrai@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>



# bd06b977 20-Jan-2026 Ihor Solodrai <ihor.solodrai@linux.dev>

selftests/bpf: Migrate struct_ops_assoc test to KF_IMPLICIT_ARGS

A test kfunc named bpf_kfunc_multi_st_ops_test_1_impl() is a user of
__prog suffix. Subsequent patch removes __prog support in favor of
KF_IMPLICIT_ARGS, so migrate this kfunc to use implicit argument.

Signed-off-by: Ihor Solodrai <ihor.solodrai@linux.dev>
Link: https://lore.kernel.org/r/20260120222638.3976562-12-ihor.solodrai@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>



Revision tags: v6.19-rc6, v6.19-rc5, v6.19-rc4, v6.19-rc3, v6.19-rc2, v6.19-rc1
# 5d9fb42f 06-Dec-2025 Andrii Nakryiko <andrii@kernel.org>

Merge branch 'support-associating-bpf-programs-with-struct_ops'

Amery Hung says:

====================
Support associating BPF programs with struct_ops

Hi,

This patchset adds a new BPF command BPF_PROG_ASSOC_STRUCT_OPS to
the bpf() syscall to allow associating a BPF program with a struct_ops.
The command is introduced to address an emerging need from struct_ops
users. As the number of subsystems adopting struct_ops grows, more
users are building their struct_ops-based solution with some help from
other BPF programs. For example, scx_layer uses a syscall program as
a user space trigger to refresh layers [0]. It also uses a tracing
program to infer whether a task is using the GPU and needs to be
prioritized [1]. In these use cases, when there are multiple struct_ops
instances, struct_ops kfuncs called from different BPF programs,
whether struct_ops programs or not, need to be able to refer to a
specific instance, which currently is not possible.

The new BPF command will allow users to explicitly associate a BPF
program with a struct_ops map. The libbpf wrapper can be called after
loading programs and before attaching programs and struct_ops.

Internally, it will set prog->aux->st_ops_assoc to the struct_ops
map. struct_ops kfuncs can then get the associated struct_ops struct
by calling bpf_prog_get_assoc_struct_ops() with prog->aux, which can
be acquired from a "__prog" argument. The value of the special
argument will be fixed up by the verifier during verification.

The command conceptually associates the implementation of BPF programs
with a struct_ops map, not the attachment. A program associated with
the map will take a refcount on it so that st_ops_assoc always points
to a valid struct_ops struct. struct_ops implementers can use the
helper, bpf_prog_get_assoc_struct_ops(), to get the pointer. The
returned struct_ops, if not NULL, is guaranteed to be valid and
initialized. However, it is not guaranteed that the struct_ops is
attached. The struct_ops implementer still needs to take steps to
track and check the state of the struct_ops in kdata, if the use case
demands the struct_ops to be attached.

We can also consider supporting association of a struct_ops link with
BPF programs, which on one hand would make the struct_ops implementer's
job easier, but might complicate the libbpf workflow and does not apply
to legacy struct_ops attachment.

[0] https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/bpf/main.bpf.c#L557
[1] https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/bpf/main.bpf.c#L754
---
v7 -> v8
- Fix libbpf return (Andrii)
- Follow kfunc _impl suffix naming convention in selftest (Alexei)
Link: https://lore.kernel.org/bpf/20251121231352.4032020-1-ameryhung@gmail.com/

v6 -> v7
- Drop the guarantee that bpf_prog_get_assoc_struct_ops() will always return
an initialized struct_ops (Martin)
- Minor misc. changes in selftests
Link: https://lore.kernel.org/bpf/20251114221741.317631-1-ameryhung@gmail.com/

v5 -> v6
- Drop refcnt bumping for async callbacks and add RCU annotation (Martin)
- Fix libbpf bug and update comments (Andrii)
- Fix refcount bug in bpf_prog_assoc_struct_ops() (AI)
Link: https://lore.kernel.org/bpf/20251104172652.1746988-1-ameryhung@gmail.com/

v4 -> v5
- Simplify the API for getting associated struct_ops and don't
expose struct_ops map lifecycle management (Andrii, Alexei)
Link: https://lore.kernel.org/bpf/20251024212914.1474337-1-ameryhung@gmail.com/

v3 -> v4
- Fix potential dangling pointer in timer callback. Protect
st_ops_assoc with RCU. The get helper now needs to be paired with
bpf_struct_ops_put()
- The command should only increase refcount once for a program
(Andrii)
- Test a struct_ops program reused in two struct_ops maps
- Test getting associated struct_ops in timer callback
Link: https://lore.kernel.org/bpf/20251017215627.722338-1-ameryhung@gmail.com/

v2 -> v3
- Change the type of st_ops_assoc from void* (i.e., kdata) to bpf_map
(Andrii)
- Fix a bug that clears BPF_PTR_POISON when a struct_ops map is freed
(Andrii)
- Return NULL if the map is not fully initialized (Martin)
- Move struct_ops map refcount inc/dec into internal helpers (Martin)
- Add libbpf API, bpf_program__assoc_struct_ops (Andrii)
Link: https://lore.kernel.org/bpf/20251016204503.3203690-1-ameryhung@gmail.com/

v1 -> v2
- Poison st_ops_assoc when reusing the program in more than one
struct_ops maps and add a helper to access the pointer (Andrii)
- Minor style and naming changes (Andrii)
Link: https://lore.kernel.org/bpf/20251010174953.2884682-1-ameryhung@gmail.com/

---
====================

Link: https://patch.msgid.link/20251203233748.668365-1-ameryhung@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>



# 33a165f9 04-Dec-2025 Amery Hung <ameryhung@gmail.com>

selftests/bpf: Test BPF_PROG_ASSOC_STRUCT_OPS command

Test BPF_PROG_ASSOC_STRUCT_OPS command that associates a BPF program
with a struct_ops. The test follows the same logic as commit
ba7000f1c360 ("selftests/bpf: Test multi_st_ops and calling kfuncs from
different programs"), but instead of using map id to identify a specific
struct_ops, this test uses the new BPF command to associate a struct_ops
with a program.

The test consists of two sets of almost identical struct_ops maps and BPF
programs associated with the map. Their only difference is the unique
value returned by bpf_testmod_multi_st_ops::test_1().

The test first loads the programs and associates them with struct_ops
maps. Then, it exercises the BPF programs. They will in turn call kfunc
bpf_kfunc_multi_st_ops_test_1_prog_arg() to trigger test_1() of the
associated struct_ops map, and then check if the right unique value is
returned.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20251203233748.668365-5-ameryhung@gmail.com
