History log of /linux/tools/testing/selftests/bpf/progs/timer_start_delete_race.c (Results 1 – 3 of 3)
Revision  Date  Author  Comments
# f17b474e 10-Feb-2026 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'bpf-next-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Pull bpf updates from Alexei Starovoitov:

- Support associating BPF program with struct_ops (Amery Hung)

- Switch BPF local storage to rqspinlock and remove recursion detection
counters which were causing false positives (Amery Hung)

- Fix live registers marking for indirect jumps (Anton Protopopov)

- Introduce execution context detection BPF helpers (Changwoo Min)

- Improve verifier precision for 32bit sign extension pattern
(Cupertino Miranda)

- Optimize BTF type lookup by sorting vmlinux BTF and doing binary
search (Donglin Peng)

- Allow states pruning for misc/invalid slots in iterator loops (Eduard
Zingerman)

- In preparation for ASAN support in BPF arenas, teach libbpf to move
  global BPF variables to the end of the region and enable arena kfuncs
  while holding locks (Emil Tsalapatis)

- Introduce support for implicit arguments in kfuncs and migrate a
number of them to new API. This is a prerequisite for cgroup
sub-schedulers in sched-ext (Ihor Solodrai)

- Fix incorrect copied_seq calculation in sockmap (Jiayuan Chen)

- Fix ORC stack unwind from kprobe_multi (Jiri Olsa)

- Speed up fentry attach by using single ftrace direct ops in BPF
trampolines (Jiri Olsa)

- Require frozen map for calculating map hash (KP Singh)

- Fix lock entry creation in TAS fallback in rqspinlock (Kumar
Kartikeya Dwivedi)

- Allow user space to select cpu in lookup/update operations on per-cpu
array and hash maps (Leon Hwang)

- Make kfuncs return trusted pointers by default (Matt Bobrowski)

- Introduce "fsession" support where single BPF program is executed
upon entry and exit from traced kernel function (Menglong Dong)

- Allow bpf_timer and bpf_wq use in all program types (Mykyta
  Yatsenko, Andrii Nakryiko, Kumar Kartikeya Dwivedi, Alexei
  Starovoitov)

- Make KF_TRUSTED_ARGS the default for all kfuncs and clean up their
definition across the tree (Puranjay Mohan)

- Allow BPF arena calls from non-sleepable context (Puranjay Mohan)

- Improve register id comparison logic in the verifier and extend
linked registers with negative offsets (Puranjay Mohan)

- In preparation for BPF-OOM introduce kfuncs to access memcg events
(Roman Gushchin)

- Use CFI compatible destructor kfunc type (Sami Tolvanen)

- Add bitwise tracking for BPF_END in the verifier (Tianci Cao)

- Add range tracking for BPF_DIV and BPF_MOD in the verifier (Yazhou
Tang)

- Make BPF selftests work with 64k page size (Yonghong Song)

* tag 'bpf-next-7.0' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (268 commits)
selftests/bpf: Fix outdated test on storage->smap
selftests/bpf: Choose another percpu variable in bpf for btf_dump test
selftests/bpf: Remove test_task_storage_map_stress_lookup
selftests/bpf: Update task_local_storage/task_storage_nodeadlock test
selftests/bpf: Update task_local_storage/recursion test
selftests/bpf: Update sk_storage_omem_uncharge test
bpf: Switch to bpf_selem_unlink_nofail in bpf_local_storage_{map_free, destroy}
bpf: Support lockless unlink when freeing map or local storage
bpf: Prepare for bpf_selem_unlink_nofail()
bpf: Remove unused percpu counter from bpf_local_storage_map_free
bpf: Remove cgroup local storage percpu counter
bpf: Remove task local storage percpu counter
bpf: Change local_storage->lock and b->lock to rqspinlock
bpf: Convert bpf_selem_unlink to failable
bpf: Convert bpf_selem_link_map to failable
bpf: Convert bpf_selem_unlink_map to failable
bpf: Select bpf_local_storage_map_bucket based on bpf_local_storage
selftests/xsk: fix number of Tx frags in invalid packet
selftests/xsk: properly handle batch ending in the middle of a packet
bpf: Prevent reentrance into call_rcu_tasks_trace()
...



Revision tags: v6.19
# b28dac3f 04-Feb-2026 Andrii Nakryiko <andrii@kernel.org>

Merge branch 'bpf-avoid-locks-in-bpf_timer-and-bpf_wq'

Alexei Starovoitov says:

====================
bpf: Avoid locks in bpf_timer and bpf_wq

From: Alexei Starovoitov <ast@kernel.org>

This series reworks implementation of BPF timer and workqueue APIs to
make them usable from any context.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>

Changes in v9:
- Different approach for patches 1 and 3:
- s/EBUSY/ENOENT/ when refcnt==0 to match existing
- drop latch, use refcnt and kmalloc_nolock() instead
- address race between timer/wq_start and delete_elem, add a test
- Link to v8: https://lore.kernel.org/bpf/20260127-timer_nolock-v8-0-5a29a9571059@meta.com/

Changes in v8:
- Return -EBUSY in bpf_async_read_op() if setting last_seq fails
- In bpf_async_cancel_and_free() drop bpf_async_cb ref after calling bpf_async_process()
- Link to v7: https://lore.kernel.org/r/20260122-timer_nolock-v7-0-04a45c55c2e2@meta.com

Changes in v7:
- Addressed Andrii's review points from the previous version - nothing
  very significant.
- Added NMI stress tests for bpf_timer - hit a few failing verifier
  checks and removed them.
- Addressed a sparse warning in bpf_async_update_prog_callback()
- Link to v6: https://lore.kernel.org/r/20260120-timer_nolock-v6-0-670ffdd787b4@meta.com

Changes in v6:
- Reworked destruction and refcnt use:
- On cancel_and_free() set last_seq to BPF_ASYNC_DESTROY value, drop
map's reference
- In irq work callback, atomically switch DESTROY to DESTROYED, cancel
timer/wq
- Free bpf_async_cb on refcnt going to 0.
- Link to v5: https://lore.kernel.org/r/20260115-timer_nolock-v5-0-15e3aef2703d@meta.com
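
The v6 destruction protocol described above (a DESTROY -> DESTROYED handoff plus refcount-based freeing) can be sketched with plain C11 atomics. This is a hypothetical userspace analogue, not the kernel code: the sentinel values, the struct layout, and every name not quoted in the log are assumptions.

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Placeholder sentinels; the actual kernel values are not shown in the log. */
#define BPF_ASYNC_DESTROY   (-1)
#define BPF_ASYNC_DESTROYED (-2)

struct async_cb {
	atomic_int last_seq;	/* sequence number, or a DESTROY* sentinel */
	atomic_int refcnt;
};

static void cb_put(struct async_cb *cb)
{
	/* free bpf_async_cb on refcnt going to 0 */
	if (atomic_fetch_sub(&cb->refcnt, 1) == 1)
		free(cb);
}

/* cancel_and_free(): publish the DESTROY sentinel, drop the map's ref */
static void cancel_and_free(struct async_cb *cb)
{
	atomic_store(&cb->last_seq, BPF_ASYNC_DESTROY);
	cb_put(cb);
}

/* irq-work side: flip DESTROY -> DESTROYED exactly once, so only one
 * path cancels the underlying timer/wq; returns 1 if this call won. */
static int irq_work_teardown(struct async_cb *cb)
{
	int expected = BPF_ASYNC_DESTROY;

	if (!atomic_compare_exchange_strong(&cb->last_seq, &expected,
					    BPF_ASYNC_DESTROYED))
		return 0;	/* another path already won the CAS */
	/* cancel the timer/workqueue here, then drop the irq-work ref */
	cb_put(cb);
	return 1;
}
```

With two references outstanding (map side plus irq-work side), cancel_and_free() followed by irq_work_teardown() frees the cell exactly once, no matter which side drops last.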

Changes in v5:
- Extracted the lock-free algorithm for updating cb->prog and
  cb->callback_fn into a function, bpf_async_update_prog_callback(),
  added in a new commit that introduces this function and uses it in
  __bpf_async_set_callback(), bpf_timer_cancel() and
  bpf_async_cancel_and_free().
  This allows moving the change into a separate commit without breaking
  correctness.
- Handle NULL prog in bpf_async_update_prog_callback().
- Link to v4: https://lore.kernel.org/r/20260114-timer_nolock-v4-0-fa6355f51fa7@meta.com
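
One way the lock-free pair update attributed to bpf_async_update_prog_callback() could look is to publish both fields through a single atomically swapped allocation, RCU-style, so readers never observe a torn (prog, callback_fn) pair. A minimal userspace sketch under that assumption (the kernel's actual mechanism may differ):

```c
#include <stdatomic.h>
#include <stdlib.h>

struct cb_state {
	void *prog;			/* NULL prog means "detach" */
	void (*callback_fn)(void *);
};

struct async_cb {
	_Atomic(struct cb_state *) state;
};

/* Publish a new (prog, callback_fn) pair in one atomic exchange;
 * returns the old state so the caller can release its references
 * (after a grace period, in a kernel setting). */
static struct cb_state *
bpf_async_update_prog_callback(struct async_cb *cb, void *prog,
			       void (*callback_fn)(void *))
{
	struct cb_state *next = malloc(sizeof(*next));

	next->prog = prog;	/* NULL is handled, per the v5 note */
	next->callback_fn = callback_fn;
	return atomic_exchange(&cb->state, next);
}
```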

Changes in v4:
- Handle irq_work_queue failures in both the schedule and cancel_and_free
  paths: introduced bpf_async_refcnt_dec_cleanup(), which decrements refcnt
  and makes sure that, if the last reference is put, at least one irq_work
  is scheduled to execute the final cleanup.
- Additional refcnt inc/dec in set_callback() + rcu lock to make sure
cleanup is not running at the same time as set_callback().
- Added READ_ONCE where it was needed.
- Squash 'bpf: Refactor __bpf_async_set_callback()' commit into 'bpf:
Add lock-free cell for NMI-safe
async operations'
- Removed mpmc_cell, use seqcount_latch_t instead.
- Link to v3: https://lore.kernel.org/r/20260107-timer_nolock-v3-0-740d3ec3e5f9@meta.com
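
The v4 rule - on the last refcount drop, guarantee at least one deferred cleanup pass runs - can be sketched as below. irq_work_queue() here is a userspace stand-in that returns true only if the work item was not already pending (mirroring the real helper's semantics); everything else is hypothetical.

```c
#include <stdatomic.h>
#include <stdbool.h>

struct async_cb {
	atomic_int refcnt;
	atomic_bool work_pending;
	bool cleaned_up;
};

/* stand-in for irq_work_queue(): true only if newly queued */
static bool irq_work_queue(struct async_cb *cb)
{
	bool expected = false;

	return atomic_compare_exchange_strong(&cb->work_pending, &expected,
					      true);
}

/* deferred-work callback: performs the final cleanup if needed */
static void irq_work_run(struct async_cb *cb)
{
	atomic_store(&cb->work_pending, false);
	if (atomic_load(&cb->refcnt) == 0)
		cb->cleaned_up = true;	/* free resources here */
}

static void bpf_async_refcnt_dec_cleanup(struct async_cb *cb)
{
	if (atomic_fetch_sub(&cb->refcnt, 1) != 1)
		return;
	/* Last reference gone: ensure a cleanup pass is scheduled. If
	 * the work item is already pending, that pending run will itself
	 * observe refcnt == 0 and clean up. */
	irq_work_queue(cb);
}
```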

Changes in v3:
- Major rework
- Introduce mpmc_cell, allowing concurrent writes and reads
- Implement irq_work deferring
- Add selftests
- Introduce bpf_timer_cancel_async kfunc
- Link to v2: https://lore.kernel.org/r/20251105-timer_nolock-v2-0-32698db08bfa@meta.com

Changes in v2:
- Move refcnt initialization and put (from cancel_and_free())
from patch 5 into the patch 4, so that patch 4 has more clear and full
implementation and use of refcnt
- Link to v1: https://lore.kernel.org/r/20251031-timer_nolock-v1-0-b064ae403bfb@meta.com
====================

Link: https://patch.msgid.link/20260201025403.66625-1-alexei.starovoitov@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>



Revision tags: v6.19-rc8
# b135beb0 01-Feb-2026 Alexei Starovoitov <ast@kernel.org>

selftests/bpf: Add a test to stress bpf_timer_start and map_delete race

Add a test to stress bpf_timer_start and map_delete race

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20260201025403.66625-10-alexei.starovoitov@gmail.com
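
The race this test stresses can be modeled in userspace: one thread keeps arming the timer while another deletes the map element. Under the refcount scheme sketched in the v9 notes (acquire a reference only if refcnt is nonzero, return -ENOENT once it hits 0), every start attempt either takes a reference or fails cleanly, and none touches freed memory. This is a hypothetical analogue, not the selftest itself; all names and the immediate temp-ref drop are simplifications.

```c
#include <errno.h>
#include <pthread.h>
#include <stdatomic.h>

static atomic_int refcnt = 1;	/* element starts live (map's ref) */
static atomic_int starts, enoents;

static int timer_start(void)
{
	int old = atomic_load(&refcnt);

	/* acquire a reference only while the element is still live */
	while (old > 0) {
		if (atomic_compare_exchange_weak(&refcnt, &old, old + 1)) {
			atomic_fetch_add(&starts, 1);
			/* timer "armed"; drop the temp ref right away
			 * (a real timer would hold it until it fires) */
			atomic_fetch_sub(&refcnt, 1);
			return 0;
		}
	}
	atomic_fetch_add(&enoents, 1);
	return -ENOENT;		/* matches the v9 s/EBUSY/ENOENT/ note */
}

static void *starter(void *arg)
{
	for (int i = 0; i < 100000; i++)
		timer_start();
	return NULL;
}
```

Run starter() on one thread while the "delete" side concurrently does atomic_fetch_sub(&refcnt, 1); every attempt lands in exactly one of the two counters, and refcnt ends at 0.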
