Revision tags: v6.18-rc2

# 4f38da1f | 13-Oct-2025 | Mark Brown <broonie@kernel.org>
spi: Merge up v6.18-rc1
Ensure my CI has a sensible baseline.

# ec2e0fb0 | 16-Oct-2025 | Takashi Iwai <tiwai@suse.de>
Merge tag 'asoc-fix-v6.18-rc1' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Fixes for v6.18
A moderately large collection of driver-specific fixes, plus a few new quirks and device IDs. The NAU8821 changes are a little large, but more mechanical than complex.

# 48a71076 | 14-Oct-2025 | Thomas Zimmermann <tzimmermann@suse.de>
Merge drm/drm-fixes into drm-misc-fixes
Updating drm-misc-fixes to the state of v6.18-rc1.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Revision tags: v6.18-rc1

# ae28ed45 | 01-Oct-2025 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'bpf-next-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Pull bpf updates from Alexei Starovoitov:
- Support pulling non-linear xdp data with bpf_xdp_pull_data() kfunc (Amery Hung)
Applied as a stable branch in bpf-next and net-next trees.
- Support reading skb metadata via bpf_dynptr (Jakub Sitnicki)
Also a stable branch in bpf-next and net-next trees.
- Enforce expected_attach_type for tailcall compatibility (Daniel Borkmann)
- Replace path-sensitive with path-insensitive live stack analysis in the verifier (Eduard Zingerman)
This is a significant change in the verification logic. More details, motivation, long term plans are in the cover letter/merge commit.
- Support signed BPF programs (KP Singh)
This is another major feature that took years to materialize.
Algorithm details are in the cover letter/merge commit.
- Add support for may_goto instruction to s390 JIT (Ilya Leoshkevich)
- Add support for may_goto instruction to arm64 JIT (Puranjay Mohan)
- Fix USDT SIB argument handling in libbpf (Jiawei Zhao)
- Allow uprobe-bpf program to change context registers (Jiri Olsa)
- Support signed loads from BPF arena (Kumar Kartikeya Dwivedi and Puranjay Mohan)
- Allow access to union arguments in tracing programs (Leon Hwang)
- Optimize rcu_read_lock() + migrate_disable() combination where it's used in BPF subsystem (Menglong Dong)
- Introduce bpf_task_work_schedule*() kfuncs to schedule deferred execution of BPF callback in the context of a specific task using the kernel’s task_work infrastructure (Mykyta Yatsenko)
- Enforce RCU protection for KF_RCU_PROTECTED kfuncs (Kumar Kartikeya Dwivedi)
- Add stress test for rqspinlock in NMI (Kumar Kartikeya Dwivedi)
- Improve the precision of tnum multiplier verifier operation (Nandakumar Edamana)
- Use tnums to improve is_branch_taken() logic (Paul Chaignon); a short standalone sketch of the tnum idea follows this entry
- Add support for atomic operations in arena in riscv JIT (Pu Lehui)
- Report arena faults to BPF error stream (Puranjay Mohan)
- Search for tracefs at /sys/kernel/tracing first in bpftool (Quentin Monnet)
- Add bpf_strcasecmp() kfunc (Rong Tao)
- Support lookup_and_delete_elem command in BPF_MAP_STACK_TRACE (Tao Chen)
* tag 'bpf-next-6.18' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (197 commits)
  libbpf: Replace AF_ALG with open coded SHA-256
  selftests/bpf: Add stress test for rqspinlock in NMI
  selftests/bpf: Add test case for different expected_attach_type
  bpf: Enforce expected_attach_type for tailcall compatibility
  bpftool: Remove duplicate string.h header
  bpf: Remove duplicate crypto/sha2.h header
  libbpf: Fix error when st-prefix_ops and ops from differ btf
  selftests/bpf: Test changing packet data from kfunc
  selftests/bpf: Add stacktrace map lookup_and_delete_elem test case
  selftests/bpf: Refactor stacktrace_map case with skeleton
  bpf: Add lookup_and_delete_elem for BPF_MAP_STACK_TRACE
  selftests/bpf: Fix flaky bpf_cookie selftest
  selftests/bpf: Test changing packet data from global functions with a kfunc
  bpf: Emit struct bpf_xdp_sock type in vmlinux BTF
  selftests/bpf: Task_work selftest cleanup fixes
  MAINTAINERS: Delete inactive maintainers from AF_XDP
  bpf: Mark kfuncs as __noclone
  selftests/bpf: Add kprobe multi write ctx attach test
  selftests/bpf: Add kprobe write ctx attach test
  selftests/bpf: Add uprobe context ip register change test
  ...
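
[Editor's note] As context for the tnum items in the list above (more precise tnum multiplication, tnum-based is_branch_taken()): a tnum, or "tristate number", records which bits of a register the verifier knows exactly and which are unknown. The small standalone C model below only illustrates the idea; it is not the kernel's tnum code, and the struct layout and helper are written here from scratch for illustration.

    /* Minimal user-space model of a tnum: bits set in .mask are unknown,
     * all other bits are fixed to the corresponding bits of .value. */
    #include <stdint.h>
    #include <stdio.h>

    struct tnum {
        uint64_t value;
        uint64_t mask;
    };

    /* Is constant c compatible with the partial knowledge encoded by t? */
    static int tnum_contains(struct tnum t, uint64_t c)
    {
        return (c & ~t.mask) == t.value;
    }

    int main(void)
    {
        /* Register known to look like 0b10x0: bit 1 unknown, others fixed. */
        struct tnum t = { .value = 0x8, .mask = 0x2 };

        printf("can reg be 8?  %s\n", tnum_contains(t, 8)  ? "yes" : "no"); /* yes */
        printf("can reg be 10? %s\n", tnum_contains(t, 10) ? "yes" : "no"); /* yes */
        /* "if (reg == 4)" can never be true here, so a verifier using this
         * information could treat that branch as not taken. */
        printf("can reg be 4?  %s\n", tnum_contains(t, 4)  ? "yes" : "no"); /* no */
        return 0;
    }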

Revision tags: v6.17, v6.17-rc7, v6.17-rc6, v6.17-rc5, v6.17-rc4, v6.17-rc3, v6.17-rc2, v6.17-rc1

# 911c0359 | 06-Aug-2025 | Martin KaFai Lau <martin.lau@kernel.org>
Merge branch 'allow-struct_ops-to-create-map-id-to'
Amery Hung says:
====================
This patchset allows struct_ops implementors to get the map id from kdata in reg(), unreg() and update() so that they can create an id to struct_ops instance mapping. This in turn allows struct_ops kfuncs to refer to the calling instance without passing a pointer to the struct_ops. The selftest provides an end-to-end example.
Some struct_ops users extend themselves with other bpf programs, which also need to call struct_ops kfuncs. For example, scx_layered uses syscall bpf programs as a scx_layered-specific control plane and uses tracing programs to get additional information for scheduling [0]. The kfuncs may need to refer to the struct_ops instance and perform jobs accordingly. To allow calling struct_ops kfuncs that refer to specific instances from different program types and contexts (e.g., struct_ops, tracing, async callbacks), the traditional way is to pass the struct_ops pointer to the kfuncs.
This patchset provides an alternative way, through a combination of bpf map id and global variable. First, a struct_ops implementor will use the map id of the struct_ops map as the id of an instance. Then, it needs to maintain an id to instance mapping: inserting a new mapping during reg() and removing it during unreg(). The map id can be acquired by calling bpf_struct_ops_id().
v1 -> v2: Add bpf_struct_ops_id() instead of using bpf_struct_ops_get()
[0] https://github.com/sched-ext/scx/blob/main/scheds/rust/scx_layered/src/bpf/main.bpf.c
====================
Link: https://patch.msgid.link/20250806162540.681679-1-ameryhung@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
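
[Editor's note] A rough kernel-side sketch of the pattern the cover letter describes, not code from the patchset: the implementor records the id returned by bpf_struct_ops_id() in reg() and drops it in unreg(), so a kfunc can later resolve the instance from an id. The names my_ops, my_ops_instances and my_ops_call_test_1 are invented, the bpf_struct_ops_id(kdata) signature is assumed from the cover letter, and struct_ops/kfunc registration boilerplate is omitted.

    #include <linux/bpf.h>
    #include <linux/btf.h>
    #include <linux/xarray.h>

    struct my_ops {
        int (*test_1)(void);
    };

    /* map id -> struct_ops instance */
    static DEFINE_XARRAY(my_ops_instances);

    static int my_ops_reg(void *kdata, struct bpf_link *link)
    {
        u32 id = bpf_struct_ops_id(kdata);      /* assumed helper/signature */

        /* remember which instance belongs to this map id */
        return xa_err(xa_store(&my_ops_instances, id, kdata, GFP_KERNEL));
    }

    static void my_ops_unreg(void *kdata, struct bpf_link *link)
    {
        xa_erase(&my_ops_instances, bpf_struct_ops_id(kdata));
    }

    /* A kfunc can now dispatch on the id a bpf program passes in. */
    __bpf_kfunc int my_ops_call_test_1(u32 id)
    {
        struct my_ops *ops = xa_load(&my_ops_instances, id);

        return ops ? ops->test_1() : -ENODEV;
    }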

# ba7000f1 | 06-Aug-2025 | Amery Hung <ameryhung@gmail.com>
selftests/bpf: Test multi_st_ops and calling kfuncs from different programs
Test multi_st_ops and demonstrate how different bpf programs can call a kfunc that refers to the struct_ops instance in the same source file by id. The id is defined as a global variable and initialized before attaching the skeleton. Kfuncs that take the id can hide the argument behind a macro to make it almost transparent to bpf program developers.
The test involves two struct_ops returning different values from .test_1. In syscall and tracing programs, check if the correct value is returned by a kfunc that calls .test_1.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
Link: https://patch.msgid.link/20250806162540.681679-4-ameryhung@gmail.com
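
[Editor's note] A rough BPF-side sketch of the pattern this commit describes, with made-up names (st_ops_id, bpf_kfunc_multi_st_ops_test_1) rather than the selftest's actual identifiers: the struct_ops map id sits in a global that userspace fills in via the skeleton before attaching, and a macro hides the id argument so call sites look like a plain kfunc call.

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    /* Userspace writes the struct_ops map id here before attaching the skeleton. */
    __u32 st_ops_id;

    /* Assumed kfunc that dispatches to the struct_ops instance identified by id. */
    extern int bpf_kfunc_multi_st_ops_test_1(__u32 id) __ksym;

    /* Hide the id argument so call sites stay close to a plain kfunc call. */
    #define multi_st_ops_test_1() bpf_kfunc_multi_st_ops_test_1(st_ops_id)

    int test_1_result;

    SEC("syscall")
    int call_from_syscall(void *ctx)
    {
        test_1_result = multi_st_ops_test_1();
        return 0;
    }

    SEC("fentry/bpf_fentry_test1")
    int BPF_PROG(call_from_tracing)
    {
        test_1_result = multi_st_ops_test_1();
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";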