# f5ad4101 | 15-Apr-2026 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'bpf-next-7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Pull bpf updates from Alexei Starovoitov:
- Welcome new BPF maintainers Kumar Kartikeya Dwivedi and Eduard Zingerman, while Martin KaFai Lau reduced his load to Reviewer.
- Lots of fixes everywhere from many first-time contributors. Thank you all.
- Diff stat is dominated by mechanical split of verifier.c into multiple components:
  - backtrack.c: backtracking logic and jump history
  - states.c: state equivalence
  - cfg.c: control flow graph, postorder, strongly connected components
  - liveness.c: register and stack liveness
  - fixups.c: post-verification passes: instruction patching, dead code removal, bpf_loop inlining, finalize fastcall
  8k lines were moved. verifier.c still stands at 20k lines. Further refactoring is planned for the next release.
- Replace dynamic stack liveness with static stack liveness based on data flow analysis.
This improved the verification time by 2x for some programs and equally reduced memory consumption. New logic is in liveness.c and supported by constant folding in const_fold.c (Eduard Zingerman, Alexei Starovoitov)
- Introduce BTF layout to ease addition of new BTF kinds (Alan Maguire)
- Use kmalloc_nolock() universally in BPF local storage (Amery Hung)
- Fix several bugs in linked registers delta tracking (Daniel Borkmann)
- Improve verifier support of arena pointers (Emil Tsalapatis)
- Improve verifier tracking of register bounds in min/max and tnum domains (Harishankar Vishwanathan, Paul Chaignon, Hao Sun)
- Further extend support for implicit arguments in the verifier (Ihor Solodrai)
- Add support for nop,nop5 instruction combo for USDT probes in libbpf (Jiri Olsa)
- Support merging multiple module BTFs (Josef Bacik)
- Extend applicability of bpf_kptr_xchg (Kaitao Cheng)
- Retire rcu_trace_implies_rcu_gp() (Kumar Kartikeya Dwivedi)
- Support variable offset context access for 'syscall' programs (Kumar Kartikeya Dwivedi)
- Migrate bpf_task_work and dynptr to kmalloc_nolock() (Mykyta Yatsenko)
- Fix UAF in open-coded task_vma iterator (Puranjay Mohan)
* tag 'bpf-next-7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (241 commits)
  selftests/bpf: cover short IPv4/IPv6 inputs with adjust_room
  bpf: reject short IPv4/IPv6 inputs in bpf_prog_test_run_skb
  selftests/bpf: Use memfd_create instead of shm_open in cgroup_iter_memcg
  selftests/bpf: Add test for cgroup storage OOB read
  bpf: Fix OOB in pcpu_init_value
  selftests/bpf: Fix reg_bounds to match new tnum-based refinement
  selftests/bpf: Add tests for non-arena/arena operations
  bpf: Allow instructions with arena source and non-arena dest registers
  bpftool: add missing fsession to the usage and docs of bpftool
  docs/bpf: add missing fsession attach type to docs
  bpf: add missing fsession to the verifier log
  bpf: Move BTF checking logic into check_btf.c
  bpf: Move backtracking logic to backtrack.c
  bpf: Move state equivalence logic to states.c
  bpf: Move check_cfg() into cfg.c
  bpf: Move compute_insn_live_regs() into liveness.c
  bpf: Move fixup/post-processing logic from verifier.c into fixups.c
  bpf: Simplify do_check_insn()
  bpf: Move checks for reserved fields out of the main pass
  bpf: Delete unused variable
  ...
Revision tags: v7.0
# e2e6a6ea | 11-Apr-2026 | Alexei Starovoitov <ast@kernel.org>
Merge branch 'bpf-static-stack-liveness-data-flow-analysis'
Eduard Zingerman says:
==================== bpf: static stack liveness data flow analysis
This patch set converts the current dynamic stack slot liveness tracking mechanism to a static data flow analysis. The result is used during state pruning (clean_verifier_state) to zero out dead stack slots, enabling more aggressive state equivalence and pruning. To improve analysis precision, live stack slot tracking is converted to 4-byte granularity.
The key ideas and the bulk of the execution behind the series belong to Alexei Starovoitov. I contributed to patch set integration with existing liveness tracking mechanism.
Due to the complexity of the changes, the bisectability property of the patch set is not preserved: some selftests may fail between intermediate patches of the series.
Analysis consists of two passes:
- A forward fixed-point analysis that tracks which frame's FP each register value is derived from, and at what byte offset. This is needed because a callee can receive a pointer to its caller's stack frame (e.g. r1 = fp-16 at the call site), then do *(u64 *)(r1 + 0) inside the callee - a cross-frame stack access that the callee's local liveness must attribute to the caller's stack.
- A backward dataflow pass within each callee subprog that computes live_in = (live_out \ def) ∪ use for both local and non-local (ancestor) stack slots. The result of the analysis for a callee is propagated up to the callsite.
The key idea making such analysis possible is that limited and conservative argument tracking pass is sufficient to recover most of the offsets / stack pointer arguments.
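The backward transfer function above can be sketched with plain bitmasks. This is an illustration only, not the kernel's liveness.c: `insn_marks` and `block_live_in` are invented names, and each bit of a u64 mask stands for one 4-byte stack slot, matching the granularity the cover letter describes.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Per-instruction read/write marks for the backward data flow
 * equation: live_in = (live_out & ~def) | use.
 */
struct insn_marks {
	uint64_t use;        /* slots read by this instruction       */
	uint64_t must_write; /* slots definitely written (the "def") */
};

/* Propagate liveness backward over a straight-line block. */
static uint64_t block_live_in(const struct insn_marks *insns, int n,
			      uint64_t live_out)
{
	uint64_t live = live_out;

	for (int i = n - 1; i >= 0; i--)
		live = (live & ~insns[i].must_write) | insns[i].use;
	return live;
}
```

A slot that is read before any write stays live into the block; a slot that is definitely written before its first read (or never read) is dead on entry, which is exactly what lets clean_verifier_state() zero it out.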
Changelog:
v3 -> v4:
liveness.c:
- fill_from_stack(): correct conservative stack mask for imprecise result, instead of picking frames from pointer register (Alexei, sashiko).
- spill_to_stack(): join with existing values instead of overwriting when dst has multiple offsets (cnt > 1) or imprecise offset (cnt == 0) (Alexei, sashiko).
- analyze_subprog(): big change, now each analyze_subprog() is called with a fresh func_instance; once read/write marks are collected the instance is joined with the one accumulated for (callsite, depth) and update_instance() is called. This handles several issues:
  - Avoids stale must_write marks when the same func_instance is reused by analyze_subprog() several times.
  - Handles potential multiple calls to mark_stack_write() within a single instruction. (Alexei, sashiko)
- analyze_subprog(): added a complexity limit to avoid exponential analysis time blowup for crafted programs with lots of nested function calls (Alexei, sashiko).
- the patch "bpf: record arg tracking results in bpf_liveness masks" is reinstated; it was accidentally squashed during the v1 -> v2 transition.
verifier.c:
- clean_live_states() is replaced by a direct call to clean_verifier_state(); bpf_verifier_state->cleaned is dropped.
verifier_live_stack.c:
- added selftests for arg tracking changes.
v2 -> v3:
liveness.c:
- record_stack_access(): handle S64_MIN (unknown read) with imprecise offset. A test case can't be created with existing helpers/kfuncs (sashiko).
- fmt_subprog(): handle NULL name (subprogs without BTF info).
- print_instance(): use u64 for pos/insn_pos to avoid truncation (bot+bpf-ci).
- compute_subprog_args(): return an error if 'env->callsite_at_stack[idx] = kvmalloc_objs(...)' fails (sashiko).
- clear_overlapping_stack_slots(): avoid integer promotion issues by adding an explicit (int) cast (sashiko).
bpf_verifier.h, verifier.c, liveness.c:
- Fixes in comments and commit messages (bot+bpf-ci).
v1 -> v2:
liveness.c:
- Removed func_instance->callsites and replaced it with an explicit spine passed through analyze_subprog() calls (sashiko).
- Fixed BPF_LOAD_ACQ handling in arg_track_xfer: don't clear dst register tracking (sashiko).
- Various error threading nits highlighted by bots (sashiko, bot+bpf-ci).
- Massaged fmt_spis_mask() to be more concise (Alexei).
verifier.c:
- Move subprog_info[i].name assignment from add_subprog_and_kfunc to check_btf_func (sashiko, bot+bpf-ci).
- Fixed inverse usage of msb/lsb halves by patch "bpf: make liveness.c track stack with 4-byte granularity" (sashiko, bot+bpf-ci).
v1: https://lore.kernel.org/bpf/20260408-patch-set-v1-0-1a666e860d42@gmail.com/
v2: https://lore.kernel.org/bpf/20260409-patch-set-v2-0-651804512349@gmail.com/
v3: https://lore.kernel.org/bpf/20260410-patch-set-v3-0-1f5826dc0ef2@gmail.com/
Verification performance impact (negative % is good):
========= selftests: master vs patch-set =========
File                     Program        Insns (A)  Insns (B)  Insns (DIFF)
-----------------------  -------------  ---------  ---------  ---------------
xdp_synproxy_kern.bpf.o  syncookie_tc       20363      22910  +2547 (+12.51%)
xdp_synproxy_kern.bpf.o  syncookie_xdp      20450      23001  +2551 (+12.47%)

Total progs: 4490
Old success: 2856
New success: 2856
total_insns diff min: -80.26%
total_insns diff max: 12.51%
0 -> value: 0
value -> 0: 0
total_insns abs max old: 837,487
total_insns abs max new: 837,487
-85 .. -75 %: 1
-50 .. -40 %: 1
-35 .. -25 %: 1
-20 .. -10 %: 5
-10 ..   0 %: 18
  0 ..   5 %: 4458
  5 ..  15 %: 6
========= scx: master vs patch-set =========
File            Program    Insns (A)  Insns (B)  Insns (DIFF)
--------------  ---------  ---------  ---------  --------------
scx_qmap.bpf.o  qmap_init      20230      19022  -1208 (-5.97%)

Total progs: 376
Old success: 351
New success: 351
total_insns diff min: -27.15%
total_insns diff max: 0.50%
0 -> value: 0
value -> 0: 0
total_insns abs max old: 236,251
total_insns abs max new: 233,669
-30 .. -20 %: 8
-20 .. -10 %: 2
-10 ..   0 %: 21
  0 ..   5 %: 345
========= meta: master vs patch-set =========
File                                                                          Program           Insns (A)  Insns (B)  Insns (DIFF)
----------------------------------------------------------------------------  ----------------  ---------  ---------  -----------------
...
third-party-scx-backports-scheds-rust-scx_layered-bpf_skel_genskel-bpf.bpf.o  layered_dispatch      13944      13104    -840 (-6.02%)
third-party-scx-backports-scheds-rust-scx_layered-bpf_skel_genskel-bpf.bpf.o  layered_dispatch      13944      13104    -840 (-6.02%)
third-party-scx-gefe21962f49a-__scx_layered_bpf_skel_genskel-bpf.bpf.o        layered_dispatch      13825      12985    -840 (-6.08%)
third-party-scx-v1.0.16-__scx_lavd_bpf_skel_genskel-bpf.bpf.o                 lavd_enqueue          15501      13602   -1899 (-12.25%)
third-party-scx-v1.0.16-__scx_lavd_bpf_skel_genskel-bpf.bpf.o                 lavd_select_cpu       19814      16231   -3583 (-18.08%)
third-party-scx-v1.0.17-__scx_lavd_bpf_skel_genskel-bpf.bpf.o                 lavd_enqueue          15501      13602   -1899 (-12.25%)
third-party-scx-v1.0.17-__scx_lavd_bpf_skel_genskel-bpf.bpf.o                 lavd_select_cpu       19814      16231   -3583 (-18.08%)
third-party-scx-v1.0.17-__scx_layered_bpf_skel_genskel-bpf.bpf.o              layered_dispatch      13976      13151    -825 (-5.90%)
third-party-scx-v1.0.18-__scx_lavd_bpf_skel_genskel-bpf.bpf.o                 lavd_dispatch        260628     237930  -22698 (-8.71%)
third-party-scx-v1.0.18-__scx_lavd_bpf_skel_genskel-bpf.bpf.o                 lavd_enqueue          13437      12225   -1212 (-9.02%)
third-party-scx-v1.0.18-__scx_lavd_bpf_skel_genskel-bpf.bpf.o                 lavd_select_cpu       17744      14730   -3014 (-16.99%)
third-party-scx-v1.0.19-10-6b1958477-__scx_lavd_bpf_skel_genskel-bpf.bpf.o    lavd_cpu_offline      19676      18418   -1258 (-6.39%)
third-party-scx-v1.0.19-10-6b1958477-__scx_lavd_bpf_skel_genskel-bpf.bpf.o    lavd_cpu_online       19674      18416   -1258 (-6.39%)
...

Total progs: 1540
Old success: 1492
New success: 1493
total_insns diff min: -75.83%
total_insns diff max: 73.60%
0 -> value: 0
value -> 0: 0
total_insns abs max old: 434,763
total_insns abs max new: 666,036
-80 .. -70 %: 2
-55 .. -50 %: 7
-50 .. -45 %: 10
-45 .. -35 %: 4
-35 .. -25 %: 4
-25 .. -20 %: 8
-20 .. -15 %: 15
-15 .. -10 %: 11
-10 ..  -5 %: 45
 -5 ..   0 %: 112
  0 ..   5 %: 1316
  5 ..  15 %: 2
 15 ..  25 %: 1
 25 ..  35 %: 1
 55 ..  65 %: 1
 70 ..  75 %: 1
========= cilium: master vs patch-set =========
File             Program                            Insns (A)  Insns (B)  Insns (DIFF)
---------------  ---------------------------------  ---------  ---------  ----------------
bpf_host.o       cil_host_policy                        45801      32027  -13774 (-30.07%)
bpf_host.o       cil_to_netdev                         100287      69042  -31245 (-31.16%)
bpf_host.o       tail_handle_ipv4_cont_from_host        60911      20962  -39949 (-65.59%)
bpf_host.o       tail_handle_ipv4_from_netdev           59735      33155  -26580 (-44.50%)
bpf_host.o       tail_handle_ipv6_cont_from_host        23529      17036   -6493 (-27.60%)
bpf_host.o       tail_handle_ipv6_from_host             11906      10303   -1603 (-13.46%)
bpf_host.o       tail_handle_ipv6_from_netdev           29778      23743   -6035 (-20.27%)
bpf_host.o       tail_handle_snat_fwd_ipv4              61616      67463   +5847 (+9.49%)
bpf_host.o       tail_handle_snat_fwd_ipv6              30802      22806   -7996 (-25.96%)
bpf_host.o       tail_ipv4_host_policy_ingress          20017      10528   -9489 (-47.40%)
bpf_host.o       tail_ipv6_host_policy_ingress          20693      17301   -3392 (-16.39%)
bpf_host.o       tail_nodeport_nat_egress_ipv4          16455      13684   -2771 (-16.84%)
bpf_host.o       tail_nodeport_nat_ingress_ipv4         36174      20080  -16094 (-44.49%)
bpf_host.o       tail_nodeport_nat_ingress_ipv6         48039      25779  -22260 (-46.34%)
bpf_lxc.o        tail_handle_ipv4                       13765      10001   -3764 (-27.34%)
bpf_lxc.o        tail_handle_ipv4_cont                  96891      68725  -28166 (-29.07%)
bpf_lxc.o        tail_handle_ipv6_cont                  21809      17697   -4112 (-18.85%)
bpf_lxc.o        tail_ipv4_ct_egress                    15949      17746   +1797 (+11.27%)
bpf_lxc.o        tail_nodeport_nat_egress_ipv4          16183      13432   -2751 (-17.00%)
bpf_lxc.o        tail_nodeport_nat_ingress_ipv4         18532      10697   -7835 (-42.28%)
bpf_overlay.o    tail_handle_inter_cluster_revsnat      15708      11099   -4609 (-29.34%)
bpf_overlay.o    tail_handle_ipv4                      105672      76108  -29564 (-27.98%)
bpf_overlay.o    tail_handle_ipv6                       15733      19944   +4211 (+26.77%)
bpf_overlay.o    tail_handle_snat_fwd_ipv4              19327      26468   +7141 (+36.95%)
bpf_overlay.o    tail_handle_snat_fwd_ipv6              20817      12556   -8261 (-39.68%)
bpf_overlay.o    tail_nodeport_nat_egress_ipv4          16175      12184   -3991 (-24.67%)
bpf_overlay.o    tail_nodeport_nat_ingress_ipv4         20760      11951   -8809 (-42.43%)
bpf_wireguard.o  tail_handle_ipv4                       27466      28909   +1443 (+5.25%)
bpf_wireguard.o  tail_nodeport_nat_egress_ipv4          15937      12094   -3843 (-24.11%)
bpf_wireguard.o  tail_nodeport_nat_ingress_ipv4         20624      11993   -8631 (-41.85%)
bpf_xdp.o        tail_lb_ipv4                           42673      60855  +18182 (+42.61%)
bpf_xdp.o        tail_lb_ipv6                           87903     108585  +20682 (+23.53%)
bpf_xdp.o        tail_nodeport_nat_ingress_ipv4         28787      20991   -7796 (-27.08%)
bpf_xdp.o        tail_nodeport_nat_ingress_ipv6        207593     152012  -55581 (-26.77%)

Total progs: 134
Old success: 134
New success: 134
total_insns diff min: -65.59%
total_insns diff max: 42.61%
0 -> value: 0
value -> 0: 0
total_insns abs max old: 207,593
total_insns abs max new: 152,012
-70 .. -60 %: 1
-50 .. -40 %: 7
-40 .. -30 %: 9
-30 .. -25 %: 9
-25 .. -20 %: 12
-20 .. -15 %: 7
-15 .. -10 %: 14
-10 ..  -5 %: 6
 -5 ..   0 %: 16
  0 ..   5 %: 42
  5 ..  15 %: 5
 15 ..  25 %: 2
 25 ..  35 %: 2
 35 ..  45 %: 2
====================
Link: https://patch.msgid.link/20260410-patch-set-v4-0-5d4eecb343db@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
# b42eb55f | 10-Apr-2026 | Alexei Starovoitov <ast@kernel.org>
selftests/bpf: update existing tests due to liveness changes
The verifier cleans all dead registers and stack slots in the current state. Adjust expected output in tests or insert dummy stack/register reads. Also update verifier_live_stack tests to adhere to the new logging scheme.
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260410-patch-set-v4-11-5d4eecb343db@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Revision tags: v7.0-rc7, v7.0-rc6, v7.0-rc5, v7.0-rc4, v7.0-rc3, v7.0-rc2, v7.0-rc1
# bd86ab5b | 13-Feb-2026 | Alexei Starovoitov <ast@kernel.org>
Merge branch 'bpf-consolidate-pointer-offset-tracking-in-var_off'
Eduard Zingerman says:
==================== bpf: consolidate pointer offset tracking in var_off
Consolidate static and varying pointer offset tracking logic in the BPF verifier. All pointer offsets are now represented solely using `reg->var_off` and min/max fields, simplifying pointer tracking code and making it easier to widen pointer registers for loop convergence checks.
- Patch 1 is a preparatory refactoring of check_reg_sane_offset().
- Patch 2 is the main change, moving pointer offsets from `reg->off` to `reg->var_off`.
- Patch 3 removes references to `reg->off` in netronome code.
- Patch 4 renames the now-repurposed `reg->off` field to `reg->delta`, reflecting its remaining role as a constant delta between linked scalar registers.
Note: netronome changes are compile-tested only!
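After this series a pointer's whole offset lives in reg->var_off, the verifier's tristate number: a (value, mask) pair where a set mask bit marks an unknown bit. The sketch below is modeled on the kernel's tnum implementation (include/linux/tnum.h), simplified into a stand-alone illustration of how a constant base offset and a variable offset fold into a single var_off; it is not the kernel code itself.

```c
#include <assert.h>
#include <stdint.h>

/* Tristate number: mask bit set => that bit's value is unknown. */
struct tnum {
	uint64_t value;
	uint64_t mask;
};

static struct tnum tnum_const(uint64_t v)
{
	return (struct tnum){ .value = v, .mask = 0 };
}

static int tnum_is_const(struct tnum t)
{
	return t.mask == 0;
}

/* Addition over tristate numbers: carries out of unknown bits
 * poison higher bits, which the mu term accounts for. */
static struct tnum tnum_add(struct tnum a, struct tnum b)
{
	uint64_t sm = a.mask + b.mask;
	uint64_t sv = a.value + b.value;
	uint64_t sigma = sm + sv;
	uint64_t chi = sigma ^ sv;
	uint64_t mu = chi | a.mask | b.mask;

	return (struct tnum){ .value = sv & ~mu, .mask = mu };
}
```

For example, a constant offset 16 plus a variable offset known to fit in the low three bits yields value=16, mask=7: the base is still known exactly while the low bits vary, with no separate `.off` field needed.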
Changelog:
v1 -> v2:
- put back WARN_ON_ONCE in mark_ptr_or_null_reg() (Alexei).
- references to `ptr->off` field are removed from netronome code (bot+bpf-ci, kernel test robot).
- fix for a comment referencing `ptr->off` in bpf_verifier.h (bot+bpf-ci).

v1: https://lore.kernel.org/bpf/20260211-ptrs-off-migration-v1-0-996c2a37b063@gmail.com/
---
====================
Link: https://patch.msgid.link/20260212-ptrs-off-migration-v2-0-00820e4d3438@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
# 022ac075 | 12-Feb-2026 | Eduard Zingerman <eddyz87@gmail.com>
bpf: use reg->var_off instead of reg->off for pointers
This commit consolidates static and varying pointer offset tracking logic. All offsets are now represented solely using `.var_off` and min/max fields. The reasons are twofold:
- This simplifies pointer tracking code, as each relevant function needs to check the `.var_off` field anyway.
- It makes it easier to widen pointer registers for the purpose of loop convergence checks, by forgoing the `regsafe()` logic demanding `.off` fields to be identical.
The changes are spread across many functions and are hard to group into smaller patches. Some of the logical changes include:
- Checks in __check_ptr_off_reg() are reordered so that the tnum_is_const() check is done before operating on reg->var_off.value.
- check_packet_access() now uses check_mem_region_access() to handle possible 'off' overflow cases.
- In check_helper_mem_access() utility functions like check_packet_access() are now called with 'off=0', as these utility functions now account for the complete register offset range.
- In check_reg_type() a call to __check_ptr_off_reg() is added before a call to btf_struct_ids_match(). This prevents btf_struct_ids_match() from potentially working on non-constant reg->var_off.value.
- regsafe() is relaxed to avoid comparing '.off' field for pointers.
As a precaution, the changes are verified in [1] by adding a pass checking that no pointer has non-zero '.off' field on each do_check_insn() iteration.
[1] https://github.com/eddyz87/bpf/tree/ptrs-off-migration
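The first bullet above (checking constness before using reg->var_off.value) can be illustrated in isolation. This is a hypothetical, heavily simplified stand-in, not __check_ptr_off_reg() itself: `struct reg_sketch` and `check_ptr_off` are invented names, with var_off flattened into two fields.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Invented stand-in for the var_off part of bpf_reg_state. */
struct reg_sketch {
	uint64_t var_off_value; /* known bits of the offset  */
	uint64_t var_off_mask;  /* set bit => bit is unknown */
};

static int check_ptr_off(const struct reg_sketch *reg, int64_t expected)
{
	/* Constness first: .var_off_value only names the full offset
	 * when no bits are unknown (i.e. tnum_is_const() holds). */
	if (reg->var_off_mask != 0)
		return -EACCES; /* variable offset disallowed here */
	if ((int64_t)reg->var_off_value != expected)
		return -EINVAL;
	return 0;
}
```

Reading `.value` of a non-constant tnum would silently use only the known bits, which is why the ordering matters once `.off` no longer exists as a separate constant.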
Notable selftests changes:
- `.var_off` value changed because it now combines static and varying offsets. Affected tests:
  - linked_list/incorrect_node_var_off
  - linked_list/incorrect_head_var_off2
  - verifier_align/packet_variable_offset
- Overflowing `smax_value` bound leads to a pointer with big negative or positive offset to be rejected immediately (previously an overflowing `rX += const` instruction updated the `.off` field, avoiding the overflow). Affected tests:
  - verifier_align/dubious_pointer_arithmetic
  - verifier_bounds/var_off_insn_off_test1
- Invalid access to packet now reports full offset inside a packet. Affected tests:
  - verifier_direct_packet_access/test23_x_pkt_ptr_4
- A change in check_mem_region_access() behavior: when register `.smin_value` is negative, it reports "rX min value is negative..." before calling into __check_mem_access() which reports "invalid access to ...". In the tests below, the `.off` field was negative, while `.smin_value` remained positive. This is no longer the case after the changes in this commit. Affected tests:
  - verifier_gotox/jump_table_invalid_mem_acceess_neg
  - verifier_helper_packet_access/test15_cls_helper_fail_sub
  - verifier_helper_value_access/imm_out_of_bound_2
  - verifier_helper_value_access/reg_out_of_bound_2
  - verifier_meta_access/meta_access_test2
  - verifier_value_ptr_arith/known_scalar_from_different_maps
  - lower_oob_arith_test_1
  - value_ptr_known_scalar_3
  - access_value_ptr_known_scalar
- Usage of check_mem_region_access() instead of __check_mem_access() in check_packet_access() changes the reported message from "rX offset is outside ..." to "rX min/max value is outside ...". Affected tests:
  - verifier_xdp_direct_packet_access/*
- In check_func_arg_reg_off() the check for zero offset now operates on the `.var_off` field instead of the `.off` field. For tests where the pattern looks like `kfunc(reg_with_var_off, ...)`, this changes the reported error:
  - previously the error "variable ... access ... disallowed" was reported by __check_ptr_off_reg();
  - now "R1 must have zero offset ..." is reported by check_func_arg_reg_off() itself.
  Affected tests:
  - verifier/calls.c "calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset"
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260212-ptrs-off-migration-v2-2-00820e4d3438@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
# f632de6e | 11-Feb-2026 | Eduard Zingerman <eddyz87@gmail.com>
selftests/bpf: Migrate align.c tests to test_loader framework
While working on pointer tracking changes I found it necessary to update expected log messages in the align.c series of tests. As a preliminary step, migrate these tests to the test_loader framework.
The tests in question load BPF program and check if expected log is produced, the log is specified as:
.matches = { ... {4, "R3", "32"}, ... }
Where:
- '4' is an *instruction number* (contrary to the field name in struct bpf_reg_match).
- 'R3' is the name of the register to check.
- '32' is the value expected for this register.
Mimic the same logic using __msg macro.
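The semantics being mimicked is an ordered-substring match against the verifier log: each expected fragment must appear after the previous one. The helper below is a hypothetical stand-alone illustration of that matching discipline, not test_loader's actual implementation; `log_matches` is an invented name.

```c
#include <assert.h>
#include <string.h>

/* Return 1 if every pattern occurs in `log`, in the given order,
 * with each match starting after the end of the previous one. */
static int log_matches(const char *log, const char *const *pats, int n)
{
	const char *cur = log;

	for (int i = 0; i < n; i++) {
		const char *hit = strstr(cur, pats[i]);

		if (!hit)
			return 0;
		cur = hit + strlen(pats[i]);
	}
	return 1;
}
```

Under this discipline a check for instruction number "4:" followed by "R3" succeeds against a log line like "4: R3=32", but the same fragments in reverse order fail, which is what makes per-instruction register checks expressible as a sequence of __msg expectations.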
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20260211051310.2782558-1-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>