History log of /linux/arch/arm64/net/bpf_jit_comp.c (Results 1 – 25 of 935)
Revision Date Author Comments
# ae90f6a6 25-Oct-2024 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf

Pull bpf fixes from Daniel Borkmann:

- Fix an out-of-bounds read in bpf_link_show_fdinfo for BPF sockmap
link file descriptors (Hou Tao)

- Fix the BPF arm64 JIT's address emission with tag-based KASAN enabled,
which reserved too little size (Peter Collingbourne)

- Fix BPF verifier do_misc_fixups patching for inlining of the
bpf_get_branch_snapshot BPF helper (Andrii Nakryiko)

- Fix a BPF verifier bug and reject BPF program write attempts into
read-only marked BPF maps (Daniel Borkmann)

- Fix perf_event_detach_bpf_prog error handling by removing an invalid
check which would skip BPF program release (Jiri Olsa)

- Fix memory leak when parsing mount options for the BPF filesystem
(Hou Tao)

* tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
bpf: Check validity of link->type in bpf_link_show_fdinfo()
bpf: Add the missing BPF_LINK_TYPE invocation for sockmap
bpf: fix do_misc_fixups() for bpf_get_branch_snapshot()
bpf,perf: Fix perf_event_detach_bpf_prog error handling
selftests/bpf: Add test for passing in uninit mtu_len
selftests/bpf: Add test for writes to .rodata
bpf: Remove MEM_UNINIT from skb/xdp MTU helpers
bpf: Fix overloading of MEM_UNINIT's meaning
bpf: Add MEM_WRITE attribute
bpf: Preserve param->string when parsing mount options
bpf, arm64: Fix address emission with tag-based KASAN enabled



Revision tags: v6.12-rc4
# a552e2ef 19-Oct-2024 Peter Collingbourne <pcc@google.com>

bpf, arm64: Fix address emission with tag-based KASAN enabled

When BPF_TRAMP_F_CALL_ORIG is enabled, the address of a bpf_tramp_image
struct on the stack is passed during the size calculation pass and
an address on the heap is passed during code generation. This may
cause a heap buffer overflow if the heap address is tagged because
emit_a64_mov_i64() will emit longer code than it did during the size
calculation pass. The same problem could occur without tag-based
KASAN if one of the 16-bit words of the stack address happened to
be all-ones during the size calculation pass. Fix the problem by
assuming the worst case (4 instructions) when calculating the size
of the bpf_tramp_image address emission.

Fixes: 19d3c179a377 ("bpf, arm64: Fix trampoline for BPF_TRAMP_F_CALL_ORIG")
Signed-off-by: Peter Collingbourne <pcc@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Xu Kuohai <xukuohai@huawei.com>
Link: https://linux-review.googlesource.com/id/I1496f2bc24fba7a1d492e16e2b94cf43714f2d3c
Link: https://lore.kernel.org/bpf/20241018221644.3240898-1-pcc@google.com
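
The sizing hazard described above can be pictured with a small stand-alone
sketch (a simplified model, not the kernel's emit_a64_mov_i64(); the
addresses below are made up): the number of instructions needed to
materialize a 64-bit immediate depends on its 16-bit chunks, so a tagged
heap address can need more instructions than the untagged stack address
measured during the size pass, which is why the fix reserves the worst
case of 4 instructions.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative model only, not the kernel's emit_a64_mov_i64(): assume
 * 16-bit chunks that are all-zeros or all-ones can be folded into the
 * initial MOVZ/MOVN, so the instruction count depends on the value.
 */
static int insns_for_imm64(uint64_t imm)
{
	int count = 1;                          /* initial MOVZ/MOVN */

	for (int shift = 16; shift < 64; shift += 16) {
		uint16_t chunk = (uint16_t)(imm >> shift);

		if (chunk != 0x0000 && chunk != 0xffff)
			count++;                /* one MOVK per remaining chunk */
	}
	return count;
}

int main(void)
{
	/* Hypothetical addresses: a stack address seen in the size pass vs. a
	 * tag-based-KASAN heap address seen during code generation. */
	uint64_t stack_addr  = 0xffffffc012345678ULL;   /* top chunk all-ones */
	uint64_t tagged_heap = 0xf2ffffc012345678ULL;   /* tag in bits 56..63 */

	printf("size pass: %d insns, code gen: %d insns, reserved now: 4\n",
	       insns_for_imm64(stack_addr), insns_for_imm64(tagged_heap));
	return 0;
}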



Revision tags: v6.12-rc3, v6.12-rc2
# c8d430db 06-Oct-2024 Paolo Bonzini <pbonzini@redhat.com>

Merge tag 'kvmarm-fixes-6.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 fixes for 6.12, take #1

- Fix pKVM error path on init, making sure we do not change critical
system registers as we're about to fail

- Make sure that the host's vector length is capped at a value
common to all CPUs

- Fix kvm_has_feat*() handling of "negative" features, as the current
code is pretty broken

- Promote Joey to the status of official reviewer, while James steps
down -- hopefully only temporarily



# 0c436dfe 02-Oct-2024 Takashi Iwai <tiwai@suse.de>

Merge tag 'asoc-fix-v6.12-rc1' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus

ASoC: Fixes for v6.12

A bunch of fixes here that came in during the merge window and the first
week of release, plus some new quirks and device IDs. There's nothing
major here; it's a bit bigger than it might have been because no fixes
were sent during the merge window due to your vacation.



# 2cd86f02 01-Oct-2024 Maarten Lankhorst <maarten.lankhorst@linux.intel.com>

Merge remote-tracking branch 'drm/drm-fixes' into drm-misc-fixes

Required for a panthor fix that broke when
FOP_UNSIGNED_OFFSET was added in place of FMODE_UNSIGNED_OFFSET.

Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>



Revision tags: v6.12-rc1
# 3a39d672 27-Sep-2024 Paolo Abeni <pabeni@redhat.com>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

No conflicts and no adjacent changes.

Signed-off-by: Paolo Abeni <pabeni@redhat.com>


# 36ec807b 20-Sep-2024 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge branch 'next' into for-linus

Prepare input updates for 6.12 merge window.


Revision tags: v6.11, v6.11-rc7
# f057b572 06-Sep-2024 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge branch 'ib/6.11-rc6-matrix-keypad-spitz' into next

Bring in changes removing support for platform data from matrix-keypad
driver.


Revision tags: v6.11-rc6, v6.11-rc5, v6.11-rc4, v6.11-rc3, v6.11-rc2, v6.11-rc1
# 3daee2e4 16-Jul-2024 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge tag 'v6.10' into next

Sync up with mainline to bring in device_for_each_child_node_scoped()
and other newer APIs.


# 66e72a01 29-Jul-2024 Jerome Brunet <jbrunet@baylibre.com>

Merge tag 'v6.11-rc1' into clk-meson-next

Linux 6.11-rc1


# ee057c8c 14-Aug-2024 Steven Rostedt <rostedt@goodmis.org>

Merge tag 'v6.11-rc3' into trace/ring-buffer/core

The "reserve_mem" kernel command line parameter has been pulled into
v6.11. Merge the latest -rc3 to allow the persistent ring buffer memory to
be able to be mapped at the address specified by the "reserve_mem" command
line parameter.

Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>



# 440b6523 21-Sep-2024 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'bpf-next-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next

Pull bpf updates from Alexei Starovoitov:

- Introduce '__attribute__((bpf_fastcall))' for helpers and kfuncs with
corresponding support in LLVM.

It is similar to the existing 'no_caller_saved_registers' attribute in
GCC/LLVM, with a provision for backward compatibility. It allows
compilers to generate more efficient BPF code, assuming the verifier or
JITs will inline or partially inline a helper/kfunc with such an
attribute. bpf_cast_to_kern_ctx, bpf_rdonly_cast, and
bpf_get_smp_processor_id are the first set of such helpers.

- Harden and extend ELF build ID parsing logic.

When called from a sleepable context, the relevant parts of the ELF file
will be read to find and fetch .note.gnu.build-id information. Also
harden the logic to avoid TOCTOU, overflow, and out-of-bounds problems.

- Improvements and fixes for sched-ext:
- Allow passing BPF iterators as kfunc arguments
- Make the pointer returned from iter_next method trusted
- Fix x86 JIT convergence issue due to growing/shrinking conditional
jumps in variable length encoding

- BPF_LSM related:
- Introduce few VFS kfuncs and consolidate them in
fs/bpf_fs_kfuncs.c
- Enforce correct range of return values from certain LSM hooks
- Disallow attaching to other LSM hooks

- Prerequisite work for upcoming Qdisc in BPF:
- Allow kptrs in program provided structs
- Support for gen_epilogue in verifier_ops

- Important fixes:
- Fix uprobe multi pid filter check
- Fix bpf_strtol and bpf_strtoul helpers
- Track equal scalars history on per-instruction level
- Fix tailcall hierarchy on x86 and arm64
- Fix signed division overflow to prevent INT_MIN/-1 trap on x86
- Fix get kernel stack in BPF progs attached to tracepoint:syscall

- Selftests:
- Add uprobe bench/stress tool
- Generate file dependencies to drastically improve re-build time
- Match JIT-ed and BPF asm with __xlated/__jited keywords
- Convert older tests to test_progs framework
- Add support for RISC-V
- Few fixes when BPF programs are compiled with GCC-BPF backend
(support for GCC-BPF in BPF CI is ongoing in parallel)
- Add traffic monitor
- Enable cross compile and musl libc

* tag 'bpf-next-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next: (260 commits)
btf: require pahole 1.21+ for DEBUG_INFO_BTF with default DWARF version
btf: move pahole check in scripts/link-vmlinux.sh to lib/Kconfig.debug
btf: remove redundant CONFIG_BPF test in scripts/link-vmlinux.sh
bpf: Call the missed kfree() when there is no special field in btf
bpf: Call the missed btf_record_free() when map creation fails
selftests/bpf: Add a test case to write mtu result into .rodata
selftests/bpf: Add a test case to write strtol result into .rodata
selftests/bpf: Rename ARG_PTR_TO_LONG test description
selftests/bpf: Fix ARG_PTR_TO_LONG {half-,}uninitialized test
bpf: Zero former ARG_PTR_TO_{LONG,INT} args in case of error
bpf: Improve check_raw_mode_ok test for MEM_UNINIT-tagged types
bpf: Fix helper writes to read-only maps
bpf: Remove truncation test in bpf_strtol and bpf_strtoul helpers
bpf: Fix bpf_strtol and bpf_strtoul helpers for 32bit
selftests/bpf: Add tests for sdiv/smod overflow cases
bpf: Fix a sdiv overflow issue
libbpf: Add bpf_object__token_fd accessor
docs/bpf: Add missing BPF program types to docs
docs/bpf: Add constant values for linkages
bpf: Use fake pt_regs when doing bpf syscall tracepoint tracing
...



# 649e980d 04-Sep-2024 Tejun Heo <tj@kernel.org>

Merge branch 'bpf/master' into for-6.12

Pull bpf/master to receive baebe9aaba1e ("bpf: allow passing struct
bpf_iter_<type> as kfunc arguments") and related changes in preparation for
the DSQ iterator patchset.

Signed-off-by: Tejun Heo <tj@kernel.org>



# ddbe9ec5 03-Sep-2024 Xu Kuohai <xukuohai@huawei.com>

bpf, arm64: Jit BPF_CALL to direct call when possible

Currently, BPF_CALL is always jited to indirect call. When target is
within the range of direct call, BPF_CALL can be jited to direct call.

For example, the following BPF_CALL

call __htab_map_lookup_elem

is always jited to indirect call:

mov x10, #0xffffffffffff18f4
movk x10, #0x821, lsl #16
movk x10, #0x8000, lsl #32
blr x10

When the address of target __htab_map_lookup_elem is within the range of
direct call, the BPF_CALL can be jited to:

bl 0xfffffffffd33bc98

This patch does such jit optimization by emitting arm64 direct calls for
BPF_CALL when possible, indirect calls otherwise.

Without this patch, the jit works as follows.

1. First pass
A. Determine jited position and size for each bpf instruction.
B. Compute the jited image size.

2. Allocate jited image with size computed in step 1.

3. Second pass
A. Adjust jump offset for jump instructions
B. Write the final image.

This works because, for a given bpf prog, regardless of where the jited
image is allocated, the jited result for each instruction is fixed. The
second pass differs from the first only in adjusting the jump offsets,
like changing "jmp imm1" to "jmp imm2", while the position and size of
the "jmp" instruction remain unchanged.

Now consider whether to jit BPF_CALL to an arm64 direct or indirect call
instruction. The choice depends solely on the jump offset: direct call
if the jump offset is within 128MB, indirect call otherwise.

For a given BPF_CALL, the target address is known, so the jump offset is
decided by the jited address of the BPF_CALL instruction. In other words,
for a given bpf prog, the jited result for each BPF_CALL is determined
by its jited address.

The jited address for a BPF_CALL is the jited image address plus the
total jited size of all preceding instructions. For a given bpf prog,
there are clearly no BPF_CALL instructions before the first BPF_CALL
instruction. Since the jited result for all other instructions other
than BPF_CALL are fixed, the total jited size preceding the first
BPF_CALL is also fixed. Therefore, once the jited image is allocated,
the jited address for the first BPF_CALL is fixed.

Now that the jited result for the first BPF_CALL is fixed, the jited
results for all instructions preceding the second BPF_CALL are fixed.
So the jited address and result for the second BPF_CALL are also fixed.

Similarly, we can conclude that the jited addresses and results for all
subsequent BPF_CALL instructions are fixed.

This means that, for a given bpf prog, once the jited image is allocated,
the jited address and result for all instructions, including all BPF_CALL
instructions, are fixed.

Based on the observation, with this patch, the jit works as follows.

1. First pass
Estimate the maximum jited image size. In this pass, all BPF_CALLs
are jited to arm64 indirect calls since the jump offsets are unknown
because the jited image is not allocated.

2. Allocate jited image with size estimated in step 1.

3. Second pass
A. Determine the jited result for each BPF_CALL.
B. Determine jited address and size for each bpf instruction.

4. Third pass
A. Adjust jump offset for jump instructions.
B. Write the final image.

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Reviewed-by: Puranjay Mohan <puranjay@kernel.org>
Link: https://lore.kernel.org/r/20240903094407.601107-1-xukuohai@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
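
A rough sketch of the choice this commit describes (illustrative only;
the helper names and addresses below are made up, not the JIT's code):
an arm64 BL reaches targets within a signed 28-bit byte displacement
(+/-128MB), so once the BPF_CALL's jited address is known, the JIT can
pick a direct BL when the displacement fits and fall back to MOV/MOVK
plus BLR otherwise.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helpers for illustration; not the JIT's actual code. */
static bool bl_offset_in_range(uint64_t pc, uint64_t target)
{
	int64_t off = (int64_t)(target - pc);

	/* BL encodes a signed 26-bit word offset, i.e. +/-128MB in bytes. */
	return off >= -(1LL << 27) && off < (1LL << 27) && (off & 3) == 0;
}

static void emit_call(uint64_t pc, uint64_t target)
{
	if (bl_offset_in_range(pc, target))
		printf("bl  #%+lld              ; direct call\n",
		       (long long)(target - pc));
	else
		printf("mov/movk x10, <target>  ; load 64-bit address\n"
		       "blr x10                 ; indirect call\n");
}

int main(void)
{
	uint64_t call_site = 0xffffffc010000000ULL;     /* made-up jited address */

	emit_call(call_site, call_site + 0x00100000);   /* within +/-128MB -> bl  */
	emit_call(call_site, call_site + 0x10000000);   /* out of range    -> blr */
	return 0;
}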



# 4961d8f4 28-Aug-2024 Alexei Starovoitov <ast@kernel.org>

Merge branch 'bpf-arm64-simplify-jited-prologue-epilogue'

Xu Kuohai says:

====================
bpf, arm64: Simplify jited prologue/epilogue

From: Xu Kuohai <xukuohai@huawei.com>

The arm64 jit blindly saves/restores all callee-saved registers, making
the jited result look a bit too complicated. For example, for an empty
prog, the jited result is:

0: bti jc
4: mov x9, lr
8: nop
c: paciasp
10: stp fp, lr, [sp, #-16]!
14: mov fp, sp
18: stp x19, x20, [sp, #-16]!
1c: stp x21, x22, [sp, #-16]!
20: stp x26, x25, [sp, #-16]!
24: mov x26, #0
28: stp x26, x25, [sp, #-16]!
2c: mov x26, sp
30: stp x27, x28, [sp, #-16]!
34: mov x25, sp
38: bti j // tailcall target
3c: sub sp, sp, #0
40: mov x7, #0
44: add sp, sp, #0
48: ldp x27, x28, [sp], #16
4c: ldp x26, x25, [sp], #16
50: ldp x26, x25, [sp], #16
54: ldp x21, x22, [sp], #16
58: ldp x19, x20, [sp], #16
5c: ldp fp, lr, [sp], #16
60: mov x0, x7
64: autiasp
68: ret

Clearly, there is no need to save/restore unused callee-saved registers.
This patch makes that change, so the jited image only saves/restores
the callee-saved registers it actually uses.

Now the jited result of empty prog is:

0: bti jc
4: mov x9, lr
8: nop
c: paciasp
10: stp fp, lr, [sp, #-16]!
14: mov fp, sp
18: stp xzr, x26, [sp, #-16]!
1c: mov x26, sp
20: bti j // tailcall target
24: mov x7, #0
28: ldp xzr, x26, [sp], #16
2c: ldp fp, lr, [sp], #16
30: mov x0, x7
34: autiasp
38: ret
====================

Acked-by: Puranjay Mohan <puranjay@kernel.org>
Link: https://lore.kernel.org/r/20240826071624.350108-1-xukuohai@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>



# 5d4fa9ec 26-Aug-2024 Xu Kuohai <xukuohai@huawei.com>

bpf, arm64: Avoid blindly saving/restoring all callee-saved registers

The arm64 jit blindly saves/restores all callee-saved registers, making
the jited result look a bit too complicated. For example, for an empty
prog, the jited result is:

0: bti jc
4: mov x9, lr
8: nop
c: paciasp
10: stp fp, lr, [sp, #-16]!
14: mov fp, sp
18: stp x19, x20, [sp, #-16]!
1c: stp x21, x22, [sp, #-16]!
20: stp x26, x25, [sp, #-16]!
24: mov x26, #0
28: stp x26, x25, [sp, #-16]!
2c: mov x26, sp
30: stp x27, x28, [sp, #-16]!
34: mov x25, sp
38: bti j // tailcall target
3c: sub sp, sp, #0
40: mov x7, #0
44: add sp, sp, #0
48: ldp x27, x28, [sp], #16
4c: ldp x26, x25, [sp], #16
50: ldp x26, x25, [sp], #16
54: ldp x21, x22, [sp], #16
58: ldp x19, x20, [sp], #16
5c: ldp fp, lr, [sp], #16
60: mov x0, x7
64: autiasp
68: ret

Clearly, there is no need to save/restore unused callee-saved registers.
This patch makes that change, so the jited image only saves/restores
the callee-saved registers it actually uses.

Now the jited result of empty prog is:

0: bti jc
4: mov x9, lr
8: nop
c: paciasp
10: stp fp, lr, [sp, #-16]!
14: mov fp, sp
18: stp xzr, x26, [sp, #-16]!
1c: mov x26, sp
20: bti j // tailcall target
24: mov x7, #0
28: ldp xzr, x26, [sp], #16
2c: ldp fp, lr, [sp], #16
30: mov x0, x7
34: autiasp
38: ret

Since a bpf prog saves/restores its own callee-saved registers as needed,
to make tailcalls work correctly, the caller needs to restore its saved
registers before the tailcall, and the callee needs to save its
callee-saved registers after the tailcall. These extra restore/save
instructions increase performance overhead.

[1] provides two benchmarks for tailcall scenarios. Below are the perf
numbers measured in an arm64 KVM guest. The results indicate that the
performance difference before and after the patch in typical tailcall
scenarios is negligible.

- Before:

Performance counter stats for './test_progs -t tailcalls' (5 runs):

4313.43 msec task-clock # 0.874 CPUs utilized ( +- 0.16% )
574 context-switches # 133.073 /sec ( +- 1.14% )
0 cpu-migrations # 0.000 /sec
538 page-faults # 124.727 /sec ( +- 0.57% )
10697772784 cycles # 2.480 GHz ( +- 0.22% ) (61.19%)
25511241955 instructions # 2.38 insn per cycle ( +- 0.08% ) (66.70%)
5108910557 branches # 1.184 G/sec ( +- 0.08% ) (72.38%)
2800459 branch-misses # 0.05% of all branches ( +- 0.51% ) (72.36%)
TopDownL1 # 0.60 retiring ( +- 0.09% ) (66.84%)
# 0.21 frontend_bound ( +- 0.15% ) (61.31%)
# 0.12 bad_speculation ( +- 0.08% ) (50.11%)
# 0.07 backend_bound ( +- 0.16% ) (33.30%)
8274201819 L1-dcache-loads # 1.918 G/sec ( +- 0.18% ) (33.15%)
468268 L1-dcache-load-misses # 0.01% of all L1-dcache accesses ( +- 4.69% ) (33.16%)
385383 LLC-loads # 89.345 K/sec ( +- 5.22% ) (33.16%)
38296 LLC-load-misses # 9.94% of all LL-cache accesses ( +- 42.52% ) (38.69%)
6886576501 L1-icache-loads # 1.597 G/sec ( +- 0.35% ) (38.69%)
1848585 L1-icache-load-misses # 0.03% of all L1-icache accesses ( +- 4.52% ) (44.23%)
9043645883 dTLB-loads # 2.097 G/sec ( +- 0.10% ) (44.33%)
416672 dTLB-load-misses # 0.00% of all dTLB cache accesses ( +- 5.15% ) (49.89%)
6925626111 iTLB-loads # 1.606 G/sec ( +- 0.35% ) (55.46%)
66220 iTLB-load-misses # 0.00% of all iTLB cache accesses ( +- 1.88% ) (55.50%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses

4.9372 +- 0.0526 seconds time elapsed ( +- 1.07% )

Performance counter stats for './test_progs -t flow_dissector' (5 runs):

10924.50 msec task-clock # 0.945 CPUs utilized ( +- 0.08% )
603 context-switches # 55.197 /sec ( +- 1.13% )
0 cpu-migrations # 0.000 /sec
566 page-faults # 51.810 /sec ( +- 0.42% )
27381270695 cycles # 2.506 GHz ( +- 0.18% ) (60.46%)
56996583922 instructions # 2.08 insn per cycle ( +- 0.21% ) (66.11%)
10321647567 branches # 944.816 M/sec ( +- 0.17% ) (71.79%)
3347735 branch-misses # 0.03% of all branches ( +- 3.72% ) (72.15%)
TopDownL1 # 0.52 retiring ( +- 0.13% ) (66.74%)
# 0.27 frontend_bound ( +- 0.14% ) (61.27%)
# 0.14 bad_speculation ( +- 0.19% ) (50.36%)
# 0.07 backend_bound ( +- 0.42% ) (33.89%)
18740797617 L1-dcache-loads # 1.715 G/sec ( +- 0.43% ) (33.71%)
13715669 L1-dcache-load-misses # 0.07% of all L1-dcache accesses ( +- 32.85% ) (33.34%)
4087551 LLC-loads # 374.164 K/sec ( +- 29.53% ) (33.26%)
267906 LLC-load-misses # 6.55% of all LL-cache accesses ( +- 23.90% ) (38.76%)
15811864229 L1-icache-loads # 1.447 G/sec ( +- 0.12% ) (38.73%)
2976833 L1-icache-load-misses # 0.02% of all L1-icache accesses ( +- 9.73% ) (44.22%)
20138907471 dTLB-loads # 1.843 G/sec ( +- 0.18% ) (44.15%)
732850 dTLB-load-misses # 0.00% of all dTLB cache accesses ( +- 11.18% ) (49.64%)
15895726702 iTLB-loads # 1.455 G/sec ( +- 0.15% ) (55.13%)
152075 iTLB-load-misses # 0.00% of all iTLB cache accesses ( +- 4.71% ) (54.98%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses

11.5613 +- 0.0317 seconds time elapsed ( +- 0.27% )

- After:

Performance counter stats for './test_progs -t tailcalls' (5 runs):

4278.78 msec task-clock # 0.871 CPUs utilized ( +- 0.15% )
569 context-switches # 132.982 /sec ( +- 0.58% )
0 cpu-migrations # 0.000 /sec
539 page-faults # 125.970 /sec ( +- 0.43% )
10588986432 cycles # 2.475 GHz ( +- 0.20% ) (60.91%)
25303825043 instructions # 2.39 insn per cycle ( +- 0.08% ) (66.48%)
5110756256 branches # 1.194 G/sec ( +- 0.07% ) (72.03%)
2719569 branch-misses # 0.05% of all branches ( +- 2.42% ) (72.03%)
TopDownL1 # 0.60 retiring ( +- 0.22% ) (66.31%)
# 0.22 frontend_bound ( +- 0.21% ) (60.83%)
# 0.12 bad_speculation ( +- 0.26% ) (50.25%)
# 0.06 backend_bound ( +- 0.17% ) (33.52%)
8163648527 L1-dcache-loads # 1.908 G/sec ( +- 0.33% ) (33.52%)
694979 L1-dcache-load-misses # 0.01% of all L1-dcache accesses ( +- 30.53% ) (33.52%)
1902347 LLC-loads # 444.600 K/sec ( +- 48.84% ) (33.69%)
96677 LLC-load-misses # 5.08% of all LL-cache accesses ( +- 43.48% ) (39.30%)
6863517589 L1-icache-loads # 1.604 G/sec ( +- 0.37% ) (39.17%)
1871519 L1-icache-load-misses # 0.03% of all L1-icache accesses ( +- 6.78% ) (44.56%)
8927782813 dTLB-loads # 2.087 G/sec ( +- 0.14% ) (44.37%)
438237 dTLB-load-misses # 0.00% of all dTLB cache accesses ( +- 6.00% ) (49.75%)
6886906831 iTLB-loads # 1.610 G/sec ( +- 0.36% ) (55.08%)
67568 iTLB-load-misses # 0.00% of all iTLB cache accesses ( +- 3.27% ) (54.86%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses

4.9114 +- 0.0309 seconds time elapsed ( +- 0.63% )

Performance counter stats for './test_progs -t flow_dissector' (5 runs):

10948.40 msec task-clock # 0.942 CPUs utilized ( +- 0.05% )
615 context-switches # 56.173 /sec ( +- 1.65% )
1 cpu-migrations # 0.091 /sec ( +- 31.62% )
567 page-faults # 51.788 /sec ( +- 0.44% )
27334194328 cycles # 2.497 GHz ( +- 0.08% ) (61.05%)
56656528828 instructions # 2.07 insn per cycle ( +- 0.08% ) (66.67%)
10270389422 branches # 938.072 M/sec ( +- 0.10% ) (72.21%)
3453837 branch-misses # 0.03% of all branches ( +- 3.75% ) (72.27%)
TopDownL1 # 0.52 retiring ( +- 0.16% ) (66.55%)
# 0.27 frontend_bound ( +- 0.09% ) (60.91%)
# 0.14 bad_speculation ( +- 0.08% ) (49.85%)
# 0.07 backend_bound ( +- 0.16% ) (33.33%)
18982866028 L1-dcache-loads # 1.734 G/sec ( +- 0.24% ) (33.34%)
8802454 L1-dcache-load-misses # 0.05% of all L1-dcache accesses ( +- 52.30% ) (33.31%)
2612962 LLC-loads # 238.661 K/sec ( +- 29.78% ) (33.45%)
264107 LLC-load-misses # 10.11% of all LL-cache accesses ( +- 18.34% ) (39.07%)
15793205997 L1-icache-loads # 1.443 G/sec ( +- 0.15% ) (39.09%)
3930802 L1-icache-load-misses # 0.02% of all L1-icache accesses ( +- 3.72% ) (44.66%)
20097828496 dTLB-loads # 1.836 G/sec ( +- 0.09% ) (44.68%)
961757 dTLB-load-misses # 0.00% of all dTLB cache accesses ( +- 3.32% ) (50.15%)
15838728506 iTLB-loads # 1.447 G/sec ( +- 0.09% ) (55.62%)
167652 iTLB-load-misses # 0.00% of all iTLB cache accesses ( +- 1.28% ) (55.52%)
<not supported> L1-dcache-prefetches
<not supported> L1-dcache-prefetch-misses

11.6173 +- 0.0268 seconds time elapsed ( +- 0.23% )

[1] https://lore.kernel.org/bpf/20200724123644.5096-1-maciej.fijalkowski@intel.com/

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Link: https://lore.kernel.org/r/20240826071624.350108-3-xukuohai@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
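
A minimal stand-alone sketch of the approach (simplified; the jit_ctx
and emit helpers below are made up, not the kernel's): note which
callee-saved registers the prog actually touches during the scan pass,
then emit stp/ldp pairs only for those in the prologue and epilogue.

#include <stdbool.h>
#include <stdio.h>

#define NR_CALLEE_SAVED 10	/* x19..x28 on arm64 */

struct jit_ctx {
	bool used[NR_CALLEE_SAVED];	/* set while scanning the prog */
};

/* Mark a callee-saved register (x19 + idx) as used by the prog. */
static void mark_used(struct jit_ctx *ctx, int idx)
{
	ctx->used[idx] = true;
}

/* Emit save instructions only for the registers actually used. */
static void emit_prologue_saves(const struct jit_ctx *ctx)
{
	for (int i = 0; i < NR_CALLEE_SAVED; i += 2) {
		if (ctx->used[i] || ctx->used[i + 1])
			printf("stp x%d, x%d, [sp, #-16]!\n",
			       19 + i, 19 + i + 1);
	}
}

int main(void)
{
	struct jit_ctx ctx = { 0 };

	/* Suppose the prog only touches x19 and x20. */
	mark_used(&ctx, 0);
	mark_used(&ctx, 1);
	emit_prologue_saves(&ctx);	/* prints a single stp for x19/x20 */
	return 0;
}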



# bd737fcb 26-Aug-2024 Xu Kuohai <xukuohai@huawei.com>

bpf, arm64: Get rid of fpb

A bpf prog accesses the stack using BPF_FP as the base address and a
negative immediate number as the offset. But arm64 ldr/str instructions
only support a non-negative immediate offset. To simplify the jited
result, commit 5b3d19b9bd40 ("bpf, arm64: Adjust the offset of
str/ldr(immediate) to positive number") introduced FPB to represent the
lowest stack address that the bpf prog being jited may access, and with
this address as the baseline, it converts BPF_FP plus a negative
immediate offset to FPB plus a non-negative immediate offset.

For a given bpf prog, the jited stack space is fixed, with A64_SP as the
lowest address and BPF_FP as the highest address. Thus we can get rid of
FPB and convert BPF_FP plus a negative immediate offset to A64_SP plus a
non-negative immediate offset.

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Link: https://lore.kernel.org/r/20240826071624.350108-2-xukuohai@huaweicloud.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
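
The offset rewrite can be shown with a tiny sketch (the frame size below
is hypothetical; this is not the kernel's code): because the jited frame
spans [A64_SP, BPF_FP], an access at BPF_FP plus a negative offset is the
same slot as A64_SP plus (frame_size + offset), which is non-negative and
therefore directly encodable by ldr/str.

#include <stdio.h>

/* BPF_FP - |off| and A64_SP + (frame_size - |off|) name the same slot
 * when the jited frame spans [A64_SP, BPF_FP]. The frame size is made up. */
static int sp_relative_offset(int frame_size, int fp_offset)
{
	return frame_size + fp_offset;	/* non-negative for in-frame accesses */
}

int main(void)
{
	int frame_size = 64;		/* assumed jited stack frame size */

	/* A load at BPF_FP - 8 becomes a load at SP + 56. */
	printf("ldr x7, [sp, #%d]\n", sp_relative_offset(frame_size, -8));
	return 0;
}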



# c8faf11c 30-Jul-2024 Tejun Heo <tj@kernel.org>

Merge tag 'v6.11-rc1' into for-6.12

Linux 6.11-rc1


# 81a0b954 20-Jul-2024 Alexei Starovoitov <ast@kernel.org>

Merge branch 'bpf-fix-tailcall-hierarchy'

Leon Hwang says:

====================
bpf: Fix tailcall hierarchy

This patchset fixes a tailcall hierarchy issue.

The issue is confirmed in the discussions of
"bpf, x64: Fix tailcall infinite loop" [0].

The issue has been resolved on both x86_64 and arm64 [1].

I provide a long commit message in the "bpf, x64: Fix tailcall hierarchy"
patch to describe how the issue happens and how this patchset resolves the
issue in detail.

How does this patchset resolve the issue?

In short, it stores tail_call_cnt on the stack of main prog, and propagates
tail_call_cnt_ptr to its subprogs.

First, at the prologue of main prog, it initializes tail_call_cnt and
prepares tail_call_cnt_ptr. And at the prologue of subprog, it reuses
the tail_call_cnt_ptr from caller.

Then, when a tailcall happens, it increments tail_call_cnt by its pointer.

v5 -> v6:
* Address comments from Eduard:
* Add JITed dumping along annotating comments
* Rewrite two selftests with RUN_TESTS macro.

v4 -> v5:
* Solution changes from tailcall run ctx to tail_call_cnt and its pointer.
It's because v4 solution is unable to handle the case that there is no
tailcall in subprog but there is tailcall in EXT prog which attaches to
the subprog.

v3 -> v4:
* Solution changes from per-task tail_call_cnt to tailcall run ctx.
As for per-cpu/per-task solution, there is a case it is unable to handle [2].

v2 -> v3:
* Solution changes from percpu tail_call_cnt to tail_call_cnt at task_struct.

v1 -> v2:
* Solution changes from extra run-time call insn to percpu tail_call_cnt.
* Address comments from Alexei:
* Use percpu tail_call_cnt.
* Use asm to make sure no callee saved registers are touched.

RFC v2 -> v1:
* Solution changes from propagating tail_call_cnt with its pointer to extra
run-time call insn.
* Address comments from Maciej:
* Replace all memcpy(prog, x86_nops[5], X86_PATCH_SIZE) with
emit_nops(&prog, X86_PATCH_SIZE)

RFC v1 -> RFC v2:
* Address comments from Stanislav:
* Separate moving emit_nops() as first patch.

Links:
[0] https://lore.kernel.org/bpf/6203dd01-789d-f02c-5293-def4c1b18aef@gmail.com/
[1] https://github.com/kernel-patches/bpf/pull/7350/checks
[2] https://lore.kernel.org/bpf/CAADnVQK1qF+uBjwom2s2W-yEmgd_3rGi5Nr+KiV3cW0T+UPPfA@mail.gmail.com/
====================

Link: https://lore.kernel.org/r/20240714123902.32305-1-hffilwlqm@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
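
The counting scheme from the cover letter can be modelled in a few lines
of plain C (conceptual only, not jited code; the limit value mirrors the
kernel's MAX_TAIL_CALL_CNT): the counter lives in the main prog's frame
and subprogs receive a pointer to it, so every tail call increments the
same counter regardless of which frame it happens in.

#include <stdbool.h>
#include <stdio.h>

#define MAX_TAIL_CALL_CNT 33	/* kernel's tail call limit */

/* Conceptual model of what the jited prologue/tailcall code arranges. */
static bool do_tail_call(int *tail_call_cnt_ptr)
{
	if (*tail_call_cnt_ptr >= MAX_TAIL_CALL_CNT)
		return false;		/* limit reached, fall through */
	(*tail_call_cnt_ptr)++;		/* bump the shared counter */
	return true;			/* jump to the target prog */
}

static void subprog(int *tail_call_cnt_ptr)
{
	/* A subprog reuses the caller's counter pointer, not its own copy. */
	do_tail_call(tail_call_cnt_ptr);
}

int main(void)
{
	int tail_call_cnt = 0;		/* lives in the main prog's frame */

	subprog(&tail_call_cnt);	/* propagate the pointer */
	do_tail_call(&tail_call_cnt);
	printf("tail calls taken: %d\n", tail_call_cnt);
	return 0;
}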



Revision tags: v6.10
# 66ff4d61 14-Jul-2024 Leon Hwang <hffilwlqm@gmail.com>

bpf, arm64: Fix tailcall hierarchy

This patch fixes a tailcall issue caused by abusing the tailcall-in-bpf2bpf
feature on arm64, in the same way as "bpf, x64: Fix tailcall hierarchy".

On arm64, when a tail call happens, it uses tail_call_cnt_ptr to
increment tail_call_cnt, too.

At the prologue of main prog, it has to initialize tail_call_cnt and
prepare tail_call_cnt_ptr.

At the prologue of subprog, it pushes x26 register twice, and does not
initialize tail_call_cnt.

At the epilogue, it pops x26 twice, no matter whether it is main prog or
subprog.

Fixes: d4609a5d8c70 ("bpf, arm64: Keep tail call count across bpf2bpf calls")
Acked-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Leon Hwang <hffilwlqm@gmail.com>
Link: https://lore.kernel.org/r/20240714123902.32305-3-hffilwlqm@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>



# ed7171ff 16-Aug-2024 Lucas De Marchi <lucas.demarchi@intel.com>

Merge drm/drm-next into drm-xe-next

Get drm-xe-next on v6.11-rc2 and synchronized with drm-intel-next for
the display side. This resolves the current conflict for the
enable_display module parameter and allows further pending refactors.

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>



# 5c61f598 12-Aug-2024 Thomas Zimmermann <tzimmermann@suse.de>

Merge drm/drm-next into drm-misc-next

Get drm-misc-next to the state of v6.11-rc2.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>


# 3663e2c4 01-Aug-2024 Jani Nikula <jani.nikula@intel.com>

Merge drm/drm-next into drm-intel-next

Sync with v6.11-rc1 in general, and specifically get the new
BACKLIGHT_POWER_ constants for power states.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>


# 4436e6da 02-Aug-2024 Thomas Gleixner <tglx@linutronix.de>

Merge branch 'linus' into x86/mm

Bring x86 and selftests up to date


# a1ff5a7d 30-Jul-2024 Maxime Ripard <mripard@kernel.org>

Merge drm/drm-fixes into drm-misc-fixes

Let's start the new drm-misc-fixes cycle by bringing in 6.11-rc1.

Signed-off-by: Maxime Ripard <mripard@kernel.org>

