History log of /linux/kernel/bpf/helpers.c (Results 26 – 50 of 1067)
Revision | Date | Author | Comments
# 65ab5ac4 23-Aug-2024 Jordan Rome <linux@jordanrome.com>

bpf: Add bpf_copy_from_user_str kfunc

This adds a kfunc wrapper around strncpy_from_user,
which can be called from sleepable BPF programs.

This matches the non-sleepable 'bpf_probe_read_user_str'
helper except it includes an additional 'flags'
param, which allows consumers to clear the entire
destination buffer on success or failure.

Signed-off-by: Jordan Rome <linux@jordanrome.com>
Link: https://lore.kernel.org/r/20240823195101.3621028-1-linux@jordanrome.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
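
For illustration, here is a minimal sketch of how a sleepable program
might call the new kfunc. The prototype and the BPF_F_PAD_ZEROS flag
name below are assumptions based on the description above, not quoted
from the patch:

/* In a sleepable BPF program (e.g. an lsm.s/ hook); 'uptr' is a
 * user-space pointer obtained earlier.
 */
char buf[64];
int n;

/* Assumed prototype:
 * int bpf_copy_from_user_str(void *dst, u32 dst__sz,
 *                            const void __user *ptr, u64 flags);
 */
n = bpf_copy_from_user_str(buf, sizeof(buf), uptr, BPF_F_PAD_ZEROS);
if (n < 0)
        return 0;       /* fault: with the flag set, buf was still cleared */
/* on success buf holds a NUL-terminated copy of the user string */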


# 5cd0aea0 23-Aug-2024 Alexei Starovoitov <ast@kernel.org>

Merge branch 'support-bpf_kptr_xchg-into-local-kptr'

Amery Hung says:

====================
Support bpf_kptr_xchg into local kptr

This revision adds substantial changes to patch 2 to support structures
with kptr as the only special btf type. The test is split into
local_kptr_stash and task_kfunc_success to remove dependencies on
bpf_testmod that would break veristat results.

This series allows stashing kptr into local kptr. Currently, kptrs are
only allowed to be stashed into map values with bpf_kptr_xchg(). A
motivating use case of this series is to enable adding a referenced kptr
to a bpf_rbtree or bpf_list by using the allocated object as both the
graph node and the storage for the referenced kptr. For example, a bpf
qdisc [0] enqueuing a referenced kptr to a struct sk_buff* onto a
bpf_list serving as a FIFO:

struct skb_node {
        struct sk_buff __kptr *skb;
        struct bpf_list_node node;
};

private(A) struct bpf_spin_lock fifo_lock;
private(A) struct bpf_list_head fifo __contains(skb_node, node);

/* In Qdisc_ops.enqueue */
struct skb_node *skbn;

skbn = bpf_obj_new(typeof(*skbn));
if (!skbn)
        goto drop;

/* skb is a referenced kptr to struct sk_buff acquired earlier
 * but not shown in this code snippet.
 */
skb = bpf_kptr_xchg(&skbn->skb, skb);
if (skb)
        /* should not happen; do something below releasing skb to
         * satisfy the verifier */
        ...

bpf_spin_lock(&fifo_lock);
bpf_list_push_back(&fifo, &skbn->node);
bpf_spin_unlock(&fifo_lock);

The implementation first searches for BPF_KPTR fields when generating
program BTF. Then, we teach the verifier that the destination argument of
bpf_kptr_xchg() can be a local kptr, and use the btf_record in program
BTF to check against the source argument.

This series was mostly developed by Dave, who kindly helped and sent me
the patchset. The selftests in bpf qdisc (WIP) rely on this series to
work.

[0] https://lore.kernel.org/netdev/20240714175130.4051012-10-amery.hung@bytedance.com/
---
v3 -> v4
- Allow struct in prog btf w/ kptr as the only special field type
- Split tests of stashing referenced kptr and local kptr
- v3: https://lore.kernel.org/bpf/20240809005131.3916464-1-amery.hung@bytedance.com/

v2 -> v3
- Fix prog btf memory leak
- Test stashing kptr in prog btf
- Test unstashing kptrs after stashing into local kptrs
- v2: https://lore.kernel.org/bpf/20240803001145.635887-1-amery.hung@bytedance.com/

v1 -> v2
- Fix the document for bpf_kptr_xchg()
- Add a comment explaining changes in the verifier
- v1: https://lore.kernel.org/bpf/20240728030115.3970543-1-amery.hung@bytedance.com/
====================

Link: https://lore.kernel.org/r/20240813212424.2871455-1-amery.hung@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# b0966c72 13-Aug-2024 Dave Marchevsky <davemarchevsky@fb.com>

bpf: Support bpf_kptr_xchg into local kptr

Currently, users can only stash kptr into map values with bpf_kptr_xchg().
This patch further supports stashing kptr into local kptr by adding local
kptr as a valid destination type.

When stashing into local kptr, btf_record in program BTF is used instead
of btf_record in map to search for the btf_field of the local kptr.

The local kptr specific checks in check_reg_type() only apply when the
source argument of bpf_kptr_xchg() is a local kptr. Therefore, we make
the scope of the check explicit, as the destination can now also be a
local kptr.

Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Amery Hung <amery.hung@bytedance.com>
Link: https://lore.kernel.org/r/20240813212424.2871455-5-amery.hung@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# d59232af 13-Aug-2024 Dave Marchevsky <davemarchevsky@fb.com>

bpf: Rename ARG_PTR_TO_KPTR -> ARG_KPTR_XCHG_DEST

ARG_PTR_TO_KPTR is currently only used by the bpf_kptr_xchg helper.
Although it limits reg types for that helper's first arg to
PTR_TO_MAP_VALUE, any arbitrary mapval won't do: further custom
verification logic ensures that the mapval reg being xchgd-into is
pointing to a kptr field. If this is not the case, it's not safe to xchg
into that reg's pointee.

Let's rename the bpf_arg_type to more accurately describe the fairly
specific expectations that this arg type encodes.

This is a nonfunctional change.
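
Concretely, the rename boils down to the arg type used by the
bpf_kptr_xchg proto in helpers.c. A simplified sketch (fields trimmed,
not copied verbatim from the patch):

const struct bpf_func_proto bpf_kptr_xchg_proto = {
        .func           = bpf_kptr_xchg,
        .ret_type       = RET_PTR_TO_BTF_ID_OR_NULL,
        /* was ARG_PTR_TO_KPTR; the new name reflects the
         * xchg-destination contract rather than a generic
         * "pointer to kptr"
         */
        .arg1_type      = ARG_KPTR_XCHG_DEST,
        .arg2_type      = ARG_PTR_TO_BTF_ID_OR_NULL | OBJ_RELEASE,
};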

Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com>
Signed-off-by: Amery Hung <amery.hung@bytedance.com>
Link: https://lore.kernel.org/r/20240813212424.2871455-4-amery.hung@bytedance.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# d352eca2 22-Aug-2024 Alexei Starovoitov <ast@kernel.org>

Merge branch 'support-bpf_fastcall-patterns-for-calls-to-kfuncs'

Eduard Zingerman says:

====================
support bpf_fastcall patterns for calls to kfuncs

As an extension of [1], allow bpf_fastcall patterns for kfuncs:
- pattern rules are the same as for helpers;
- spill/fill removal is allowed only for kfuncs listed in the
is_fastcall_kfunc_call (under assumption that such kfuncs would
always be members of special_kfunc_list).

Allow bpf_fastcall rewrite for bpf_cast_to_kern_ctx() and
bpf_rdonly_cast() in order to conjure selftests for this feature.

After this patch-set, the verifier would rewrite the program below:

r2 = 1
*(u64 *)(r10 - 32) = r2
call %[bpf_cast_to_kern_ctx]
r2 = *(u64 *)(r10 - 32)
r0 = r2

As follows:

r2 = 1 /* spill/fill at r10[-32] is removed */
r0 = r1 /* replacement for bpf_cast_to_kern_ctx() */
r0 = r2
exit

Also, the attribute used by the LLVM implementation of the feature has
been changed from no_caller_saved_registers to bpf_fastcall (see [2]).
This patch-set replaces references to nocsr with references to
bpf_fastcall to keep the LLVM and kernel parts in sync.

[1] no_caller_saved_registers attribute for helper calls
https://lore.kernel.org/bpf/20240722233844.1406874-1-eddyz87@gmail.com/
[2] [BPF] introduce __attribute__((bpf_fastcall))
https://github.com/llvm/llvm-project/pull/105417

Changes v2->v3:
- added a patch fixing arch_mask handling in test_loader,
otherwise newly added tests for the feature were skipped
(a fix for regression introduced by a recent commit);
- fixed warning regarding unused 'params' variable;
- applied stylistic fixes suggested by Yonghong;
- added acks from Yonghong;

Changes v1->v2:
- added two patches replacing all mentions of nocsr by bpf_fastcall
(suggested by Andrii);
- removed KF_NOCSR flag (suggested by Yonghong).

v1: https://lore.kernel.org/bpf/20240812234356.2089263-1-eddyz87@gmail.com/
v2: https://lore.kernel.org/bpf/20240817015140.1039351-1-eddyz87@gmail.com/
====================

Link: https://lore.kernel.org/r/20240822084112.3257995-1-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# ae010757 22-Aug-2024 Eduard Zingerman <eddyz87@gmail.com>

bpf: rename nocsr -> bpf_fastcall in verifier

The attribute used by the LLVM implementation of the feature has been
changed from no_caller_saved_registers to bpf_fastcall (see [1]).
This commit replaces references to nocsr with references to bpf_fastcall
to keep the LLVM and kernel parts in sync.

[1] https://github.com/llvm/llvm-project/pull/105417

Acked-by: Yonghong Song <yonghong.song@linux.dev>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240822084112.3257995-2-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# 6d641ca5 11-Aug-2024 Uros Bizjak <ubizjak@gmail.com>

bpf: Fix percpu address space issues

In arraymap.c:

In bpf_array_map_seq_start() and bpf_array_map_seq_next()
cast return values from the __percpu address space to
the generic address space via uintptr_t [1].

Correct the declaration of the pptr pointer in __bpf_array_map_seq_show()
to void __percpu * and cast the value from the generic address
space to the __percpu address space via uintptr_t [1].

In hashtab.c:

Assign the return value from bpf_mem_cache_alloc() to a void pointer
and cast the value to void __percpu ** (void pointer to percpu void
pointer) before dereferencing.

In memalloc.c:

Explicitly declare __percpu variables.

Cast obj to void __percpu **.

In helpers.c:

Cast ptr in BPF_CALL_1 and BPF_CALL_2 from generic address space
to __percpu address space via const uintptr_t [1].

Found by GCC's named address space checks.

There were no changes in the resulting object files.

[1] https://sparse.docs.kernel.org/en/latest/annotations.html#address-space-name
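
A minimal sketch of the uintptr_t casting idiom the commit applies
(variable names are illustrative, not from the patch):

/* sparse treats __percpu as a distinct address space; bouncing the
 * value through uintptr_t makes the conversion explicit instead of an
 * implicit, warned-about address-space mismatch.
 */
void __percpu *pptr = alloc_percpu(u64);
void *p;

p = (void *)(uintptr_t)pptr;            /* __percpu -> generic  */
pptr = (void __percpu *)(uintptr_t)p;   /* generic  -> __percpu */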

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: Eduard Zingerman <eddyz87@gmail.com>
Cc: Song Liu <song@kernel.org>
Cc: Yonghong Song <yonghong.song@linux.dev>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Stanislav Fomichev <sdf@fomichev.me>
Cc: Hao Luo <haoluo@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240811161414.56744-1-ubizjak@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>


# 955bba7e 20-Aug-2024 Andrii Nakryiko <andrii@kernel.org>

Merge branch 'bpf-enable-some-functions-in-cgroup-programs'

Matteo Croce says:

====================
bpf: enable some functions in cgroup programs

From: Matteo Croce <teknoraver@meta.com>

Enable some BPF kfuncs and the helper bpf_current_task_under_cgroup()
for program types BPF_CGROUP_*.
These will be used by systemd-networkd:
https://github.com/systemd/systemd/pull/32212

v5->v6:
Called register_btf_kfunc_id_set() only once
Fixed build error with !CONFIG_CGROUPS

v4->v5:
Same code, but v4 had an old cover letter

v3->v4:
Reset all the acked-by tags because the code changed a bit.

Signed-off-by: Matteo Croce <teknoraver@meta.com>
====================

Link: https://lore.kernel.org/r/20240819162805.78235-1-technoboy85@gmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>


# 7f628741 19-Aug-2024 Matteo Croce <teknoraver@meta.com>

bpf: Allow bpf_current_task_under_cgroup() with BPF_CGROUP_*

The helper bpf_current_task_under_cgroup() is currently only allowed in
tracing programs; allow its usage also in the BPF_CGROUP_* program types.

Move the code from kernel/trace/bpf_trace.c to kernel/bpf/helpers.c,
so it compiles also without CONFIG_BPF_EVENTS.

This will be used in systemd-networkd to monitor sysctl writes and
filter its own writes from others:
https://github.com/systemd/systemd/pull/32212
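
A hedged sketch of that use case: a cgroup/sysctl program consulting a
cgroup array map (populated by the daemon from userspace) to recognize
its own writes. Names and map layout are illustrative, not from the
patch:

struct {
        __uint(type, BPF_MAP_TYPE_CGROUP_ARRAY);
        __uint(max_entries, 1);
        __uint(key_size, sizeof(__u32));
        __uint(value_size, sizeof(__u32));
} own_cgroup SEC(".maps");

SEC("cgroup/sysctl")
int monitor_sysctl(struct bpf_sysctl *ctx)
{
        /* returns 1 if the current task is in the cgroup at index 0,
         * 0 if it is not, a negative value on error
         */
        if (bpf_current_task_under_cgroup(&own_cgroup, 0) == 1)
                return 1;       /* our own write: allow, skip logging */

        /* ... record/filter writes coming from other tasks ... */
        return 1;               /* 1 = allow the sysctl access */
}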

Signed-off-by: Matteo Croce <teknoraver@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240819162805.78235-3-technoboy85@gmail.com


# 67666479 19-Aug-2024 Matteo Croce <teknoraver@meta.com>

bpf: Enable generic kfuncs for BPF_CGROUP_* programs

These kfuncs are enabled even in BPF_PROG_TYPE_TRACING, so they
should also be safe in BPF_CGROUP_* programs.
Since all BPF_CGROUP_* programs share the same hook,
call register_btf_kfunc_id_set() only once.

In enum btf_kfunc_hook, rename BTF_KFUNC_HOOK_CGROUP_SKB to a more
generic BTF_KFUNC_HOOK_CGROUP, since it's used for all the cgroup
related program types.
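
A sketch of what "only once" means in practice, assuming the existing
generic_kfunc_set in helpers.c (exact call site simplified):

/* Every BPF_PROG_TYPE_CGROUP_* now maps to BTF_KFUNC_HOOK_CGROUP, so
 * one registration under any cgroup program type covers all of the
 * cgroup attach points.
 */
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SKB,
                                       &generic_kfunc_set);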

Signed-off-by: Matteo Croce <teknoraver@meta.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240819162805.78235-2-technoboy85@gmail.com


# c8faf11c 30-Jul-2024 Tejun Heo <tj@kernel.org>

Merge tag 'v6.11-rc1' into for-6.12

Linux 6.11-rc1


# 52839f31 24-Jul-2024 Alexei Starovoitov <ast@kernel.org>

Merge branch 'no_caller_saved_registers-attribute-for-helper-calls'

Eduard Zingerman says:

====================
no_caller_saved_registers attribute for helper calls

This patch-set seeks to allow using no_caller_saved_registers gcc/clang
attribute with some BPF helper functions (and kfuncs in the future).

As documented in [1], this attribute means that the function scratches
only some of the caller-saved registers defined by the ABI.
For BPF the set of such registers could be defined as follows:
- R0 is scratched only if function is non-void;
- R1-R5 are scratched only if corresponding parameter type is defined
in the function prototype.

The goal of the patch-set is to implement no_caller_saved_registers
(nocsr for short) in a backwards compatible manner:
- for kernels that support the feature, gain some performance boost
from better register allocation;
- for kernels that don't support the feature, allow programs execution
with minor performance losses.

To achieve this, use a scheme suggested by Alexei Starovoitov:
- for nocsr calls clang allocates registers as-if relevant r0-r5
registers are not scratched by the call;
- as a post-processing step, clang visits each nocsr call and adds
spill/fill for every live r0-r5;
- stack offsets used for spills/fills are allocated as lowest
stack offsets in whole function and are not used for any other
purpose;
- when the kernel loads a program, it looks for such patterns
(nocsr function surrounded by spills/fills) and checks if
spill/fill stack offsets are used exclusively in nocsr patterns;
- if so, and if the current JIT inlines the call to the nocsr function
(e.g. a helper call), the kernel removes the unnecessary spill/fill pairs;
- when an old kernel loads the program, the presence of spill/fill pairs
keeps the BPF program valid, albeit slightly less efficient.

Corresponding clang/llvm changes are available in [2].

The patch-set uses bpf_get_smp_processor_id() function as a canary,
making it the first helper with nocsr attribute.

For example, consider the following program:

#define __no_csr __attribute__((no_caller_saved_registers))
#define SEC(name) __attribute__((section(name), used))
#define bpf_printk(fmt, ...) bpf_trace_printk((fmt), sizeof(fmt), __VA_ARGS__)

typedef unsigned int __u32;

static long (* const bpf_trace_printk)(const char *fmt, __u32 fmt_size, ...) = (void *) 6;
static __u32 (*const bpf_get_smp_processor_id)(void) __no_csr = (void *)8;

SEC("raw_tp")
int test(void *ctx)
{
        __u32 task = bpf_get_smp_processor_id();
        bpf_printk("ctx=%p, smp=%d", ctx, task);
        return 0;
}

char _license[] SEC("license") = "GPL";

Compiled (using [2]) as follows:

$ clang --target=bpf -O2 -g -c -o nocsr.bpf.o nocsr.bpf.c
$ llvm-objdump --no-show-raw-insn -Sd nocsr.bpf.o
...
       3rd parameter for printk call          removable spill/fill pair
  .--- 0: r3 = r1                                       |
; |       __u32 task = bpf_get_smp_processor_id();      |
  |    1: *(u64 *)(r10 - 0x8) = r3           <----------|
  |    2: call 0x8                                      |
  |    3: r3 = *(u64 *)(r10 - 0x8)           <----------'
; |       bpf_printk("ctx=%p, smp=%d", ctx, task);
  |    4: r1 = 0x0 ll
  |    6: r2 = 0xf
  |    7: r4 = r0
  '--> 8: call 0x6
;         return 0;
       9: r0 = 0x0
      10: exit

Here is how the program looks after verifier processing:

# bpftool prog load ./nocsr.bpf.o /sys/fs/bpf/nocsr-test
# bpftool prog dump xlated pinned /sys/fs/bpf/nocsr-test

int test(void * ctx):
0: (bf) r3 = r1                          <--- 3rd printk parameter
; __u32 task = bpf_get_smp_processor_id();
1: (b4) w0 = 197324                      <--.  inlined helper call,
2: (bf) r0 = &(void __percpu *)(r0)      <---  spill/fill
3: (61) r0 = *(u32 *)(r0 +0)             <--'  pair removed
; bpf_printk("ctx=%p, smp=%d", ctx, task);
4: (18) r1 = map[id:5][0]+0
6: (b7) r2 = 15
7: (bf) r4 = r0
8: (85) call bpf_trace_printk#-125920
; return 0;
9: (b7) r0 = 0
10: (95) exit

[1] https://clang.llvm.org/docs/AttributeReference.html#no-caller-saved-registers
[2] https://github.com/eddyz87/llvm-project/tree/bpf-no-caller-saved-registers

Change list:
- v3 -> v4:
- When nocsr spills/fills are removed in the subprogram, allow these
spills/fills to reside in [-MAX_BPF_STACK-48..MAX_BPF_STACK) range
(suggested by Alexei);
- Dropped patches with special handling for bpf_probe_read_kernel()
(requested by Alexei);
- Reset aux .nocsr_pattern and .nocsr_spills_num fields in
check_nocsr_stack_contract() (requested by Andrii).
Andrii, I have not added an additional flag to
struct bpf_subprog_info; it currently does not have holes,
and I really don't like adding a bool field there just as an
alternative indicator that nocsr is disabled.
Indicator at the moment:
- nocsr_stack_off >= S16_MIN means that nocsr rewrite is enabled;
- nocsr_stack_off == S16_MIN means that nocsr rewrite is disabled.
- v2 -> v3:
- As suggested by Andrii, 'nocsr_stack_off' is no longer checked at
rewrite time, instead mark_nocsr_patterns() now does two passes
over BPF program:
- on a first pass it computes the lowest stack spill offset for
the subprogram;
- on a second pass this offset is used to recognize nocsr pattern.
- As suggested by Alexei, a new mechanic is added to work around a
situation mentioned by Andrii, when more helper functions are
marked as nocsr at compile time than the current kernel supports:
- all {spill*,helper call,fill*} patterns are now marked as
insn_aux_data[*].nocsr_pattern, thus relaxing failure condition
for check_nocsr_stack_contract();
- spill/fill pairs are not removed for patterns where helper can't
be inlined;
- see mark_nocsr_pattern_for_call() for details and an example.
- As suggested by Alexei, subprogram stack depth is now adjusted
if all spill/fill pairs could be removed. This adjustment has
to take place before optimize_bpf_loop(), hence the rewrite
is moved from do_misc_fixups() to remove_nocsr_spills_fills()
(again).
- As suggested by Andrii, special measures are taken to work around
bpf_probe_read_kernel() access to BPF stack, see patches 11, 12.
Patch #11 is very simplistic, a more comprehensive solution would
be to change the type of the third parameter of the
bpf_probe_read_kernel() from ARG_ANYTHING to something else and
not only check nocsr contract, but also propagate stack slot
liveness information. However, such change would require update in
struct bpf_call_arg_meta processing, which currently implies that
every memory parameter is followed by a size parameter.
I can work on these changes, please comment.
- Stylistic changes suggested by Andrii.
- Added acks from Andrii.
- Dropped RFC tag.
- v1 -> v2:
- assume that functions inlined by either jit or verifier
conform to no_caller_saved_registers contract (Andrii, Puranjay);
- allow nocsr rewrite for bpf_get_smp_processor_id()
on arm64 and riscv64 architectures (Puranjay);
- __arch_{x86_64,arm64,riscv64} macro for test_loader;
- moved remove_nocsr_spills_fills() inside do_misc_fixups() (Andrii);
- moved nocsr pattern detection from check_cfg() to a separate pass
(Andrii);
- various stylistic/correctness changes according to Andrii's
comments.

Revisions:
- v1 https://lore.kernel.org/bpf/20240629094733.3863850-1-eddyz87@gmail.com/
- v2 https://lore.kernel.org/bpf/20240704102402.1644916-1-eddyz87@gmail.com/
- v3 https://lore.kernel.org/bpf/20240715230201.3901423-1-eddyz87@gmail.com/
====================

Link: https://lore.kernel.org/r/20240722233844.1406874-1-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>


# 91b7fbf3 23-Jul-2024 Eduard Zingerman <eddyz87@gmail.com>

bpf, x86, riscv, arm: no_caller_saved_registers for bpf_get_smp_processor_id()

The function bpf_get_smp_processor_id() is processed in a different
way, depending on the arch:
- on x86 verifier replaces call to bpf_get_smp_processor_id() with a
sequence of instructions that modify only r0;
- on riscv64 jit replaces call to bpf_get_smp_processor_id() with a
sequence of instructions that modify only r0;
- on arm64 jit replaces call to bpf_get_smp_processor_id() with a
sequence of instructions that modify only r0 and tmp registers.

These rewrites satisfy attribute no_caller_saved_registers contract.
Allow rewrite of no_caller_saved_registers patterns for
bpf_get_smp_processor_id() in order to use this function as a canary
for no_caller_saved_registers tests.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240722233844.1406874-4-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>


Revision tags: v6.10, v6.10-rc7, v6.10-rc6, v6.10-rc5
# 8cce4759 18-Jun-2024 Tejun Heo <tj@kernel.org>

Merge branch 'bpf/for-next' into sched_ext-base


# ed7171ff 16-Aug-2024 Lucas De Marchi <lucas.demarchi@intel.com>

Merge drm/drm-next into drm-xe-next

Get drm-xe-next on v6.11-rc2 and synchronized with drm-intel-next for
the display side. This resolves the current conflict for the
enable_display module parameter and allows further pending refactors.

Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>


# 5c61f598 12-Aug-2024 Thomas Zimmermann <tzimmermann@suse.de>

Merge drm/drm-next into drm-misc-next

Get drm-misc-next to the state of v6.11-rc2.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>


# 3663e2c4 01-Aug-2024 Jani Nikula <jani.nikula@intel.com>

Merge drm/drm-next into drm-intel-next

Sync with v6.11-rc1 in general, and specifically get the new
BACKLIGHT_POWER_ constants for power states.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>


# 4436e6da 02-Aug-2024 Thomas Gleixner <tglx@linutronix.de>

Merge branch 'linus' into x86/mm

Bring x86 and selftests up to date


# a1ff5a7d 30-Jul-2024 Maxime Ripard <mripard@kernel.org>

Merge drm/drm-fixes into drm-misc-fixes

Let's start the new drm-misc-fixes cycle by bringing in 6.11-rc1.

Signed-off-by: Maxime Ripard <mripard@kernel.org>


# a23e1966 15-Jul-2024 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge branch 'next' into for-linus

Prepare input updates for 6.11 merge window.


Revision tags: v6.10-rc4, v6.10-rc3, v6.10-rc2
# 6f47c7ae 28-May-2024 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge tag 'v6.9' into next

Sync up with the mainline to bring in the new cleanup API.


# afeea275 04-Jul-2024 Maxime Ripard <mripard@kernel.org>

Merge drm-misc-next-2024-07-04 into drm-misc-next-fixes

Let's start the drm-misc-next-fixes cycle.

Signed-off-by: Maxime Ripard <mripard@kernel.org>


# d754ed28 19-Jun-2024 Jani Nikula <jani.nikula@intel.com>

Merge drm/drm-next into drm-intel-next

Sync to v6.10-rc3.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>


# 89aa02ed 12-Jun-2024 Rodrigo Vivi <rodrigo.vivi@intel.com>

Merge drm/drm-next into drm-xe-next

Needed to get tracing cleanup and add mmio tracing series.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>


# 92815da4 12-Jun-2024 Dmitry Baryshkov <dmitry.baryshkov@linaro.org>

Merge remote-tracking branch 'drm-misc/drm-misc-next' into HEAD

Merge drm-misc-next tree into the msm-next tree in order to be able to
use HDMI connector framework for the MSM HDMI driver.

