
Searched refs:branches (Results 1 – 25 of 92) sorted by relevance


/linux/Documentation/admin-guide/hw-vuln/
indirect-target-selection.rst:8 of indirect branches and RETs located in the lower half of a cacheline.
14 - **eIBRS Guest/Host Isolation**: Indirect branches in KVM/kernel may still be
21 branches may still be predicted with targets corresponding to direct branches
57 As only the indirect branches and RETs that have their last byte of instruction
59 the mitigation is to not allow indirect branches in the lower half.
75 Note, for simplicity, indirect branches in eBPF programs are always replaced
82 thunks. But, RETs significantly outnumber indirect branches, and any benefit
88 Retpoline sequence also mitigates ITS-unsafe indirect branches. For this
119 indirect branches.
157 - The mitigation is enabled, affected indirect branches and RETs are
spectre.rst:61 conditional branches, while Spectre variant 2 attacks use speculative
62 execution of indirect branches to leak privileged memory.
93 execution of indirect branches :ref:`[3] <spec_ref3>`. The indirect
95 indirect branches can be influenced by an attacker, causing gadget code
103 branches in the victim to gadget code by poisoning the branch target
217 indirect branches. Return trampolines trap speculative execution paths
289 for indirect branches to bypass the poisoned branch target buffer,
536 can be compiled with return trampolines for indirect branches.
/linux/tools/perf/tests/shell/
addr2line_inlines.sh:66 if [ ! -f /sys/bus/event_source/devices/cpu/caps/branches ] &&
67 [ ! -f /sys/bus/event_source/devices/cpu_core/caps/branches ]
record_lbr.sh:11 if [ ! -f /sys/bus/event_source/devices/cpu/caps/branches ] &&
12 [ ! -f /sys/bus/event_source/devices/cpu_core/caps/branches ]
stat+std_output.sh:14 …e-faults stalled-cycles-frontend stalled-cycles-backend cycles instructions branches branch-misses)
/linux/Documentation/staging/
static-keys.rst:76 Using the 'asm goto', we can create branches that are either taken or not taken
157 Note that switching branches results in some locks being taken,
303 208,368,926 branches # 243.507 M/sec ( +- 0.06% )
304 5,569,188 branch-misses # 2.67% of all branches ( +- 0.54% )
320 206,859,359 branches # 245.956 M/sec ( +- 0.04% )
321 4,884,119 branch-misses # 2.36% of all branches ( +- 0.85% )
325 The percentage of saved branches is .7%, and we've saved 12% on
327 this optimization is about reducing the number of branches. In addition, we've
/linux/tools/perf/scripts/python/
export-to-sqlite.py:121 branches = (columns == "branches") variable
205 if branches:
589 if branches:
706 if branches:
export-to-postgresql.py:300 branches = (columns == "branches") variable
381 if branches:
1032 if branches:
/linux/fs/ubifs/
misc.h:203 return (struct ubifs_branch *)((void *)idx->branches + in ubifs_idx_branch()
215 return (void *)((struct ubifs_branch *)idx->branches)->key; in ubifs_idx_key()
/linux/arch/powerpc/net/
bpf_jit_comp.c:683 int run_ctx_off, u32 *branches) in invoke_bpf_mod_ret() argument
711 branches[i] = ctx->idx; in invoke_bpf_mod_ret()
825 u32 *branches = NULL; in __arch_prepare_bpf_trampoline() local
1047 branches = kcalloc(fmod_ret->nr_links, sizeof(u32), GFP_KERNEL); in __arch_prepare_bpf_trampoline()
1048 if (!branches) in __arch_prepare_bpf_trampoline()
1052 run_ctx_off, branches)) { in __arch_prepare_bpf_trampoline()
1097 if (create_cond_branch(&branch_insn, &image[branches[i]], in __arch_prepare_bpf_trampoline()
1103 image[branches[i]] = ppc_inst_val(branch_insn); in __arch_prepare_bpf_trampoline()
1183 kfree(branches); in __arch_prepare_bpf_trampoline()
/linux/tools/perf/Documentation/
tips.txt:22 Treat branches as callchains: perf record -b ... ; perf report --branch-history
37 To only collect call graph on one event use perf record -e cpu/cpu-cycles,callgraph=1/,branches ; perf report --show-ref-call-graph
38 To set sampling period of individual events use perf record -e cpu/cpu-cycles,period=100001/,cpu/branches,period=10001/ ...
39 To group events which need to be collected together for accuracy use {}: perf record -e '{cycles,branches}' ...
perf-top.txt:248 taken branches. The number of branches captured with each sample depends on the
249 underlying hardware, the type of branches of interest, and the executed code.
250 It is possible to select the types of branches captured by enabling filters.
262 Add the addresses of sampled taken branches to the callstack.
intel-hybrid.txt:186 cpu_core/branches/,
187 cpu_atom/branches/,
security.txt:203 19,628,798 branches # 0.539 M/sec
204 1,259,201 branch-misses # 6.42% of all branches
/linux/arch/m68k/ifpsp060/
iskeleton.S:60 | _isp_unimp() branches to here so that the operating system
88 | stack frame and branches to this routine.
123 | Integer Instruction stack frame and branches to this routine.
128 | stack frame and branches to the _real_trace() entry point.
/linux/Documentation/features/core/jump-labels/
arch-support.txt:4 # description: arch supports live patched, high efficiency branches
/linux/Documentation/process/
maintainer-kvm-x86.rst:38 The KVM x86 tree is organized into multiple topic branches. The purpose of
39 using finer-grained topic branches is to make it easier to keep tabs on an area
45 All topic branches, except for ``next`` and ``fixes``, are rolled into ``next``
57 following rc7 for "normal" releases. If all goes well, the topic branches are
95 dependencies across topic branches, it is the maintainer's job to sort them
166 Note, these don't align with the topic branches (the topic branches care much
340 solution is to derive the names of your development branches based on their
381 patch's SHA1 changes. However, in some scenarios, e.g. if all KVM x86 branches
/linux/Documentation/networking/
tc-actions-env-rules.rst:14 or intentionally branches by redirecting a packet, then you need to
/linux/arch/mips/include/asm/
fpu_emulator.h:28 unsigned long branches; member
/linux/arch/x86/events/
Kconfig:53 16 consecutive taken branches in registers.
/linux/arch/powerpc/platforms/8xx/
Kconfig:120 (by not placing conditional branches or branches to LR or CTR
/linux/arch/mips/math-emu/
me-debugfs.c:57 __this_cpu_write((fpuemustats).branches, 0); in fpuemustats_clear_show()
213 FPU_STAT_CREATE(branches); in debugfs_fpuemu()
/linux/Documentation/maintainer/
rebasing-and-merging.rst:100 which may contain multiple topic branches; each branch is usually developed
104 Many projects require that branches in pull requests be based on the
106 is not such a project; any rebasing of branches to avoid merges will, most
128 branches. Failure to do so threatens the security of the development
/linux/arch/arm64/net/
bpf_jit_comp.c:2319 __le32 **branches) in invoke_bpf_mod_ret() argument
2337 branches[i] = ctx->image + ctx->idx; in invoke_bpf_mod_ret()
2503 __le32 **branches = NULL; in prepare_trampoline() local
2662 branches = kcalloc(fmod_ret->nr_links, sizeof(__le32 *), in prepare_trampoline()
2664 if (!branches) in prepare_trampoline()
2668 run_ctx_off, branches); in prepare_trampoline()
2687 int offset = &ctx->image[ctx->idx] - branches[i]; in prepare_trampoline()
2688 *branches[i] = cpu_to_le32(A64_CBNZ(1, A64_R(10), offset)); in prepare_trampoline()
2751 kfree(branches); in prepare_trampoline()
/linux/arch/loongarch/net/
bpf_jit.c:1823 u32 **branches = NULL; in __arch_prepare_bpf_trampoline() local
2004 branches = kcalloc(fmod_ret->nr_links, sizeof(u32 *), GFP_KERNEL); in __arch_prepare_bpf_trampoline()
2005 if (!branches) in __arch_prepare_bpf_trampoline()
2015 branches[i] = (u32 *)ctx->image + ctx->idx; in __arch_prepare_bpf_trampoline()
2039 int offset = (void *)(&ctx->image[ctx->idx]) - (void *)branches[i]; in __arch_prepare_bpf_trampoline()
2040 *branches[i] = larch_insn_gen_bne(LOONGARCH_GPR_T1, LOONGARCH_GPR_ZERO, offset); in __arch_prepare_bpf_trampoline()
2111 kfree(branches); in __arch_prepare_bpf_trampoline()
