Revision tags: v6.14-rc1, v6.13, v6.13-rc7, v6.13-rc6, v6.13-rc5, v6.13-rc4
|
# 60675d4c | 20-Dec-2024 | Ingo Molnar <mingo@kernel.org>
Merge branch 'linus' into x86/mm, to pick up fixes
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
# 37b33c68 | 23-Jan-2025 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux
Pull CRC updates from Eric Biggers:
- Reorganize the architecture-optimized CRC32 and CRC-T10DIF code to be directly accessible via the library API, instead of requiring the crypto API. This is much simpler and more efficient.
- Convert some users, such as ext4, to use the CRC32 library API instead of the crypto API (a caller-side sketch follows the commit list below). More conversions like this will come later.
- Add a KUnit test that tests and benchmarks multiple CRC variants. Remove older, less-comprehensive tests that are made redundant by this.
- Add an entry to MAINTAINERS for the kernel's CRC library code. I'm volunteering to maintain it. I have additional cleanups and optimizations planned for future cycles.
* tag 'crc-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (31 commits)
  MAINTAINERS: add entry for CRC library
  powerpc/crc: delete obsolete crc-vpmsum_test.c
  lib/crc32test: delete obsolete crc32test.c
  lib/crc16_kunit: delete obsolete crc16_kunit.c
  lib/crc_kunit.c: add KUnit test suite for CRC library functions
  powerpc/crc-t10dif: expose CRC-T10DIF function through lib
  arm64/crc-t10dif: expose CRC-T10DIF function through lib
  arm/crc-t10dif: expose CRC-T10DIF function through lib
  x86/crc-t10dif: expose CRC-T10DIF function through lib
  crypto: crct10dif - expose arch-optimized lib function
  lib/crc-t10dif: add support for arch overrides
  lib/crc-t10dif: stop wrapping the crypto API
  scsi: target: iscsi: switch to using the crc32c library
  f2fs: switch to using the crc32 library
  jbd2: switch to using the crc32c library
  ext4: switch to using the crc32c library
  lib/crc32: make crc32c() go directly to lib
  bcachefs: Explicitly select CRYPTO from BCACHEFS_FS
  x86/crc32: expose CRC32 functions through lib
  x86/crc32: update prototype for crc32_pclmul_le_16()
  ...
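As a caller-side illustration of the library conversions listed above (a hedged sketch; the function name and seed choice are invented, not taken from the ext4/jbd2 patches):

    #include <linux/crc32c.h>

    /* Checksum a buffer with a direct library call: no crypto_shash
     * allocation, no tfm plumbing. */
    static u32 example_csum(const void *buf, size_t len)
    {
            return crc32c(~0, buf, len);    /* ~0 is a common initial seed */
    }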
|
Revision tags: v6.13-rc3, v6.13-rc2
|
# 7439cfed | 02-Dec-2024 | Eric Biggers <ebiggers@google.com>
powerpc/crc-t10dif: expose CRC-T10DIF function through lib
Move the powerpc CRC-T10DIF assembly code into the lib directory and wire it up to the library interface. This allows it to be used without going through the crypto API. It remains usable via the crypto API too via the shash algorithms that use the library interface. Thus all the arch-specific "shash" code becomes unnecessary and is removed.
Note: to see the diff from arch/powerpc/crypto/crct10dif-vpmsum_glue.c to arch/powerpc/lib/crc-t10dif-glue.c, view this commit with 'git show -M10'.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20241202012056.209768-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
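For users of the new library interface, the call is a one-liner; a minimal sketch (caller name invented):

    #include <linux/crc-t10dif.h>

    /* Compute the 16-bit CRC-T10DIF of a data-integrity buffer via the
     * library API, picking up the powerpc-optimized code automatically. */
    static u16 example_t10dif(const u8 *buf, size_t len)
    {
            return crc_t10dif(buf, len);
    }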
|
# 372ff60a | 02-Dec-2024 | Eric Biggers <ebiggers@google.com>
powerpc/crc32: expose CRC32 functions through lib
Move the powerpc CRC32C assembly code into the lib directory and wire it up to the library interface. This allows it to be used without going through the crypto API. It remains usable via the crypto API too via the shash algorithms that use the library interface. Thus all the arch-specific "shash" code becomes unnecessary and is removed.
Note: to see the diff from arch/powerpc/crypto/crc32c-vpmsum_glue.c to arch/powerpc/lib/crc32-glue.c, view this commit with 'git show -M10'.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20241202010844.144356-9-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
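Arch glue of this kind typically gates the optimized routine behind a static key flipped at boot; a sketch under assumed names (the key, init function, and feature gate below are illustrative, not the literal powerpc glue):

    #include <linux/init.h>
    #include <linux/jump_label.h>
    #include <asm/cpu_has_feature.h>

    static DEFINE_STATIC_KEY_FALSE(have_vpmsum);        /* assumed key name */

    static int __init crc32_powerpc_init(void)
    {
            /* vpmsum instructions need a POWER8-class CPU (assumed gate) */
            if (cpu_has_feature(CPU_FTR_ARCH_207S))
                    static_branch_enable(&have_vpmsum);
            return 0;
    }
    arch_initcall(crc32_powerpc_init);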
|
# 25768de5 | 21-Jan-2025 | Dmitry Torokhov <dmitry.torokhov@gmail.com>
Merge branch 'next' into for-linus
Prepare input updates for 6.14 merge window.
|
# 6d4a0f4e | 17-Dec-2024 | Dmitry Torokhov <dmitry.torokhov@gmail.com>
Merge tag 'v6.13-rc3' into next
Sync up with the mainline.
|
# 2e04247f | 22-Jan-2025 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'ftrace-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull ftrace updates from Steven Rostedt:
- Have fprobes built on top of function graph infrastructure
The fprobe logic is an optimized kprobe that uses ftrace to attach to functions when a probe is needed at the start or end of the function. The fprobe and kretprobe logic implement a similar method to the function graph tracer to trace the end of the function: hijack the return address and jump to a trampoline to do the trace when the function exits. To do this, a shadow stack needs to be created to store the original return address. Fprobes and function graph do this slightly differently. Fprobes (and kretprobes) have slots per callsite that are reserved to save the return address. This is fine when just a few points are traced. But users of fprobes, such as BPF programs, are starting to add many more locations, and this method does not scale.
The function graph tracer was created to trace all functions in the kernel. In order to do this, when function graph tracing is started, every task gets its own shadow stack to hold the return addresses that are going to be traced. The function graph tracer has been updated to allow multiple users to use its infrastructure; fprobes now become one of those users. This also allows the fprobe and kretprobe methods of tracing the return address to become obsolete. With new technologies like CFI that need to know about these methods of hijacking the return address, moving toward a single method of doing this will make the kernel less complex.
- Cleanup with guard() and free() helpers
There were several places in the code that had a lot of "goto out" in the error paths to either unlock a lock or free some memory that was allocated. But this is error prone. Convert the code over to use the guard() and free() helpers that let the compiler unlock locks or free memory when the function exits (a sketch of the pattern follows the commit list below).
- Remove disabling of interrupts in the function graph tracer
When the function graph tracer was first introduced, it could race with interrupts and NMIs. To prevent that race, it would disable interrupts and not trace NMIs. But the code has since changed to allow both NMIs and interrupts. That change was made a long time ago, but the disabling of interrupts was never removed. Remove the disabling of interrupts in the function graph tracer, as it is not needed. This greatly improves its performance.
- Allow the :mod: command to enable tracing module functions on the kernel command line.
The function tracer already has a way to enable functions to be traced in modules by writing ":mod:<module>" into set_ftrace_filter. That will enable either all the functions for the module if it is loaded, or if it is not, it will cache that command, and when the module is loaded that matches <module>, its functions will be enabled. This also allows init functions to be traced. But currently events do not have that feature.
Because enabling function tracing can be done very early at boot up (before scheduling is enabled), the commands that can be run when function tracing starts are limited. Having the ":mod:" command trace module functions as they are loaded is very useful, so update the kernel command line function filtering to allow it (see the usage sketch after the commit list below).
* tag 'ftrace-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (26 commits)
  ftrace: Implement :mod: cache filtering on kernel command line
  tracing: Adopt __free() and guard() for trace_fprobe.c
  bpf: Use ftrace_get_symaddr() for kprobe_multi probes
  ftrace: Add ftrace_get_symaddr to convert fentry_ip to symaddr
  Documentation: probes: Update fprobe on function-graph tracer
  selftests/ftrace: Add a test case for repeating register/unregister fprobe
  selftests: ftrace: Remove obsolate maxactive syntax check
  tracing/fprobe: Remove nr_maxactive from fprobe
  fprobe: Add fprobe_header encoding feature
  fprobe: Rewrite fprobe on function-graph tracer
  s390/tracing: Enable HAVE_FTRACE_GRAPH_FUNC
  ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
  bpf: Enable kprobe_multi feature if CONFIG_FPROBE is enabled
  tracing/fprobe: Enable fprobe events with CONFIG_DYNAMIC_FTRACE_WITH_ARGS
  tracing: Add ftrace_fill_perf_regs() for perf event
  tracing: Add ftrace_partial_regs() for converting ftrace_regs to pt_regs
  fprobe: Use ftrace_regs in fprobe exit handler
  fprobe: Use ftrace_regs in fprobe entry handler
  fgraph: Pass ftrace_regs to retfunc
  fgraph: Replace fgraph_ret_regs with ftrace_regs
  ...
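For the guard()/free() conversion noted above, a minimal sketch of the pattern (lock and function names invented):

    #include <linux/cleanup.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(example_lock);

    static int example_update(int val)
    {
            guard(mutex)(&example_lock);    /* unlocked on every return path */
            if (val < 0)
                    return -EINVAL;         /* no "goto out_unlock" needed */
            /* ... update state under the lock ... */
            return 0;
    }

And the :mod: command-line filtering would be used roughly as follows (module name illustrative):

    ftrace=function ftrace_filter=:mod:nf_conntrack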
|
# 95ec54a4 | 21-Jan-2025 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'powerpc-6.14-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc updates from Madhavan Srinivasan:
- Add preempt lazy support
- Deprecate cxl and cxl flash driver
- Fix a possible IOMMU related OOPS at boot on pSeries
- Optimize sched_clock() in ppc32 by replacing mulhdu() with mul_u64_u64_shr() (an illustrative form follows the commit list below)
Thanks to Andrew Donnellan, Andy Shevchenko, Ankur Arora, Christophe Leroy, Frederic Barrat, Gaurav Batra, Luis Felipe Hernandez, Michael Ellerman, Nilay Shroff, Ricardo B. Marliere, Ritesh Harjani (IBM), Sebastian Andrzej Siewior, Shrikanth Hegde, Sourabh Jain, Thorsten Blum, and Zhu Jun.
* tag 'powerpc-6.14-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  selftests/powerpc: Fix argument order to timer_sub()
  powerpc/prom_init: Use IS_ENABLED()
  powerpc/pseries/iommu: IOMMU incorrectly marks MMIO range in DDW
  powerpc: Use str_on_off() helper in check_cache_coherency()
  powerpc: Large user copy aware of full:rt:lazy preemption
  powerpc: Add preempt lazy support
  powerpc/book3s64/hugetlb: Fix disabling hugetlb when fadump is active
  powerpc/vdso: Mark the vDSO code read-only after init
  powerpc/64: Use get_user() in start_thread()
  macintosh: declare ctl_table as const
  selftest/powerpc/ptrace: Cleanup duplicate macro definitions
  selftest/powerpc/ptrace/ptrace-pkey: Remove duplicate macros
  selftest/powerpc/ptrace/core-pkey: Remove duplicate macros
  powerpc/8xx: Drop legacy-of-mm-gpiochip.h header
  scsi/cxlflash: Deprecate driver
  cxl: Deprecate driver
  selftests/powerpc: Fix typo in test-vphn.c
  powerpc/xmon: Use str_yes_no() helper in dump_one_paca()
  powerpc/32: Replace mulhdu() by mul_u64_u64_shr()
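The mulhdu() replacement mentioned above is a multiply-high-and-shift conversion; an illustrative (not the actual ppc32 code) form, with made-up scale/shift constants:

    #include <linux/math64.h>

    /* Hypothetical timebase-to-nanoseconds conversion: 64x64 multiply,
     * keep the high bits, shift down. */
    static u64 example_tb_to_ns(u64 tb)
    {
            return mul_u64_u64_shr(tb, 0x28f5c28f5c28f5cULL, 24);
    }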
|
# c5fb51b7 | 03-Jan-2025 | Rob Clark <robdclark@chromium.org>
Merge remote-tracking branch 'pm/opp/linux-next' into HEAD
Merge pm/opp tree to get dev_pm_opp_get_bw()
Signed-off-by: Rob Clark <robdclark@chromium.org>
|
# a762e926 | 26-Dec-2024 | Masami Hiramatsu (Google) <mhiramat@kernel.org>
ftrace: Add CONFIG_HAVE_FTRACE_GRAPH_FUNC
Add a CONFIG_HAVE_FTRACE_GRAPH_FUNC kconfig option in addition to the ftrace_graph_func macro check. This lets other features (e.g. FPROBE) that require access to ftrace_regs from fgraph_ops::entryfunc() avoid being compiled when fgraph cannot pass valid ftrace_regs.
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Florent Revest <revest@chromium.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: bpf <bpf@vger.kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Alan Maguire <alan.maguire@oracle.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/173519001472.391279.1174901685282588467.stgit@devnote2
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
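In practice this lets fgraph users that need ftrace_regs be compiled conditionally; a hedged sketch (handler name invented, entryfunc signature assumed per this series):

    #include <linux/ftrace.h>

    #ifdef CONFIG_HAVE_FTRACE_GRAPH_FUNC
    /* Only built where the arch's ftrace_graph_func() supplies valid
     * ftrace_regs to fgraph entry handlers. */
    static int my_entry(struct ftrace_graph_ent *trace,
                        struct fgraph_ops *gops, struct ftrace_regs *fregs)
    {
            /* fregs can be inspected here */
            return 1;       /* also trace this function's return */
    }
    #endif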
|
Revision tags: v6.13-rc1, v6.12
|
# 00199ed6 | 16-Nov-2024 | Shrikanth Hegde <sshegde@linux.ibm.com>
powerpc: Add preempt lazy support
Define the preempt lazy bit for powerpc. Use bit 9, which is free and within the 16-bit range of NEED_RESCHED, so the compiler can issue a single andi.
Since powerpc doesn't use the generic entry/exit code, add the lazy check at exit-to-user. CONFIG_PREEMPTION is defined for lazy/full/rt, so use it for the return-to-kernel path.
Ran a few benchmarks and db workload on Power10. Performance is close to preempt=none/voluntary.
Since Powerpc systems can have large core count and large memory, preempt lazy is going to be helpful in avoiding soft lockup issues.
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Ankur Arora <ankur.a.arora@oracle.com>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.ibm.com>
Link: https://patch.msgid.link/20241116192306.88217-2-sshegde@linux.ibm.com
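Schematically, the bit-9 choice keeps both resched bits within a single 16-bit immediate test (define names and the existing bit number are assumed, not copied from the patch):

    #define TIF_NEED_RESCHED        2       /* assumed existing bit */
    #define TIF_NEED_RESCHED_LAZY   9       /* new: within andi.'s 16-bit range */

    #define _TIF_NEED_RESCHED       (1 << TIF_NEED_RESCHED)
    #define _TIF_NEED_RESCHED_LAZY  (1 << TIF_NEED_RESCHED_LAZY)

    /* exit-to-user can then test both flags with one instruction:
     *      andi. r0, r9, _TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY
     */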
|
# e7f0a3a6 | 11-Dec-2024 | Rodrigo Vivi <rodrigo.vivi@intel.com>
Merge drm/drm-next into drm-intel-next
Catching up with 6.13-rc2.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
# 8f109f28 | 02-Dec-2024 | Rodrigo Vivi <rodrigo.vivi@intel.com>
Merge drm/drm-next into drm-xe-next
A backmerge to get the PMT preparation work for merging the BMG PMT support.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
|
# 3aba2eba | 02-Dec-2024 | Maxime Ripard <mripard@kernel.org>
Merge drm/drm-next into drm-misc-next
Kickstart 6.14 cycle.
Signed-off-by: Maxime Ripard <mripard@kernel.org>
|
# bcfd5f64 | 02-Dec-2024 | Ingo Molnar <mingo@kernel.org>
Merge tag 'v6.13-rc1' into perf/core, to refresh the branch
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
# c34e9ab9 | 05-Dec-2024 | Takashi Iwai <tiwai@suse.de>
Merge tag 'asoc-fix-v6.13-rc1' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Fixes for v6.13
A few small fixes for v6.13, all system specific - the biggest thing is the fix for jack handling over suspend on some Intel laptops.
|
Revision tags: v6.12-rc7, v6.12-rc6, v6.12-rc5, v6.12-rc4
|
# 77b67945 | 14-Oct-2024 | Namhyung Kim <namhyung@kernel.org>
Merge tag 'v6.12-rc3' into perf-tools-next
To get the fixes in the current perf-tools tree.
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
|
# 42d9e8b7 | 23-Nov-2024 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'powerpc-6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc updates from Michael Ellerman:
- Rework kfence support for the HPT MMU to work on systems with >= 16TB of RAM.
- Remove the powerpc "maple" platform, used by the "Yellow Dog Powerstation".
- Add support for DYNAMIC_FTRACE_WITH_CALL_OPS, DYNAMIC_FTRACE_WITH_DIRECT_CALLS & BPF Trampolines.
- Add support for running KVM nested guests on Power11.
- Other small features, cleanups and fixes.
Thanks to Amit Machhiwal, Arnd Bergmann, Christophe Leroy, Costa Shulyupin, David Hunter, David Wang, Disha Goel, Gautam Menghani, Geert Uytterhoeven, Hari Bathini, Julia Lawall, Kajol Jain, Keith Packard, Lukas Bulwahn, Madhavan Srinivasan, Markus Elfring, Michal Suchanek, Ming Lei, Mukesh Kumar Chaurasiya, Nathan Chancellor, Naveen N Rao, Nicholas Piggin, Nysal Jan K.A, Paulo Miguel Almeida, Pavithra Prakash, Ritesh Harjani (IBM), Rob Herring (Arm), Sachin P Bappalige, Shen Lichuan, Simon Horman, Sourabh Jain, Thomas Weißschuh, Thorsten Blum, Thorsten Leemhuis, Venkat Rao Bagalkote, Zhang Zekun, and zhang jiao.
* tag 'powerpc-6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (89 commits)
  EDAC/powerpc: Remove PPC_MAPLE drivers
  powerpc/perf: Add per-task/process monitoring to vpa_pmu driver
  powerpc/kvm: Add vpa latency counters to kvm_vcpu_arch
  docs: ABI: sysfs-bus-event_source-devices-vpa-pmu: Document sysfs event format entries for vpa_pmu
  powerpc/perf: Add perf interface to expose vpa counters
  MAINTAINERS: powerpc: Mark Maddy as "M"
  powerpc/Makefile: Allow overriding CPP
  powerpc-km82xx.c: replace of_node_put() with __free
  ps3: Correct some typos in comments
  powerpc/kexec: Fix return of uninitialized variable
  macintosh: Use common error handling code in via_pmu_led_init()
  powerpc/powermac: Use of_property_match_string() in pmac_has_backlight_type()
  powerpc: remove dead config options for MPC85xx platform support
  powerpc/xive: Use cpumask_intersects()
  selftests/powerpc: Remove the path after initialization.
  powerpc/xmon: symbol lookup length fixed
  powerpc/ep8248e: Use %pa to format resource_size_t
  powerpc/ps3: Reorganize kerneldoc parameter names
  KVM: PPC: Book3S HV: Fix kmv -> kvm typo
  powerpc/sstep: make emulate_vsx_load and emulate_vsx_store static
  ...
|
# 71db948b | 30-Oct-2024 | Naveen N Rao <naveen@kernel.org>
samples/ftrace: Add support for ftrace direct samples on powerpc
Add powerpc 32-bit and 64-bit samples for ftrace direct. This serves to show the sample instruction sequence to be used by ftrace direct calls to adhere to the ftrace ABI.
On 64-bit powerpc, TOC setup requires some additional work.
Signed-off-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://patch.msgid.link/20241030070850.1361304-17-hbathini@linux.ibm.com
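The consumer-side registration these samples exercise looks roughly like the following (a sketch: the traced function and the asm trampoline, which the samples provide per the ftrace direct ABI, are placeholders):

    #include <linux/ftrace.h>
    #include <linux/mm.h>

    extern void my_tramp(void);     /* arch asm trampoline from the sample */

    static struct ftrace_ops direct;

    static int __init direct_sample_init(void)
    {
            ftrace_set_filter_ip(&direct, (unsigned long)handle_mm_fault, 0, 0);
            return register_ftrace_direct(&direct, (unsigned long)my_tramp);
    }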
|
# a52f6043 | 30-Oct-2024 | Naveen N Rao <naveen@kernel.org>
powerpc/ftrace: Add support for DYNAMIC_FTRACE_WITH_DIRECT_CALLS
Add support for DYNAMIC_FTRACE_WITH_DIRECT_CALLS similar to the arm64 implementation.
ftrace direct calls allow custom trampolines to be called into directly from function ftrace call sites, bypassing the ftrace trampoline completely. This functionality is currently utilized by BPF trampolines to hook into kernel function entries.
Since we have limited relative branch range, we support ftrace direct calls through support for DYNAMIC_FTRACE_WITH_CALL_OPS. In this approach, ftrace trampoline is not entirely bypassed. Rather, it is re-purposed into a stub that reads direct_call field from the associated ftrace_ops structure and branches into that, if it is not NULL. For this, it is sufficient if we can ensure that the ftrace trampoline is reachable from all traceable functions.
When multiple ftrace_ops are associated with a call site, we utilize a call back to set pt_regs->orig_gpr3 that can then be tested on the return path from the ftrace trampoline to branch into the direct caller.
Signed-off-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://patch.msgid.link/20241030070850.1361304-16-hbathini@linux.ibm.com
|
# e717754f | 30-Oct-2024 | Naveen N Rao <naveen@kernel.org>
powerpc/ftrace: Add support for DYNAMIC_FTRACE_WITH_CALL_OPS
Implement support for DYNAMIC_FTRACE_WITH_CALL_OPS similar to the arm64 implementation.
This works by patching-in a pointer to an associated ftrace_ops structure before each traceable function. If multiple ftrace_ops are associated with a call site, then a special ftrace_list_ops is used to enable iterating over all the registered ftrace_ops. If no ftrace_ops are associated with a call site, then a special ftrace_nop_ops structure is used to render the ftrace call as a no-op. ftrace trampoline can then read the associated ftrace_ops for a call site by loading from an offset from the LR, and branch directly to the associated function.
The primary advantage with this approach is that we don't have to iterate over all the registered ftrace_ops for call sites that have a single ftrace_ops registered. This is the equivalent of implementing support for dynamic ftrace trampolines, which set up a special ftrace trampoline for each registered ftrace_ops and have individual call sites branch into those directly.
A secondary advantage is that this gives us a way to add support for direct ftrace callers without having to resort to using stubs. The address of the direct call trampoline can be loaded from the ftrace_ops structure.
To support this, we reserve a nop before each function on 32-bit powerpc. For 64-bit powerpc, two nops are reserved before each out-of-line stub. During ftrace activation, we update this location with the associated ftrace_ops pointer. Then, on ftrace entry, we load from this location and call into ftrace_ops->func().
For 64-bit powerpc, we ensure that the out-of-line stub area is doubleword aligned so that ftrace_ops address can be updated atomically.
Signed-off-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://patch.msgid.link/20241030070850.1361304-15-hbathini@linux.ibm.com
|
# cf9bc0ef | 30-Oct-2024 | Naveen N Rao <naveen@kernel.org>
powerpc64/ftrace: Support .text larger than 32MB with out-of-line stubs
We are restricted to a .text size of ~32MB when using out-of-line function profile sequence. Allow this to be extended up to the previous limit of ~64MB by reserving space in the middle of .text.
A new config option CONFIG_PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE is introduced to specify the number of function stubs that are reserved in .text. On boot, ftrace utilizes stubs from this area first before using the stub area at the end of .text.
A ppc64le defconfig has ~44k functions that can be traced. A more conservative value of 32k functions is chosen as the default value of PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE so that we do not allot more space than necessary by default. If building a kernel that only has 32k trace-able functions, we won't allot any more space at the end of .text during the pass on vmlinux.o. Otherwise, only the remaining functions get space for stubs at the end of .text. This default value should help cover a .text size of ~48MB in total (including space reserved at the end of .text which can cover up to 32MB), which should be sufficient for most common builds. For a very small kernel build, this can be set to 0. Or, this can be bumped up to a larger value to support vmlinux .text size up to ~64MB.
Signed-off-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://patch.msgid.link/20241030070850.1361304-14-hbathini@linux.ibm.com
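The new option would take roughly this Kconfig shape (a sketch based on the description above, not the literal entry; the parent option name is assumed):

    config PPC_FTRACE_OUT_OF_LINE_NUM_RESERVE
            int "Number of ftrace out-of-line stubs to reserve within .text"
            depends on PPC_FTRACE_OUT_OF_LINE       # assumed parent symbol
            default 32768
            help
              Stubs reserved in the middle of .text so function tracing can
              reach beyond the ~32MB relative-branch range.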
|
# eec37961 | 30-Oct-2024 | Naveen N Rao <naveen@kernel.org>
powerpc64/ftrace: Move ftrace sequence out of line
Function profile sequence on powerpc includes two instructions at the beginning of each function:

    mflr    r0
    bl      ftrace_caller
The call to ftrace_caller() gets nop'ed out during kernel boot and is patched in when ftrace is enabled.
Given the sequence, we cannot return from ftrace_caller with 'blr' as we need to keep LR and r0 intact. This results in link stack (return address predictor) imbalance when ftrace is enabled. To address that, we would like to use a three instruction sequence:

    mflr    r0
    bl      ftrace_caller
    mtlr    r0

Furthermore, to support DYNAMIC_FTRACE_WITH_CALL_OPS, we need to reserve two instruction slots before the function. This results in a total of five instruction slots to be reserved for ftrace use on each function that is traced.
Move the function profile sequence out-of-line to minimize its impact. To do this, we reserve a single nop at function entry using -fpatchable-function-entry=1 and add a pass on vmlinux.o to determine the total number of functions that can be traced. This is then used to generate a .S file reserving the appropriate amount of space for use as ftrace stubs, which is built and linked into vmlinux.
On bootup, the stub space is split into separate stubs per function and populated with the proper instruction sequence. A pointer to the associated stub is maintained in dyn_arch_ftrace.
For modules, space for ftrace stubs is reserved from the generic module stub space.
This is restricted to and enabled by default only on 64-bit powerpc, though there are some changes to accommodate 32-bit powerpc. This is done so that 32-bit powerpc could choose to opt into this based on further tests and benchmarks.
As an example, after this patch, kernel functions will have a single nop at function entry:

    <kernel_clone>:
        addis   r2,r12,467
        addi    r2,r2,-16028
        nop
        mfocrf  r11,8
        ...

When ftrace is enabled, the nop is converted to an unconditional branch to the stub associated with that function:

    <kernel_clone>:
        addis   r2,r12,467
        addi    r2,r2,-16028
        b       ftrace_ool_stub_text_end+0x11b28
        mfocrf  r11,8
        ...

The associated stub:

    <ftrace_ool_stub_text_end+0x11b28>:
        mflr    r0
        bl      ftrace_caller
        mtlr    r0
        b       kernel_clone+0xc
        ...
This change showed an improvement of ~10% in null_syscall benchmark on a Power 10 system with ftrace enabled.
Signed-off-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://patch.msgid.link/20241030070850.1361304-13-hbathini@linux.ibm.com
|
# 782f46cb | 30-Oct-2024 | Naveen N Rao <naveen@kernel.org>
powerpc/ftrace: Add a postlink script to validate function tracer
Function tracer on powerpc can only work with vmlinux having a .text size of up to ~64MB due to powerpc branch instruction having a limited relative branch range of 32MB. Today, this is only detected on kernel boot when ftrace is init'ed. Add a post-link script to check the size of .text so that we can detect this at build time, and break the build if necessary.
We add a dependency on !COMPILE_TEST for CONFIG_HAVE_FUNCTION_TRACER so that allyesconfig and other test builds can continue to work without enabling ftrace.
Signed-off-by: Naveen N Rao <naveen@kernel.org>
Signed-off-by: Hari Bathini <hbathini@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://patch.msgid.link/20241030070850.1361304-11-hbathini@linux.ibm.com
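A build-time check of this sort can be as simple as the following (illustrative shell, not the actual postlink script):

    #!/bin/sh
    # Fail the build if vmlinux .text exceeds what ftrace's limited
    # branch range can cover (~64MB per the description above).
    max=$((64 * 1024 * 1024))
    text=$(${CROSS_COMPILE}size -A vmlinux | awk '$1 == ".text" { print $2 }')
    if [ "$text" -gt "$max" ]; then
            echo "vmlinux .text ($text bytes) too large for ftrace" >&2
            exit 1
    fi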
|
Revision tags: v6.12-rc3
|
# 46e1879d | 09-Oct-2024 | Nathan Chancellor <nathan@kernel.org>
powerpc: Fix stack protector Kconfig test for clang
Clang's in-progress per-task stack protector support [1] does not work with the current Kconfig checks because '-mstack-protector-guard-offset' is not provided, unlike all other architecture Kconfig checks.
    $ fd Kconfig -x rg -l mstack-protector-guard-offset
    ./arch/arm/Kconfig
    ./arch/riscv/Kconfig
    ./arch/arm64/Kconfig
This produces an error from clang, which is interpreted as the flags not being supported at all when they really are.
    $ clang --target=powerpc64-linux-gnu \
        -mstack-protector-guard=tls \
        -mstack-protector-guard-reg=r13 \
        -c -o /dev/null -x c /dev/null
    clang: error: '-mstack-protector-guard=tls' is used without '-mstack-protector-guard-offset', and there is no default
This argument will always be provided by the build system, so mirror other architectures and use '-mstack-protector-guard-offset=0' for testing support, which fixes the issue for clang and does not regress support with GCC.
Even with the first problem addressed, the 32-bit test continues to fail because Kbuild uses the powerpc64le-linux-gnu target for clang and nothing flips the target to 32-bit, resulting in an error about an invalid register value:
    $ clang --target=powerpc64le-linux-gnu \
        -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 \
        -mstack-protector-guard-offset=0 \
        -x c -c -o /dev/null /dev/null
    clang: error: invalid value 'r2' in 'mstack-protector-guard-reg=', expected one of: r13
While GCC allows arbitrary registers, the implementation of '-mstack-protector-guard=tls' in LLVM shares the same code path as the user space thread local storage implementation, which uses a fixed register (2 for 32-bit and 13 for 64-bit), so the command line parsing enforces this limitation.
Use the Kconfig macro '$(m32-flag)', which expands to '-m32' when supported, in the stack protector support cc-option call to properly switch the target to a 32-bit one, which matches what happens in Kbuild. While the 64-bit macro does not strictly need it, add the equivalent 64-bit option for symmetry.
Cc: stable@vger.kernel.org # 6.1+
Link: https://github.com/llvm/llvm-project/pull/110928 [1]
Reviewed-by: Keith Packard <keithp@keithp.com>
Tested-by: Keith Packard <keithp@keithp.com>
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://patch.msgid.link/20241009-powerpc-fix-stackprotector-test-clang-v2-1-12fb86b31857@kernel.org
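The fixed Kconfig test then looks approximately like this (sketch; the symbol name is invented, the flags follow the commit message):

    config STACKPROTECTOR_GUARD_TEST_32
            def_bool $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0)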
|