# SPDX-License-Identifier: GPL-2.0
#
# General architecture dependent options
#

#
# Note: arch/$(SRCARCH)/Kconfig needs to be included first so that it can
# override the default values in this file.
#
source "arch/$(SRCARCH)/Kconfig"

config ARCH_CONFIGURES_CPU_MITIGATIONS
	bool

if !ARCH_CONFIGURES_CPU_MITIGATIONS
config CPU_MITIGATIONS
	def_bool y
endif

menu "General architecture-dependent options"

config ARCH_HAS_SUBPAGE_FAULTS
	bool
	help
	  Select if the architecture can check permissions at sub-page
	  granularity (e.g. arm64 MTE). The probe_user_*() functions
	  must be implemented.

config HOTPLUG_SMT
	bool

config SMT_NUM_THREADS_DYNAMIC
	bool

# Selected by HOTPLUG_CORE_SYNC_DEAD or HOTPLUG_CORE_SYNC_FULL
config HOTPLUG_CORE_SYNC
	bool

# Basic CPU dead synchronization selected by architecture
config HOTPLUG_CORE_SYNC_DEAD
	bool
	select HOTPLUG_CORE_SYNC

# Full CPU synchronization with alive state selected by architecture
config HOTPLUG_CORE_SYNC_FULL
	bool
	select HOTPLUG_CORE_SYNC_DEAD if HOTPLUG_CPU
	select HOTPLUG_CORE_SYNC

config HOTPLUG_SPLIT_STARTUP
	bool
	select HOTPLUG_CORE_SYNC_FULL

config HOTPLUG_PARALLEL
	bool
	select HOTPLUG_SPLIT_STARTUP

config GENERIC_ENTRY
	bool

config KPROBES
	bool "Kprobes"
	depends on HAVE_KPROBES
	select KALLSYMS
	select EXECMEM
	select NEED_TASKS_RCU
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function.  register_kprobe() establishes
	  a probepoint and specifies the callback.  Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".
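
# A minimal sketch of the register_kprobe() interface mentioned in the KPROBES
# help above (the probed symbol and handler name are illustrative only):
#
#	#include <linux/kprobes.h>
#
#	static int handle_pre(struct kprobe *p, struct pt_regs *regs)
#	{
#		pr_info("kprobe hit at %s\n", p->symbol_name);
#		return 0;	/* 0: let the probed instruction execute */
#	}
#
#	static struct kprobe kp = {
#		.symbol_name	= "kernel_clone",
#		.pre_handler	= handle_pre,
#	};
#
#	/* register_kprobe(&kp) arms the probe; unregister_kprobe(&kp) removes it. */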

config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	select OBJTOOL if HAVE_JUMP_LABEL_HACK
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM, has such
	  branches and includes support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  the condition is slower, but such updates are always very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config STATIC_CALL_SELFTEST
	bool "Static call selftest"
	depends on HAVE_STATIC_CALL
	help
	  Boot time self-test of the call patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select NEED_TASKS_RCU

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	depends on ARCH_SUPPORTS_UPROBES
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/core-api/unaligned-memory-access.rst for
	  more information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler).

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/core-api/unaligned-memory-access.rst for more
	  information on the topic of unaligned memory accesses.
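
# The get_unaligned/put_unaligned helpers mentioned above come from the
# unaligned-access header (<asm/unaligned.h>, moved to <linux/unaligned.h> in
# newer trees). A rough sketch of parsing a possibly unaligned header (the
# field layout is illustrative only):
#
#	struct wire_hdr { __be32 len; __le16 flags; } __packed;
#
#	static void parse_hdr(const void *buf)
#	{
#		const struct wire_hdr *h = buf;
#		u32 len   = get_unaligned_be32(&h->len);
#		u16 flags = get_unaligned_le16(&h->flags);
#		/* ... */
#	}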

config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && (HAVE_KRETPROBES || HAVE_RETHOOK)

config KRETPROBE_ON_RETHOOK
	def_bool y
	depends on HAVE_RETHOOK
	depends on KRETPROBES
	select RETHOOK

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config ARCH_CORRECT_STACKTRACE_ON_KRETPROBE
	bool
	help
	  Since kretprobes modifies the return address on the stack, a
	  stacktrace may see the kretprobe trampoline address instead
	  of the correct one. If the architecture stacktrace code and
	  unwinder can adjust such entries, select this configuration.

config HAVE_FUNCTION_ERROR_INJECTION
	bool

config HAVE_NMI
	bool

config HAVE_FUNCTION_DESCRIPTORS
	bool

config TRACE_IRQFLAGS_SUPPORT
	bool

config TRACE_IRQFLAGS_NMI_SUPPORT
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls ptrace_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls resume_user_mode_work()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

config ARCH_HAS_FORTIFY_SOURCE
	bool
	help
	  An architecture should select this when it can successfully
	  build and run with CONFIG_FORTIFY_SOURCE.

#
# Select if the arch provides a historic keepinit alias for the retain_initrd
# command line option
#
config ARCH_HAS_KEEPINITRD
	bool

# Select if arch has all set_memory_ro/rw/x/nx() functions in asm/cacheflush.h
config ARCH_HAS_SET_MEMORY
	bool

# Select if arch has all set_direct_map_invalid/default() functions
config ARCH_HAS_SET_DIRECT_MAP
	bool
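
# A minimal usage sketch of the set_memory_*() hooks named above, assuming the
# <linux/set_memory.h> prototypes; the address and page count are illustrative:
#
#	#include <linux/set_memory.h>
#
#	/* Make one page of 'buf' read-only, then mark it executable as well. */
#	set_memory_ro((unsigned long)buf, 1);
#	set_memory_x((unsigned long)buf, 1);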

#
# Select if the architecture provides the arch_dma_set_uncached symbol to
# either provide an uncached segment alias for a DMA allocation, or
# to remap the page tables in place.
#
config ARCH_HAS_DMA_SET_UNCACHED
	bool

#
# Select if the architecture provides the arch_dma_clear_uncached symbol
# to undo an in-place page table remap for uncached access.
#
config ARCH_HAS_DMA_CLEAR_UNCACHED
	bool

config ARCH_HAS_CPU_FINALIZE_INIT
	bool

# The architecture has a per-task state that includes the mm's PASID
config ARCH_HAS_CPU_PASID
	bool
	select IOMMU_MM_DATA

config HAVE_ARCH_THREAD_STRUCT_WHITELIST
	bool
	help
	  An architecture should select this to provide hardened usercopy
	  knowledge about what region of the thread_struct should be
	  whitelisted for copying to userspace. Normally this is only the
	  FPU registers. Specifically, arch_thread_struct_whitelist()
	  should be implemented. Without this, the entire thread_struct
	  field in task_struct will be left whitelisted.

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config ARCH_WANTS_NO_INSTR
	bool
	help
	  An architecture should select this if the noinstr macro is being used on
	  functions to denote that the toolchain should avoid instrumenting such
	  functions and is required for correctness.

config ARCH_32BIT_OFF_T
	bool
	depends on !64BIT
	help
	  All new 32-bit architectures should have 64-bit off_t type on
	  userspace side which corresponds to the loff_t kernel type. This
	  is the requirement for modern ABIs. Some existing architectures
	  still support 32-bit off_t. This option is enabled for all such
	  architectures explicitly.

# Selected by 64 bit architectures which have a 32 bit f_tinode in struct ustat
config ARCH_32BIT_USTAT_F_TINODE
	bool

config HAVE_ASM_MODVERSIONS
	bool
	help
	  This symbol should be selected by an architecture if it provides
	  <asm/asm-prototypes.h> to support the module versioning for
	  symbols exported from assembly code.
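
# A sketch of the <asm/asm-prototypes.h> pattern referred to above. The
# contents are illustrative only; each architecture lists the symbols it
# actually exports from assembly so that genksyms can compute CRCs for
# CONFIG_MODVERSIONS:
#
#	/* arch/<arch>/include/asm/asm-prototypes.h */
#	#include <asm-generic/asm-prototypes.h>
#	#include <linux/string.h>
#
#	/*
#	 * memcpy()/memset() here are implemented and EXPORT_SYMBOL()ed in .S
#	 * files; the C prototypes pulled in above give the versioning code
#	 * something to hash.
#	 */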

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example, the kprobes-based event tracer needs this API.

config HAVE_RSEQ
	bool
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	help
	  This symbol should be selected by an architecture if it
	  supports an implementation of restartable sequences.

config HAVE_RUST
	bool
	help
	  This symbol should be selected by an architecture if it
	  supports Rust.

config HAVE_FUNCTION_ARG_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access function arguments from pt_regs,
	  declared in asm/ptrace.h.

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. It also supports counting CPU cycle events to
	  determine how many clock cycles have elapsed in a given period.

config HAVE_HARDLOCKUP_DETECTOR_PERF
	bool
	depends on HAVE_PERF_EVENTS_NMI
	help
	  The arch chooses to use the generic perf-NMI-based hardlockup
	  detector. Must define HAVE_PERF_EVENTS_NMI.
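
# Roughly what the generic perf-based hardlockup detector does, sketched with
# illustrative names and an illustrative sample period (the real code arms one
# event per CPU; overflow arrives as an NMI on architectures where the PMU
# interrupt is an NMI):
#
#	static void wd_overflow(struct perf_event *event,
#				struct perf_sample_data *data,
#				struct pt_regs *regs)
#	{
#		/* NMI context: check whether this CPU made forward progress */
#	}
#
#	static struct perf_event_attr wd_attr = {
#		.type		= PERF_TYPE_HARDWARE,
#		.config		= PERF_COUNT_HW_CPU_CYCLES,
#		.size		= sizeof(struct perf_event_attr),
#		.sample_period	= 1000000000ULL,
#	};
#
#	struct perf_event *evt =
#		perf_event_create_kernel_counter(&wd_attr, smp_processor_id(),
#						 NULL, wd_overflow, NULL);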

config HAVE_HARDLOCKUP_DETECTOR_ARCH
	bool
	help
	  The arch provides its own hardlockup detector implementation instead
	  of the generic ones.

	  It uses the same command line parameters, and sysctl interface,
	  as the generic hardlockup detectors.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_ARCH_JUMP_LABEL_RELATIVE
	bool

config MMU_GATHER_TABLE_FREE
	bool

config MMU_GATHER_RCU_TABLE_FREE
	bool
	select MMU_GATHER_TABLE_FREE

config MMU_GATHER_PAGE_SIZE
	bool

config MMU_GATHER_NO_RANGE
	bool
	select MMU_GATHER_MERGE_VMAS

config MMU_GATHER_NO_FLUSH_CACHE
	bool

config MMU_GATHER_MERGE_VMAS
	bool

config MMU_GATHER_NO_GATHER
	bool
	depends on MMU_GATHER_TABLE_FREE

config ARCH_WANT_IRQS_OFF_ACTIVATE_MM
	bool
	help
	  Temporary select until all architectures can be converted to have
	  irqs disabled over activate_mm. Architectures that do IPI based TLB
	  shootdowns should enable this.

# Use normal mm refcounting for MMU_LAZY_TLB kernel thread references.
# MMU_LAZY_TLB_REFCOUNT=n can improve the scalability of context switching
# to/from kernel threads when the same mm is running on a lot of CPUs (a large
# multi-threaded application), by reducing contention on the mm refcount.
#
# This can be disabled if the architecture ensures no CPUs are using an mm as a
# "lazy tlb" beyond its final refcount (i.e., by the time __mmdrop frees the mm
# or its kernel page tables). This could be arranged by arch_exit_mmap(), or
# final exit(2) TLB flush, for example.
#
# To implement this, an arch *must*:
# Ensure the _lazy_tlb variants of mmgrab/mmdrop are used when manipulating
# the lazy tlb reference of a kthread's ->active_mm (non-arch code has been
# converted already).
config MMU_LAZY_TLB_REFCOUNT
	def_bool y
	depends on !MMU_LAZY_TLB_SHOOTDOWN

# This option allows MMU_LAZY_TLB_REFCOUNT=n. It ensures no CPUs are using an
# mm as a lazy tlb beyond its last reference count, by shooting down these
# users before the mm is deallocated. __mmdrop() first IPIs all CPUs that may
# be using the mm as a lazy tlb, so that they may switch themselves to using
# init_mm for their active mm. mm_cpumask(mm) is used to determine which CPUs
# may be using mm as a lazy tlb mm.
#
# To implement this, an arch *must*:
# - At the time of the final mmdrop of the mm, ensure mm_cpumask(mm) contains
#   at least all possible CPUs in which the mm is lazy.
# - It must meet the requirements for MMU_LAZY_TLB_REFCOUNT=n (see above).
config MMU_LAZY_TLB_SHOOTDOWN
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config ARCH_HAVE_EXTRA_ELF_NOTES
	bool
	help
	  An architecture should select this in order to enable adding an
	  arch-specific ELF note section to core files. It must provide two
	  functions: elf_coredump_extra_notes_size() and
	  elf_coredump_extra_notes_write() which are invoked by the ELF core
	  dumper.

config ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However, selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WEAK_RELEASE_ACQUIRE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP
	bool
	help
	  An arch should select this symbol to support seccomp mode 1 (the fixed
	  syscall policy), and must provide an override for __NR_seccomp_sigreturn,
	  and for the compat syscalls if the asm-generic/seccomp.h defaults need
	  adjustment:
	  - __NR_seccomp_read_32
	  - __NR_seccomp_write_32
	  - __NR_seccomp_exit_32
	  - __NR_seccomp_sigreturn_32

config HAVE_ARCH_SECCOMP_FILTER
	bool
	select HAVE_ARCH_SECCOMP
	help
	  An arch should select this symbol if it provides all of these things:
	  - all the requirements for HAVE_ARCH_SECCOMP
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up
	  - if !HAVE_SPARSE_SYSCALL_NR, have SECCOMP_ARCH_NATIVE,
	    SECCOMP_ARCH_NATIVE_NR, SECCOMP_ARCH_NATIVE_NAME defined. If
	    COMPAT is supported, have the SECCOMP_ARCH_COMPAT* defines too.

config SECCOMP
	prompt "Enable seccomp to safely execute untrusted bytecode"
	def_bool y
	depends on HAVE_ARCH_SECCOMP
	help
	  This kernel feature is useful for number crunching applications
	  that may need to handle untrusted bytecode during their
	  execution. By using pipes or other transports made available
	  to the process as file descriptors supporting the read/write
	  syscalls, it's possible to isolate those applications in their
	  own address space using seccomp. Once seccomp is enabled via
	  prctl(PR_SET_SECCOMP) or the seccomp() syscall, it cannot be
	  disabled and the task is only allowed to execute a few safe
	  syscalls defined by each seccomp mode.

	  If unsure, say Y.
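
# How a task enters seccomp mode 1 from userspace, as described in the help
# text above (a minimal sketch; after this call essentially only read/write/
# exit/sigreturn remain allowed and any other syscall kills the task):
#
#	#include <sys/prctl.h>
#	#include <linux/seccomp.h>
#
#	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
#		perror("prctl(PR_SET_SECCOMP)");
#
#	/* Equivalent via the seccomp(2) syscall:
#	 * seccomp(SECCOMP_SET_MODE_STRICT, 0, NULL) -- use syscall(2) if the
#	 * libc provides no wrapper. */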

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/userspace-api/seccomp_filter.rst for details.

config SECCOMP_CACHE_DEBUG
	bool "Show seccomp filter cache status in /proc/pid/seccomp_cache"
	depends on SECCOMP_FILTER && !HAVE_SPARSE_SYSCALL_NR
	depends on PROC_FS
	help
	  This enables the /proc/pid/seccomp_cache interface to monitor
	  seccomp cache data. The file format is subject to change. Reading
	  the file requires CAP_SYS_ADMIN.

	  This option is for debugging only. Enabling presents the risk that
	  an adversary may be able to infer the seccomp filter logic.

	  If unsure, say N.

config HAVE_ARCH_STACKLEAK
	bool
	help
	  An architecture should select this if it has the code which
	  fills the used part of the kernel stack with the STACKLEAK_POISON
	  value before returning from system calls.

config HAVE_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config STACKPROTECTOR
	bool "Stack Protector buffer overflow detection"
	depends on HAVE_STACKPROTECTOR
	depends on $(cc-option,-fstack-protector)
	default y
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning.  Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.
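
# Illustration of the rule above: with -fstack-protector, a function like the
# following gets canary setup/check code because of its on-stack character
# buffer (the function name is hypothetical):
#
#	int copy_name(const char __user *uname)
#	{
#		char buf[32];	/* >= 8 byte char array => canary inserted */
#
#		if (copy_from_user(buf, uname, sizeof(buf) - 1))
#			return -EFAULT;
#		buf[sizeof(buf) - 1] = '\0';
#		return 0;
#	}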

config STACKPROTECTOR_STRONG
	bool "Strong Stack Protector"
	depends on STACKPROTECTOR
	depends on $(cc-option,-fstack-protector-strong)
	default y
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

config ARCH_SUPPORTS_SHADOW_CALL_STACK
	bool
	help
	  An architecture should select this if it supports the compiler's
	  Shadow Call Stack and implements runtime support for shadow stack
	  switching.

config SHADOW_CALL_STACK
	bool "Shadow Call Stack"
	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
	depends on DYNAMIC_FTRACE_WITH_ARGS || DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
	depends on MMU
	help
	  This option enables the compiler's Shadow Call Stack, which
	  uses a shadow stack to protect function return addresses from
	  being overwritten by an attacker. More information can be found
	  in the compiler's documentation:

	  - Clang: https://clang.llvm.org/docs/ShadowCallStack.html
	  - GCC: https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html#Instrumentation-Options

	  Note that security guarantees in the kernel differ from the
	  ones documented for user space. The kernel must store addresses
	  of shadow stacks in memory, which means an attacker capable of
	  reading and writing arbitrary memory may be able to locate them
	  and hijack control flow by modifying the stacks.

config DYNAMIC_SCS
	bool
	help
	  Set by the arch code if it relies on code patching to insert the
	  shadow call stack push and pop instructions rather than on the
	  compiler.

config LTO
	bool
	help
	  Selected if the kernel will be built using the compiler's LTO feature.

config LTO_CLANG
	bool
	select LTO
	help
	  Selected if the kernel will be built using Clang's LTO feature.

config ARCH_SUPPORTS_LTO_CLANG
	bool
	help
	  An architecture should select this option if it supports:
	  - compiling with Clang,
	  - compiling inline assembly with Clang's integrated assembler,
	  - and linking with LLD.

config ARCH_SUPPORTS_LTO_CLANG_THIN
	bool
	help
	  An architecture should select this option if it can support Clang's
	  ThinLTO mode.

config HAS_LTO_CLANG
	def_bool y
	depends on CC_IS_CLANG && LD_IS_LLD && AS_IS_LLVM
	depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
	depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
	depends on ARCH_SUPPORTS_LTO_CLANG
	depends on !FTRACE_MCOUNT_USE_RECORDMCOUNT
	# https://github.com/ClangBuiltLinux/linux/issues/1721
	depends on (!KASAN || KASAN_HW_TAGS || CLANG_VERSION >= 170000) || !DEBUG_INFO
	depends on (!KCOV || CLANG_VERSION >= 170000) || !DEBUG_INFO
	depends on !GCOV_KERNEL
	help
	  The compiler and Kconfig options support building with Clang's
	  LTO.

choice
	prompt "Link Time Optimization (LTO)"
	default LTO_NONE
	help
	  This option enables Link Time Optimization (LTO), which allows the
	  compiler to optimize binaries globally.

	  If unsure, select LTO_NONE. Note that LTO is very resource-intensive
	  so it's disabled by default.

config LTO_NONE
	bool "None"
	help
	  Build the kernel normally, without Link Time Optimization (LTO).

config LTO_CLANG_FULL
	bool "Clang Full LTO (EXPERIMENTAL)"
	depends on HAS_LTO_CLANG
	depends on !COMPILE_TEST
	select LTO_CLANG
	help
	  This option enables Clang's full Link Time Optimization (LTO), which
	  allows the compiler to optimize the kernel globally. If you enable
	  this option, the compiler generates LLVM bitcode instead of ELF
	  object files, and the actual compilation from bitcode happens at
	  the LTO link step, which may take several minutes depending on the
	  kernel configuration. More information can be found from LLVM's
	  documentation:

	  https://llvm.org/docs/LinkTimeOptimization.html

	  During link time, this option can use a large amount of RAM, and
	  may take much longer than the ThinLTO option.

config LTO_CLANG_THIN
	bool "Clang ThinLTO (EXPERIMENTAL)"
	depends on HAS_LTO_CLANG && ARCH_SUPPORTS_LTO_CLANG_THIN
	select LTO_CLANG
	help
	  This option enables Clang's ThinLTO, which allows for parallel
	  optimization and faster incremental compiles compared to the
	  CONFIG_LTO_CLANG_FULL option. More information can be found
	  from Clang's documentation:

	  https://clang.llvm.org/docs/ThinLTO.html

	  If unsure, say Y.
endchoice

config ARCH_SUPPORTS_CFI_CLANG
	bool
	help
	  An architecture should select this option if it can support Clang's
	  Control-Flow Integrity (CFI) checking.

config ARCH_USES_CFI_TRAPS
	bool

config CFI_CLANG
	bool "Use Clang's Control Flow Integrity (CFI)"
	depends on ARCH_SUPPORTS_CFI_CLANG
	depends on $(cc-option,-fsanitize=kcfi)
	help
	  This option enables Clang's forward-edge Control Flow Integrity
	  (CFI) checking, where the compiler injects a runtime check to each
	  indirect function call to ensure the target is a valid function with
	  the correct static type. This restricts possible call targets and
	  makes it more difficult for an attacker to exploit bugs that allow
	  the modification of stored function pointers. More information can be
	  found from Clang's documentation:

	  https://clang.llvm.org/docs/ControlFlowIntegrity.html

config CFI_PERMISSIVE
	bool "Use CFI in permissive mode"
	depends on CFI_CLANG
	help
	  When selected, Control Flow Integrity (CFI) violations result in a
	  warning instead of a kernel panic. This option should only be used
	  for finding indirect call type mismatches during development.

	  If unsure, say N.
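
# The kind of type mismatch the CFI_CLANG/CFI_PERMISSIVE options above catch,
# sketched with made-up types and names:
#
#	typedef int (*handler_t)(struct device *dev);
#
#	int bad_handler(void *opaque);		/* different prototype */
#
#	handler_t h = (handler_t)bad_handler;	/* the cast hides the mismatch */
#	h(dev);		/* indirect call: the kCFI check fails here because the
#			 * target's type hash does not match handler_t */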

config HAVE_ARCH_WITHIN_STACK_FRAMES
	bool
	help
	  An architecture should select this if it can walk the kernel stack
	  frames to determine if an object is part of either the arguments
	  or local variables (i.e. that it excludes saved return addresses,
	  and similar) by implementing an inline arch_within_stack_frames(),
	  which is used by CONFIG_HARDENED_USERCOPY.

config HAVE_CONTEXT_TRACKING_USER
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter(), either
	  optimized behind a static key or through the slow path using the
	  TIF_NOHZ flag. Exception handlers must be wrapped as well. Irqs are
	  already protected inside ct_irq_enter/ct_irq_exit() but preemption
	  or signal handling on irq exit still needs to be protected.

config HAVE_CONTEXT_TRACKING_USER_OFFSTACK
	bool
	help
	  Architecture neither relies on exception_enter()/exception_exit()
	  nor on schedule_user(). Also preempt_schedule_notrace() and
	  preempt_schedule_irq() can't be called in a preemptible section
	  while context tracking is CONTEXT_USER. This feature reflects a sane
	  entry implementation where the following requirements are met on
	  critical entry code, i.e. before user_exit() or after user_enter():

	  - Critical entry code isn't preemptible (or better yet:
	    not interruptible).
	  - No use of RCU read side critical sections, unless ct_nmi_enter()
	    got called.
	  - No use of instrumentation, unless instrumentation_begin() got
	    called.

config HAVE_TIF_NOHZ
	bool
	help
	  Arch relies on TIF_NOHZ and syscall slow path to implement context
	  tracking calls to user_enter()/user_exit().

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_IDLE
	bool
	help
	  Architecture has its own way to account idle CPU time and therefore
	  doesn't implement vtime_account_idle().

config ARCH_HAS_SCALED_CPUTIME
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_MOVE_PUD
	bool
	help
	  Architectures that select this are able to move page tables at the
	  PUD level. If there are only 3 page table levels, the move effectively
	  happens at the PGD level.

config HAVE_MOVE_PMD
	bool
	help
	  Archs that select this are able to move page tables at the PMD level.

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

#
# Archs that select this would be capable of PMD-sized vmaps (i.e.,
# arch_vmap_pmd_supported() returns true). The VM_ALLOW_HUGE_VMAP flag
# must be used to enable allocations to use hugepages.
#
config HAVE_ARCH_HUGE_VMALLOC
	depends on HAVE_ARCH_HUGE_VMAP
	bool

config ARCH_WANT_HUGE_PMD_SHARE
	bool

# Archs that want to use pmd_mkwrite on kernel memory need it defined even
# if there are no userspace memory management features that use it
config ARCH_WANT_KERNEL_PMD_MKWRITE
	bool

config ARCH_WANT_PMD_MKWRITE
	def_bool TRANSPARENT_HUGEPAGE || ARCH_WANT_KERNEL_PMD_MKWRITE

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch-specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config ARCH_WANTS_MODULES_DATA_IN_VMALLOC
	bool
	help
	  For architectures like powerpc/32 which have constraints on module
	  allocation and need to allocate module data outside of the module area.

config ARCH_WANTS_EXECMEM_LATE
	bool
	help
	  For architectures that do not allocate executable memory early on
	  boot, but rather require its initialization late when there is
	  enough entropy for module space randomization, for instance
	  arm64.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  The architecture executes not only the irq handler on the irq stack
	  but also irq_exit(). This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config HAVE_SOFTIRQ_ON_OWN_STACK
	bool
	help
	  Architecture provides a function to run __do_softirq() on a
	  separate stack.

config SOFTIRQ_ON_OWN_STACK
	def_bool HAVE_SOFTIRQ_ON_OWN_STACK && !PREEMPT_RT

config ALTERNATE_USER_ADDRESS_SPACE
	bool
	help
	  Architectures set this when the CPU uses separate address
	  spaces for kernel and user space pointers. In this case, the
	  access_ok() check on a __user pointer is skipped.

config PGTABLE_LEVELS
	int
	default 2
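
# The ELF/mmap randomization machinery below boils down to a small per-arch
# helper. A sketch of arch_mmap_rnd(), modelled loosely on existing
# implementations (assuming mmap_rnd_bits is the variable behind the
# /proc/sys/vm/mmap_rnd_bits tunable):
#
#	unsigned long arch_mmap_rnd(void)
#	{
#		unsigned long rnd;
#
#		rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);
#		return rnd << PAGE_SHIFT;
#	}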
Defined functions: 10232b68f6caSKees Cook - arch_mmap_rnd() 1024204db6edSKees Cook - arch_randomize_brk() 10252b68f6caSKees Cook 1026d07e2259SDaniel Cashmanconfig HAVE_ARCH_MMAP_RND_BITS 1027d07e2259SDaniel Cashman bool 1028d07e2259SDaniel Cashman help 1029d07e2259SDaniel Cashman An arch should select this symbol if it supports setting a variable 1030d07e2259SDaniel Cashman number of bits for use in establishing the base address for mmap 1031d07e2259SDaniel Cashman allocations, has MMU enabled and provides values for both: 1032d07e2259SDaniel Cashman - ARCH_MMAP_RND_BITS_MIN 1033d07e2259SDaniel Cashman - ARCH_MMAP_RND_BITS_MAX 1034d07e2259SDaniel Cashman 10355f56a5dfSJiri Slabyconfig HAVE_EXIT_THREAD 10365f56a5dfSJiri Slaby bool 10375f56a5dfSJiri Slaby help 10385f56a5dfSJiri Slaby An architecture implements exit_thread. 10395f56a5dfSJiri Slaby 1040d07e2259SDaniel Cashmanconfig ARCH_MMAP_RND_BITS_MIN 1041d07e2259SDaniel Cashman int 1042d07e2259SDaniel Cashman 1043d07e2259SDaniel Cashmanconfig ARCH_MMAP_RND_BITS_MAX 1044d07e2259SDaniel Cashman int 1045d07e2259SDaniel Cashman 1046d07e2259SDaniel Cashmanconfig ARCH_MMAP_RND_BITS_DEFAULT 1047d07e2259SDaniel Cashman int 1048d07e2259SDaniel Cashman 1049d07e2259SDaniel Cashmanconfig ARCH_MMAP_RND_BITS 1050d07e2259SDaniel Cashman int "Number of bits to use for ASLR of mmap base address" if EXPERT 1051d07e2259SDaniel Cashman range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX 1052d07e2259SDaniel Cashman default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT 1053d07e2259SDaniel Cashman default ARCH_MMAP_RND_BITS_MIN 1054d07e2259SDaniel Cashman depends on HAVE_ARCH_MMAP_RND_BITS 1055d07e2259SDaniel Cashman help 1056d07e2259SDaniel Cashman This value can be used to select the number of bits to use to 1057d07e2259SDaniel Cashman determine the random offset to the base address of vma regions 1058d07e2259SDaniel Cashman resulting from mmap allocations. This value will be bounded 1059d07e2259SDaniel Cashman by the architecture's minimum and maximum supported values. 
1060d07e2259SDaniel Cashman
1061d07e2259SDaniel Cashman This value can be changed after boot using the
1062d07e2259SDaniel Cashman /proc/sys/vm/mmap_rnd_bits tunable.
1063d07e2259SDaniel Cashman
1064d07e2259SDaniel Cashmanconfig HAVE_ARCH_MMAP_RND_COMPAT_BITS
1065d07e2259SDaniel Cashman bool
1066d07e2259SDaniel Cashman help
1067d07e2259SDaniel Cashman An arch should select this symbol if it supports running applications
1068d07e2259SDaniel Cashman in compatibility mode, supports setting a variable number of bits for
1069d07e2259SDaniel Cashman use in establishing the base address for mmap allocations, has MMU
1070d07e2259SDaniel Cashman enabled and provides values for both:
1071d07e2259SDaniel Cashman - ARCH_MMAP_RND_COMPAT_BITS_MIN
1072d07e2259SDaniel Cashman - ARCH_MMAP_RND_COMPAT_BITS_MAX
1073d07e2259SDaniel Cashman
1074d07e2259SDaniel Cashmanconfig ARCH_MMAP_RND_COMPAT_BITS_MIN
1075d07e2259SDaniel Cashman int
1076d07e2259SDaniel Cashman
1077d07e2259SDaniel Cashmanconfig ARCH_MMAP_RND_COMPAT_BITS_MAX
1078d07e2259SDaniel Cashman int
1079d07e2259SDaniel Cashman
1080d07e2259SDaniel Cashmanconfig ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
1081d07e2259SDaniel Cashman int
1082d07e2259SDaniel Cashman
1083d07e2259SDaniel Cashmanconfig ARCH_MMAP_RND_COMPAT_BITS
1084d07e2259SDaniel Cashman int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
1085d07e2259SDaniel Cashman range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
1086d07e2259SDaniel Cashman default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
1087d07e2259SDaniel Cashman default ARCH_MMAP_RND_COMPAT_BITS_MIN
1088d07e2259SDaniel Cashman depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
1089d07e2259SDaniel Cashman help
1090d07e2259SDaniel Cashman This value can be used to select the number of bits to use to
1091d07e2259SDaniel Cashman determine the random offset to the base address of vma regions
1092d07e2259SDaniel Cashman resulting from mmap allocations for compatible applications. This
1093d07e2259SDaniel Cashman value will be bounded by the architecture's minimum and maximum
1094d07e2259SDaniel Cashman supported values.
1095d07e2259SDaniel Cashman
1096d07e2259SDaniel Cashman This value can be changed after boot using the
1097d07e2259SDaniel Cashman /proc/sys/vm/mmap_rnd_compat_bits tunable.
1098d07e2259SDaniel Cashman
10991b028f78SDmitry Safonovconfig HAVE_ARCH_COMPAT_MMAP_BASES
11001b028f78SDmitry Safonov bool
11011b028f78SDmitry Safonov help
11021b028f78SDmitry Safonov This allows 64-bit applications to invoke the 32-bit mmap() syscall
11031b028f78SDmitry Safonov and vice versa, 32-bit applications to call the 64-bit mmap().
11041b028f78SDmitry Safonov Required for applications doing different bitness syscalls.
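
# Purely illustrative sketch; "foo" and the numeric values are made up and
# not taken from any in-tree architecture. An arch that wants the compat
# mmap ASLR knob above is expected to select HAVE_ARCH_MMAP_RND_COMPAT_BITS
# and to provide the MIN/MAX bounds consumed by the range statement:
#
#	# arch/foo/Kconfig (hypothetical)
#	config FOO
#		def_bool y
#		select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
#
#	config ARCH_MMAP_RND_COMPAT_BITS_MIN
#		default 8
#
#	config ARCH_MMAP_RND_COMPAT_BITS_MAX
#		default 16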
11051b028f78SDmitry Safonov
1106ba89f9c8SArnd Bergmannconfig HAVE_PAGE_SIZE_4KB
1107ba89f9c8SArnd Bergmann bool
1108ba89f9c8SArnd Bergmann
1109ba89f9c8SArnd Bergmannconfig HAVE_PAGE_SIZE_8KB
1110ba89f9c8SArnd Bergmann bool
1111ba89f9c8SArnd Bergmann
1112ba89f9c8SArnd Bergmannconfig HAVE_PAGE_SIZE_16KB
1113ba89f9c8SArnd Bergmann bool
1114ba89f9c8SArnd Bergmann
1115ba89f9c8SArnd Bergmannconfig HAVE_PAGE_SIZE_32KB
1116ba89f9c8SArnd Bergmann bool
1117ba89f9c8SArnd Bergmann
1118ba89f9c8SArnd Bergmannconfig HAVE_PAGE_SIZE_64KB
1119ba89f9c8SArnd Bergmann bool
1120ba89f9c8SArnd Bergmann
1121ba89f9c8SArnd Bergmannconfig HAVE_PAGE_SIZE_256KB
1122ba89f9c8SArnd Bergmann bool
1123ba89f9c8SArnd Bergmann
1124ba89f9c8SArnd Bergmannchoice
1125ba89f9c8SArnd Bergmann prompt "MMU page size"
1126ba89f9c8SArnd Bergmann
1127ba89f9c8SArnd Bergmannconfig PAGE_SIZE_4KB
1128ba89f9c8SArnd Bergmann bool "4KiB pages"
1129ba89f9c8SArnd Bergmann depends on HAVE_PAGE_SIZE_4KB
1130ba89f9c8SArnd Bergmann help
1131ba89f9c8SArnd Bergmann This option selects the standard 4KiB Linux page size, the only
1132ba89f9c8SArnd Bergmann available option on many architectures. Using a 4KiB page size will
1133ba89f9c8SArnd Bergmann minimize memory consumption and is therefore recommended for low
1134ba89f9c8SArnd Bergmann memory systems.
1135ba89f9c8SArnd Bergmann Some software that is written for x86 systems makes incorrect
1136ba89f9c8SArnd Bergmann assumptions about the page size and only runs on 4KiB pages.
1137ba89f9c8SArnd Bergmann
1138ba89f9c8SArnd Bergmannconfig PAGE_SIZE_8KB
1139ba89f9c8SArnd Bergmann bool "8KiB pages"
1140ba89f9c8SArnd Bergmann depends on HAVE_PAGE_SIZE_8KB
1141ba89f9c8SArnd Bergmann help
1142ba89f9c8SArnd Bergmann This option is the only supported page size on a few older
1143ba89f9c8SArnd Bergmann processors, and can be slightly faster than 4KiB pages.
1144ba89f9c8SArnd Bergmann
1145ba89f9c8SArnd Bergmannconfig PAGE_SIZE_16KB
1146ba89f9c8SArnd Bergmann bool "16KiB pages"
1147ba89f9c8SArnd Bergmann depends on HAVE_PAGE_SIZE_16KB
1148ba89f9c8SArnd Bergmann help
1149ba89f9c8SArnd Bergmann This option is usually a good compromise between memory
1150ba89f9c8SArnd Bergmann consumption and performance for typical desktop and server
1151ba89f9c8SArnd Bergmann workloads, often saving a level of page table lookups compared
1152ba89f9c8SArnd Bergmann to 4KiB pages as well as reducing TLB pressure and overhead of
1153ba89f9c8SArnd Bergmann per-page operations in the kernel at the expense of a larger
1154ba89f9c8SArnd Bergmann page cache.
1155ba89f9c8SArnd Bergmann
1156ba89f9c8SArnd Bergmannconfig PAGE_SIZE_32KB
1157ba89f9c8SArnd Bergmann bool "32KiB pages"
1158ba89f9c8SArnd Bergmann depends on HAVE_PAGE_SIZE_32KB
1159ba89f9c8SArnd Bergmann help
1160ba89f9c8SArnd Bergmann Using a 32KiB page size will result in a slightly higher performance
1161ba89f9c8SArnd Bergmann kernel at the price of higher memory consumption compared to
1162ba89f9c8SArnd Bergmann 16KiB pages. This option is available only on cnMIPS cores.
1163ba89f9c8SArnd Bergmann Note that you will need a suitable Linux distribution to
1164ba89f9c8SArnd Bergmann support this.
1165ba89f9c8SArnd Bergmann
1166ba89f9c8SArnd Bergmannconfig PAGE_SIZE_64KB
1167ba89f9c8SArnd Bergmann bool "64KiB pages"
1168ba89f9c8SArnd Bergmann depends on HAVE_PAGE_SIZE_64KB
1169ba89f9c8SArnd Bergmann help
1170ba89f9c8SArnd Bergmann Using a 64KiB page size will result in a slightly higher performance
1171ba89f9c8SArnd Bergmann kernel at the price of much higher memory consumption compared to
1172ba89f9c8SArnd Bergmann 4KiB or 16KiB pages.
1173ba89f9c8SArnd Bergmann This is not suitable for general-purpose workloads but the
1174ba89f9c8SArnd Bergmann better performance may be worth the cost for certain types of
1175ba89f9c8SArnd Bergmann supercomputing or database applications that work mostly with
1176ba89f9c8SArnd Bergmann large in-memory data rather than small files.
1177ba89f9c8SArnd Bergmann
1178ba89f9c8SArnd Bergmannconfig PAGE_SIZE_256KB
1179ba89f9c8SArnd Bergmann bool "256KiB pages"
1180ba89f9c8SArnd Bergmann depends on HAVE_PAGE_SIZE_256KB
1181ba89f9c8SArnd Bergmann help
1182ba89f9c8SArnd Bergmann 256KiB pages have little practical value due to their extreme
1183ba89f9c8SArnd Bergmann memory usage. The kernel will only be able to run applications
1184ba89f9c8SArnd Bergmann that have been compiled with '-zmax-page-size' set to 256KiB
1185ba89f9c8SArnd Bergmann (the default is 64KiB or 4KiB on most architectures).
1186ba89f9c8SArnd Bergmann
1187ba89f9c8SArnd Bergmannendchoice
1188ba89f9c8SArnd Bergmann
11891f0e290cSGuenter Roeckconfig PAGE_SIZE_LESS_THAN_64KB
11901f0e290cSGuenter Roeck def_bool y
11911f0e290cSGuenter Roeck depends on !PAGE_SIZE_64KB
1192e4bbd20dSNathan Chancellor depends on PAGE_SIZE_LESS_THAN_256KB
1193e4bbd20dSNathan Chancellor
1194e4bbd20dSNathan Chancellorconfig PAGE_SIZE_LESS_THAN_256KB
1195e4bbd20dSNathan Chancellor def_bool y
11961f0e290cSGuenter Roeck depends on !PAGE_SIZE_256KB
11971f0e290cSGuenter Roeck
1198ba89f9c8SArnd Bergmannconfig PAGE_SHIFT
1199ba89f9c8SArnd Bergmann int
1200ba89f9c8SArnd Bergmann default 12 if PAGE_SIZE_4KB
1201ba89f9c8SArnd Bergmann default 13 if PAGE_SIZE_8KB
1202ba89f9c8SArnd Bergmann default 14 if PAGE_SIZE_16KB
1203ba89f9c8SArnd Bergmann default 15 if PAGE_SIZE_32KB
1204ba89f9c8SArnd Bergmann default 16 if PAGE_SIZE_64KB
1205ba89f9c8SArnd Bergmann default 18 if PAGE_SIZE_256KB
1206ba89f9c8SArnd Bergmann
120767f3977fSAlexandre Ghiti# This allows the use of a set of generic functions to determine the mmap
120867f3977fSAlexandre Ghiti# base address, giving priority to the top-down scheme only if the process
120967f3977fSAlexandre Ghiti# is not in legacy mode (compat task, unlimited stack size or
121067f3977fSAlexandre Ghiti# sysctl_legacy_va_layout).
121167f3977fSAlexandre Ghiti# Architecture that selects this option can provide its own version of: 121267f3977fSAlexandre Ghiti# - STACK_RND_MASK 121367f3977fSAlexandre Ghiticonfig ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT 121467f3977fSAlexandre Ghiti bool 121567f3977fSAlexandre Ghiti depends on MMU 1216e7142bf5SAlexandre Ghiti select ARCH_HAS_ELF_RANDOMIZE 121767f3977fSAlexandre Ghiti 121803f16cd0SJosh Poimboeufconfig HAVE_OBJTOOL 121903f16cd0SJosh Poimboeuf bool 122003f16cd0SJosh Poimboeuf 12214ab7674fSJosh Poimboeufconfig HAVE_JUMP_LABEL_HACK 12224ab7674fSJosh Poimboeuf bool 12234ab7674fSJosh Poimboeuf 122422102f45SJosh Poimboeufconfig HAVE_NOINSTR_HACK 122522102f45SJosh Poimboeuf bool 122622102f45SJosh Poimboeuf 1227489e355bSJosh Poimboeufconfig HAVE_NOINSTR_VALIDATION 1228489e355bSJosh Poimboeuf bool 1229489e355bSJosh Poimboeuf 12305f3da8c0SJosh Poimboeufconfig HAVE_UACCESS_VALIDATION 12315f3da8c0SJosh Poimboeuf bool 12325f3da8c0SJosh Poimboeuf select OBJTOOL 12335f3da8c0SJosh Poimboeuf 1234b9ab5ebbSJosh Poimboeufconfig HAVE_STACK_VALIDATION 1235b9ab5ebbSJosh Poimboeuf bool 1236b9ab5ebbSJosh Poimboeuf help 123703f16cd0SJosh Poimboeuf Architecture supports objtool compile-time frame pointer rule 123803f16cd0SJosh Poimboeuf validation. 1239b9ab5ebbSJosh Poimboeuf 1240af085d90SJosh Poimboeufconfig HAVE_RELIABLE_STACKTRACE 1241af085d90SJosh Poimboeuf bool 1242af085d90SJosh Poimboeuf help 1243140d7e88SMiroslav Benes Architecture has either save_stack_trace_tsk_reliable() or 1244140d7e88SMiroslav Benes arch_stack_walk_reliable() function which only returns a stack trace 1245140d7e88SMiroslav Benes if it can guarantee the trace is reliable. 1246af085d90SJosh Poimboeuf 1247468a9428SGeorge Spelvinconfig HAVE_ARCH_HASH 1248468a9428SGeorge Spelvin bool 1249468a9428SGeorge Spelvin default n 1250468a9428SGeorge Spelvin help 1251468a9428SGeorge Spelvin If this is set, the architecture provides an <asm/hash.h> 1252468a9428SGeorge Spelvin file which provides platform-specific implementations of some 1253468a9428SGeorge Spelvin functions in <linux/hash.h> or fs/namei.c. 1254468a9428SGeorge Spelvin 1255666047feSFinn Thainconfig HAVE_ARCH_NVRAM_OPS 1256666047feSFinn Thain bool 1257666047feSFinn Thain 12583a495511SWilliam Breathitt Grayconfig ISA_BUS_API 12593a495511SWilliam Breathitt Gray def_bool ISA 12603a495511SWilliam Breathitt Gray 1261d2125043SAl Viro# 1262d2125043SAl Viro# ABI hall of shame 1263d2125043SAl Viro# 1264d2125043SAl Viroconfig CLONE_BACKWARDS 1265d2125043SAl Viro bool 1266d2125043SAl Viro help 1267d2125043SAl Viro Architecture has tls passed as the 4th argument of clone(2), 1268d2125043SAl Viro not the 5th one. 1269d2125043SAl Viro 1270d2125043SAl Viroconfig CLONE_BACKWARDS2 1271d2125043SAl Viro bool 1272d2125043SAl Viro help 1273d2125043SAl Viro Architecture has the first two arguments of clone(2) swapped. 1274d2125043SAl Viro 1275dfa9771aSMichal Simekconfig CLONE_BACKWARDS3 1276dfa9771aSMichal Simek bool 1277dfa9771aSMichal Simek help 1278dfa9771aSMichal Simek Architecture has tls passed as the 3rd argument of clone(2), 1279dfa9771aSMichal Simek not the 5th one. 
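
# Purely illustrative sketch ("foo" is hypothetical, not in the tree): the
# ABI-quirk symbols above have no prompt, so an architecture whose clone(2)
# takes the tls argument in the 4th position rather than the 5th simply
# selects the matching variant:
#
#	# arch/foo/Kconfig (hypothetical)
#	config FOO
#		def_bool y
#		select CLONE_BACKWARDS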
1280dfa9771aSMichal Simek 1281eaca6eaeSAl Viroconfig ODD_RT_SIGACTION 1282eaca6eaeSAl Viro bool 1283eaca6eaeSAl Viro help 1284eaca6eaeSAl Viro Architecture has unusual rt_sigaction(2) arguments 1285eaca6eaeSAl Viro 12860a0e8cdfSAl Viroconfig OLD_SIGSUSPEND 12870a0e8cdfSAl Viro bool 12880a0e8cdfSAl Viro help 12890a0e8cdfSAl Viro Architecture has old sigsuspend(2) syscall, of one-argument variety 12900a0e8cdfSAl Viro 12910a0e8cdfSAl Viroconfig OLD_SIGSUSPEND3 12920a0e8cdfSAl Viro bool 12930a0e8cdfSAl Viro help 12940a0e8cdfSAl Viro Even weirder antique ABI - three-argument sigsuspend(2) 12950a0e8cdfSAl Viro 1296495dfbf7SAl Viroconfig OLD_SIGACTION 1297495dfbf7SAl Viro bool 1298495dfbf7SAl Viro help 1299495dfbf7SAl Viro Architecture has old sigaction(2) syscall. Nope, not the same 1300495dfbf7SAl Viro as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2), 1301495dfbf7SAl Viro but fairly different variant of sigaction(2), thanks to OSF/1 1302495dfbf7SAl Viro compatibility... 1303495dfbf7SAl Viro 1304495dfbf7SAl Viroconfig COMPAT_OLD_SIGACTION 1305495dfbf7SAl Viro bool 1306495dfbf7SAl Viro 130717435e5fSDeepa Dinamaniconfig COMPAT_32BIT_TIME 1308942437c9SArnd Bergmann bool "Provide system calls for 32-bit time_t" 1309942437c9SArnd Bergmann default !64BIT || COMPAT 131017435e5fSDeepa Dinamani help 131117435e5fSDeepa Dinamani This enables 32 bit time_t support in addition to 64 bit time_t support. 131217435e5fSDeepa Dinamani This is relevant on all 32-bit architectures, and 64-bit architectures 131317435e5fSDeepa Dinamani as part of compat syscall handling. 131417435e5fSDeepa Dinamani 131587a4c375SChristoph Hellwigconfig ARCH_NO_PREEMPT 131687a4c375SChristoph Hellwig bool 131787a4c375SChristoph Hellwig 1318a50a3f4bSThomas Gleixnerconfig ARCH_SUPPORTS_RT 1319a50a3f4bSThomas Gleixner bool 1320a50a3f4bSThomas Gleixner 1321fff7fb0bSZhaoxiu Zengconfig CPU_NO_EFFICIENT_FFS 1322fff7fb0bSZhaoxiu Zeng def_bool n 1323fff7fb0bSZhaoxiu Zeng 1324ba14a194SAndy Lutomirskiconfig HAVE_ARCH_VMAP_STACK 1325ba14a194SAndy Lutomirski def_bool n 1326ba14a194SAndy Lutomirski help 1327ba14a194SAndy Lutomirski An arch should select this symbol if it can support kernel stacks 1328ba14a194SAndy Lutomirski in vmalloc space. This means: 1329ba14a194SAndy Lutomirski 1330ba14a194SAndy Lutomirski - vmalloc space must be large enough to hold many kernel stacks. 1331ba14a194SAndy Lutomirski This may rule out many 32-bit architectures. 1332ba14a194SAndy Lutomirski 1333ba14a194SAndy Lutomirski - Stacks in vmalloc space need to work reliably. For example, if 1334ba14a194SAndy Lutomirski vmap page tables are created on demand, either this mechanism 1335ba14a194SAndy Lutomirski needs to work while the stack points to a virtual address with 1336ba14a194SAndy Lutomirski unpopulated page tables or arch code (switch_to() and switch_mm(), 1337ba14a194SAndy Lutomirski most likely) needs to ensure that the stack's page table entries 1338ba14a194SAndy Lutomirski are populated before running on a possibly unpopulated stack. 1339ba14a194SAndy Lutomirski 1340ba14a194SAndy Lutomirski - If the stack overflows into a guard page, something reasonable 1341ba14a194SAndy Lutomirski should happen. The definition of "reasonable" is flexible, but 1342ba14a194SAndy Lutomirski instantly rebooting without logging anything would be unfriendly. 
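
# Purely illustrative sketch ("foo" is hypothetical): an architecture that
# satisfies the requirements listed above opts in by selecting
# HAVE_ARCH_VMAP_STACK, which makes the VMAP_STACK option below available:
#
#	# arch/foo/Kconfig (hypothetical)
#	config FOO
#		def_bool y
#		select HAVE_ARCH_VMAP_STACK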
1343ba14a194SAndy Lutomirski
1344ba14a194SAndy Lutomirskiconfig VMAP_STACK
1345ba14a194SAndy Lutomirski default y
1346ba14a194SAndy Lutomirski bool "Use a virtually-mapped stack"
1347eafb149eSDaniel Axtens depends on HAVE_ARCH_VMAP_STACK
134838dd767dSAndrey Konovalov depends on !KASAN || KASAN_HW_TAGS || KASAN_VMALLOC
1349a7f7f624SMasahiro Yamada help
1350ba14a194SAndy Lutomirski Enable this if you want to use virtually-mapped kernel stacks
1351ba14a194SAndy Lutomirski with guard pages. This causes kernel stack overflows to be
1352ba14a194SAndy Lutomirski caught immediately rather than causing difficult-to-diagnose
1353ba14a194SAndy Lutomirski corruption.
1354ba14a194SAndy Lutomirski
135538dd767dSAndrey Konovalov To use this with software KASAN modes, the architecture must support
135638dd767dSAndrey Konovalov backing virtual mappings with real shadow memory, and KASAN_VMALLOC
135738dd767dSAndrey Konovalov must be enabled.
1358ba14a194SAndy Lutomirski
135939218ff4SKees Cookconfig HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
136039218ff4SKees Cook def_bool n
136139218ff4SKees Cook help
136239218ff4SKees Cook An arch should select this symbol if it can support kernel stack
136339218ff4SKees Cook offset randomization with calls to add_random_kstack_offset()
136439218ff4SKees Cook during syscall entry and choose_random_kstack_offset() during
136539218ff4SKees Cook syscall exit. Careful removal of -fstack-protector-strong and
136639218ff4SKees Cook -fstack-protector should also be applied to the entry code and
136739218ff4SKees Cook closely examined, as the artificial stack bump looks like an array
136839218ff4SKees Cook to the compiler, so it will attempt to add canary checks regardless
136939218ff4SKees Cook of the static branch state.
137039218ff4SKees Cook
13718cb37a59SMarco Elverconfig RANDOMIZE_KSTACK_OFFSET
13728cb37a59SMarco Elver bool "Support for randomizing kernel stack offset on syscall entry" if EXPERT
13738cb37a59SMarco Elver default y
137439218ff4SKees Cook depends on HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
1375efa90c11SMarco Elver depends on INIT_STACK_NONE || !CC_IS_CLANG || CLANG_VERSION >= 140000
137639218ff4SKees Cook help
137739218ff4SKees Cook The kernel stack offset can be randomized (after pt_regs) by
137839218ff4SKees Cook roughly 5 bits of entropy, frustrating memory corruption
137939218ff4SKees Cook attacks that depend on stack address determinism or
13808cb37a59SMarco Elver cross-syscall address exposures.
13818cb37a59SMarco Elver
13828cb37a59SMarco Elver The feature is controlled via the "randomize_kstack_offset=on/off"
13838cb37a59SMarco Elver kernel boot param, and if turned off has zero overhead due to its use
13848cb37a59SMarco Elver of static branches (see JUMP_LABEL).
13858cb37a59SMarco Elver
13868cb37a59SMarco Elver If unsure, say Y.
13878cb37a59SMarco Elver
13888cb37a59SMarco Elverconfig RANDOMIZE_KSTACK_OFFSET_DEFAULT
13898cb37a59SMarco Elver bool "Default state of kernel stack offset randomization"
13908cb37a59SMarco Elver depends on RANDOMIZE_KSTACK_OFFSET
13918cb37a59SMarco Elver help
13928cb37a59SMarco Elver Kernel stack offset randomization is controlled by the kernel boot param
13938cb37a59SMarco Elver "randomize_kstack_offset=on/off", and this config chooses the default
13948cb37a59SMarco Elver boot state.
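
# An illustrative usage sketch for the two options above (the CONFIG_ lines
# are just the standard .config spelling of these symbols; nothing new is
# introduced): stack offset randomization is typically enabled at build time
# and can still be toggled at boot with the parameter named in the help text:
#
#	# .config fragment
#	CONFIG_RANDOMIZE_KSTACK_OFFSET=y
#	CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
#
#	# kernel command line override
#	randomize_kstack_offset=off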
139539218ff4SKees Cook 1396ad21fc4fSLaura Abbottconfig ARCH_OPTIONAL_KERNEL_RWX 1397ad21fc4fSLaura Abbott def_bool n 1398ad21fc4fSLaura Abbott 1399ad21fc4fSLaura Abbottconfig ARCH_OPTIONAL_KERNEL_RWX_DEFAULT 1400ad21fc4fSLaura Abbott def_bool n 1401ad21fc4fSLaura Abbott 1402ad21fc4fSLaura Abbottconfig ARCH_HAS_STRICT_KERNEL_RWX 1403ad21fc4fSLaura Abbott def_bool n 1404ad21fc4fSLaura Abbott 14050f5bf6d0SLaura Abbottconfig STRICT_KERNEL_RWX 1406ad21fc4fSLaura Abbott bool "Make kernel text and rodata read-only" if ARCH_OPTIONAL_KERNEL_RWX 1407ad21fc4fSLaura Abbott depends on ARCH_HAS_STRICT_KERNEL_RWX 1408ad21fc4fSLaura Abbott default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT 1409ad21fc4fSLaura Abbott help 1410ad21fc4fSLaura Abbott If this is set, kernel text and rodata memory will be made read-only, 1411ad21fc4fSLaura Abbott and non-text memory will be made non-executable. This provides 1412ad21fc4fSLaura Abbott protection against certain security exploits (e.g. executing the heap 1413ad21fc4fSLaura Abbott or modifying text) 1414ad21fc4fSLaura Abbott 1415ad21fc4fSLaura Abbott These features are considered standard security practice these days. 1416ad21fc4fSLaura Abbott You should say Y here in almost all cases. 1417ad21fc4fSLaura Abbott 1418ad21fc4fSLaura Abbottconfig ARCH_HAS_STRICT_MODULE_RWX 1419ad21fc4fSLaura Abbott def_bool n 1420ad21fc4fSLaura Abbott 14210f5bf6d0SLaura Abbottconfig STRICT_MODULE_RWX 1422ad21fc4fSLaura Abbott bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX 1423ad21fc4fSLaura Abbott depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES 1424ad21fc4fSLaura Abbott default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT 1425ad21fc4fSLaura Abbott help 1426ad21fc4fSLaura Abbott If this is set, module text and rodata memory will be made read-only, 1427ad21fc4fSLaura Abbott and non-text memory will be made non-executable. This provides 1428ad21fc4fSLaura Abbott protection against certain security exploits (e.g. writing to text) 1429ad21fc4fSLaura Abbott 1430ea8c64acSChristoph Hellwig# select if the architecture provides an asm/dma-direct.h header 1431ea8c64acSChristoph Hellwigconfig ARCH_HAS_PHYS_TO_DMA 1432ea8c64acSChristoph Hellwig bool 1433ea8c64acSChristoph Hellwig 143404f264d3SPaul Burtonconfig HAVE_ARCH_COMPILER_H 143504f264d3SPaul Burton bool 143604f264d3SPaul Burton help 143704f264d3SPaul Burton An architecture can select this if it provides an 143804f264d3SPaul Burton asm/compiler.h header that should be included after 143904f264d3SPaul Burton linux/compiler-*.h in order to override macro definitions that those 144004f264d3SPaul Burton headers generally provide. 144104f264d3SPaul Burton 1442271ca788SArd Biesheuvelconfig HAVE_ARCH_PREL32_RELOCATIONS 1443271ca788SArd Biesheuvel bool 1444271ca788SArd Biesheuvel help 1445271ca788SArd Biesheuvel May be selected by an architecture if it supports place-relative 1446271ca788SArd Biesheuvel 32-bit relocations, both in the toolchain and in the module loader, 1447271ca788SArd Biesheuvel in which case relative references can be used in special sections 1448271ca788SArd Biesheuvel for PCI fixup, initcalls etc which are only half the size on 64 bit 1449271ca788SArd Biesheuvel architectures, and don't require runtime relocation on relocatable 1450271ca788SArd Biesheuvel kernels. 
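
# Purely illustrative sketch ("foo" is hypothetical): for the STRICT_*_RWX
# options above, an architecture where the protection works but should stay
# user-selectable selects the ARCH_HAS_* symbols together with the
# ARCH_OPTIONAL_KERNEL_RWX knobs, which keeps the prompt visible while the
# option still defaults to enabled:
#
#	# arch/foo/Kconfig (hypothetical)
#	config FOO
#		def_bool y
#		select ARCH_HAS_STRICT_KERNEL_RWX
#		select ARCH_HAS_STRICT_MODULE_RWX
#		select ARCH_OPTIONAL_KERNEL_RWX
#		select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT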
1451271ca788SArd Biesheuvel 1452ce9084baSArd Biesheuvelconfig ARCH_USE_MEMREMAP_PROT 1453ce9084baSArd Biesheuvel bool 1454ce9084baSArd Biesheuvel 1455fb346fd9SWaiman Longconfig LOCK_EVENT_COUNTS 1456fb346fd9SWaiman Long bool "Locking event counts collection" 1457fb346fd9SWaiman Long depends on DEBUG_FS 1458a7f7f624SMasahiro Yamada help 1459fb346fd9SWaiman Long Enable light-weight counting of various locking related events 1460fb346fd9SWaiman Long in the system with minimal performance impact. This reduces 1461fb346fd9SWaiman Long the chance of application behavior change because of timing 1462fb346fd9SWaiman Long differences. The counts are reported via debugfs. 1463fb346fd9SWaiman Long 14645cf896fbSPeter Collingbourne# Select if the architecture has support for applying RELR relocations. 14655cf896fbSPeter Collingbourneconfig ARCH_HAS_RELR 14665cf896fbSPeter Collingbourne bool 14675cf896fbSPeter Collingbourne 14685cf896fbSPeter Collingbourneconfig RELR 14695cf896fbSPeter Collingbourne bool "Use RELR relocation packing" 14705cf896fbSPeter Collingbourne depends on ARCH_HAS_RELR && TOOLS_SUPPORT_RELR 14715cf896fbSPeter Collingbourne default y 14725cf896fbSPeter Collingbourne help 14735cf896fbSPeter Collingbourne Store the kernel's dynamic relocations in the RELR relocation packing 14745cf896fbSPeter Collingbourne format. Requires a compatible linker (LLD supports this feature), as 14755cf896fbSPeter Collingbourne well as compatible NM and OBJCOPY utilities (llvm-nm and llvm-objcopy 14765cf896fbSPeter Collingbourne are compatible). 14775cf896fbSPeter Collingbourne 14780c9c1d56SThiago Jung Bauermannconfig ARCH_HAS_MEM_ENCRYPT 14790c9c1d56SThiago Jung Bauermann bool 14800c9c1d56SThiago Jung Bauermann 148146b49b12STom Lendackyconfig ARCH_HAS_CC_PLATFORM 148246b49b12STom Lendacky bool 148346b49b12STom Lendacky 14840e242208SHassan Naveedconfig HAVE_SPARSE_SYSCALL_NR 14850e242208SHassan Naveed bool 14860e242208SHassan Naveed help 14870e242208SHassan Naveed An architecture should select this if its syscall numbering is sparse 14880e242208SHassan Naveed to save space. For example, MIPS architecture has a syscall array with 14890e242208SHassan Naveed entries at 4000, 5000 and 6000 locations. This option turns on syscall 14900e242208SHassan Naveed related optimizations for a given architecture. 14910e242208SHassan Naveed 1492d60d7de3SSven Schnelleconfig ARCH_HAS_VDSO_DATA 1493d60d7de3SSven Schnelle bool 1494d60d7de3SSven Schnelle 1495115284d8SJosh Poimboeufconfig HAVE_STATIC_CALL 1496115284d8SJosh Poimboeuf bool 1497115284d8SJosh Poimboeuf 14989183c3f9SJosh Poimboeufconfig HAVE_STATIC_CALL_INLINE 14999183c3f9SJosh Poimboeuf bool 15009183c3f9SJosh Poimboeuf depends on HAVE_STATIC_CALL 150103f16cd0SJosh Poimboeuf select OBJTOOL 15029183c3f9SJosh Poimboeuf 15036ef869e0SMichal Hockoconfig HAVE_PREEMPT_DYNAMIC 15046ef869e0SMichal Hocko bool 150599cf983cSMark Rutland 150699cf983cSMark Rutlandconfig HAVE_PREEMPT_DYNAMIC_CALL 150799cf983cSMark Rutland bool 15086ef869e0SMichal Hocko depends on HAVE_STATIC_CALL 150999cf983cSMark Rutland select HAVE_PREEMPT_DYNAMIC 15106ef869e0SMichal Hocko help 151199cf983cSMark Rutland An architecture should select this if it can handle the preemption 151299cf983cSMark Rutland model being selected at boot time using static calls. 151399cf983cSMark Rutland 151499cf983cSMark Rutland Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a 151599cf983cSMark Rutland preemption function will be patched directly. 
151699cf983cSMark Rutland 151799cf983cSMark Rutland Where an architecture does not select HAVE_STATIC_CALL_INLINE, any 151899cf983cSMark Rutland call to a preemption function will go through a trampoline, and the 151999cf983cSMark Rutland trampoline will be patched. 152099cf983cSMark Rutland 152199cf983cSMark Rutland It is strongly advised to support inline static call to avoid any 152299cf983cSMark Rutland overhead. 152399cf983cSMark Rutland 152499cf983cSMark Rutlandconfig HAVE_PREEMPT_DYNAMIC_KEY 152599cf983cSMark Rutland bool 1526a0a12c3eSNick Desaulniers depends on HAVE_ARCH_JUMP_LABEL 152799cf983cSMark Rutland select HAVE_PREEMPT_DYNAMIC 152899cf983cSMark Rutland help 152999cf983cSMark Rutland An architecture should select this if it can handle the preemption 153099cf983cSMark Rutland model being selected at boot time using static keys. 153199cf983cSMark Rutland 153299cf983cSMark Rutland Each preemption function will be given an early return based on a 153399cf983cSMark Rutland static key. This should have slightly lower overhead than non-inline 153499cf983cSMark Rutland static calls, as this effectively inlines each trampoline into the 153599cf983cSMark Rutland start of its callee. This may avoid redundant work, and may 153699cf983cSMark Rutland integrate better with CFI schemes. 153799cf983cSMark Rutland 153899cf983cSMark Rutland This will have greater overhead than using inline static calls as 153999cf983cSMark Rutland the call to the preemption function cannot be entirely elided. 15406ef869e0SMichal Hocko 154159612b24SNathan Chancellorconfig ARCH_WANT_LD_ORPHAN_WARN 154259612b24SNathan Chancellor bool 154359612b24SNathan Chancellor help 154459612b24SNathan Chancellor An arch should select this symbol once all linker sections are explicitly 154559612b24SNathan Chancellor included, size-asserted, or discarded in the linker scripts. This is 154659612b24SNathan Chancellor important because we never want expected sections to be placed heuristically 154759612b24SNathan Chancellor by the linker, since the locations of such sections can change between linker 154859612b24SNathan Chancellor versions. 154959612b24SNathan Chancellor 15504f5b0c17SMike Rapoportconfig HAVE_ARCH_PFN_VALID 15514f5b0c17SMike Rapoport bool 15524f5b0c17SMike Rapoport 15535d6ad668SMike Rapoportconfig ARCH_SUPPORTS_DEBUG_PAGEALLOC 15545d6ad668SMike Rapoport bool 15555d6ad668SMike Rapoport 1556df4e817bSPasha Tatashinconfig ARCH_SUPPORTS_PAGE_TABLE_CHECK 1557df4e817bSPasha Tatashin bool 1558df4e817bSPasha Tatashin 15592ca408d9SBrian Gerstconfig ARCH_SPLIT_ARG64 15602ca408d9SBrian Gerst bool 15612ca408d9SBrian Gerst help 15622ca408d9SBrian Gerst If a 32-bit architecture requires 64-bit arguments to be split into 15632ca408d9SBrian Gerst pairs of 32-bit arguments, select this option. 15642ca408d9SBrian Gerst 15657facdc42SAl Viroconfig ARCH_HAS_ELFCORE_COMPAT 15667facdc42SAl Viro bool 15677facdc42SAl Viro 156858e106e7SBalbir Singhconfig ARCH_HAS_PARANOID_L1D_FLUSH 156958e106e7SBalbir Singh bool 157058e106e7SBalbir Singh 1571d593d64fSPrasad Sodagudiconfig ARCH_HAVE_TRACE_MMIO_ACCESS 1572d593d64fSPrasad Sodagudi bool 1573d593d64fSPrasad Sodagudi 15741bdda24cSThomas Gleixnerconfig DYNAMIC_SIGFRAME 15751bdda24cSThomas Gleixner bool 15761bdda24cSThomas Gleixner 157750468e43SJarkko Sakkinen# Select, if arch has a named attribute group bound to NUMA device nodes. 
157850468e43SJarkko Sakkinenconfig HAVE_ARCH_NODE_DEV_GROUP 157950468e43SJarkko Sakkinen bool 158050468e43SJarkko Sakkinen 158171ce1ab5SKinsey Hoconfig ARCH_HAS_HW_PTE_YOUNG 158271ce1ab5SKinsey Ho bool 158371ce1ab5SKinsey Ho help 158471ce1ab5SKinsey Ho Architectures that select this option are capable of setting the 158571ce1ab5SKinsey Ho accessed bit in PTE entries when using them as part of linear address 158671ce1ab5SKinsey Ho translations. Architectures that require runtime check should select 158771ce1ab5SKinsey Ho this option and override arch_has_hw_pte_young(). 158871ce1ab5SKinsey Ho 1589eed9a328SYu Zhaoconfig ARCH_HAS_NONLEAF_PMD_YOUNG 1590eed9a328SYu Zhao bool 1591eed9a328SYu Zhao help 1592eed9a328SYu Zhao Architectures that select this option are capable of setting the 1593eed9a328SYu Zhao accessed bit in non-leaf PMD entries when using them as part of linear 1594eed9a328SYu Zhao address translations. Page table walkers that clear the accessed bit 1595eed9a328SYu Zhao may use this capability to reduce their search space. 1596eed9a328SYu Zhao 15972521f2c2SPeter Oberparleitersource "kernel/gcov/Kconfig" 159845332b1bSMasahiro Yamada 159945332b1bSMasahiro Yamadasource "scripts/gcc-plugins/Kconfig" 1600fa1b5d09SLinus Torvalds 1601d49a0626SPeter Zijlstraconfig FUNCTION_ALIGNMENT_4B 1602d49a0626SPeter Zijlstra bool 1603d49a0626SPeter Zijlstra 1604d49a0626SPeter Zijlstraconfig FUNCTION_ALIGNMENT_8B 1605d49a0626SPeter Zijlstra bool 1606d49a0626SPeter Zijlstra 1607d49a0626SPeter Zijlstraconfig FUNCTION_ALIGNMENT_16B 1608d49a0626SPeter Zijlstra bool 1609d49a0626SPeter Zijlstra 1610d49a0626SPeter Zijlstraconfig FUNCTION_ALIGNMENT_32B 1611d49a0626SPeter Zijlstra bool 1612d49a0626SPeter Zijlstra 1613d49a0626SPeter Zijlstraconfig FUNCTION_ALIGNMENT_64B 1614d49a0626SPeter Zijlstra bool 1615d49a0626SPeter Zijlstra 1616d49a0626SPeter Zijlstraconfig FUNCTION_ALIGNMENT 1617d49a0626SPeter Zijlstra int 1618d49a0626SPeter Zijlstra default 64 if FUNCTION_ALIGNMENT_64B 1619d49a0626SPeter Zijlstra default 32 if FUNCTION_ALIGNMENT_32B 1620d49a0626SPeter Zijlstra default 16 if FUNCTION_ALIGNMENT_16B 1621d49a0626SPeter Zijlstra default 8 if FUNCTION_ALIGNMENT_8B 1622d49a0626SPeter Zijlstra default 4 if FUNCTION_ALIGNMENT_4B 1623d49a0626SPeter Zijlstra default 0 1624d49a0626SPeter Zijlstra 16255270316cSPetr Pavluconfig CC_HAS_MIN_FUNCTION_ALIGNMENT 16265270316cSPetr Pavlu # Detect availability of the GCC option -fmin-function-alignment which 16275270316cSPetr Pavlu # guarantees minimal alignment for all functions, unlike 16285270316cSPetr Pavlu # -falign-functions which the compiler ignores for cold functions. 16295270316cSPetr Pavlu def_bool $(cc-option, -fmin-function-alignment=8) 16305270316cSPetr Pavlu 16315270316cSPetr Pavluconfig CC_HAS_SANE_FUNCTION_ALIGNMENT 16325270316cSPetr Pavlu # Set if the guaranteed alignment with -fmin-function-alignment is 16335270316cSPetr Pavlu # available or extra care is required in the kernel. Clang provides 16345270316cSPetr Pavlu # strict alignment always, even with -falign-functions. 16355270316cSPetr Pavlu def_bool CC_HAS_MIN_FUNCTION_ALIGNMENT || CC_IS_CLANG 16365270316cSPetr Pavlu 1637a88d970cSPaul E. McKenneyconfig ARCH_NEED_CMPXCHG_1_EMU 1638a88d970cSPaul E. McKenney bool 1639a88d970cSPaul E. McKenney 164022471e13SRandy Dunlapendmenu 1641