--- Kconfig (b0284cd29a957e62d60c2886fd663be93c56f9c0)
+++ Kconfig (e240e53ae0abb0896e0f399bdfef41c69cec3123)
# SPDX-License-Identifier: GPL-2.0-only

menu "Memory Management options"

#
# For some reason microblaze and nios2 hard code SWAP=n. Hopefully we can
# add proper SWAP support to them, in which case this can be removed.
#

--- 216 unchanged lines hidden ---

    depends on !PREEMPT_RT
    help
      SLOB replaces the stock allocator with a drastically simpler
      allocator. SLOB is generally more space efficient but
      does not perform as well on large systems.

endchoice

+config SLUB_TINY
+    bool "Configure SLUB for minimal memory footprint"
+    depends on SLUB && EXPERT
+    select SLAB_MERGE_DEFAULT
+    help
+      Configures the SLUB allocator in a way to achieve minimal memory
+      footprint, sacrificing scalability, debugging and other features.
+      This is intended only for the smallest systems that previously used
+      the SLOB allocator, and is not recommended for systems with more
+      than 16MB RAM.
+
+      If unsure, say N.
+
config SLAB_MERGE_DEFAULT
    bool "Allow slab caches to be merged"
    default y
    depends on SLAB || SLUB
    help
      For reduced kernel memory fragmentation, slab caches can be
      merged when they share the same size and other characteristics.
      This carries a risk of kernel heap overflows being able to
      overwrite objects from merged caches (and more easily control
      cache layout), which makes such heap attacks easier for
      attackers to exploit. By keeping caches unmerged, these kinds
      of exploits can usually only damage objects in the same cache.
      To disable merging at runtime, "slab_nomerge" can be passed on
      the kernel command line.

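In concept, the merge decision is a compatibility check over a cache's
layout-relevant properties: equal object size, equal alignment, and
compatible flags. Below is a minimal user-space C sketch of that idea;
the types and helper names are hypothetical, and the kernel's real
check (find_mergeable() in mm/slab_common.c) additionally rejects
caches with constructors and certain flags.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical, simplified descriptor of a slab cache. */
struct cache_desc {
    const char *name;
    size_t object_size;   /* rounded-up object size in bytes */
    size_t align;         /* required object alignment */
    unsigned int flags;   /* e.g. poisoning or RCU-freeing flags */
};

/* Two caches may share backing pages only when their layout-relevant
 * properties match exactly. */
static bool caches_mergeable(const struct cache_desc *a,
                             const struct cache_desc *b)
{
    return a->object_size == b->object_size &&
           a->align == b->align &&
           a->flags == b->flags;
}

int main(void)
{
    struct cache_desc c1 = { "kmalloc-64", 64, 8, 0 };
    struct cache_desc c2 = { "my-cache",   64, 8, 0 };

    printf("mergeable: %s\n", caches_mergeable(&c1, &c2) ? "yes" : "no");
    return 0;
}
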
config SLAB_FREELIST_RANDOM
    bool "Randomize slab freelist"
-    depends on SLAB || SLUB
+    depends on SLAB || (SLUB && !SLUB_TINY)
    help
      Randomizes the freelist order used when creating new pages. This
      security feature reduces the predictability of the kernel slab
      allocator against heap overflows.

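In concept, the randomization shuffles the order in which a new slab
page's object slots are handed out, so an attacker can no longer assume
that consecutive allocations are adjacent in memory. A minimal
user-space sketch of the idea, using a Fisher-Yates shuffle over slot
indices (names and the rand()-based randomness are illustrative only;
the kernel uses proper per-cache random state):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define SLOTS 8

/* Shuffle the slot indices once per "page" (Fisher-Yates). */
static void shuffle_freelist(unsigned int *idx, unsigned int n)
{
    for (unsigned int i = n - 1; i > 0; i--) {
        unsigned int j = (unsigned int)rand() % (i + 1);
        unsigned int tmp = idx[i];

        idx[i] = idx[j];
        idx[j] = tmp;
    }
}

int main(void)
{
    unsigned int idx[SLOTS];

    srand((unsigned int)time(NULL));
    for (unsigned int i = 0; i < SLOTS; i++)
        idx[i] = i;            /* sequential order: 0, 1, 2, ... */
    shuffle_freelist(idx, SLOTS);

    for (unsigned int i = 0; i < SLOTS; i++)
        printf("allocation %u -> slot %u\n", i, idx[i]);
    return 0;
}
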
config SLAB_FREELIST_HARDENED
    bool "Harden slab freelist metadata"
-    depends on SLAB || SLUB
+    depends on SLAB || (SLUB && !SLUB_TINY)
    help
      Many kernel heap attacks try to target slab cache metadata and
      other infrastructure. This option makes minor performance
      sacrifices to harden the kernel slab allocator against common
      freelist exploit methods. Some slab implementations have more
      sanity checking than others. This option is most effective with
      CONFIG_SLUB.

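The core trick behind this kind of hardening in SLUB is never storing
the raw next-free pointer inside a free object: it is XORed with a
per-cache secret and a transform of the storage address, so an overflow
that overwrites it cannot redirect the freelist to a chosen address
without knowing the secret. A minimal user-space sketch of the XOR
scheme, with hypothetical names and an illustrative hard-coded secret
(the kernel generates it randomly per cache and also byte-swaps the
slot address):

#include <inttypes.h>
#include <stdio.h>

/* Illustrative per-cache secret; randomly generated in the kernel. */
static const uint64_t cache_random = 0x5a17c3b2d4e19f08ULL;

/* Store: hide the next-free pointer behind the secret and the slot
 * address where the link is kept. */
static uint64_t obfuscate(uint64_t next, uint64_t slot_addr)
{
    return next ^ cache_random ^ slot_addr;
}

/* Load: XOR is its own inverse, so the same operation recovers it. */
static uint64_t deobfuscate(uint64_t stored, uint64_t slot_addr)
{
    return stored ^ cache_random ^ slot_addr;
}

int main(void)
{
    uint64_t next = 0xffff888012345678ULL;  /* next free object */
    uint64_t slot = 0xffff888012345600ULL;  /* where the link lives */
    uint64_t stored = obfuscate(next, slot);

    printf("stored:    0x%" PRIx64 "\n", stored);
    printf("recovered: 0x%" PRIx64 "\n", deobfuscate(stored, slot));
    return 0;
}
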
config SLUB_STATS
    default n
    bool "Enable SLUB performance statistics"
-    depends on SLUB && SYSFS
+    depends on SLUB && SYSFS && !SLUB_TINY
    help
      SLUB statistics are useful for debugging SLUB's allocation
      behavior in order to find ways to optimize the allocator. This
      should never be enabled for production use since keeping
      statistics slows down the allocator by a few percentage points.
      The slabinfo command supports the determination of the most
      active slabs to figure out which slabs are relevant to a
      particular load.
      Try running: slabinfo -DA

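When the option is enabled, SLUB exposes the counters as one file per
statistic under /sys/kernel/slab/<cache>/, which slabinfo aggregates.
As a rough sketch, assuming the usual fast/slow-path counter names and
the kmalloc-64 cache (adjust both for your kernel), they can also be
read directly:

#include <stdio.h>

int main(void)
{
    /* Assumed counter file names; present only with CONFIG_SLUB_STATS=y. */
    const char *stats[] = { "alloc_fastpath", "alloc_slowpath",
                            "free_fastpath", "free_slowpath" };
    char path[256], buf[128];

    for (int i = 0; i < 4; i++) {
        snprintf(path, sizeof(path),
                 "/sys/kernel/slab/kmalloc-64/%s", stats[i]);
        FILE *f = fopen(path, "r");

        if (!f)
            continue;   /* cache missing or stats not compiled in */
        if (fgets(buf, sizeof(buf), f))
            printf("%s: %s", stats[i], buf);   /* buf keeps its newline */
        fclose(f);
    }
    return 0;
}
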
config SLUB_CPU_PARTIAL
    default y
-    depends on SLUB && SMP
+    depends on SLUB && SMP && !SLUB_TINY
    bool "SLUB per cpu partial cache"
    help
      Per-CPU partial caches accelerate object allocation and freeing
      that is local to a processor, at the price of more indeterminism
      in the latency of the free. On overflow these caches will be
      cleared, which requires taking locks that may cause latency
      spikes. Typically one would choose no for a realtime system.

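The latency trade-off can be seen in a toy model: frees stage slab
pages on a small per-CPU list with no locking, and only when that list
overflows is the whole batch moved to the shared partial list under a
lock, which is where the spikes described above come from. A
single-threaded user-space sketch, all names hypothetical and locking
shown only as comments:

#include <stdio.h>

#define PCP_LIMIT 4

static int pcp[PCP_LIMIT];  /* per-CPU staging list of partial pages */
static int pcp_count;
static int shared[64];      /* shared partial list, lock required */
static int shared_count;

/* Slow path: move the whole staged batch to the shared list. */
static void flush_pcp(void)
{
    /* lock(shared_list_lock);  <- the latency-spike point */
    for (int i = 0; i < pcp_count; i++)
        shared[shared_count++] = pcp[i];
    pcp_count = 0;
    /* unlock(shared_list_lock); */
}

/* Fast path: stage a partial page locally with no locking. */
static void stage_partial(int page_id)
{
    if (pcp_count == PCP_LIMIT)   /* overflow: pay the lock cost */
        flush_pcp();
    pcp[pcp_count++] = page_id;
}

int main(void)
{
    for (int id = 0; id < 10; id++)
        stage_partial(id);
    printf("staged locally: %d, flushed to shared: %d\n",
           pcp_count, shared_count);
    return 0;
}
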

--- 709 unchanged lines hidden ---

config VMAP_PFN
    bool

config ARCH_USES_HIGH_VMA_FLAGS
    bool
config ARCH_HAS_PKEYS
    bool

-config ARCH_USES_PG_ARCH_X
-    bool
-    help
-      Enable the definition of PG_arch_x page flags with x > 1. Only
-      suitable for 64-bit architectures with CONFIG_FLATMEM or
-      CONFIG_SPARSEMEM_VMEMMAP enabled, otherwise there may not be
-      enough room for additional bits in page->flags.
-
config VM_EVENT_COUNTERS
    default y
    bool "Enable VM event counters for /proc/vmstat" if EXPERT
    help
      VM event counters are needed for event counts to be shown.
      This option allows the disabling of the VM event counters
      on EXPERT systems. /proc/vmstat will only show page counts
      if VM event counters are disabled.
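
For reference, the counters gated by this option appear as extra
"name value" lines in /proc/vmstat. A small sketch that prints two of
the classic event counters, pgfault and pgmajfault (with
CONFIG_VM_EVENT_COUNTERS=n these lines disappear while the page
counters remain):

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/vmstat", "r");
    char line[128];

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        /* Each line is "name value"; match the two fault counters. */
        if (!strncmp(line, "pgfault ", 8) ||
            !strncmp(line, "pgmajfault ", 11))
            fputs(line, stdout);
    fclose(f);
    return 0;
}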

--- 140 unchanged lines hidden ---