1e1785e85SDave Hansenconfig SELECT_MEMORY_MODEL 2e1785e85SDave Hansen def_bool y 3a8826eebSKees Cook depends on ARCH_SELECT_MEMORY_MODEL 4e1785e85SDave Hansen 53a9da765SDave Hansenchoice 63a9da765SDave Hansen prompt "Memory model" 7e1785e85SDave Hansen depends on SELECT_MEMORY_MODEL 8e1785e85SDave Hansen default DISCONTIGMEM_MANUAL if ARCH_DISCONTIGMEM_DEFAULT 9d41dee36SAndy Whitcroft default SPARSEMEM_MANUAL if ARCH_SPARSEMEM_DEFAULT 10e1785e85SDave Hansen default FLATMEM_MANUAL 113a9da765SDave Hansen 12e1785e85SDave Hansenconfig FLATMEM_MANUAL 133a9da765SDave Hansen bool "Flat Memory" 14c898ec16SAnton Blanchard depends on !(ARCH_DISCONTIGMEM_ENABLE || ARCH_SPARSEMEM_ENABLE) || ARCH_FLATMEM_ENABLE 153a9da765SDave Hansen help 163a9da765SDave Hansen This option allows you to change some of the ways that 173a9da765SDave Hansen Linux manages its memory internally. Most users will 183a9da765SDave Hansen only have one option here: FLATMEM. This is normal 193a9da765SDave Hansen and a correct option. 203a9da765SDave Hansen 21d41dee36SAndy Whitcroft Some users of more advanced features like NUMA and 22d41dee36SAndy Whitcroft memory hotplug may have different options here. 2318f65332SGeert Uytterhoeven DISCONTIGMEM is a more mature, better tested system, 24d41dee36SAndy Whitcroft but is incompatible with memory hotplug and may suffer 25d41dee36SAndy Whitcroft decreased performance over SPARSEMEM. If unsure between 26d41dee36SAndy Whitcroft "Sparse Memory" and "Discontiguous Memory", choose 27d41dee36SAndy Whitcroft "Discontiguous Memory". 28d41dee36SAndy Whitcroft 29d41dee36SAndy Whitcroft If unsure, choose this option (Flat Memory) over any other. 303a9da765SDave Hansen 31e1785e85SDave Hansenconfig DISCONTIGMEM_MANUAL 32f3519f91SDave Hansen bool "Discontiguous Memory" 333a9da765SDave Hansen depends on ARCH_DISCONTIGMEM_ENABLE 343a9da765SDave Hansen help 35785dcd44SDave Hansen This option provides enhanced support for discontiguous 36785dcd44SDave Hansen memory systems, over FLATMEM. These systems have holes 37785dcd44SDave Hansen in their physical address spaces, and this option provides 38785dcd44SDave Hansen more efficient handling of these holes. However, the vast 39785dcd44SDave Hansen majority of hardware has quite flat address spaces, and 40ad3d0a38SPhilipp Marek can have degraded performance from the extra overhead that 41785dcd44SDave Hansen this option imposes. 42785dcd44SDave Hansen 43785dcd44SDave Hansen Many NUMA configurations will have this as the only option. 44785dcd44SDave Hansen 453a9da765SDave Hansen If unsure, choose "Flat Memory" over this option. 463a9da765SDave Hansen 47d41dee36SAndy Whitcroftconfig SPARSEMEM_MANUAL 48d41dee36SAndy Whitcroft bool "Sparse Memory" 49d41dee36SAndy Whitcroft depends on ARCH_SPARSEMEM_ENABLE 50d41dee36SAndy Whitcroft help 51d41dee36SAndy Whitcroft This will be the only option for some systems, including 52d41dee36SAndy Whitcroft memory hotplug systems. This is normal. 53d41dee36SAndy Whitcroft 54d41dee36SAndy Whitcroft For many other systems, this will be an alternative to 55f3519f91SDave Hansen "Discontiguous Memory". This option provides some potential 56d41dee36SAndy Whitcroft performance benefits, along with decreased code complexity, 57d41dee36SAndy Whitcroft but it is newer, and more experimental. 58d41dee36SAndy Whitcroft 59d41dee36SAndy Whitcroft If unsure, choose "Discontiguous Memory" or "Flat Memory" 60d41dee36SAndy Whitcroft over this option. 61d41dee36SAndy Whitcroft 623a9da765SDave Hansenendchoice 633a9da765SDave Hansen 64e1785e85SDave Hansenconfig DISCONTIGMEM 65e1785e85SDave Hansen def_bool y 66e1785e85SDave Hansen depends on (!SELECT_MEMORY_MODEL && ARCH_DISCONTIGMEM_ENABLE) || DISCONTIGMEM_MANUAL 67e1785e85SDave Hansen 68d41dee36SAndy Whitcroftconfig SPARSEMEM 69d41dee36SAndy Whitcroft def_bool y 701a83e175SRussell King depends on (!SELECT_MEMORY_MODEL && ARCH_SPARSEMEM_ENABLE) || SPARSEMEM_MANUAL 71d41dee36SAndy Whitcroft 72e1785e85SDave Hansenconfig FLATMEM 73e1785e85SDave Hansen def_bool y 74d41dee36SAndy Whitcroft depends on (!DISCONTIGMEM && !SPARSEMEM) || FLATMEM_MANUAL 75d41dee36SAndy Whitcroft 76d41dee36SAndy Whitcroftconfig FLAT_NODE_MEM_MAP 77d41dee36SAndy Whitcroft def_bool y 78d41dee36SAndy Whitcroft depends on !SPARSEMEM 79e1785e85SDave Hansen 8093b7504eSDave Hansen# 8193b7504eSDave Hansen# Both the NUMA code and DISCONTIGMEM use arrays of pg_data_t's 8293b7504eSDave Hansen# to represent different areas of memory. This variable allows 8393b7504eSDave Hansen# those dependencies to exist individually. 8493b7504eSDave Hansen# 8593b7504eSDave Hansenconfig NEED_MULTIPLE_NODES 8693b7504eSDave Hansen def_bool y 8793b7504eSDave Hansen depends on DISCONTIGMEM || NUMA 88af705362SAndy Whitcroft 89af705362SAndy Whitcroftconfig HAVE_MEMORY_PRESENT 90af705362SAndy Whitcroft def_bool y 91d41dee36SAndy Whitcroft depends on ARCH_HAVE_MEMORY_PRESENT || SPARSEMEM 92802f192eSBob Picco 93802f192eSBob Picco# 943e347261SBob Picco# SPARSEMEM_EXTREME (which is the default) does some bootmem 953e347261SBob Picco# allocations when memory_present() is called. If this cannot 963e347261SBob Picco# be done on your architecture, select this option. However, 973e347261SBob Picco# statically allocating the mem_section[] array can potentially 983e347261SBob Picco# consume vast quantities of .bss, so be careful. 993e347261SBob Picco# 1003e347261SBob Picco# This option will also potentially produce smaller runtime code 1013e347261SBob Picco# with gcc 3.4 and later. 1023e347261SBob Picco# 1033e347261SBob Piccoconfig SPARSEMEM_STATIC 1049ba16087SJan Beulich bool 1053e347261SBob Picco 1063e347261SBob Picco# 10744c09201SMatt LaPlante# Architecture platforms which require a two level mem_section in SPARSEMEM 108802f192eSBob Picco# must select this option. This is usually for architecture platforms with 109802f192eSBob Picco# an extremely sparse physical address space. 110802f192eSBob Picco# 1113e347261SBob Piccoconfig SPARSEMEM_EXTREME 1123e347261SBob Picco def_bool y 1133e347261SBob Picco depends on SPARSEMEM && !SPARSEMEM_STATIC 1144c21e2f2SHugh Dickins 11529c71111SAndy Whitcroftconfig SPARSEMEM_VMEMMAP_ENABLE 1169ba16087SJan Beulich bool 11729c71111SAndy Whitcroft 1189bdac914SYinghai Luconfig SPARSEMEM_ALLOC_MEM_MAP_TOGETHER 1199bdac914SYinghai Lu def_bool y 1209bdac914SYinghai Lu depends on SPARSEMEM && X86_64 1219bdac914SYinghai Lu 12229c71111SAndy Whitcroftconfig SPARSEMEM_VMEMMAP 123a5ee6daaSGeoff Levand bool "Sparse Memory virtual memmap" 124a5ee6daaSGeoff Levand depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE 125a5ee6daaSGeoff Levand default y 126a5ee6daaSGeoff Levand help 127a5ee6daaSGeoff Levand SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise 128a5ee6daaSGeoff Levand pfn_to_page and page_to_pfn operations. This is the most 129a5ee6daaSGeoff Levand efficient option when sufficient kernel resources are available. 13029c71111SAndy Whitcroft 13195f72d1eSYinghai Luconfig HAVE_MEMBLOCK 132*6341e62bSChristoph Jaeger bool 13395f72d1eSYinghai Lu 1347c0caeb8STejun Heoconfig HAVE_MEMBLOCK_NODE_MAP 135*6341e62bSChristoph Jaeger bool 1367c0caeb8STejun Heo 13770210ed9SPhilipp Hachtmannconfig HAVE_MEMBLOCK_PHYS_MAP 138*6341e62bSChristoph Jaeger bool 13970210ed9SPhilipp Hachtmann 1402667f50eSSteve Capperconfig HAVE_GENERIC_RCU_GUP 141*6341e62bSChristoph Jaeger bool 1422667f50eSSteve Capper 143c378ddd5STejun Heoconfig ARCH_DISCARD_MEMBLOCK 144*6341e62bSChristoph Jaeger bool 145c378ddd5STejun Heo 14666616720SSam Ravnborgconfig NO_BOOTMEM 147*6341e62bSChristoph Jaeger bool 14866616720SSam Ravnborg 149ee6f509cSMinchan Kimconfig MEMORY_ISOLATION 150*6341e62bSChristoph Jaeger bool 151ee6f509cSMinchan Kim 15220b2f52bSLai Jiangshanconfig MOVABLE_NODE 153*6341e62bSChristoph Jaeger bool "Enable to assign a node which has only movable memory" 15420b2f52bSLai Jiangshan depends on HAVE_MEMBLOCK 15520b2f52bSLai Jiangshan depends on NO_BOOTMEM 15620b2f52bSLai Jiangshan depends on X86_64 15720b2f52bSLai Jiangshan depends on NUMA 158c2974058STang Chen default n 159c2974058STang Chen help 160c2974058STang Chen Allow a node to have only movable memory. Pages used by the kernel, 161c2974058STang Chen such as direct mapping pages cannot be migrated. So the corresponding 162c5320926STang Chen memory device cannot be hotplugged. This option allows the following 163c5320926STang Chen two things: 164c5320926STang Chen - When the system is booting, node full of hotpluggable memory can 165c5320926STang Chen be arranged to have only movable memory so that the whole node can 166c5320926STang Chen be hot-removed. (need movable_node boot option specified). 167c5320926STang Chen - After the system is up, the option allows users to online all the 168c5320926STang Chen memory of a node as movable memory so that the whole node can be 169c5320926STang Chen hot-removed. 170c5320926STang Chen 171c5320926STang Chen Users who don't use the memory hotplug feature are fine with this 172c5320926STang Chen option on since they don't specify movable_node boot option or they 173c5320926STang Chen don't online memory as movable. 174c2974058STang Chen 175c2974058STang Chen Say Y here if you want to hotplug a whole node. 176c2974058STang Chen Say N here if you want kernel to use memory on all nodes evenly. 17720b2f52bSLai Jiangshan 17846723bfaSYasuaki Ishimatsu# 17946723bfaSYasuaki Ishimatsu# Only be set on architectures that have completely implemented memory hotplug 18046723bfaSYasuaki Ishimatsu# feature. If you are not sure, don't touch it. 18146723bfaSYasuaki Ishimatsu# 18246723bfaSYasuaki Ishimatsuconfig HAVE_BOOTMEM_INFO_NODE 18346723bfaSYasuaki Ishimatsu def_bool n 18446723bfaSYasuaki Ishimatsu 1853947be19SDave Hansen# eventually, we can have this option just 'select SPARSEMEM' 1863947be19SDave Hansenconfig MEMORY_HOTPLUG 1873947be19SDave Hansen bool "Allow for memory hot-add" 188ec69acbbSKeith Mannthey depends on SPARSEMEM || X86_64_ACPI_NUMA 18940b31360SStephen Rothwell depends on ARCH_ENABLE_MEMORY_HOTPLUG 190ed84a07aSKumar Gala depends on (IA64 || X86 || PPC_BOOK3S_64 || SUPERH || S390) 1913947be19SDave Hansen 192ec69acbbSKeith Manntheyconfig MEMORY_HOTPLUG_SPARSE 193ec69acbbSKeith Mannthey def_bool y 194ec69acbbSKeith Mannthey depends on SPARSEMEM && MEMORY_HOTPLUG 195ec69acbbSKeith Mannthey 1960c0e6195SKAMEZAWA Hiroyukiconfig MEMORY_HOTREMOVE 1970c0e6195SKAMEZAWA Hiroyuki bool "Allow for memory hot remove" 19846723bfaSYasuaki Ishimatsu select MEMORY_ISOLATION 199f7e3334aSNathan Fontenot select HAVE_BOOTMEM_INFO_NODE if (X86_64 || PPC64) 2000c0e6195SKAMEZAWA Hiroyuki depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE 2010c0e6195SKAMEZAWA Hiroyuki depends on MIGRATION 2020c0e6195SKAMEZAWA Hiroyuki 203e20b8ccaSChristoph Lameter# 204e20b8ccaSChristoph Lameter# If we have space for more page flags then we can enable additional 205e20b8ccaSChristoph Lameter# optimizations and functionality. 206e20b8ccaSChristoph Lameter# 207e20b8ccaSChristoph Lameter# Regular Sparsemem takes page flag bits for the sectionid if it does not 208e20b8ccaSChristoph Lameter# use a virtual memmap. Disable extended page flags for 32 bit platforms 209e20b8ccaSChristoph Lameter# that require the use of a sectionid in the page flags. 210e20b8ccaSChristoph Lameter# 211e20b8ccaSChristoph Lameterconfig PAGEFLAGS_EXTENDED 212e20b8ccaSChristoph Lameter def_bool y 213a269cca9SH. Peter Anvin depends on 64BIT || SPARSEMEM_VMEMMAP || !SPARSEMEM 214e20b8ccaSChristoph Lameter 2154c21e2f2SHugh Dickins# Heavily threaded applications may benefit from splitting the mm-wide 2164c21e2f2SHugh Dickins# page_table_lock, so that faults on different parts of the user address 2174c21e2f2SHugh Dickins# space can be handled with less contention: split it at this NR_CPUS. 2184c21e2f2SHugh Dickins# Default to 4 for wider testing, though 8 might be more appropriate. 2194c21e2f2SHugh Dickins# ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock. 2207b6ac9dfSHugh Dickins# PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes. 221a70caa8bSHugh Dickins# DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page. 2224c21e2f2SHugh Dickins# 2234c21e2f2SHugh Dickinsconfig SPLIT_PTLOCK_CPUS 2244c21e2f2SHugh Dickins int 2259164550eSKirill A. Shutemov default "999999" if !MMU 226a70caa8bSHugh Dickins default "999999" if ARM && !CPU_CACHE_VIPT 227a70caa8bSHugh Dickins default "999999" if PARISC && !PA20 2284c21e2f2SHugh Dickins default "4" 2297cbe34cfSChristoph Lameter 230e009bb30SKirill A. Shutemovconfig ARCH_ENABLE_SPLIT_PMD_PTLOCK 231*6341e62bSChristoph Jaeger bool 232e009bb30SKirill A. Shutemov 2337cbe34cfSChristoph Lameter# 23409316c09SKonstantin Khlebnikov# support for memory balloon 23509316c09SKonstantin Khlebnikovconfig MEMORY_BALLOON 236*6341e62bSChristoph Jaeger bool 23709316c09SKonstantin Khlebnikov 23809316c09SKonstantin Khlebnikov# 23918468d93SRafael Aquini# support for memory balloon compaction 24018468d93SRafael Aquiniconfig BALLOON_COMPACTION 24118468d93SRafael Aquini bool "Allow for balloon memory compaction/migration" 24218468d93SRafael Aquini def_bool y 24309316c09SKonstantin Khlebnikov depends on COMPACTION && MEMORY_BALLOON 24418468d93SRafael Aquini help 24518468d93SRafael Aquini Memory fragmentation introduced by ballooning might reduce 24618468d93SRafael Aquini significantly the number of 2MB contiguous memory blocks that can be 24718468d93SRafael Aquini used within a guest, thus imposing performance penalties associated 24818468d93SRafael Aquini with the reduced number of transparent huge pages that could be used 24918468d93SRafael Aquini by the guest workload. Allowing the compaction & migration for memory 25018468d93SRafael Aquini pages enlisted as being part of memory balloon devices avoids the 25118468d93SRafael Aquini scenario aforementioned and helps improving memory defragmentation. 25218468d93SRafael Aquini 25318468d93SRafael Aquini# 254e9e96b39SMel Gorman# support for memory compaction 255e9e96b39SMel Gormanconfig COMPACTION 256e9e96b39SMel Gorman bool "Allow for memory compaction" 25705106e6aSRik van Riel def_bool y 258e9e96b39SMel Gorman select MIGRATION 25933a93877SAndrea Arcangeli depends on MMU 260e9e96b39SMel Gorman help 261e9e96b39SMel Gorman Allows the compaction of memory for the allocation of huge pages. 262e9e96b39SMel Gorman 263e9e96b39SMel Gorman# 2647cbe34cfSChristoph Lameter# support for page migration 2657cbe34cfSChristoph Lameter# 2667cbe34cfSChristoph Lameterconfig MIGRATION 267b20a3503SChristoph Lameter bool "Page migration" 2686c5240aeSChristoph Lameter def_bool y 269de32a817SChen Gang depends on (NUMA || ARCH_ENABLE_MEMORY_HOTREMOVE || COMPACTION || CMA) && MMU 270b20a3503SChristoph Lameter help 271b20a3503SChristoph Lameter Allows the migration of the physical location of pages of processes 272e9e96b39SMel Gorman while the virtual addresses are not changed. This is useful in 273e9e96b39SMel Gorman two situations. The first is on NUMA systems to put pages nearer 274e9e96b39SMel Gorman to the processors accessing. The second is when allocating huge 275e9e96b39SMel Gorman pages as migration can relocate pages to satisfy a huge page 276e9e96b39SMel Gorman allocation instead of reclaiming. 2776550e07fSGreg Kroah-Hartman 278c177c81eSNaoya Horiguchiconfig ARCH_ENABLE_HUGEPAGE_MIGRATION 279*6341e62bSChristoph Jaeger bool 280c177c81eSNaoya Horiguchi 281600715dcSJeremy Fitzhardingeconfig PHYS_ADDR_T_64BIT 282600715dcSJeremy Fitzhardinge def_bool 64BIT || ARCH_PHYS_ADDR_T_64BIT 283600715dcSJeremy Fitzhardinge 2844b51d669SChristoph Lameterconfig ZONE_DMA_FLAG 2854b51d669SChristoph Lameter int 2864b51d669SChristoph Lameter default "0" if !ZONE_DMA 2874b51d669SChristoph Lameter default "1" 2884b51d669SChristoph Lameter 2892a7326b5SChristoph Lameterconfig BOUNCE 2909ca24e2eSVinayak Menon bool "Enable bounce buffers" 2919ca24e2eSVinayak Menon default y 2922a7326b5SChristoph Lameter depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM) 2939ca24e2eSVinayak Menon help 2949ca24e2eSVinayak Menon Enable bounce buffers for devices that cannot access 2959ca24e2eSVinayak Menon the full range of memory available to the CPU. Enabled 2969ca24e2eSVinayak Menon by default when ZONE_DMA or HIGHMEM is selected, but you 2979ca24e2eSVinayak Menon may say n to override this. 2982a7326b5SChristoph Lameter 299ffecfd1aSDarrick J. Wong# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often 300ffecfd1aSDarrick J. Wong# have more than 4GB of memory, but we don't currently use the IOTLB to present 301ffecfd1aSDarrick J. Wong# a 32-bit address to OHCI. So we need to use a bounce pool instead. 302ffecfd1aSDarrick J. Wong# 303ffecfd1aSDarrick J. Wong# We also use the bounce pool to provide stable page writes for jbd. jbd 304ffecfd1aSDarrick J. Wong# initiates buffer writeback without locking the page or setting PG_writeback, 305ffecfd1aSDarrick J. Wong# and fixing that behavior (a second time; jbd2 doesn't have this problem) is 306ffecfd1aSDarrick J. Wong# a major rework effort. Instead, use the bounce buffer to snapshot pages 307ffecfd1aSDarrick J. Wong# (until jbd goes away). The only jbd user is ext3. 308ffecfd1aSDarrick J. Wongconfig NEED_BOUNCE_POOL 309ffecfd1aSDarrick J. Wong bool 310ffecfd1aSDarrick J. Wong default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && JBD) 311ffecfd1aSDarrick J. Wong 3126225e937SChristoph Lameterconfig NR_QUICK 3136225e937SChristoph Lameter int 3146225e937SChristoph Lameter depends on QUICKLIST 3150176bd3dSPaul Mundt default "2" if AVR32 3166225e937SChristoph Lameter default "1" 317f057eac0SStephen Rothwell 318f057eac0SStephen Rothwellconfig VIRT_TO_BUS 3194febd95aSStephen Rothwell bool 3204febd95aSStephen Rothwell help 3214febd95aSStephen Rothwell An architecture should select this if it implements the 3224febd95aSStephen Rothwell deprecated interface virt_to_bus(). All new architectures 3234febd95aSStephen Rothwell should probably not select this. 3244febd95aSStephen Rothwell 325cddb8a5cSAndrea Arcangeli 326cddb8a5cSAndrea Arcangeliconfig MMU_NOTIFIER 327cddb8a5cSAndrea Arcangeli bool 328fc4d5c29SDavid Howells 329f8af4da3SHugh Dickinsconfig KSM 330f8af4da3SHugh Dickins bool "Enable KSM for page merging" 331f8af4da3SHugh Dickins depends on MMU 332f8af4da3SHugh Dickins help 333f8af4da3SHugh Dickins Enable Kernel Samepage Merging: KSM periodically scans those areas 334f8af4da3SHugh Dickins of an application's address space that an app has advised may be 335f8af4da3SHugh Dickins mergeable. When it finds pages of identical content, it replaces 336d0f209f6SHugh Dickins the many instances by a single page with that content, so 337f8af4da3SHugh Dickins saving memory until one or another app needs to modify the content. 338f8af4da3SHugh Dickins Recommended for use with KVM, or with other duplicative applications. 339c73602adSHugh Dickins See Documentation/vm/ksm.txt for more information: KSM is inactive 340c73602adSHugh Dickins until a program has madvised that an area is MADV_MERGEABLE, and 341c73602adSHugh Dickins root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set). 342f8af4da3SHugh Dickins 343e0a94c2aSChristoph Lameterconfig DEFAULT_MMAP_MIN_ADDR 344e0a94c2aSChristoph Lameter int "Low address space to protect from user allocation" 3456e141546SDavid Howells depends on MMU 346e0a94c2aSChristoph Lameter default 4096 347e0a94c2aSChristoph Lameter help 348e0a94c2aSChristoph Lameter This is the portion of low virtual memory which should be protected 349e0a94c2aSChristoph Lameter from userspace allocation. Keeping a user from writing to low pages 350e0a94c2aSChristoph Lameter can help reduce the impact of kernel NULL pointer bugs. 351e0a94c2aSChristoph Lameter 352e0a94c2aSChristoph Lameter For most ia64, ppc64 and x86 users with lots of address space 353e0a94c2aSChristoph Lameter a value of 65536 is reasonable and should cause no problems. 354e0a94c2aSChristoph Lameter On arm and other archs it should not be higher than 32768. 355788084abSEric Paris Programs which use vm86 functionality or have some need to map 356788084abSEric Paris this low address space will need CAP_SYS_RAWIO or disable this 357788084abSEric Paris protection by setting the value to 0. 358e0a94c2aSChristoph Lameter 359e0a94c2aSChristoph Lameter This value can be changed after boot using the 360e0a94c2aSChristoph Lameter /proc/sys/vm/mmap_min_addr tunable. 361e0a94c2aSChristoph Lameter 362d949f36fSLinus Torvaldsconfig ARCH_SUPPORTS_MEMORY_FAILURE 363d949f36fSLinus Torvalds bool 364e0a94c2aSChristoph Lameter 3656a46079cSAndi Kleenconfig MEMORY_FAILURE 3666a46079cSAndi Kleen depends on MMU 367d949f36fSLinus Torvalds depends on ARCH_SUPPORTS_MEMORY_FAILURE 3686a46079cSAndi Kleen bool "Enable recovery from hardware memory errors" 369ee6f509cSMinchan Kim select MEMORY_ISOLATION 3706a46079cSAndi Kleen help 3716a46079cSAndi Kleen Enables code to recover from some memory failures on systems 3726a46079cSAndi Kleen with MCA recovery. This allows a system to continue running 3736a46079cSAndi Kleen even when some of its memory has uncorrected errors. This requires 3746a46079cSAndi Kleen special hardware support and typically ECC memory. 3756a46079cSAndi Kleen 376cae681fcSAndi Kleenconfig HWPOISON_INJECT 377413f9efbSAndi Kleen tristate "HWPoison pages injector" 37827df5068SAndi Kleen depends on MEMORY_FAILURE && DEBUG_KERNEL && PROC_FS 379478c5ffcSWu Fengguang select PROC_PAGE_MONITOR 380cae681fcSAndi Kleen 381fc4d5c29SDavid Howellsconfig NOMMU_INITIAL_TRIM_EXCESS 382fc4d5c29SDavid Howells int "Turn on mmap() excess space trimming before booting" 383fc4d5c29SDavid Howells depends on !MMU 384fc4d5c29SDavid Howells default 1 385fc4d5c29SDavid Howells help 386fc4d5c29SDavid Howells The NOMMU mmap() frequently needs to allocate large contiguous chunks 387fc4d5c29SDavid Howells of memory on which to store mappings, but it can only ask the system 388fc4d5c29SDavid Howells allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently 389fc4d5c29SDavid Howells more than it requires. To deal with this, mmap() is able to trim off 390fc4d5c29SDavid Howells the excess and return it to the allocator. 391fc4d5c29SDavid Howells 392fc4d5c29SDavid Howells If trimming is enabled, the excess is trimmed off and returned to the 393fc4d5c29SDavid Howells system allocator, which can cause extra fragmentation, particularly 394fc4d5c29SDavid Howells if there are a lot of transient processes. 395fc4d5c29SDavid Howells 396fc4d5c29SDavid Howells If trimming is disabled, the excess is kept, but not used, which for 397fc4d5c29SDavid Howells long-term mappings means that the space is wasted. 398fc4d5c29SDavid Howells 399fc4d5c29SDavid Howells Trimming can be dynamically controlled through a sysctl option 400fc4d5c29SDavid Howells (/proc/sys/vm/nr_trim_pages) which specifies the minimum number of 401fc4d5c29SDavid Howells excess pages there must be before trimming should occur, or zero if 402fc4d5c29SDavid Howells no trimming is to occur. 403fc4d5c29SDavid Howells 404fc4d5c29SDavid Howells This option specifies the initial value of this option. The default 405fc4d5c29SDavid Howells of 1 says that all excess pages should be trimmed. 406fc4d5c29SDavid Howells 407fc4d5c29SDavid Howells See Documentation/nommu-mmap.txt for more information. 408bbddff05STejun Heo 4094c76d9d1SAndrea Arcangeliconfig TRANSPARENT_HUGEPAGE 41013ece886SAndrea Arcangeli bool "Transparent Hugepage Support" 41115626062SGerald Schaefer depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE 4125d689240SAndrea Arcangeli select COMPACTION 4134c76d9d1SAndrea Arcangeli help 4144c76d9d1SAndrea Arcangeli Transparent Hugepages allows the kernel to use huge pages and 4154c76d9d1SAndrea Arcangeli huge tlb transparently to the applications whenever possible. 4164c76d9d1SAndrea Arcangeli This feature can improve computing performance to certain 4174c76d9d1SAndrea Arcangeli applications by speeding up page faults during memory 4184c76d9d1SAndrea Arcangeli allocation, by reducing the number of tlb misses and by speeding 4194c76d9d1SAndrea Arcangeli up the pagetable walking. 4204c76d9d1SAndrea Arcangeli 4214c76d9d1SAndrea Arcangeli If memory constrained on embedded, you may want to say N. 4224c76d9d1SAndrea Arcangeli 42313ece886SAndrea Arcangelichoice 42413ece886SAndrea Arcangeli prompt "Transparent Hugepage Support sysfs defaults" 42513ece886SAndrea Arcangeli depends on TRANSPARENT_HUGEPAGE 42613ece886SAndrea Arcangeli default TRANSPARENT_HUGEPAGE_ALWAYS 42713ece886SAndrea Arcangeli help 42813ece886SAndrea Arcangeli Selects the sysfs defaults for Transparent Hugepage Support. 42913ece886SAndrea Arcangeli 43013ece886SAndrea Arcangeli config TRANSPARENT_HUGEPAGE_ALWAYS 43113ece886SAndrea Arcangeli bool "always" 43213ece886SAndrea Arcangeli help 43313ece886SAndrea Arcangeli Enabling Transparent Hugepage always, can increase the 43413ece886SAndrea Arcangeli memory footprint of applications without a guaranteed 43513ece886SAndrea Arcangeli benefit but it will work automatically for all applications. 43613ece886SAndrea Arcangeli 43713ece886SAndrea Arcangeli config TRANSPARENT_HUGEPAGE_MADVISE 43813ece886SAndrea Arcangeli bool "madvise" 43913ece886SAndrea Arcangeli help 44013ece886SAndrea Arcangeli Enabling Transparent Hugepage madvise, will only provide a 44113ece886SAndrea Arcangeli performance improvement benefit to the applications using 44213ece886SAndrea Arcangeli madvise(MADV_HUGEPAGE) but it won't risk to increase the 44313ece886SAndrea Arcangeli memory footprint of applications without a guaranteed 44413ece886SAndrea Arcangeli benefit. 44513ece886SAndrea Arcangeliendchoice 44613ece886SAndrea Arcangeli 447bbddff05STejun Heo# 448bbddff05STejun Heo# UP and nommu archs use km based percpu allocator 449bbddff05STejun Heo# 450bbddff05STejun Heoconfig NEED_PER_CPU_KM 451bbddff05STejun Heo depends on !SMP 452bbddff05STejun Heo bool 453bbddff05STejun Heo default y 454077b1f83SDan Magenheimer 455077b1f83SDan Magenheimerconfig CLEANCACHE 456077b1f83SDan Magenheimer bool "Enable cleancache driver to cache clean pages if tmem is present" 457077b1f83SDan Magenheimer default n 458077b1f83SDan Magenheimer help 459077b1f83SDan Magenheimer Cleancache can be thought of as a page-granularity victim cache 460077b1f83SDan Magenheimer for clean pages that the kernel's pageframe replacement algorithm 461077b1f83SDan Magenheimer (PFRA) would like to keep around, but can't since there isn't enough 462077b1f83SDan Magenheimer memory. So when the PFRA "evicts" a page, it first attempts to use 463140a1ef2SMichael Witten cleancache code to put the data contained in that page into 464077b1f83SDan Magenheimer "transcendent memory", memory that is not directly accessible or 465077b1f83SDan Magenheimer addressable by the kernel and is of unknown and possibly 466077b1f83SDan Magenheimer time-varying size. And when a cleancache-enabled 467077b1f83SDan Magenheimer filesystem wishes to access a page in a file on disk, it first 468077b1f83SDan Magenheimer checks cleancache to see if it already contains it; if it does, 469077b1f83SDan Magenheimer the page is copied into the kernel and a disk access is avoided. 470077b1f83SDan Magenheimer When a transcendent memory driver is available (such as zcache or 471077b1f83SDan Magenheimer Xen transcendent memory), a significant I/O reduction 472077b1f83SDan Magenheimer may be achieved. When none is available, all cleancache calls 473077b1f83SDan Magenheimer are reduced to a single pointer-compare-against-NULL resulting 474077b1f83SDan Magenheimer in a negligible performance hit. 475077b1f83SDan Magenheimer 476077b1f83SDan Magenheimer If unsure, say Y to enable cleancache 47727c6aec2SDan Magenheimer 47827c6aec2SDan Magenheimerconfig FRONTSWAP 47927c6aec2SDan Magenheimer bool "Enable frontswap to cache swap pages if tmem is present" 48027c6aec2SDan Magenheimer depends on SWAP 48127c6aec2SDan Magenheimer default n 48227c6aec2SDan Magenheimer help 48327c6aec2SDan Magenheimer Frontswap is so named because it can be thought of as the opposite 48427c6aec2SDan Magenheimer of a "backing" store for a swap device. The data is stored into 48527c6aec2SDan Magenheimer "transcendent memory", memory that is not directly accessible or 48627c6aec2SDan Magenheimer addressable by the kernel and is of unknown and possibly 48727c6aec2SDan Magenheimer time-varying size. When space in transcendent memory is available, 48827c6aec2SDan Magenheimer a significant swap I/O reduction may be achieved. When none is 48927c6aec2SDan Magenheimer available, all frontswap calls are reduced to a single pointer- 49027c6aec2SDan Magenheimer compare-against-NULL resulting in a negligible performance hit 49127c6aec2SDan Magenheimer and swap data is stored as normal on the matching swap device. 49227c6aec2SDan Magenheimer 49327c6aec2SDan Magenheimer If unsure, say Y to enable frontswap. 494f825c736SAneesh Kumar K.V 495f825c736SAneesh Kumar K.Vconfig CMA 496f825c736SAneesh Kumar K.V bool "Contiguous Memory Allocator" 497de32a817SChen Gang depends on HAVE_MEMBLOCK && MMU 498f825c736SAneesh Kumar K.V select MIGRATION 499f825c736SAneesh Kumar K.V select MEMORY_ISOLATION 500f825c736SAneesh Kumar K.V help 501f825c736SAneesh Kumar K.V This enables the Contiguous Memory Allocator which allows other 502f825c736SAneesh Kumar K.V subsystems to allocate big physically-contiguous blocks of memory. 503f825c736SAneesh Kumar K.V CMA reserves a region of memory and allows only movable pages to 504f825c736SAneesh Kumar K.V be allocated from it. This way, the kernel can use the memory for 505f825c736SAneesh Kumar K.V pagecache and when a subsystem requests for contiguous area, the 506f825c736SAneesh Kumar K.V allocated pages are migrated away to serve the contiguous request. 507f825c736SAneesh Kumar K.V 508f825c736SAneesh Kumar K.V If unsure, say "n". 509f825c736SAneesh Kumar K.V 510f825c736SAneesh Kumar K.Vconfig CMA_DEBUG 511f825c736SAneesh Kumar K.V bool "CMA debug messages (DEVELOPMENT)" 512f825c736SAneesh Kumar K.V depends on DEBUG_KERNEL && CMA 513f825c736SAneesh Kumar K.V help 514f825c736SAneesh Kumar K.V Turns on debug messages in CMA. This produces KERN_DEBUG 515f825c736SAneesh Kumar K.V messages for every CMA call as well as various messages while 516f825c736SAneesh Kumar K.V processing calls such as dma_alloc_from_contiguous(). 517f825c736SAneesh Kumar K.V This option does not affect warning and error messages. 518bf550fc9SAlexander Graf 519a254129eSJoonsoo Kimconfig CMA_AREAS 520a254129eSJoonsoo Kim int "Maximum count of the CMA areas" 521a254129eSJoonsoo Kim depends on CMA 522a254129eSJoonsoo Kim default 7 523a254129eSJoonsoo Kim help 524a254129eSJoonsoo Kim CMA allows to create CMA areas for particular purpose, mainly, 525a254129eSJoonsoo Kim used as device private area. This parameter sets the maximum 526a254129eSJoonsoo Kim number of CMA area in the system. 527a254129eSJoonsoo Kim 528a254129eSJoonsoo Kim If unsure, leave the default value "7". 529a254129eSJoonsoo Kim 530af8d417aSDan Streetmanconfig MEM_SOFT_DIRTY 531af8d417aSDan Streetman bool "Track memory changes" 532af8d417aSDan Streetman depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS 533af8d417aSDan Streetman select PROC_PAGE_MONITOR 5344e2e2770SSeth Jennings help 535af8d417aSDan Streetman This option enables memory changes tracking by introducing a 536af8d417aSDan Streetman soft-dirty bit on pte-s. This bit it set when someone writes 537af8d417aSDan Streetman into a page just as regular dirty bit, but unlike the latter 538af8d417aSDan Streetman it can be cleared by hands. 539af8d417aSDan Streetman 540af8d417aSDan Streetman See Documentation/vm/soft-dirty.txt for more details. 5414e2e2770SSeth Jennings 5422b281117SSeth Jenningsconfig ZSWAP 5432b281117SSeth Jennings bool "Compressed cache for swap pages (EXPERIMENTAL)" 5442b281117SSeth Jennings depends on FRONTSWAP && CRYPTO=y 5452b281117SSeth Jennings select CRYPTO_LZO 54612d79d64SDan Streetman select ZPOOL 5472b281117SSeth Jennings default n 5482b281117SSeth Jennings help 5492b281117SSeth Jennings A lightweight compressed cache for swap pages. It takes 5502b281117SSeth Jennings pages that are in the process of being swapped out and attempts to 5512b281117SSeth Jennings compress them into a dynamically allocated RAM-based memory pool. 5522b281117SSeth Jennings This can result in a significant I/O reduction on swap device and, 5532b281117SSeth Jennings in the case where decompressing from RAM is faster that swap device 5542b281117SSeth Jennings reads, can also improve workload performance. 5552b281117SSeth Jennings 5562b281117SSeth Jennings This is marked experimental because it is a new feature (as of 5572b281117SSeth Jennings v3.11) that interacts heavily with memory reclaim. While these 5582b281117SSeth Jennings interactions don't cause any known issues on simple memory setups, 5592b281117SSeth Jennings they have not be fully explored on the large set of potential 5602b281117SSeth Jennings configurations and workloads that exist. 5612b281117SSeth Jennings 562af8d417aSDan Streetmanconfig ZPOOL 563af8d417aSDan Streetman tristate "Common API for compressed memory storage" 564af8d417aSDan Streetman default n 5650f8975ecSPavel Emelyanov help 566af8d417aSDan Streetman Compressed memory storage API. This allows using either zbud or 567af8d417aSDan Streetman zsmalloc. 5680f8975ecSPavel Emelyanov 569af8d417aSDan Streetmanconfig ZBUD 570af8d417aSDan Streetman tristate "Low density storage for compressed pages" 571af8d417aSDan Streetman default n 572af8d417aSDan Streetman help 573af8d417aSDan Streetman A special purpose allocator for storing compressed pages. 574af8d417aSDan Streetman It is designed to store up to two compressed pages per physical 575af8d417aSDan Streetman page. While this design limits storage density, it has simple and 576af8d417aSDan Streetman deterministic reclaim properties that make it preferable to a higher 577af8d417aSDan Streetman density approach when reclaim will be used. 578bcf1647dSMinchan Kim 579bcf1647dSMinchan Kimconfig ZSMALLOC 580d867f203SMinchan Kim tristate "Memory allocator for compressed pages" 581bcf1647dSMinchan Kim depends on MMU 582bcf1647dSMinchan Kim default n 583bcf1647dSMinchan Kim help 584bcf1647dSMinchan Kim zsmalloc is a slab-based memory allocator designed to store 585bcf1647dSMinchan Kim compressed RAM pages. zsmalloc uses virtual memory mapping 586bcf1647dSMinchan Kim in order to reduce fragmentation. However, this results in a 587bcf1647dSMinchan Kim non-standard allocator interface where a handle, not a pointer, is 588bcf1647dSMinchan Kim returned by an alloc(). This handle must be mapped in order to 589bcf1647dSMinchan Kim access the allocated space. 590bcf1647dSMinchan Kim 591bcf1647dSMinchan Kimconfig PGTABLE_MAPPING 592bcf1647dSMinchan Kim bool "Use page table mapping to access object in zsmalloc" 593bcf1647dSMinchan Kim depends on ZSMALLOC 594bcf1647dSMinchan Kim help 595bcf1647dSMinchan Kim By default, zsmalloc uses a copy-based object mapping method to 596bcf1647dSMinchan Kim access allocations that span two pages. However, if a particular 597bcf1647dSMinchan Kim architecture (ex, ARM) performs VM mapping faster than copying, 598bcf1647dSMinchan Kim then you should select this. This causes zsmalloc to use page table 599bcf1647dSMinchan Kim mapping rather than copying for object mapping. 600bcf1647dSMinchan Kim 6012216ee85SBen Hutchings You can check speed with zsmalloc benchmark: 6022216ee85SBen Hutchings https://github.com/spartacus06/zsmapbench 6039e5c33d7SMark Salter 6049e5c33d7SMark Salterconfig GENERIC_EARLY_IOREMAP 6059e5c33d7SMark Salter bool 606042d27acSHelge Deller 607042d27acSHelge Dellerconfig MAX_STACK_SIZE_MB 608042d27acSHelge Deller int "Maximum user stack size for 32-bit processes (MB)" 609042d27acSHelge Deller default 80 610042d27acSHelge Deller range 8 256 if METAG 611042d27acSHelge Deller range 8 2048 612042d27acSHelge Deller depends on STACK_GROWSUP && (!64BIT || COMPAT) 613042d27acSHelge Deller help 614042d27acSHelge Deller This is the maximum stack size in Megabytes in the VM layout of 32-bit 615042d27acSHelge Deller user processes when the stack grows upwards (currently only on parisc 616042d27acSHelge Deller and metag arch). The stack will be located at the highest memory 617042d27acSHelge Deller address minus the given value, unless the RLIMIT_STACK hard limit is 618042d27acSHelge Deller changed to a smaller value in which case that is used. 619042d27acSHelge Deller 620042d27acSHelge Deller A sane initial value is 80 MB. 621