===============================
Documentation for /proc/sys/vm/
===============================

kernel version 2.6.29

Copyright (c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>

Copyright (c) 2008         Peter W. Morreale <pmorreale@novell.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/vm and is valid for Linux kernel version 2.6.29.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel and
the writeout of dirty data to disk.

Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:

- admin_reserve_kbytes
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- enable_soft_offline
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- page_lock_unfairness
- panic_on_oom
- percpu_pagelist_high_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode


admin_reserve_kbytes
====================

The amount of free memory in the system that should be reserved for users
with the capability cap_sys_admin.

admin_reserve_kbytes defaults to min(3% of free pages, 8MB)

That should provide enough for the admin to log in and kill a process,
if necessary, under the default overcommit 'guess' mode.

Systems running under overcommit 'never' should increase this to account
for the full Virtual Memory Size of programs used to recover. Otherwise,
root may not be able to log in to recover the system.

How do you calculate a minimum useful reserve?

sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

For overcommit 'guess', we can sum resident set sizes (RSS).
On x86_64 this is about 8MB.

For overcommit 'never', we can take the max of their virtual sizes (VSZ)
and add the sum of their RSS.
On x86_64 this is about 128MB.

Changing this takes effect whenever an application requests memory.


compact_memory
==============

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important for example in the allocation of
huge pages although processes will also directly compact memory as required.
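
For example, writing 1 to this file triggers a one-off, system-wide
compaction pass by hand::

    echo 1 > /proc/sys/vm/compact_memory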


compaction_proactiveness
========================

This tunable takes a value in the range [0, 100] with a default value of
20. This tunable determines how aggressively compaction is done in the
background. Writing a non-zero value to this tunable will immediately
trigger proactive compaction. Setting it to 0 disables proactive compaction.

Note that compaction has a non-trivial system-wide impact as pages
belonging to different processes are moved around, which could also lead
to latency spikes in unsuspecting applications. The kernel employs
various heuristics to avoid wasting CPU cycles if it detects that
proactive compaction is not being effective.

Be careful when setting it to extreme values like 100, as that may
cause excessive background compaction activity.


compact_unevictable_allowed
===========================

Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
allowed to examine the unevictable lru (mlocked pages) for pages to compact.
This should be used on systems where stalls for minor page faults are an
acceptable trade for large contiguous free memory. Set to 0 to prevent
compaction from moving pages that are unevictable. Default value is 1.
On CONFIG_PREEMPT_RT the default value is 0 in order to avoid a page fault, due
to compaction, which would block the task from becoming active until the fault
is resolved.


dirty_background_bytes
======================

Contains the amount of dirty memory at which the background kernel
flusher threads will start writeback.

Note:
  dirty_background_bytes is the counterpart of dirty_background_ratio. Only
  one of them may be specified at a time. When one sysctl is written it is
  immediately taken into account to evaluate the dirty memory limits and the
  other appears as 0 when read.


dirty_background_ratio
======================

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which the background kernel
flusher threads will start writing out dirty data.

The total available memory is not equal to total system memory.


dirty_bytes
===========

Contains the amount of dirty memory at which a process generating disk writes
will itself start writeback.

Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.


dirty_expire_centisecs
======================

This tunable is used to define when dirty data is old enough to be eligible
for writeout by the kernel flusher threads.  It is expressed in 100'ths
of a second.  Data which has been dirty in-memory for longer than this
interval will be written out next time a flusher thread wakes up.


dirty_ratio
===========

Contains, as a percentage of total available memory that contains free pages
and reclaimable pages, the number of pages at which a process which is
generating disk writes will itself start writing out dirty data.

The total available memory is not equal to total system memory.
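
As an illustration of the counterpart behaviour described above, switching
from a ratio-based to a byte-based foreground limit makes the other sysctl
read back as 0 (the 256 MB value here is purely an example)::

    echo 268435456 > /proc/sys/vm/dirty_bytes
    cat /proc/sys/vm/dirty_ratio      # now reads 0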


dirtytime_expire_seconds
========================

When a lazytime inode is constantly having its pages dirtied, the inode with
an updated timestamp will never get a chance to be written out.  And, if the
only thing that has happened on the file system is a dirtytime inode caused
by an atime update, a worker will be scheduled to make sure that inode
eventually gets pushed out to disk.  This tunable is used to define when a
dirty inode is old enough to be eligible for writeback by the kernel flusher
threads.  It is also used as the interval at which the dirtytime_writeback
thread wakes up.


dirty_writeback_centisecs
=========================

The kernel flusher threads will periodically wake up and write `old` data
out to disk.  This tunable expresses the interval between those wakeups, in
100'ths of a second.

Setting this to zero disables periodic writeback altogether.


drop_caches
===========

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes.  Once dropped, their
memory becomes free.

To free pagecache::

    echo 1 > /proc/sys/vm/drop_caches

To free reclaimable slab objects (includes dentries and inodes)::

    echo 2 > /proc/sys/vm/drop_caches

To free slab objects and pagecache::

    echo 3 > /proc/sys/vm/drop_caches

This is a non-destructive operation and will not free any dirty objects.
To increase the number of objects freed by this operation, the user may run
`sync` prior to writing to /proc/sys/vm/drop_caches.  This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.

This file is not a means to control the growth of the various kernel caches
(inodes, dentries, pagecache, etc...)  These objects are automatically
reclaimed by the kernel when memory is needed elsewhere on the system.

Use of this file can cause performance problems.  Since it discards cached
objects, it may cost a significant amount of I/O and CPU to recreate the
dropped objects, especially if they were under heavy use.  Because of this,
use outside of a testing or debugging environment is not recommended.

You may see informational messages in your kernel log when this file is
used::

    cat (1234): drop_caches: 3

These are informational only.  They do not mean that anything is wrong
with your system.  To disable them, echo 4 (bit 2) into drop_caches.


enable_soft_offline
===================

Correctable memory errors are very common on servers.  Soft-offline is the
kernel's solution for memory pages having (excessive) corrected memory errors.

For different types of page, soft-offline has different behaviors / costs.

- For a raw error page, soft-offline migrates the in-use page's content to
  a new raw page.

- For a page that is part of a transparent hugepage, soft-offline splits the
  transparent hugepage into raw pages, then migrates only the raw error page.
  As a result, the user is transparently backed by 1 less hugepage, impacting
  memory access performance.

- For a page that is part of a HugeTLB hugepage, soft-offline first migrates
  the entire HugeTLB hugepage, during which a free hugepage will be consumed
  as the migration target.  Then the original hugepage is dissolved into raw
  pages without compensation, reducing the capacity of the HugeTLB pool by 1.

It is the user's call to choose between reliability (staying away from fragile
physical memory) and the performance / capacity implications in the transparent
hugepage and HugeTLB cases.

For all architectures, enable_soft_offline controls whether to soft offline
memory pages.  When set to 1, the kernel attempts to soft offline the pages
whenever it thinks it is needed.  When set to 0, the kernel returns EOPNOTSUPP
to the request to soft offline the pages.  Its default value is 1.

It is worth mentioning that after setting enable_soft_offline to 0, the
following requests to soft offline pages will not be performed:

- Requests to soft offline pages from the RAS Correctable Errors Collector.

- On ARM, requests to soft offline pages from the GHES driver.

- On PARISC, requests to soft offline pages from the Page Deallocation Table.


extfrag_threshold
=================

This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation.  The extfrag/extfrag_index file in
debugfs shows what the fragmentation index for each order is in each zone in
the system.  Values tending towards 0 imply allocations would fail due to lack
of memory, values towards 1000 imply failures are due to fragmentation and -1
implies that the allocation will succeed as long as watermarks are met.

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold.  The default value is 500.


highmem_is_dirtyable
====================

Available only for systems with CONFIG_HIGHMEM enabled (32b systems).

This parameter controls whether the high memory is considered for dirty
writers throttling.  This is not the case by default which means that
only the amount of memory directly visible/usable by the kernel can
be dirtied.  As a result, on systems with a large amount of memory and
lowmem basically depleted, writers might be throttled too early and
streaming writes can get very slow.

Changing the value to a non-zero value would allow more memory to be dirtied
and thus allow writers to write more data which can be flushed to the
storage more effectively.  Note this also comes with a risk of premature
OOM killer invocation because some writers (e.g. direct block device writes)
can only use the low memory and they can fill it up with dirty data without
any throttling.


hugetlb_shm_group
=================

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.


laptop_mode
===========

laptop_mode is a knob that controls "laptop mode".  All the things that are
controlled by this knob are discussed in
Documentation/admin-guide/laptops/laptop-mode.rst.


legacy_va_layout
================

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
will use the legacy (2.4) layout for all processes.


lowmem_reserve_ratio
====================

For some specialised workloads on highmem machines it is dangerous for
the kernel to allow process memory to be allocated from the "lowmem"
zone.  This is because that memory could then be pinned via the mlock()
system call, or by unavailability of swapspace.

And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.

So the Linux page allocator has a mechanism which prevents allocations
which *could* use highmem from using too much lowmem.  This means that
a certain amount of lowmem is defended from the possibility of being
captured into pinned user memory.

(The same argument applies to the old 16 megabyte ISA DMA region.  This
mechanism will also defend that region from allocations which could use
highmem or lowmem).

The `lowmem_reserve_ratio` tunable determines how aggressive the kernel is
in defending these lower zones.

If you have a machine which uses highmem or ISA DMA and your
applications are using mlock(), or if you are running with no swap, then
you probably should change the lowmem_reserve_ratio setting.

The lowmem_reserve_ratio is an array.  You can see it by reading this file::

    % cat /proc/sys/vm/lowmem_reserve_ratio
    256     256     32

But, these values are not used directly.  The kernel calculates the number of
protection pages for each zone from them.  These are shown as an array of
protection pages in /proc/zoneinfo like the following.  (This is an example
from an x86-64 box.)  Each zone has an array of protection pages like this::

    Node 0, zone      DMA
      pages free     1355
            min      3
            low      3
            high     4
        :
        :
        numa_other   0
        protection: (0, 2004, 2004, 2004)
                    ^^^^^^^^^^^^^^^^^^^^^
      pagesets
        cpu: 0  pcp: 0
        :

These protections are added to the score used to judge whether this zone
should be used for page allocation or should be reclaimed.

In this example, if normal pages (index=2) are required to this DMA zone and
watermark[WMARK_HIGH] is used for the watermark, the kernel judges this zone
should not be used because pages_free(1355) is smaller than watermark +
protection[2] (4 + 2004 = 2008).  If this protection value is 0, this zone
would be used for a normal page requirement.  If the requirement is for the
DMA zone (index=0), protection[0] (=0) is used.

zone[i]'s protection[j] is calculated by the following expression::

    (i < j):
      zone[i]->protection[j]
      = (total sums of managed_pages from zone[i+1] to zone[j] on the node)
        / lowmem_reserve_ratio[i];
    (i = j):
      (should not be protected. = 0;)
    (i > j):
      (not necessary, but looks 0)

The default values of lowmem_reserve_ratio[i] are

    === ====================================
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others)
    === ====================================

As the expression above shows, they are the reciprocal of the ratio.
256 means 1/256.  The number of protection pages becomes about "0.39%" of the
total managed pages of the higher zones on the node.

If you would like to protect more pages, smaller values are effective.
The minimum value is 1 (1/1 -> 100%).  A value less than 1 completely
disables protection of the pages.


max_map_count
=============

This file contains the maximum number of memory map areas a process
may have.  Memory map areas are used as a side-effect of calling
malloc, directly by mmap, mprotect, and madvise, and also when loading
shared libraries.

While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.

The default value is 65530.


mem_profiling
=============

Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)

1: Enable memory profiling.

0: Disable memory profiling.

Enabling memory profiling introduces a small performance overhead for all
memory allocations.

The default value depends on CONFIG_MEM_ALLOC_PROFILING_ENABLED_BY_DEFAULT.


memory_failure_early_kill
=========================

Control how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) is detected in the background by hardware
that cannot be handled by the kernel.  In some cases (like the page
still having a valid copy on disk) the kernel will handle the failure
transparently without affecting any applications.  But if there is
no other up-to-date copy of the data it will kill processes to prevent any
data corruption from propagating.

1: Kill all processes that have the corrupted and not reloadable page mapped
as soon as the corruption is detected.  Note this is not supported
for a few types of pages, like kernel internally allocated data or
the swap cache, but works for the majority of user pages.

0: Only unmap the corrupted page from all processes and only kill a process
that tries to access it.

The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
handle this if they want to.

This is only active on architectures/platforms with advanced machine
check handling and depends on the hardware capabilities.

Applications can override this setting individually with the PR_MCE_KILL prctl.


memory_failure_recovery
=======================

Enable memory failure recovery (when supported by the platform).

1: Attempt recovery.

0: Always panic on a memory failure.


min_free_kbytes
===============

This is used to force the Linux VM to keep a minimum number
of kilobytes free.  The VM uses this number to compute a
watermark[WMARK_MIN] value for each lowmem zone in the system.
Each lowmem zone gets a number of reserved free pages based
proportionally on its size.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
allocations; if you set this to lower than 1024KB, your system will
become subtly broken, and prone to deadlock under high loads.

Setting this too high will OOM your machine instantly.


min_slab_ratio
==============

This is available only on NUMA kernels.

A percentage of the total pages in each zone.  On Zone reclaim
(fallback from the local zone occurs) slabs will be reclaimed if more
than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA
systems that rarely perform global reclaim.

The default is 5 percent.

Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.


min_unmapped_ratio
==================

This is available only on NUMA kernels.

This is a percentage of the total pages in each zone.  Zone reclaim will
only occur if more than this percentage of pages are in a state that
zone_reclaim_mode allows to be reclaimed.

If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
against all file-backed unmapped pages including swapcache pages and tmpfs
files.  Otherwise, only unmapped pages backed by normal files but not tmpfs
files and similar are considered.

The default is 1 percent.
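
As a quick illustration, the two NUMA-only reclaim thresholds above can be
inspected directly; the values shown in the comments are the documented
defaults::

    cat /proc/sys/vm/min_slab_ratio        # 5  (percent of pages in a zone)
    cat /proc/sys/vm/min_unmapped_ratio    # 1  (percent of pages in a zone)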


mmap_min_addr
=============

This file indicates the amount of address space which a user process will
be restricted from mmapping.  Since kernel null dereference bugs could
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them.  By
default this value is set to 0 and no protections will be enforced by the
security module.  Setting this value to something like 64k will allow the
vast majority of applications to work correctly and provide defense in depth
against future potential kernel bugs.


mmap_rnd_bits
=============

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations on architectures which support
tuning address space randomization.  This value will be bounded
by the architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable.


mmap_rnd_compat_bits
====================

This value can be used to select the number of bits to use to
determine the random offset to the base address of vma regions
resulting from mmap allocations for applications run in
compatibility mode on architectures which support tuning address
space randomization.  This value will be bounded by the
architecture's minimum and maximum supported values.

This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable.


nr_hugepages
============

Change the minimum size of the hugepage pool.

See Documentation/admin-guide/mm/hugetlbpage.rst


hugetlb_optimize_vmemmap
========================

This knob is not available when the size of 'struct page' (a structure defined
in include/linux/mm_types.h) is not a power of two (an unusual system
configuration could result in this).

Enable (set to 1) or disable (set to 0) HugeTLB Vmemmap Optimization (HVO).

Once enabled, the vmemmap pages of subsequent allocations of HugeTLB pages
from the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and
4095 pages per 1GB HugeTLB page), whereas already allocated HugeTLB pages will
not be optimized.  When those optimized HugeTLB pages are freed from the
HugeTLB pool to the buddy allocator, the vmemmap pages representing that range
need to be remapped again and the vmemmap pages discarded earlier need to be
allocated again.  If your use case is that HugeTLB pages are allocated 'on the
fly' (e.g. never explicitly allocating HugeTLB pages with 'nr_hugepages' but
only setting 'nr_overcommit_hugepages', so that those overcommitted HugeTLB
pages are allocated 'on the fly') instead of being pulled from the HugeTLB
pool, you should weigh the benefit of memory savings against the extra
overhead (~2x slower than before) of allocating or freeing HugeTLB pages
between the HugeTLB pool and the buddy allocator.  Another behavior to note is
that if the system is under heavy memory pressure, it could prevent the user
from freeing HugeTLB pages from the HugeTLB pool to the buddy allocator, since
the allocation of vmemmap pages could fail; you have to retry later if your
system encounters this situation.

Once disabled, the vmemmap pages of subsequent allocations of HugeTLB pages
from the buddy allocator will not be optimized, meaning the extra overhead at
allocation time from the buddy allocator disappears, whereas already optimized
HugeTLB pages will not be affected.  If you want to make sure there are no
optimized HugeTLB pages, you can set "nr_hugepages" to 0 first and then
disable this.  Note that writing 0 to nr_hugepages will make any "in use"
HugeTLB pages become surplus pages.  So, those surplus pages are still
optimized until they are no longer in use.  You would need to wait for those
surplus pages to be released before there are no optimized pages in the
system.


nr_hugepages_mempolicy
======================

Change the size of the hugepage pool at run-time on a specific
set of NUMA nodes.

See Documentation/admin-guide/mm/hugetlbpage.rst


nr_overcommit_hugepages
=======================

Change the maximum size of the hugepage pool.  The maximum is
nr_hugepages + nr_overcommit_hugepages.

See Documentation/admin-guide/mm/hugetlbpage.rst


nr_trim_pages
=============

This is available only on NOMMU kernels.

This value adjusts the excess page trimming behaviour of power-of-2 aligned
NOMMU mmap allocations.

A value of 0 disables trimming of allocations entirely, while a value of 1
trims excess pages aggressively.  Any value >= 1 acts as the watermark where
trimming of allocations is initiated.

The default value is 1.

See Documentation/admin-guide/mm/nommu-mmap.rst for more information.


numa_zonelist_order
===================

This sysctl is only for NUMA and it is deprecated.  Anything but
Node order will fail!

'where the memory is allocated from' is controlled by zonelists.

(For simplicity, this documentation ignores ZONE_HIGHMEM/ZONE_DMA32;
you may read ZONE_DMA as ZONE_DMA32 where applicable...)

In the non-NUMA case, the zonelist for GFP_KERNEL is ordered as follows.
ZONE_NORMAL -> ZONE_DMA
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following 2 types of order.
Assume a 2 node NUMA system; below is the zonelist of Node(0)'s GFP_KERNEL::

    (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
    (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.

Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL exhaustion.  This increases the possibility
of out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.

Type(B) cannot offer the best locality but is more robust against OOM of
the DMA zone.

Type(A) is called "Node" order.  Type(B) is "Zone" order.

"Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.

"Zone order" orders the zonelists by zone type, then by node within each
zone.  Specify "[Zz]one" for zone order.

Specify "[Dd]efault" to request automatic configuration.

On 32-bit, the Normal zone needs to be preserved for allocations accessible
by the kernel, so "zone" order will be selected.

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
order will be selected.

Default order is recommended unless this is causing problems for your
system/application.


oom_dump_tasks
==============

Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as
pid, uid, tgid, vm size, rss, pgtables_bytes, swapents, oom_score_adj
score, and name.  This is helpful to determine why the OOM killer was
invoked, to identify the rogue task that caused it, and to determine why
the OOM killer chose the task it did to kill.

If this is set to zero, this information is suppressed.  On very
large systems with thousands of tasks it may not be feasible to dump
the memory state information for each one.  Such systems should not
be forced to incur a performance penalty in OOM conditions when the
information may not be desired.

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.

The default value is 1 (enabled).


oom_kill_allocating_task
========================

This enables or disables killing the OOM-triggering task in
out-of-memory situations.

If this is set to zero, the OOM killer will scan through the entire
tasklist and select a task based on heuristics to kill.  This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition.  This avoids the expensive
tasklist scan.

If panic_on_oom is selected, it takes precedence over whatever value
is used in oom_kill_allocating_task.

The default value is 0.


overcommit_kbytes
=================

When overcommit_memory is set to 2, the committed address space is not
permitted to exceed swap plus this amount of physical RAM.  See below.

Note: overcommit_kbytes is the counterpart of overcommit_ratio.  Only one
of them may be specified at a time.  Setting one disables the other (which
then appears as 0 when read).


overcommit_memory
=================

This value contains a flag that enables memory overcommitment.

When this flag is 0, the kernel compares the userspace memory request
size against total memory plus swap and rejects obvious overcommits.

When this flag is 1, the kernel pretends there is always enough
memory until it actually runs out.

When this flag is 2, the kernel uses a "never overcommit"
policy that attempts to prevent any overcommit of memory.
Note that user_reserve_kbytes affects this policy.

This feature can be very useful because there are a lot of
programs that malloc() huge amounts of memory "just-in-case"
and don't use much of it.

The default value is 0.

See Documentation/mm/overcommit-accounting.rst and
mm/util.c::__vm_enough_memory() for more information.


overcommit_ratio
================

When overcommit_memory is set to 2, the committed address
space is not permitted to exceed swap plus this percentage
of physical RAM.  See above.


page-cluster
============

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt.  This is the swap counterpart
to page cache readahead.
The mentioned consecutivity is not in terms of virtual/physical addresses,
but consecutive on swap space - that means they were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting
it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
Zero disables swap readahead completely.

The default value is three (eight pages at a time).  There may be some
small benefits in tuning this to a different value if your workload is
swap-intensive.

Lower values mean lower latencies for initial faults, but at the same time
extra faults and I/O delays for subsequent faults that would have been
satisfied by the readahead of those consecutive pages.


page_lock_unfairness
====================

This value determines the number of times that the page lock can be
stolen from under a waiter.  After the lock is stolen the number of times
specified in this file (default is 5), the "fair lock handoff" semantics
will apply, and the waiter will only be awakened if the lock can be taken.


panic_on_oom
============

This enables or disables the panic-on-out-of-memory feature.

If this is set to 0, the kernel will kill some rogue process via the
oom_killer.  Usually, the oom_killer can kill a rogue process and the
system will survive.

If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits allocations to certain nodes by using
mempolicy/cpusets, and those nodes reach memory exhaustion, one process
may be killed by the oom-killer.  No panic occurs in this case, because
other nodes' memory may be free and the system as a whole may not yet be
in a fatal state.

If this is set to 2, the kernel panics compulsorily even in the
above-mentioned case.  Even if oom happens under a memory cgroup, the whole
system panics.

The default value is 0.

1 and 2 are for failover of clustering.  Please select either
according to your policy of failover.

panic_on_oom=2+kdump gives you a very strong tool to investigate
why oom happens.  You can get a snapshot.


percpu_pagelist_high_fraction
=============================

This is the fraction of pages in each zone that can be stored on
per-cpu page lists.  It is an upper boundary that is divided depending
on the number of online CPUs.  The min value for this is 8, which means
that we do not allow more than 1/8th of the pages in each zone to be stored
on per-cpu page lists.  This entry only changes the value of hot per-cpu
page lists.  A user can specify a number like 100 to allocate 1/100th of
each zone between per-cpu lists.

The batch value of each per-cpu page list remains the same regardless of
the value of the high fraction, so allocation latencies are unaffected.

The initial value is zero.  The kernel uses this value to set the pcp->high
mark based on the low watermark for the zone and the number of local
online CPUs.  If the user writes '0' to this sysctl, it will revert to
this default behavior.


stat_interval
=============

The time interval between which vm statistics are updated.  The default
is 1 second.


stat_refresh
============

Any read or write (by root only) flushes all the per-cpu vm statistics
into their global totals, for more accurate reports when testing, e.g.::

    cat /proc/sys/vm/stat_refresh /proc/meminfo

As a side-effect, it also checks for negative totals (elsewhere reported
as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
(At the time of writing, a few stats are known sometimes to be found negative,
with no ill effects: errors and warnings on these stats are suppressed.)


numa_stat
=========

This interface allows runtime configuration of numa statistics.

When page allocation performance becomes a bottleneck and you can tolerate
some possible tool breakage and decreased numa counter precision, you can
do::

    echo 0 > /proc/sys/vm/numa_stat

When page allocation performance is not a bottleneck and you want all
tooling to work, you can do::

    echo 1 > /proc/sys/vm/numa_stat


swappiness
==========

This control is used to define the rough relative IO cost of swapping
and filesystem paging, as a value between 0 and 200.  At 100, the VM
assumes equal IO cost and will thus apply memory pressure to the page
cache and swap-backed pages equally; lower values signify more
expensive swap IO, higher values indicate cheaper swap IO.

Keep in mind that filesystem IO patterns under memory pressure tend to
be more efficient than swap's random IO.  An optimal value will require
experimentation and will also be workload-dependent.

The default value is 60.

For in-memory swap, like zram or zswap, as well as hybrid setups that
have swap on faster devices than the filesystem, values beyond 100 can
be considered.  For example, if the random IO against the swap device
is on average 2x faster than IO from the filesystem, swappiness should
be 133 (x + 2x = 200, 2x = 133.33).

At 0, the kernel will not initiate swap until the amount of free and
file-backed pages is less than the high watermark in a zone.


unprivileged_userfaultfd
========================

This flag controls the mode in which unprivileged users can use the
userfaultfd system calls.  Set this to 0 to restrict unprivileged users
to handling page faults in user mode only.  In this case, users without
CAP_SYS_PTRACE must pass UFFD_USER_MODE_ONLY in order for userfaultfd to
succeed.  Prohibiting the use of userfaultfd for handling faults from kernel
mode may make certain vulnerabilities more difficult to exploit.

Set this to 1 to allow unprivileged users to use the userfaultfd system
calls without any restrictions.

The default value is 0.

Another way to control permissions for userfaultfd is to use
/dev/userfaultfd instead of userfaultfd(2).  See
Documentation/admin-guide/mm/userfaultfd.rst.


user_reserve_kbytes
===================

When overcommit_memory is set to 2, "never overcommit" mode, reserve
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging
process, such that they cannot recover (kill the hog).

user_reserve_kbytes defaults to min(3% of the current process size, 128MB).

If this is reduced to zero, then the user will be allowed to allocate
all free memory with a single process, minus admin_reserve_kbytes.
Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.
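
For example, on a host running in overcommit 'never' mode, the reserve can be
inspected and raised; the 256 MB figure below is only illustrative::

    cat /proc/sys/vm/user_reserve_kbytes       # kilobytes; defaults to min(3% of process size, 131072)
    echo 262144 > /proc/sys/vm/user_reserve_kbytes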


vfs_cache_pressure
==================

This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.

At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches.  When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions.  Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.

Increasing vfs_cache_pressure significantly beyond 100 may have a negative
performance impact.  Reclaim code needs to take various locks to find freeable
directory and inode objects.  With vfs_cache_pressure=1000, it will look for
ten times more freeable objects than there are.


watermark_boost_factor
======================

This factor controls the level of reclaim when memory is being fragmented.
It defines the percentage of the high watermark of a zone that will be
reclaimed if pages of different mobility are being mixed within pageblocks.
The intent is that compaction has less work to do in the future and to
increase the success rate of future high-order allocations such as SLUB
allocations, THP and hugetlbfs pages.

To make it sensible with respect to the watermark_scale_factor
parameter, the unit is in fractions of 10,000.  The default value of
15,000 means that up to 150% of the high watermark will be reclaimed in the
event of a pageblock being mixed due to fragmentation.  The level of reclaim
is determined by the number of fragmentation events that occurred in the
recent past.  If this value is smaller than a pageblock then a pageblock's
worth of pages will be reclaimed (e.g. 2MB on 64-bit x86).  A boost factor
of 0 will disable the feature.


watermark_scale_factor
======================

This factor controls the aggressiveness of kswapd.  It defines the
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000.  The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system.  The maximum value is 3000, or 30% of memory.

A high rate of threads entering direct reclaim (allocstall) or kswapd
going to sleep prematurely (kswapd_low_wmark_hit_quickly) can indicate
that the number of free pages kswapd maintains for latency reasons is
too small for the allocation bursts occurring in the system.  This knob
can then be used to tune kswapd aggressiveness accordingly.


zone_reclaim_mode
=================

zone_reclaim_mode allows someone to set more or less aggressive approaches to
reclaim memory when a zone runs out of memory.  If it is set to zero then no
zone reclaim occurs.  Allocations will be satisfied from other zones / nodes
in the system.

This is a bitmask, built by OR'ing together the following values:

= ===================================
1 Zone reclaim on
2 Zone reclaim writes dirty pages out
4 Zone reclaim swaps pages
= ===================================

zone_reclaim_mode is disabled by default.  For file servers or workloads
that benefit from having their data cached, zone_reclaim_mode should be
left disabled as the caching effect is likely to be more important than
data locality.

Consider enabling one or more zone_reclaim mode bits if it's known that the
workload is partitioned such that each partition fits within a NUMA node
and that accessing remote memory would cause a measurable performance
reduction.  The page allocator will take additional actions before
allocating off node pages.

Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes.  Zone
reclaim will write out dirty pages if a zone fills up and so effectively
throttle the process.  This may decrease the performance of a single process
since it cannot use all of system memory to buffer the outgoing writes
anymore, but it preserves the memory on other nodes so that the performance
of other processes running on other nodes will not be affected.

Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
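
For example, to enable zone reclaim and also allow it to write out dirty
pages, combine bits 1 and 2 from the table above (and write 0 to restore the
default behaviour)::

    echo 3 > /proc/sys/vm/zone_reclaim_mode
    echo 0 > /proc/sys/vm/zone_reclaim_mode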