
11 For general info and legal blurb, please look in index.rst.
13 ------------------------------------------------------------------------------
15 This file contains the documentation for the sysctl files in
18 The files in this directory can be used to tune the operation
19 of the virtual memory (VM) subsystem of the Linux kernel and
23 files can be found in mm/swap.c.
25 Currently, these files are in /proc/sys/vm:
27 - admin_reserve_kbytes
28 - compact_memory
29 - compaction_proactiveness
30 - compact_unevictable_allowed
31 - defrag_mode
32 - dirty_background_bytes
33 - dirty_background_ratio
34 - dirty_bytes
35 - dirty_expire_centisecs
36 - dirty_ratio
37 - dirtytime_expire_seconds
38 - dirty_writeback_centisecs
39 - drop_caches
40 - enable_soft_offline
41 - extfrag_threshold
42 - highmem_is_dirtyable
43 - hugetlb_shm_group
44 - laptop_mode
45 - legacy_va_layout
46 - lowmem_reserve_ratio
47 - max_map_count
48 - mem_profiling (only if CONFIG_MEM_ALLOC_PROFILING=y)
49 - memory_failure_early_kill
50 - memory_failure_recovery
51 - min_free_kbytes
52 - min_slab_ratio
53 - min_unmapped_ratio
54 - mmap_min_addr
55 - mmap_rnd_bits
56 - mmap_rnd_compat_bits
57 - nr_hugepages
58 - nr_hugepages_mempolicy
59 - nr_overcommit_hugepages
60 - nr_trim_pages (only if CONFIG_MMU=n)
61 - numa_zonelist_order
62 - oom_dump_tasks
63 - oom_kill_allocating_task
64 - overcommit_kbytes
65 - overcommit_memory
66 - overcommit_ratio
67 - page-cluster
68 - page_lock_unfairness
69 - panic_on_oom
70 - percpu_pagelist_high_fraction
71 - stat_interval
72 - stat_refresh
73 - numa_stat
74 - swappiness
75 - unprivileged_userfaultfd
76 - user_reserve_kbytes
77 - vfs_cache_pressure
78 - vfs_cache_pressure_denom
79 - watermark_boost_factor
80 - watermark_scale_factor
81 - zone_reclaim_mode
87 The amount of free memory in the system that should be reserved for users
92 That should provide enough for the admin to log in and kill a process,
96 for the full Virtual Memory Size of programs used to recover. Otherwise,
97 root may not be able to log in to recover the system.
110 Changing this takes effect whenever an application requests memory.
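
A minimal sketch of inspecting and adjusting the reserve, assuming a root
shell; the 128 MiB figure is purely illustrative and should be sized to the
recovery tools you actually rely on::

    # Read the current reserve (in KiB).
    cat /proc/sys/vm/admin_reserve_kbytes

    # Raise the reserve to 128 MiB, e.g. for strict overcommit setups.
    echo 131072 > /proc/sys/vm/admin_reserve_kbytes

    # Equivalent, using sysctl(8).
    sysctl -w vm.admin_reserve_kbytes=131072
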
117 all zones are compacted such that free memory is available in contiguous
118 blocks where possible. This can be important, for example, in the allocation of
119 huge pages, although processes will also directly compact memory as required.
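
A minimal sketch of triggering compaction by hand, assuming root; reading
/proc/buddyinfo is just one way to observe the effect on free-block sizes::

    # Ask the kernel to compact all zones on all nodes.
    echo 1 > /proc/sys/vm/compact_memory

    # Compare the distribution of free blocks before and after.
    cat /proc/buddyinfo
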
124 This tunable takes a value in the range [0, 100] with a default value of
125 20. It determines how aggressively compaction is done in the
129 Note that compaction has a non-trivial system-wide impact as pages
131 to latency spikes in unsuspecting applications. The kernel employs
135 Setting the value above 80 will, in addition to lowering the acceptable level
136 of fragmentation, make the compaction code more sensitive to increases in
150 acceptable trade for large contiguous free memory. Set to 0 to prevent
152 On CONFIG_PREEMPT_RT the default value is 0 in order to avoid a page fault, due
160 and maintain the ability to produce huge pages / higher-order pages.
163 once it has occurred, can be long-lasting or even permanent.
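
A hedged sketch of tuning the proactive-compaction knob discussed a few
paragraphs above; the value 40 is illustrative, not a recommendation::

    # Make background compaction more aggressive than the default of 20.
    echo 40 > /proc/sys/vm/compaction_proactiveness

    # Disable proactive compaction entirely.
    echo 0 > /proc/sys/vm/compaction_proactiveness
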
168 Contains the amount of dirty memory at which the background kernel
174 immediately taken into account to evaluate the dirty memory limits and the
181 Contains, as a percentage of total available memory that contains free pages
185 The total available memory is not equal to total system memory.
191 Contains the amount of dirty memory at which a process generating disk writes
196 account to evaluate the dirty memory limits and the other appears as 0 when
199 Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
208 for writeout by the kernel flusher threads. It is expressed in hundredths
209 of a second. Data which has been dirty in-memory for longer than this
216 Contains, as a percentage of total available memory that contains free pages
220 The total available memory is not equal to total system memory.
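
A sketch of adjusting the writeback thresholds; the numbers are illustrative,
and each *_bytes file is the mutually exclusive counterpart of the matching
*_ratio file (writing one zeroes the other)::

    # Start background writeback once 5% of available memory is dirty.
    sysctl -w vm.dirty_background_ratio=5

    # Throttle writers once 15% of available memory is dirty.
    sysctl -w vm.dirty_ratio=15

    # Alternatively, express the background limit in bytes (256 MiB here).
    sysctl -w vm.dirty_background_bytes=268435456
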
239 out to disk. This tunable expresses the interval between those wakeups, in
250 memory becomes free.
264 This is a non-destructive operation and will not free any dirty objects.
272 reclaimed by the kernel when memory is needed elsewhere on the system.
279 You may see informational messages in your kernel log when this file is used.
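
The usual invocation, assuming root; running sync first minimizes the number
of dirty objects that cannot be dropped::

    sync
    # Free page cache only.
    echo 1 > /proc/sys/vm/drop_caches
    # Free reclaimable slab objects (dentries and inodes) only.
    echo 2 > /proc/sys/vm/drop_caches
    # Free both.
    echo 3 > /proc/sys/vm/drop_caches
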
289 Correctable memory errors are very common on servers. Soft-offline is the kernel's
290 solution for memory pages that have (excessive) corrected memory errors.
292 For different types of page, soft-offline has different behaviors / costs.
294 - For a raw error page, soft-offline migrates the in-use page's content to
297 - For a page that is part of a transparent hugepage, soft-offline splits the
300 memory access performance.
302 - For a page that is part of a HugeTLB hugepage, soft-offline first migrates
308 physical memory) vs performance / capacity implications in transparent and
312 memory pages. When set to 1, the kernel attempts to soft offline the pages
319 - Request to soft offline pages from RAS Correctable Errors Collector.
321 - On ARM, the request to soft offline pages from GHES driver.
323 - On PARISC, the request to soft offline pages from Page Deallocation Table.
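
A minimal sketch of toggling this behaviour with the two documented values::

    # Refuse soft-offline requests; pages with correctable errors stay in use.
    echo 0 > /proc/sys/vm/enable_soft_offline

    # Restore the default: attempt to soft offline reported pages.
    echo 1 > /proc/sys/vm/enable_soft_offline
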
328 This parameter affects whether the kernel will compact memory or direct
329 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
330 debugfs shows what the fragmentation index for each order is in each zone in
332 of memory, values towards 1000 imply failures are due to fragmentation and -1
335 The kernel will not compact memory in a zone if the
344 This parameter controls whether the high memory is considered for dirty
346 only the amount of memory directly visible/usable by the kernel can
347 be dirtied. As a result, on systems with a large amount of memory and
351 Changing the value to non-zero would allow more memory to be dirtied
353 storage more effectively. Note this also comes with a risk of premature
355 only use the low memory and they can fill it up with dirty data without
363 shared memory segment using hugetlb page.
370 controlled by this knob are discussed in Documentation/admin-guide/laptops/laptop-mode.rst.
376 If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
384 the kernel to allow process memory to be allocated from the "lowmem"
385 zone. This is because that memory could then be pinned via the mlock()
388 And on large highmem machines this lack of reclaimable lowmem memory
394 captured into pinned user memory.
401 in defending these lower zones.
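
A sketch of reading and writing the ratios; the file holds one integer per
protected lower zone, and the values below are purely illustrative (a larger
ratio means a smaller reserve)::

    # Show the current ratios.
    cat /proc/sys/vm/lowmem_reserve_ratio

    # Illustrative only: set the first three zone ratios.
    echo "256 256 32" > /proc/sys/vm/lowmem_reserve_ratio
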
414 in /proc/zoneinfo like the following. (This is an example from an x86-64 box.)
434 In this example, if normal pages (index=2) are required from this DMA zone and
444 zone[i]->protection[j]
464 The minimum value is 1 (1/1 -> 100%). A value less than 1 is completely
471 This file contains the maximum number of memory map areas a process
472 may have. Memory map areas are used as a side-effect of calling
486 Enable memory profiling (when CONFIG_MEM_ALLOC_PROFILING=y)
488 1: Enable memory profiling.
490 0: Disable memory profiling.
492 Enabling memory profiling introduces a small performance overhead for all
493 memory allocations.
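
A sketch, assuming a kernel built with CONFIG_MEM_ALLOC_PROFILING=y; the
per-callsite counters are exposed through /proc/allocinfo::

    # Turn allocation profiling on.
    echo 1 > /proc/sys/vm/mem_profiling

    # Inspect the per-callsite allocation counters.
    head /proc/allocinfo

    # Turn profiling back off to avoid the per-allocation overhead.
    echo 0 > /proc/sys/vm/mem_profiling
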
501 Control how to kill processes when an uncorrected memory error (typically
502 a 2-bit error in a memory module) is detected in the background by hardware
503 that cannot be handled by the kernel. In some cases (like the page
506 no other up-to-date copy of the data, it will kill to prevent any data
529 Enable memory failure recovery (when supported by the platform)
533 0: Always panic on a memory failure.
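
A hedged example of selecting the hard-poison policy on machines that support
recovery::

    # Kill affected processes as soon as an uncorrected error is detected,
    # rather than waiting until the corrupted page is actually accessed.
    echo 1 > /proc/sys/vm/memory_failure_early_kill

    # Keep recovery enabled (0 would panic on every memory failure).
    echo 1 > /proc/sys/vm/memory_failure_recovery
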
541 watermark[WMARK_MIN] value for each lowmem zone in the system.
545 Some minimal amount of memory is needed to satisfy PF_MEMALLOC
557 A percentage of the total pages in each zone. On Zone reclaim
559 than this percentage of pages in a zone are reclaimable slab pages.
560 This ensures that the slab growth stays under control even in NUMA
565 Note that slab reclaim is triggered in a per zone / node fashion.
566 The process of reclaiming slab memory is currently not node specific
575 This is a percentage of the total pages in each zone. Zone reclaim will
576 only occur if more than this percentage of pages are in a state that
580 against all file-backed unmapped pages including swapcache pages and tmpfs
592 accidentally operate based on the information in the first couple of pages
593 of memory, userspace processes should not be allowed to write to them. By
596 vast majority of applications to work correctly and provide defense in depth
618 resulting from mmap allocations for applications run in
632 See Documentation/admin-guide/mm/hugetlbpage.rst
639 in include/linux/mm_types.h) is not a power of two (an unusual system config could
640 result in this).
654 benefits of memory savings against the extra overhead (~2x slower than before)
656 allocator. Another behavior to note is that if the system is under heavy memory
666 writing 0 to nr_hugepages will make any "in use" HugeTLB pages become surplus
668 in use. You would need to wait for those surplus pages to be released before
669 there are no optimized pages in the system.
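
A minimal sketch of growing and checking the default-size pool; the count of
128 is illustrative::

    # Request 128 persistent huge pages of the default size.
    echo 128 > /proc/sys/vm/nr_hugepages

    # Verify how many were actually allocated.
    grep HugePages_ /proc/meminfo
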
675 Change the size of the hugepage pool at run-time on a specific
678 See Documentation/admin-guide/mm/hugetlbpage.rst
687 See Documentation/admin-guide/mm/hugetlbpage.rst
695 This value adjusts the excess page trimming behaviour of power-of-2 aligned
704 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
713 'where the memory is allocated from' is controlled by zonelists.
718 In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows.
719 ZONE_NORMAL -> ZONE_DMA
720 This means that a memory allocation request for GFP_KERNEL will
721 get memory from ZONE_DMA only when ZONE_NORMAL is not available.
723 In the NUMA case, you can think of the following two types of order.
726 (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
727 (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.
731 out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.
746 On 32-bit, the Normal zone needs to be preserved for allocations accessible
749 On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
759 Enables a system-wide task dump (excluding kernel threads) to be produced
760 when the kernel performs an OOM-killing and includes such information as
768 the memory state information for each one. Such systems should not
769 be forced to incur a performance penalty in OOM conditions when the
772 If this is set to non-zero, this information is shown whenever the
773 OOM killer actually kills a memory-hogging task.
781 This enables or disables killing the OOM-triggering task in
782 out-of-memory situations.
786 selects a rogue memory-hogging task that frees up a large amount of
787 memory when killed.
789 If this is set to non-zero, the OOM killer simply kills the task that
790 triggered the out-of-memory condition. This avoids the expensive
794 is used in oom_kill_allocating_task.
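
A sketch of the two OOM-behaviour knobs discussed above; both are simple
flags::

    # Suppress the per-task dump on very large systems where it is too costly.
    sysctl -w vm.oom_dump_tasks=0

    # Kill the allocating task instead of scanning for the "best" victim.
    sysctl -w vm.oom_kill_allocating_task=1
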
813 This value contains a flag that enables memory overcommitment.
815 When this flag is 0, the kernel compares the userspace memory request
816 size against total memory plus swap and rejects obvious overcommits.
819 memory until it actually runs out.
822 policy that attempts to prevent any overcommit of memory.
826 programs that malloc() huge amounts of memory "just-in-case"
831 See Documentation/mm/overcommit-accounting.rst and mm/util.c::__vm_enough_memory() for more information.
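
A hedged illustration of the overcommit modes; the 80% ratio is an example,
not a recommendation::

    # 0: heuristic overcommit (default), 1: always overcommit,
    # 2: strict accounting against swap + overcommit_ratio% of RAM.
    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=80

    # The resulting commit limit and the current commitment.
    grep -i commit /proc/meminfo
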
843 page-cluster
846 page-cluster controls the number of pages up to which consecutive pages
847 are read in from swap in a single attempt. This is the swap counterpart
849 The pages are consecutive not in terms of virtual/physical addresses,
850 but in swap space - that means they were swapped out together.
852 It is a logarithmic value - setting it to zero means "1 page", setting
857 small benefits in tuning this to a different value if your workload is
858 swap-intensive.
862 that consecutive pages readahead would have brought in.
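
A sketch; the value is a power-of-two exponent, so 0 means one page per swap
read and 3 means eight::

    # Read single pages from swap - often preferable for zram or zswap,
    # where there is no seek penalty.
    sysctl -w vm.page-cluster=0
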
870 specified in this file (default is 5), the "fair lock handoff" semantics
876 This enables or disables panic on out-of-memory feature.
882 If this is set to 1, the kernel panics when out-of-memory happens.
884 and those nodes run out of memory, one process
885 may be killed by the OOM killer. No panic occurs in this case,
886 because other nodes' memory may still be free. This means the system-wide status
890 above-mentioned. Even if OOM happens under a memory cgroup, the whole
905 This is the fraction of pages in each zone that can be stored on
906 per-cpu page lists. It is an upper boundary that is divided depending
908 that we do not allow more than 1/8th of pages in each zone to be stored
909 on per-cpu page lists. This entry only changes the value of hot per-cpu
911 each zone between per-cpu lists.
913 The batch value of each per-cpu page list remains the same regardless of
916 The initial value is zero. The kernel uses this value to set the high pcp->high
932 Any read or write (by root only) flushes all the per-cpu vm statistics
936 As a side-effect, it also checks for negative totals (elsewhere reported
937 as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
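
Since any privileged read triggers the refresh, a sketch is simply::

    # Fold the per-cpu deltas into the global counters, then read them.
    cat /proc/sys/vm/stat_refresh
    grep nr_free_pages /proc/vmstat
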
964 assumes equal IO cost and will thus apply memory pressure to the page
965 cache and swap-backed pages equally; lower values signify more
968 Keep in mind that filesystem IO patterns under memory pressure tend to
970 experimentation and will also be workload-dependent.
974 For in-memory swap, like zram or zswap, as well as hybrid setups that
981 file-backed pages is less than the high watermark in a zone.
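
Illustrative settings only; as noted above, the right value is
workload-dependent and best found by experimentation::

    # Favour keeping anonymous pages in RAM when swap is on a slow disk.
    sysctl -w vm.swappiness=10

    # For in-memory swap such as zram or zswap, values above 100
    # (up to the maximum of 200) can make sense.
    sysctl -w vm.swappiness=180
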
987 This flag controls the mode in which unprivileged users can use the
989 to handle page faults in user mode only. In this case, users without
990 CAP_SYS_PTRACE must pass UFFD_USER_MODE_ONLY in order for userfaultfd to
1001 Documentation/admin-guide/mm/userfaultfd.rst.
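
A minimal sketch of the two settings described above::

    # 0: unprivileged users may only handle page faults in user mode
    #    (i.e. they must pass UFFD_USER_MODE_ONLY).
    # 1: unprivileged users may use userfaultfd without that restriction.
    sysctl -w vm.unprivileged_userfaultfd=0
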
1007 min(3% of current process size, user_reserve_kbytes) of free memory.
1008 This is intended to prevent a user from starting a single memory hogging
1014 all free memory with a single process, minus admin_reserve_kbytes.
1015 Any subsequent attempts to execute a command will result in
1016 "fork: Cannot allocate memory".
1018 Changing this takes effect whenever an application requests memory.
1025 the memory which is used for caching of directory and inode objects.
1031 the kernel will never reclaim dentries and inodes due to memory pressure and
1032 this can easily lead to out-of-memory conditions. Increasing vfs_cache_pressure
1053 This factor controls the level of reclaim when memory is being fragmented.
1056 The intent is to leave compaction with less work to do in the future and to
1057 increase the success rate of future high-order allocations such as SLUB
1061 parameter, the unit is in fractions of 10,000. The default value of
1062 15,000 means that up to 150% of the high watermark will be reclaimed in the
1064 is determined by the number of fragmentation events that occurred in the
1066 worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
1074 amount of memory left in a node/system before kswapd is woken up and
1075 how much memory needs to be free before kswapd goes back to sleep.
1077 The unit is in fractions of 10,000. The default value of 10 means the
1078 distances between watermarks are 0.1% of the available memory in the
1079 node/system. The maximum value is 3000, or 30% of memory.
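
A sketch of the arithmetic: the unit is 1/10,000th of memory, so a value of
100 spaces the watermarks 1% of node memory apart instead of the default
0.1%::

    # Wake kswapd earlier and let it keep more memory free.
    sysctl -w vm.watermark_scale_factor=100
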
1084 too small for the allocation bursts occurring in the system. This knob
1092 reclaim memory when a zone runs out of memory. If it is set to zero then no
1094 in the system.
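
The value is a bitmask; a hedged example of enabling local reclaim without
allowing writeback or swap from the reclaim path::

    # 1 = reclaim within the local node before allocating from other nodes,
    # 2 = additionally write out dirty pages, 4 = additionally swap pages.
    sysctl -w vm.zone_reclaim_mode=1
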
1111 and that accessing remote memory would cause a measurable performance
1119 since it cannot use all of system memory to buffer the outgoing writes
1120 anymore, but it preserves the memory on other nodes so that the performance
1124 node unless explicitly overridden by memory policies or cpuset