Lines matching "page-based" (query: +full:page +full:- +full:based)

1 # SPDX-License-Identifier: GPL-2.0-only
30 compress them into a dynamically allocated RAM-based memory pool.
69 available at the following LWN page:
145 int "Maximum number of physical pages per-zspage"
150 that a zmalloc page (zspage) can consist of. The optimal zspage
226 specifically-sized allocations with user-controlled contents
230 user-controlled allocations. This may very slightly increase
232 of extra pages since the bulk of user-controlled allocations
233 are relatively long-lived.
248 Try running: slabinfo -DA
267 normal kmalloc allocation and makes kmalloc randomly pick one based
281 bool "Page allocator randomization"
284 Randomization of the page allocator improves the average
285 utilization of a direct-mapped memory-side-cache. See section
288 the presence of a memory-side-cache. There are also incidental
289 security benefits as it reduces the predictability of page
292 order of pages is selected based on cache utilization benefits
308 also breaks ancient binaries (including anything libc5 based).
313 On non-ancient distros (post-2000 ones) N is usually a safe choice.
328 ELF-FDPIC binfmt's brk and stack allocator.
332 userspace. Since that isn't generally a problem on no-MMU systems,
335 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
356 This option is best suited for non-NUMA systems with
372 memory hot-plug systems. This is normal.
376 hot-plug and hot-remove.
455 # Keep arch NUMA mapping infrastructure post-init.
511 Example kernel usage would be page structs and page tables.
513 See Documentation/admin-guide/mm/memory-hotplug.rst for more information.
546 sufficient kernel-capable memory (ZONE_NORMAL) must be
547 available to allocate page structs to describe ZONE_MOVABLE.
567 # Heavily threaded applications may benefit from splitting the mm-wide
571 # ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
572 # PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
573 # SPARC32 allocates multiple pte tables within a single page, and therefore
574 # a per-page lock leads to problems when multiple tables need to be locked
576 # DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
624 reliably. The page allocator relies on compaction heavily and
629 linux-mm@kvack.org.
638 # support for free page reporting
640 bool "Free page reporting"
642 Free page reporting allows for the incremental acquisition of
648 # support for page migration
651 bool "Page migration"
659 pages as migration can relocate pages to satisfy a huge page
675 HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
685 int "Maximum scale factor of PCP (Per-CPU pageset) batch allocate/free"
689 In the page allocator, PCP (Per-CPU pageset) is refilled and drained in
690 batches. The batch number is scaled automatically to improve page
703 bool "Enable KSM for page merging"
710 the many instances by a single page with that content, so
762 allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
771 long-term mappings means that the space is wasted.
781 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
799 once and never freed. One full huge page's worth of memory shall
818 applications by speeding up page faults during memory
952 XXX: For now, swap cluster backing transparent huge page
958 bool "Read-only THP for filesystems (EXPERIMENTAL)"
962 Allow khugepaged to put read-only file-backed pages in THP.
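The transparent hugepage fragments above describe speeding up page faults for application memory; from userspace this is commonly requested with madvise(MADV_HUGEPAGE). A minimal sketch follows (not part of the Kconfig text; the 2 MiB huge page size is an architecture-dependent assumption, and CONFIG_TRANSPARENT_HUGEPAGE must be enabled):

    /* Sketch only: hint that an anonymous mapping should be backed by THP. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 4UL << 20;                 /* room for two 2 MiB huge pages, if granted */
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED)
                    return 1;
            if (madvise(p, len, MADV_HUGEPAGE))     /* ask the kernel to use huge pages here */
                    perror("madvise");
            memset(p, 0, len);                      /* touching the range may fault in huge pages */
            printf("mapped %zu bytes at %p\n", len, (void *)p);
            munmap(p, len);
            return 0;
    }

Whether huge pages are actually used depends on the system-wide THP policy and alignment; the madvise() call is only a hint.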
969 bool "No per-page mapcount (EXPERIMENTAL)"
971 Do not maintain per-page mapcounts for pages part of larger
975 this information will rely on less-precise per-allocation information
976 instead: for example, using the average per-page mapcount in such
977 a large allocation instead of the per-page mapcount.
1024 # UP and nommu archs use the km-based percpu allocator
1050 subsystems to allocate big physically-contiguous blocks of memory.
1085 # the max page order for physically contiguous allocations.
1092 # the default page block order is MAX_PAGE_ORDER (10) as per
1096 int "Page Block Order Upper Limit"
1102 The page block order refers to the power of two number of pages that
1104 them. The maximum size of the page block order is at least limited by
1107 This config adds an upper limit on the default page block
1108 order when the page block order is required to be smaller than
1110 (see include/linux/pageblock-flags.h for details).
1124 soft-dirty bit on PTEs. This bit is set when someone writes
1125 into a page, just like the regular dirty bit, but unlike the latter
1128 See Documentation/admin-guide/mm/soft-dirty.rst for more details.
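As a companion to the soft-dirty help text above (and the referenced soft-dirty.rst), here is a minimal userspace sketch, not part of the Kconfig text, that clears the soft-dirty bits by writing "4" to /proc/self/clear_refs and then reads bit 55 of the matching /proc/self/pagemap entry before and after dirtying a page; the pagemap layout and bit number are taken from the admin-guide documentation:

    /* Sketch only: check the soft-dirty bit for one page of this process,
     * assuming one 64-bit pagemap entry per page with soft-dirty in bit 55,
     * and that writing "4" to /proc/self/clear_refs clears soft-dirty bits.
     */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    static int soft_dirty(const void *addr)
    {
            uint64_t entry = 0;
            long psize = sysconf(_SC_PAGESIZE);
            int fd = open("/proc/self/pagemap", O_RDONLY);

            if (fd < 0)
                    return -1;
            pread(fd, &entry, sizeof(entry),
                  (off_t)((uintptr_t)addr / psize) * sizeof(entry));
            close(fd);
            return (int)((entry >> 55) & 1);        /* bit 55: page is soft-dirty */
    }

    int main(void)
    {
            static char buf[4096];
            int fd = open("/proc/self/clear_refs", O_WRONLY);

            if (fd >= 0) {
                    write(fd, "4", 1);              /* clear soft-dirty bits */
                    close(fd);
            }
            printf("before write: %d\n", soft_dirty(buf));
            buf[0] = 1;                             /* dirty the page */
            printf("after write:  %d\n", soft_dirty(buf));
            return 0;
    }

Expected behaviour on a kernel built with this option is 0 before the write and 1 after it; on kernels without soft-dirty support the bit simply stays 0.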
1134 int "Default maximum user stack size for 32-bit processes (MB)"
1139 This is the maximum stack size in Megabytes in the VM layout of 32-bit
1165 This adds PG_idle and PG_young flags to 'struct page'. PTE Accessed
1170 bool "Enable idle page tracking"
1179 See Documentation/admin-guide/mm/idle_page_tracking.rst for
1195 checking, an architecture-agnostic way to find the stack pointer
1223 "device-physical" addresses which is needed for using a DAX
1229 # Helpers to mirror a range of the CPU page tables of a process into device page
1268 on EXPERT systems. /proc/vmstat will only show page counts
1279 bool "Enable infrastructure for get_user_pages()-related unit tests"
1283 to make ioctl calls that can launch kernel-based unit tests for
1288 the non-_fast variants.
1290 There is also a sub-test that allows running dump_page() on any
1292 range of user-space addresses. These pages are either pinned via
1335 not mapped to other processes and other kernel page tables.
1366 handle page faults in userland.
1377 file-backed memory types like shmem and hugetlbfs.
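The userfaultfd fragments above describe handling page faults in userland; a hedged, self-contained sketch (not part of the Kconfig text) that registers an anonymous region for missing-page faults and resolves them from a handler thread might look like the following. It assumes <linux/userfaultfd.h> is installed and that unprivileged use is permitted (vm.unprivileged_userfaultfd or suitable capabilities); build with -lpthread.

    /* Sketch only: resolve missing-page faults in userspace with userfaultfd(2). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <poll.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long page_size;

    /* Handler thread: wait for fault events and install a page full of 'A'. */
    static void *fault_handler(void *arg)
    {
            int uffd = *(int *)arg;
            char *src = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            memset(src, 'A', page_size);
            for (;;) {
                    struct pollfd pfd = { .fd = uffd, .events = POLLIN };
                    struct uffd_msg msg;

                    poll(&pfd, 1, -1);
                    if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
                            continue;
                    if (msg.event == UFFD_EVENT_PAGEFAULT) {
                            struct uffdio_copy copy = {
                                    .dst = msg.arg.pagefault.address &
                                           ~((unsigned long long)page_size - 1),
                                    .src = (unsigned long)src,
                                    .len = page_size,
                            };
                            ioctl(uffd, UFFDIO_COPY, &copy);
                    }
            }
            return NULL;
    }

    int main(void)
    {
            struct uffdio_api api = { .api = UFFD_API };
            struct uffdio_register reg;
            pthread_t thr;
            char *region;
            long len;
            int uffd;

            page_size = sysconf(_SC_PAGESIZE);
            len = 4 * page_size;

            uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);
            if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api)) {
                    perror("userfaultfd");
                    return 1;
            }

            region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            reg.range.start = (unsigned long)region;
            reg.range.len = len;
            reg.mode = UFFDIO_REGISTER_MODE_MISSING;
            if (ioctl(uffd, UFFDIO_REGISTER, &reg)) {
                    perror("UFFDIO_REGISTER");
                    return 1;
            }

            pthread_create(&thr, NULL, fault_handler, &uffd);

            /* First touch of each page traps to the handler thread above. */
            printf("%c %c\n", region[0], region[page_size]);
            return 0;
    }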
1380 # multi-gen LRU {
1382 bool "Multi-Gen LRU"
1384 # make sure folio->flags has enough spare bits
1388 Documentation/admin-guide/mm/multigen_lru.rst for details.
1394 This option enables the multi-gen LRU by default.
1403 This option has a per-memcg and per-node memory overhead.
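The multi-gen LRU fragments above mention enabling it by default; at runtime the feature is toggled through /sys/kernel/mm/lru_gen/enabled, per the multigen_lru.rst document cited earlier. A small sketch follows (not part of the Kconfig text; the sysfs path and the meaning of "y" are taken from that document, and writing normally requires root):

    /* Sketch only: inspect and enable the multi-gen LRU via sysfs. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[32] = "";
            int fd = open("/sys/kernel/mm/lru_gen/enabled", O_RDONLY);

            if (fd < 0) {
                    perror("lru_gen/enabled");
                    return 1;
            }
            read(fd, buf, sizeof(buf) - 1);
            close(fd);
            printf("current mask: %s", buf);        /* e.g. a hex bitmask such as 0x0007 */

            fd = open("/sys/kernel/mm/lru_gen/enabled", O_WRONLY);
            if (fd >= 0) {
                    write(fd, "y", 1);              /* enable every component */
                    close(fd);
            }
            return 0;
    }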
1417 Allow per-vma locking during page fault handling.
1420 handling page faults instead of taking mmap_lock.
1448 stacks (e.g., x86 CET, arm64 GCS or RISC-V Zicfiss).
1454 bool "reclaim empty user page table pages"
1459 Try to reclaim empty user page table pages in paths other than munmap
1462 Note: currently, only empty user PTE page table pages will be reclaimed.