============================
Transparent Hugepage Support
============================

Objective
=========

Performance critical computing applications dealing with large memory
working sets already run on top of libhugetlbfs and in turn hugetlbfs.
Transparent Hugepage Support (THP) is an alternative means of using huge
pages for the backing of virtual memory that supports the automatic
promotion and demotion of page sizes, without the shortcomings of
hugetlbfs.
.. note::
   In the examples below we presume that the basic page size is 4K and
   the huge page size is 2M, although the actual numbers may vary
   depending on the CPU architecture.
Applications run faster for two reasons. The first factor is almost
completely irrelevant and not of significant interest, because it also
has the downside of requiring a larger clear-page/copy-page in page
faults, which is a potentially negative effect. This first factor
consists in taking a single page fault for each 2M virtual region
touched by userland (so reducing the enter/exit kernel frequency by a
512 times factor).
The second, longer lasting and much more important factor affects all
subsequent accesses for the whole runtime of the application: a single
TLB entry maps a much larger amount of virtual memory, reducing the
number of TLB misses. With virtualization and nested pagetables the TLB
can map a larger size only if both KVM and the Linux guest are using
hugepages, but a significant speedup already happens if only one of the
two is using hugepages, just because of the fact the TLB miss is going
to run faster.
Modern kernels support "multi-size THP" (mTHP), which introduces the
ability to allocate memory in blocks that are bigger than a base page
but smaller than traditional PMD-size (as described above), in
increments of a power-of-2 number of pages. mTHP can back anonymous
memory (for example 16K, 32K, 64K, etc). These THPs continue to be
PTE-mapped, but in many cases can still provide similar benefits to
those outlined above: page faults are significantly reduced (by a
factor of e.g. 4, 8, 16, etc), but latency spikes are much less
prominent because the size of each page isn't as huge as the PMD-sized
variant and there is less memory to clear in each page fault. Some
architectures also employ TLB compression mechanisms to squeeze more
entries in when a set of PTEs are virtually and physically contiguous
and approximately aligned, in which case TLB misses occur less often.
Unless THP is completely disabled, a ``khugepaged`` daemon scans memory
and collapses sequences of basic pages into PMD-sized huge pages.
Optimizing userland is by far not mandatory: khugepaged can already
take care of long lived page allocations even for hugepage unaware
applications that deal with large amounts of memory.
In certain cases when hugepages are enabled system wide, an application
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it; in that case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
madvise(MADV_HUGEPAGE) regions.
sysfs
=====

Global THP controls
-------------------
Transparent Hugepage Support for anonymous memory can be entirely
disabled (mostly for debugging purposes), enabled only inside
MADV_HUGEPAGE regions (to avoid the risk of consuming more memory
resources), or enabled system wide. This can be achieved
per-supported-THP-size with one of::
    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
    echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
    echo never >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
where <size> is the hugepage size being addressed, the available sizes
for which vary by system.

.. note::
   Setting "never" in all sysfs THP controls does **not** disable
   Transparent Huge Pages globally. This is because ``madvise(...,
   MADV_COLLAPSE)`` ignores these settings and collapses ranges to
   PMD-sized huge pages unconditionally.
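The hugepage sizes supported on a given system can be discovered by
listing the per-size sysfs directories; the exact set varies by CPU
architecture and kernel configuration::

    ls -d /sys/kernel/mm/transparent_hugepage/hugepages-*kB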
For example::

    echo always >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
Alternatively it is possible to specify that a given hugepage size
will inherit the top-level "enabled" value::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/enabled
For example::

    echo inherit >/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/enabled
The top-level setting (for use with "inherit") can be set by issuing
one of the following commands::

    echo always >/sys/kernel/mm/transparent_hugepage/enabled
    echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
    echo never >/sys/kernel/mm/transparent_hugepage/enabled
By default, PMD-sized hugepages have enabled="inherit" and all other
hugepage sizes have enabled="never". If enabling multiple hugepage
sizes, the kernel will select the most appropriate enabled size for a
given allocation.
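For example, on a system that supports 64kB mTHP (an assumption; check
the available sizes first, as shown above), anonymous 64kB THP could be
restricted to madvise regions with::

    echo madvise >/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled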
The defrag sysfs control (``/sys/kernel/mm/transparent_hugepage/defrag``)
determines how aggressively the kernel compacts memory to satisfy THP
allocations; it accepts "always", "defer", "defer+madvise", "madvise"
and "never". The last value should be self-explanatory. Note that
``madvise(..., MADV_COLLAPSE)`` can still cause transparent huge pages
to be obtained even if this mode is specified everywhere.
By default the kernel tries to use a huge, PMD-mappable zero page on
read page faults to anonymous mappings. It's possible to disable the
huge zero page by writing 0 or enable it back by writing 1::

    echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
    echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page
Some userspace (such as a test program, or an optimized memory
allocation library) may want to know the size (in bytes) of a
PMD-mappable transparent hugepage::

    cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
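For example, on a typical x86-64 system with 2M PMD-sized THP this
reports::

    $ cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
    2097152

Other architectures and configurations will report different values.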
214 "underused". A THP is underused if the number of zero-filled pages in
khugepaged will be automatically started when PMD-sized THP is enabled
(either of the per-size anon control or the top-level control are set
to "always" or "madvise"), and it'll be automatically shutdown when
PMD-sized THP is disabled (when both the per-size anon control and the
top-level control are "never").
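Whether the daemon is currently running can be verified by looking for
its kernel thread, e.g.::

    ps -e | grep khugepaged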
Khugepaged controls
-------------------

.. note::
   khugepaged currently only searches for opportunities to collapse to
   PMD-sized THP and no attempt is made to collapse to other THP
   sizes.
khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during page faults, it should be
worthwhile to invoke defrag at least in khugepaged. However it's also
possible to disable defrag in khugepaged by writing 0 or enable it by
writing 1::

    echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
    echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
``max_ptes_none`` specifies how many extra small pages (that are not
already mapped) can be allocated when collapsing a group
of small pages into one large page::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
``max_ptes_swap`` specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap
``max_ptes_shared`` specifies how many pages can be shared across multiple
processes. khugepaged might treat pages of THPs as shared if any page of
the THP is shared. Exceeding the number would block the collapse::

    /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_shared
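For example, as a sketch of a conservative setup, the ``max_ptes_none``
knob described above can be set to zero so khugepaged only collapses
ranges that are already fully populated (trading THP coverage for lower
memory overhead; tune to the workload)::

    echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none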
Boot parameters
===============

You can change the sysfs boot time default for the top-level "enabled"
control by passing the parameter ``transparent_hugepage=always`` or
``transparent_hugepage=madvise`` or ``transparent_hugepage=never`` to the
kernel command line.
Alternatively, each supported anonymous THP size can be controlled by
passing ``thp_anon=<size>[KMG],<size>[KMG]:<state>;<size>[KMG]-<size>[KMG]:<state>``,
where ``<size>`` is the THP size (must be a power of 2 of PAGE_SIZE and
a supported anonymous THP size) and ``<state>`` is one of ``always``,
``madvise``, ``never`` or ``inherit``.
For example, the following will set 16K, 32K, 64K THP to ``always``,
set 128K, 512K to ``inherit``, set 256K to ``madvise`` and 1M, 2M
to ``never``::

    thp_anon=16K-64K:always;128K,512K:inherit;256K:madvise;1M-2M:never
Similarly to ``thp_anon``, which controls each supported anonymous THP
size, ``thp_shmem`` controls each supported shmem THP size. ``thp_shmem``
has the same format as ``thp_anon``, but also supports the policy
``within_size``.
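As a sketch, assuming 64K and 2M shmem THP are supported on the system,
the following would set 64K shmem THP to ``within_size`` and 2M to
``advise``::

    thp_shmem=64K:within_size;2M:advise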
Hugepages in tmpfs/shmem
========================

Traditionally, tmpfs only supported a single huge page size ("PMD"). Today,
it also supports smaller sizes just like anonymous memory, often referred
to as "multi-size THP" (mTHP). Huge pages of any size are commonly
represented in the kernel as "large folios".

While there is fine control over the huge page sizes to use for the internal
shmem mount (see below), ordinary tmpfs mounts will make use of all available
huge page sizes without any control over the exact sizes, behaving more like
"always".
tmpfs mounts
------------

The THP allocation policy for tmpfs mounts can be adjusted using the mount
option ``huge=``. It can have the following values:

always
    Attempt to allocate huge pages every time we need a new page;

never
    Do not allocate huge pages;

within_size
    Only allocate huge page if it will be fully within i_size.
    Also respect madvise() hints;

advise
    Only allocate huge pages if requested with madvise();

The default policy is ``never``.
``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
``huge=never`` will not attempt to break up huge pages at all, just stop
more from being allocated.
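For example (``/mnt/mytmpfs`` is a placeholder mount point)::

    mount -t tmpfs -o huge=within_size tmpfs /mnt/mytmpfs
    mount -o remount,huge=never /mnt/mytmpfs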
In addition to the policies listed above, the sysfs knob
/sys/kernel/mm/transparent_hugepage/shmem_enabled will affect the
allocation policy of tmpfs mounts when set to the following values:

deny
    For use in emergencies, to force the huge option off from all mounts;

force
    Force the huge option on for all - very useful for testing;
shmem / internal tmpfs
----------------------
The internal shmem mount can be controlled over the sysfs interface
/sys/kernel/mm/transparent_hugepage/shmem_enabled and the knobs
per THP size in
'/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/shmem_enabled'.

The global knob has the same semantics as the ``huge=`` mount options
for tmpfs mounts, except that the different huge page sizes can be controlled
individually, and will only use the setting of the global knob when the
per-size knob is set to 'inherit'.
The options 'force' and 'deny' are dropped for the individual sizes. The
per-size knobs accept:

always
    Attempt to allocate <size> huge pages every time we need a new page;

inherit
    Inherit the top-level "shmem_enabled" value. By default, PMD-sized
    hugepages have enabled="inherit" and all other hugepage sizes have
    enabled="never";

never
    Do not allocate <size> huge pages. Note that ``madvise(...,
    MADV_COLLAPSE)`` can still cause transparent huge pages to be obtained
    even if this mode is specified everywhere;

within_size
    Only allocate <size> huge page if it will be fully within i_size.
    Also respect madvise() hints;

advise
    Only allocate <size> huge pages if requested with madvise();
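For example, assuming 64kB THP is supported on the system, 64kB shmem
huge pages could be limited to allocations that fit within i_size with::

    echo within_size >/sys/kernel/mm/transparent_hugepage/hugepages-64kB/shmem_enabled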
Need of application restart
===========================

The transparent_hugepage/enabled and
transparent_hugepage/hugepages-<size>kB/enabled values and tmpfs mount
option only affect future behavior. So to make them effective you need
to restart any application that could have been using hugepages. This
also applies to the regions registered in khugepaged.
Monitoring usage
================

The number of PMD-sized anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in ``/proc/meminfo``.
To identify what applications are using PMD-sized anonymous transparent huge
pages, it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
fields for each mapping. (Note that AnonHugePages only applies to traditional
PMD-sized THP for historical reasons and should have been called
AnonHugePmdMapped).
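For example, a rough way to total a single process's PMD-mapped
anonymous THP usage (``<pid>`` is a placeholder) is::

    grep AnonHugePages /proc/<pid>/smaps | awk '{sum += $2} END {print sum " kB"}'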
There are a number of counters in ``/proc/vmstat`` that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc
    is incremented every time a huge page is successfully
    allocated and charged to handle a page fault.

thp_collapse_alloc
    is incremented by khugepaged when it has found
    a range of pages to collapse into one huge page and has
    successfully allocated a new huge page to store the data.

thp_fault_fallback
    is incremented if a page fault fails to allocate or charge
    a huge page and instead falls back to using small pages.

thp_fault_fallback_charge
    is incremented if a page fault fails to charge a huge page and
    instead falls back to using small pages even though the
    allocation was successful.

thp_collapse_alloc_failed
    is incremented if khugepaged found a range
    of pages that should be collapsed into one huge page but failed
    the allocation.

thp_file_alloc
    is incremented every time a shmem huge page is successfully
    allocated (Note that despite being named after "file", the counter
    measures only shmem).

thp_file_fallback
    is incremented if a shmem huge page is attempted to be allocated
    but fails and instead falls back to using small pages (Note that
    despite being named after "file", the counter measures only shmem).

thp_file_fallback_charge
    is incremented if a shmem huge page cannot be charged and instead
    falls back to using small pages even though the allocation was
    successful (Note that despite being named after "file", the counter
    measures only shmem).

thp_file_mapped
    is incremented every time a file or shmem huge page is mapped into
    a user address space.

thp_split_page
    is incremented every time a huge page is split into base
    pages. This can happen for a variety of reasons but a common
    reason is that a huge page is old and is being reclaimed.
    This action implies splitting all PMDs the page is mapped with.

thp_split_page_failed
    is incremented if the kernel fails to split a huge
    page. This can happen if the page was pinned by somebody.

thp_deferred_split_page
    is incremented when a huge page is put onto the split
    queue. This happens when a huge page is partially unmapped and
    splitting it would free up some memory. Pages on the split queue
    are going to be split under memory pressure.

thp_underused_split_page
    is incremented when a huge page on the split queue was split
    because it was underused. A THP is underused if the number of
    zero-filled pages in the THP is above the khugepaged
    ``max_ptes_none`` setting described above.

thp_split_pmd
    is incremented every time a PMD is split into a table of PTEs.
    This can happen, for instance, when an application calls mprotect() or
    munmap() on part of a huge page. It doesn't split the huge page, only
    the page table entry.

thp_zero_page_alloc
    is incremented every time a huge zero page used for thp is
    successfully allocated. Note, it doesn't count every map of
    the huge zero page, only its allocation.

thp_zero_page_alloc_failed
    is incremented if the kernel fails to allocate a
    huge zero page and falls back to using small pages.

thp_swpout
    is incremented every time a huge page is swapped out in one
    piece without splitting.

thp_swpout_fallback
    is incremented if a huge page has to be split before swapout,
    usually because the kernel fails to allocate some contiguous swap space
    for the huge page.
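All of the ``thp_*`` counters above can be read in one go, for
example::

    grep thp_ /proc/vmstat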
In ``/sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats``, there are
also individual counters for each huge page size, which can be utilized to
monitor the system's effectiveness in providing huge pages for usage. Each
counter has its own corresponding file.

anon_fault_alloc
    is incremented every time a huge page is successfully
    allocated and charged to handle a page fault.

anon_fault_fallback
    is incremented if a page fault fails to allocate or charge
    a huge page and instead falls back to using huge pages with
    lower orders or small pages.

anon_fault_fallback_charge
    is incremented if a page fault fails to charge a huge page and
    instead falls back to using huge pages with lower orders or
    small pages even though the allocation was successful.

zswpout
    is incremented every time a huge page is swapped out to zswap in one
    piece without splitting.

swpin
    is incremented every time a huge page is swapped in from a non-zswap
    swap device in one piece.

swpin_fallback
    is incremented if swapin fails to allocate or charge a huge page
    and instead falls back to using huge pages with lower orders or
    small pages.

swpin_fallback_charge
    is incremented if swapin fails to charge a huge page and instead
    falls back to using huge pages with lower orders or small pages
    even though the allocation was successful.

swpout
    is incremented every time a huge page is swapped out to a non-zswap
    swap device in one piece without splitting.

swpout_fallback
    is incremented if a huge page has to be split before swapout,
    usually because the kernel fails to allocate some contiguous swap
    space for the huge page.

shmem_alloc
    is incremented every time a shmem huge page is successfully
    allocated.

shmem_fallback
    is incremented if a shmem huge page is attempted to be allocated
    but fails and instead falls back to using small pages.

shmem_fallback_charge
    is incremented if a shmem huge page cannot be charged and instead
    falls back to using small pages even though the allocation was
    successful.

split
    is incremented every time a huge page is successfully split into
    smaller orders. This can happen for a variety of reasons but a
    common reason is that a huge page is old and is being reclaimed.

split_failed
    is incremented if the kernel fails to split a huge
    page. This can happen if the page was pinned by somebody.

split_deferred
    is incremented when a huge page is put onto the split queue.
    This happens when a huge page is partially unmapped and splitting
    it would free up some memory. Pages on the split queue are going to
    be split under memory pressure, if splitting is possible.
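For example, assuming 64kB THP is supported, the per-size anonymous
fault counter can be read with::

    cat /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/anon_fault_alloc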
As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in ``/proc/vmstat`` to help
monitor this overhead.

compact_stall
    is incremented every time a process stalls to run
    memory compaction so that a huge page is free for use.

compact_success
    is incremented if the system compacted memory and
    freed a huge page for use.

compact_fail
    is incremented if the system tries to compact memory
    but failed.
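These counters can be inspected with, for example::

    grep -E '^compact_(stall|success|fail)' /proc/vmstat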