.. _transhuge:

============================
Transparent Hugepage Support
============================

This document describes design principles for Transparent Hugepage (THP)
support and its interaction with other parts of the memory management
system.

Design principles
=================

14- "graceful fallback": mm components which don't have transparent hugepage
15  knowledge fall back to breaking huge pmd mapping into table of ptes and,
16  if necessary, split a transparent hugepage. Therefore these components
17  can continue working on the regular pages or regular pte mappings.
18
19- if a hugepage allocation fails because of memory fragmentation,
20  regular pages should be gracefully allocated instead and mixed in
21  the same vma without any failure or significant delay and without
22  userland noticing
23
24- if some task quits and more hugepages become available (either
25  immediately in the buddy or through the VM), guest physical memory
26  backed by regular pages should be relocated on hugepages
27  automatically (with khugepaged)
28
29- it doesn't require memory reservation and in turn it uses hugepages
30  whenever possible (the only possible reservation here is kernelcore=
31  to avoid unmovable pages to fragment all the memory but such a tweak
32  is not specific to transparent hugepage support and it's a generic
33  feature that applies to all dynamic high order allocations in the
34  kernel)

get_user_pages and follow_page
==============================

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail page as usual (exactly as they would do on
hugetlbfs). Most GUP users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact that the page is huge.
But if any driver is going to mangle the page structure of a tail
page (like checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead. Taking a reference on any head/tail page
prevents the page from being split by anyone.

.. note::
   these aren't new constraints to the GUP API, and they match the
   same constraints that apply to hugetlbfs too, so any driver capable
   of handling GUP on hugetlbfs will also work fine on transparent
   hugepage backed mappings.
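
For instance, a driver that wants to look at fields that are only valid
on the head page can normalize whatever page GUP handed it first. A
minimal sketch, assuming the driver already holds a GUP reference on the
page (the helper name is hypothetical)::

	#include <linux/mm.h>

	/* Read ->mapping, which is meaningful on the head page only. */
	static struct address_space *example_gup_mapping(struct page *page)
	{
		/* compound_head() is a no-op for non-compound pages. */
		struct page *head = compound_head(page);

		return head->mapping;
	}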

Graceful fallback
=================

Code walking pagetables but unaware about huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one-liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage, for example. split_huge_page() can
fail if the page is pinned, and you must handle this failure correctly.
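
split_huge_page() returns zero on success and non-zero on failure, so a
caller has to be ready to fall back. A minimal sketch, assuming the
caller has locked the page and holds a reference on it (the error value
is illustrative)::

	if (split_huge_page(page)) {
		/* Pinned or otherwise unsplittable: fall back. */
		unlock_page(page);
		put_page(page);
		return -EBUSY;
	}
	/* On success, @page is now a regular, order-0 page. */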

Example to make mremap.c transparent hugepage aware with a one-liner
change::

	diff --git a/mm/mremap.c b/mm/mremap.c
	--- a/mm/mremap.c
	+++ b/mm/mremap.c
	@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
			return NULL;

		pmd = pmd_offset(pud, addr);
	+	split_huge_pmd(vma, pmd, addr);
		if (pmd_none_or_clear_bad(pmd))
			return NULL;

Locking in hugepage aware code
==============================

We want as much code as possible to be hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_lock in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_lock in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise, you can proceed to process the huge pmd and the
hugepage natively. Once finished, you can drop the page table lock.
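
A minimal sketch of that check-lock-recheck pattern, assuming the
mmap_lock is already held and the surrounding walk supplies vma, pud and
addr (simplified; real walkers also have to handle pmd migration
entries)::

	pmd_t *pmd = pmd_offset(pud, addr);
	spinlock_t *ptl;

	if (pmd_trans_huge(*pmd)) {
		ptl = pmd_lock(vma->vm_mm, pmd);
		/* Re-check: split_huge_pmd() may have run in parallel. */
		if (pmd_trans_huge(*pmd)) {
			/* ... process the huge pmd natively ... */
			spin_unlock(ptl);
			return;
		}
		/* Split under us: drop the lock, use the pte path. */
		spin_unlock(ptl);
	}
	/* ... regular pte path ... */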

Refcounts and transparent huge pages
====================================

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

114
115  - get_page()/put_page() and GUP operate on head page's ->_refcount.
116
117  - ->_refcount in tail pages is always zero: get_page_unless_zero() never
118    succeeds on tail pages.
119
120  - map/unmap of PMD entry for the whole compound page increment/decrement
121    ->compound_mapcount, stored in the first tail page of the compound page;
122    and also increment/decrement ->subpages_mapcount (also in the first tail)
123    by COMPOUND_MAPPED when compound_mapcount goes from -1 to 0 or 0 to -1.
124
125  - map/unmap of sub-pages with PTE entry increment/decrement ->_mapcount
126    on relevant sub-page of the compound page, and also increment/decrement
127    ->subpages_mapcount, stored in first tail page of the compound page, when
128    _mapcount goes from -1 to 0 or 0 to -1: counting sub-pages mapped by PTE.
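
As a worked example of those rules, consider an anonymous THP that is
mapped once with a PMD and whose first two sub-pages are then also
mapped with PTEs (the counter trace below is illustrative, derived
directly from the rules above)::

	initial state:      compound_mapcount == -1, subpages_mapcount == 0
	PMD-map whole page: compound_mapcount == 0,
	                    subpages_mapcount += COMPOUND_MAPPED
	PTE-map sub-page 0: sub-page 0 ->_mapcount == 0, subpages_mapcount += 1
	PTE-map sub-page 1: sub-page 1 ->_mapcount == 0, subpages_mapcount += 1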

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries, but we don't have enough information on how to distribute any
additional pins (e.g. from get_user_pages). split_huge_page() fails any
request to split a pinned huge page: it expects the page count to be equal
to the sum of the mapcounts of all sub-pages plus one (the split_huge_page
caller must have a reference to the head page).
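
In simplified form, that pin check amounts to the following (illustrative
only; the real check lives in mm/huge_memory.c and also accounts for the
extra references held by file-backed pages)::

	/* Caller's reference plus one per mapping, nothing else. */
	if (page_count(head) != total_mapcount(head) + 1)
		return -EBUSY;	/* extra pins: refuse to split */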

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().
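
Inside its scan loop, a scanner therefore typically takes its reference
like this (a minimal sketch, with error handling reduced to a bare skip)::

	/* Fails on free pages and on THP tail pages alike. */
	if (!get_page_unless_zero(page))
		continue;	/* can't pin it, skip this page */
	/* ... inspect the stabilized page ... */
	put_page(page);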

All tail pages have zero ->_refcount until the atomic_add(). This prevents
a scanner from getting a reference to a tail page up to that point. After
the atomic_add() we don't care about the ->_refcount value. We already
know how many references should be uncharged from the head page.

For the head page, get_page_unless_zero() will succeed and we don't mind.
It's clear where such a reference should go after the split: it will stay
on the head page.

Note that split_huge_pmd() doesn't have any limitations on refcounting:
the pmd can be split at any point and the split never fails.

Partial unmap and deferred_split_huge_page()
============================================

Unmapping part of a THP (with munmap() or another way) is not going to
free memory immediately. Instead, we detect that a subpage of the THP is
not in use in page_remove_rmap() and queue the THP for splitting if memory
pressure comes. Splitting will free up the unused subpages.

Splitting the page right away is not an option due to the locking context
in the place where we can detect partial unmap. It also might be
counterproductive, since in many cases partial unmap happens during
exit(2) if a THP crosses a VMA boundary.

The function deferred_split_huge_page() is used to queue a page for
splitting. The splitting itself will happen when we get memory pressure
via the shrinker interface.
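
A minimal sketch of that hand-off, as a caller on the rmap side might do
it (the condition is paraphrased; see page_remove_rmap() for the real
logic)::

	/*
	 * A sub-page was unmapped but the compound page as a whole is
	 * still mapped: queue it so the shrinker can split it later.
	 */
	if (PageAnon(head) && page_mapped(head))
		deferred_split_huge_page(head);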