xref: /linux/Documentation/mm/page_migration.rst (revision 202779456dc5b75d07b214064161ef6a2421e8be)
==============
Page migration
==============

Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

Also see Documentation/mm/hmm.rst for migrating pages to or from device
private memory.

The main intent of page migration is to reduce the latency of memory accesses
by moving pages near to the processor where the process accessing that memory
is running.

Page migration allows a process to manually relocate the node on which its
pages are located through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while
setting a new memory policy via mbind(). The pages of a process can also be
relocated from another process using the sys_migrate_pages() system call,
which takes two sets of nodes and moves pages of a process that are located
on the "from" nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma,
which provides an interface for page migration that is similar to the other
NUMA functionality. Running ``cat /proc/<pid>/numa_maps`` allows an easy
review of where the pages of a process are located. See also the numa_maps
documentation in the proc(5) man page.

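As a sketch of the userspace side, the following uses the raw syscall so that
libnuma is not required; the helper name and the choice of node 0 are purely
illustrative, not part of any API.

```c
#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Values from <numaif.h>, copied here so libnuma is not required. */
#define MPOL_BIND     2
#define MPOL_MF_MOVE  (1 << 1)

/*
 * Hypothetical helper: bind [addr, addr+len) to node 0 and ask the
 * kernel to migrate any pages of the range that already live on other
 * nodes.  Returns 0 on success, a negative errno value on failure.
 */
int bind_and_move_to_node0(void *addr, size_t len)
{
    unsigned long nodemask = 1UL;        /* bit 0 => node 0 */

    if (syscall(SYS_mbind, addr, len, MPOL_BIND, &nodemask,
                sizeof(nodemask) * 8, MPOL_MF_MOVE) != 0)
        return -errno;
    return 0;
}
```

Note that only pages that have already been faulted in can be moved; pages
allocated after the call simply follow the new policy.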
Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. A special system call,
move_pages(), allows the moving of individual pages within a process.
For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.

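A profiler-driven migrator would first need to know where a page currently
lives. The sketch below (raw syscall, hypothetical helper name) uses the query
form of move_pages(): passing a NULL node array asks the kernel to report, per
page, the node that currently backs it.

```c
#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/*
 * Hypothetical helper: return the NUMA node currently backing the page
 * at addr, or a negative errno-style value on failure.  With a NULL
 * "nodes" array, move_pages() performs a pure query: each status slot
 * receives the node number, or a negative errno (e.g. -ENOENT if the
 * page is not present).
 */
int node_of_page(void *addr)
{
    void *pages[1] = { addr };
    int status[1] = { -1 };

    if (syscall(SYS_move_pages, 0 /* self */, 1UL, pages,
                NULL /* query only */, status, 0) != 0)
        return -errno;
    return status[0];
}
```

To actually move pages, pass an array of destination nodes instead of NULL,
together with the MPOL_MF_MOVE flag.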
Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (see
:ref:`CPUSETS <cpusets>`).
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset then all of its pages are moved with it so that the
performance of the process does not sink dramatically. The pages
of processes in a cpuset are also moved if the allowed memory nodes of a
cpuset are changed.

Page migration preserves the relative location of pages within a group of
nodes for all migration techniques, so that the memory allocation pattern
a process has generated is maintained even after the process is migrated.
This is necessary in order to preserve the memory latencies. Processes will
run with similar performance after migration.

Page migration occurs in several steps. First comes a high level
description for those trying to use migrate_pages() from the kernel
(for userspace usage see Andi Kleen's numactl package mentioned above),
and then a low level description of how the details work.

In kernel use of migrate_pages()
================================

1. Remove pages from the LRU.

   Lists of pages to be migrated are generated by scanning over
   pages and moving them into lists. This is done by
   calling isolate_lru_page().
   Calling isolate_lru_page() increases the references to the page
   so that it cannot vanish while the page migration occurs.
   It also prevents the swapper or other scans from encountering
   the page.

2. We need to have a function of type new_page_t that can be
   passed to migrate_pages(). This function should figure out
   how to allocate the correct new page given the old page.

3. The migrate_pages() function is called which attempts
   to do the migration. It will call the function to allocate
   the new page for each page that is considered for
   moving.

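The three steps can be sketched roughly as follows. This is simplified,
non-runnable kernel code written against one historical form of the API;
callback and function signatures have changed across kernel versions, so check
include/linux/migrate.h for the current ones.

```c
/* Step 2: a new_page_t callback that allocates a destination page. */
static struct page *alloc_dst_page(struct page *src, unsigned long private)
{
	return alloc_page(GFP_HIGHUSER_MOVABLE);
}

	LIST_HEAD(pagelist);

	/* Step 1: take the page off the LRU and collect it on a list. */
	if (!isolate_lru_page(page))
		list_add_tail(&page->lru, &pagelist);

	/*
	 * Step 3: attempt the migration; pages that could not be
	 * migrated are put back on the LRU by migrate_pages().
	 */
	migrate_pages(&pagelist, alloc_dst_page, NULL, 0,
		      MIGRATE_SYNC, MR_NUMA_MISPLACED);
```
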
How migrate_pages() works
=========================

migrate_pages() does several passes over its list of pages. A page is moved
if all references to the page are removable at the time. The page has
already been removed from the LRU via isolate_lru_page() and the refcount
is increased so that the page cannot be freed while page migration occurs.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses
   to this (not yet up-to-date) page immediately block while the move is in
   progress.

4. All the page table references to the page are converted to migration
   entries. This decreases the mapcount of a page. If the resulting
   mapcount is not zero then we do not migrate the page. All user space
   processes that attempt to access the page will now wait on the page lock
   or wait for the migration page table entry to be removed.

5. The i_pages lock is taken. This will cause all processes trying
   to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain.
   Otherwise, we know that we are the only one referencing this page.

7. The radix tree is checked and if it does not contain the pointer to this
   page then we back out because someone else modified the radix tree.

8. The new page is prepped with some settings from the old page so that
   accesses to the new page will discover a page with the correct settings.

9. The radix tree is changed to point to the new page.

10. The reference count of the old page is dropped because the address space
    reference is gone. A reference to the new page is established because
    the new page is referenced by the address space.

11. The i_pages lock is dropped. With that, lookups in the mapping
    become possible again. Processes will move from spinning on the lock
    to sleeping on the locked new page.

12. The page contents are copied to the new page.

13. The remaining page flags are copied to the new page.

14. The old page flags are cleared to indicate that the page does
    not provide any information anymore.

15. Queued up writeback on the new page is triggered.

16. If migration entries were inserted into the page table, then replace them
    with real ptes. Doing so will enable access for user space processes not
    already waiting for the page lock.

17. The page locks are dropped from the old and new page.
    Processes waiting on the page lock will redo their page faults
    and will reach the new page.

18. The new page is moved to the LRU and can be scanned by the swapper,
    etc. again.

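Steps 5-11 form the heart of the algorithm: an atomic replacement of the page
in its address space. In pseudocode (``lookup``, ``replace_slot`` and
``copy_settings`` are stand-ins for illustration, not real kernel functions):

```c
xa_lock_irq(&mapping->i_pages);            /* step 5: lookups now spin  */

if (page_count(page) != expected_refs) {   /* step 6: extra references  */
	xa_unlock_irq(&mapping->i_pages);
	return -EAGAIN;                    /* back out and retry later  */
}
if (lookup(mapping, index) != page) {      /* step 7: tree was changed  */
	xa_unlock_irq(&mapping->i_pages);
	return -EAGAIN;
}

copy_settings(newpage, page);              /* step 8                    */
replace_slot(mapping, index, newpage);     /* step 9                    */
get_page(newpage);                         /* step 10: mapping's ref    */
put_page(page);
xa_unlock_irq(&mapping->i_pages);          /* step 11: lookups resume   */
```
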
Non-LRU page migration
======================

Although migration originally aimed at reducing the latency of memory
accesses for NUMA, compaction also uses migration to create high-order
pages.  For compaction purposes, it is also useful to be able to move
non-LRU pages, such as zsmalloc and virtio-balloon pages.

If a driver wants to make its pages movable, it should define a struct
movable_operations.  It then needs to call __SetPageMovable() on each
page that it may be able to move.  This uses the ``page->mapping`` field,
so this field is not available for the driver to use for other purposes.

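In recent kernels the operations look roughly like the following (the
kernel-doc pulled in at the end of this file has the authoritative
definition):

```c
struct movable_operations {
	/* Pin the page in preparation for migration; true on success. */
	bool (*isolate_page)(struct page *page, isolate_mode_t mode);
	/* Copy src to dst and update all references; 0 on success. */
	int (*migrate_page)(struct page *dst, struct page *src,
			    enum migrate_mode mode);
	/* Migration failed or was aborted: undo isolate_page(). */
	void (*putback_page)(struct page *page);
};
```
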
Monitoring Migration
====================

The following events (counters) can be used to monitor page migration.

1. PGMIGRATE_SUCCESS: Normal page migration success. Each count means that a
   page was migrated. If the page was a non-THP and non-hugetlb page, then
   this counter is increased by one. If the page was a THP or hugetlb, then
   this counter is increased by the number of THP or hugetlb subpages.
   For example, migration of a single 2MB THP that has 4KB-size base pages
   (subpages) will cause this counter to increase by 512.

2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
   PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
   if it was a THP or hugetlb.

3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.

4. THP_MIGRATION_FAIL: A THP could not be migrated, nor could it be split.

5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP
   had to be split. After splitting, a migration retry was used for its
   subpages.

THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase.

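These counters appear in lower case in ``/proc/vmstat``; a minimal sketch for
reading one of them from user space (the helper name is illustrative):

```c
#include <stdio.h>
#include <string.h>

/*
 * Illustrative helper: return the value of one /proc/vmstat counter,
 * e.g. "pgmigrate_success", or -1 if it is not present (for example
 * on a kernel built without CONFIG_MIGRATION).
 */
long vmstat_counter(const char *name)
{
    char key[64];
    long value;
    FILE *f = fopen("/proc/vmstat", "r");

    if (!f)
        return -1;
    while (fscanf(f, "%63s %ld", key, &value) == 2) {
        if (strcmp(key, name) == 0) {
            fclose(f);
            return value;
        }
    }
    fclose(f);
    return -1;
}
```

For example, sampling ``vmstat_counter("pgmigrate_success")`` before and after
a migration-heavy operation gives the number of base pages migrated in
between.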
Christoph Lameter, May 8, 2006.
Minchan Kim, Mar 28, 2016.

.. kernel-doc:: include/linux/migrate.h