==============
Page migration
==============

Page migration allows moving the physical location of pages between
nodes in a NUMA system while the process is running. This means that the
virtual addresses that the process sees do not change. However, the
system rearranges the physical location of those pages.

Also see Documentation/mm/hmm.rst for migrating pages to or from device
private memory.

The main intent of page migration is to reduce the latency of memory accesses
by moving pages near to the processor where the process accessing that memory
is running.

Page migration allows a process to manually relocate the node on which its
pages are located through the MPOL_MF_MOVE and MPOL_MF_MOVE_ALL options while
setting a new memory policy via mbind(). The pages of a process can also be
relocated by another process using the sys_migrate_pages() function call. The
migrate_pages() system call takes two sets of nodes and moves pages of a
process that are located on the from nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma,
which provides an interface similar to other NUMA functionality for page
migration. ``cat /proc/<pid>/numa_maps`` allows an easy review of where the
pages of a process are located. See also the numa_maps documentation in the
proc(5) man page.

Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. A special system call,
move_pages(), allows the moving of individual pages within a process.
For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.

Larger installations usually partition the system using cpusets into
sections of nodes. Paul Jackson has equipped cpusets with the ability to
move pages when a task is moved to another cpuset (see
:ref:`CPUSETS <cpusets>`).
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset then all its pages are moved with it so that the
performance of the process does not degrade dramatically. The pages of
processes in a cpuset are also moved if the allowed memory nodes of the
cpuset are changed.

All migration techniques preserve the relative location of pages within a
group of nodes: the memory allocation pattern that a process has generated
is kept even after the process has been migrated. This is necessary in order
to preserve the memory latencies, so that processes run with similar
performance after migration.

Page migration occurs in several steps. What follows is first a high level
description for those trying to use migrate_pages() from the kernel
(for userspace usage see Andi Kleen's numactl package mentioned above),
and then a low level description of how the details work.

In kernel use of migrate_pages()
================================

1. Remove folios from the LRU.

   Lists of folios to be migrated are generated by scanning over
   folios and moving them into lists. This is done by
   calling folio_isolate_lru().
   Calling folio_isolate_lru() increases the reference count of the folio
   so that it cannot vanish while the folio migration occurs.
   It also prevents the swapper or other scans from encountering
   the folio.

2. We need to have a function of type new_folio_t that can be
   passed to migrate_pages(). This function should figure out
   how to allocate the correct new folio given the old folio.

3. The migrate_pages() function is called, which attempts
   to do the migration. It will call the function to allocate
   the new folio for each folio that is considered for moving.
   A sketch of such a caller follows this list.
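Putting these three steps together, a kernel-side caller might look like
the sketch below. This is an illustration only: it assumes the
migrate_pages(), folio_isolate_lru(), putback_movable_pages() and
__folio_alloc() declarations found in include/linux/migrate.h and
include/linux/gfp.h at the time of writing (these interfaces change
between releases, so verify against your tree), and all ``my_``-prefixed
names are hypothetical::

  #include <linux/migrate.h>
  #include <linux/mm.h>
  #include <linux/gfp.h>

  /* Step 2: a new_folio_t callback; "private" carries the target node. */
  static struct folio *my_alloc_dst_folio(struct folio *src,
                                          unsigned long private)
  {
          int nid = (int)private;

          /* Allocate a destination folio of the same order on node nid. */
          return __folio_alloc(GFP_HIGHUSER_MOVABLE, folio_order(src),
                               nid, NULL);
  }

  /* Hypothetical helper migrating one LRU folio to NUMA node "nid". */
  static int my_migrate_folio_to_node(struct folio *folio, int nid)
  {
          LIST_HEAD(pagelist);
          int ret;

          /* Step 1: take the folio off the LRU; this also takes a reference. */
          if (!folio_isolate_lru(folio))
                  return -EBUSY;
          list_add_tail(&folio->lru, &pagelist);

          /* Step 3: attempt to migrate everything on the list. */
          ret = migrate_pages(&pagelist, my_alloc_dst_folio, NULL,
                              (unsigned long)nid, MIGRATE_SYNC,
                              MR_SYSCALL, NULL);
          if (ret)
                  /* Return anything that was not migrated to the LRU. */
                  putback_movable_pages(&pagelist);
          return ret;
  }

Note that migrate_pages() returns the number of folios that could not be
migrated (or a negative error code), so anything left on the list after a
partial failure has to be put back explicitly.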
How migrate_pages() works
=========================

migrate_pages() does several passes over its list of folios. A folio is moved
if all references to the folio are removable at the time. The folio has
already been removed from the LRU via folio_isolate_lru() and the refcount
is increased so that the folio cannot be freed while folio migration occurs.

Steps:

1. Lock the page to be migrated.

2. Ensure that writeback is complete.

3. Lock the new page that we want to move to. It is locked so that accesses
   to this (not yet up-to-date) page immediately block while the move is in
   progress.

4. All the page table references to the page are converted to migration
   entries. This decreases the mapcount of the page. If the resulting
   mapcount is not zero then we do not migrate the page. All user space
   processes that attempt to access the page will now wait on the page lock
   or wait for the migration page table entry to be removed.

5. The i_pages lock is taken. This will cause all processes trying
   to access the page via the mapping to block on the spinlock.

6. The refcount of the page is examined and we back out if references remain.
   Otherwise, we know that we are the only one referencing this page.

7. The xarray (``i_pages``) is checked and if it does not contain the pointer
   to this page then we back out because someone else modified the mapping.

8. The new page is prepped with some settings from the old page so that
   accesses to the new page will discover a page with the correct settings.

9. The xarray is changed to point to the new page.

10. The reference count of the old page is dropped because the address space
    reference is gone. A reference to the new page is established because
    the new page is referenced by the address space.

11. The i_pages lock is dropped. With that, lookups in the mapping
    become possible again. Processes will move from spinning on the lock
    to sleeping on the locked new page.

12. The page contents are copied to the new page.

13. The remaining page flags are copied to the new page.

14. The old page flags are cleared to indicate that the page does
    not provide any information anymore.

15. Queued up writeback on the new page is triggered.

16. If migration entries were inserted into the page table, then replace them
    with real ptes. Doing so will enable access for user space processes not
    already waiting for the page lock.

17. The page locks are dropped from the old and new page.
    Processes waiting on the page lock will redo their page faults
    and will reach the new page.

18. The new page is moved to the LRU and can be scanned by the swapper,
    etc. again.

Non-LRU page migration
======================

Although migration originally aimed to reduce the latency of memory accesses
for NUMA, compaction also uses migration to create high-order pages. For
compaction purposes, it is also useful to be able to move non-LRU pages,
such as zsmalloc and virtio-balloon pages.

If a driver wants to make its pages movable, it should define a struct
movable_operations. It then needs to call __SetPageMovable() on each
page that it may be able to move. This uses the ``page->mapping`` field,
so this field is not available for the driver to use for other purposes.
A sketch of the resulting driver code follows.
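As an illustration only, a driver-side skeleton might look like the
following sketch. It assumes the struct movable_operations and
__SetPageMovable() declarations found in include/linux/migrate.h at the
time of writing (verify against your tree); the ``my_``-prefixed names
are hypothetical::

  #include <linux/migrate.h>
  #include <linux/mm.h>

  static bool my_isolate_page(struct page *page, isolate_mode_t mode)
  {
          /*
           * Pin the driver metadata behind this page and guarantee that
           * it will not be freed while the migration is in flight.
           */
          return true;    /* false if the page cannot be isolated now */
  }

  static int my_migrate_page(struct page *dst, struct page *src,
                             enum migrate_mode mode)
  {
          /*
           * Copy the contents and driver metadata from src to dst, then
           * update all driver-internal references to point at dst.
           */
          return MIGRATEPAGE_SUCCESS;
  }

  static void my_putback_page(struct page *page)
  {
          /* Undo my_isolate_page(); the page stays where it was. */
  }

  static const struct movable_operations my_movable_ops = {
          .isolate_page   = my_isolate_page,
          .migrate_page   = my_migrate_page,
          .putback_page   = my_putback_page,
  };

  /* Mark a freshly allocated driver page as eligible for migration. */
  static void my_make_page_movable(struct page *page)
  {
          __SetPageMovable(page, &my_movable_ops);
  }

The callbacks are invoked by the migration core: isolate_page() when the
page is selected for migration, migrate_page() to perform the move, and
putback_page() if the migration attempt is aborted.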
Monitoring Migration
====================

The following events (counters) can be used to monitor page migration.

1. PGMIGRATE_SUCCESS: Normal page migration success. Each count means that a
   page was migrated. If the page was a non-THP and non-hugetlb page, then
   this counter is increased by one. If the page was a THP or hugetlb, then
   this counter is increased by the number of THP or hugetlb subpages.
   For example, migration of a single 2MB THP that has 4KB-size base pages
   (subpages) will cause this counter to increase by 512.

2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
   PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
   if it was a THP or hugetlb.

3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.

4. THP_MIGRATION_FAIL: A THP could not be migrated, nor could it be split.

5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP
   had to be split. After splitting, a migration retry was used for its
   subpages.

THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase.

Christoph Lameter, May 8, 2006.
Minchan Kim, Mar 28, 2016.

.. kernel-doc:: include/linux/migrate.h