===========
Userfaultfd
===========

Objective
=========

Userfaults allow the implementation of on-demand paging from userland
and more generally they allow userland to take control of various
memory page faults, something otherwise only the kernel code could do.

For example, userfaults allow a proper and more efficient
implementation of the ``PROT_NONE+SIGSEGV`` trick.

Design
======

Userspace creates a new userfaultfd, initializes it, and registers one or more
regions of virtual memory with it. Then, any page faults which occur within the
region(s) result in a message being delivered to the userfaultfd, notifying
userspace of the fault.

The ``userfaultfd`` (aside from registering and unregistering virtual
memory ranges) provides two primary functionalities:

1) a ``read/POLLIN`` protocol to notify a userland thread of the faults
   happening

2) various ``UFFDIO_*`` ioctls that can manage the virtual memory regions
   registered in the ``userfaultfd``, allowing userland to efficiently
   resolve the userfaults it receives via 1) or to manage the virtual
   memory in the background

The real advantage of userfaults compared to regular virtual memory
management via mremap/mprotect is that userfault operations never
involve heavyweight structures like vmas (in fact the ``userfaultfd``
runtime load never takes the mmap_lock for writing). Vmas are not
suitable for page- (or hugepage-) granular fault tracking when dealing
with virtual address spaces that could span terabytes; too many vmas
would be needed for that.

The ``userfaultfd``, once created, can also be
passed using unix domain sockets to a manager process, so the same
manager process could handle the userfaults of a multitude of
different processes without them being aware of what is going on
(well of course unless they later try to use the ``userfaultfd``
themselves on the same region the manager is already tracking, which
is a corner case that would currently return ``-EBUSY``).

API
===

Creating a userfaultfd
----------------------

There are two ways to create a new userfaultfd, each of which provides ways to
restrict access to this functionality (since historically userfaultfds which
handle kernel page faults have been a useful tool for exploiting the kernel).

The first way, supported since userfaultfd was introduced, is the
userfaultfd(2) syscall. Access to this is controlled in several ways:

- Any user can always create a userfaultfd which traps userspace page faults
  only. Such a userfaultfd can be created using the userfaultfd(2) syscall
  with the flag UFFD_USER_MODE_ONLY.

- In order to also trap kernel page faults for the address space, either the
  process needs the CAP_SYS_PTRACE capability, or the system must have
  vm.unprivileged_userfaultfd set to 1. By default, vm.unprivileged_userfaultfd
  is set to 0.

The second way, added to the kernel more recently, is by opening
/dev/userfaultfd and issuing a USERFAULTFD_IOC_NEW ioctl to it. This method
yields userfaultfds equivalent to those created with the userfaultfd(2)
syscall.

Unlike userfaultfd(2), access to /dev/userfaultfd is controlled via normal
filesystem permissions (user/group/mode), which gives fine-grained access to
userfaultfd specifically, without also granting other unrelated privileges at
the same time (as e.g. granting CAP_SYS_PTRACE would do). Users who have access
to /dev/userfaultfd can always create userfaultfds that trap kernel page faults;
vm.unprivileged_userfaultfd is not considered.
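
For instance, a minimal sketch of both creation paths could look like
the following (error handling trimmed; the helper names are purely
illustrative, and ``UFFD_USER_MODE_ONLY`` is passed only in the syscall
variant here as an example)::

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <linux/userfaultfd.h>

    /* 1) The userfaultfd(2) syscall (there may be no libc wrapper). */
    static int create_uffd_syscall(void)
    {
            return syscall(__NR_userfaultfd,
                           O_CLOEXEC | O_NONBLOCK | UFFD_USER_MODE_ONLY);
    }

    /* 2) The /dev/userfaultfd device node. */
    static int create_uffd_devnode(void)
    {
            int dev = open("/dev/userfaultfd", O_RDWR | O_CLOEXEC);

            if (dev < 0)
                    return -1;
            /* The ioctl argument takes the same flags as userfaultfd(2). */
            return ioctl(dev, USERFAULTFD_IOC_NEW, O_CLOEXEC | O_NONBLOCK);
    }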

Initializing a userfaultfd
--------------------------

When first opened the ``userfaultfd`` must be enabled by invoking the
``UFFDIO_API`` ioctl, specifying a ``uffdio_api.api`` value set to ``UFFD_API``
(or a later API version), which will specify the ``read/POLLIN`` protocol
userland intends to speak on the ``UFFD`` and the ``uffdio_api.features``
userland requires. If successful (i.e. if the requested ``uffdio_api.api``
is also spoken by the running kernel and the requested features are going
to be enabled), the ``UFFDIO_API`` ioctl will return into
``uffdio_api.features`` and ``uffdio_api.ioctls`` two 64-bit bitmasks of,
respectively, all the available features of the read(2) protocol and
the generic ioctls available.

The ``uffdio_api.features`` bitmask returned by the ``UFFDIO_API`` ioctl
defines what memory types are supported by the ``userfaultfd`` and what
events, other than page fault notifications, may be generated:

- The ``UFFD_FEATURE_EVENT_*`` flags indicate that various events other
  than page faults are supported. These events are described in more
  detail below in the `Non-cooperative userfaultfd`_ section.

- ``UFFD_FEATURE_MISSING_HUGETLBFS`` and ``UFFD_FEATURE_MISSING_SHMEM``
  indicate that the kernel supports ``UFFDIO_REGISTER_MODE_MISSING``
  registrations for hugetlbfs and shared memory (covering all shmem APIs,
  i.e. tmpfs, ``IPCSHM``, ``/dev/zero``, ``MAP_SHARED``, ``memfd_create``,
  etc) virtual memory areas, respectively.

- ``UFFD_FEATURE_MINOR_HUGETLBFS`` indicates that the kernel supports
  ``UFFDIO_REGISTER_MODE_MINOR`` registration for hugetlbfs virtual memory
  areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
  support for shmem virtual memory areas.

The userland application should set the feature flags it intends to use
when invoking the ``UFFDIO_API`` ioctl, to request that those features be
enabled if supported.
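
For example, a handshake along these lines enables the API and requests
a couple of optional features (a sketch only; the feature set shown is
illustrative, and ``UFFD_FEATURE_WP_UNPOPULATED`` needs a recent kernel)::

    /* Sketch: enable the API, requesting an illustrative feature set. */
    static int uffd_api_handshake(int uffd)
    {
            struct uffdio_api api = {
                    .api      = UFFD_API,
                    .features = UFFD_FEATURE_THREAD_ID |
                                UFFD_FEATURE_WP_UNPOPULATED,
            };

            if (ioctl(uffd, UFFDIO_API, &api) == -1)
                    return -1;      /* old kernel or unsupported feature */

            /* api.features and api.ioctls now hold the supported bitmasks. */
            return 0;
    }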

Once the ``userfaultfd`` API has been enabled, the ``UFFDIO_REGISTER``
ioctl should be invoked (if present in the returned ``uffdio_api.ioctls``
bitmask) to register a memory range in the ``userfaultfd`` by setting the
``uffdio_register`` structure accordingly. The ``uffdio_register.mode``
bitmask will specify to the kernel which kind of faults to track for
the range. The ``UFFDIO_REGISTER`` ioctl will return the
``uffdio_register.ioctls`` bitmask of ioctls that are suitable to resolve
userfaults on the range registered. Not all ioctls will necessarily be
supported for all memory types (e.g. anonymous memory vs. shmem vs.
hugetlbfs), or all types of intercepted faults.

Userland can use the ``uffdio_register.ioctls`` to manage the virtual
address space in the background (to add or potentially also remove
memory from the ``userfaultfd`` registered range). This means a userfault
could be triggered just before userland maps in the background the
user-faulted page.
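
As a sketch, registering an anonymous region for missing-page tracking
could look like this (``area`` and ``len`` are assumed to come from a
prior, page-aligned mmap(2))::

    /* Sketch: register [area, area + len) for missing-page tracking. */
    static int uffd_register_missing(int uffd, void *area, size_t len)
    {
            struct uffdio_register reg = {
                    .range = {
                            .start = (unsigned long)area,
                            .len   = len,
                    },
                    .mode  = UFFDIO_REGISTER_MODE_MISSING,
            };

            if (ioctl(uffd, UFFDIO_REGISTER, &reg) == -1)
                    return -1;

            /* reg.ioctls now says which UFFDIO_* ioctls work on this range. */
            return 0;
    }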

Resolving Userfaults
--------------------

There are three basic ways to resolve userfaults:

- ``UFFDIO_COPY`` atomically copies some existing page contents from
  userspace.

- ``UFFDIO_ZEROPAGE`` atomically zeros the new page.

- ``UFFDIO_CONTINUE`` maps an existing, previously-populated page.

These operations are atomic in the sense that they guarantee nothing can
see a half-populated page, since readers will keep userfaulting until the
operation has finished.

By default, these wake up userfaults blocked on the range in question.
They support a ``UFFDIO_*_MODE_DONTWAKE`` ``mode`` flag, which indicates
that waking will be done separately at some later time.
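
For example, a missing fault at ``addr`` could be resolved by copying a
prepared page in and deferring the wakeup (a sketch; ``page`` is a
hypothetical page-aligned source buffer, ``page_size`` the page size)::

    /* Sketch: resolve a missing fault with one page, waking separately. */
    static int resolve_with_copy(int uffd, unsigned long addr,
                                 void *page, size_t page_size)
    {
            struct uffdio_copy copy = {
                    .dst  = addr & ~(page_size - 1),
                    .src  = (unsigned long)page,
                    .len  = page_size,
                    .mode = UFFDIO_COPY_MODE_DONTWAKE,
            };
            struct uffdio_range range = {
                    .start = addr & ~(page_size - 1),
                    .len   = page_size,
            };

            if (ioctl(uffd, UFFDIO_COPY, &copy) == -1)
                    return -1;
            /* ... do any extra bookkeeping, then wake the faulting thread: */
            return ioctl(uffd, UFFDIO_WAKE, &range);
    }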

Which ioctl to choose depends on the kind of page fault, and what we'd
like to do to resolve it:

- For ``UFFDIO_REGISTER_MODE_MISSING`` faults, the fault needs to be
  resolved by either providing a new page (``UFFDIO_COPY``), or mapping
  the zero page (``UFFDIO_ZEROPAGE``). By default, the kernel would map
  the zero page for a missing fault. With userfaultfd, userspace can
  decide what content to provide before the faulting thread continues.

- For ``UFFDIO_REGISTER_MODE_MINOR`` faults, there is an existing page (in
  the page cache). Userspace has the option of modifying the page's
  contents before resolving the fault. Once the contents are correct
  (modified or not), userspace asks the kernel to map the page and let the
  faulting thread continue with ``UFFDIO_CONTINUE`` (see the sketch below).
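
A minor fault can be resolved along these lines once the page cache
contents are known to be correct (a sketch; ``page_size`` is assumed)::

    /* Sketch: resolve a minor fault by mapping the existing page. */
    static int resolve_with_continue(int uffd, unsigned long addr,
                                     size_t page_size)
    {
            struct uffdio_continue cont = {
                    .range = {
                            .start = addr & ~(page_size - 1),
                            .len   = page_size,
                    },
                    .mode = 0,      /* wake the faulting thread right away */
            };

            return ioctl(uffd, UFFDIO_CONTINUE, &cont);
    }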

Notes:

- You can tell which kind of fault occurred by examining
  ``pagefault.flags`` within the ``uffd_msg``, checking for the
  ``UFFD_PAGEFAULT_FLAG_*`` flags.

- None of the page-delivering ioctls default to the range that you
  registered with.  You must fill in all fields for the appropriate
  ioctl struct including the range.

- You get the address of the access that triggered the missing page
  event out of a ``struct uffd_msg`` that you read in the thread from the
  uffd.  You can supply as many pages as you want with these ioctls.
  Keep in mind that unless you used DONTWAKE, the first of any of
  those ioctls wakes up the faulting thread.

- Be sure to test for all errors including
  (``pollfd[0].revents & POLLERR``).  This can happen, e.g. when the
  ranges supplied were incorrect.
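
Putting the notes above together, the fault-reading thread commonly
looks roughly like this (a sketch only, reusing the includes from the
earlier sketches plus ``<poll.h>``; ``resolve_minor()`` and
``resolve_missing()`` stand in for the resolution ioctls shown above)::

    /* Sketch: poll the uffd, read one event, dispatch on the fault flags. */
    static void handle_one_event(int uffd)
    {
            struct pollfd pollfd = { .fd = uffd, .events = POLLIN };
            struct uffd_msg msg;

            if (poll(&pollfd, 1, -1) == -1 || (pollfd.revents & POLLERR))
                    return;         /* e.g. a bogus range was supplied */

            if (read(uffd, &msg, sizeof(msg)) != sizeof(msg))
                    return;         /* nothing pending on a non-blocking uffd */

            if (msg.event != UFFD_EVENT_PAGEFAULT)
                    return;         /* fork/remap/remove/unmap, see below */

            if (msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_MINOR)
                    resolve_minor(uffd, msg.arg.pagefault.address);
            else
                    resolve_missing(uffd, msg.arg.pagefault.address);
    }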

Write Protect Notifications
---------------------------

This is equivalent to (but faster than) using mprotect and a SIGSEGV
signal handler.

First you need to register a range with ``UFFDIO_REGISTER_MODE_WP``.
Instead of using mprotect(2) you use
``ioctl(uffd, UFFDIO_WRITEPROTECT, struct uffdio_writeprotect *)``
with ``mode = UFFDIO_WRITEPROTECT_MODE_WP``
in the struct passed in.  The range does not default to and does not
have to be identical to the range you registered with.  You can write
protect as many ranges as you like (inside the registered range).
Then, in the thread reading from uffd the struct will have
``msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_WP`` set. Now you send
``ioctl(uffd, UFFDIO_WRITEPROTECT, struct uffdio_writeprotect *)``
again, this time with a ``mode`` that does not have
``UFFDIO_WRITEPROTECT_MODE_WP`` set. This wakes up the thread, which will
then continue to run with writes. This allows you to do the bookkeeping
about the write in the uffd reading thread before the ioctl.
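
In code, the arm / disarm halves of this sequence might look as follows
(a sketch; ``start`` and ``len`` are assumed to lie inside a range
registered with ``UFFDIO_REGISTER_MODE_WP``)::

    /* Sketch: write protect a range, or undo the protection (and wake). */
    static int wp_range(int uffd, unsigned long start, size_t len, int enable)
    {
            struct uffdio_writeprotect wp = {
                    .range = { .start = start, .len = len },
                    .mode  = enable ? UFFDIO_WRITEPROTECT_MODE_WP : 0,
            };

            /* With mode == 0 this also wakes up the blocked writer. */
            return ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);
    }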

If you registered with both ``UFFDIO_REGISTER_MODE_MISSING`` and
``UFFDIO_REGISTER_MODE_WP`` then you need to think about the sequence in
which you supply a page and undo the write protection.  Note that there
is a difference between writes into a WP area and into a !WP area.  The
former will have ``UFFD_PAGEFAULT_FLAG_WP`` set, the latter
``UFFD_PAGEFAULT_FLAG_WRITE``.  The latter did not fail on protection but
you still need to supply a page when ``UFFDIO_REGISTER_MODE_MISSING`` was
used.

Userfaultfd write-protect mode currently behaves differently on none
ptes (e.g. when the page is missing) for different types of memory.

For anonymous memory, ``ioctl(UFFDIO_WRITEPROTECT)`` will ignore none ptes
(e.g. when pages are missing and not populated).  For file-backed memory
like shmem and hugetlbfs, none ptes will be write protected just like a
present pte.  In other words, there will be a userfaultfd write fault
message generated when writing to a missing page on file-backed memory,
as long as the page range was write-protected before.  Such a message will
not be generated on anonymous memory by default.

If the application wants to be able to write protect none ptes on anonymous
memory, one can pre-populate the memory with e.g. ``MADV_POPULATE_READ``.  On
newer kernels, one can also detect the feature ``UFFD_FEATURE_WP_UNPOPULATED``
and set the feature bit in advance to make sure none ptes will also be
write protected, even for anonymous memory.

When using ``UFFDIO_REGISTER_MODE_WP`` in combination with either
``UFFDIO_REGISTER_MODE_MISSING`` or ``UFFDIO_REGISTER_MODE_MINOR``, when
resolving missing / minor faults with ``UFFDIO_COPY`` or ``UFFDIO_CONTINUE``
respectively, it may be desirable for the new page / mapping to be
write-protected (so future writes will also result in a WP fault). These ioctls
support a mode flag (``UFFDIO_COPY_MODE_WP`` or ``UFFDIO_CONTINUE_MODE_WP``
respectively) to configure the mapping this way.
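
For instance, a variant of the earlier copy sketch that installs the
page already write-protected (so the very first write raises a WP fault)
could look like this::

    /* Sketch: like UFFDIO_COPY above, but install the page write-protected. */
    static int copy_in_wrprotected(int uffd, unsigned long dst,
                                   void *page, size_t page_size)
    {
            struct uffdio_copy copy = {
                    .dst  = dst,
                    .src  = (unsigned long)page,
                    .len  = page_size,
                    .mode = UFFDIO_COPY_MODE_WP,
            };

            return ioctl(uffd, UFFDIO_COPY, &copy);
    }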

Memory Poisoning Emulation
--------------------------

In response to a fault (either missing or minor), an action userspace can
take to "resolve" it is to issue a ``UFFDIO_POISON``. This will cause any
future faulters to either get a SIGBUS or, in KVM's case, make the guest
receive an MCE as if there were hardware memory poisoning.
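
A sketch of such a resolution follows (``UFFDIO_POISON`` is only
available on kernels that advertise the ``UFFD_FEATURE_POISON`` feature)::

    /* Sketch: mark one page poisoned; future accesses get SIGBUS / an MCE. */
    static int poison_page(int uffd, unsigned long addr, size_t page_size)
    {
            struct uffdio_poison poison = {
                    .range = {
                            .start = addr & ~(page_size - 1),
                            .len   = page_size,
                    },
                    .mode = 0,
            };

            return ioctl(uffd, UFFDIO_POISON, &poison);
    }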

This is used to emulate hardware memory poisoning. Imagine a VM running on a
machine which experiences a real hardware memory error. Later, we live migrate
the VM to another physical machine. Since we want the migration to be
transparent to the guest, we want that same address range to act as if it was
still poisoned, even though it's on a new physical host which ostensibly
doesn't have a memory error in the exact same spot.

QEMU/KVM
========

QEMU/KVM uses the ``userfaultfd`` syscall to implement postcopy live
migration. Postcopy live migration is one form of memory
externalization consisting of a virtual machine running with part or
all of its memory residing on a different node in the cloud. The
``userfaultfd`` abstraction is generic enough that not a single line of
KVM kernel code had to be modified in order to add postcopy live
migration to QEMU.

Guest async page faults, ``FOLL_NOWAIT`` and all other ``GUP*`` features work
just fine in combination with userfaults. Userfaults trigger async
page faults in the guest scheduler so those guest processes that
aren't waiting for userfaults (i.e. network bound) can keep running in
the guest vcpus.

It is generally beneficial to run one pass of precopy live migration
just before starting postcopy live migration, in order to avoid
generating userfaults for readonly guest regions.

The implementation of postcopy live migration currently uses one
single bidirectional socket but in the future two different sockets
will be used (to reduce the latency of the userfaults to the minimum
possible without having to decrease ``/proc/sys/net/ipv4/tcp_wmem``).

The QEMU in the source node writes all pages that it knows are missing
in the destination node into the socket, and the migration thread of
the QEMU running in the destination node runs ``UFFDIO_COPY|ZEROPAGE``
ioctls on the ``userfaultfd`` in order to map the received pages into the
guest (``UFFDIO_ZEROPAGE`` is used if the source page was a zero page).

A different postcopy thread in the destination node listens with
poll() to the ``userfaultfd`` in parallel. When a ``POLLIN`` event is
generated after a userfault triggers, the postcopy thread reads from
the ``userfaultfd`` and receives the fault address (or ``-EAGAIN`` in case the
userfault was already resolved and woken by a ``UFFDIO_COPY|ZEROPAGE`` run
by the parallel QEMU migration thread).

After the QEMU postcopy thread (running in the destination node) gets
the userfault address it writes the information about the missing page
into the socket. The QEMU source node receives the information and
roughly "seeks" to that page address and continues sending all
remaining missing pages from that new page offset. Soon after that
(just the time to flush the tcp_wmem queue through the network) the
migration thread in the QEMU running in the destination node will
receive the page that triggered the userfault and it'll map it as
usual with the ``UFFDIO_COPY|ZEROPAGE`` (without actually knowing if it
was spontaneously sent by the source or if it was an urgent page
requested through a userfault).

By the time the userfaults start, the QEMU in the destination node
doesn't need to keep any per-page state bitmap relative to the live
migration around, and a single per-page bitmap has to be maintained in
the QEMU running in the source node to know which pages are still
missing in the destination node. The bitmap in the source node is
checked to find which missing pages to send in round robin and we seek
over it when receiving incoming userfaults. After sending each page of
course the bitmap is updated accordingly. It's also useful to avoid
sending the same page twice (in case the userfault is read by the
postcopy thread just before ``UFFDIO_COPY|ZEROPAGE`` runs in the migration
thread).

Non-cooperative userfaultfd
===========================

When the ``userfaultfd`` is monitored by an external manager, the manager
must be able to track changes in the process virtual memory
layout. Userfaultfd can notify the manager about such changes using
the same read(2) protocol as for the page fault notifications. The
manager has to explicitly enable these events by setting appropriate
bits in ``uffdio_api.features`` passed to the ``UFFDIO_API`` ioctl:

``UFFD_FEATURE_EVENT_FORK``
	enable ``userfaultfd`` hooks for fork(). When this feature is
	enabled, the ``userfaultfd`` context of the parent process is
	duplicated into the newly created process. The manager
	receives ``UFFD_EVENT_FORK`` with the file descriptor of the new
	``userfaultfd`` context in ``uffd_msg.fork``.

``UFFD_FEATURE_EVENT_REMAP``
	enable notifications about mremap() calls. When the
	non-cooperative process moves a virtual memory area to a
	different location, the manager will receive
	``UFFD_EVENT_REMAP``. The ``uffd_msg.remap`` will contain the old and
	new addresses of the area and its original length.

``UFFD_FEATURE_EVENT_REMOVE``
	enable notifications about madvise(MADV_REMOVE) and
	madvise(MADV_DONTNEED) calls. The event ``UFFD_EVENT_REMOVE`` will
	be generated upon these calls to madvise(). The ``uffd_msg.remove``
	will contain the start and end addresses of the removed area.

``UFFD_FEATURE_EVENT_UNMAP``
	enable notifications about memory unmapping. The manager will
	get ``UFFD_EVENT_UNMAP`` with ``uffd_msg.remove`` containing the start
	and end addresses of the unmapped area.
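
A manager that enables these features typically extends its read() loop
with an event dispatch along the following lines (a sketch; the helper
functions are hypothetical bookkeeping of the manager)::

    /* Sketch: dispatch the non-page-fault events read from the uffd. */
    switch (msg.event) {
    case UFFD_EVENT_FORK:
            /* msg.arg.fork.ufd is the new userfaultfd of the child. */
            track_new_uffd(msg.arg.fork.ufd);
            break;
    case UFFD_EVENT_REMAP:
            move_range(msg.arg.remap.from, msg.arg.remap.to,
                       msg.arg.remap.len);
            break;
    case UFFD_EVENT_REMOVE:
    case UFFD_EVENT_UNMAP:
            /* Same payload; only the expected reaction differs (see below). */
            forget_range(msg.arg.remove.start, msg.arg.remove.end);
            break;
    }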

Although the ``UFFD_FEATURE_EVENT_REMOVE`` and ``UFFD_FEATURE_EVENT_UNMAP``
are pretty similar, they differ quite a bit in the action expected from
the ``userfaultfd`` manager. In the former case, the virtual memory is
removed, but the area is not; the area remains monitored by the
``userfaultfd``, and if a page fault occurs in that area it will be
delivered to the manager. The proper resolution for such a page fault is
to zeromap the faulting address. However, in the latter case, when an
area is unmapped, either explicitly (with the munmap() system call) or
implicitly (e.g. during mremap()), the area is removed and in turn the
``userfaultfd`` context for such an area disappears too, and the manager
will not get further userland page faults from the removed area. Still,
the notification is required in order to prevent the manager from using
``UFFDIO_COPY`` on the unmapped area.

Unlike userland page faults, which have to be synchronous and require
explicit or implicit wakeup, all the events are delivered
asynchronously and the non-cooperative process resumes execution as
soon as the manager executes read(). The ``userfaultfd`` manager should
carefully synchronize calls to ``UFFDIO_COPY`` with the event
processing. To aid the synchronization, the ``UFFDIO_COPY`` ioctl will
return ``-ENOSPC`` when the monitored process exits at the time of
``UFFDIO_COPY``, and ``-ENOENT`` when the non-cooperative process has
changed its virtual memory layout simultaneously with an outstanding
``UFFDIO_COPY`` operation.

The current asynchronous model of the event delivery is optimal for
single-threaded non-cooperative ``userfaultfd`` manager implementations. A
synchronous event delivery model can be added later as a new
``userfaultfd`` feature to facilitate multithreading enhancements of the
non-cooperative manager, for example to allow ``UFFDIO_COPY`` ioctls to
run in parallel to the event reception. Single-threaded
implementations should continue to use the current async event
delivery model instead.
393