Lines Matching +full:memory +full:- +full:mapping
8 Huge pages as described at Documentation/admin-guide/mm/hugetlbpage.rst are
15 huge pages to cover the mapping, the mmap() would fail. This was first
17 were enough free huge pages to cover the mapping. Like most things in the
20 available for page faults in that mapping. The description below attempts to
34 This is a global (per-hstate) count of reserved huge pages. Reserved
37 as (``free_huge_pages - resv_huge_pages``).
50 There is one reserve map for each huge page mapping in the system.
52 the mapping. A region is described as::
61 indices into the mapping. Depending on the type of mapping, a
69 associated with the mapping.
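The reserve map described above is a list of regions, each covering a range of huge page indices. A standalone sketch of that structure and of counting coverage in a range (a plain `next` pointer stands in for the kernel's `struct list_head`, and `region_new()`/`region_covered()` are hypothetical helpers, not kernel functions):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Sketch of a reserve-map region: a [from, to) range of huge page
 * indices within the mapping.
 */
struct file_region {
	struct file_region *next;
	long from;	/* first huge page index covered */
	long to;	/* one past the last index covered */
};

/* Prepend a region to a demo list (hypothetical helper). */
static struct file_region *region_new(struct file_region *head,
				      long from, long to)
{
	struct file_region *r = malloc(sizeof(*r));

	r->from = from;
	r->to = to;
	r->next = head;
	return r;
}

/* Count how many indices in [from, to) are covered by existing regions. */
static long region_covered(const struct file_region *head, long from, long to)
{
	long covered = 0;

	for (; head; head = head->next) {
		long lo = head->from > from ? head->from : from;
		long hi = head->to < to ? head->to : to;

		if (lo < hi)
			covered += hi - lo;
	}
	return covered;
}
```

The kernel keeps these regions sorted and non-overlapping; its region_count() walks the list in roughly the same way as region_covered() above.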
71 Indicates task originally mapping this range (and creating
83 A huge page mapping or segment is either private or shared. If private,
89 - For private mappings, the reservation map hangs off the VMA structure.
90 Specifically, vma->vm_private_data. This reserve map is created at the
91 time the mapping (mmap(MAP_PRIVATE)) is created.
92 - For shared mappings, the reservation map hangs off the inode. Specifically,
93 inode->i_mapping->private_data. Since shared mappings are always backed
101 Reservations are created when a huge page backed shared memory segment is
102 created (shmget(SHM_HUGETLB)) or a mapping is created via mmap(MAP_HUGETLB).
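The range handed to the reservation code at creation time is expressed as huge page indices. The conversion from a byte offset and length can be sketched outside the kernel; the `HPAGE_SHIFT` of 21 (2 MB huge pages on x86_64) is an assumption, and `hpage_range()` is a hypothetical helper, since real code derives the shift from the hstate:

```c
#include <assert.h>

#define HPAGE_SHIFT 21UL			/* assume 2 MB huge pages */
#define HPAGE_SIZE  (1UL << HPAGE_SHIFT)

/* Convert a byte offset/length into the [from, to) huge page index range. */
static void hpage_range(unsigned long offset, unsigned long len,
			unsigned long *from, unsigned long *to)
{
	*from = offset >> HPAGE_SHIFT;
	/* round the end of the range up to a huge page boundary */
	*to = (offset + len + HPAGE_SIZE - 1) >> HPAGE_SHIFT;
}
```

For shmget() the offset is zero, so only the segment length matters; for mmap() a nonzero (huge page aligned) offset shifts both indices.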
115 The arguments 'from' and 'to' are huge page indices into the mapping or
117 the length of the segment/mapping. For mmap(), the offset argument could
124 - For shared mappings, an entry in the reservation map indicates a reservation
127 - For private mappings, the lack of an entry in the reservation map indicates
138 are needed for the current mapping/segment. For private mappings, this is
139 always the value (to - from). However, for shared mappings it is possible that
140 some reservations may already exist within the range (to - from). See the
144 The mapping may be associated with a subpool. If so, the subpool is consulted
145 to ensure there is sufficient space for the mapping. It is possible that the
146 subpool has set aside reservations that can be used for the mapping. See the
158 if (resv_needed <= (free_huge_pages - resv_huge_pages))
165 was adjusted, then the reservation map associated with the mapping is
166 modified to reflect the reservations. In the case of a shared mapping, a
167 file_region will exist that includes the range 'from' - 'to'. For private
172 reservation map associated with the mapping will be modified as required to
173 ensure reservations exist for the range 'from' - 'to'.
181 are allocated and instantiated in the corresponding mapping. The allocation
196 reservation exists for the address within the mapping (vma). See the section
202 mapping the subpool is consulted to determine if it contains reservations.
209 - avoid_reserve, this is the same value/argument passed to
211 - chg, even though this argument is of type long, only the values 0 or 1 are
213 reservation exists (see the section "Reservations and Memory Policy" for
217 The free lists associated with the memory policy of the VMA are searched for
226 resv_huge_pages--; /* Decrement the global reservation count */
228 Note, if no huge page can be found that satisfies the VMA's memory policy
233 resv_huge_pages--.
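A minimal sketch of the counter updates at fault time, with plain globals standing in for the per-hstate counters (`dequeue_huge_page()` here is a simplified stand-in, not the kernel routine):

```c
#include <assert.h>
#include <stdbool.h>

/* Per-hstate counters, as plain globals for this sketch. */
static long free_huge_pages = 2;
static long resv_huge_pages = 1;

/*
 * Consuming a page at fault time: taking a free page always decrements
 * free_huge_pages; if the fault consumes an existing reservation, the
 * global reserve count drops with it. Without a reservation, pages set
 * aside as reserves must not be handed out.
 */
static bool dequeue_huge_page(bool have_reservation)
{
	if (free_huge_pages == 0)
		return false;
	if (!have_reservation && free_huge_pages <= resv_huge_pages)
		return false;	/* only reserved pages remain */
	free_huge_pages--;
	if (have_reservation)
		resv_huge_pages--;
	return true;
}
```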
235 After obtaining a new hugetlb folio, (folio)->_hugetlb_subpool is set to the
244 was no reservation in a shared mapping or this was a private mapping a new
251 mapping. In such cases, the reservation count and subpool free page count
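The outstanding-reservation arithmetic at unmap time reduces to a one-liner (a hypothetical helper, with `consumed` standing in for what region_count() reports as already faulted in):

```c
#include <assert.h>

/*
 * Reservations taken for [start, end) but never consumed by a fault
 * are still outstanding and must be returned when the mapping goes
 * away: outstanding = (end - start) - consumed.
 */
static long outstanding_reservations(long start, long end, long consumed)
{
	return (end - start) - consumed;
}
```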
264 of the allocating task. Before this, pages in a shared mapping are added
266 reverse mapping. In both cases, the PagePrivate flag is cleared. Therefore,
281 The page->private field points to any subpool associated with the page.
347 When the private mapping was originally created, the owner of the mapping
349 map of the owner. Since the owner created the mapping, the owner owns all
350 the reservations associated with the mapping. Therefore, when a write fault
352 and non-owner of the reservation.
359 non-owning task. In this way, the only reference is from the owning task.
361 of the non-owning task. The non-owning task may receive a SIGBUS if it later
362 faults on a non-present page. But, the original owner of the
363 mapping/reservation will behave as expected.
399 range. region_chg() is responsible for pre-allocating any data structures
418 - When a file in the hugetlbfs filesystem is being removed, the inode will
422 - When a hugetlbfs file is being truncated. In this case, all allocated pages
426 - When a hole is being punched in a hugetlbfs file. In this case, huge pages
436 will return -ENOMEM. The problem here is that the reservation map will
443 region_count() is called when unmapping a private huge page mapping. In
447 outstanding (outstanding = (end - start) - region_count(resv, start, end)).
448 Since the mapping is going away, the subpool and global reservation counts
459 they pass in the associated VMA. From the VMA, the type of mapping (private
532 be higher than it should be and prevent allocation of a pre-allocated page.
553 Reservations and Memory Policy
555 Per-node huge page lists existed in struct hstate when git was first used
557 When reservations were added, no attempt was made to take memory policy
558 into account. While cpusets are not exactly the same as memory policy, this
560 and cpusets/memory policy::
571 * task or memory node can be dynamically moved between cpusets.
573 * The change of semantics for shared hugetlb mapping with cpuset is
582 of cpusets or memory policy there is no guarantee that huge pages will be
594 --