===================
NUMA Memory Policy
===================

What is NUMA Memory Policy?
===========================

In the Linux kernel, "memory policy" determines from which node the kernel will
allocate memory in a NUMA system or in an emulated NUMA system.  Linux has
supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
The current memory policy support was added to Linux 2.6 around May 2004.  This
document attempts to describe the concepts and APIs of the 2.6 memory policy
support.

Memory policies should not be confused with cpusets
(``Documentation/admin-guide/cgroup-v1/cpusets.rst``),
which is an administrative mechanism for restricting the nodes from which
memory may be allocated by a set of processes.  Memory policies are a
programming interface that a NUMA-aware application can take advantage of.  When
both cpusets and policies are applied to a task, the restrictions of the cpuset
take priority.  See :ref:`Memory Policies and cpusets <mem_pol_and_cpusets>`
below for more details.

Memory Policy Concepts
======================

Scope of Memory Policies
------------------------

The Linux kernel supports _scopes_ of memory policy, described here from
most general to most specific:

System Default Policy
    this policy is hard-coded into the kernel.  It is the policy that
    governs all page allocations that aren't controlled by one of the
    more specific policy scopes discussed below.  When the system is
    "up and running", the system default policy uses "local allocation",
    i.e., allocation from the node containing the CPU that is performing
    the allocation.  However, during boot up, the system default policy
    is set to interleave
    allocations across all nodes with "sufficient" memory, so as
    not to overload the initial boot node with boot-time
    allocations.

Task/Process Policy
    this is an optional, per-task policy.  When defined for a
    specific task, this policy controls all page allocations made by or
    on behalf of that task that aren't controlled by a more specific
    scope.  The task policy is inherited across fork() and exec*(), so a
    parent task can establish the task policy for a child task exec()'d
    from an
    executable image that has no awareness of memory policy.  See the
    :ref:`Memory Policy APIs <memory_policy_apis>` section,
    below, for an overview of the system call used to set or change a
    task's policy.

    In a multi-threaded task, task policies apply only to the thread
    [Linux kernel task] that installs the policy and any threads
    subsequently created by that thread.  Sibling threads that already
    exist when the policy is installed retain their current policy.

VMA Policy
    A "VMA" or "Virtual Memory Area" refers to a range of a task's
    virtual address space.  A task may define a specific policy for a
    range of its virtual address space.  See the
    :ref:`Memory Policy APIs <memory_policy_apis>` section,
    below, for an overview of the mbind() system call used to set a
    VMA policy.

    VMA policies have a few complicating details:

    * VMA policy applies only to anonymous pages.  For a file mapped
      with MAP_PRIVATE, the policy is applied only when an anonymous
      page is allocated on an attempt to write to the
      mapping-- i.e., at Copy-On-Write.

    * VMA policies are shared between all tasks that share a
      virtual address space--a.k.a. threads--independent of when
      the policy is installed, and they are inherited across fork().
      However, because they refer to a specific region of a task's
      address space, VMA policies
      are NOT inheritable across exec().  Thus, only NUMA-aware
      applications may use VMA policies.

    * A task may install a new VMA policy on a sub-range of a
      previously mmap()ed region.  When this happens, Linux splits
      the existing virtual memory area into 2 or 3 VMAs, each with
      its own policy, as sketched below.

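    The following sketch (an illustrative example, not part of the
    kernel sources; it assumes nodes 0 and 1 exist and that libnuma's
    <numaif.h> wrapper for mbind() is available, linked with -lnuma)
    installs an interleave policy on the middle pages of an anonymous
    mapping, causing the kernel to split the original VMA around that
    sub-range::

        #include <numaif.h>        /* mbind(), MPOL_INTERLEAVE */
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
                long page = sysconf(_SC_PAGESIZE);
                size_t len = 16 * page;
                char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (buf == MAP_FAILED)
                        return 1;

                /* Interleave the middle 8 pages across nodes 0 and 1.
                 * The surrounding pages keep the task/default policy,
                 * so the original VMA is split into three VMAs. */
                unsigned long nodemask = (1UL << 0) | (1UL << 1);
                if (mbind(buf + 4 * page, 8 * page, MPOL_INTERLEAVE,
                          &nodemask, sizeof(nodemask) * 8, 0))
                        return 1;

                return 0;
        }
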
Shared Policy
    Conceptually, shared policies apply to "memory objects" mapped
    shared into one or more tasks' distinct address spaces.  An
    application installs shared policies the same way as VMA
    policies--using the mbind() system call specifying a range of
    virtual addresses that map the shared object.  However, unlike
    VMA policies, which can be considered to be an attribute of a
    range of a task's address space, shared policies apply
    directly to the shared object.  Thus, all tasks that attach to the
    object share the policy, and all pages allocated for the
    shared object, by any task, will obey the shared policy.

    As of 2.6.22, only shared memory segments, created by shmget() or
    mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy.  When shared
    policy support was added to Linux, the associated data structures
    were added to hugetlbfs shmem segments.  At the time, hugetlbfs did not
    support allocation at fault time--a.k.a. lazy allocation--so hugetlbfs
    shmem segments were never "hooked up" to the shared policy support.
    Although hugetlbfs segments now support lazy allocation, their support
    for shared policy has not been completed.

    In contrast, page cache pages for regular files mmap()ed with
    MAP_SHARED ignore any VMA policy installed on the virtual
    address range backed by the shared file mapping.  Rather,
    shared page cache pages, including pages backing private
    mappings that have not yet been written by the task, follow task
    policy, if any, else the System Default Policy.

    The shared policy infrastructure supports different policies on subset
    ranges of the shared object.  However, Linux still splits the VMA of
    the task that installs the policy for each range of distinct policy.
    Thus, different tasks that attach to a shared memory segment can have
    different VMA configurations mapping that one shared object.  This
    can be seen by examining the /proc/<pid>/numa_maps of tasks sharing
    a shared memory region, when one task has installed shared policy on
    one or more ranges of the region.

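    As an illustration (a minimal sketch, not taken from the kernel
    sources; it assumes nodes 0 and 1 exist and libnuma's <numaif.h>
    wrappers, linked with -lnuma), one task can create a SysV shared
    memory segment and install an interleave policy on it; because the
    segment is a shmem object, the policy attaches to the object, and
    other tasks that later attach the segment will see their page
    allocations for it obey the same policy::

        #include <numaif.h>        /* mbind(), MPOL_INTERLEAVE */
        #include <sys/ipc.h>
        #include <sys/shm.h>

        #define SEG_SIZE (64UL * 1024 * 1024)

        int main(void)
        {
                /* create and attach a shared memory segment */
                int shmid = shmget(IPC_PRIVATE, SEG_SIZE, IPC_CREAT | 0600);
                void *seg = shmat(shmid, NULL, 0);
                if (seg == (void *)-1)
                        return 1;

                /* interleave the segment's pages across nodes 0 and 1;
                 * the policy is recorded on the shared object itself,
                 * not merely on this task's VMA */
                unsigned long nodemask = (1UL << 0) | (1UL << 1);
                if (mbind(seg, SEG_SIZE, MPOL_INTERLEAVE,
                          &nodemask, sizeof(nodemask) * 8, 0))
                        return 1;

                /* pages faulted in here, or by any other attaching
                 * task, follow the shared interleave policy */
                return 0;
        }
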
Components of Memory Policies
-----------------------------

A NUMA memory policy consists of a "mode", optional mode flags, and an
optional set of nodes.  The mode determines the behavior of the policy,
the optional mode flags determine the behavior of the mode, and the
optional set of nodes can be viewed as the arguments to the policy
behavior.

Internally, memory policies are implemented by a reference counted
structure, struct mempolicy.  Details of this structure will be
discussed in context, below, as required to explain the behavior.

NUMA memory policy supports the following behavioral modes:

Default Mode--MPOL_DEFAULT
    This mode is only used in the memory policy APIs.  Internally,
    MPOL_DEFAULT is converted to the NULL memory policy in all
    policy scopes.  Any existing non-default policy will simply be
    removed when MPOL_DEFAULT is specified.  As a result, MPOL_DEFAULT
    means "fall back to the next most specific policy scope."

    When specified in one of the memory policy APIs, the Default mode
    does not use the optional set of nodes.  It is an error for the set
    of nodes specified for this policy to be non-empty.

MPOL_BIND
    This mode specifies that memory must come from the set of
    nodes specified by the policy.  Memory will be allocated from
    the node in the set with sufficient free memory that is
    closest to the node where the allocation takes place.

MPOL_PREFERRED
    This mode specifies that the allocation should be attempted from
    the single node specified in the policy.  If that allocation
    fails, the kernel will search other nodes in order of increasing
    distance from the preferred node.

    Internally, the Preferred policy uses a single node--the
    preferred_node member of struct mempolicy.

MPOL_INTERLEAVE
    This mode specifies that page allocations be interleaved, on a
    page granularity, across the nodes specified in the policy.
    For allocation of anonymous pages and shared memory pages,
    Interleave mode indexes the set of nodes specified by the policy
    using the page offset of the faulting address into the segment
    [VMA] containing the address, modulo the number of nodes in the
    policy, to select the node for the allocation.

MPOL_PREFERRED_MANY
    This mode specifies that the allocation should preferably be
    satisfied from the nodemask specified in the policy.  If there is
    a memory pressure on all nodes in the nodemask, the allocation
    can fall back to all existing NUMA nodes.  This is effectively
    MPOL_PREFERRED allowed for a mask rather than a single node.

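As a brief illustration (a sketch only; it assumes node 1 exists and
that libnuma's <numaif.h> syscall wrappers are available, linked with
-lnuma), a task can select one of these modes for its task/process
policy with set_mempolicy()::

    #include <numaif.h>        /* set_mempolicy(), MPOL_PREFERRED */
    #include <stdlib.h>

    int main(void)
    {
            /* prefer node 1; fall back to other nodes under pressure */
            unsigned long nodemask = 1UL << 1;
            if (set_mempolicy(MPOL_PREFERRED, &nodemask,
                              sizeof(nodemask) * 8))
                    return 1;

            /* anonymous pages faulted in from here on are preferably
             * allocated on node 1 */
            char *p = malloc(8 * 1024 * 1024);
            return p ? 0 : 1;
    }
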
NUMA memory policy supports the following optional mode flags:

MPOL_F_STATIC_NODES
    This flag specifies that the nodemask passed by the user should not
    be remapped if the task or VMA's set of allowed
    nodes changes after the memory policy has been defined.

    With this flag, if the user-specified nodes overlap with the
    nodes allowed by the task's cpuset, then the memory policy is
    applied to their intersection.  If the two sets of nodes do not
    overlap, the Default policy is used.

    For example, consider a task that is attached to a cpuset with
    mems 1-3 that sets an Interleave policy over the same set.  If
    the cpuset's mems change to 3-5, the Interleave will now occur
    over nodes 3, 4 and 5 without this flag; with this flag, since only
    node 3 remains in the user's nodemask, the "interleave" occurs over
    that single node.

MPOL_F_RELATIVE_NODES
    This flag specifies that the nodemask passed by the user will be
    mapped relative to the task or VMA's
    set of allowed nodes.  The kernel stores the user-passed nodemask,
    and if the set of allowed nodes changes, the stored nodemask is
    remapped relative to the new set of allowed nodes.

    Without this flag (and without MPOL_F_STATIC_NODES), the remap done
    on each rebind may not preserve the relative placement the user
    intended across successive rebinds: a nodemask of
    1,3,5 may be remapped to 7-9 and then to 1-3 if the set of
    allowed nodes changes accordingly.

    If the user's nodemask includes nodes that are outside the range
    of the new set of allowed nodes (for example, node 5 is set in
    the user's nodemask when the set of allowed nodes is only 0-3),
    then the remap wraps around to the beginning of the nodemask and,
    if not already set, sets the node in the nodemask.

    For example, consider a task that is attached to a cpuset with
    mems 2-5 that sets an Interleave policy over the same set with
    MPOL_F_RELATIVE_NODES.  If the cpuset's mems change to 3-7, the
    interleave now occurs over nodes 3,5-7.  If the cpuset's mems
    then change to 0,2-3,5, then the interleave occurs over nodes
    0,2-3,5.

    Applications preparing
    nodemasks to specify memory policies using this flag should
    disregard their current, actual cpuset imposed memory placement
    and prepare the nodemask as if they were always located on
    memory nodes 0 to N-1, where N is the number of memory nodes the
    policy is intended to manage.  Let the kernel then remap to the
    set of memory nodes allowed by the task's cpuset, as that may
    change over time.

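For example, the following sketch (illustrative only; the flag
constants come from the kernel's uapi header <linux/mempolicy.h>, and
the example assumes the cpuset allows at least three nodes) requests an
interleave over "the first three allowed nodes", wherever the cpuset
happens to place the task now or after future migrations::

    #define _GNU_SOURCE
    #include <linux/mempolicy.h>   /* MPOL_INTERLEAVE, MPOL_F_RELATIVE_NODES */
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            /* nodes 0-2 *relative to* the set of allowed nodes */
            unsigned long relative_mask = (1UL << 0) | (1UL << 1) | (1UL << 2);

            /* mode flags are OR'd into the mode argument */
            return syscall(SYS_set_mempolicy,
                           MPOL_INTERLEAVE | MPOL_F_RELATIVE_NODES,
                           &relative_mask, sizeof(relative_mask) * 8);
    }
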
Memory Policy Reference Counting
================================

When a new memory policy is allocated, its reference count is initialized
to '1', representing the reference held by the task that is installing the
new policy.  When a pointer to a memory policy structure is stored in another
structure, another reference is added, as the task's reference will be dropped
upon completion of the policy installation.

During run-time "usage" of the policy, we attempt to minimize atomic operations
on the reference count, as this can lead to cache lines bouncing between cpus
and NUMA nodes.  "Usage" here means either querying of the policy, via
get_mempolicy() or the /proc/<pid>/numa_maps interface, or examination of the
policy in the page allocation path.  An extra reference can be avoided in most
of these usage paths:

1) The system default policy is never changed nor freed once the system is up
   and running, so it never needs to be reference counted for usage.

2) Queries of a task or VMA policy always hold the target task's mmap_lock for
   read, while set_mempolicy() and mbind() acquire the mmap_lock for write when
   installing or replacing a policy, so a policy cannot be freed while it is
   being queried.

3) Page allocation usage of a task or VMA policy occurs in the fault path,
   where the mmap_lock is likewise held for read.

4) Shared policies require special consideration.  One task can replace a
   shared memory policy while another task, with a distinct mmap_lock, is
   querying or allocating a page based on the policy.  To resolve this
   potential race, the shared policy infrastructure adds an extra reference
   to the shared policy during lookup while holding a spin lock on the shared
   policy management structure.  This requires that we drop the extra
   reference when we are finished "using" the policy, and that we drop the
   extra reference on shared policies in the same query/allocation paths
   used for non-shared policies.  For this reason, shared policies are marked
   as such, and the extra reference is dropped "conditionally"--i.e., only
   for shared policies.

   Because of this extra reference counting, and because we must look up
   shared policies in a tree structure under spinlock, shared policies are
   more expensive to use in the page allocation path.  This is especially
   true for shared policies on shared memory regions shared by tasks running
   on different NUMA nodes.  This extra overhead can be avoided by always
   falling back to task or system default policy for shared memory regions,
   or by prefaulting the entire shared memory region into memory and locking
   it down.  However, this might not be appropriate for all applications.

.. _memory_policy_apis:

Memory Policy APIs
==================

Linux supports 4 system calls for controlling memory policy.  These APIs
always affect only the calling task, the calling task's address space, or
some shared object mapped into the calling task's address space.

Set [Task] Memory Policy::

    long set_mempolicy(int mode, const unsigned long *nmask,
                       unsigned long maxnode);

Sets the calling task's "task/process memory policy" to the mode
specified by the mode argument and the set of nodes defined by nmask.
nmask points to a bit mask of node ids containing at least maxnode ids.
Optional mode flags may be passed by combining the mode argument with the
flag (for example: MPOL_INTERLEAVE | MPOL_F_STATIC_NODES).

Get [Task] Memory Policy or Related Information::

    long get_mempolicy(int *mode,
                       const unsigned long *nmask, unsigned long maxnode,
                       void *addr, int flags);

Queries the "task/process memory policy" of the calling task, or the
policy governing a specific address in the calling task's address space,
depending on the flags argument.

Install VMA/Shared Policy for a Range of Task's Address Space::

    long mbind(void *start, unsigned long len, int mode,
               const unsigned long *nmask, unsigned long maxnode,
               unsigned flags);

mbind() installs the policy specified by (mode, nmask, maxnode) as a VMA
policy for the range of the calling task's address space specified by the
start and len arguments.  Additional actions may be requested via the
flags argument.

Set home node for a Range of Task's Address Space::

    long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
                                     unsigned long home_node,
                                     unsigned long flags);

sys_set_mempolicy_home_node sets the home node for a VMA policy present in
the task's address space.  The home node is the NUMA node closest to which
page allocations for the range will be attempted.  Specifying the home node
overrides
the default allocation policy to allocate memory close to the local node for an
explicit home node.

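To tie these together, here is a small sketch (illustrative only, with
error handling omitted; it uses libnuma's <numaif.h> wrappers, linked
with -lnuma) that installs a Bind task policy and then reads it back
with get_mempolicy()::

    #include <numaif.h>        /* set_mempolicy(), get_mempolicy(), MPOL_* */
    #include <stdio.h>

    int main(void)
    {
            unsigned long nodemask = 1UL << 0;   /* node 0 only */

            /* install a Bind task policy over node 0 */
            set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8);

            /* query the task policy back: with flags == 0, the mode and
             * nodemask of the calling task's policy are returned */
            int mode;
            unsigned long mask = 0;
            get_mempolicy(&mode, &mask, sizeof(mask) * 8, NULL, 0);

            printf("mode=%d (MPOL_BIND=%d) nodemask=0x%lx\n",
                   mode, MPOL_BIND, mask);
            return 0;
    }
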
Memory Policy Command Line Interface
====================================

Although not strictly part of the Linux implementation of memory policy,
a command line tool, numactl(8), exists that allows one to:

+ set the task policy for a specified program via set_mempolicy(2),
  fork(2) and exec(2)

+ set the shared policy for a shared memory segment via mbind(2)

The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers.  Some distributions
package the headers and compile-time libraries in a separate development
package.

.. _mem_pol_and_cpusets:

Memory Policies and cpusets
===========================

Memory policies work within cpusets as described above.  For memory policies
that require a node or set of nodes, the nodes are restricted to the set of
nodes whose memories are allowed by the cpuset constraints.  If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset and
MPOL_F_RELATIVE_NODES is not used, the intersection of the set of nodes
specified for the policy and the set of nodes with memory is used.  If the
result is the empty set, the policy is considered invalid and cannot be
installed.

The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory segments
created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED flags, and
any of the tasks install shared policy on the region.  In that case, only nodes
whose memories are allowed in both cpusets may be used in the policies.  Obtaining
this information requires "stepping outside" the memory policy APIs to use the
cpuset information, and requires that one know in what cpusets other tasks might
be attaching to the shared region.  Furthermore, if the cpusets' allowed
memory sets are disjoint, "local" allocation is the only valid policy.

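For instance (a sketch only; it covers just the calling task, assumes at
most 64 nodes, and says nothing about the cpusets of other tasks that may
attach the region), a task can ask which nodes its own cpuset currently
allows it to use in a policy by calling get_mempolicy() with the
MPOL_F_MEMS_ALLOWED flag::

    #include <numaif.h>        /* get_mempolicy(), MPOL_F_MEMS_ALLOWED */
    #include <stdio.h>

    int main(void)
    {
            unsigned long allowed = 0;
            int mode;          /* ignored for MPOL_F_MEMS_ALLOWED */

            if (get_mempolicy(&mode, &allowed, sizeof(allowed) * 8,
                              NULL, MPOL_F_MEMS_ALLOWED))
                    return 1;

            /* bits set in 'allowed' are the nodes this task's cpuset
             * currently permits for mbind()/set_mempolicy() */
            printf("mems allowed mask: 0x%lx\n", allowed);
            return 0;
    }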