===================
NUMA Memory Policy
===================
What is NUMA Memory Policy?
===========================
In the Linux kernel, "memory policy" determines from which node the kernel will
allocate memory in a NUMA system or in an emulated NUMA system. Linux has
supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
The current memory policy support was added to Linux 2.6 around May 2004. This
document attempts to describe the concepts and APIs of the 2.6 memory policy
support.
Memory policies should not be confused with cpusets, which are an
administrative mechanism for restricting the nodes from which memory may be
allocated by a set of processes. Memory policies are a programming interface
that a NUMA-aware application can take advantage of. When both cpusets and
policies are applied to a task, the restrictions of the cpuset take priority.
See :ref:`Memory Policies and cpusets <mem_pol_and_cpusets>` below for more
details.
Memory Policy Concepts
======================
Scope of Memory Policies
------------------------
The Linux kernel supports _scopes_ of memory policy, described here from
most general to most specific:

System Default Policy
    This policy is "hard coded" into the kernel and governs all page
    allocations not covered by one of the more specific scopes below. Once
    the system is up and running, it uses "local allocation"; during boot-up,
    however, the default policy interleaves allocations across all nodes with
    "sufficient" memory, so as not to overload the initial boot node with
    boot-time allocations.

Task/Process Policy
    This is an optional, per-task policy that is inherited across fork() and
    exec*(). It allows a parent task to establish the task policy for a child
    task exec()'d from an executable image that has no awareness of memory
    policy. See the :ref:`Memory Policy APIs <memory_policy_apis>` section,
    below, for an overview of the system call used to set or change a task's
    policy.
75 A "VMA" or "Virtual Memory Area" refers to a range of a task's
78 :ref:`Memory Policy APIs <memory_policy_apis>` section,
111 the existing virtual memory area into 2 or 3 VMAs, each with
Shared Policy
    Conceptually, shared policies apply to "memory objects" mapped shared
    into one or more tasks' distinct address spaces. As of 2.6.22, only
    shared memory segments, created by shmget() or
    mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy. Because the kernel
    splits the installing task's VMAs for each range of distinct policy,
    different tasks that attach to a shared memory segment can have different
    VMA configurations mapping that segment. This can be seen in the
    /proc/<pid>/numa_maps of tasks sharing a shared memory region, when one
    task has installed shared policy on one or more ranges of the region.
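As a rough illustration of the VMA-policy scope above (not an example taken
from this document), the sketch below uses mbind(), declared in <numaif.h>
from the numactl/libnuma package, to interleave a sub-range of an anonymous
mapping; the node numbers, sizes, and error handling are illustrative
assumptions::

    /*
     * Sketch only: install an interleave VMA policy on the middle third of
     * an anonymous mapping.  Link with -lnuma for the syscall wrappers.
     */
    #include <numaif.h>
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 3 * 2 * 1024 * 1024;               /* three 2 MiB chunks */
        unsigned long nodes = (1UL << 0) | (1UL << 1);  /* example: nodes 0,1 */
        char *p;

        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /*
         * Installing a policy on a sub-range splits the original VMA, as
         * described above; pages faulted in afterwards follow the policy.
         */
        if (mbind(p + len / 3, len / 3, MPOL_INTERLEAVE,
                  &nodes, sizeof(nodes) * 8, 0) != 0) {
            perror("mbind");
            return 1;
        }

        p[len / 3] = 1;         /* first touch allocates per the VMA policy */
        return 0;
    }

The same mbind() call installs a shared policy when applied to a mapping of a
shmget() or MAP_ANONYMOUS|MAP_SHARED segment, as described under Shared
Policy above.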
Components of Memory Policies
-----------------------------
A NUMA memory policy consists of a "mode", optional mode flags, and an
optional set of nodes. The mode determines the behavior of the policy, the
mode flags modify that behavior, and the set of nodes can be viewed as the
arguments to the policy behavior.

Internally, memory policies are implemented by a reference counted structure,
struct mempolicy, discussed further below.
NUMA memory policy supports the following behavioral modes (a short usage
sketch follows this list):

Default Mode--MPOL_DEFAULT
    This mode is only used in the memory policy APIs. Internally,
    MPOL_DEFAULT is converted to the NULL memory policy in all policy scopes,
    so it means "fall back to the next most specific policy scope." When
    specified in one of the memory policy APIs, the Default mode does not use
    the optional set of nodes.

MPOL_BIND
    This mode specifies that memory must come from the set of nodes specified
    by the policy. Memory will be allocated from the node in the set with
    sufficient free memory that is closest to the node where the allocation
    takes place.

MPOL_PREFERRED
    This mode specifies a single node from which to attempt allocation first,
    falling back to other nodes if the preferred node is low on free memory.

MPOL_INTERLEAVE
    This mode specifies that page allocations be interleaved, on a page
    granularity, across the nodes specified in the policy. For allocation of
    anonymous pages and shared memory pages, the node is chosen from the
    policy's node set using the page offset of the faulting address into the
    segment [VMA], modulo the number of nodes in the policy.

MPOL_PREFERRED_MANY
    This mode specifies that the allocation should preferably be satisfied
    from the nodemask specified in the policy. If there is a memory pressure
    on all nodes in the nodemask, the allocation falls back to the remaining
    nodes.
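As a hedged sketch of how a mode is selected at run time (the node numbers,
allocation size, and error handling are invented for the example; the
set_mempolicy() wrapper comes from <numaif.h> in the numactl/libnuma package),
a task might interleave one allocation-heavy phase and then drop back to
MPOL_DEFAULT, i.e. to the next most specific policy scope::

    /* Sketch: interleave this task's allocations over nodes 0 and 1 for one
     * phase, then revert to MPOL_DEFAULT.  Link with -lnuma. */
    #include <numaif.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long nodes = (1UL << 0) | (1UL << 1);  /* example nodes */
        char *buf;

        if (set_mempolicy(MPOL_INTERLEAVE, &nodes, sizeof(nodes) * 8))
            perror("set_mempolicy(MPOL_INTERLEAVE)");

        /* Pages first touched here are spread page-by-page over the nodes. */
        buf = malloc(64UL << 20);
        if (buf)
            memset(buf, 0, 64UL << 20);

        /* MPOL_DEFAULT removes the task policy: fall back to the system
         * default policy. */
        if (set_mempolicy(MPOL_DEFAULT, NULL, 0))
            perror("set_mempolicy(MPOL_DEFAULT)");

        free(buf);
        return 0;
    }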
NUMA memory policy supports the following optional mode flags:

MPOL_F_STATIC_NODES
    This flag specifies that the user-supplied nodemask should not be
    remapped if the task's or VMA's set of allowed nodes changes after the
    memory policy has been defined. With this flag, if the user-specified
    nodes overlap with the nodes allowed by the task's cpuset, then the
    memory policy is applied to their intersection; if the two sets do not
    overlap, the Default policy is used instead.

MPOL_F_RELATIVE_NODES
    This flag specifies that the user-supplied nodemask is mapped relative to
    the set of nodes allowed by the task's or VMA's cpuset. Applications that
    use nodemasks to specify memory policies using this flag should disregard
    their current, actual cpuset-imposed memory placement and prepare the
    nodemask as if the task always ran on memory nodes 0 to N-1, where N is
    the number of memory nodes the policy is intended to manage. The kernel
    then remaps the mask onto the set of memory nodes allowed by the task's
    cpuset, as that may change over time (see the sketch after this list).
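As a minimal sketch of the relative-nodes convention just described (the mode
and node count are arbitrary; MPOL_F_RELATIVE_NODES is assumed to be exposed
by the system's <numaif.h> or kernel uapi headers)::

    /* Sketch: interleave over the task's first two allowed nodes, whichever
     * nodes those currently are.  Link with -lnuma. */
    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
        /* "Relative" nodes 0 and 1: the 1st and 2nd allowed cpuset nodes. */
        unsigned long relmask = (1UL << 0) | (1UL << 1);

        if (set_mempolicy(MPOL_INTERLEAVE | MPOL_F_RELATIVE_NODES,
                          &relmask, sizeof(relmask) * 8)) {
            perror("set_mempolicy");
            return 1;
        }
        return 0;
    }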
Memory Policy Reference Counting
================================
When a new memory policy is allocated, its reference count is initialized to
'1', representing the reference held by the task that is installing the new
policy. When a pointer to a memory policy structure is stored in another
structure, another reference is added, as the task's reference will be
dropped when installation of the policy completes.
Shared policies need additional references because one task may replace a
shared memory policy while another task, with a distinct mmap_lock, is
allocating pages governed by that policy. Note that this may not hold true
for shared policies on shared memory regions shared by tasks running in
different cpusets. This situation can be avoided by always falling back to
task or system default policy for shared memory regions, or by prefaulting
the entire shared memory region into memory and locking it down.
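As an illustration only (not the kernel's implementation; the structure and
helper names below are invented for the sketch), the get/put pattern
described above looks roughly like this::

    #include <stdatomic.h>
    #include <stdlib.h>

    struct example_policy {
        atomic_int refcnt;
        int mode;
    };

    static struct example_policy *policy_alloc(int mode)
    {
        struct example_policy *p = malloc(sizeof(*p));

        if (p) {
            atomic_init(&p->refcnt, 1);     /* installer's reference */
            p->mode = mode;
        }
        return p;
    }

    static void policy_get(struct example_policy *p)
    {
        atomic_fetch_add(&p->refcnt, 1);    /* e.g. when stored elsewhere */
    }

    static void policy_put(struct example_policy *p)
    {
        if (atomic_fetch_sub(&p->refcnt, 1) == 1)
            free(p);                        /* last reference dropped */
    }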
.. _memory_policy_apis:

Memory Policy APIs
==================
Linux supports 4 system calls for controlling memory policy. These APIs
always affect only the calling task, the calling task's address space, or
some shared object mapped into the calling task's address space.
Set [Task] Memory Policy::

    long set_mempolicy(int mode, const unsigned long *nmask,
                       unsigned long maxnode);

Sets the calling task's "task/process memory policy" to the mode specified by
the mode argument, optionally qualified by mode flags and by the set of nodes
in nmask.
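As a hedged sketch (the bound node, target program, and error handling are
illustrative; the wrapper is declared in <numaif.h> from numactl/libnuma), a
launcher can install a task policy and then exec() a policy-unaware program,
which inherits the policy as described in the Task/Process Policy scope
above::

    /* Sketch: bind the task policy to node 0, then exec the given program.
     * Link with -lnuma. */
    #include <numaif.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        unsigned long nodemask = 1UL << 0;      /* example: node 0 only */

        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }

        /* The task policy survives the subsequent exec*(). */
        if (set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8)) {
            perror("set_mempolicy");
            return 1;
        }

        execvp(argv[1], &argv[1]);      /* policy-unaware program inherits it */
        perror("execvp");
        return 1;
    }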
Get [Task] Memory Policy or Related Information::

    long get_mempolicy(int *mode, unsigned long *nmask,
                       unsigned long maxnode, void *addr,
                       unsigned long flags);

Queries the "task/process memory policy" of the calling task, or the policy
or location of a specified virtual address, depending on the 'flags'
argument.
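For example, a hedged sketch of the "location of a specified virtual address"
usage, with MPOL_F_NODE | MPOL_F_ADDR (mapping size and error handling are
illustrative)::

    /* Sketch: report which node backs a freshly touched page.
     * Link with -lnuma. */
    #include <numaif.h>
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        int node = -1;
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        p[0] = 1;       /* fault the page in so it has a home node */

        if (get_mempolicy(&node, NULL, 0, p, MPOL_F_NODE | MPOL_F_ADDR)) {
            perror("get_mempolicy");
            return 1;
        }
        printf("page at %p is on node %d\n", (void *)p, node);
        return 0;
    }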
Set home node for a Range of Task's Address Space::

    long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
                                     unsigned long home_node,
                                     unsigned long flags);

Sets the home node for the VMA policies covering the given address range. The
home node is the node from which the kernel attempts allocation first. This
overrides the default allocation policy to allocate memory close to the local
node for an explicit binding.
Memory Policy Command Line Interface
====================================
Although not strictly part of the Linux implementation of memory policy, a
command line tool, numactl(8), exists that allows one to:

+ set the task policy for a specified program via set_mempolicy(2), fork(2)
  and exec(2)

+ set the shared policy for a shared memory segment via mbind(2)

The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers. Some distributions package
the headers and compile-time libraries in a separate development package.
.. _mem_pol_and_cpusets:

Memory Policies and cpusets
===========================
Memory policies work within cpusets as described above. For memory policies
that require a node or set of nodes, the nodes are restricted to the set of
nodes whose memories are allowed by the cpuset constraints. If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset,
the intersection of the set of nodes specified for the policy and the set of
nodes with memory is used. If the result is the empty set, the policy is
considered invalid and cannot be installed.

The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory segments
created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED flags, and
any of the tasks install shared policy on the region: only nodes whose
memories are allowed in both cpusets may be used in the policies. Obtaining
this information requires "stepping outside" the memory policy APIs to use the
cpuset information, and requires knowing in what cpusets other tasks might be
attaching to the shared region. Furthermore, if the cpusets' allowed memory
sets are disjoint, "local" allocation is the only valid policy.
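A toy illustration of the intersection rule above, as plain bitmask
arithmetic rather than any kernel interface (node numbers invented)::

    #include <stdio.h>

    int main(void)
    {
        unsigned long policy_nodes = 0x6;   /* policy asks for nodes 1,2 */
        unsigned long cpuset_nodes = 0x3;   /* cpuset allows nodes 0,1   */
        unsigned long effective = policy_nodes & cpuset_nodes;

        if (effective == 0)
            puts("empty intersection: the policy would be invalid");
        else
            printf("effective nodemask: 0x%lx\n", effective);   /* node 1 */
        return 0;
    }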