.. _numa:

Started Nov 1999 by Kanoj Sarcar <kanoj@sgi.com>

=============
What is NUMA?
=============

This question can be answered from a couple of perspectives: the
hardware view and the Linux software view.

From the hardware perspective, a NUMA system is a computer platform that
comprises multiple components or assemblies, each of which may contain 0
or more CPUs, local memory, and/or IO buses.  For brevity and to
disambiguate the hardware view of these physical components/assemblies
from the software abstraction thereof, we'll call the components/assemblies
'cells' in this document.

Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
of the system--although some components necessary for a stand-alone SMP system
may not be populated on any given cell.  The cells of the NUMA system are
connected together with some sort of system interconnect--e.g., crossbar or
point-to-point links are common types of NUMA system interconnects.  Both of
these types of interconnects can be aggregated to create NUMA platforms with
cells at multiple distances from other cells.

For Linux, the NUMA platforms of interest are primarily what is known as Cache
Coherent NUMA or ccNUMA systems.  With ccNUMA systems, all memory is visible
to and accessible from any CPU attached to any cell, and cache coherency
is handled in hardware by the processor caches and/or the system interconnect.

Memory access time and effective memory bandwidth vary depending on how far
away the cell containing the CPU or IO bus making the memory access is from the
cell containing the target memory.  For example, access to memory by CPUs
attached to the same cell will experience faster access times and higher
bandwidths than accesses to memory on other, remote cells.  NUMA platforms
can have cells at multiple remote distances from any given cell.

Platform vendors don't build NUMA systems just to make software developers'
lives interesting.  Rather, this architecture is a means to provide scalable
memory bandwidth.  However, to achieve scalable memory bandwidth, system and
application software must arrange for a large majority of the memory references
[cache misses] to be to "local" memory--memory on the same cell, if any--or
to the closest cell with memory.

This leads to the Linux software view of a NUMA system:

Linux divides the system's hardware resources into multiple software
abstractions called "nodes".  Linux maps the nodes onto the physical cells
of the hardware platform, abstracting away some of the details for some
architectures.  As with physical cells, software nodes may contain 0 or more
CPUs, memory and/or IO buses.  And, again, memory accesses to memory on
"closer" nodes--nodes that map to closer cells--will generally experience
faster access times and higher effective bandwidth than accesses to more
remote cells.

For some architectures, such as x86, Linux will "hide" any node representing a
physical cell that has no memory attached, and reassign any CPUs attached to
that cell to a node representing a cell that does have memory.  Thus, on
these architectures, one cannot assume that all CPUs that Linux associates with
a given node will see the same local memory access times and bandwidth.

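The node abstraction described above is visible to user space.  As a minimal
sketch--assuming the user-space libnuma library [numa(3)] is available and the
program is linked with ``-lnuma``--the following prints the node Linux
associates with each CPU and the relative NUMA distances the kernel reports
for the configured nodes::

  /*
   * Illustrative sketch only: enumerate the "nodes" Linux exposes and the
   * relative distances between them via libnuma.  Build with: cc view.c -lnuma
   */
  #include <stdio.h>
  #include <numa.h>

  int main(void)
  {
      int nodes, cpus, node, other, cpu;

      if (numa_available() < 0) {
          fprintf(stderr, "NUMA is not supported on this system\n");
          return 1;
      }

      nodes = numa_num_configured_nodes();
      cpus = numa_num_configured_cpus();

      /* Which software node does Linux associate with each CPU? */
      for (cpu = 0; cpu < cpus; cpu++)
          printf("cpu %d -> node %d\n", cpu, numa_node_of_cpu(cpu));

      /* Relative distances; by convention "local" is reported as 10. */
      for (node = 0; node < nodes; node++)
          for (other = 0; other < nodes; other++)
              printf("distance(%d, %d) = %d\n",
                     node, other, numa_distance(node, other));
      return 0;
  }

On an architecture that hides memoryless cells, several CPUs may report the
same node even though they are attached to different physical cells.
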
In addition, for some architectures, again x86 is an example, Linux supports
the emulation of additional nodes.  For NUMA emulation, Linux will carve up
the existing nodes--or the system memory for non-NUMA platforms--into multiple
nodes.  Each emulated node will manage a fraction of the underlying cells'
physical memory.  NUMA emulation is useful for testing NUMA kernel and
application features on non-NUMA platforms, and as a sort of memory resource
management mechanism when used together with cpusets.
[see Documentation/admin-guide/cgroup-v1/cpusets.rst]

For each node with memory, Linux constructs an independent memory management
subsystem, complete with its own free page lists, in-use page lists, usage
statistics and locks to mediate access.  In addition, Linux constructs for
each memory zone [one or more of DMA, DMA32, NORMAL, HIGH_MEMORY, MOVABLE],
an ordered "zonelist".  A zonelist specifies the zones/nodes to visit when a
selected zone/node cannot satisfy the allocation request.  This situation,
when a zone has no available memory to satisfy a request, is called
"overflow" or "fallback".

Because some nodes contain multiple zones containing different types of
memory, Linux must decide whether to order the zonelists such that allocations
fall back to the same zone type on a different node, or to a different zone
type on the same node.  This is an important consideration because some zones,
such as DMA or DMA32, represent relatively scarce resources.  Linux chooses
a default node-ordered zonelist.  This means that it tries to fall back to
other zones from the same node before using remote nodes, which are ordered
by NUMA distance.

By default, Linux will attempt to satisfy memory allocation requests from the
node to which the CPU that executes the request is assigned.  Specifically,
Linux will attempt to allocate from the first node in the appropriate zonelist
for the node where the request originates.  This is called "local allocation."
If the "local" node cannot satisfy the request, the kernel will examine other
nodes' zones in the selected zonelist looking for the first zone in the list
that can satisfy the request.

Local allocation will tend to keep subsequent access to the allocated memory
"local" to the underlying physical resources and off the system interconnect--
as long as the task on whose behalf the kernel allocated some memory does not
later migrate away from that memory.  The Linux scheduler is aware of the
NUMA topology of the platform--embodied in the "scheduling domains" data
structures [see Documentation/scheduler/sched-domains.rst]--and the scheduler
attempts to minimize task migration to distant scheduling domains.  However,
the scheduler does not take a task's NUMA footprint into account directly.
Thus, under sufficient imbalance, tasks can migrate between nodes, remote
from their initial node and kernel data structures.

System administrators and application designers can restrict a task's migration
to improve NUMA locality using various CPU affinity command line interfaces,
such as taskset(1) and numactl(1), and program interfaces such as
sched_setaffinity(2).  Further, one can modify the kernel's default local
allocation behavior using Linux NUMA memory policy.  [see
:ref:`Documentation/admin-guide/mm/numa_memory_policy.rst <numa_memory_policy>`]

System administrators can restrict the CPUs and nodes' memories that a
non-privileged user can specify in the scheduling or NUMA commands and
functions using control groups and CPUsets.
[see Documentation/admin-guide/cgroup-v1/cpusets.rst]

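As an illustrative sketch of the program interfaces mentioned above--again
assuming libnuma, with the target node number (0 here) purely an example--a
task can bind itself to one node's CPUs and allocate a buffer from that
node's memory::

  /*
   * Illustrative sketch only: run the current task on the CPUs of one node
   * and allocate its working buffer from that node's memory via libnuma.
   * The target node (0) is an arbitrary example value.
   */
  #include <stdio.h>
  #include <numa.h>

  int main(void)
  {
      int node = 0;             /* example target node */
      size_t len = 1 << 20;
      char *buf;

      if (numa_available() < 0)
          return 1;

      /* Restrict this task to the CPUs attached to 'node'. */
      if (numa_run_on_node(node) < 0) {
          perror("numa_run_on_node");
          return 1;
      }

      /*
       * Ask for pages backed by 'node' rather than relying on default
       * local allocation plus zonelist fallback.
       */
      buf = numa_alloc_onnode(len, node);
      if (!buf)
          return 1;

      /* ... use buf ... */

      numa_free(buf, len);
      return 0;
  }

Similar effects are available from the command line without modifying the
program; numactl(8), for example, provides --cpunodebind and --membind
options.
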
On architectures that do not hide memoryless nodes, Linux will include only
zones [nodes] with memory in the zonelists.  This means that for a memoryless
node the "local memory node"--the node of the first zone in the CPU's node's
zonelist--will not be the node itself.  Rather, it will be the node that the
kernel selected as the nearest node with memory when it built the zonelists.
So, by default, local allocations will succeed with the kernel supplying the
closest available memory.  This is a consequence of the same mechanism that
allows such allocations to fall back to other nearby nodes when a node that
does contain memory overflows.

Some kernel allocations do not want or cannot tolerate this allocation fallback
behavior.  Rather, they want to be sure they get memory from the specified node
or get notified that the node has no free memory.  This is usually the case
when a subsystem allocates per CPU memory resources, for example.

A typical model for making such an allocation is to obtain the node id of the
node to which the "current CPU" is attached using one of the kernel's
numa_node_id() or cpu_to_node() functions and then request memory from only
the node id returned.  When such an allocation fails, the requesting subsystem
may revert to its own fallback path.  The slab kernel memory allocator is an
example of this.  Or, the subsystem may choose to disable or not to enable
itself on allocation failure.  The kernel profiling subsystem is an example of
this.

If the architecture supports--does not hide--memoryless nodes, then CPUs
attached to memoryless nodes would always incur the fallback path overhead
or some subsystems would fail to initialize if they attempted to allocate
memory exclusively from a node without memory.  To support such
architectures transparently, kernel subsystems can use the numa_mem_id()
or cpu_to_mem() functions to locate the "local memory node" for the calling or
specified CPU.  Again, this is the same node from which default, local page
allocations will be attempted.
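
The following is a hypothetical, in-kernel style sketch of the model just
described; struct foo and the surrounding subsystem are invented for
illustration, but numa_mem_id() and kmalloc_node() are the interfaces named
above::

  #include <linux/slab.h>
  #include <linux/topology.h>

  /* Made-up per CPU resource, for illustration only. */
  struct foo {
          int data;
  };

  static struct foo *alloc_foo_for_this_cpu(void)
  {
          /*
           * numa_node_id() may name a memoryless node; numa_mem_id() names
           * the nearest node that actually has memory, i.e. the node that
           * default local allocation would use anyway.
           */
          int nid = numa_mem_id();
          struct foo *f;

          f = kmalloc_node(sizeof(*f), GFP_KERNEL, nid);
          if (!f)
                  return NULL;    /* caller falls back or disables itself */

          return f;
  }
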