Started Nov 1999 by Kanoj Sarcar <kanoj@sgi.com>

=============
What is NUMA?
=============

This question can be answered from a couple of perspectives: the
hardware view and the Linux software view.

From the hardware perspective, a NUMA system is a computer platform that
comprises multiple components or assemblies, each of which may contain 0
or more CPUs, local memory, and/or IO buses. For brevity and to
disambiguate the hardware view of these physical components/assemblies
from the software abstraction thereof, we'll call the components/assemblies
'cells' in this document.

Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
of the system--although some components necessary for a stand-alone SMP system
may not be populated on any given cell. The cells of the NUMA system are
connected together with some sort of system interconnect--e.g., crossbar or
point-to-point links are common types of NUMA system interconnects. Both of
these types of interconnects can be aggregated to create NUMA platforms with
cells at multiple distances from other cells.

For Linux, the NUMA platforms of interest are primarily what is known as Cache
Coherent NUMA or ccNUMA systems. With ccNUMA systems, all memory is visible
to and accessible from any CPU attached to any cell, and cache coherency
is handled in hardware by the processor caches and/or the system interconnect.

Memory access time and effective memory bandwidth vary depending on how far
away the cell containing the CPU or IO bus making the memory access is from the
cell containing the target memory. For example, access to memory by CPUs
attached to the same cell will experience faster access times and higher
bandwidths than accesses to memory on other, remote cells. NUMA platforms
can have cells at multiple remote distances from any given cell.

Platform vendors don't build NUMA systems just to make software developers'
lives interesting. Rather, this architecture is a means to provide scalable
memory bandwidth. However, to achieve scalable memory bandwidth, system and
application software must arrange for a large majority of the memory references
[cache misses] to be to "local" memory--memory on the same cell, if any--or
to the closest cell with memory.

This leads to the Linux software view of a NUMA system:

Linux divides the system's hardware resources into multiple software
abstractions called "nodes". Linux maps the nodes onto the physical cells
of the hardware platform, abstracting away some of the details for some
architectures. As with physical cells, software nodes may contain 0 or more
CPUs, memory and/or IO buses. And, again, memory accesses to memory on
"closer" nodes--nodes that map to closer cells--will generally experience
faster access times and higher effective bandwidth than accesses to more
remote cells.

For some architectures, such as x86, Linux will "hide" any node representing a
physical cell that has no memory attached, and reassign any CPUs attached to
that cell to a node representing a cell that does have memory. Thus, on
these architectures, one cannot assume that all CPUs that Linux associates with
a given node will see the same local memory access times and bandwidth.
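
The node abstraction is visible to user space through the libnuma library
[see numa(3)]. The following is a minimal sketch, assuming libnuma is
installed (build with "-lnuma"); it merely reports which node each CPU has
been assigned to and how much memory that node manages, and is shown only to
make the abstraction concrete--it is not part of any kernel interface::

    /* Illustrative only: print the node each configured CPU belongs to. */
    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        int cpu, node;
        long long free_bytes;

        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not supported on this system\n");
            return 1;
        }

        for (cpu = 0; cpu < numa_num_configured_cpus(); cpu++) {
            node = numa_node_of_cpu(cpu);
            if (node < 0)
                continue;   /* CPU not present or not online */
            printf("cpu %d -> node %d (%lld MB of node memory)\n",
                   cpu, node,
                   numa_node_size64(node, &free_bytes) >> 20);
        }
        return 0;
    }

On a platform where Linux has hidden a memoryless cell, the CPUs of that cell
will simply appear under a neighboring node with memory in output such as the
above.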
In addition, for some architectures, again x86 is an example, Linux supports
the emulation of additional nodes. For NUMA emulation, Linux will carve up
the existing nodes--or the system memory for non-NUMA platforms--into multiple
nodes. Each emulated node will manage a fraction of the underlying cells'
physical memory. NUMA emulation is useful for testing NUMA kernel and
application features on non-NUMA platforms, and as a sort of memory resource
management mechanism when used together with cpusets.
[see Documentation/admin-guide/cgroup-v1/cpusets.rst]

For each node with memory, Linux constructs an independent memory management
subsystem, complete with its own free page lists, in-use page lists, usage
statistics and locks to mediate access. In addition, Linux constructs for
each memory zone [one or more of DMA, DMA32, NORMAL, HIGH_MEMORY, MOVABLE],
an ordered "zonelist". A zonelist specifies the zones/nodes to visit when a
selected zone/node cannot satisfy the allocation request. This situation,
when a zone has no available memory to satisfy a request, is called
"overflow" or "fallback".

Because some nodes contain multiple zones containing different types of
memory, Linux must decide whether to order the zonelists such that allocations
fall back to the same zone type on a different node, or to a different zone
type on the same node. This is an important consideration because some zones,
such as DMA or DMA32, represent relatively scarce resources. Linux chooses
a default node-ordered zonelist. This means it tries to fall back to other
zones from the same node before using remote nodes, which are ordered by NUMA
distance.

By default, Linux will attempt to satisfy memory allocation requests from the
node to which the CPU that executes the request is assigned. Specifically,
Linux will attempt to allocate from the first node in the appropriate zonelist
for the node where the request originates. This is called "local allocation."
If the "local" node cannot satisfy the request, the kernel will examine other
nodes' zones in the selected zonelist looking for the first zone in the list
that can satisfy the request.

Local allocation will tend to keep subsequent access to the allocated memory
"local" to the underlying physical resources and off the system interconnect--
as long as the task on whose behalf the kernel allocated some memory does not
later migrate away from that memory. The Linux scheduler is aware of the
NUMA topology of the platform--embodied in the "scheduling domains" data
structures [see Documentation/scheduler/sched-domains.rst]--and the scheduler
attempts to minimize task migration to distant scheduling domains. However,
the scheduler does not take a task's NUMA footprint into account directly.
Thus, under sufficient imbalance, tasks can migrate between nodes, remote
from their initial node and kernel data structures.

System administrators and application designers can restrict a task's migration
to improve NUMA locality using various CPU affinity command line interfaces,
such as taskset(1) and numactl(1), and program interfaces such as
sched_setaffinity(2). Further, one can modify the kernel's default local
allocation behavior using Linux NUMA memory policy.
[see Documentation/admin-guide/mm/numa_memory_policy.rst]

System administrators can restrict the CPUs and nodes' memories that a
non-privileged user can specify in the scheduling or NUMA commands and
functions using control groups and CPUsets.
[see Documentation/admin-guide/cgroup-v1/cpusets.rst]
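
As an illustration of the affinity and memory-policy interfaces mentioned
above, the following is a minimal sketch that uses libnuma [numa(3)] to run
the calling task on the CPUs of node 0 and to allocate a buffer from node 0's
memory. The choice of node and buffer size is arbitrary, and numactl(1)
(e.g. "numactl --cpunodebind=0 --membind=0 <cmd>") achieves a similar effect
without modifying the program::

    /* Illustrative only: bind execution and memory to node 0 via libnuma. */
    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 64UL << 20;    /* 64 MB; size is arbitrary */
        void *buf;

        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not supported on this system\n");
            return 1;
        }

        /* Restrict this task (and its children) to node 0's CPUs. */
        if (numa_run_on_node(0) < 0) {
            perror("numa_run_on_node");
            return 1;
        }

        /* Allocate pages backed by node 0's memory rather than relying
         * on default local allocation. */
        buf = numa_alloc_onnode(len, 0);
        if (!buf) {
            fprintf(stderr, "allocation on node 0 failed\n");
            return 1;
        }

        /* ... use buf; accesses stay local as long as the task stays put ... */

        numa_free(buf, len);
        return 0;
    }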
On architectures that do not hide memoryless nodes, Linux will include only
zones [nodes] with memory in the zonelists. This means that for a memoryless
node the "local memory node"--the node of the first zone in the CPU's node's
zonelist--will not be the node itself. Rather, it will be the node that the
kernel selected as the nearest node with memory when it built the zonelists.
So, default local allocations will succeed with the kernel supplying the
closest available memory. This is a consequence of the same mechanism that
allows such allocations to fall back to other nearby nodes when a node that
does contain memory overflows.

Some kernel allocations do not want or cannot tolerate this allocation fallback
behavior. Rather, they want to be sure they get memory from the specified node
or get notified that the node has no free memory. This is usually the case when
a subsystem allocates per CPU memory resources, for example.

A typical model for making such an allocation is to obtain the node id of the
node to which the "current CPU" is attached using one of the kernel's
numa_node_id() or cpu_to_node() functions and then request memory from only
the node id returned. When such an allocation fails, the requesting subsystem
may revert to its own fallback path. The slab kernel memory allocator is an
example of this. Or, the subsystem may choose to disable or not to enable
itself on allocation failure. The kernel profiling subsystem is an example of
this.

If the architecture supports--does not hide--memoryless nodes, then CPUs
attached to memoryless nodes would always incur the fallback path overhead,
or some subsystems would fail to initialize if they attempted to allocate
memory exclusively from a node without memory. To support such
architectures transparently, kernel subsystems can use the numa_mem_id()
or cpu_to_mem() function to locate the "local memory node" for the calling or
specified CPU. Again, this is the same node from which default, local page
allocations will be attempted.
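
A hedged sketch of the per-CPU allocation model described above, combining a
strict node-local attempt with a subsystem fallback to the "local memory
node", might look like this. The function name my_setup_cpu_buffer() is
hypothetical; kmalloc_node(), cpu_to_node(), cpu_to_mem() and __GFP_THISNODE
are the kernel interfaces involved::

    /*
     * Hypothetical subsystem setting up a per-CPU buffer.  __GFP_THISNODE
     * forbids page-allocator fallback, so the first attempt either returns
     * memory from the CPU's own node or fails.
     */
    #include <linux/gfp.h>
    #include <linux/slab.h>
    #include <linux/topology.h>

    static void *my_setup_cpu_buffer(int cpu, size_t size)
    {
        void *buf = kmalloc_node(size, GFP_KERNEL | __GFP_THISNODE,
                                 cpu_to_node(cpu));

        if (!buf) {
            /*
             * Subsystem-specific fallback: retry on the CPU's "local
             * memory node", i.e. the nearest node that actually has
             * memory.  A subsystem could instead choose to disable
             * itself here.
             */
            buf = kmalloc_node(size, GFP_KERNEL, cpu_to_mem(cpu));
        }
        return buf;
    }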