Lines Matching +full:one +full:- +full:cell

17 Each of the 'cells' may be viewed as an SMP [symmetric multi-processor] subset
18 of the system--although some components necessary for a stand-alone SMP system
19 may not be populated on any given cell. The cells of the NUMA system are
20 connected together with some sort of system interconnect--e.g., a crossbar or
21 point-to-point links are common types of NUMA system interconnects. Both of
27 to and accessible from any CPU attached to any cell and cache coherency
31 away the cell containing the CPU or IO bus making the memory access is from the
32 cell containing the target memory. For example, access to memory by CPUs
33 attached to the same cell will experience faster access times and higher
35 can have cells at multiple remote distances from any given cell.
41 [cache misses] to be to "local" memory--memory on the same cell, if any--or
42 to the closest cell with memory.
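The local-then-closest fallback described above can be pictured as a per-node ordering: nodes that have memory, sorted by distance from the allocating node. A minimal sketch, using a made-up distance table in the style of the ACPI SLIT (as exposed under /sys/devices/system/node/node*/distance); the node layout and values are example data, not any real platform:

```python
# Per-node allocation fallback order: nodes that have memory, sorted by
# distance from the allocating node (nearest first). The distance table
# and memory layout below are made-up example data.
DISTANCE = [
    [10, 20, 30],
    [20, 10, 15],
    [30, 15, 10],
]
HAS_MEMORY = [True, False, True]   # node 1 is a memoryless node

def fallback_order(node: int) -> list[int]:
    """Nodes to try for an allocation made from `node`, nearest first."""
    candidates = [n for n in range(len(DISTANCE)) if HAS_MEMORY[n]]
    return sorted(candidates, key=lambda n: DISTANCE[node][n])

print(fallback_order(0))  # local node 0 first, then remote node 2
print(fallback_order(1))  # memoryless node 1: nearest memory is on node 2
```

The first entry of each list plays the role of the "local" memory node; the rest is the fallback chain.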
51 "closer" nodes--nodes that map to closer cells--will generally experience
56 physical cell that has no memory attached, and reassign any CPUs attached to
57 that cell to a node representing a cell that does have memory. Thus, on
58 these architectures, one cannot assume that all CPUs that Linux associates with
63 the existing nodes--or the system memory for non-NUMA platforms--into multiple
66 application features on non-NUMA platforms, and as a sort of memory resource
68 [see Documentation/admin-guide/cgroup-v1/cpusets.rst]
71 subsystem, complete with its own free page lists, in-use page lists, usage
73 each memory zone [one or more of DMA, DMA32, NORMAL, HIGH_MEMORY, MOVABLE],
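The per-node zones are visible from userspace in /proc/zoneinfo. A small sketch that parses a trimmed, made-up sample of that format to list which zones each node populates (real output carries many more per-zone counters):

```python
# Parse a /proc/zoneinfo-style snippet to list the zones each node
# populates. SAMPLE is a trimmed, made-up example of the format.
SAMPLE = """\
Node 0, zone      DMA
Node 0, zone    DMA32
Node 0, zone   Normal
Node 1, zone   Normal
Node 1, zone  Movable
"""

def zones_by_node(text: str) -> dict[int, list[str]]:
    zones: dict[int, list[str]] = {}
    for line in text.splitlines():
        if line.startswith("Node"):
            # e.g. "Node 0, zone      DMA"
            head, zone = line.split("zone")
            node = int(head.split(",")[0].split()[1])
            zones.setdefault(node, []).append(zone.strip())
    return zones

print(zones_by_node(SAMPLE))
```

Note that not every node populates every zone; which zones exist on a node depends on the physical address ranges it covers.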
96 "local" to the underlying physical resources and off the system interconnect--
99 NUMA topology of the platform--embodied in the "scheduling domains" data
100 structures [see Documentation/scheduler/sched-domains.rst]--and the scheduler
109 sched_setaffinity(2). Further, one can modify the kernel's default local
111 Documentation/admin-guide/mm/numa_memory_policy.rst].
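As a userspace illustration of the CPU side of this, Python exposes the sched_setaffinity(2) syscall directly. A minimal sketch that pins the calling process to CPU 0 and then restores the original mask (Linux-only, and it assumes CPU 0 is available to the process):

```python
import os

# Pin the calling process to CPU 0 via sched_setaffinity(2), then
# restore the original mask. Linux-only; assumes CPU 0 is usable.
original = os.sched_getaffinity(0)   # pid 0 means "the calling process"
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))       # now restricted to CPU 0
os.sched_setaffinity(0, original)    # undo the restriction
```

Memory policies are the analogous control for the allocation side; from C they are set with set_mempolicy(2) or the libnuma wrappers.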
113 System administrators can restrict the CPUs and nodes' memories that a non-
115 using control groups and CPUsets. [see Documentation/admin-guide/cgroup-v1/cpusets.rst]
119 node the "local memory node"--the node of the first zone in the CPU's node's
120 zonelist--will not be the node itself. Rather, it will be the node that the
133 node to which the "current CPU" is attached using one of the kernel's
141 If the architecture supports--does not hide--memoryless nodes, then CPUs
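The resulting per-CPU "local memory node" can be sketched as a lookup table built once from the topology: a CPU attached to a node with memory uses that node, while a CPU on a memoryless node uses the nearest node that has memory. All topology data below is made up for illustration; the kernel derives the real mapping from its zonelists at boot:

```python
# Build a CPU -> "local memory node" map. A CPU on a node with memory
# uses that node; a CPU on a memoryless node uses the nearest node
# that has memory. All topology data below is made-up example data.
NODE_CPUS = {0: [0, 1], 1: [2, 3], 2: [4, 5]}   # node -> attached CPUs
HAS_MEMORY = {0: True, 1: False, 2: True}        # node 1 is memoryless
DISTANCE = {0: {0: 10, 1: 20, 2: 30},
            1: {0: 20, 1: 10, 2: 15},
            2: {0: 30, 1: 15, 2: 10}}

local_node = {}
for node, cpus in NODE_CPUS.items():
    if HAS_MEMORY[node]:
        target = node
    else:
        target = min((n for n in NODE_CPUS if HAS_MEMORY[n]),
                     key=lambda n: DISTANCE[node][n])
    for cpu in cpus:
        local_node[cpu] = target

print(local_node)  # CPUs 2 and 3 fall back to node 2's memory
```

In this example the CPUs of memoryless node 1 always allocate remotely, which is exactly the extra fallback-path cost the text describes.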