/linux/Documentation/admin-guide/mm/ |
numa_memory_policy.rst
     17: which is an administrative mechanism for restricting the nodes from which
     40: allocations across all nodes with "sufficient" memory, so as
    164: an optional set of nodes. The mode determines the behavior of the
    166: and the optional set of nodes can be viewed as the arguments to the
    188: does not use the optional set of nodes.
    190: It is an error for the set of nodes specified for this policy to
    195: nodes specified by the policy. Memory will be allocated from
    202: allocation fails, the kernel will search other nodes, in order
    222: page granularity, across the nodes specified in the policy.
    227: Interleave mode indexes the set of nodes specified by the
    [all …]
|
/linux/fs/bcachefs/ |
btree_node_scan.c
     43: static void found_btree_nodes_to_text(struct printbuf *out, struct bch_fs *c, found_btree_nodes nodes)
     46: darray_for_each(nodes, i) {  (in found_btree_nodes_to_text)
    122: * Given two found btree nodes, if their sequence numbers are equal, take the
    228: if (darray_push(&f->nodes, n))  (in try_read_btree_node)
    381: if (f->nodes.nr)  (in bch2_scan_for_btree_nodes)
    390: if (!f->nodes.nr) {  (in bch2_scan_for_btree_nodes)
    391: bch_err(c, "%s: no btree nodes found", __func__);  (in bch2_scan_for_btree_nodes)
    398: prt_printf(&buf, "%s: nodes found:\n", __func__);  (in bch2_scan_for_btree_nodes)
    399: found_btree_nodes_to_text(&buf, c, f->nodes);  (in bch2_scan_for_btree_nodes)
    403: sort_nonatomic(f->nodes.data, f->nodes.nr, sizeof(f->nodes.data[0]), found_btree_node_cmp_cookie, …
    [all …]
|
/linux/drivers/net/ethernet/tehuti/ |
tn40_mdio.c  (all hits in tn40_swnodes_register)
    127: struct tn40_nodes *nodes = &priv->nodes;
    134: snprintf(nodes->phy_name, sizeof(nodes->phy_name), "ethernet-phy@1");
    135: snprintf(nodes->mdio_name, sizeof(nodes->mdio_name), "tn40_mdio-%x",
    138: swnodes = nodes->swnodes;
    140: swnodes[SWNODE_MDIO] = NODE_PROP(nodes->mdio_name, NULL);
    142: nodes->phy_props[0] = PROPERTY_ENTRY_STRING("compatible",
    144: nodes->phy_props[1] = PROPERTY_ENTRY_U32("reg", 1);
    145: nodes->phy_props[2] = PROPERTY_ENTRY_STRING("firmware-name",
    147: swnodes[SWNODE_PHY] = NODE_PAR_PROP(nodes->phy_name,
    149: nodes->phy_props);
    [all …]
|
/linux/fs/ubifs/ |
gc.c
     14: * nodes) or not. For non-index LEBs, garbage collection finds a LEB which
     15: * contains a lot of dirty space (obsolete nodes), and copies the non-obsolete
     16: * nodes to the journal, at which point the garbage-collected LEB is free to be
     17: * reused. For index LEBs, garbage collection marks the non-obsolete index nodes
     19: * to be reused. Garbage collection will cause the number of dirty index nodes
     33: * the UBIFS nodes GC deals with. Large nodes make GC waste more space. Indeed,
     34: * if GC move data from LEB A to LEB B and nodes in LEB A are large, GC would
     35: * have to waste large pieces of free space at the end of LEB B, because nodes
     36: * from LEB A would not fit. And the worst situation is when all nodes are of
     97: * data_nodes_cmp - compare 2 data nodes.
    [all …]
|
/linux/Documentation/mm/ |
numa.rst
     47: abstractions called "nodes". Linux maps the nodes onto the physical cells
     49: architectures. As with physical cells, software nodes may contain 0 or more
     51: "closer" nodes--nodes that map to closer cells--will generally experience
     62: the emulation of additional nodes. For NUMA emulation, linux will carve up
     63: the existing nodes--or the system memory for non-NUMA platforms--into multiple
     64: nodes. Each emulated node will manage a fraction of the underlying cells'
     74: an ordered "zonelist". A zonelist specifies the zones/nodes to visit when a
     79: Because some nodes contain multiple zones containing different types of
     85: from the same node before using remote nodes which are ordered by NUMA distance.
     92: nodes' zones in the selected zonelist looking for the first zone in the list
    [all …]
|
/linux/Documentation/driver-api/md/ |
md-cluster.rst
     54: node may write to those sectors. This is used when a new nodes
     60: Each node has to communicate with other nodes when starting or ending
     70: Normally all nodes hold a concurrent-read lock on this device.
     75: Messages can be broadcast to all nodes, and the sender waits for all
     76: other nodes to acknowledge the message before proceeding. Only one
     87: informs other nodes that the metadata has
     94: informs other nodes that a resync is initiated or
    104: informs other nodes that a device is being added to
    128: The DLM LVB is used to communicate within nodes of the cluster. There
    145: acknowledged by all nodes in the cluster. The BAST of the resource
    [all …]
|
/linux/Documentation/filesystems/ |
ubifs-authentication.rst
     80: - *Index*: an on-flash B+ tree where the leaf nodes contain filesystem data
     98: Basic on-flash UBIFS entities are called *nodes*. UBIFS knows different types
     99: of nodes. Eg. data nodes (``struct ubifs_data_node``) which store chunks of file
    100: contents or inode nodes (``struct ubifs_ino_node``) which represent VFS inodes.
    101: Almost all types of nodes share a common header (``ubifs_ch``) containing basic
    104: and some less important node types like padding nodes which are used to pad
    108: as *wandering tree*, where only the changed nodes are re-written and previous
    121: a dirty-flag which marks nodes that have to be persisted the next time the
    126: on-flash filesystem structures like the index. On every commit, the TNC nodes
    135: any changes (in form of inode nodes, data nodes etc.) between commits
    [all …]
|
/linux/Documentation/devicetree/bindings/usb/ |
usb-device.yaml
     17: Four types of device-tree nodes are defined: "host-controller nodes"
     18: representing USB host controllers, "device nodes" representing USB devices,
     19: "interface nodes" representing USB interfaces and "combined nodes"
     33: description: Device nodes or combined nodes.
     49: description: should be 1 for hub nodes with device nodes,
     50: should be 2 for device nodes with interface nodes.
     59: description: USB interface nodes.
     66: description: Interface nodes.
|
/linux/mm/ |
mempolicy.c
     15: * interleave Allocate memory interleaved over a set of nodes,
     23: * Allocate memory interleaved over a set of nodes based on
     29: * bind Only allocate memory on a specific set of nodes,
     33: * the allocation to memory nodes instead
     41: * preferred many Try a set of nodes first before normal fallback. This is
    144: * weightiness balances the tradeoff between small weights (cycles through nodes
    313: * @mask: a pointer to a nodemask representing the allowed nodes.
    315: * This function iterates over all nodes in @mask and calculates the
    359: int (*create)(struct mempolicy *pol, const nodemask_t *nodes);
    360: void (*rebind)(struct mempolicy *pol, const nodemask_t *nodes);
    [all …]
|
memory-tiers.c
     25: /* All the nodes that are part of all the lower memory tiers. */
     79: * Node 0 & 1 are CPU + DRAM nodes, node 2 & 3 are PMEM nodes.
     98: * Node 0 & 1 are CPU + DRAM nodes, node 2 is memory-only DRAM node.
    114: * Node 0 is CPU + DRAM nodes, Node 1 is HBM node, node 2 is PMEM node.
    150: nodemask_t nodes = NODE_MASK_NONE;  (in get_memtier_nodemask)
    154: nodes_or(nodes, nodes, memtype->nodes);  (in get_memtier_nodemask)
    156: return nodes;  (in get_memtier_nodemask)
    348: * If there are multiple target nodes, just select one  (in next_demotion_node)
    411: * nodes. Failing here is OK. It might just indicate
    443: * Add all memory nodes except the selected memory tier  (in establish_demotion_targets)
    [all …]
|
/linux/drivers/md/persistent-data/ |
dm-btree-spine.c
    128: s->nodes[0] = NULL;  (in init_ro_spine)
    129: s->nodes[1] = NULL;  (in init_ro_spine)
    137: unlock_block(s->info, s->nodes[i]);  (in exit_ro_spine)
    145: unlock_block(s->info, s->nodes[0]);  (in ro_step)
    146: s->nodes[0] = s->nodes[1];  (in ro_step)
    150: r = bn_read_lock(s->info, new_child, s->nodes + s->count);  (in ro_step)
    161: unlock_block(s->info, s->nodes[s->count]);  (in ro_pop)
    169: block = s->nodes[s->count - 1];  (in ro_node)
    187: unlock_block(s->info, s->nodes[i]);  (in exit_shadow_spine)
    196: unlock_block(s->info, s->nodes[0]);  (in shadow_step)
    [all …]
|
/linux/include/linux/ |
interconnect-provider.h
     31: * @num_nodes: number of nodes in this device
     32: * @nodes: array of pointers to the nodes in this device
     36: struct icc_node *nodes[] __counted_by(num_nodes);
     47: * @nodes: internal list of the interconnect provider nodes
     53: * @xlate: provider-specific callback for mapping nodes from phandle arguments
     62: struct list_head nodes;
     83: * @num_links: number of links to other interconnect nodes
     85: * @node_list: the list entry in the parent provider's "nodes" list
     86: * @search_list: list used when walking the nodes graph
     87: * @reverse: pointer to previous node when walking the nodes graph
    [all …]
|
/linux/arch/arm/mach-sunxi/ |
mc_smp.c
    689: * This holds any device nodes that we requested resources for,
    702: int (*get_smp_nodes)(struct sunxi_mc_smp_nodes *nodes);
    706: static void __init sunxi_mc_smp_put_nodes(struct sunxi_mc_smp_nodes *nodes)
    708: of_node_put(nodes->prcm_node);  (in sunxi_mc_smp_put_nodes)
    709: of_node_put(nodes->cpucfg_node);  (in sunxi_mc_smp_put_nodes)
    710: of_node_put(nodes->sram_node);  (in sunxi_mc_smp_put_nodes)
    711: of_node_put(nodes->r_cpucfg_node);  (in sunxi_mc_smp_put_nodes)
    712: memset(nodes, 0, sizeof(*nodes));  (in sunxi_mc_smp_put_nodes)
    715: static int __init sun9i_a80_get_smp_nodes(struct sunxi_mc_smp_nodes *nodes)
    717: nodes->prcm_node = of_find_compatible_node(NULL, NULL,  (in sun9i_a80_get_smp_nodes)
    [all …]
|
/linux/Documentation/sphinx/ |
automarkup.py
      7: from docutils import nodes
    103: repl.append(nodes.Text(t[done:m.start()]))
    113: repl.append(nodes.Text(t[done:]))
    132: target_text = nodes.Text(match.group(0))
    142: lit_text = nodes.literal(classes=['xref', 'c', 'c-func'])
    173: target_text = nodes.Text(match.group(0))
    183: lit_text = nodes.literal(classes=['xref', 'c', class_str[match.re]])
    207: return nodes.Text(match.group(0))
    223: return nodes.Text(match.group(0))
    229: return nodes.Text(match.group(0))
    [all …]
|
translations.py
     11: from docutils import nodes
     31: class LanguagesNode(nodes.Element):
     64: pxref += nodes.Text(lang_name)
     77: # Iterate over the child nodes; any resolved links will have
     78: # the type 'nodes.reference', while unresolved links will be
     79: # type 'nodes.Text'.
     81: isinstance(xref, nodes.reference), node.children))
     89: node.replace_self(nodes.raw('', html_content, format='html'))
|
/linux/tools/perf/tests/ |
mem2node.c  (all hits in test__mem2node)
     50: struct memory_node nodes[3];
     52: .memory_nodes = (struct memory_node *) &nodes[0],
     53: .nr_memory_nodes = ARRAY_SIZE(nodes),
     58: for (i = 0; i < ARRAY_SIZE(nodes); i++) {
     59: nodes[i].node = test_nodes[i].node;
     60: nodes[i].size = 10;
     63: (nodes[i].set = get_bitmap(test_nodes[i].map, 10)));
     75: for (i = 0; i < ARRAY_SIZE(nodes); i++)
     76: zfree(&nodes[i].set);
|
/linux/Documentation/driver-api/acpi/ |
scan_handlers.rst
     19: acpi_device objects are referred to as "device nodes" in what follows, but they
     23: During ACPI-based device hot-remove device nodes representing pieces of hardware
     27: initialization of device nodes, such as retrieving common configuration
     48: where ids is the list of IDs of device nodes the given handler is supposed to
     51: executed, respectively, after registration of new device nodes and before
     52: unregistration of device nodes the handler attached to previously.
     55: device nodes in the given namespace scope with the driver core. Then, it tries
     72: callbacks from the scan handlers of all device nodes in the given namespace
     74: nodes in that scope.
     79: is the order in which they are matched against device nodes during namespace
|
/linux/drivers/interconnect/qcom/ |
icc-rpmh.h
     23: * @nodes: list of icc nodes that maps to the provider
     24: * @num_nodes: number of @nodes
     35: struct qcom_icc_node * const *nodes;
     82: * struct qcom_icc_node - Qualcomm specific interconnect nodes
     84: * @links: an array of nodes where we can go next while traversing
    114: * struct qcom_icc_bcm - Qualcomm specific hardware accelerator nodes
    130: * @nodes: list of qcom_icc_nodes that this BCM encapsulates
    146: struct qcom_icc_node *nodes[];
    150: struct qcom_icc_node **nodes;
    156: struct qcom_icc_node * const *nodes;
|
/linux/drivers/gpu/drm/tests/ |
drm_mm_test.c  (all hits in drm_test_mm_debug)
    193: struct drm_mm_node nodes[2];
    195: /* Create a small drm_mm with a couple of nodes and a few holes, and
    200: memset(nodes, 0, sizeof(nodes));
    201: nodes[0].start = 512;
    202: nodes[0].size = 1024;
    203: KUNIT_ASSERT_FALSE_MSG(test, drm_mm_reserve_node(&mm, &nodes[0]),
    205: nodes[0].start, nodes[0].size);
    207: nodes[1].size = 1024;
    208: nodes[1].start = 4096 - 512 - nodes[1].size;
    209: KUNIT_ASSERT_FALSE_MSG(test, drm_mm_reserve_node(&mm, &nodes[1]),
    [all …]
|
/linux/arch/sparc/kernel/ |
cpumap.c
     45: int num_nodes; /* Number of nodes in a level in a cpuinfo tree */
     51: /* Offsets into nodes[] for each level of the tree */
     53: struct cpuinfo_node nodes[] __counted_by(total_nodes);
     86: * nodes.
    121: * end index, and number of nodes for each level in the cpuinfo tree. The
    122: * total number of cpuinfo nodes required to build the tree is returned.
    197: new_tree = kzalloc(struct_size(new_tree, nodes, n), GFP_ATOMIC);  (in build_cpuinfo_tree)
    211: node = &new_tree->nodes[n];  (in build_cpuinfo_tree)
    252: node = &new_tree->nodes[level_rover[level]];  (in build_cpuinfo_tree)
    277: node = &new_tree->nodes[n];  (in build_cpuinfo_tree)
    [all …]
|
/linux/fs/btrfs/ |
ctree.c
    166: if (!p->nodes[i])  (in btrfs_release_path)
    169: btrfs_tree_unlock_rw(p->nodes[i], p->locks[i]);  (in btrfs_release_path)
    172: free_extent_buffer(p->nodes[i]);  (in btrfs_release_path)
    173: p->nodes[i] = NULL;  (in btrfs_release_path)
    718: * leaves and nodes.
    845: * node level balancing, used to make sure nodes are in proper order for
    866: mid = path->nodes[level];  (in balance_level)
    874: parent = path->nodes[level + 1];  (in balance_level)
    917: path->nodes[level] = NULL;  (in balance_level)
   1086: path->nodes[level] = left;  (in balance_level)
    [all …]
|
/linux/sound/hda/ |
hdac_sysfs.c
     16: struct kobject **nodes;
    328: if (tree->nodes) {  (in widget_tree_free)
    329: for (p = tree->nodes; *p; p++)  (in widget_tree_free)
    331: kfree(tree->nodes);  (in widget_tree_free)
    377: tree->nodes = kcalloc(codec->num_nodes + 1, sizeof(*tree->nodes),  (in widget_tree_create)
    379: if (!tree->nodes)  (in widget_tree_create)
    384: &tree->nodes[i]);  (in widget_tree_create)
    439: tree->nodes = kcalloc(num_nodes + 1, sizeof(*tree->nodes), GFP_KERNEL);  (in hda_widget_sysfs_reinit)
    440: if (!tree->nodes) {  (in hda_widget_sysfs_reinit)
    445: /* prune non-existing nodes */  (in hda_widget_sysfs_reinit)
    [all …]
|
/linux/Documentation/power/powercap/ |
dtpm.rst
     39: The nodes of the tree are a virtual description aggregating the power
     40: characteristics of the children nodes and their power limitations.
     64: When the nodes are inserted in the tree, their power characteristics are propagated to the parents::
    106: A root node is created and it is the parent of all the nodes. This
    115: hierarchically. There is one root node, all intermediate nodes are
    116: grouping the child nodes which can be intermediate nodes also or real
    119: The intermediate nodes aggregate the power information and allows to
    120: set the power limit given the weight of the nodes.
    137: if it is not recommended for the user space, several nodes can have
    170: allocate and link the different nodes of the tree.
    [all …]
|
/linux/drivers/scsi/lpfc/ |
lpfc_disc.h
    219: * processing is needed. Each list holds the nodes that require a PLOGI or
    221: * nodes affected by an RSCN, or a Link Up (Typically, all nodes are effected
    222: * by Link Up) event. The unmapped_list contains all nodes that have
    224: * mapped_list will contain all nodes that are mapped FCP targets.
    226: * The bind list is a list of undiscovered (potentially non-existent) nodes
    228: * nodes transition from the unmapped to the mapped list.
    259: * For a Link Down, all nodes on the ADISC, PLOGI, unmapped or mapped
    261: * expire, all effected nodes will receive a DEVICE_RM event.
    264: * For a Link Up or RSCN, all nodes will move from the mapped / unmapped lists
    266: * check, additional nodes may be added (DEVICE_ADD) or removed (DEVICE_RM) to /
    [all …]
|
/linux/Documentation/admin-guide/cgroup-v1/ |
cpusets.rst
     44: Nodes to a set of tasks. In this document "Memory Node" refers to
     58: set_mempolicy(2) system calls to include Memory Nodes in its memory
     60: CPUs or Memory Nodes not in that cpuset. The scheduler will not
     67: cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
     76: complex memory cache hierarchies and multiple Memory Nodes having
    114: Memory Nodes are used by a process or set of processes.
    118: Nodes it may obtain memory (mbind, set_mempolicy).
    122: - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
    129: those Memory Nodes allowed in that task's cpuset.
    131: Nodes.
    [all …]
|