Searched refs:MPOL_BIND (Results 1 – 10 of 10) sorted by relevance
22 MPOL_BIND, enumerator
37 #define MPOL_BIND 2 macro
440 [MPOL_BIND] = {
898 case MPOL_BIND: in get_policy_nodemask()
1507 if (*mode == MPOL_BIND || *mode == MPOL_PREFERRED_MANY) in sanitize_mpol_flags()
1573 * If any vma in the range got policy other than MPOL_BIND in SYSCALL_DEFINE4()
1582 if (old->mode != MPOL_BIND && old->mode != MPOL_PREFERRED_MANY) { in SYSCALL_DEFINE4()
1942 case MPOL_BIND: in mempolicy_slab_node()
2060 case MPOL_BIND: in policy_nodemask()
2143 case MPOL_BIND: in init_nodemask_of_mempolicy()
2183 if (mempolicy && mempolicy->mode == MPOL_BIND) in mempolicy_in_oom_domain()
2621 case MPOL_BIND: in __mpol_equal()
[all...]
2440 * Only enforce MPOL_BIND policy which overlaps with cpuset policy
2443 if (mpol->mode == MPOL_BIND &&
104 For example, mpol=bind=static:NodeList is equivalent to an allocation policy of MPOL_BIND|MPOL_F_STATIC_NODES
115 if (mbind(data, mmap_len, MPOL_BIND, node_mask, node_index + 1 + 1, 0)) { in perf_mmap__aio_bind()
199 allocation policy of MPOL_BIND | MPOL_F_STATIC_NODES.
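The doc hits above describe the tmpfs mount option form of this policy. A sketch of the corresponding mount command, assuming a NUMA kernel, root privileges, and a hypothetical mount point (the =static suffix maps to MPOL_F_STATIC_NODES):

```shell
# Hypothetical example: tmpfs whose pages are bound to nodes 0-1,
# with a static (non-remapped) nodemask, i.e. MPOL_BIND|MPOL_F_STATIC_NODES.
mount -t tmpfs -o mpol=bind=static:0-1 tmpfs /mnt/numa-tmpfs
```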
193 MPOL_BIND
369 path". Note that for MPOL_BIND, the "usage" extends across the entire
406 ret = set_mempolicy(MPOL_BIND, node_mask->maskp, node_mask->size + 1); in bind_to_memnode()