Lines Matching full:idle

5  * Built-in idle CPU tracking policy.
14 /* Enable/disable built-in idle CPU selection policy */
17 /* Enable/disable per-node idle cpumasks */
27 * cpumasks to track idle CPUs within each NUMA node.
30 * from is used to track all the idle CPUs in the system.
38 * Global host-wide idle cpumasks (used when SCX_OPS_BUILTIN_IDLE_PER_NODE
44 * Per-node idle cpumasks.
49 * Local per-CPU cpumasks (used to generate temporary idle cpumasks).
56 * Return the idle masks associated with a target @node.
58 * NUMA_NO_NODE identifies the global idle cpumask.
67 * per-node idle cpumasks are disabled.
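A minimal sketch of the layout these lines describe, assuming one cpu/SMT mask pair per domain (the names are illustrative, not the kernel's exact identifiers):

    struct idle_masks {
            cpumask_var_t cpu;      /* CPUs currently idle */
            cpumask_var_t smt;      /* CPUs whose whole core is idle */
    };

    static struct idle_masks global_masks;  /* host-wide, flat tracking */
    static struct idle_masks **node_masks;  /* one entry per NUMA node */

    /* NUMA_NO_NODE selects the global mask, as the comment at 58 notes. */
    static struct idle_masks *idle_masks_of(int node)
    {
            return node == NUMA_NO_NODE ? &global_masks : node_masks[node];
    }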
85 * cluster is not wholly idle either way. This also prevents in scx_idle_test_and_clear_cpu()
95 * @cpu is never cleared from the idle SMT mask. Ensure that in scx_idle_test_and_clear_cpu()
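A hedged sketch of the test-and-clear step these two fragments belong to: claiming one sibling must also drop the whole core from the SMT mask, so a half-busy cluster is never reported as wholly idle (helper and field names follow the sketch above):

    static bool idle_test_and_clear(int cpu, struct idle_masks *m)
    {
            if (sched_smt_active()) {
                    const struct cpumask *smt = cpu_smt_mask(cpu);

                    /*
                     * One sibling is being claimed: the core is no longer
                     * wholly idle, so clear all of its siblings.
                     */
                    if (cpumask_test_cpu(cpu, m->smt))
                            cpumask_andnot(m->smt, m->smt, smt);
            }

            /* Claim @cpu itself; true only if it was still marked idle. */
            return cpumask_test_and_clear_cpu(cpu, m->cpu);
    }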
113 * Pick an idle CPU in a specific NUMA node.
142 * Tracks nodes that have not yet been visited when searching for an idle
148 * Search for an idle CPU across all nodes, excluding @node.
172 * SCX_OPS_BUILTIN_IDLE_PER_NODE and it's requesting an idle CPU in pick_idle_cpu_from_online_nodes()
198 * Find an idle CPU in the system, starting from @node.
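One plausible shape for the cross-node search these fragments outline: try @node first, then walk the remaining online nodes, returning -EBUSY when nothing matches (a simplified sketch, not the kernel's exact traversal; pick_idle_cpu_in_node() is an assumed helper):

    static s32 pick_idle_cpu_from_nodes(const struct cpumask *cpus_allowed,
                                        int node)
    {
            nodemask_t unvisited = node_states[N_ONLINE];
            s32 cpu;

            /* Start from the requested node... */
            cpu = pick_idle_cpu_in_node(cpus_allowed, node);
            if (cpu >= 0)
                    return cpu;

            /* ...then fall back to every other online node. */
            node_clear(node, unvisited);
            for_each_node_mask(node, unvisited) {
                    cpu = pick_idle_cpu_in_node(cpus_allowed, node);
                    if (cpu >= 0)
                            return cpu;
            }
            return -EBUSY;
    }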
337 * cache-aware / NUMA-aware scheduling optimizations in the default CPU idle
353 * single LLC domain, the idle CPU selection logic can choose any in scx_idle_update_selcpu_topology()
376 * for an idle CPU in the same domain twice is redundant. in scx_idle_update_selcpu_topology()
379 * optimization, as we would naturally select idle CPUs within in scx_idle_update_selcpu_topology()
393 pr_debug("sched_ext: LLC idle selection %s\n", in scx_idle_update_selcpu_topology()
395 pr_debug("sched_ext: NUMA idle selection %s\n", in scx_idle_update_selcpu_topology()
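The two pr_debug() lines hint at how these optimizations are switched on. A sketch, assuming the domain counting happens elsewhere (nr_llc_domains() is a hypothetical stand-in, and the static-key names are illustrative):

    static void update_selcpu_topology(void)
    {
            bool llc_on  = nr_llc_domains() > 1;   /* hypothetical helper */
            bool numa_on = nr_online_nodes > 1;

            /*
             * With a single LLC (or a single node), every idle CPU already
             * lives in that domain, so a second domain-scoped scan would be
             * redundant; leave the optimization off.
             */
            if (llc_on)
                    static_branch_enable_cpuslocked(&selcpu_topo_llc);
            if (numa_on)
                    static_branch_enable_cpuslocked(&selcpu_topo_numa);

            pr_debug("sched_ext: LLC idle selection %s\n",
                     llc_on ? "enabled" : "disabled");
            pr_debug("sched_ext: NUMA idle selection %s\n",
                     numa_on ? "enabled" : "disabled");
    }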
417 * Built-in CPU idle selection policy:
419 * 1. Prioritize full-idle cores:
420 * - always prioritize CPUs from fully idle cores (both logical CPUs are
421 * idle) to avoid interference caused by SMT.
436 * 5. Pick any idle CPU within the @cpus_allowed domain.
446 * Return the picked CPU if idle, or a negative value otherwise.
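Condensing the five policy steps into one cascade gives roughly the following shape (SMT path shown; pick_idle_core(), pick_idle_cpu() and the mask names are illustrative stand-ins, not the kernel's exact helpers):

    /* 1. @prev_cpu first, if its whole core is idle. */
    if (smt_active && cpumask_test_cpu(prev_cpu, idle->smt) &&
        idle_test_and_clear(prev_cpu, idle))
            return prev_cpu;

    /* 2.-4. fully idle cores: same LLC, same node, then anywhere. */
    if (smt_active) {
            cpu = pick_idle_core(llc_cpus);
            if (cpu < 0)
                    cpu = pick_idle_core(numa_cpus);
            if (cpu < 0)
                    cpu = pick_idle_core(cpus_allowed);
            if (cpu >= 0)
                    return cpu;
    }

    /* @prev_cpu again, then any idle CPU: same LLC, same node, anywhere. */
    if (idle_test_and_clear(prev_cpu, idle))
            return prev_cpu;
    cpu = pick_idle_cpu(llc_cpus);
    if (cpu < 0)
            cpu = pick_idle_cpu(numa_cpus);
    if (cpu < 0)
            cpu = pick_idle_cpu(cpus_allowed);  /* 5. any allowed idle CPU */
    return cpu;                                 /* negative if none found */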
523 * If the waker's CPU is cache affine and prev_cpu is idle, in scx_select_cpu_dfl()
541 * Checking only for the presence of idle CPUs is also in scx_select_cpu_dfl()
543 * piled up on it even if there is an idle core elsewhere on in scx_select_cpu_dfl()
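The waker-side heuristic these fragments describe, as a sketch: on a synchronous wakeup, stay on @prev_cpu when it is cache affine to the waker and idle; otherwise run next to the waker, but only if the waker's local queue is empty, since idle CPUs existing elsewhere don't help once work has piled up here (local_queue_empty() is a hypothetical helper):

    if (wake_flags & SCX_WAKE_SYNC) {
            s32 waker = smp_processor_id();

            if (cpus_share_cache(waker, prev_cpu) &&
                idle_test_and_clear(prev_cpu, idle))
                    return prev_cpu;

            if (local_queue_empty(waker) &&
                cpumask_test_cpu(waker, cpus_allowed))
                    return waker;
    }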
557 * If CPU has SMT, any wholly idle CPU is likely a better pick than in scx_select_cpu_dfl()
558 * partially idle @prev_cpu. in scx_select_cpu_dfl()
562 * Keep using @prev_cpu if it's part of a fully idle core. in scx_select_cpu_dfl()
572 * Search for any fully idle core in the same LLC domain. in scx_select_cpu_dfl()
581 * Search for any fully idle core in the same NUMA node. in scx_select_cpu_dfl()
590 * Search for any full-idle core usable by the task. in scx_select_cpu_dfl()
592 * If the node-aware idle CPU selection policy is enabled in scx_select_cpu_dfl()
602 * Give up if we're strictly looking for a full-idle SMT in scx_select_cpu_dfl()
612 * Use @prev_cpu if it's idle. in scx_select_cpu_dfl()
620 * Search for any idle CPU in the same LLC domain. in scx_select_cpu_dfl()
629 * Search for any idle CPU in the same NUMA node. in scx_select_cpu_dfl()
638 * Search for any idle CPU usable by the task. in scx_select_cpu_dfl()
640 * If the node-aware idle CPU selection policy is enabled in scx_select_cpu_dfl()
656 * Initialize global and per-node idle cpumasks.
662 /* Allocate global idle cpumasks */ in scx_idle_init_masks()
666 /* Allocate per-node idle cpumasks */ in scx_idle_init_masks()
680 /* Allocate local per-cpu idle cpumasks */ in scx_idle_init_masks()
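A sketch of the allocations those three comments cover: the global pair, then a pair per NUMA node allocated node-locally, then the per-CPU scratch masks (identifiers follow the sketches above and are illustrative):

    static DEFINE_PER_CPU(cpumask_var_t, local_idle_mask);

    static int idle_init_masks(void)
    {
            int node, cpu;

            /* Global host-wide masks. */
            BUG_ON(!alloc_cpumask_var(&global_masks.cpu, GFP_KERNEL));
            BUG_ON(!alloc_cpumask_var(&global_masks.smt, GFP_KERNEL));

            /* Per-node masks, kept node-local. */
            node_masks = kcalloc(num_possible_nodes(), sizeof(*node_masks),
                                 GFP_KERNEL);
            BUG_ON(!node_masks);
            for_each_node(node) {
                    node_masks[node] = kzalloc_node(sizeof(**node_masks),
                                                    GFP_KERNEL, node);
                    BUG_ON(!node_masks[node]);
                    BUG_ON(!alloc_cpumask_var_node(&node_masks[node]->cpu,
                                                   GFP_KERNEL, node));
                    BUG_ON(!alloc_cpumask_var_node(&node_masks[node]->smt,
                                                   GFP_KERNEL, node));
            }

            /* Local per-CPU masks for building temporary idle cpumasks. */
            for_each_possible_cpu(cpu)
                    BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_idle_mask, cpu),
                                                   GFP_KERNEL, cpu_to_node(cpu)));
            return 0;
    }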
691 static void update_builtin_idle(int cpu, bool idle) in update_builtin_idle() argument
696 assign_cpu(cpu, idle_cpus, idle); in update_builtin_idle()
703 if (idle) { in update_builtin_idle()
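A sketch of how the SMT side of update_builtin_idle() can work: the per-CPU bit is assigned directly (the assign_cpu() call shown at 696), while the SMT mask gains a core only when every sibling is idle and loses it as soon as one is not (mask names are illustrative):

    if (sched_smt_active()) {
            const struct cpumask *smt = cpu_smt_mask(cpu);

            if (idle) {
                    /* The core is wholly idle only if all siblings are. */
                    if (cpumask_subset(smt, idle_cpus))
                            cpumask_or(idle_smts, idle_smts, smt);
            } else {
                    cpumask_andnot(idle_smts, idle_smts, smt);
            }
    }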
719 * Update the idle state of a CPU to @idle.
722 * scheduler of an actual idle state transition (idle to busy or vice
723 * versa). If @do_notify is false, only the idle state in the idle masks is
726 * This distinction is necessary, because an idle CPU can be "reserved" and
729 * to idle without a true state transition. Refreshing the idle masks
730 * without invoking ops.update_idle() ensures accurate idle state tracking
734 void __scx_update_idle(struct rq *rq, bool idle, bool do_notify) in __scx_update_idle() argument
742 * Update the idle masks: in __scx_update_idle()
743 * - for real idle transitions (do_notify == true) in __scx_update_idle()
744 * - for idle-to-idle transitions (indicated by the previous task in __scx_update_idle()
745 * being the idle thread, managed by pick_task_idle()) in __scx_update_idle()
747 * Skip updating idle masks if the previous task is not the idle in __scx_update_idle()
749 * transitioning from a task to the idle thread (calling this in __scx_update_idle()
752 * In this way we can avoid updating the idle masks twice, in __scx_update_idle()
757 update_builtin_idle(cpu, idle); in __scx_update_idle()
761 * the idle thread and vice versa. in __scx_update_idle()
763 * Idle transitions are indicated by do_notify being set to true, in __scx_update_idle()
766 * This must come after builtin idle update so that BPF schedulers can in __scx_update_idle()
768 * either enqueue() sees the idle bit or update_idle() sees the task in __scx_update_idle()
772 SCX_CALL_OP(sch, SCX_KF_REST, update_idle, rq, cpu_of(rq), idle); in __scx_update_idle()
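The two gates described above, condensed into a sketch (@sch is the scheduler instance from the surrounding function): mask updates happen for real transitions and for idle-to-idle re-entries, while ops.update_idle() only fires on real transitions:

    /*
     * Real transition, or the previous task was the idle thread (an
     * idle-to-idle re-entry after a "reserved" CPU went unused). The
     * task-to-idle path already refreshed the masks otherwise, so
     * skipping here avoids doing the work twice.
     */
    if (do_notify || is_idle_task(rq->curr))
            update_builtin_idle(cpu, idle);

    /* Notify the BPF scheduler only on true idle <-> busy transitions. */
    if (do_notify)
            SCX_CALL_OP(sch, SCX_KF_REST, update_idle, rq, cpu_of(rq), idle);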
780 * Consider all online cpus idle. Should converge to the actual state in reset_idle_masks()
825 scx_error(sch, "per-node idle tracking is disabled"); in validate_node()
855 scx_error(sch, "built-in idle tracking is disabled"); in check_builtin_idle_enabled()
924 * per-CPU tasks as well. For these tasks, we can skip all idle CPU in select_cpu_from_kfunc()
926 * used CPU is idle and within the allowed cpumask. in select_cpu_from_kfunc()
967 * @is_idle: out parameter indicating whether the returned CPU is idle
975 * currently idle and thus a good candidate for direct dispatching.
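From the BPF side, @is_idle enables the direct-dispatch pattern mentioned at 975. A usage sketch for a scheduler's select_cpu callback (the callback name is made up):

    s32 BPF_STRUCT_OPS(mysched_select_cpu, struct task_struct *p,
                       s32 prev_cpu, u64 wake_flags)
    {
            bool is_idle = false;
            s32 cpu;

            cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
            if (is_idle)
                    /* Idle CPU claimed: dispatch straight to its local DSQ. */
                    scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);

            return cpu;
    }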
999 * scx_bpf_select_cpu_and - Pick an idle CPU usable by task @p,
1014 * Returns the selected idle CPU, which will be automatically awakened upon
1016 * a negative value if no idle CPU is available.
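The cpumask-filtered variant follows the same pattern, restricted to an arbitrary allowed mask and returning a negative value when nothing idle matched (a fragment-level sketch, continuing the callback above):

    cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
    if (cpu >= 0) {
            /* Claimed idle; it will be awakened automatically on dispatch. */
            scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
            return cpu;
    }
    return prev_cpu;        /* no idle CPU: stay put, no direct dispatch */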
1035 * idle-tracking per-CPU cpumask of a target NUMA node.
1038 * Returns an empty cpumask if idle tracking is not enabled, if @node is
1060 * scx_bpf_get_idle_cpumask - Get a referenced kptr to the idle-tracking
1063 * Returns an empty mask if idle tracking is not enabled, or running on a
1089 * idle-tracking, per-physical-core cpumask of a target NUMA node. Can be
1093 * Returns an empty cpumask if idle tracking is not enabled, if @node is
1118 * scx_bpf_get_idle_smtmask - Get a referenced kptr to the idle-tracking,
1122 * Returns an empty mask if idle tracking is not enabled, or running on a
1151 * either the per-CPU or SMT idle-tracking cpumask.
1158 * a reference to a global idle cpumask, which is read-only in the in scx_bpf_put_idle_cpumask()
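These getters return referenced kptrs that must be paired with scx_bpf_put_idle_cpumask(). A sketch counting a node's idle CPUs (the flat variants take no @node argument):

    const struct cpumask *idle;
    int n;

    idle = scx_bpf_get_idle_cpumask_node(node);
    n = bpf_cpumask_weight(idle);   /* 0 on the empty mask when tracking is off */
    scx_bpf_put_idle_cpumask(idle); /* always release the acquired kptr */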
1165 * scx_bpf_test_and_clear_cpu_idle - Test and clear @cpu's idle state
1166 * @cpu: cpu to test and clear idle for
1168 * Returns %true if @cpu was idle and its idle state was successfully cleared.
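Typical use is to atomically claim a candidate before dispatching to it, so two concurrent wakeups can't pick the same idle CPU (sketch):

    if (scx_bpf_test_and_clear_cpu_idle(cpu)) {
            /* @cpu was idle and is now ours; queue there and wake it. */
            scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | cpu, SCX_SLICE_DFL, 0);
            scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
    }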
1194 * scx_bpf_pick_idle_cpu_node - Pick and claim an idle cpu from @node
1199 * Pick and claim an idle cpu in @cpus_allowed from the NUMA node @node.
1201 * Returns the picked idle cpu number on success, or -%EBUSY if no matching
1231 * scx_bpf_pick_idle_cpu - Pick and claim an idle cpu
1235 * Pick and claim an idle cpu in @cpus_allowed. Returns the picked idle cpu
1238 * Idle CPU tracking may race against CPU scheduling state transitions. For
1240 * idle state. If the caller then assumes that there will be dispatch events on
1264 scx_error(sch, "per-node idle tracking is enabled"); in scx_bpf_pick_idle_cpu()
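A usage sketch for the claim-style pickers, assuming SCX_OPS_BUILTIN_IDLE_PER_NODE is set (the flat scx_bpf_pick_idle_cpu() errors in that case, per the check at 1264): prefer a wholly idle core near @node, then settle for any idle CPU:

    cpu = scx_bpf_pick_idle_cpu_node(p->cpus_ptr, node, SCX_PICK_IDLE_CORE);
    if (cpu < 0)
            cpu = scx_bpf_pick_idle_cpu_node(p->cpus_ptr, node, 0);
    if (cpu >= 0)
            /* The CPU's idle bit is already cleared; wake it up. */
            scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);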
1275 * scx_bpf_pick_any_cpu_node - Pick and claim an idle cpu if available
1281 * Pick and claim an idle cpu in @cpus_allowed. If none is available, pick any
1282 * CPU in @cpus_allowed. Guaranteed to succeed and returns the picked idle cpu
1289 * the CPU idle state).
1292 * set, this function can't tell which CPUs are idle and will always pick any
1326 * scx_bpf_pick_any_cpu - Pick and claim an idle cpu if available or pick any CPU
1330 * Pick and claim an idle cpu in @cpus_allowed. If none is available, pick any
1331 * CPU in @cpus_allowed. Guaranteed to succeed and returns the picked idle cpu
1336 * set, this function can't tell which CPUs are idle and will always pick any
1355 scx_error(sch, "per-node idle tracking is enabled"); in scx_bpf_pick_any_cpu()
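The _any_ variants never fail on a non-empty mask, which suits mandatory placement decisions; note the caveat above that with ops.update_idle() in use and builtin tracking off they cannot see idle state and simply pick any allowed CPU. A sketch (flat variant, so per-node tracking is assumed off here, per the check at 1355):

    s32 cpu = scx_bpf_pick_any_cpu(p->cpus_ptr, 0);

    if (cpu >= 0) {
            /* Idle if one was visible, otherwise an arbitrary allowed CPU. */
            scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | cpu, SCX_SLICE_DFL, 0);
            scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
    }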