Lines Matching defs:tasks
75 * Minimal preemption granularity for CPU-bound tasks:
438 * both tasks until we find their ancestors who are siblings of common
1091 * 2) from those tasks that meet 1), we select the one
1248 * Tasks are initialized with full load to be seen as heavy tasks until
1260 * With new tasks being created, their initial util_avgs are extrapolated
1270 * To solve this problem, we also cap the util_avg of successive tasks to
1296 * For !fair tasks do:
1501 * Are we enqueueing a waiting task? (for current tasks
1581 * threshold. Above this threshold, individual tasks may be contending
1583 * approximation as the number of running tasks may not be related to
1591 * tasks that remain local when the destination is lightly loaded.
1603 * calculated based on the task's virtual memory size and
1621 spinlock_t lock; /* nr_tasks, tasks */
1882 * of nodes, and move tasks towards the group with the most
1918 * larger multiplier, in order to group tasks together that are almost
2187 /* The node has spare capacity that can be used to run more tasks. */
2190 * The node is fully used and the tasks don't compete for more CPU
2191 * cycles. Nevertheless, some tasks might wait before running.
2196 * tasks.
2406 * be improved if the source task was migrated to the target dst_cpu taking
2467 * be incurred if the tasks were swapped.
2469 * If dst and source tasks are in the same NUMA group, or not
2475 * Do not swap within a group or between tasks that have
2487 * tasks within a group over tiny differences.
2535 * of tasks and also hurt performance due to cache
2615 * more running tasks, the imbalance is ignored as the
2638 * than swapping tasks around, check if a move is possible.
2677 * imbalance and would be the first to start moving tasks about.
2679 * And we want to avoid any moving of tasks about, as that would create
2680 * random movement of tasks -- counter the numa conditions we're trying
2901 * Most memory accesses are shared with other tasks.
2903 * since other tasks may just move the memory elsewhere.
2994 * tasks from numa_groups near each other in the system, and
3098 * Normalize the faults_from, so all tasks in a group
3606 * Scanning the VMAs of short-lived tasks adds more overhead. So
3701 * Make sure tasks use at least 32x as much time to run other code
3905 * a lot of tasks with the rounding problem between 2 updates of
4202 * on an 8-core system with 8 tasks each runnable on one CPU shares has
4249 * number include things like RT tasks.
4498 * Imagine a rq with 2 tasks that each are runnable 2/3 of the time. Then the
4500 * runnable section of these tasks overlap (or not). If they were to perfectly
4602 * assuming all tasks are equally runnable.
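The matches around fair.c:4498-4602 describe how a runqueue's runnable average depends on whether the tasks' runnable sections overlap. That range follows from inclusion-exclusion, which can be sketched as a toy model (the helper name and the 1/1024 fixed-point units are illustrative, not the kernel's PELT arithmetic):

```c
/* Toy model (not kernel code) of the observation in the comments above:
 * given two tasks with runnable fractions f1 and f2 (in 1/1024 units,
 * like SCHED_CAPACITY_SCALE), the fraction of time the rq has at least
 * one runnable task follows inclusion-exclusion over the overlap of
 * their runnable sections. For two tasks each runnable 2/3 of the time
 * (~683/1024), the result ranges from ~2/3 (full overlap) up to 1
 * (minimal overlap), matching the range the fair.c comment describes.
 */
static long rq_runnable_frac(long f1, long f2, long overlap)
{
	return f1 + f2 - overlap;
}
```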
4804 * avg. The immediate corollary is that all (fair) tasks must be attached.
5007 * tasks cannot exit without having gone through wake_up_new_task() ->
5364 * adding tasks with positive lag, or removing tasks with negative lag
5366 * other tasks.
5474 * When joining the competition, the existing tasks will be,
5475 * on average, halfway through their slice, as such start tasks
5583 * Delayed se of cfs_rq have no tasks queued on them.
5585 * will account it for blocked tasks.
5602 * Delayed se of cfs_rq have no tasks queued on them.
5727 * when there are only lesser-weight tasks around):
6142 /* Re-enqueue the tasks that have been throttled at this level. */
6167 * Kthreads and exiting tasks don't return to userspace, so adding the
7010 * CFS operations on tasks:
7117 /* Runqueue only has SCHED_IDLE tasks enqueued */
7263 * Since new tasks are assigned an initial util_avg equal to
7264 * half of the spare capacity of their CPU, tiny tasks have the
7268 * for the first enqueue operation of new tasks during the
7272 * the PELT signals of tasks to converge before taking them
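The matches at fair.c:1248-1270 and 7263-7272 both concern the initial util_avg given to new tasks: half the spare capacity of their CPU, capped so that successive forks don't overestimate. A minimal integer sketch of that policy (the function name and the explicit cap parameter are illustrative; the kernel's post_init_entity_util_avg() uses its own fixed-point arithmetic):

```c
/* Sketch of the policy described above: a new task starts with a
 * util_avg equal to half of the CPU's spare capacity, clamped by a
 * cap so successive new tasks don't push the estimate too high.
 * Units are 1/1024 of a CPU (SCHED_CAPACITY_SCALE); all names here
 * are illustrative, not the kernel's. */
static unsigned long init_util_avg_sketch(unsigned long cpu_capacity,
					  unsigned long cfs_util,
					  unsigned long cap)
{
	unsigned long spare = cpu_capacity > cfs_util ? cpu_capacity - cfs_util : 0;
	unsigned long util = spare / 2;

	return util < cap ? util : cap;
}
```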
7378 /* balance early to pull high priority tasks */
7454 * The load of a CPU is defined by the load of tasks currently enqueued on that
7455 * CPU as well as tasks which are currently sleeping after an execution on that
7511 * Only decay a single time; tasks that have less than 1 wakeup per
7573 * interrupt intensive workload could force all tasks onto one
8106 * kworker thread and the task's previous CPUs are the same.
8191 * cpu_util() - Estimates the amount of CPU capacity used by CFS tasks.
8200 * CPU utilization is the sum of running time of runnable tasks plus the
8201 * recent utilization of currently non-runnable tasks on that CPU.
8202 * It represents the amount of CPU capacity currently used by CFS tasks in
8208 * runnable tasks on that CPU. It preserves a utilization "snapshot" of
8209 * previously-executed tasks, which helps better deduce how busy a CPU will
8214 * CPU contention for CFS tasks can be detected by CPU runnable > CPU
8221 * of rounding errors as well as task migrations or wakeups of new tasks.
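The block at fair.c:8191-8221 documents cpu_util(): utilization is the sum of running time of runnable tasks plus recent utilization of non-runnable ones, while util_est preserves a snapshot of previously-executed tasks; the effective value is, roughly, the larger of the two, clamped to the CPU's capacity. A toy version of that combination (illustrative only; the real cpu_util() also handles boosting and the migration/rounding corner cases the comment mentions):

```c
/* Hedged sketch of the idea behind cpu_util() as described above:
 * combine the running average (util_avg) with the "snapshot" estimate
 * (util_est) by taking the max, then clamp to the CPU's capacity.
 * Parameter names mirror the concepts in fair.c, but this is a toy
 * model, not the kernel implementation. */
static unsigned long cpu_util_sketch(unsigned long util_avg,
				     unsigned long util_est,
				     unsigned long capacity)
{
	unsigned long util = util_avg > util_est ? util_avg : util_est;

	return util < capacity ? util : capacity;
}
```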
8311 * The utilization of a CPU is defined by the utilization of tasks currently
8312 * enqueued on that CPU as well as tasks which are currently sleeping after an
8389 * Because the time spent on RT/DL tasks is visible as 'lost' time to
8390 * CFS tasks and we use the same metric to track the effective
8523 * NOTE: in case RT tasks are running, by default the min
8590 * small tasks on a CPU in order to let other CPUs go in deeper idle states,
8606 * bias new tasks towards specific types of CPUs first, or to try to infer
8985 * multiple tasks must be woken up.
9005 * WF_RQ_SELECTED implies the tasks are stacking on a CPU when they
9091 * BATCH and IDLE tasks do not preempt others.
9110 * preempt for tasks that are sched_delayed as it would violate
9340 * ineligible tasks rather than force idling. If this happens we may
9400 * We then move tasks around to minimize the imbalance. In the continuous
9429 * Coupled with a limit on how many tasks we can migrate every balance pass,
9497 /* The group has spare capacity that can be used to run more tasks. */
9500 * The group is fully used and the tasks don't compete for more CPU
9501 * cycles. Nevertheless, some tasks might wait before running.
9521 * The tasks' affinity constraints previously prevented the scheduler
9527 * tasks.
9569 struct list_head tasks;
9713 * We do not migrate tasks that are:
9728 * We want to prioritize the migration of eligible tasks.
9729 * For ineligible tasks we soft-limit them and only allow
9754 * meet load balance goals by pulling other tasks on src_cpu.
9863 * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
9866 * Returns number of detached tasks if successful and 0 otherwise.
9870 struct list_head *tasks = &env->src_rq->cfs_tasks;
9889 while (!list_empty(tasks)) {
9902 /* take a breather every nr_migrate tasks */
9909 p = list_last_entry(tasks, struct task_struct, se.group_node);
9917 * Depending on the number of CPUs and tasks and the
9921 * detaching up to loop_max tasks.
9964 list_add(&p->se.group_node, &env->tasks);
9980 * load/util/tasks.
9990 list_move(&p->se.group_node, tasks);
10004 * attach_tasks() -- attaches all tasks detached by detach_tasks() to their
10009 struct list_head *tasks = &env->tasks;
10016 while (!list_empty(tasks)) {
10017 p = list_first_entry(tasks, struct task_struct, se.group_node);
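The matches at fair.c:9863-10017 show the actual detach_tasks()/attach_tasks() machinery: tasks are taken from the tail of the source rq's cfs_tasks list (list_last_entry()), collected on env->tasks, and re-attached one by one at the destination. A self-contained toy of that intrusive-list move pattern (minimal list primitives stand in for <linux/list.h>; no locking, no load/util accounting, and all names are illustrative):

```c
#include <stddef.h>

/* Minimal intrusive doubly-linked list, standing in for <linux/list.h>. */
struct list_head { struct list_head *prev, *next; };

static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_del_node(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

static void list_add_node(struct list_head *n, struct list_head *h)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

struct task { int pid; struct list_head node; };

#define task_of(ptr) \
	((struct task *)((char *)(ptr) - offsetof(struct task, node)))

/* Like detach_tasks(): pull up to max_nr tasks off the tail of src
 * (the list_last_entry() side) and collect them on dst. Returns the
 * number of tasks moved. */
static int detach_tasks_sketch(struct list_head *src, struct list_head *dst,
			       int max_nr)
{
	int detached = 0;

	while (!list_empty(src) && detached < max_nr) {
		struct list_head *last = src->prev;	/* tail == last entry */

		list_del_node(last);
		list_add_node(last, dst);
		detached++;
	}
	return detached;
}

/* Demo: three tasks queued on src; move two of them. Encodes the
 * result as detached*100 + first-dst-pid*10 + first-remaining-src-pid. */
static int demo_detach(void)
{
	struct list_head src, dst;
	struct task t[3] = { { .pid = 1 }, { .pid = 2 }, { .pid = 3 } };
	int i, n;

	INIT_LIST_HEAD(&src);
	INIT_LIST_HEAD(&dst);
	for (i = 0; i < 3; i++)
		list_add_node(&t[i].node, &src);	/* src: 3, 2, 1 */

	n = detach_tasks_sketch(&src, &dst, 2);
	return n * 100 + task_of(dst.next)->pid * 10 + task_of(src.next)->pid;
}
```

With three tasks queued and max_nr = 2, the two tail entries (pids 1 and 2) end up on dst while pid 3 remains on src, mirroring how attach_tasks() later walks env->tasks with list_first_entry().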
10232 unsigned int sum_nr_running; /* Nr of all tasks running in the group */
10233 unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
10403 * Imagine a situation of two groups of 4 CPUs each and 4 tasks each with a
10410 * If we were to balance group-wise we'd place two tasks in the first group and
10411 * two tasks in the second group. Clearly this is undesired as it will overload
10416 * moving tasks due to affinity constraints.
10435 * be used by some tasks.
10438 * available capacity for CFS tasks.
10440 * account the variance of the tasks' load and to return true if the available
10463 * group_is_overloaded returns true if the group has more tasks than it can
10466 * with the exact right number of tasks, has no more spare capacity but is not
10597 * to a CPU that doesn't have multiple tasks sharing its CPU capacity.
10774 * Don't try to pull misfit tasks we can't help.
10834 * group because tasks have all compute capacity that they need
10873 * and highest number of running tasks. We could also compare
10876 * CPUs which means less opportunity to pull tasks.
10889 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
11141 /* There is no idlest group to push tasks to */
11186 * idlest group don't try and push any tasks.
11226 * and improve locality if the number of running tasks
11377 * Indicate that the child domain of the busiest group prefers tasks
11416 /* Set imbalance to allow misfit tasks to be balanced. */
11441 /* Reduce number of tasks sharing CPU capacity */
11495 * When prefer sibling, evenly spread running tasks on
11520 /* Number of tasks to move to restore balance */
11541 * busiest group don't try to pull any tasks.
11553 * load, don't try to pull any tasks.
11622 /* There is no busy sibling group to pull tasks from */
11628 /* Misfit tasks should be dealt with regardless of the avg load */
11651 * don't try and pull any tasks.
11658 * between tasks.
11663 * busiest group don't try to pull any tasks.
11673 * Don't pull any tasks if this group is already above the
11689 * Try to move all excess tasks to a sibling domain of the busiest
11728 * busiest doesn't have any tasks waiting to run
11765 * - regular: there are !numa tasks
11766 * - remote: there are numa tasks that run on the 'wrong' node
11769 * In order to avoid migrating ideally placed numa tasks,
11774 * queue by moving tasks around inside the node.
11778 * allow migration of more tasks.
11803 * Make sure we only pull tasks from a CPU of lower priority
11870 * For ASYM_CPUCAPACITY domains with misfit tasks we
11887 * Max backoff if we encounter pinned tasks. Pretty arbitrary value, but
11896 * ASYM_PACKING needs to force migrate tasks from busy but lower
11897 * priority CPUs in order to pack all tasks in the highest priority
11916 * The imbalanced case includes the case of pinned tasks preventing a fair
11975 * However, we bail out if we already have tasks or a wakeup pending,
12063 * tasks if there is an imbalance.
12084 .tasks = LIST_HEAD_INIT(env.tasks),
12130 * Attempt to move tasks. If sched_balance_find_src_group has found
12148 * We've detached some tasks from busiest_rq. Every
12151 * that nobody can manipulate the tasks in parallel.
12170 * Revisit (affine) tasks on src_cpu that couldn't be moved to
12216 /* All tasks on this runqueue were pinned by CPU affinity */
12303 * constraints. Clear the imbalance flag only if other tasks got
12315 * We reach balance because all tasks are pinned at this level so
12391 * running tasks off the busiest CPU onto idle CPUs. It requires at
12408 * CPUs can become inactive. We should not move tasks from or to
12593 * state even if we migrated tasks. Update it.
12770 * currently idle; in which case, kick the ILB to move tasks
12798 * ensure tasks have enough CPU capacity.
12814 * like this LLC domain has tasks we could move.
12954 * task movement depending on flags.
13138 * idle. Attempts to pull tasks from other CPUs.
13141 * < 0 - we released the lock and there are !fair tasks present
13142 * 0 - failed, no new tasks
13143 * > 0 - success, new (fair) tasks present
13172 * Do not pull tasks towards !active CPUs...
13250 * Stop searching for tasks to pull if there are
13251 * now runnable tasks on this rq.
13377 * tasks on this CPU and the forced idle CPU. Ideally, we should
13380 * MIN_NR_TASKS_DURING_FORCEIDLE - 1 tasks and use that to check
13389 * Consider any infeasible weight scenario. Take for instance two tasks,
13397 * tasks receives:
13491 * And that gives us the tools to compare tasks across a combined runqueue.
13499 * b) when comparing tasks between 2 runqueues of which one is forced-idle,
13594 * Find an se in the hierarchy for tasks a and b, such that the se's
14159 * Time slice is 0 for SCHED_OTHER tasks that are on an otherwise