Lines Matching full:utilization
101 * The margin used when comparing utilization with CPU capacity.
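A minimal userspace sketch of that headroom check is shown below; the 1280/1024 ratio mirrors the fits_capacity() comparison in kernel/sched/fair.c, but treat the exact constant and the helper name as assumptions of this illustration rather than the kernel's definitive interface.

static inline int fits_capacity_sketch(unsigned long util, unsigned long capacity)
{
	/* True only while util stays below roughly 80% of capacity. */
	return util * 1280 < capacity * 1024;
}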
1067 * only 1/2 of the remaining utilization budget:
3994 * thread is a different class (!fair), nor will the utilization in cfs_rq_util_change()
4281 /* Set new sched_entity's utilization */ in update_tg_cfs_util()
4287 /* Update parent cfs_rq utilization */ in update_tg_cfs_util()
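The two steps above keep the task-group hierarchy consistent: the group entity takes over its group cfs_rq's utilization, and the resulting delta is applied to the parent cfs_rq. The sketch below restates that propagation with simplified types; the struct and function names are illustrative, not the kernel's.

/* Simplified PELT average container; the kernel's struct sched_avg has more fields. */
struct util_avg_sketch {
	unsigned long util_avg;
};

static void update_tg_cfs_util_sketch(struct util_avg_sketch *parent_cfs_rq,
				      struct util_avg_sketch *group_se,
				      const struct util_avg_sketch *group_cfs_rq)
{
	long delta = (long)group_cfs_rq->util_avg - (long)group_se->util_avg;

	if (!delta)
		return;

	/* Set new sched_entity's utilization. */
	group_se->util_avg = group_cfs_rq->util_avg;

	/* Update parent cfs_rq utilization by the same delta. */
	parent_cfs_rq->util_avg += delta;
}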
4429 * Check if we need to update the load and the utilization of a blocked
4437 * If the sched_entity still has non-zero load or utilization, we have to in skip_blocked_update()
4445 * the utilization of the sched_entity: in skip_blocked_update()
4451 * Otherwise, the load and the utilization of the sched_entity are in skip_blocked_update()
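In other words, a blocked entity may only be skipped once both its load and its utilization have fully decayed. A small sketch of that rule, with hypothetical field names, follows.

#include <stdbool.h>

/* Simplified container for an entity's PELT averages (illustrative fields). */
struct pelt_avg_sketch {
	unsigned long load_avg;
	unsigned long util_avg;
};

/* The blocked update may be skipped only when both signals have decayed to zero. */
static bool can_skip_blocked_update(const struct pelt_avg_sketch *avg)
{
	return avg->load_avg == 0 && avg->util_avg == 0;
}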
4849 /* Update root cfs_rq's estimated utilization */ in util_est_enqueue()
4865 /* Update root cfs_rq's estimated utilization */ in util_est_dequeue()
4885 * Skip update of task's estimated utilization when the task has not in util_est_update()
4891 /* Get current estimate of utilization */ in util_est_update()
4901 /* Get utilization at dequeue */ in util_est_update()
4905 * Reset EWMA on utilization increases, the moving average is used only in util_est_update()
4906 * to smooth utilization decreases. in util_est_update()
4914 * Skip update of task's estimated utilization when its members are in util_est_update()
4922 * To avoid overestimation of actual task utilization, skip updates if in util_est_update()
4929 * To avoid underestimation of task utilization, skip updates of EWMA if in util_est_update()
4937 * Update Task's estimated utilization in util_est_update()
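The comments above describe the util_est policy: take increases immediately and only smooth decreases with a slow moving average. A minimal sketch of that update is given below, assuming a 1/4 EWMA weight (the kernel's UTIL_EST_WEIGHT_SHIFT of 2); the function name and the plain-integer interface are illustrative.

#define UTIL_EST_WEIGHT_SHIFT_SKETCH	2

static unsigned long util_est_update_sketch(unsigned long ewma,
					    unsigned long dequeued)
{
	long delta = (long)dequeued - (long)ewma;

	/* Reset EWMA on utilization increases ... */
	if (delta > 0)
		return dequeued;

	/* ... the moving average only smooths utilization decreases. */
	return ewma + delta / (1 << UTIL_EST_WEIGHT_SHIFT_SKETCH);
}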
5097 * include the utilization but also the performance hints. in task_fits_cpu()
6831 /* Return true only if the utilization doesn't fit the CPU's capacity */ in cpu_overutilized()
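Taken together, the two checks above combine raw utilization with the uclamp performance hints before comparing against capacity, and flag a CPU as overutilized when that comparison fails. The sketch below illustrates the idea with the same 1280/1024 headroom as earlier; the helper names and flat parameter lists are assumptions, and the kernel's util_fits_cpu() handles more corner cases.

#include <stdbool.h>

/* Clamp a task's utilization by its performance hints. */
static unsigned long clamp_util_sketch(unsigned long util,
				       unsigned long uclamp_min,
				       unsigned long uclamp_max)
{
	if (util < uclamp_min)
		util = uclamp_min;
	if (util > uclamp_max)
		util = uclamp_max;
	return util;
}

/* A task fits a CPU when the clamped utilization leaves ~20% headroom. */
static bool task_fits_cpu_sketch(unsigned long util, unsigned long uclamp_min,
				 unsigned long uclamp_max, unsigned long capacity)
{
	return clamp_util_sketch(util, uclamp_min, uclamp_max) * 1280 <
	       capacity * 1024;
}

/* A CPU counts as overutilized once its utilization no longer fits. */
static bool cpu_overutilized_sketch(unsigned long cpu_util,
				    unsigned long capacity)
{
	return !(cpu_util * 1280 < capacity * 1024);
}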
6929 * the cfs_rq utilization to select a frequency. in enqueue_task_fair()
6930 * Let's add the task's estimated utilization to the cfs_rq's in enqueue_task_fair()
6931 * estimated utilization, before we update schedutil. in enqueue_task_fair()
6943 * utilization updates, so do it here explicitly with the IOWAIT flag in enqueue_task_fair()
7806 * which include the utilization and the performance hints. in asym_fits_cpu()
7824 * On asymmetric systems, update task utilization because we will check in select_idle_sibling()
7945 * @cpu: the CPU to get the utilization for
7946 * @p: task for which the CPU utilization should be predicted or NULL
7951 * so that CPU utilization can be compared with CPU capacity.
7953 * CPU utilization is the sum of running time of runnable tasks plus the
7954 * recent utilization of currently non-runnable tasks on that CPU.
7959 * The estimated CPU utilization is defined as the maximum between CPU
7960 * utilization and the sum of the estimated utilization of the currently
7961 * runnable tasks on that CPU. It preserves a utilization "snapshot" of
7963 * be when a long-sleeping task wakes up. The contribution to CPU utilization
7966 * Boosted CPU utilization is defined as max(CPU runnable, CPU utilization).
7968 * utilization. Boosting is implemented in cpu_util() so that internal
7972 * CPU utilization can be higher than the current CPU capacity
7975 * CPU utilization has to be capped to fit into the [0..max CPU capacity]
7978 * capacity. CPU utilization is allowed to overshoot current CPU capacity
7982 * Return: (Boosted) (estimated) utilization for the specified CPU.
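The documentation block above boils down to one composition: take the PELT utilization, optionally boost it with the runnable signal, take the maximum with the estimated utilization of the runnable tasks, and cap the result at the CPU's capacity. A sketch of that composition, with the three signals passed as plain numbers (an assumption of this illustration), follows.

static unsigned long cpu_util_sketch(unsigned long util_avg,
				     unsigned long runnable_avg,
				     unsigned long util_est,
				     unsigned long capacity,
				     int boost)
{
	unsigned long util = util_avg;

	if (boost && runnable_avg > util)
		util = runnable_avg;	/* boosted: max(CPU runnable, CPU utilization) */

	if (util_est > util)
		util = util_est;	/* estimated: max(utilization, util_est) */

	return util < capacity ? util : capacity;	/* cap to [0..capacity] */
}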
8060 * cpu_util_without: compute cpu utilization without any contributions from *p
8061 * @cpu: the CPU whose utilization is requested
8062 * @p: the task whose utilization should be discounted
8064 * The utilization of a CPU is defined by the utilization of tasks currently
8068 * This method returns the utilization of the specified CPU by discounting the
8069 * utilization of the specified task, whenever the task is currently
8070 * contributing to the CPU utilization.
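A compact restatement of that discounting: subtract the task's own contribution from the CPU's utilization sum, but only while the task is actually contributing, and never let the result underflow. The flag-based interface below is illustrative.

static unsigned long cpu_util_without_sketch(unsigned long cpu_util,
					     unsigned long task_util,
					     int task_contributes)
{
	if (task_contributes)
		cpu_util -= task_util < cpu_util ? task_util : cpu_util;

	return cpu_util;
}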
8082 * This function computes an effective utilization for the given CPU, to be
8093 * The cfs, rt, dl utilization are the running times measured with rq->clock_task
8095 * in the IRQ utilization.
8098 * based on the task model parameters and gives the minimal utilization
8126 * The minimum utilization returns the highest level between: in effective_cpu_util()
8144 * utilization (PELT windows are synchronized) we can directly add them in effective_cpu_util()
8145 * to obtain the CPU's actual utilization. in effective_cpu_util()
8152 * than the actual utilization because of uclamp_max requirements. in effective_cpu_util()
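Because the cfs, rt and dl signals are tracked with the same clock, they can be summed directly; IRQ time is then folded in by scaling that sum by the non-IRQ fraction of the CPU. The sketch below models that composition only; it leaves out the uclamp/bandwidth minimum handling that effective_cpu_util() also performs, and its parameter list is an assumption.

static unsigned long effective_cpu_util_sketch(unsigned long util_cfs,
					       unsigned long util_rt,
					       unsigned long util_dl,
					       unsigned long util_irq,
					       unsigned long capacity)
{
	unsigned long util;

	if (util_irq >= capacity)
		return capacity;	/* IRQ alone already saturates the CPU */

	/* Same PELT clock, so the class utilizations add up directly. */
	util = util_cfs + util_rt + util_dl;
	if (util >= capacity)
		return capacity;

	/* Scale by the non-IRQ fraction of the CPU, then add the IRQ part. */
	util = util * (capacity - util_irq) / capacity + util_irq;

	return util < capacity ? util : capacity;
}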
8181 * energy_env - Utilization landscape for energy estimation.
8182 * @task_busy_time: Utilization contribution by the task for which we test the
8184 * @pd_busy_time: Utilization of the whole perf domain without the task
8218 * utilization for each @pd_cpus; it doesn't, however, take into account
8219 * clamping since the ratio (utilization / cpu_capacity) is already enough to
8227 * - A stable PD utilization, no matter which CPU of that PD we want to place
8231 * will always be the same no matter which CPU utilization we rely on
8254 * Compute the maximum utilization for compute_energy() when the task @p
8257 * Returns the maximum utilization among @eenv->cpus. This utilization can't
8273 * Performance domain frequency: utilization clamping in eenv_pd_max_util()
8277 * utilization can be max OPP. in eenv_pd_max_util()
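Frequency selection within a performance domain is driven by its busiest CPU: the maximum clamped utilization across the PD's CPUs, which the performance hints may raise all the way to the capacity (max OPP). The array-based sketch below is an illustration of that step, not the kernel's eenv_pd_max_util() interface.

static unsigned long eenv_pd_max_util_sketch(const unsigned long *cpu_util,
					     int nr_cpus,
					     unsigned long uclamp_min,
					     unsigned long uclamp_max,
					     unsigned long capacity)
{
	unsigned long max_util = 0;
	int i;

	for (i = 0; i < nr_cpus; i++) {
		unsigned long util = cpu_util[i];

		/* Performance hints may raise or cap the frequency request. */
		if (util < uclamp_min)
			util = uclamp_min;
		if (util > uclamp_max)
			util = uclamp_max;

		if (util > max_util)
			max_util = util;
	}

	/* The clamped maximum may reach the CPU capacity, i.e. max OPP. */
	return max_util < capacity ? max_util : capacity;
}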
8304 * consume for a given utilization landscape @eenv. When @dst_cpu < 0, the task
8338 * frequency requests follow utilization (e.g. using schedutil), the CPU with
8355 * they don't have any useful utilization data yet and it's not possible to
8493 /* CPU utilization has changed */ in find_energy_efficient_cpu()
8517 /* CPU utilization has changed */ in find_energy_efficient_cpu()
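The energy estimate that find_energy_efficient_cpu() compares between candidate placements roughly follows the Energy Model's formula: the peak utilization in the domain selects an operating point (hence a cost), and the domain's total busy time scales how long that cost is paid. The cost table, field names and helper below are hypothetical; the real computation lives in em_cpu_energy().

struct perf_state_sketch {
	unsigned long util_ceiling;	/* highest utilization this OPP can serve */
	unsigned long cost;		/* power * max_capacity / frequency at this OPP */
};

static unsigned long compute_energy_sketch(const struct perf_state_sketch *table,
					   int nr_states,
					   unsigned long max_util,
					   unsigned long busy_time,
					   unsigned long scale_cpu)
{
	int i;

	/* Pick the lowest OPP whose capacity covers the peak utilization. */
	for (i = 0; i < nr_states - 1; i++) {
		if (table[i].util_ceiling >= max_util)
			break;
	}

	/* Energy ~ cost of the selected OPP, weighted by the domain busy time. */
	return table[i].cost * busy_time / scale_cpu;
}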
9903 unsigned long group_util; /* Total utilization over the CPUs of the group */
10109 * smaller than the number of CPUs or if the utilization is lower than the
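That "has capacity" notion is what classifies a group during load balancing: spare capacity exists when the group runs fewer tasks than it has CPUs, or when its total utilization stays below its capacity by the imbalance percentage. The sketch below states that rule with illustrative names; the kernel's group_has_capacity() also considers the runnable signal.

#include <stdbool.h>

static bool group_has_capacity_sketch(unsigned int nr_running,
				      unsigned int nr_cpus,
				      unsigned long group_util,
				      unsigned long group_capacity,
				      unsigned int imbalance_pct)	/* e.g. 117 */
{
	if (nr_running < nr_cpus)
		return true;

	/* Otherwise require utilization below capacity by the imbalance margin. */
	return group_capacity * 100 > group_util * imbalance_pct;
}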
10917 * compare the utilization which is more stable but it can end in sched_balance_find_dst_group()
11059 /* Update over-utilization (tipping point, U >= 0) indicator */ in update_sd_lb_stats()
11146 * In some cases, the group's utilization is max or even in calculate_imbalance()
11515 * Don't try to pull utilization from a CPU with one in sched_balance_find_src_rq()
11516 * running task. Whatever its utilization, we will fail in sched_balance_find_src_rq()
12423 * increase the overall cache utilization), we need a less-loaded LLC in nohz_balancer_kick()