Lines Matching refs:workers

67 	 * While associated (!DISASSOCIATED), all workers are bound to the
71 * While DISASSOCIATED, the cpu may be offline and all workers have
84 POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
124 * Rescue workers are used only on emergencies and shared by
215 int nr_workers; /* L: total number of workers */
216 int nr_idle; /* L: currently idle workers */
218 struct list_head idle_list; /* L: list of idle workers */
222 struct timer_list mayday_timer; /* L: SOS timer for workers */
224 * a worker is either on busy_hash or idle_list, or the manager */
226 /* L: hash of busy workers */
229 struct list_head workers; /* A: attached workers */
256 PWQ_STAT_REPATRIATED, /* unbound workers brought back into scope */
590 * for_each_pool_worker - iterate through all workers of a worker_pool
592 * @pool: worker_pool to iterate workers of
600 list_for_each_entry((worker), &(pool)->workers, node) \
944 * running workers.
946 * Note that, because unbound workers never contribute to nr_running, this
955 /* Can I start working? Called from busy but !running workers. */
961 /* Do I need to keep working? Called from currently running workers. */
973 /* Do we have too many workers and should some go away? */
1220 * A single work shouldn't be executed concurrently by multiple workers.
1223 * @work is not executed concurrently by multiple workers from the same
1295 * now. If this becomes pronounced, we can skip over workers which are
1460 * workers, also reach here, let's not access anything before
1553 * to sleep. It's used by psi to identify aggregation workers during
2722 * details. BH workers are, while per-CPU, always DISASSOCIATED.
2734 list_add_tail(&worker->node, &pool->workers);
2940 * idle_worker_timeout - check if some idle workers can now be deleted.
2978 * idle_cull_fn - cull workers that have been idle for too long.
2979 * @work: the pool's work for handling these idle workers
2981 * This goes through a pool's idle workers and gets rid of those that have been
3193 * interaction with other workers on the same cpu, queueing and
3258 * workers such as the UNBOUND and CPU_INTENSIVE ones.
3403 * The worker thread function. All workers belong to a worker_pool -
3404 * either a per-cpu one or dynamic unbound one. These workers process all
3498 * The pool has idle workers and doesn't need the rescuer, so it
3502 * In a PERCPU pool with concurrency enabled, having idle workers
3504 * simply mean regular workers have woken up, completed their
3507 * In this case, those working workers may later sleep again,
3508 * the pool may run out of idle workers, and it will have to
3581 * pwq(s) queued. This can happen by non-rescuer workers consuming
3630 * Leave this pool. Notify regular workers; otherwise, we end up
3716 bh_worker(list_first_entry(&pool->workers, struct worker, node));
3743 bh_worker(list_first_entry(&pool->workers, struct worker, node));
4890 INIT_LIST_HEAD(&pool->workers);
5061 * Become the manager and destroy all workers. This prevents
5062 * @pool's workers from blocking on attach_mutex. We're the last
5536 * with a cpumask spanning multiple pods, the workers which were already
6550 pr_cont(" hung=%lus workers=%d", hung, pool->nr_workers);
6683 * We've blocked all attach/detach operations. Make all workers
6684 * unbound and set DISASSOCIATED. Before this, all workers
6701 * are served by workers tied to the pool.
6722 * rebind_workers - rebind all workers of a pool to the associated CPU
6725 * @pool->cpu is coming online. Rebind all workers to the CPU.
6734 * Restore CPU affinity of all workers. As all idle workers should
6737 * of all workers first and then clear UNBOUND. As we're called
6778 * restore_unbound_workers_cpumask - restore cpumask of unbound workers
6785 * online CPU before, cpus_allowed of all its workers should be restored.
6866 /* unbinding per-cpu workers should happen on the local CPU */
7194 * nice RW int : nice value of the workers
7195 * cpumask RW mask : bitmask of allowed CPUs for the workers
7663 * Show workers that might prevent the processing of pending work items.
7698 pr_info("Showing backtraces of busy workers in stalled worker pools:\n");
8096 * workers and enable future kworker creations.
8128 * Create the initial workers. A BH pool has one pseudo worker that
8130 * affected by hotplug events. Create the BH pseudo workers for all