Lines matching full:workers (whole-word matches for "workers" in kernel/workqueue.c; each entry is prefixed with its source line number)
66 * While associated (!DISASSOCIATED), all workers are bound to the
70 * While DISASSOCIATED, the cpu may be offline and all workers have
83 POOL_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
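
The DISASSOCIATED comments at 66-83 describe a per-pool state flag: while clear, workers are bound to the pool's CPU and concurrency-managed; once set (e.g. across CPU hot-unplug), workers run as if unbound. A minimal sketch of the test those comments imply; pool_is_associated() is a hypothetical helper for illustration, not a workqueue.c function:

    /* Hypothetical helper: a pool is "associated" while the flag is clear.
     * Per the L:/A: locking annotations elsewhere in this file, callers
     * would hold pool->lock when reading pool->flags. */
    static bool pool_is_associated(struct worker_pool *pool)
    {
            return !(pool->flags & POOL_DISASSOCIATED);
    }
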
121 * Rescue workers are used only in emergencies and shared by
204 int nr_workers; /* L: total number of workers */
205 int nr_idle; /* L: currently idle workers */
207 struct list_head idle_list; /* L: list of idle workers */
211 struct timer_list mayday_timer; /* L: SOS timer for workers */
213 /* a worker is either on busy_hash or idle_list, or the manager */
215 /* L: hash of busy workers */
218 struct list_head workers; /* A: attached workers */
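
Pieced together from the member lines above (204-218), the worker-tracking part of struct worker_pool looks roughly like this. A sketch: the elided members and exact layout are guesses, and the L:/A: prefixes name the lock that guards each field:

    struct worker_pool {
            ...
            int                     nr_workers;     /* L: total number of workers */
            int                     nr_idle;        /* L: currently idle workers */
            struct list_head        idle_list;      /* L: list of idle workers */
            struct timer_list       mayday_timer;   /* L: SOS timer for workers */

            /* a worker is either on busy_hash or idle_list, or the manager */
            DECLARE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
                                                    /* L: hash of busy workers */

            struct list_head        workers;        /* A: attached workers */
            ...
    };
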
245 PWQ_STAT_REPATRIATED, /* unbound workers brought back into scope */
578 * for_each_pool_worker - iterate through all workers of a worker_pool
580 * @pool: worker_pool to iterate workers of
588 list_for_each_entry((worker), &(pool)->workers, node) \
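
Since pool->workers is marked A: (attach-mutex protected), iteration with this macro belongs under wq_pool_attach_mutex. A hedged usage sketch; the pr_info() body is illustrative only, but pool->id, worker->task and worker->node match the fields this file uses:

    struct worker *worker;

    mutex_lock(&wq_pool_attach_mutex);
    for_each_pool_worker(worker, pool)
            pr_info("pool %d: worker %s\n", pool->id, worker->task->comm);
    mutex_unlock(&wq_pool_attach_mutex);
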
932 * running workers.
934 * Note that, because unbound workers never contribute to nr_running, this
943 /* Can I start working? Called from busy but !running workers. */
949 /* Do I need to keep working? Called from currently running workers. */
961 /* Do we have too many workers and should some go away? */
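
The three questions at 943-961 drive concurrency management: workers start, keep working, or retire based on pool counters. The "too many" test, as far as I recall it, tolerates two idle workers plus a fixed idle:busy ratio; a sketch reconstructed from memory of too_many_workers(), not verbatim:

    #define MAX_IDLE_WORKERS_RATIO  4       /* 1/4 of busy may sit idle */

    static bool too_many_workers(struct worker_pool *pool)
    {
            bool managing = pool->flags & POOL_MANAGER_ACTIVE;
            int nr_idle = pool->nr_idle + managing; /* manager counts as idle */
            int nr_busy = pool->nr_workers - nr_idle;

            return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO < nr_busy;
    }
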
1192 * A single work shouldn't be executed concurrently by multiple workers. in assign_work()
1195 * @work is not executed concurrently by multiple workers from the same in assign_work()
1267 * now. If this becomes pronounced, we can skip over workers which are in kick_pool()
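
The assign_work() comments at 1192-1195 state the non-reentrancy rule: one work item must not run on two workers of the same pool at once. The shape of that check, sketched from the comments (find_worker_executing_work() and move_linked_works() are real helpers in this file; the body here is an approximation):

    /* Approximate shape of the dedup step inside assign_work() */
    collision = find_worker_executing_work(pool, work);
    if (unlikely(collision)) {
            /* already running in this pool; queue behind that execution */
            move_linked_works(work, &collision->scheduled, nextp);
            return false;
    }
    move_linked_works(work, &worker->scheduled, nextp);
    return true;
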
1432 * workers, also reach here, let's not access anything before in wq_worker_sleeping()
1525 * to sleep. It's used by psi to identify aggregation workers during
2683 * details. BH workers are, while per-CPU, always DISASSOCIATED. in worker_attach_to_pool()
2695 list_add_tail(&worker->node, &pool->workers); in worker_attach_to_pool()
2901 * idle_worker_timeout - check if some idle workers can now be deleted.
2939 * idle_cull_fn - cull workers that have been idle for too long.
2940 * @work: the pool's work for handling these idle workers
2942 * This goes through a pool's idle workers and gets rid of those that have been
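
idle_worker_timeout() and idle_cull_fn() split the job: the timer only checks whether the oldest idle worker has been idle long enough and defers the actual destruction to a work item. A sketch of that expiry test, assuming the usual jiffies pattern and an IDLE_WORKER_TIMEOUT constant (300 * HZ in the kernels I have seen):

    /* Sketch of the expiry test in idle_worker_timeout() */
    struct worker *worker =
            list_last_entry(&pool->idle_list, struct worker, entry);
    unsigned long expires = worker->last_active + IDLE_WORKER_TIMEOUT;

    if (time_before(jiffies, expires))
            mod_timer(&pool->idle_timer, expires);  /* not yet; re-arm */
    else
            queue_work(system_unbound_wq, &pool->idle_cull_work);
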
3155 * interaction with other workers on the same cpu, queueing and
3219 * workers such as the UNBOUND and CPU_INTENSIVE ones. in process_one_work()
3364 * The worker thread function. All workers belong to a worker_pool -
3365 * either a per-cpu one or a dynamic unbound one. These workers process all
3434 * manage, sleep. Workers are woken up only while holding in worker_thread()
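
Lines 3364-3434 describe the core loop: a worker sleeps while idle and, once woken under pool->lock, drains work items until the keep-working check says to stop. A grossly simplified skeleton; the real worker_thread() also handles worker death, pool management and the UNBOUND/REBOUND flags:

    for (;;) {
            raw_spin_lock_irq(&pool->lock);
            while (!list_empty(&pool->worklist) && keep_working(pool)) {
                    struct work_struct *work =
                            list_first_entry(&pool->worklist,
                                             struct work_struct, entry);
                    /* drops and retakes pool->lock around the callback */
                    process_one_work(worker, work);
            }
            worker_enter_idle(worker);
            __set_current_state(TASK_IDLE);
            raw_spin_unlock_irq(&pool->lock);
            schedule();                     /* sleep until kicked */
    }
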
3486 * pwq(s) queued. This can happen by non-rescuer workers consuming in rescuer_thread()
3549 * Leave this pool. Notify regular workers; otherwise, we end up in rescuer_thread()
3635 bh_worker(list_first_entry(&pool->workers, struct worker, node)); in workqueue_softirq_action()
3662 bh_worker(list_first_entry(&pool->workers, struct worker, node)); in drain_dead_softirq_workfn()
4809 INIT_LIST_HEAD(&pool->workers); in init_worker_pool()
4980 * Become the manager and destroy all workers. This prevents in put_unbound_pool()
4981 * @pool's workers from blocking on attach_mutex. We're the last in put_unbound_pool()
5447 * with a cpumask spanning multiple pods, the workers which were already
6416 pr_cont(" hung=%lus workers=%d", hung, pool->nr_workers); in show_one_worker_pool()
6549 * We've blocked all attach/detach operations. Make all workers in unbind_workers()
6550 * unbound and set DISASSOCIATED. Before this, all workers in unbind_workers()
6567 * are served by workers tied to the pool. in unbind_workers()
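
Sketching 6549-6567 as code: with attach/detach blocked, every worker is flagged unbound and the pool is marked disassociated in one critical section. The real unbind_workers() also fixes up nr_running and wakes sleeping workers; a minimal sketch:

    mutex_lock(&wq_pool_attach_mutex);
    raw_spin_lock_irq(&pool->lock);

    for_each_pool_worker(worker, pool)
            worker->flags |= WORKER_UNBOUND;

    pool->flags |= POOL_DISASSOCIATED;

    raw_spin_unlock_irq(&pool->lock);
    mutex_unlock(&wq_pool_attach_mutex);
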
6588 * rebind_workers - rebind all workers of a pool to the associated CPU
6591 * @pool->cpu is coming online. Rebind all workers to the CPU.
6600 * Restore CPU affinity of all workers. As all idle workers should in rebind_workers()
6603 * of all workers first and then clear UNBOUND. As we're called in rebind_workers()
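
The ordering described at 6600-6603 (restore affinity first, clear UNBOUND second) keeps concurrency management off until every worker is back on the CPU. The per-worker step is a set_cpus_allowed_ptr() call under the attach mutex; a sketch matching that description:

    /* Sketch of the affinity-restore loop in rebind_workers() */
    for_each_pool_worker(worker, pool)
            WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
                                              pool->attrs->cpumask) < 0);
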
6644 * restore_unbound_workers_cpumask - restore cpumask of unbound workers
6651 * online CPU before, cpus_allowed of all its workers should be restored.
6732 /* unbinding per-cpu workers should happen on the local CPU */ in workqueue_offline_cpu()
7045 * nice RW int : nice value of the workers
7046 * cpumask RW mask : bitmask of allowed CPUs for the workers
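
These sysfs files mirror the in-kernel workqueue_attrs interface: writing "nice" or "cpumask" ends up applying a new attrs set to the unbound workqueue. A kernel-side sketch of the same operation, assuming the alloc_workqueue_attrs()/apply_workqueue_attrs() pair (their exact signatures have varied across releases) and an existing unbound workqueue wq:

    struct workqueue_attrs *attrs;
    int ret = -ENOMEM;

    attrs = alloc_workqueue_attrs();
    if (attrs) {
            attrs->nice = -5;                       /* the "nice" file */
            cpumask_copy(attrs->cpumask, cpu_online_mask); /* "cpumask" */
            ret = apply_workqueue_attrs(wq, attrs);
            free_workqueue_attrs(attrs);
    }
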
7498 * Show workers that might prevent the processing of pending work items.
7499 * The only candidates are CPU-bound workers in the running state.
7535 pr_info("Showing backtraces of running workers in stalled CPU-bound worker pools:\n"); in show_cpu_pools_hogs()
7900 * workers and enable future kworker creations.
7932 * Create the initial workers. A BH pool has one pseudo worker that in workqueue_init()
7934 * affected by hotplug events. Create the BH pseudo workers for all in workqueue_init()