Lines Matching full:workqueues
22 * pools for workqueues which are not bound to any specific CPU - the
339 struct list_head list; /* PR: list of all workqueues */
376 * the workqueues list without grabbing wq_pool_mutex.
377 * This is used to dump all workqueues from sysrq.
388 * Each pod type describes how CPUs should be grouped for unbound workqueues.
442 static DEFINE_MUTEX(wq_pool_mutex); /* protects pools and workqueues list */
448 static LIST_HEAD(workqueues); /* PR: list of all workqueues */
1300 * workqueues as appropriate. To avoid flooding the console, each violating work
1552 * - %NULL for per-cpu workqueues as they don't need to use shared nr_active.
1809 * This function should only be called for ordered workqueues where only the
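Several of the hits above (1552, 1809) concern ordered workqueues. For context, a minimal sketch of how a caller would typically create one; the names are hypothetical:

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *my_ordered_wq;

	static int my_init(void)
	{
		/*
		 * An ordered workqueue executes at most one work item at a
		 * time, in queueing order; it is built on an unbound queue
		 * with max_active fixed at 1, which is why later hits refuse
		 * to change max_active for it.
		 */
		my_ordered_wq = alloc_ordered_workqueue("my_ordered_wq", 0);
		if (!my_ordered_wq)
			return -ENOMEM;
		return 0;
	}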
1931 * For unbound workqueues, this function may temporarily drop @pwq->pool->lock.
1942 * workqueues. in pwq_dec_nr_active()
1985 * For unbound workqueues, this function may temporarily drop @pwq->pool->lock
2457 * This current implementation is specific to unbound workqueues. in queue_work_node()
3259 * workqueues), so hiding them isn't a problem. in process_one_work()
3367 * exception is work items which belong to workqueues with a rescuer which
3460 * workqueues which have works queued on the pool and let them process
3804 * BH and threaded workqueues need separate lockdep keys to avoid in insert_wq_barrier()
4211 * For single threaded workqueues the deadlock happens when the work in start_flush_work()
4213 * workqueues the deadlock happens when the rescuer stalls, blocking in start_flush_work()
5343 * For initialized ordered workqueues, there should only be one pwq in apply_wqattrs_prepare()
5392 /* only unbound workqueues can change attributes */ in apply_workqueue_attrs_locked()
5449 * may execute on any CPU. This is similar to how per-cpu workqueues behave on
5583 * Workqueues which may be used during memory reclaim should have a rescuer
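The rescuer note at 5583 can be illustrated with a hedged sketch (driver and work names are hypothetical) of a workqueue intended for the memory-reclaim path, allocated with WQ_MEM_RECLAIM so a rescuer thread backs it:

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *my_reclaim_wq;

	static void my_writeback_fn(struct work_struct *work)
	{
		/* push dirty data out; body omitted in this sketch */
	}
	static DECLARE_WORK(my_writeback_work, my_writeback_fn);

	static int my_driver_init(void)
	{
		/*
		 * WQ_MEM_RECLAIM guarantees a rescuer, so queued work can
		 * make forward progress even when new workers cannot be
		 * created because the system is under reclaim.
		 */
		my_reclaim_wq = alloc_workqueue("my_reclaim_wq",
						WQ_MEM_RECLAIM, 0);
		if (!my_reclaim_wq)
			return -ENOMEM;

		queue_work(my_reclaim_wq, &my_writeback_work);
		return 0;
	}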
5733 * BH workqueues always share a single execution context per CPU in __alloc_workqueue()
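The note at 5733 is about BH workqueues. Assuming a kernel recent enough to provide the WQ_BH flag, a minimal sketch of allocating one (names hypothetical); work items on it run in softirq context and must not sleep:

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *my_bh_wq;

	static int my_init(void)
	{
		/*
		 * BH workqueue: one execution context per CPU, softirq
		 * context, no sleeping; max_active is ignored here (see the
		 * hit at 5978).
		 */
		my_bh_wq = alloc_workqueue("my_bh_wq", WQ_BH, 0);
		if (!my_bh_wq)
			return -ENOMEM;
		return 0;
	}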
5763 * wq_pool_mutex protects the workqueues list, allocations of PWQs, in __alloc_workqueue()
5775 list_add_tail_rcu(&wq->list, &workqueues); in __alloc_workqueue()
5978 /* max_active doesn't mean anything for BH workqueues */ in workqueue_set_max_active()
5981 /* disallow meddling with max_active for ordered workqueues */ in workqueue_set_max_active()
6004 * Set min_active of an unbound workqueue. Unlike other types of workqueues, an
6015 /* min_active is only meaningful for non-ordered unbound workqueues */ in workqueue_set_min_active()
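The hits at 5978-6015 are the runtime knobs for concurrency limits. A small sketch (hypothetical names) of raising max_active on an ordinary unbound workqueue; per the surrounding checks, BH and ordered workqueues reject this:

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *my_wq;

	static int my_setup(void)
	{
		/* Unbound, non-ordered queue with an initial max_active of 4. */
		my_wq = alloc_workqueue("my_unbound_wq", WQ_UNBOUND, 4);
		if (!my_wq)
			return -ENOMEM;

		/* Later allow more work items to run concurrently. */
		workqueue_set_max_active(my_wq, 16);
		return 0;
	}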
6068 * With the exception of ordered workqueues, all workqueues have per-cpu
6441 * Called from a sysrq handler and prints out all busy workqueues and pools.
6451 pr_info("Showing busy workqueues and worker pools:\n"); in show_all_workqueues()
6453 list_for_each_entry_rcu(wq, &workqueues, list) in show_all_workqueues()
6465 * Called from try_to_freeze_tasks() and prints out all freezable workqueues
6474 pr_info("Showing freezable workqueues that are still busy:\n"); in show_freezable_workqueues()
6476 list_for_each_entry_rcu(wq, &workqueues, list) { in show_freezable_workqueues()
6707 /* update pod affinity of unbound workqueues */ in workqueue_online_cpu()
6708 list_for_each_entry(wq, &workqueues, list) { in workqueue_online_cpu()
6738 /* update pod affinity of unbound workqueues */ in workqueue_offline_cpu()
6743 list_for_each_entry(wq, &workqueues, list) { in workqueue_offline_cpu()
6806 * freeze_workqueues_begin - begin freezing workqueues
6808 * Start freezing workqueues. After this function returns, all freezable
6809 * workqueues will queue new works to their inactive_works list instead of
6824 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_begin()
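Only workqueues allocated with WQ_FREEZABLE take part in the freeze path above. A hedged sketch (hypothetical names) of such a queue, whose pending work is held back between freeze_workqueues_begin() and thaw_workqueues():

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *my_freezable_wq;

	static int my_init(void)
	{
		/*
		 * WQ_FREEZABLE: during suspend this queue stops executing new
		 * work; items queued while frozen run only after thaw.
		 */
		my_freezable_wq = alloc_workqueue("my_freezable_wq",
						  WQ_FREEZABLE, 0);
		if (!my_freezable_wq)
			return -ENOMEM;
		return 0;
	}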
6834 * freeze_workqueues_busy - are freezable workqueues still busy?
6843 * %true if some freezable workqueues are still busy. %false if freezing
6856 list_for_each_entry(wq, &workqueues, list) { in freeze_workqueues_busy()
6880 * thaw_workqueues - thaw workqueues
6882 * Thaw workqueues. Normal queueing is restored and all collected
6900 list_for_each_entry(wq, &workqueues, list) { in thaw_workqueues()
6920 list_for_each_entry(wq, &workqueues, list) { in workqueue_apply_unbound_cpumask()
7011 list_for_each_entry(wq, &workqueues, list) { in wq_affn_dfl_set()
7036	 * Workqueues with WQ_SYSFS flag set are visible to userland via
7037 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
7043 * Unbound workqueues have the following extra attributes.
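A short sketch (hypothetical queue name) of opting into the sysfs interface described at 7036-7043 by passing WQ_SYSFS:

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *my_visible_wq;

	static int my_init(void)
	{
		/*
		 * WQ_UNBOUND | WQ_SYSFS: the queue appears as
		 * /sys/bus/workqueue/devices/my_visible_wq, and because it is
		 * unbound the extra attributes (nice, cpumask, ...) are
		 * exposed there as well.
		 */
		my_visible_wq = alloc_workqueue("my_visible_wq",
						WQ_UNBOUND | WQ_SYSFS, 0);
		if (!my_visible_wq)
			return -ENOMEM;
		return 0;
	}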
7281 * The low-level workqueues cpumask is a global cpumask that limits
7282	 * the affinity of all unbound workqueues. This function checks the @cpumask
7283	 * and applies it to all unbound workqueues and updates all their pwqs.
7404 * ordered workqueues. in workqueue_sysfs_register()
7741 * up. It sets up all the data structures and system workqueues and allows early
7742 * boot code to create workqueues and queue/cancel work items. Actual work item
7897 * and invoked as soon as kthreads can be created and scheduled. Workqueues have
7914 * up. Also, create a rescuer for workqueues that requested it. in workqueue_init()
7923 list_for_each_entry(wq, &workqueues, list) { in workqueue_init()
8018 * workqueue_init_topology - initialize CPU pods for unbound workqueues
8039 * Workqueues allocated earlier would have all CPUs sharing the default in workqueue_init_topology()
8043 list_for_each_entry(wq, &workqueues, list) { in workqueue_init_topology()
8058 pr_warn("WARNING: Flushing system-wide workqueues will be prohibited in near future.\n"); in __warn_flushing_systemwide_wq()
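The warning at 8058 fires when something flushes a system-wide workqueue. The usual remedy, sketched below with hypothetical names, is to queue work on a dedicated queue and flush that instead of calling flush_scheduled_work() or flushing system_wq:

	#include <linux/errno.h>
	#include <linux/workqueue.h>

	static struct workqueue_struct *my_wq;

	static void my_fn(struct work_struct *work)
	{
		/* body omitted in this sketch */
	}
	static DECLARE_WORK(my_work, my_fn);

	static int my_init(void)
	{
		my_wq = alloc_workqueue("my_wq", 0, 0);
		return my_wq ? 0 : -ENOMEM;
	}

	static void my_sync_point(void)
	{
		/*
		 * Flushing a private queue waits only for our own work items;
		 * flushing a system-wide queue stalls on unrelated users and
		 * now triggers __warn_flushing_systemwide_wq().
		 */
		queue_work(my_wq, &my_work);
		flush_workqueue(my_wq);
	}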