Lines matching full:workqueue in kernel/workqueue.c
3 * kernel/workqueue.c - generic async execution with shared worker pool
25 * Please read Documentation/core-api/workqueue.rst for details.
35 #include <linux/workqueue.h>
237 * tools/workqueue/wq_monitor.py.
253 * The per-pool workqueue. While queued, bits below WORK_PWQ_SHIFT
260 struct workqueue_struct *wq; /* I: the owning workqueue */
303 * Structure used to wait for workqueue flush.
314 * Unlike in a per-cpu workqueue where max_active limits its concurrency level
315 * on each CPU, in an unbound workqueue, max_active applies to the whole system.
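The max_active distinction in the comment above is the main tuning knob when allocating a workqueue. A minimal sketch, assuming only the documented alloc_workqueue() interface; the workqueue names are illustrative:

    #include <linux/workqueue.h>

    static struct workqueue_struct *percpu_wq, *unbound_wq;

    static int example_setup(void)
    {
            /* max_active = 4: up to 4 concurrent work items *per CPU* */
            percpu_wq = alloc_workqueue("example_percpu", 0, 4);

            /* max_active = 4: up to 4 concurrent work items system-wide */
            unbound_wq = alloc_workqueue("example_unbound", WQ_UNBOUND, 4);

            if (!percpu_wq || !unbound_wq)
                    return -ENOMEM;
            return 0;
    }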
334 * The externally visible workqueue. It relays the issued work items to
372 char name[WQ_NAME_LEN]; /* I: workqueue name */
537 #include <trace/events/workqueue.h>
593 * for_each_pwq - iterate through all pool_workqueues of the specified workqueue
595 * @wq: the target workqueue
745 * unbound_effective_cpumask - effective cpumask of an unbound workqueue
746 * @wq: workqueue of interest
1097 * before the original execution finishes, workqueue will identify the
1296 * should be using an unbound workqueue instead.
1345 …printk_deferred(KERN_WARNING "workqueue: %ps hogged CPU for >%luus %llu times, consider switching … in wq_cpu_intensive_report()
1529 * As this function doesn't involve any workqueue-related locking, it
1547 * @wq: workqueue of interest
1572 * @wq: workqueue to update
1723 /* BH or per-cpu workqueue, pwq->nr_active is sufficient */ in pwq_tryinc_nr_active()
1732 * Unbound workqueue uses per-node shared nr_active $nna. If @pwq is in pwq_tryinc_nr_active()
1947 * For a percpu workqueue, it's simple. Just need to kick the first in pwq_dec_nr_active()
1956 * If @pwq is for an unbound workqueue, it's more complicated because in pwq_dec_nr_active()
1982 * decrement nr_in_flight of its pwq and handle workqueue flushing.
2196 * same workqueue.
2223 pr_warn_once("workqueue: round-robin CPU selection forced, expect performance impact\n"); in wq_select_unbound_cpu()
2252 * For a draining wq, only works from the same workqueue are in __queue_work()
2257 WARN_ONCE(!is_chained_work(wq), "workqueue: cannot queue %ps on wq %s\n", in __queue_work()
2279 * For ordered workqueue, work items must be queued on the newest pwq in __queue_work()
2318 WARN_ONCE(true, "workqueue: per-cpu pwq for %s on cpu%d has 0 refcnt", in __queue_work()
2371 * @wq: workqueue to use
2433 * @wq: workqueue to use
2461 * If this is used with a per-cpu workqueue then the logic in in queue_work_node()
2535 * @wq: workqueue to use
2573 * @wq: workqueue to use
2616 * @wq: workqueue to use
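The queueing entry points referenced above (queue_work(), queue_work_node(), queue_delayed_work() and friends) share one contract: a work item is queued only if it is not already pending. A hedged sketch with hypothetical names:

    #include <linux/jiffies.h>
    #include <linux/workqueue.h>

    static void example_fn(struct work_struct *work)
    {
            pr_info("example work ran\n");
    }

    static DECLARE_WORK(example_work, example_fn);
    static DECLARE_DELAYED_WORK(example_dwork, example_fn);

    static void example_queue(struct workqueue_struct *wq)
    {
            /* Returns false if the item was already pending. */
            queue_work(wq, &example_work);

            /* Runs example_fn() after ~100ms via an internal timer. */
            queue_delayed_work(wq, &example_dwork, msecs_to_jiffies(100));

            /* Prefer a worker on or near NUMA node 0 (unbound wq). */
            queue_work_node(0, wq, &example_work);
    }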
2766 * create_worker - create a new workqueue worker
2785 pr_err_once("workqueue: Failed to allocate a worker ID: %pe\n", in create_worker()
2792 pr_err_once("workqueue: Failed to allocate a worker\n"); in create_worker()
2806 pr_err("workqueue: Interrupted when creating a worker thread \"%s\"\n", in create_worker()
2809 pr_err_once("workqueue: Failed to create a worker thread: %pe", in create_worker()
3277 pr_err("BUG: workqueue leaked atomic, lock or RCU: %s[%d]\n" in process_one_work()
3366 * work items regardless of their specific target workqueue. The only
3377 /* tell the scheduler that this is a workqueue worker */ in worker_thread()
3450 * Workqueue rescuer thread function. There's one rescuer for each
3451 * workqueue which has WQ_MEM_RECLAIM set.
3484 * By the time the rescuer is requested to stop, the workqueue in rescuer_thread()
3512 * Slurp in all works issued via this workqueue and in rescuer_thread()
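The rescuer referenced above is what lets a reclaim-path workqueue make forward progress when no new worker thread can be created under memory pressure. A minimal sketch of requesting one, assuming nothing beyond the documented WQ_MEM_RECLAIM flag:

    #include <linux/workqueue.h>

    static struct workqueue_struct *reclaim_wq;

    static int example_reclaim_setup(void)
    {
            /* WQ_MEM_RECLAIM keeps one rescuer thread on standby. */
            reclaim_wq = alloc_workqueue("example_reclaim", WQ_MEM_RECLAIM, 0);
            return reclaim_wq ? 0 : -ENOMEM;
    }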
3620 * TODO: Convert all tasklet users to workqueue and use softirq directly.
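For the TODO above, the replacement pattern is a BH workqueue. A sketch assuming a kernel recent enough to expose system_bh_wq; the handler then runs in softirq context, like a tasklet:

    #include <linux/workqueue.h>

    static void example_bh_fn(struct work_struct *work)
    {
            /* Executes in BH (softirq) context, like a tasklet handler. */
    }
    static DECLARE_WORK(example_bh_work, example_bh_fn);

    static void example_trigger(void)
    {
            queue_work(system_bh_wq, &example_bh_work);
    }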
3719 * @target_wq: workqueue being flushed
3720 * @target_work: work item being flushed (NULL for workqueue flushes)
3727 * on a workqueue which doesn't have %WQ_MEM_RECLAIM as that can break forward-
3744 "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%ps", in check_flush_dependency()
3748 "workqueue: WQ_MEM_RECLAIM %s:%ps is flushing !WQ_MEM_RECLAIM %s:%ps", in check_flush_dependency()
3843 * flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing
3844 * @wq: workqueue being flushed
3848 * Prepare pwqs for workqueue flushing.
3886 * For unbound workqueue, pwqs will map to only a few pools. in flush_workqueue_prep_pwqs()
3961 * @wq: workqueue to flush
4117 * drain_workqueue - drain a workqueue
4118 * @wq: workqueue to drain
4120 * Wait until the workqueue becomes empty. While draining is in progress,
4158 pr_warn("workqueue %s: %s() isn't complete after %u tries\n", in drain_workqueue()
4209 * single-threaded or rescuer equipped workqueue. in start_flush_work()
4244 * was queued on a BH workqueue, we also know that it was running in the in __flush_work()
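flush_work(), flush_workqueue() and drain_workqueue(), all referenced above, differ only in scope. A hedged sketch of the usual quiescing order:

    #include <linux/workqueue.h>

    static void example_quiesce(struct workqueue_struct *wq,
                                struct work_struct *w)
    {
            flush_work(w);          /* wait for this one item only */
            flush_workqueue(wq);    /* wait for everything queued so far */
            drain_workqueue(wq);    /* also forbid requeueing until empty */
    }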
4347 WARN_ONCE(true, "workqueue: work disable count overflowed\n"); in work_offqd_disable()
4355 WARN_ONCE(true, "workqueue: work disable count underflowed\n"); in work_offqd_enable()
4415 * even if the work re-queues itself or migrates to another workqueue. On return
4423 * workqueue. Can also be called from non-hardirq atomic contexts including BH
4424 * if @work was last queued on a BH workqueue.
4497 * workqueue. Can also be called from non-hardirq atomic contexts including BH
4498 * if @work was last queued on a BH workqueue.
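A sketch of the cancellation interfaces the comments above describe. disable_work_sync()/enable_work() are assumed to exist (recent kernels), and the _sync variants may sleep unless @work runs on a BH workqueue:

    #include <linux/workqueue.h>

    static void example_stop(struct work_struct *w)
    {
            cancel_work_sync(w);    /* dequeue and wait for completion */

            disable_work_sync(w);   /* as above, plus block requeueing */
            enable_work(w);         /* lift the disable count again */
    }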
4578 * system workqueue and blocks until all CPUs have completed.
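The helper referenced above, schedule_on_each_cpu(), takes just the work function. A minimal sketch:

    #include <linux/smp.h>
    #include <linux/workqueue.h>

    static void per_cpu_fn(struct work_struct *work)
    {
            pr_info("ran on CPU %d\n", raw_smp_processor_id());
    }

    static int example_all_cpus(void)
    {
            /* Queues on the system wq for each online CPU and waits. */
            return schedule_on_each_cpu(per_cpu_fn);
    }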
4697 * Some attrs fields are workqueue-only. Clear them for worker_pool's. See the
5114 * For ordered workqueue with a plugged dfl_pwq, restart it now. in pwq_release_workfn()
5221 * @attrs: the wq_attrs of the default pwq of the target workqueue
5224 * Calculate the cpumask a workqueue with @attrs should use on @pod.
5267 struct workqueue_struct *wq; /* target workqueue */
5408 * apply_workqueue_attrs - apply new workqueue_attrs to an unbound workqueue
5409 * @wq: the target workqueue
5412 * Apply @attrs to an unbound workqueue @wq. Unless disabled, this function maps
5436 * @wq: the target workqueue
5446 * Note that when the last allowed CPU of a pod goes offline for a workqueue
5448 * executing the work items for the workqueue will lose their CPU affinity and
5450 * CPU_DOWN. If a workqueue user wants strict affinity, it's the user's
5481 pr_warn("workqueue: allocation failed while updating CPU pod affinity of \"%s\"\n", in unbound_wq_update_pwq()
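A sketch of steering an unbound workqueue with workqueue_attrs, assuming the alloc/apply/free trio is reachable from the caller (apply_workqueue_attrs() is not exported to modules in current kernels); CPUs 0-3 are illustrative, and per the comment above the affinity is best-effort across hotplug:

    #include <linux/cpumask.h>
    #include <linux/workqueue.h>

    static int example_pin(struct workqueue_struct *unbound_wq)
    {
            struct workqueue_attrs *attrs;
            int cpu, ret;

            attrs = alloc_workqueue_attrs();
            if (!attrs)
                    return -ENOMEM;

            cpumask_clear(attrs->cpumask);
            for (cpu = 0; cpu <= 3; cpu++)
                    cpumask_set_cpu(cpu, attrs->cpumask);

            ret = apply_workqueue_attrs(unbound_wq, attrs);
            free_workqueue_attrs(attrs);
            return ret;
    }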
5551 "ordering guarantee broken for workqueue %s\n", wq->name); in alloc_and_link_pwqs()
5576 pr_warn("workqueue: max_active %d requested for %s is out of range, clamping between %d and %d\n", in wq_clamp_max_active()
5599 pr_err("workqueue: Failed to allocate a rescuer for wq \"%s\"\n", in init_rescuer()
5610 pr_err("workqueue: Failed to create a rescuer kthread for wq \"%s\": %pe", in init_rescuer()
5628 * @wq: target workqueue
5728 pr_warn_once("workqueue: name exceeds WQ_NAME_LEN. Truncating to: %s\n", in __alloc_workqueue()
5868 * destroy_workqueue - safely terminate a workqueue
5869 * @wq: target workqueue
5871 * Safely destroy a workqueue. All work currently pending will be done first.
5875 * before destroying the workqueue. The fundamental problem is that, currently,
5876 * the workqueue has no way of accessing non-pending delayed_work. delayed_work
5895 /* mark the workqueue destruction is in progress */ in destroy_workqueue()
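Given the delayed_work caveat above (destroy_workqueue() cannot see timers that have not fired yet), the safe teardown order is to cancel timers first. A minimal sketch:

    #include <linux/workqueue.h>

    static void example_destroy(struct workqueue_struct *wq,
                                struct delayed_work *dwork)
    {
            cancel_delayed_work_sync(dwork); /* stop pending timer + work */
            destroy_workqueue(wq);           /* drains the rest, then frees */
    }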
5966 * workqueue_set_max_active - adjust max_active of a workqueue
5967 * @wq: target workqueue
6000 * workqueue_set_min_active - adjust min_active of an unbound workqueue
6001 * @wq: target unbound workqueue
6004 * Set min_active of an unbound workqueue. Unlike other types of workqueues, an
6005 * unbound workqueue is not guaranteed to be able to process max_active
6006 * interdependent work items. Instead, an unbound workqueue is guaranteed to be
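A sketch of the runtime tuning pair described above, assuming workqueue_set_min_active() is available (recent kernels, unbound workqueues only):

    #include <linux/workqueue.h>

    static void example_tune(struct workqueue_struct *unbound_wq)
    {
            workqueue_set_max_active(unbound_wq, 16);
            /* Guarantee forward progress for up to 4 interdependent items. */
            workqueue_set_min_active(unbound_wq, 4);
    }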
6029 * Determine if %current task is a workqueue worker and what it's working on.
6032 * Return: work struct if %current task is a workqueue worker, %NULL otherwise.
6043 * current_is_workqueue_rescuer - is %current workqueue rescuer?
6045 * Determine whether %current is a workqueue rescuer. Can be used from
6048 * Return: %true if %current is a workqueue rescuer. %false otherwise.
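Both helpers above are meant to be called from within a running work handler. A sketch:

    #include <linux/workqueue.h>

    static void example_introspect(struct work_struct *work)
    {
            WARN_ON(current_work() != work);
            if (current_is_workqueue_rescuer())
                    pr_warn("running from the rescuer: memory is tight\n");
    }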
6058 * workqueue_congested - test whether a workqueue is congested
6060 * @wq: target workqueue
6062 * Test whether @wq's cpu workqueue for @cpu is congested. There is
6069 * pool_workqueues, each with its own congested state. A workqueue being
6070 * congested on one CPU doesn't mean that the workqueue is congested on any
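workqueue_congested() is advisory and per-CPU, as the note above explains. A hedged sketch of backing off optional work:

    #include <linux/workqueue.h>

    static void example_submit(struct workqueue_struct *wq,
                               struct work_struct *w)
    {
            /* WORK_CPU_UNBOUND means "test the local CPU" here. */
            if (!workqueue_congested(WORK_CPU_UNBOUND, wq))
                    queue_work(wq, w);
    }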
6158 * name of the workqueue being serviced and worker description set with
6184 * Carefully copy the associated workqueue's workfn, name and desc. in print_worker_info()
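The worker description shown in such dumps comes from set_worker_desc(), called from inside the running work item. A sketch with a hypothetical device index:

    #include <linux/workqueue.h>

    static void example_describe(struct work_struct *work)
    {
            /* Shows up in print_worker_info() output on a crash or dump. */
            set_worker_desc("example-dev%d", 0);
    }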
6346 * show_one_workqueue - dump state of specified workqueue
6347 * @wq: workqueue whose state will be printed
6361 if (idle) /* Nothing to print for idle workqueue */ in show_one_workqueue()
6364 pr_info("workqueue %s: flags=0x%x\n", wq->name, wq->flags); in show_one_workqueue()
6439 * show_all_workqueues - dump workqueue state
6463 * show_freezable_workqueues - dump freezable workqueue state
6968 * by any subsequent write to workqueue/cpumask sysfs file. in workqueue_unbound_exclude_cpumask()
7037 * /sys/bus/workqueue/devices/WQ_NAME. All visible workqueues have the
7040 * per_cpu RO bool : whether the workqueue is per-cpu or unbound
7273 .name = "workqueue",
7383 * workqueue_sysfs_register - make a workqueue visible in sysfs
7384 * @wq: the workqueue to register
7386 * Expose @wq in sysfs under /sys/bus/workqueue/devices.
7390 * Workqueue user should use this function directly iff it wants to apply
7391 * workqueue_attrs before making the workqueue visible in sysfs; otherwise,
7451 * @wq: the workqueue to unregister
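A hedged sketch of the ordering constraint above: apply workqueue_attrs before registering, so user-space never races with the attribute change (passing WQ_SYSFS at allocation time does both implicitly):

    #include <linux/workqueue.h>

    static int example_expose(struct workqueue_struct *wq,
                              const struct workqueue_attrs *attrs)
    {
            int ret = apply_workqueue_attrs(wq, attrs);

            if (ret)
                    return ret;
            /* Now visible under /sys/bus/workqueue/devices/<name>. */
            return workqueue_sysfs_register(wq);
    }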
7470 * Workqueue watchdog.
7474 * indefinitely. Workqueue stalls can be very difficult to debug as the
7475 * usual warning mechanisms don't trigger and internal workqueue state is
7478 * Workqueue watchdog monitors all worker pools periodically and dumps
7483 * "workqueue.watchdog_thresh" which can be updated at runtime through the
7611 pr_emerg("BUG: workqueue lockup - pool"); in wq_watchdog_timer_fn()
7712 pr_warn("workqueue: Restricting unbound_cpumask (%*pb) with %s (%*pb) leaves no CPU, ignoring\n", in restrict_unbound_cpumask()
7737 * workqueue_init_early - early init for workqueue subsystem
7739 * This is the first step of three-staged workqueue subsystem initialization and
7766 restrict_unbound_cpumask("workqueue.unbound_cpus", &wq_cmdline_cpumask); in workqueue_init_early()
7777 * If nohz_full is enabled, set power efficient workqueue as unbound. in workqueue_init_early()
7778 * This allows workqueue items to be moved to HK CPUs. in workqueue_init_early()
7894 * workqueue_init - bring workqueue subsystem fully online
7896 * This is the second step of three-staged workqueue subsystem initialization
7925 "workqueue: failed to create early rescuer for %s", in workqueue_init()
8020 * This is the third step of three-staged workqueue subsystem initialization and
8040 * worker pool. Explicitly call unbound_wq_update_pwq() on all workqueue in workqueue_init_topology()
8067 pr_warn("workqueue.unbound_cpus: incorrect CPU range, using default\n"); in workqueue_unbound_cpus_setup()
8072 __setup("workqueue.unbound_cpus=", workqueue_unbound_cpus_setup);