====================================
Concurrency Managed Workqueue (cmwq)
====================================

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue. An
independent thread serves as the asynchronous execution context. The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other. When
there is no work item left on the workqueue the worker becomes idle.
When a new work item gets queued, the worker begins executing again.
In the original wq implementation, a multi threaded (MT) wq had one
worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs.

Each wq maintained its own separate worker pool. An MT wq could provide
only one execution context per CPU while an ST wq one for the whole
system. Work items had to compete for those limited execution contexts,
leading to various problems including proneness to deadlocks around the
single execution context.
* Use per-CPU unified worker pools shared by all wq to provide a
  flexible level of concurrency on demand without wasting a lot of
  resources.
In order to ease the asynchronous execution of functions a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously. Whenever a driver or subsystem
wants a function to be executed asynchronously it has to set up a work
item pointing to that function and queue that work item on a
workqueue.
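
For illustration, a minimal sketch of this pattern using the standard
helpers (``my_device``, ``my_refresh_fn()`` and the init/kick functions are
hypothetical): ::

  #include <linux/workqueue.h>

  struct my_device {
          struct work_struct refresh_work;        /* the work item */
  };

  static void my_refresh_fn(struct work_struct *work)
  {
          struct my_device *dev =
                  container_of(work, struct my_device, refresh_work);

          /* runs asynchronously in a worker thread */
  }

  static void my_dev_init(struct my_device *dev)
  {
          /* point the work item at the function to execute */
          INIT_WORK(&dev->refresh_work, my_refresh_fn);
  }

  static void my_dev_kick(struct my_device *dev)
  {
          /* queue the work item on the system workqueue */
          schedule_work(&dev->refresh_work);
  }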
A work item can be executed in either a thread or the BH (softirq) context.

For threaded workqueues, special purpose threads, called [k]workers, execute
the functions off of the queue, one after the other. If no work is queued,
the worker threads become idle. These worker threads are managed in
worker-pools.
The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.
There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU, and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.
Each per-CPU BH worker pool contains only one pseudo worker, which
represents the BH execution context. A BH workqueue can be considered
a convenience interface to softirq.
Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
things like CPU locality, concurrency limits, priority and more.
When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool. For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either the normal or highpri worker-pool
that is associated with the CPU the issuer is running on.
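
As a sketch of such an override, ``queue_work_on()`` targets an explicit
CPU rather than the issuing one (``my_wq`` and ``my_work`` are
hypothetical): ::

  /* default: queued on a worker-pool of the CPU the issuer runs on */
  queue_work(my_wq, &my_work);

  /* override: queued on the worker-pool associated with CPU 3 */
  queue_work_on(3, my_wq, &my_work);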
Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler. The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of the currently runnable workers. Generally, work items are
not expected to hog a CPU and consume many cycles. That means
maintaining just enough concurrency to prevent work processing from
stalling should be optimal. As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work, but, when the last running worker goes to sleep, it immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items. This allows using a minimal number of workers
without losing execution bandwidth.
Keeping idle workers around costs nothing other than the memory space
for the kthreads, so cmwq holds onto idle ones for a while before
killing them.
The forward progress guarantee relies on workers being created when
more execution contexts are necessary, which in turn is guaranteed
through the use of rescue workers. All work items which might be used
on code paths that handle memory reclaim are required to be queued on
wq's that have a rescue-worker reserved for execution under memory
pressure. Otherwise it is possible that the worker-pool deadlocks
waiting for execution contexts to free up.
``alloc_workqueue()`` allocates a wq. The original ``create_*workqueue()``
functions are deprecated and scheduled for removal. ``alloc_workqueue()``
takes three arguments - ``@name``, ``@flags`` and ``@max_active``.
``@name`` is the name of the wq and is also used as the name of the
rescuer thread if there is one.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes. ``@flags``
and ``@max_active`` control how work items are assigned execution
resources, scheduled and executed.
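
A sketch tying the three arguments together (the name and flag choice are
illustrative): ::

  struct workqueue_struct *wq;

  /*
   * @name "my_wq", @flags selecting an unbound wq usable during
   * memory reclaim, @max_active 0 selecting the default limit.
   */
  wq = alloc_workqueue("my_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
  if (!wq)
          return -ENOMEM;

  queue_work(wq, &my_work);
  destroy_workqueue(wq);          /* drains pending work, then frees */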
``flags``
---------

``WQ_BH``
  BH workqueues can be considered a convenience interface to softirq. BH
  workqueues are always per-CPU and all BH work items are executed in the
  queueing CPU's softirq context in the queueing order.

  BH work items cannot sleep. All other features such as delayed queueing,
  flushing and canceling are supported.
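
  A minimal sketch, assuming a kernel that provides ``WQ_BH`` (the name is
  illustrative): ::

    struct workqueue_struct *bh_wq;

    bh_wq = alloc_workqueue("my_bh_wq", WQ_BH, 0);  /* BH wq requires 0 max_active */
    if (!bh_wq)
            return -ENOMEM;

    /* executes in the queueing CPU's softirq context, in queueing order */
    queue_work(bh_wq, &my_work);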
``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers not bound to any specific CPU. This
  makes the wq behave as a simple execution context provider without
  concurrency management. The unbound worker-pools try to start
  execution of work items as soon as possible.
``WQ_FREEZABLE``
  A freezable wq participates in the freeze phase of the system
  suspend operations. Work items on the wq are drained and no
  new work item starts execution until thawed.
``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu. Highpri worker-pools are
  served by worker threads with an elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other. Each maintains its separate pool of workers and
  implements concurrency management among its workers.
``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level. In other words, runnable CPU intensive
  work items will not prevent other work items in the same
  worker-pool from starting execution. This is useful for bound
  work items which are expected to hog CPU cycles so that their
  execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the
  concurrency level, the start of their execution is still
  regulated by the concurrency management and runnable
  non-CPU-intensive work items can delay execution of CPU
  intensive work items.
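
  A sketch of a bound wq for CPU-hogging work items (the name is
  illustrative): ::

    struct workqueue_struct *crunch_wq;

    /*
     * The work may burn CPU; let the scheduler, not concurrency
     * management, regulate its execution.
     */
    crunch_wq = alloc_workqueue("crunch_wq", WQ_CPU_INTENSIVE, 0);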
``max_active``
--------------

``@max_active`` determines the maximum number of execution contexts per
CPU which can be assigned to the work items of a wq. For example, with
``@max_active`` of 16, at most 16 work items of the wq can be executing
at the same time per CPU. This is always a per-CPU attribute, even for
unbound workqueues.
The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time. Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.
Some users depend on strict execution ordering where only one work item
is in flight at any given time and the work items are processed in
queueing order. While the combination of ``@max_active`` of 1 and
``WQ_UNBOUND`` used to achieve this behavior, this is no longer the
case. Use ``alloc_ordered_workqueue()`` instead.
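
A sketch of such an ordered workqueue (the name is illustrative): ::

  struct workqueue_struct *ordered_wq;

  /* at most one work item in flight, executed in queueing order */
  ordered_wq = alloc_ordered_workqueue("my_ordered_wq", 0);
  if (!ordered_wq)
          return -ENOMEM;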
Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim. Each wq with
  ``WQ_MEM_RECLAIM`` set has an execution context reserved for it. If
  there is a dependency among multiple work items used during memory
  reclaim, they should be queued to separate wq each with
  ``WQ_MEM_RECLAIM`` (see the sketch after these guidelines).
* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``, flush and work item attributes). Work items
  which are not involved in memory reclaim, don't need to be
  flushed as a part of a group of work items, and don't require any
  special attribute can use one of the system wq.
* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
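
As a sketch of the separate-wq rule for reclaim-time dependencies (all
names are illustrative): ::

  /*
   * work_a waits on work_b during reclaim. If both shared one
   * WQ_MEM_RECLAIM wq, its single rescuer could be busy executing
   * work_a while work_b waits for an execution context: deadlock.
   */
  wq_a = alloc_workqueue("reclaim_wq_a", WQ_MEM_RECLAIM, 0);
  wq_b = alloc_workqueue("reclaim_wq_b", WQ_MEM_RECLAIM, 0);

  queue_work(wq_a, &work_a);
  queue_work(wq_b, &work_b);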
An unbound workqueue groups CPUs according to its affinity scope to improve
cache locality. For example, if a workqueue is using the default affinity
scope of "cache", it will group CPUs according to last level cache
boundaries. A work item queued on the workqueue will be assigned to a worker
on one of the CPUs which share the last level cache with the issuing CPU.
``cpu``
  CPUs are not grouped. A work item issued on one CPU is processed by a
  worker on the same CPU. This makes unbound workqueues behave as per-cpu
  workqueues without concurrency management.

``system``
  All CPUs are put in the same group. Workqueue makes no effort to process a
  work item on a CPU close to the issuing CPU.
``affinity_strict``
  0 by default indicating that affinity scopes are not strict. When a work
  item starts execution, workqueue makes a best-effort attempt to ensure
  that the worker is inside its affinity scope, which is called
  repatriation. Once started, the scheduler is free to move the worker
  anywhere in the system as it sees fit. Enabling strictness locks down the
  worker to stay within its affinity scope after the start of execution.
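
As a code-side sketch of scope configuration, assuming a kernel with
affinity scopes (v6.5+) and ``apply_workqueue_attrs()`` reachable from the
caller (``unbound_wq`` is hypothetical): ::

  struct workqueue_attrs *attrs;

  attrs = alloc_workqueue_attrs();
  if (!attrs)
          return -ENOMEM;

  attrs->affn_scope = WQ_AFFN_CACHE;      /* group CPUs by last level cache */
  attrs->affn_strict = true;              /* keep workers inside the scope */
  apply_workqueue_attrs(unbound_wq, attrs);
  free_workqueue_attrs(attrs);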
It'd be ideal if an unbound workqueue's behavior were optimal for the vast
majority of use cases without further tuning. Unfortunately, in the current
kernel, there exists a pronounced trade-off between locality and utilization
necessitating explicit configuration when workqueues are heavily used.

Higher locality leads to higher efficiency where more work is performed for
the same number of consumed CPU cycles. However, higher locality may also
cause lower overall system utilization if the work items are not spread
enough across the affinity scopes by the issuers. The following performance
testing with dm-crypt clearly illustrates this trade-off.
The tests are run on a CPU with 12-cores/24-threads split across four L3
caches (AMD Ryzen 9 3900x). CPU clock boost is turned off for consistency.
``/dev/dm-0`` is a dm-crypt device created on an NVME SSD (Samsung 990 PRO)
and opened with ``cryptsetup`` with default settings.
Scenario 1: Enough issuers and work spread across the machine
-------------------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
    --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
    --name=iops-test-job --verify=sha512

There are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512``
makes ``fio`` generate and read back the content each time, which makes
execution locality matter between the issuer and ``kcryptd``. The following
are the read bandwidths and CPU utilizations depending on the affinity
scope setting on ``kcryptd``, measured over five runs.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1159.40 ±1.34
     - 99.31 ±0.02

   * - cache
     - 1166.40 ±0.89
     - 99.34 ±0.01

   * - cache (strict)
     - 1166.00 ±0.71
     - 99.35 ±0.01
With enough issuers spread across the system, there is no downside to
"cache", strict or otherwise. All three configurations saturate the whole
machine but the cache-affine ones outperform by 0.6% thanks to improved
locality.
Scenario 2: Fewer issuers, enough work for saturation
-----------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
The only difference from the previous scenario is ``--numjobs=8``. There
are a third as many issuers, but that is still enough total work to
saturate the system.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 1155.40 ±0.89
     - 97.41 ±0.05

   * - cache
     - 1154.40 ±1.14
     - 96.15 ±0.09

   * - cache (strict)
     - 1112.00 ±4.64
     - 93.26 ±0.35
This is more than enough work to saturate the system. Both "system" and
"cache" nearly saturate the machine, but not fully. "cache" uses less CPU
but its better efficiency puts it at the same bandwidth as "system".

Eight issuers moving around over four L3 cache scopes still allow "cache
(strict)" to mostly saturate the machine, but the loss of work conservation
is now starting to hurt with a 3.7% bandwidth loss.
Scenario 3: Even fewer issuers, not enough work to saturate
-----------------------------------------------------------

The command used: ::

  $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
    --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
    --time_based --group_reporting --name=iops-test-job --verify=sha512
Again, the only difference is ``--numjobs=4``. With the number of issuers
reduced to four, there now isn't enough work to saturate the whole system
and the bandwidth becomes dependent on completion latencies.
.. list-table::
   :header-rows: 1

   * - Affinity
     - Bandwidth (MiBps)
     - CPU util (%)

   * - system
     - 993.60 ±1.82
     - 75.49 ±0.06

   * - cache
     - 973.40 ±1.52
     - 74.90 ±0.07

   * - cache (strict)
     - 828.20 ±4.49
     - 66.84 ±0.29
Conclusion and Recommendations
------------------------------

While the loss of work-conservation in certain scenarios hurts, it is a lot
better than "cache (strict)" and maximizing workqueue utilization is
unlikely to be the common case anyway. As such, "cache" is the default
affinity scope for unbound pools.
* An unbound workqueue with strict "cpu" affinity scope behaves the same as
  a ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to
  the latter and an unbound workqueue provides a lot more flexibility.
* The loss of work-conservation in non-strict affinity scopes is likely
  originating from the scheduler. There is no theoretical reason why the
  kernel wouldn't be able to do the right thing and maintain
  work-conservation in most cases. As such, it is possible that future
  scheduler improvements may make most of these tunables unnecessary.
Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
configuration, worker pools and how workqueues map to the pools: ::

  pod_node [0]=-1

  pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0
  pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1
  pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2
  pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3
  pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f
  pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003
  pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c

  Workqueue CPU -> pool
Use tools/workqueue/wq_monitor.py to monitor workqueue operations: ::

  $ tools/workqueue/wq_monitor.py events
                              total  infl  CPUtime  CPUhog  CMW/RPR  mayday rescued
  events                      18545     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38306     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29598     0      0.2       0        0       -       -
  events_freezable_pwr_ef        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -

  events                      18548     0      6.1       0        5       -       -
  events_highpri                  8     0      0.0       0        0       -       -
  events_long                     3     0      0.0       0        0       -       -
  events_unbound              38322     0      0.1       -        7       -       -
  events_freezable                0     0      0.0       0        0       -       -
  events_power_efficient      29603     0      0.2       0        0       -       -
  events_freezable_pwr_ef        10     0      0.0       0        0       -       -
  sock_diag_events                0     0      0.0       0        0       -       -
Because the work functions are executed by generic worker threads,
there are a few tricks needed to shed some light on misbehaving
workqueue users.

If kworkers are going crazy (using too much cpu), there are two types
of possible problems:

	1. Something being scheduled in rapid succession
	2. A single work item that consumes lots of cpu cycles
The first type can be tracked using the ``workqueue_queue_work`` trace
event. If something is busy looping on work queueing, it will dominate
the trace output and the offender can be determined from the work item
function.

For the second type, check the stack trace of the offending worker
thread. The work item's function should be trivially visible in the
stack trace.
Non-reentrance Conditions
=========================
Workqueue guarantees that a work item cannot be re-entrant if the following
conditions hold after a work item gets queued:

1. The work function hasn't been changed.
2. No one queues the work item to another workqueue.
3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is guaranteed to
be executed by at most one worker system-wide at any given time.
Note that requeuing the work item (to the same queue) in the self function
doesn't break these conditions, so it's safe to do. Otherwise, caution is
required when breaking the conditions inside a work function.
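
For illustration, a sketch of the safe self-requeueing pattern
(``my_poll_fn()`` and ``hw_still_busy()`` are hypothetical): ::

  static void my_poll_fn(struct work_struct *work)
  {
          if (hw_still_busy())            /* hypothetical condition */
                  schedule_work(work);    /* requeue to the same queue: safe */
  }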
Kernel Inline Documentations Reference
======================================

.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c