====================================
Concurrency Managed Workqueue (cmwq)
====================================

:Date: September, 2010
:Author: Tejun Heo <tj@kernel.org>
:Author: Florian Mickler <florian@mickler.org>


Introduction
============

There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue.  An
independent thread serves as the asynchronous execution context.  The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue the worker executes the
functions associated with the work items one after the other.  When
there is no work item left on the workqueue the worker becomes idle.
When a new work item gets queued, the worker begins executing again.


Why cmwq?
=========

In the original wq implementation, a multi-threaded (MT) wq had one
worker thread per CPU and a single-threaded (ST) wq had one worker
thread system-wide.  A single MT wq needed to keep around the same
number of workers as the number of CPUs.  The kernel grew a lot of MT
wq users over the years and with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory.  The limitation was common to both ST and
MT wq albeit less severe on MT.  Each wq maintained its own separate
worker pool.  An MT wq could provide only one execution context per
CPU while an ST wq provided one for the whole system.  Work items had
to compete for those very limited execution contexts, leading to
various problems including proneness to deadlocks around the single
execution context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs like libata
choosing to use ST wq for polling PIOs and accepting an unnecessary
limitation that no two polling PIOs can progress at the same time.  As
MT wq didn't provide much better concurrency, users which required a
higher level of concurrency, like async or fscache, had to implement
their own thread pool.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide
  flexible level of concurrency on demand without wasting a lot of
  resources.

* Automatically regulate worker pool and level of concurrency so that
  the API users don't need to worry about such details.


The Design
==========

In order to ease the asynchronous execution of functions a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously.  Whenever a driver or subsystem
wants a function to be executed asynchronously it has to set up a work
item pointing to that function and queue that work item on a
workqueue.
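
As a minimal sketch of this pattern (the names ``my_work`` and
``my_work_fn`` are hypothetical, and the system default workqueue is
used for brevity)::

	#include <linux/workqueue.h>

	/* the function to be executed asynchronously */
	static void my_work_fn(struct work_struct *work)
	{
		pr_info("my_work_fn ran in a worker thread\n");
	}

	/* a statically initialized work item pointing to that function */
	static DECLARE_WORK(my_work, my_work_fn);

	/* queue it; schedule_work() targets the default system wq */
	schedule_work(&my_work);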

Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other.  If no work is queued, the
worker threads become idle.  These worker threads are managed in so
called worker-pools.

The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
things like CPU locality, concurrency limits, priority and more.  To
get a detailed overview refer to the API description of
``alloc_workqueue()`` below.

When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool.  For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either normal or highpri worker-pool that
is associated to the CPU the issuer is running on.
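
For illustration, a hedged sketch of both the default and the
overridden queueing described above (``wq`` is assumed to be a bound
workqueue allocated with ``alloc_workqueue()`` and ``my_work`` the
work item from the earlier sketch)::

	/* default: lands on a worker-pool of the CPU the issuer runs on */
	queue_work(wq, &my_work);

	/* explicit override: target CPU 1's worker-pool instead */
	queue_work_on(1, wq, &my_work);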

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue.  cmwq
tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler.  The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of the currently runnable workers.  Generally, work items are
not expected to hog a CPU and consume many cycles.  That means
maintaining just enough concurrency to prevent work processing from
stalling should be optimal.  As long as there are one or more runnable
workers on the CPU, the worker-pool doesn't start execution of a new
work, but, when the last running worker goes to sleep, it immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items.  This allows using a minimal number of workers
without losing execution bandwidth.

Keeping idle workers around doesn't cost anything other than the
memory space for kthreads, so cmwq holds onto idle ones for a while
before killing them.

For unbound workqueues, the number of backing pools is dynamic.  An
unbound workqueue can be assigned custom attributes using
``apply_workqueue_attrs()`` and workqueue will automatically create
backing worker pools matching the attributes.  The responsibility of
regulating the concurrency level is on the users.  There is also a
flag to mark a bound wq to ignore the concurrency management.  Please
refer to the API section for details.
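
As a rough sketch only (``unbound_wq`` is a hypothetical unbound
workqueue; note that the attribute helpers live in
``kernel/workqueue.c``, that ``alloc_workqueue_attrs()`` took a
``gfp_t`` argument on older kernels, and that these helpers are not
available to modules on recent ones)::

	struct workqueue_attrs *attrs;
	int ret;

	attrs = alloc_workqueue_attrs();
	if (!attrs)
		return -ENOMEM;

	attrs->nice = -10;	/* hypothetical: elevate the backing workers */
	ret = apply_workqueue_attrs(unbound_wq, attrs);
	free_workqueue_attrs(attrs);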

Forward progress guarantee relies on workers being created when more
execution contexts are necessary, which in turn is guaranteed through
the use of rescue workers.  All work items which might be used on code
paths that handle memory reclaim are required to be queued on wq's
that have a rescue-worker reserved for execution under memory
pressure.  Else it is possible that the worker-pool deadlocks waiting
for execution contexts to free up.


Application Programming Interface (API)
=======================================

``alloc_workqueue()`` allocates a wq.  The original
``create_*workqueue()`` functions are deprecated and scheduled for
removal.  ``alloc_workqueue()`` takes three arguments - ``@name``,
``@flags`` and ``@max_active``.  ``@name`` is the name of the wq and
is also used as the name of the rescuer thread if there is one.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes.  ``@flags``
and ``@max_active`` control how work items are assigned execution
resources, scheduled and executed.
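
For example, a minimal allocation sketch (the name ``"my_wq"`` is
hypothetical)::

	struct workqueue_struct *wq;

	/* @name, @flags, @max_active (0 selects the default limit) */
	wq = alloc_workqueue("my_wq", WQ_MEM_RECLAIM, 0);
	if (!wq)
		return -ENOMEM;

When the wq is no longer needed, ``destroy_workqueue()`` releases it.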


``flags``
---------

``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers which are not bound to any
  specific CPU.  This makes the wq behave as a simple execution
  context provider without concurrency management.  The unbound
  worker-pools try to start execution of work items as soon as
  possible.  Unbound wq sacrifices locality but is useful for
  the following cases.

  * Wide fluctuation in the concurrency level requirement is
    expected and using bound wq may end up creating a large
    number of mostly unused workers across different CPUs as the
    issuer hops through different CPUs.

  * Long running CPU intensive workloads which can be better
    managed by the system scheduler.

``WQ_FREEZABLE``
  A freezable wq participates in the freeze phase of the system
  suspend operations.  Work items on the wq are drained and no
  new work item starts execution until thawed.

``WQ_MEM_RECLAIM``
  All wq which might be used in the memory reclaim paths **MUST**
  have this flag set.  The wq is guaranteed to have at least one
  execution context regardless of memory pressure.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target cpu.  Highpri worker-pools are
  served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other.  Each maintains its separate pool of workers and
  implements concurrency management among its workers.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level.  In other words, runnable CPU intensive
  work items will not prevent other work items in the same
  worker-pool from starting execution.  This is useful for bound
  work items which are expected to hog CPU cycles so that their
  execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the
  concurrency level, the start of their execution is still
  regulated by the concurrency management and runnable
  non-CPU-intensive work items can delay execution of CPU
  intensive work items.

  This flag is meaningless for unbound wq.

Note that the flag ``WQ_NON_REENTRANT`` no longer exists as all
workqueues are now non-reentrant - any work item is guaranteed to be
executed by at most one worker system-wide at any given time.
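
As an illustration, hedged sketches of two of the flag combinations
discussed above (all names hypothetical)::

	struct workqueue_struct *unbound_wq, *highpri_wq;

	/* simple execution context provider, no concurrency management */
	unbound_wq = alloc_workqueue("my_unbound_wq", WQ_UNBOUND, 0);

	/* reclaim-safe bound wq whose workers run at elevated priority */
	highpri_wq = alloc_workqueue("my_highpri_wq",
				     WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);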


``max_active``
--------------

``@max_active`` determines the maximum number of execution contexts
per CPU which can be assigned to the work items of a wq.  For example,
with ``@max_active`` of 16, at most 16 work items of the wq can be
executing at the same time per CPU.

Currently, for a bound wq, the maximum limit for ``@max_active`` is
512 and the default value used when 0 is specified is 256.  For an
unbound wq, the limit is the higher of 512 and 4 *
``num_possible_cpus()``.  These values are chosen sufficiently high
such that they are not the limiting factor while providing protection
in runaway cases.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time.  Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.

Some users depend on the strict execution ordering of ST wq.  The
combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` was used to
achieve this behavior.  Work items on such wq were always queued to the
unbound worker-pools and only one work item could be active at any given
time thus achieving the same ordering property as ST wq.

In the current implementation the above configuration only guarantees
ST behavior within a given NUMA node.  Instead,
``alloc_ordered_workqueue()`` should be used to achieve system-wide ST
behavior.
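
For example, a sketch of the ordered variant (the name is
hypothetical)::

	struct workqueue_struct *ordered_wq;

	/* at most one work item active at any time, in queueing order */
	ordered_wq = alloc_ordered_workqueue("my_ordered_wq", 0);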


Example Execution Scenarios
===========================

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

 Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
 w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
 again before finishing.  w1 and w2 burn CPU for 5ms then sleep for
 10ms.

Ignoring all other tasks, works and processing overhead, and assuming
simple FIFO scheduling, the following is one highly simplified version
of possible sequences of events with the original wq. ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 starts and burns CPU
 25		w1 sleeps
 35		w1 wakes up and finishes
 35		w2 starts and burns CPU
 40		w2 sleeps
 50		w2 wakes up and finishes

And with cmwq with ``@max_active`` >= 3, ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 starts and burns CPU
 10		w1 sleeps
 10		w2 starts and burns CPU
 15		w2 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 25		w2 wakes up and finishes

If ``@max_active`` == 2, ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 starts and burns CPU
 10		w1 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 20		w2 starts and burns CPU
 25		w2 sleeps
 35		w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set, ::

 TIME IN MSECS	EVENT
 0		w0 starts and burns CPU
 5		w0 sleeps
 5		w1 and w2 start and burn CPU
 10		w1 sleeps
 15		w2 sleeps
 15		w0 wakes up and burns CPU
 20		w0 finishes
 20		w1 wakes up and finishes
 25		w2 wakes up and finishes


Guidelines
==========

* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim.  Each wq with
  ``WQ_MEM_RECLAIM`` set has an execution context reserved for it.  If
  there is dependency among multiple work items used during memory
  reclaim, they should be queued to separate wq's, each with
  ``WQ_MEM_RECLAIM`` (see the sketch after this list).

* Unless strict ordering is required, there is no need to use ST wq.

* Unless there is a specific need, using 0 for ``@max_active`` is
  recommended.  In most use cases, concurrency level usually stays
  well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``), flush and work item attributes.  Work items
  which are not involved in memory reclaim and don't need to be
  flushed as a part of a group of work items, and don't require any
  special attribute, can use one of the system wq.  There is no
  difference in execution characteristics between using a dedicated wq
  and a system wq.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
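
To illustrate the first guideline: if a work item on one
reclaim-dependent wq waits on a second work item, the second one
should go on its own ``WQ_MEM_RECLAIM`` wq so that each has a rescuer.
A hedged sketch with hypothetical names (``work_a`` and ``work_b`` are
assumed to be initialized work items)::

	struct workqueue_struct *wq_a, *wq_b;

	wq_a = alloc_workqueue("reclaim_wq_a", WQ_MEM_RECLAIM, 0);
	wq_b = alloc_workqueue("reclaim_wq_b", WQ_MEM_RECLAIM, 0);

	/* work_a's function may flush or wait on work_b without risking
	 * a deadlock under memory pressure, because each wq reserves
	 * its own rescuer thread */
	queue_work(wq_a, &work_a);
	queue_work(wq_b, &work_b);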


Debugging
=========

Because the work functions are executed by generic worker threads
there are a few tricks needed to shed some light on misbehaving
workqueue users.

Worker threads show up in the process list as: ::

  root      5671  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/0:1]
  root      5672  0.0  0.0      0     0 ?        S    12:07   0:00 [kworker/1:2]
  root      5673  0.0  0.0      0     0 ?        S    12:12   0:00 [kworker/0:0]
  root      5674  0.0  0.0      0     0 ?        S    12:13   0:00 [kworker/1:0]

If kworkers are going crazy (using too much cpu), there are two types
of possible problems:

	1. Something being scheduled in rapid succession
	2. A single work item that consumes lots of cpu cycles

The first one can be tracked using tracing: ::

	$ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
	$ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
	(wait a few secs)
	^C

If something is busy looping on work queueing, it would be dominating
the output and the offender can be determined with the work item
function.

For the second type of problem it should be possible to just check
the stack trace of the offending worker thread. ::

	$ cat /proc/THE_OFFENDING_KWORKER/stack

The work item's function should be trivially visible in the stack
trace.


Kernel Inline Documentation Reference
=====================================

.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c
401