================================================
Completions - "wait for completion" barrier APIs
================================================

Introduction:
-------------

If you have one or more threads that must wait for some kernel activity
to have reached a point or a specific state, completions can provide a
race-free solution to this problem. Semantically they are somewhat like a
pthread_barrier() and have similar use-cases.

Completions are a code synchronization mechanism which is preferable to any
misuse of locks/semaphores and busy-loops. Any time you think of using
yield() or some quirky msleep(1) loop to allow something else to proceed,
you probably want to look into using one of the wait_for_completion*()
calls and complete() instead.

The advantage of using completions is that they have a well defined, focused
purpose which makes it very easy to see the intent of the code, but they
also result in more efficient code as all threads can continue execution
until the result is actually needed, and both the waiting and the signalling
are highly efficient, using low level scheduler sleep/wakeup facilities.

Completions are built on top of the waitqueue and wakeup infrastructure of
the Linux scheduler. The event the threads on the waitqueue are waiting for
is reduced to a simple flag in 'struct completion', appropriately called "done".

As completions are scheduling related, the code can be found in
kernel/sched/completion.c.


Usage:
------

There are three main parts to using completions:

 - the initialization of the 'struct completion' synchronization object
 - the waiting part through a call to one of the variants of wait_for_completion(),
 - the signaling side through a call to complete() or complete_all().

There are also some helper functions for checking the state of completions.
Note that while initialization must happen first, the waiting and signaling
parts can happen in any order. I.e. it's entirely normal for a thread
to have marked a completion as 'done' before another thread checks whether
it has to wait for it.
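
To give a rough idea of how the three parts fit together, here is a minimal
sketch - the structure and function names (setup_data, setup_worker(),
setup_and_wait()) are made up for illustration and assume that
<linux/completion.h> has been included::

	struct setup_data {
		struct completion setup_done;
		/* ... */
	};

	/* signaling side, runs in some other context */
	static void setup_worker(struct setup_data *sd)
	{
		/* ... perform the setup ... */
		complete(&sd->setup_done);
	}

	/* initialization and waiting side */
	static void setup_and_wait(struct setup_data *sd)
	{
		init_completion(&sd->setup_done);
		/* ... hand 'sd' over to whatever runs setup_worker() ... */
		wait_for_completion(&sd->setup_done);
	}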

To use completions you need to #include <linux/completion.h> and
create a static or dynamic variable of type 'struct completion',
which has only two fields::

	struct completion {
		unsigned int done;
		wait_queue_head_t wait;
	};

This provides the ->wait waitqueue to place tasks on for waiting (if any), and
the ->done completion flag for indicating whether it's completed or not.

Completions should be named to refer to the event that is being synchronized on.
A good example is::

	wait_for_completion(&early_console_added);

	complete(&early_console_added);

Good, intuitive naming (as always) helps code readability. Naming a completion
'complete' is not helpful unless the purpose is super obvious...


Initializing completions:
-------------------------

Dynamically allocated completion objects should preferably be embedded in data
structures that are assured to be alive for the life-time of the function/driver,
to prevent races with asynchronous complete() calls from occurring.

Particular care should be taken when using the _timeout() or _killable()/_interruptible()
variants of wait_for_completion(), as it must be assured that memory de-allocation
does not happen until all related activities (complete() or reinit_completion())
have taken place, even if these wait functions return prematurely due to a timeout
or a signal triggering.

Initialization of dynamically allocated completion objects is done via a call to
init_completion()::

	init_completion(&dynamic_object->done);

In this call we initialize the waitqueue and set ->done to 0, i.e. "not completed"
or "not done".
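
For example, a driver structure that embeds its completion and initializes it
right after allocation might look like this - struct foo_device and foo_alloc()
are made-up names used purely for illustration::

	struct foo_device {
		struct completion cmd_done;
		/* ... other long-lived state ... */
	};

	static struct foo_device *foo_alloc(void)
	{
		struct foo_device *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

		if (!dev)
			return NULL;

		/* ->done = 0 ("not done"), empty waitqueue */
		init_completion(&dev->cmd_done);
		return dev;
	}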

The re-initialization function, reinit_completion(), simply resets the
->done field to 0 ("not done"), without touching the waitqueue.
Callers of this function must make sure that there are no racy
wait_for_completion() calls going on in parallel.

Calling init_completion() on the same completion object twice is
most likely a bug as it re-initializes the queue to an empty queue and
enqueued tasks could get "lost" - use reinit_completion() in that case,
but be aware of other races.
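
A typical reuse pattern, assuming the caller can guarantee that no waiter or
signaler is still using the object when it is recycled, might look like this
(issue_command() is a hypothetical helper that eventually triggers complete())::

	/* first use */
	init_completion(&dev->cmd_done);
	issue_command(dev);
	wait_for_completion(&dev->cmd_done);

	/* later reuse of the same object: only reset ->done */
	reinit_completion(&dev->cmd_done);
	issue_command(dev);
	wait_for_completion(&dev->cmd_done);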

For static declaration and initialization, macros are available.

For static (or global) declarations in file scope you can use
DECLARE_COMPLETION()::

	static DECLARE_COMPLETION(setup_done);
	DECLARE_COMPLETION(setup_done);

Note that in this case the completion is boot time (or module load time)
initialized to 'not done' and doesn't require an init_completion() call.

When a completion is declared as a local variable within a function,
then the initialization should always use DECLARE_COMPLETION_ONSTACK()
explicitly, not just to make lockdep happy, but also to make it clear
that limited scope has been considered and is intentional::

	DECLARE_COMPLETION_ONSTACK(setup_done);

Note that when using completion objects as local variables you must be
acutely aware of the short life time of the function stack: the function
must not return to a calling context until all activities (such as waiting
threads) have ceased and the completion object is completely unused.
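
A sketch of the intended pattern - the completion lives on the stack and the
function only returns after the waiting has finished; kick_off_setup() is a
hypothetical helper that arranges for complete() to be called elsewhere::

	static int start_and_wait_for_setup(void)
	{
		DECLARE_COMPLETION_ONSTACK(setup_done);

		kick_off_setup(&setup_done);

		/* must not return before the other side called complete() */
		wait_for_completion(&setup_done);
		return 0;
	}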

To emphasise this again: in particular when using some of the waiting API variants
with more complex outcomes, such as the timeout or signalling (_timeout(),
_killable() and _interruptible()) variants, the wait might complete
prematurely while the object might still be in use by another thread - and a return
from the wait_for_completion*() caller function will deallocate the function
stack and cause subtle data corruption if a complete() is done in some
other thread. Simple testing might not trigger these kinds of races.

If unsure, use dynamically allocated completion objects, preferably embedded
in some other long lived object that has a boringly long life time which
exceeds the life time of any helper threads using the completion object,
or has a lock or other synchronization mechanism to make sure complete()
is not called on a freed object.

A naive DECLARE_COMPLETION() on the stack triggers a lockdep warning.

Waiting for completions:
------------------------

For a thread to wait for some concurrent activity to finish, it
calls wait_for_completion() on the initialized completion structure::

	void wait_for_completion(struct completion *done)

A typical usage scenario is::

	CPU#1					CPU#2

	struct completion setup_done;

	init_completion(&setup_done);
	initialize_work(...,&setup_done,...);

	/* run non-dependent code */		/* do setup */

	wait_for_completion(&setup_done);	complete(&setup_done);

This is not implying any particular order between wait_for_completion() and
the call to complete() - if the call to complete() happened before the call
to wait_for_completion() then the waiting side will simply continue
immediately as all dependencies are satisfied; if not, it will block until
completion is signaled by complete().
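
A worked (hypothetical) version of the above scenario, using a kernel thread
as the second context; since the completion is a local variable here,
DECLARE_COMPLETION_ONSTACK() is used, <linux/kthread.h> is assumed to be
included and error handling is omitted for brevity::

	static int setup_thread_fn(void *data)
	{
		struct completion *setup_done = data;

		/* do setup */
		complete(setup_done);
		return 0;
	}

	static int run_setup(void)
	{
		DECLARE_COMPLETION_ONSTACK(setup_done);

		kthread_run(setup_thread_fn, &setup_done, "setup");

		/* run non-dependent code */

		wait_for_completion(&setup_done);
		return 0;
	}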

Note that wait_for_completion() is calling spin_lock_irq()/spin_unlock_irq(),
so it can only be called safely when you know that interrupts are enabled.
Calling it from IRQs-off atomic contexts will result in hard-to-detect
spurious enabling of interrupts.

The default behavior is to wait without a timeout and to mark the task as
uninterruptible. wait_for_completion() and its variants are only safe
in process context (as they can sleep) but not in atomic context,
interrupt context, with disabled IRQs, or with preemption disabled - see also
try_wait_for_completion() below for handling completion in atomic/interrupt
context.

As all variants of wait_for_completion() can (obviously) block for a long
time depending on the nature of the activity they are waiting for, in
most cases you probably don't want to call this with mutexes held.


wait_for_completion*() variants available:
------------------------------------------

The below variants all return a status and this status should be checked in
most(/all) cases - in cases where the status is deliberately not checked you
probably want to make a note explaining this (e.g. see
arch/arm/kernel/smp.c:__cpu_up()).

A common problem that occurs is assigning the return value to a variable of
the wrong type, so take care to assign return values to variables of the
proper type.

Checking for the specific meaning of return values has also been found to be
quite error-prone, e.g. constructs like::

	if (!wait_for_completion_interruptible_timeout(...))

... would execute the same code path for successful completion and for the
interrupted case - which is probably not what you want::

	int wait_for_completion_interruptible(struct completion *done)

This function marks the task TASK_INTERRUPTIBLE while it is waiting.
If a signal was received while waiting it will return -ERESTARTSYS; 0 otherwise::

	unsigned long wait_for_completion_timeout(struct completion *done, unsigned long timeout)

The task is marked as TASK_UNINTERRUPTIBLE and will wait at most 'timeout'
jiffies. If a timeout occurs it returns 0, else the remaining time in
jiffies (but at least 1).

Timeouts are preferably calculated with msecs_to_jiffies() or usecs_to_jiffies(),
to make the code largely HZ-invariant.
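
For example, a wait of up to 100 milliseconds on the (made-up) dev->cmd_done
object from the earlier sketches could be written as::

	unsigned long timeout;

	timeout = wait_for_completion_timeout(&dev->cmd_done, msecs_to_jiffies(100));
	if (!timeout)
		return -ETIMEDOUT;	/* timed out, nothing completed */

	/* completed, with 'timeout' jiffies to spare */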

If the returned timeout value is deliberately ignored a comment should probably explain
why (e.g. see drivers/mfd/wm8350-core.c wm8350_read_auxadc())::

	long wait_for_completion_interruptible_timeout(struct completion *done, unsigned long timeout)

This function takes a timeout in jiffies and marks the task as
TASK_INTERRUPTIBLE. If a signal was received it will return -ERESTARTSYS;
otherwise it returns 0 if the completion timed out, or the remaining time in
jiffies if completion occurred.
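
This means a caller usually has to tell all three outcomes apart, roughly like
this (a sketch only, again using the made-up dev->cmd_done object)::

	long ret;

	ret = wait_for_completion_interruptible_timeout(&dev->cmd_done, msecs_to_jiffies(100));
	if (ret < 0)		/* interrupted, ret == -ERESTARTSYS */
		return ret;
	if (ret == 0)		/* timed out */
		return -ETIMEDOUT;

	/* ret > 0: completed, 'ret' jiffies were left */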

Further variants include _killable which uses TASK_KILLABLE as the
designated task state and will return -ERESTARTSYS if it is interrupted,
or 0 if completion was achieved.  There is a _timeout variant as well::

	long wait_for_completion_killable(struct completion *done)
	long wait_for_completion_killable_timeout(struct completion *done, unsigned long timeout)

The _io variants, wait_for_completion_io() and wait_for_completion_io_timeout(),
behave the same as the non-_io variants, except for accounting the waiting time
as 'waiting on IO', which has an impact on how the task is accounted in
scheduling/IO stats::

	void wait_for_completion_io(struct completion *done)
	unsigned long wait_for_completion_io_timeout(struct completion *done, unsigned long timeout)
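
A sketch of how this could be used when waiting for device I/O -
submit_device_io() and dev->io_done are made-up names standing in for whatever
actually starts the I/O and eventually calls complete()::

	unsigned long timeout;

	reinit_completion(&dev->io_done);
	submit_device_io(dev);

	/* the wait is accounted as 'waiting on IO' */
	timeout = wait_for_completion_io_timeout(&dev->io_done, msecs_to_jiffies(5000));
	if (!timeout)
		return -ETIMEDOUT;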


Signaling completions:
----------------------

A thread that wants to signal that the conditions for continuation have been
achieved calls complete() to signal exactly one of the waiters that it can
continue::

	void complete(struct completion *done)

... or calls complete_all() to signal all current and future waiters::

	void complete_all(struct completion *done)

The signaling will work as expected even if completions are signaled before
a thread starts waiting. This is achieved by the waiter "consuming"
(decrementing) the done field of 'struct completion'. Waiting threads are
woken up in the same order in which they were enqueued (FIFO order).

If complete() is called multiple times then this will allow for that number
of waiters to continue - each call to complete() will simply increment the
done field. Calling complete_all() multiple times is a bug though. Both
complete() and complete_all() can be called in IRQ/atomic context safely.

There can only be one thread calling complete() or complete_all() on a
particular 'struct completion' at any time - serialized through the wait
queue spinlock. Any such concurrent calls to complete() or complete_all()
probably are a design bug.

Signaling completion from IRQ context is fine as it will appropriately
lock with spin_lock_irqsave()/spin_unlock_irqrestore() and it will never
sleep.
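
A common pattern is a driver that starts an operation in process context,
waits for it, and has the completion signaled from its interrupt handler.
The names below (struct foo_device, foo_start_command()) are again made up
for illustration, and <linux/interrupt.h> is assumed to be included::

	static irqreturn_t foo_irq_handler(int irq, void *data)
	{
		struct foo_device *dev = data;

		/* ... acknowledge the hardware ... */
		complete(&dev->cmd_done);	/* safe in IRQ context, never sleeps */
		return IRQ_HANDLED;
	}

	static int foo_do_command(struct foo_device *dev)
	{
		reinit_completion(&dev->cmd_done);
		foo_start_command(dev);

		if (!wait_for_completion_timeout(&dev->cmd_done, msecs_to_jiffies(1000)))
			return -ETIMEDOUT;
		return 0;
	}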


try_wait_for_completion()/completion_done():
--------------------------------------------

The try_wait_for_completion() function will not put the thread on the wait
queue but rather returns false if it would need to enqueue (block) the thread,
else it consumes one posted completion and returns true::

	bool try_wait_for_completion(struct completion *done)

Finally, to check the state of a completion without changing it in any way,
call completion_done(), which returns false if there are no posted
completions that were not yet consumed by waiters (implying that there are
waiters) and true otherwise::

	bool completion_done(struct completion *done)

Both try_wait_for_completion() and completion_done() are safe to be called in
IRQ or atomic context.
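
For example, code running in atomic context could consume a posted completion
opportunistically and defer the work otherwise, instead of blocking - a sketch
only, with made-up names (finish_command(), dev->finish_work)::

	if (try_wait_for_completion(&dev->cmd_done)) {
		/* a posted completion was consumed, finish up right away */
		finish_command(dev);
	} else {
		/* nothing posted yet - don't block in atomic context, defer */
		schedule_work(&dev->finish_work);
	}

	/* purely a read-only check, does not consume anything */
	if (completion_done(&dev->cmd_done))
		pr_debug("command already completed\n");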
294