Locking
=======

Locking is well-known and the common use cases are straightforward: Any
CPU holding a given lock sees any changes previously seen or made by any
CPU before it previously released that same lock.  This last sentence
is the only part of this document that most developers will need to read.

However, developers who would like to also access lock-protected shared
variables outside of their corresponding locks should continue reading.


Locking and Prior Accesses
--------------------------

The basic rule of locking is worth repeating:

	Any CPU holding a given lock sees any changes previously seen
	or made by any CPU before it previously released that same lock.

Note that this statement is a bit stronger than "Any CPU holding a
given lock sees all changes made by any CPU during the time that CPU was
previously holding this same lock".  For example, consider the following
pair of code fragments:

	/* See MP+polocks.litmus. */
	void CPU0(void)
	{
		WRITE_ONCE(x, 1);
		spin_lock(&mylock);
		WRITE_ONCE(y, 1);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		r0 = READ_ONCE(y);
		spin_unlock(&mylock);
		r1 = READ_ONCE(x);
	}

The basic rule guarantees that if CPU0() acquires mylock before CPU1(),
then both r0 and r1 must be set to the value 1.  This also has the
consequence that if the final value of r0 is equal to 1, then the final
value of r1 must also be equal to 1.  In contrast, the weaker rule would
say nothing about the final value of r1.
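
In litmus-test form, the outcome ruled out by the basic rule corresponds
to an "exists" clause along the following lines, which is essentially
what MP+polocks.litmus asks herd7 to search for (the in-tree wording may
differ slightly):

	exists (1:r0=1 /\ 1:r1=0)

A herd7 run reports that no execution satisfies this clause.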


Locking and Subsequent Accesses
-------------------------------

The converse to the basic rule also holds:  Any CPU holding a given
lock will not see any changes that will be made by any CPU after it
subsequently acquires this same lock.  This converse statement is
illustrated by the following litmus test:

	/* See MP+porevlocks.litmus. */
	void CPU0(void)
	{
		r0 = READ_ONCE(y);
		spin_lock(&mylock);
		r1 = READ_ONCE(x);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		WRITE_ONCE(x, 1);
		spin_unlock(&mylock);
		WRITE_ONCE(y, 1);
	}

This converse to the basic rule guarantees that if CPU0() acquires
mylock before CPU1(), then both r0 and r1 must be set to the value 0.
This also has the consequence that if the final value of r1 is equal
to 0, then the final value of r0 must also be equal to 0.  In contrast,
the weaker rule would say nothing about the final value of r0.
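
The corresponding check for this test is the mirror-image clause, again
essentially what MP+porevlocks.litmus asks herd7 to search for:

	exists (0:r0=1 /\ 0:r1=0)

Here too, herd7 reports that no execution satisfies the clause.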

These examples show only a single pair of CPUs, but the effects of the
locking basic rule extend across multiple acquisitions of a given lock
across multiple CPUs.


Double-Checked Locking
----------------------

It is well known that more than just a lock is required to make
double-checked locking work correctly.  This litmus test illustrates
one incorrect approach:

	/* See Documentation/litmus-tests/locking/DCL-broken.litmus. */
	void CPU0(void)
	{
		r0 = READ_ONCE(flag);
		if (r0 == 0) {
			spin_lock(&lck);
			r1 = READ_ONCE(flag);
			if (r1 == 0) {
				WRITE_ONCE(data, 1);
				WRITE_ONCE(flag, 1);
			}
			spin_unlock(&lck);
		}
		r2 = READ_ONCE(data);
	}
	/* CPU1() is exactly the same as CPU0(). */

There are two problems.  First, there is no ordering between the first
READ_ONCE() of "flag" and the READ_ONCE() of "data".  Second, there is
no ordering between the two WRITE_ONCE() calls.  It should therefore be
no surprise that "r2" can be zero, and a quick herd7 run confirms this.
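
To see the first problem concretely, nothing in the memory model orders
the fast-path load of "flag" against the later load of "data", so on a
weakly ordered CPU the fast path can behave as if the loads had executed
in the opposite order.  The following is a sketch of one such execution,
not code from any litmus test:

	r2 = READ_ONCE(data);	/* Load of "data" satisfied early... */
	r0 = READ_ONCE(flag);	/* ...then "flag" is observed to be 1. */

The second problem is symmetric:  The store to "flag" can become visible
to another CPU before the store to "data".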

One way to fix this is to use smp_load_acquire() and smp_store_release()
as shown in this corrected version:

	/* See Documentation/litmus-tests/locking/DCL-fixed.litmus. */
	void CPU0(void)
	{
		r0 = smp_load_acquire(&flag);
		if (r0 == 0) {
			spin_lock(&lck);
			r1 = READ_ONCE(flag);
			if (r1 == 0) {
				WRITE_ONCE(data, 1);
				smp_store_release(&flag, 1);
			}
			spin_unlock(&lck);
		}
		r2 = READ_ONCE(data);
	}
	/* CPU1() is exactly the same as CPU0(). */

The smp_load_acquire() guarantees that its load from "flag" will
be ordered before the READ_ONCE() from "data", thus solving the first
problem.  The smp_store_release() guarantees that its store will be
ordered after the WRITE_ONCE() to "data", solving the second problem.
The smp_store_release() pairs with the smp_load_acquire(), thus ensuring
that the ordering provided by each actually takes effect.  Again, a
quick herd7 run confirms this.

In short, if you access a lock-protected variable without holding the
corresponding lock, you will need to provide additional ordering, in
this case, via the smp_load_acquire() and the smp_store_release().
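
For reference, here is a sketch of how this same pattern might look in
ordinary kernel C code.  The names (foo_get(), foo_lock, foo_data, and
foo_ready) are made up purely for illustration:

	/* Sketch only: one-time lazy initialization using this pattern. */
	static DEFINE_SPINLOCK(foo_lock);
	static int foo_data;
	static int foo_ready;	/* Plays the role of "flag" above. */

	int foo_get(void)
	{
		/* Lockless fast-path check. */
		if (!smp_load_acquire(&foo_ready)) {
			spin_lock(&foo_lock);
			/* Recheck under the lock. */
			if (!READ_ONCE(foo_ready)) {
				WRITE_ONCE(foo_data, 42);	/* Initialize... */
				smp_store_release(&foo_ready, 1); /* ...then publish. */
			}
			spin_unlock(&foo_lock);
		}
		/* Ordered by the acquire load or by the lock. */
		return READ_ONCE(foo_data);
	}

The acquire load on the fast path pairs with the release store performed
by whichever CPU carries out the initialization, which is what allows the
final READ_ONCE() of foo_data to be trusted without holding foo_lock.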


Ordering Provided by a Lock to CPUs Not Holding That Lock
---------------------------------------------------------

It is not necessarily the case that accesses ordered by locking will be
seen as ordered by CPUs not holding that lock.  Consider this example:

	/* See Z6.0+pooncelock+pooncelock+pombonce.litmus. */
	void CPU0(void)
	{
		spin_lock(&mylock);
		WRITE_ONCE(x, 1);
		WRITE_ONCE(y, 1);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		r0 = READ_ONCE(y);
		WRITE_ONCE(z, 1);
		spin_unlock(&mylock);
	}

	void CPU2(void)
	{
		WRITE_ONCE(z, 2);
		smp_mb();
		r1 = READ_ONCE(x);
	}

Counter-intuitive though it might be, it is quite possible to have
the final value of r0 be 1, the final value of z be 2, and the final
value of r1 be 0.  The reason for this surprising outcome is that CPU2()
never acquired the lock, and thus did not fully benefit from the lock's
ordering properties.
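
In litmus-test form, this counter-intuitive outcome corresponds to an
"exists" clause along the following lines, which is essentially what the
Z6.0 test above asks herd7 to search for:

	exists (1:r0=1 /\ z=2 /\ 2:r1=0)

Herd7 finds executions satisfying this clause for the test as written,
and finds none once smp_mb__after_spinlock() is added as shown next.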

Ordering can be extended to CPUs not holding the lock by careful use
of smp_mb__after_spinlock():

	/* See Z6.0+pooncelock+poonceLock+pombonce.litmus. */
	void CPU0(void)
	{
		spin_lock(&mylock);
		WRITE_ONCE(x, 1);
		WRITE_ONCE(y, 1);
		spin_unlock(&mylock);
	}

	void CPU1(void)
	{
		spin_lock(&mylock);
		smp_mb__after_spinlock();
		r0 = READ_ONCE(y);
		WRITE_ONCE(z, 1);
		spin_unlock(&mylock);
	}

	void CPU2(void)
	{
		WRITE_ONCE(z, 2);
		smp_mb();
		r1 = READ_ONCE(x);
	}

The addition of smp_mb__after_spinlock() strengthens the lock
acquisition sufficiently to rule out the counter-intuitive outcome.
In other words, it prohibits the result where the final value of r0
is 1, the final value of z is 2, and the final value of r1 is 0.


No Roach-Motel Locking!
-----------------------

This example requires familiarity with the herd7 "filter" clause, so
please read up on that topic in litmus-tests.txt.

It is tempting to allow memory-reference instructions to be pulled
into a critical section, but this cannot be allowed in the general case.
For example, consider a spin loop preceding a lock-based critical section.
Now, herd7 does not model spin loops, but we can emulate one with two
loads, with a "filter" clause to constrain the first to return the
initial value and the second to return the updated value, as shown below:

	/* See Documentation/litmus-tests/locking/RM-fixed.litmus. */
	void CPU0(void)
	{
		spin_lock(&lck);
		r2 = atomic_inc_return(&y);
		WRITE_ONCE(x, 1);
		spin_unlock(&lck);
	}

	void CPU1(void)
	{
		r0 = READ_ONCE(x);
		r1 = READ_ONCE(x);
		spin_lock(&lck);
		r2 = atomic_inc_return(&y);
		spin_unlock(&lck);
	}

	filter (1:r0=0 /\ 1:r1=1)
	exists (1:r2=1)

The variable "x" is the control variable for the emulated spin loop.
CPU0() sets it to "1" while holding the lock, and CPU1() emulates the
spin loop by reading it twice, first into "1:r0" (which should get the
initial value "0") and then into "1:r1" (which should get the updated
value "1").

The "filter" clause takes this into account, constraining "1:r0" to
equal "0" and "1:r1" to equal "1".

Then the "exists" clause checks to see if CPU1() acquired the lock first,
which should not happen given the filter clause because CPU0() updates
"x" while holding the lock.  And herd7 confirms this.

But suppose that the compiler was permitted to reorder the spin loop
into CPU1()'s critical section, like this:

	/* See Documentation/litmus-tests/locking/RM-broken.litmus. */
	void CPU0(void)
	{
		int r2;

		spin_lock(&lck);
		r2 = atomic_inc_return(&y);
		WRITE_ONCE(x, 1);
		spin_unlock(&lck);
	}

	void CPU1(void)
	{
		spin_lock(&lck);
		r0 = READ_ONCE(x);
		r1 = READ_ONCE(x);
		r2 = atomic_inc_return(&y);
		spin_unlock(&lck);
	}

	filter (1:r0=0 /\ 1:r1=1)
	exists (1:r2=1)

If "1:r0" is equal to "0", "1:r1" can never equal "1" because CPU0()
cannot update "x" while CPU1() holds the lock.  And herd7 confirms this,
showing zero executions matching the "filter" criteria.

And this is why Linux-kernel lock and unlock primitives must prevent
code from entering critical sections.  It is not sufficient to only
prevent code from leaving them.