MARKING SHARED-MEMORY ACCESSES
==============================

This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is "normal" as in accesses that do
not use read-modify-write atomic operations.  It also describes how to
document these accesses, both with comments and with special assertions
processed by the Kernel Concurrency Sanitizer (KCSAN).  This discussion
builds on an earlier LWN article [1] and Linux Foundation mentorship
session [2].


ACCESS-MARKING OPTIONS
======================

The Linux kernel provides the following access-marking options:

1.	Plain C-language accesses (unmarked), for example, "a = b;"

2.	Data-race marking, for example, "data_race(a = b);"

3.	READ_ONCE(), for example, "a = READ_ONCE(b);"
	The various forms of atomic_read() also fit in here.

4.	WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
	The various forms of atomic_set() also fit in here.

5.	__data_racy, for example, "int __data_racy a;"

6.	KCSAN's negative-marking assertions, ASSERT_EXCLUSIVE_ACCESS()
	and ASSERT_EXCLUSIVE_WRITER(), are described in the
	"ACCESS-DOCUMENTATION OPTIONS" section below.

These may be used in combination, as shown in this admittedly improbable
example:

	WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));

Neither plain C-language accesses nor data_race() (#1 and #2 above) place
any sort of constraint on the compiler's choice of optimizations [3].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
preferable to data_race(), which in turn is usually preferable to plain
C-language accesses.  It is permissible to combine #2 and #3, for example,
data_race(READ_ONCE(a)), which will both restrict compiler optimizations
and disable KCSAN diagnostics.

KCSAN will complain about many types of data races involving plain
C-language accesses, but marking all accesses involved in a given data
race with one of data_race(), READ_ONCE(), or WRITE_ONCE() will prevent
KCSAN from complaining.  Of course, lack of KCSAN complaints does not
imply correct code.  Therefore, please take a thoughtful approach
when responding to KCSAN complaints.  Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.

In fact, the following sections describe situations where use of
data_race() and even plain C-language accesses is preferable to
READ_ONCE() and WRITE_ONCE().


Use of the data_race() Macro
----------------------------

Here are some situations where data_race() should be used instead of
READ_ONCE() and WRITE_ONCE():

1.	Data-racy loads from shared variables whose values are used only
	for diagnostic purposes.

2.	Data-racy reads whose values are checked against marked reload.

3.	Reads whose values feed into error-tolerant heuristics.

4.	Writes setting values that feed into error-tolerant heuristics.


Data-Racy Reads for Approximate Diagnostics

Approximate diagnostics include lockdep reports, monitoring/statistics
(including /proc and /sys output), WARN*()/BUG*() checks whose return
values are ignored, and other situations where reads from shared variables
are not an integral part of the core concurrency design.

In fact, use of data_race() instead of READ_ONCE() for these diagnostic
reads can enable better checking of the remaining accesses implementing
the core concurrency design.  For example, suppose that the core design
prevents any non-diagnostic reads from shared variable x from running
concurrently with updates to x.  Then using plain C-language writes
to x allows KCSAN to detect reads from x from within regions of code
that fail to exclude the updates.  In this case, it is important to use
data_race() for the diagnostic reads because otherwise KCSAN would give
false-positive warnings about these diagnostic reads.
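
For example, a counter that is updated only under a lock but printed
locklessly might be handled as follows (a minimal sketch, with the
hypothetical names foo_stat, foo_stat_lock, inc_foo_stat(), and
show_foo_stat()):

	/* Hypothetical counter and lock, for illustration only. */
	int foo_stat;
	DEFINE_SPINLOCK(foo_stat_lock);

	void inc_foo_stat(void)
	{
		spin_lock(&foo_stat_lock);
		foo_stat++; /* Plain write lets KCSAN catch buggy readers. */
		spin_unlock(&foo_stat_lock);
	}

	/* Diagnostic-only read:  The occasional bogus value is harmless. */
	void show_foo_stat(void)
	{
		pr_info("foo_stat: %d\n", data_race(foo_stat));
	}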

If it is necessary to both restrict compiler optimizations and disable
KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
data_race(READ_ONCE(a)).

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Data-Racy Reads That Are Checked Against Marked Reload

The values from some reads are not implicitly trusted.  They are instead
fed into some operation that checks the full value against a later marked
load from memory, which means that the occasional arbitrarily bogus value
is not a problem.  For example, if a bogus value is fed into cmpxchg(),
all that happens is that this cmpxchg() fails, which normally results
in a retry.  Unless the race condition that resulted in the bogus value
recurs, this retry will with high probability succeed, so no harm done.

However, please keep in mind that a data_race() load feeding into
a cmpxchg_relaxed() might still be subject to load fusing on some
architectures.  Therefore, it is best to capture the return value from
the failing cmpxchg() for the next iteration of the loop, an approach
that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.
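
For example, a lockless increment might be written as follows, capturing
the failing cmpxchg()'s return value for the next pass through the loop
(a minimal sketch using a hypothetical shared counter "foo"; the
xor_shift_foo() function near the end of this document uses the same
pattern):

	int foo; /* Hypothetical shared counter. */

	int inc_foo(void)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = old + 1;
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}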

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Reads Feeding Into Error-Tolerant Heuristics

Values from some reads feed into heuristics that can tolerate occasional
errors.  Such reads can use data_race(), thus allowing KCSAN to focus on
the other accesses to the relevant shared variables.  But please note
that data_race() loads are subject to load fusing, which can result in
consistent errors, which in turn are quite capable of breaking heuristics.
Therefore use of data_race() should be limited to cases where some other
code (such as a barrier() call) will force the occasional reload.
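
For example, a loop that treats a flag as a mere hint might look as
follows (a minimal sketch, with need_rebalance and do_low_priority_work()
being hypothetical):

	bool need_rebalance; /* Hypothetical heuristic hint. */

	void low_priority_loop(void)
	{
		while (!data_race(need_rebalance)) {
			do_low_priority_work();
			barrier(); /* Forces the occasional reload. */
		}
	}

A delayed reaction to need_rebalance being set is harmless here, but the
barrier() prevents the compiler from fusing the loads and thus hoisting
the test out of the loop entirely.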

Note that this use case requires that the heuristic be able to handle
any possible error.  In contrast, if the heuristic might be fatally
confused by one or more of the possible erroneous values, use READ_ONCE()
instead of data_race().

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Writes Setting Values Feeding Into Error-Tolerant Heuristics

The values read into error-tolerant heuristics come from somewhere,
for example, from sysfs.  This means that some code in sysfs writes
to this same variable, and these writes can also use data_race().
After all, if the heuristic can tolerate the occasional bogus value
due to compiler-mangled reads, it can also tolerate the occasional
compiler-mangled write, at least assuming that the proper value is in
place once the write completes.
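
For example, a sysfs store function setting a heuristic threshold might
look as follows (a minimal sketch, with rebalance_threshold and the
surrounding attribute wiring being hypothetical):

	/* Hypothetical heuristic knob exposed via sysfs. */
	int rebalance_threshold = 16;

	static ssize_t threshold_store(struct kobject *kobj,
				       struct kobj_attribute *attr,
				       const char *buf, size_t count)
	{
		int newval;

		if (kstrtoint(buf, 0, &newval))
			return -EINVAL;
		data_race(rebalance_threshold = newval); /* Heuristic input. */
		return count;
	}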

Plain C-language stores can also be used for this use case.  However,
in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.


Use of Plain C-Language Accesses
--------------------------------

Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():

1.	Accesses protected by mutual exclusion, including strict locking
	and sequence locking.

2.	Initialization-time and cleanup-time accesses.  This covers a
	wide variety of situations, including the uniprocessor phase of
	system boot, variables to be used by not-yet-spawned kthreads,
	structures not yet published to reference-counted or RCU-protected
	data structures, and the cleanup side of any of these situations.

3.	Per-CPU variables that are not accessed from other CPUs.

4.	Private per-task variables, including on-stack variables, some
	fields in the task_struct structure, and task-private heap data.

5.	Any other loads for which there is not supposed to be a concurrent
	store to that same variable.

6.	Any other stores for which there should be neither concurrent
	loads nor concurrent stores to that same variable.

	But note that KCSAN makes three explicit exceptions to this rule
	by default, refraining from flagging plain C-language stores:

	a.	No matter what.  You can override this default by building
		with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.

	b.	When the store writes the value already contained in
		that variable.  You can override this default by building
		with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.

	c.	When one of the stores is in an interrupt handler and
		the other in the interrupted code.  You can override this
		default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.

Note that it is important to use plain C-language accesses in these cases,
because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.


Use of __data_racy
------------------

Adding the __data_racy type qualifier to the declaration of a variable
causes KCSAN to treat all accesses to that variable as if they were
enclosed by data_race().  However, __data_racy does not affect the
compiler, though one could imagine hardened kernel builds treating the
__data_racy type qualifier as if it were the volatile keyword.

Note well that __data_racy is subject to the same pointer-declaration
rules as are other type qualifiers such as const and volatile.
For example:

	int __data_racy *p; // Pointer to data-racy data.
	int *__data_racy p; // Data-racy pointer to non-data-racy data.


ACCESS-DOCUMENTATION OPTIONS
============================

It is important to comment marked accesses so that people reading your
code, yourself included, are reminded of the synchronization design.
However, it is even more important to comment plain C-language accesses
that are intentionally involved in data races.  Such comments are
needed to remind people reading your code, again, yourself included,
of how the compiler has been prevented from optimizing those accesses
into concurrency bugs.

It is also possible to tell KCSAN about your synchronization design.
For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
concurrent access to variable foo by any other CPU is an error, even
if that concurrent access is marked with READ_ONCE().  In addition,
ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
to be concurrent reads from foo from other CPUs, it is an error for some
other CPU to be concurrently writing to foo, even if that concurrent
write is marked with data_race() or WRITE_ONCE().
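
For example, a lock-protected writer that permits lockless readers might
be documented and checked as follows (a minimal sketch, with nstats and
nstats_lock being hypothetical):

	int nstats; /* Hypothetical: written under nstats_lock, read locklessly. */
	DEFINE_SPINLOCK(nstats_lock);

	void inc_nstats(void)
	{
		spin_lock(&nstats_lock);
		WRITE_ONCE(nstats, nstats + 1); /* Readers use READ_ONCE(). */
		ASSERT_EXCLUSIVE_WRITER(nstats);
		spin_unlock(&nstats_lock);
	}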

Note that although KCSAN will call out data races involving either
ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
and data_race() writes on the other, KCSAN will not report the location
of these data_race() writes.


EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
your concurrent algorithm, to help the human reader, and to inform
KCSAN of aspects of your concurrency design.  This section looks at a
few examples showing how this can be done.


Lock Protection With Lockless Diagnostic Access
-----------------------------------------------

For example, suppose a shared variable "foo" is read only while a
reader-writer spinlock is read-held, written only while that same
spinlock is write-held, except that it is also read locklessly for
diagnostic purposes.  The code might look as follows:

	int foo;
	DEFINE_RWLOCK(foo_rwlock);

	void update_foo(int newval)
	{
		write_lock(&foo_rwlock);
		foo = newval;
		do_something(newval);
		write_unlock(&foo_rwlock);
	}

	int read_foo(void)
	{
		int ret;

		read_lock(&foo_rwlock);
		do_something_else();
		ret = foo;
		read_unlock(&foo_rwlock);
		return ret;
	}

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(foo));
	}

The reader-writer lock prevents the compiler from introducing concurrency
bugs into any part of the main algorithm using foo, which means that
the accesses to foo within both update_foo() and read_foo() can (and
should) be plain C-language accesses.  One benefit of making them
plain C-language accesses is that KCSAN can detect any erroneous lockless
reads from or updates to foo.  The data_race() in read_foo_diagnostic()
tells KCSAN that data races are expected, and should be silently
ignored.  This data_race() also tells the human reading the code that
read_foo_diagnostic() might sometimes return a bogus value.

If it is necessary to suppress compiler optimization and also detect
buggy lockless writes, read_foo_diagnostic() can be updated as follows:

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
	}

Alternatively, given that KCSAN is to ignore all accesses in this function,
this function can be marked __no_kcsan and the data_race() can be dropped:

	void __no_kcsan read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", READ_ONCE(foo));
	}

However, in order for KCSAN to detect buggy lockless writes, your kernel
must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.  If you
need KCSAN to detect such a write even if that write did not change
the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
If you need KCSAN to detect such a write happening in an interrupt handler
running on the same CPU doing the legitimate lock-protected write, you
also need CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these
Kconfig options set properly, KCSAN can be quite helpful, although
it is not necessarily a full replacement for hardware watchpoints.
On the other hand, neither are hardware watchpoints a full replacement
for KCSAN because it is not always easy to tell hardware watchpoints to
conditionally trap on accesses.


Lock-Protected Writes With Lockless Reads
-----------------------------------------

For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly.  The code might look
as follows:

	int foo;
	DEFINE_SPINLOCK(foo_lock);

	void update_foo(int newval)
	{
		spin_lock(&foo_lock);
		WRITE_ONCE(foo, newval);
		ASSERT_EXCLUSIVE_WRITER(foo);
		do_something(newval);
		spin_unlock(&foo_lock);
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is read locklessly, all accesses are marked.  The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
concurrent write, whether marked or not.


Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required.  These times are indicated via a global flag.  The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

	bool global_flag;
	DEFINE_SPINLOCK(global_lock);
	struct foo {
		spinlock_t f_lock;
		int f_data;
	};

	/* All foo structures are in the following array. */
	int nfoo;
	struct foo *foo_array;

	void do_something_locked(struct foo *fp)
	{
		/* This works even if data_race() returns nonsense. */
		if (!data_race(global_flag)) {
			spin_lock(&fp->f_lock);
			if (!smp_load_acquire(&global_flag)) {
				do_something(fp);
				spin_unlock(&fp->f_lock);
				return;
			}
			spin_unlock(&fp->f_lock);
		}
		spin_lock(&global_lock);
		/* global_lock held, thus global flag cannot be set. */
		spin_lock(&fp->f_lock);
		spin_unlock(&global_lock);
		/*
		 * global_flag might be set here, but begin_global()
		 * will wait for ->f_lock to be released.
		 */
		do_something(fp);
		spin_unlock(&fp->f_lock);
	}

	void begin_global(void)
	{
		int i;

		spin_lock(&global_lock);
		WRITE_ONCE(global_flag, true);
		for (i = 0; i < nfoo; i++) {
			/*
			 * Wait for pre-existing local locks.  One at
			 * a time to avoid lockdep limitations.
			 */
			spin_lock(&foo_array[i].f_lock);
			spin_unlock(&foo_array[i].f_lock);
		}
	}

	void end_global(void)
	{
		smp_store_release(&global_flag, false);
		spin_unlock(&global_lock);
	}

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, which, if global_flag is still false,
prevents begin_global() from completing.  It is therefore safe to invoke
do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false.  The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked().  The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.

The smp_load_acquire() and smp_store_release() are required because
changes to a foo structure between calls to begin_global() and
end_global() are carried out without holding that structure's ->f_lock.
The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.
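
For example, a global-mode update relying on this ordering might look
as follows (a minimal sketch; update_all_foo() is hypothetical and must
be invoked between begin_global() and end_global()):

	/* Hypothetical helper.  Caller has invoked begin_global(). */
	void update_all_foo(int newdata)
	{
		int i;

		for (i = 0; i < nfoo; i++)
			foo_array[i].f_data = newdata; /* No ->f_lock needed. */
	}

The plain C-language writes are safe because begin_global() has excluded
all holders of ->f_lock, and the smp_store_release() in end_global()
publishes them to the next smp_load_acquire() in do_something_locked().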


Lockless Reads and Writes
-------------------------

For another example, suppose a shared variable "foo" is both read and
updated locklessly.  The code might look as follows:

	int foo;

	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is accessed locklessly, all accesses are marked.  It does
not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
there really can be concurrent lockless writers.  KCSAN would
flag any concurrent plain C-language reads from foo, and given
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
C-language writes to foo.


Lockless Reads and Writes, But With Single-Threaded Initialization
------------------------------------------------------------------

For yet another example, suppose that foo is initialized in a
single-threaded manner, but that a number of kthreads are then created
that locklessly and concurrently access foo.  Some snippets of this code
might look as follows:

	int foo;

	void initialize_foo(int initval, int nkthreads)
	{
		int i;

		foo = initval;
		ASSERT_EXCLUSIVE_ACCESS(foo);
		for (i = 0; i < nkthreads; i++)
			kthread_run(access_foo_concurrently, ...);
	}

	/* Called from access_foo_concurrently(). */
	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	/* Also called from access_foo_concurrently(). */
	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

The initialize_foo() uses a plain C-language write to foo because there
are not supposed to be concurrent accesses during initialization.  The
ASSERT_EXCLUSIVE_ACCESS() call allows KCSAN to flag buggy concurrent
unmarked reads, and further allows KCSAN to flag buggy concurrent writes,
even if:  (1) Those writes are marked or (2) The kernel was built with
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.


Checking Stress-Test Race Coverage
----------------------------------

When designing stress tests it is important to ensure that race conditions
of interest really do occur.  For example, consider the following code
fragment:

	int foo;

	int update_foo(int newval)
	{
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		return READ_ONCE(foo);
	}

If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
invoked concurrently, the stress test should force this concurrency to
actually happen.  KCSAN can evaluate the stress test when the above code
is modified to read as follows:

	int foo;

	int update_foo(int newval)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			ASSERT_EXCLUSIVE_ACCESS(foo);
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return READ_ONCE(foo);
	}

If a given stress-test run does not result in KCSAN complaints from
each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement.  If the stress test were to be evaluated
on a regular basis, it would be wise to place the above instances of
ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they did not result in
false positives when not evaluating the stress test.
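
One way to do this is to define a wrapper macro keyed off a test-only
Kconfig symbol (a sketch; the CONFIG_FOO_STRESS_COVERAGE symbol is
made up for illustration):

	/* CONFIG_FOO_STRESS_COVERAGE is a made-up example symbol. */
	#ifdef CONFIG_FOO_STRESS_COVERAGE
	#define COVERAGE_ASSERT_EXCLUSIVE_ACCESS(var) \
		ASSERT_EXCLUSIVE_ACCESS(var)
	#else
	#define COVERAGE_ASSERT_EXCLUSIVE_ACCESS(var) do { } while (0)
	#endif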


REFERENCES
==========

[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
    https://lwn.net/Articles/816854/

[2] "The Kernel Concurrency Sanitizer"
    https://www.linuxfoundation.org/webinars/the-kernel-concurrency-sanitizer

[3] "Who's afraid of a big bad optimizing compiler?"
    https://lwn.net/Articles/793253/