Lines Matching +full:load +full:- +full:acquire
1 Explanation of the Linux-Kernel Memory Consistency Model
15 7. THE PROGRAM ORDER RELATION: po AND po-loc
18 10. THE READS-FROM RELATION: rf, rfi, and rfe
20 12. THE FROM-READS RELATION: fr, fri, and fre
22 14. PROPAGATION ORDER RELATION: cumul-fence
28 20. THE HAPPENS-BEFORE RELATION: hb
29 21. THE PROPAGATES-BEFORE RELATION: pb
30 22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
31 23. SRCU READ-SIDE CRITICAL SECTIONS
39 ------------
41 The Linux-kernel memory consistency model (LKMM) is rather complex and
43 linux-kernel.bell and linux-kernel.cat files that make up the formal
69 ----------
74 load instructions. The LKMM makes these predictions for code running
85 Each load instruction must obtain the value written by the most recent
87 factors such as DMA and mixed-size accesses.) But on multiprocessor
98 ----------------
159 predict that r1 = 42 or r2 = -7, because neither of those values ever
181 ----------------------------
190 by each load is simply the value written by the most recently executed
203 P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
204 would be 1 since a load obtains its value from the most recent
255 -------------------
286 ------
306 Atomic read-modify-write accesses, such as atomic_inc() or xchg(),
313 logical computations, control-flow instructions, or accesses to
319 is concerned only with the store itself -- its value and its address
320 -- not the computation leading up to it.
328 THE PROGRAM ORDER RELATION: po AND po-loc
329 -----------------------------------------
336 that X is po-before Y (written as "X ->po Y" in formulas) if X occurs
339 This is inherently a single-CPU relation; two instructions executing
343 po-loc is a sub-relation of po. It links two memory accesses when the
345 same memory location (the "-loc" suffix).
348 program order we need to explain. The LKMM was inspired by low-level
375 need not even be stored in normal memory at all -- in principle a
381 ---------
398 For this code, the LKMM predicts that the load from x will always be
411 Given this version of the code, the LKMM would predict that the load
428 ------------------------------------------
481 a control dependency from the load to the store.
484 come earlier in program order. Symbolically, if we have R ->data X,
485 R ->addr X, or R ->ctrl X (where R is a read event), then we must also
486 have R ->po X. It wouldn't make sense for a computation to depend
498 There appears to be a data dependency from the load of x to the store
505 the value returned by the load from x, which would certainly destroy
507 the load generated for a READ_ONCE() -- that's one of the nice
508 properties of READ_ONCE() -- but it is allowed to ignore the load's
534 from the load can be discarded, breaking the address dependency.
542 THE READS-FROM RELATION: rf, rfi, and rfe
543 -----------------------------------------
545 The reads-from relation (rf) links a write event to a read event when
547 write. In colloquial terms, the load "reads from" the store. We
548 write W ->rf R to indicate that the load R reads from the store W. We
549 further distinguish the cases where the load and the store occur on
550 the same CPU (internal reads-from, or rfi) and where they occur on
551 different CPUs (external reads-from, or rfe).
559 of load-tearing, where a load obtains some of its bits from one store
561 and WRITE_ONCE() will prevent load-tearing; it's not possible to have:
580 On the other hand, load-tearing is unavoidable when mixed-size
601 If r1 = 0x56781234 (little-endian!) at the end, then P1 must have read
602 from both of P0's stores. It is possible to handle mixed-size and
609 ------------------------------------------------------------------
612 multi-processor system, the CPUs must share a consistent view of the
628 hardware-centric view, the order in which the stores get written to
629 x's cache line). We write W ->co W' if W comes before W' in the
636 Write-write coherence: If W ->po-loc W' (i.e., W comes before
638 and W' are two stores, then W ->co W'.
640 Write-read coherence: If W ->po-loc R, where W is a store and R
641 is a load, then R must read from W or from some other store
644 Read-write coherence: If R ->po-loc W, where R is a load and W
648 Read-read coherence: If R ->po-loc R', where R and R' are two
675 write-write coherence rule: Since the store of 23 comes later in
689 If r1 = 666 at the end, this would violate the read-write coherence
690 rule: The READ_ONCE() load comes before the WRITE_ONCE() store in
712 would violate the read-read coherence rule: The r1 load comes before
713 the r2 load in program order, so it must not read from a store that
719 encoded in Itanium's Very-Long-Instruction-Word format, and it is yet
723 Just like the po relation, co is inherently an ordering -- it is not
731 related by po. Coherence order is strictly per-location, or if you
735 THE FROM-READS RELATION: fr, fri, and fre
736 -----------------------------------------
738 The from-reads relation (fr) can be a little difficult for people to
739 grok. It describes the situation where a load reads a value that gets
740 overwritten by a store. In other words, we have R ->fr W when the
764 the load and the store are on the same CPU) and fre (when they are on
769 event W for the same location, we will have R ->fr W if and only if
770 the write which R reads from is co-before W. In symbols,
772 (R ->fr W) := (there exists W' with W' ->rf R and W' ->co W).
776 --------------------
795 arrange for the store to be co-later than (i.e., to overwrite) any
798 When a CPU executes a load instruction R, it first checks to see
799 whether there are any as-yet unexecuted store instructions, for the
801 uses the value of the po-latest such store as the value obtained by R,
803 CPU asks the memory subsystem for the value to load and we say that R
805 of the co-latest store to the location in question which has already
814 First-In-First-Out order, and consequently the processing delay
816 have a partitioned design that results in non-FIFO behavior. We will
819 Note that load instructions may be executed speculatively and may be
821 premature executions; we simply say that the load executes at the
834 the CPU to execute all po-earlier instructions before any
835 po-later instructions;
837 smp_rmb() forces the CPU to execute all po-earlier loads
838 before any po-later loads;
840 smp_wmb() forces the CPU to execute all po-earlier stores
841 before any po-later stores;
843 Acquire fences, such as smp_load_acquire(), force the CPU to
844 execute the load associated with the fence (e.g., the load
845 part of an smp_load_acquire()) before any po-later
849 execute all po-earlier instructions before the store
856 For each other CPU C', smp_wmb() forces all po-earlier stores
857 on C to propagate to C' before any po-later stores do.
860 a release fence is executed (including all po-earlier
865 executed (including all po-earlier stores on C) is forced to
866 propagate to all other CPUs before any instructions po-after
873 strong fences are A-cumulative. By contrast, smp_wmb() fences are not
874 A-cumulative; they only affect the propagation of stores that are
882 PROPAGATION ORDER RELATION: cumul-fence
883 ---------------------------------------
886 smp_wmb() fences) are collectively referred to as cumul-fences, even
887 though smp_wmb() isn't A-cumulative. The cumul-fence relation is
894 where either X = E or else E ->rf X; or
897 order, where either X = E or else E ->rf X.
900 and W ->cumul-fence W', then W must propagate to any given CPU
906 -------------------------------------------------
919 Atomicity: This requires that atomic read-modify-write
923 Happens-before: This requires that certain instructions are
929 Rcu: This requires that RCU read-side critical sections and
931 Grace-Period Guarantee.
933 Plain-coherence: This requires that plain memory accesses
938 memory models (such as those for C11/C++11). The "happens-before" and
940 "rcu" and "plain-coherence" axioms are specific to the LKMM.
946 -----------------------------------
951 each load between the store that it reads from and the following
957 and po-loc relations agree with this global ordering; in other words,
958 whenever we have X ->rf Y or X ->co Y or X ->fr Y or X ->po-loc Y, the
964 X0 -> X1 -> X2 -> ... -> Xn -> X0,
966 where each of the links is either rf, co, fr, or po-loc. This has to
976 -------------------
978 What does it mean to say that a read-modify-write (rmw) update, such
994 occurs in between CPU 1's load and store. To put it another way, the
1002 atomic read-modify-write and W' is the write event which R reads from,
1006 (R ->rmw W) implies (there is no X with R ->fr X and X ->co W),
1016 Z0 ->rf Y1 ->rmw Z1 ->rf ... ->rf Yn ->rmw Zn,
1019 degenerate case). We write this relation as: Z0 ->rmw-sequence Zn.
1023 cumul-fence relation. That is, if we have:
1025 U ->cumul-fence X ->rmw-sequence Y
1027 then also U ->cumul-fence Y. Thinking about this in terms of the
1028 operational model, U ->cumul-fence X says that the store U propagates
1032 the w-post-bounded relation defined below in the PLAIN ACCESSES AND
1038 updates with full-barrier semantics did not always guarantee ordering
1039 at least as strong as atomic updates with release-barrier semantics.)
1043 -----------------------------------------
1047 "preserved program order") relation, which links the po-earlier
1048 instruction to the po-later instruction and is thus a sub-relation of
1053 memory accesses with X ->po Y; then the CPU must execute X before Y if
1065 X is also an acquire fence, such as smp_load_acquire();
1072 X and Y are both loads, X ->addr Y (i.e., there is an address
1078 store; either a data, address, or control dependency from a load R to
1086 Dependencies to load instructions are more problematic. To begin with,
1087 there is no such thing as a data dependency to a load. Next, a CPU
1088 has no reason to respect a control dependency to a load, because it
1089 can always satisfy the second load speculatively before the first, and
1090 then ignore the result if it turns out that the second load shouldn't
1094 To be fair about it, all Linux-supported architectures do execute
1096 After all, a CPU cannot ask the memory subsystem to load a value from
1098 the split-cache design used by Alpha can cause it to behave in a way
1105 two loads. This happens when there is a dependency from a load to a
1106 store and a second, po-later load reads from that store:
1108 R ->dep W ->rfi R',
1120 (In theory, a CPU might forward a store to a load when it runs across
1127 because it could tell that the store and the second load access the
1135 R ->po-loc W
1137 (the po-loc link says that R comes before W in program order and they
1141 violation of the read-write coherence rule. Similarly, if we had
1143 W ->po-loc W'
1147 overwrite W', in violation of the write-write coherence rule.
1149 allowing out-of-order writes like this to occur. The model avoided
1150 violating the write-write coherence rule by requiring the CPU not to
1155 ------------------------
1162 int y = -1;
1181 can malfunction on Alpha systems (notice that P1 uses an ordinary load
1187 to ptr does. And since P1 can't execute its second load
1188 until it knows what location to load from, i.e., after executing its
1189 first load, the value x = 1 must have propagated to P1 before the
1190 second load executed. So why doesn't r2 end up equal to 1?
1208 adds this fence after every READ_ONCE() and atomic load on Alpha. The
1209 effect of the fence is to cause the CPU not to execute any po-later
1223 its second load, the x = 1 store would already be fully processed by
1229 The LKMM requires that smp_rmb(), acquire fences, and strong fences
1230 share this property: They do not allow the CPU to execute any po-later
1231 instructions (or po-later loads in the case of smp_rmb()) until all
1234 po-earlier stores to propagate to every other CPU in the system; then
1236 as of that time -- not just the stores received when the strong fence
1243 THE HAPPENS-BEFORE RELATION: hb
1244 -------------------------------
1246 The happens-before relation (hb) links memory accesses that have to
1250 W ->rfe R implies that W and R are on different CPUs. It also means
1253 must have executed before R, and so we have W ->hb R.
1255 The equivalent fact need not hold if W ->rfi R (i.e., W and R are on
1262 W ->coe W'. This means that W and W' are stores to the same location,
1268 R ->fre W means that W overwrites the value which R reads, but it
1300 order. We can deduce this from the operational model; if P0's load
1302 been forwarded to the load, so r1 would have ended up equal to 1, not
1305 order, and P1's store propagated to P0 before P0's load executed.
1326 in program order. If the second load had executed before the first
1328 load executed, and so r1 would have been 9 rather than 0. In this
1330 because P1's store overwrote the value read by P0's first load, and
1331 P1's store propagated to P0 before P0's second load executed.
1357 from P1's second load to its first (backwards!). The reason is
1364 P1 must execute its second load before the first. Indeed, if the load
1366 have propagated to P1 by the time P1's load from buf executed, so r2
1374 link (hence an hb link) from the first load to the second, and the
1375 prop relation would give an hb link from the second load to the first.
1378 outcome is impossible -- as it should be.
1381 followed by an arbitrary number of cumul-fence links, ending with an
1385 followed by two cumul-fences and an rfe link, utilizing the fact that
1386 release fences are A-cumulative:
1414 link from P0's store to its load. This is because P0's store gets
1417 store to y does (the first cumul-fence), the store to y propagates to P2
1418 before P2's load and store execute, P2's smp_store_release()
1420 store to z does (the second cumul-fence), and P0's load executes after the
1425 requirement is the content of the LKMM's "happens-before" axiom.
1433 THE PROPAGATES-BEFORE RELATION: pb
1434 ----------------------------------
1436 The propagates-before (pb) relation capitalizes on the special
1438 store is coherence-later than E and propagates to every CPU and to RAM
1440 F via a coe or fre link, an arbitrary number of cumul-fences, an
1448 E ->coe W ->cumul-fence* X ->rfe? Y ->strong-fence Z ->hb* F,
1452 be equal to X). Because of the cumul-fence links, we know that W will
1460 The case where E is a load is exactly the same, except that the first
1467 coherence order, contradicting the fact that E ->coe W. If E was a
1468 load, the memory subsystem would then be forced to satisfy E's read
1470 contradicting the fact that E ->fre W.
1495 If r0 = 0 at the end then there is a pb link from P0's load to P1's
1496 load: an fre link from P0's load to P1's store (which overwrites the
1497 value read by P0), and a strong fence between P1's store and its load.
1498 In this example, the sequences of cumul-fence and hb links are empty.
1502 Similarly, if r1 = 0 at the end then there is a pb link from P1's load
1513 RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
1514 ------------------------------------------------------------------------
1516 RCU (Read-Copy-Update) is a powerful synchronization mechanism. It
1517 rests on two concepts: grace periods and read-side critical sections.
1520 synchronize_rcu(). A read-side critical section (or just critical
1526 Grace-Period Guarantee, which states that a critical section can never
1576 suitable places in the RCU-related code. Thus, if a critical section
1589 rcu-link relation. rcu-link encompasses a very general notion of
1592 E ->rcu-link F includes cases where E is po-before some memory-access
1593 event X, F is po-after some memory-access event Y, and we have any of
1594 X ->rfe Y, X ->co Y, or X ->fr Y.
1596 The formal definition of the rcu-link relation is more than a little
1600 about rcu-link is the information in the preceding paragraph.
1602 The LKMM also defines the rcu-gp and rcu-rscsi relations. They bring
1603 grace periods and read-side critical sections into the picture, in the
1606 E ->rcu-gp F means that E and F are in fact the same event,
1610 E ->rcu-rscsi F means that E and F are the rcu_read_unlock()
1611 and rcu_read_lock() fence events delimiting some read-side
1616 If we think of the rcu-link relation as standing for an extended
1617 "before", then X ->rcu-gp Y ->rcu-link Z roughly says that X is a
1621 after X ends.) Similarly, X ->rcu-rscsi Y ->rcu-link Z says that X is
1624 The LKMM goes on to define the rcu-order relation as a sequence of
1625 rcu-gp and rcu-rscsi links separated by rcu-link links, in which the
1626 number of rcu-gp links is >= the number of rcu-rscsi links. For
1629 X ->rcu-gp Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V
1631 would imply that X ->rcu-order V, because this sequence contains two
1632 rcu-gp links and one rcu-rscsi link. (It also implies that
1633 X ->rcu-order T and Z ->rcu-order V.) On the other hand:
1635 X ->rcu-rscsi Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V
1637 does not imply X ->rcu-order V, because the sequence contains only
1638 one rcu-gp link but two rcu-rscsi links.
1640 The rcu-order relation is important because the Grace Period Guarantee
1641 means that rcu-order links act kind of like strong fences. In
1642 particular, E ->rcu-order F implies not only that E begins before F
1643 ends, but also that any write po-before E will propagate to every CPU
1644 before any instruction po-after F can execute. (However, it does not
1646 fence event is linked to itself by rcu-order as a degenerate case.)
1651 G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F.
1654 and there are events X, Y and a read-side critical section C such that:
1656 1. G = W is po-before or equal to X;
1660 3. Y is po-before Z;
1666 From 1 - 4 we deduce that the grace period G ends before the critical
1671 executing and hence before any instruction po-after F can execute.
1673 covered by rcu-order.
1675 The rcu-fence relation is a simple extension of rcu-order. While
1676 rcu-order only links certain fence events (calls to synchronize_rcu(),
1677 rcu_read_lock(), or rcu_read_unlock()), rcu-fence links any events
1678 that are separated by an rcu-order link. This is analogous to the way
1679 the strong-fence relation links events that are separated by an
1680 smp_mb() fence event (as mentioned above, rcu-order links act kind of
1681 like strong fences). Written symbolically, X ->rcu-fence Y means
1684 X ->po E ->rcu-order F ->po Y.
1688 every CPU before Y executes. Thus rcu-fence is sort of a
1689 "super-strong" fence: Unlike the original strong fences (smp_mb() and
1690 synchronize_rcu()), rcu-fence is able to link events on different
1691 CPUs. (Perhaps this fact should lead us to say that rcu-fence isn't
1694 Finally, the LKMM defines the RCU-before (rb) relation in terms of
1695 rcu-fence. This is done in essentially the same way as the pb
1696 relation was defined in terms of strong-fence. We will omit the
1697 details; the end result is that E ->rb F implies E must execute
1698 before F, just as E ->pb F does (and for much the same reasons).
1703 and F with E ->rcu-link F ->rcu-order E. Or to put it a third way,
1704 the axiom requires that there are no cycles consisting of rcu-gp and
1705 rcu-rscsi alternating with rcu-link, where the number of rcu-gp links
1706 is >= the number of rcu-rscsi links.
1721 are events Q and R where Q is po-after L (which marks the start of the
1722 critical section), Q is "before" R in the sense used by the rcu-link
1723 relation, and R is po-before the grace period S. Thus we have:
1725 L ->rcu-link S.
1731 some event X which is po-after S. Symbolically, this amounts to:
1733 S ->po X ->hb* Z ->fr W ->rf Y ->po U.
1737 discussion of the rcu-link relation earlier) that S and U are related
1738 by rcu-link:
1740 S ->rcu-link U.
1742 Since S is a grace period we have S ->rcu-gp S, and since L and U are
1743 the start and end of the critical section C we have U ->rcu-rscsi L.
1746 S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,
1751 For something a little more down-to-earth, let's see how the axiom
1776 P1's load at W reads from, so we have W ->fre Y. Since S ->po W and
1777 also Y ->po U, we get S ->rcu-link U. In addition, S ->rcu-gp S
1780 If r1 = 1 at the end then P1's load at Z reads from P0's store at X,
1781 so we have X ->rfe Z. Together with L ->po X and Z ->po S, this
1782 yields L ->rcu-link S. And since L and U are the start and end of a
1783 critical section, we have U ->rcu-rscsi L.
1785 Then U ->rcu-rscsi L ->rcu-link S ->rcu-gp S ->rcu-link U is a
1823 that U0 ->rcu-rscsi L0 ->rcu-link S1 ->rcu-gp S1 ->rcu-link U2 ->rcu-rscsi
1824 L2 ->rcu-link U0. However this cycle is not forbidden, because the
1825 sequence of relations contains fewer instances of rcu-gp (one) than of
1826 rcu-rscsi (two). Consequently the outcome is allowed by the LKMM.
1831 -------------------- -------------------- --------------------
1852 The LKMM supports SRCU (Sleepable Read-Copy-Update) in addition to
1854 relations srcu-gp and srcu-rscsi added to represent SRCU grace periods
1855 and read-side critical sections. However, there are some significant
1856 differences between RCU read-side critical sections and their SRCU
1860 SRCU READ-SIDE CRITICAL SECTIONS
1861 --------------------------------
1863 The LKMM uses the srcu-rscsi relation to model SRCU read-side critical
1864 sections. They differ from RCU read-side critical sections in the
1870 an SRCU domain, and calls linked by srcu-rscsi must have the
1871 same domain. Read-side critical sections and grace periods
1882 read-side critical sections to overlap partially, as in the
1894 created two nested (fully overlapping) read-side critical
1903 an SRCU read-side critical section to start on one CPU and end
1907 srcu_read_lock() as a special type of load event (which is
1909 a value, just as a load does) and srcu_read_unlock() as a special type
1912 belonging to the "srcu-lock" and "srcu-unlock" event classes
1920 from the load (srcu-lock) to the store (srcu-unlock). For example,
1938 except for the presence of the special srcu-lock and srcu-unlock
1939 annotations. You can see at once that we have A ->data C and
1940 B ->data D. These dependencies tell the LKMM that C is the
1941 srcu-unlock event matching srcu-lock event A, and D is the
1942 srcu-unlock event matching srcu-lock event B.
1986 A[srcu-lock] ->data B[once] ->rf C[once] ->data D[srcu-unlock].
1988 The LKMM defines a carry-srcu-data relation to express this pattern;
1994 an srcu-lock event and the final data dependency leading to the
1995 matching srcu-unlock event. carry-srcu-data is complicated by the
1997 sequence are instances of srcu-unlock. This is necessary because in a
2005 the LKMM treats B as a store to the variable s and C as a load from
2008 A ->data B ->rf C ->data D.
2010 This would cause carry-srcu-data to mistakenly extend a data
2012 srcu-unlock event matching A's srcu-lock. To avoid such problems,
2013 carry-srcu-data does not accept sequences in which the end of any of
2014 the intermediate ->data links (B above) is an srcu-unlock event.
2018 -------
2030 and the read part of the cmpxchg operation acts as an acquire fence.
2042 acquire fence only if the operation is successful). spin_unlock(&s)
2048 store-release in a spin_unlock() and the load-acquire which forms the
2050 spin_trylock() -- we can call these things lock-releases and
2051 lock-acquires -- have two properties beyond those of ordinary releases
2054 First, when a lock-acquire reads from or is po-after a lock-release,
2055 the LKMM requires that every instruction po-before the lock-release
2056 must execute before any instruction po-after the lock-acquire. This
2057 would naturally hold if the release and acquire operations were on
2084 Here the second spin_lock() is po-after the first spin_unlock(), and
2085 therefore the load of x must execute before the load of y, even though
2089 This requirement does not apply to ordinary release and acquire
2090 fences, only to lock-related operations. For instance, suppose P0()
2112 and thus it could load y before x, obtaining r2 = 0 and r1 = 1.
2114 Second, when a lock-acquire reads from or is po-after a lock-release,
2115 and some other stores W and W' occur po-before the lock-release and
2116 po-after the lock-acquire respectively, the LKMM requires that W must
2157 These two special requirements for lock-release and lock-acquire do
2165 -----------------------------
2170 operations of one kind or another. Ordinary C-language memory
2202 P1's store to x propagates to P0 before P0's load from x executes.
2203 But since P0's load from x is a plain access, the compiler may decide
2204 to carry out the load twice (for the comparison against NULL, then again
2222 would be no possibility of a NULL-pointer dereference.
2247 are "race candidates" if they satisfy 1 - 4. Thus, whether or not two
2269 If X is a load and X executes before a store Y, then indeed there is
2279 Therefore when X is a store, for X and Y to be non-concurrent the LKMM
2282 Y executes before X -- then Y must propagate to X's CPU before X
2284 relation (vis), where X ->vis Y is defined to hold if there is an
2288 cumul-fence links followed by an optional rfe link (if none of
2293 Z is connected to Y by a strong-fence link followed by a
2304 cumul-fence memory barriers force stores that are po-before
2306 po-after the barrier.
2312 strong-fence memory barriers force stores that are po-before
2315 po-after the barrier can execute.
2340 The smp_wmb() memory barrier gives a cumul-fence link from X to W, and
2345 Y). Therefore we have X ->vis Y: X must propagate to Y's CPU before Y
2352 cumul-fence, pb, and so on -- including vis) apply only to marked
2385 corresponding to the first group of accesses will all end po-before
2387 -- even if some of the accesses are plain. (Of course, the CPU may
2428 cumul-fence. It guarantees that no matter what hash of
2430 access U, all those instructions will be po-before the fence.
2437 executed, i.e., X ->vis Y. (And if there is no rfe link then
2442 fence. It guarantees that all the machine-level instructions
2443 corresponding to the access V will be po-after the fence, and
2447 Thus U's store to buf is forced to propagate to P1 before V's load
2452 general. Suppose R is a plain load and we want to show that R
2455 X ->xb* E. If E was also a plain access, we would also look for a
2456 marked access Y such that X ->xb* Y, and Y and E are ordered by a
2458 "post-bounded" by X and E is "pre-bounded" by Y.
2461 "r-post-bounded" by X. Similarly, E would be "r-pre-bounded" or
2462 "w-pre-bounded" by Y, depending on whether E was a store or a load.
2466 say that a marked access pre-bounds and post-bounds itself (e.g., if R
2467 above were a marked load then X could simply be taken to be R itself.)
2469 The need to distinguish between r- and w-bounding raises yet another
2482 thereby adding a load (and possibly replacing the store entirely).
2484 w-pre-bounded or w-post-bounded by a marked access, it also requires
2485 the store to be r-pre-bounded or r-post-bounded, so as to handle cases
2486 where the compiler adds a load.
2489 compiler has augmented a store with a load in this fashion, and the
2493 Incidentally, the other transformation -- augmenting a plain load by
2494 adding in a store to the same location -- is not allowed. This is
2496 a concurrent load from that location. Two concurrent loads don't
2498 does race with a concurrent load. Thus adding a store might create a
2501 load, on the other hand, is acceptable because doing so won't create a
2504 The LKMM includes a second way to pre-bound plain accesses, in
2505 addition to fences: an address dependency from a marked load. That
2511 the LKMM says that the marked load of ptr pre-bounds the plain load of
2512 *p; the marked load must execute before any of the machine
2513 instructions corresponding to the plain load. This is a reasonable
2514 stipulation, since after all, the CPU can't perform the load of *p
2544 rcu_assign_pointer() performs a store-release, so the plain store to b
2545 is definitely w-post-bounded before the store to ptr, and the two
2549 that the load of ptr in P1 is r-pre-bounded before the load of *p
2575 could now perform the load of x before the load of ptr (there might be
2579 not need to be w-post-bounded: when it is separated from the other
2580 race-candidate access by a fence. At first glance this may seem
2583 Well, normal fences don't -- but rcu-fence can! Here's an example:
2602 Do the plain stores to y race? Clearly not if P1 reads a non-zero
2604 means that the read-side critical section in P1 must finish executing
2605 before the grace period in P0 does, because RCU's Grace-Period
2609 from the READ_ONCE() to the WRITE_ONCE() gives rise to an rcu-link
2612 This means there is an rcu-fence link from P1's "y = 2" store to P0's
2616 isn't w-post-bounded by any marked accesses.
2619 race-candidate stores W and W', where W ->co W', the LKMM says the
2622 w-post-bounded ; vis ; w-pre-bounded
2626 r-post-bounded ; xb* ; w-pre-bounded
2630 w-post-bounded ; vis ; r-pre-bounded
2632 sequence. For race-candidate load R and store W, the LKMM says the
2635 r-post-bounded ; xb* ; w-pre-bounded
2639 w-post-bounded ; vis ; r-pre-bounded
2644 strong-fence ; xb* ; {w and/or r}-pre-bounded
2646 sequence with no post-bounding, and in every case the LKMM also allows
2653 happens-before, propagates-before, and rcu axioms (which state that
2659 called the "plain-coherence" axiom because of their resemblance to the
2662 satisfy a load request and its determination of where a store will
2666 W by one of the xb* sequences listed above, then W ->rfe R is
2667 not allowed (i.e., a load cannot read from a store that it
2671 R by one of the vis sequences listed above, then R ->fre W is
2672 not allowed (i.e., if a store is visible to a load then the
2673 load must read from that store or one coherence-after it).
2676 to W' by one of the vis sequences listed above, then W' ->co W
2687 -------------
2712 that are part of a non-value-returning atomic update. For instance,
2719 it is not guaranteed that the load from y will execute after the
2721 non-value-returning atomic operations effectively to be executed off
2730 smp_store_release() -- which is basically how the Linux kernel treats
2735 an address dependency from a marked load R to a plain store W,
2740 pre-bounding by address dependencies, it is possible for the compiler
2744 links a marked load R to a store W, and the store is read by a load R'
2755 all po-earlier events against all po-later events, as smp_mb() does,
2758 smp_mb__before_atomic() orders all po-earlier events against
2759 po-later atomic updates and the events following them;
2761 smp_mb__after_atomic() orders po-earlier atomic updates and
2762 the events preceding them against all po-later events;
2764 smp_mb__after_spinlock() orders po-earlier lock acquisition
2765 events and the events preceding them against all po-later
2786 non-deadlocking executions. For example:
2810 will self-deadlock in the executions where it stores 36 in y.