Lines matching full:stores
103 device, stores it in a buffer, and sets a flag to indicate the buffer
135 Thus, P0 stores the data in buf and then sets flag. Meanwhile, P1
141 This pattern of memory accesses, where one CPU stores values to two
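The fragments above describe the classic message-passing pattern: one CPU stores data into a buffer and then sets a flag, while the other checks the flag before reading the buffer. A minimal standalone sketch of that pattern, using C11 release/acquire ordering as a stand-in for the kernel's smp_wmb()/smp_rmb() fences (the names p0, p1, and run_mp_once are illustrative, not from the source):

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

/* Illustrative stand-in for the document's buf/flag example: P0 fills a
 * "buffer" and then sets a flag; P1 checks the flag and then reads the
 * buffer.  Release/acquire here plays the role of smp_wmb()/smp_rmb(). */
static atomic_int buf, flag;
static int r1, r2;

static void *p0(void *unused)
{
    (void)unused;
    atomic_store_explicit(&buf, 1, memory_order_relaxed);
    /* Release: the store to buf must propagate before the store to flag. */
    atomic_store_explicit(&flag, 1, memory_order_release);
    return NULL;
}

static void *p1(void *unused)
{
    (void)unused;
    r1 = atomic_load_explicit(&flag, memory_order_acquire);
    r2 = atomic_load_explicit(&buf, memory_order_relaxed);
    return NULL;
}

/* Returns 0 only on the outcome the fences forbid: r1 == 1 && r2 == 0. */
int run_mp_once(void)
{
    pthread_t t0, t1;

    atomic_store(&buf, 0);
    atomic_store(&flag, 0);
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return !(r1 == 1 && r2 == 0);
}
```

Without the release/acquire pairing (both accesses relaxed), the forbidden outcome r1 == 1, r2 == 0 becomes possible on weakly ordered hardware.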
198 it, as loads can obtain values only from earlier stores.
203 P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
207 P0 stores 1 to buf before storing 1 to flag, since it executes
223 each CPU stores to its own shared location and then loads from the
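The fragment above is the store-buffering (SB) pattern: each CPU stores to its own shared location and then loads from the other's. A hedged sketch using C11 sequentially consistent atomics in place of the kernel's smp_mb() (the names cpu0, cpu1, and run_sb_once are illustrative):

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

/* Store-buffering sketch: with full (seq_cst) ordering, the outcome
 * r1 == 0 && r2 == 0 is forbidden; with relaxed accesses it is allowed,
 * because each CPU's store may still sit in its store buffer when the
 * other CPU loads. */
static atomic_int x, y;
static int r1, r2;

static void *cpu0(void *unused)
{
    (void)unused;
    atomic_store(&x, 1);   /* seq_cst store */
    r1 = atomic_load(&y);  /* seq_cst load  */
    return NULL;
}

static void *cpu1(void *unused)
{
    (void)unused;
    atomic_store(&y, 1);
    r2 = atomic_load(&x);
    return NULL;
}

/* Returns 0 only on the outcome sequential consistency forbids. */
int run_sb_once(void)
{
    pthread_t t0, t1;

    atomic_store(&x, 0);
    atomic_store(&y, 0);
    pthread_create(&t0, NULL, cpu0, NULL);
    pthread_create(&t1, NULL, cpu1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return !(r1 == 0 && r2 == 0);
}
```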
271 W: P0 stores 1 to flag executes before
274 Z: P0 stores 1 to buf executes before
275 W: P0 stores 1 to flag.
297 Write events correspond to stores to shared memory, such as
399 executed before either of the stores to y. However, a compiler could
400 lift the stores out of the conditional, transforming the code into
602 from both of P0's stores. It is possible to handle mixed-size and
614 shared memory, the stores to that location must form a single global
620 the stores to x is simply the order in which the stores overwrite one
627 stores reach x's location in memory (or if you prefer a more
628 hardware-centric view, the order in which the stores get written to
638 and W' are two stores, then W ->co W'.
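The fragments above concern coherence order (co): all stores to a single location form one global order in which each store overwrites its predecessor, and every CPU agrees on that order. A small sketch under that reading (the function names are illustrative); coherence is per-location, so relaxed ordering suffices and no fences are needed:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

/* Two threads store different values to the same location x.  Cache
 * coherence guarantees the stores form a single order, so the final
 * value is exactly one of the two, never a mix, and all observers
 * agree on it. */
static atomic_int x;

static void *store1(void *unused)
{
    (void)unused;
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    return NULL;
}

static void *store2(void *unused)
{
    (void)unused;
    atomic_store_explicit(&x, 2, memory_order_relaxed);
    return NULL;
}

/* Returns the final, coherent value of x: always 1 or 2. */
int run_co_once(void)
{
    pthread_t t1, t2;

    atomic_store(&x, 0);
    pthread_create(&t1, NULL, store1, NULL);
    pthread_create(&t2, NULL, store2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return atomic_load(&x);
}
```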
725 just like with the rf relation, we distinguish between stores that
726 occur on the same CPU (internal coherence order, or coi) and stores
729 On the other hand, stores to different memory locations are never
760 stores to x, there would also be fr links from the READ_ONCE() to
786 only internal operations. However, loads, stores, and fences involve
811 time to process the stores that it receives, and a store can't be used
813 most architectures, the local caches process stores in
840 smp_wmb() forces the CPU to execute all po-earlier stores
841 before any po-later stores;
854 propagates stores. When a fence instruction is executed on CPU C:
856 For each other CPU C', smp_wmb() forces all po-earlier stores
857 on C to propagate to C' before any po-later stores do.
861 stores executed on C) is forced to propagate to C' before the
865 executed (including all po-earlier stores on C) is forced to
870 affects stores from other CPUs that propagate to CPU C before the
871 fence is executed, as well as stores that are executed on C before the
874 A-cumulative; they only affect the propagation of stores that are
890 E and F are both stores on the same CPU and an smp_wmb() fence
899 The operational model requires that whenever W and W' are both stores
920 operations really are atomic, that is, no other stores can
926 Propagation: This requires that certain stores propagate to
948 According to the principle of cache coherence, the stores to any fixed
988 CPU 0 stores 14 to x;
989 CPU 1 stores 14 to x;
1003 there must not be any stores coming between W' and W in the coherence
1020 Note that this implies Z0 and Zn are stores to the same variable.
1059 X and Y are both stores and an smp_wmb() fence occurs between
1193 stores do reach P1's local cache in the proper order, it can happen
1202 incoming stores in FIFO order. By contrast, other architectures
1211 the stores it has already received. Thus, if the code was changed to:
1232 outstanding stores have been processed by the local cache. In the
1234 po-earlier stores to propagate to every other CPU in the system; then
1235 it has to wait for the local cache to process all the stores received
1236 as of that time -- not just the stores received when the strong fence
1262 W ->coe W'. This means that W and W' are stores to the same location,
1266 the other is made later by the memory subsystem. When the stores are
1308 read from different stores:
1356 stores. If r1 = 1 and r2 = 0 at the end then there is a prop link
1419 guarantees that the stores to x and y both propagate to P0 before the
1574 In the kernel's implementations of RCU, the requirements for stores
1845 This requires P0 and P2 to execute their loads and stores out of
1911 memory location and a value). These loads and stores are annotated as
2115 and some other stores W and W' occur po-before the lock-release and
2304 cumul-fence memory barriers force stores that are po-before
2305 the barrier to propagate to other CPUs before stores that are
2312 strong-fence memory barriers force stores that are po-before
2464 (i.e., smp_rmb()) and some affect only stores (smp_wmb()); otherwise
2546 stores will propagate to P1 in that order. However, rcu_dereference()
2602 Do the plain stores to y race? Clearly not if P1 reads a non-zero
2614 before the second can execute. Therefore the two stores cannot be
2619 race-candidate stores W and W', where W ->co W', the LKMM says the
2620 stores don't race if W can be linked to W' by a
2810 will self-deadlock in the executions where it stores 36 in y.