			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the CPU cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = A;
	B = 4;		y = B;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	x=LOAD A->3,	y=LOAD B->4
	STORE A=3,	STORE B=4,	y=LOAD B->4,	x=LOAD A->3
	STORE A=3,	x=LOAD A->3,	STORE B=4,	y=LOAD B->4
	STORE A=3,	x=LOAD A->3,	y=LOAD B->2,	STORE B=4
	STORE A=3,	y=LOAD B->2,	STORE B=4,	x=LOAD A->3
	STORE A=3,	y=LOAD B->2,	x=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	x=LOAD A->3,	y=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 1, y == 2
	x == 1, y == 4
	x == 3, y == 2
	x == 3, y == 4


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
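
In real driver code the fix is to go through the kernel's ordered MMIO
accessors rather than plain pointer dereferences; these are covered in the
"Kernel I/O barrier effects" section later.  A minimal sketch, in which
addr_port and data_port are invented names for ioremap()'d register addresses:

	/* Select internal register 5, then read it.  writel() and readl()
	 * are ordered with respect to each other on a normal, non-relaxed
	 * I/O window, so the address is set before the data port is read. */
	writel(5, addr_port);
	x = readl(data_port);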


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = P; D = *Q;

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = *X; *X = b;

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	*X = c; d = *X;

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
	STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4)};

     And for:

	*A = X; Y = *A;

     we may get either of:

	STORE *A = X; Y = LOAD *A;
	STORE *A = Y = X;


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) LOCK operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the LOCK operation will appear to happen after the LOCK
     operation with respect to the other components of the system.

     Memory operations that occur before a LOCK operation may appear to happen
     after it completes.

     A LOCK operation should almost always be paired with an UNLOCK operation.


 (6) UNLOCK operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the UNLOCK operation will appear to happen before
     the UNLOCK operation with respect to the other components of the system.

     Memory operations that occur after an UNLOCK operation may appear to
     happen before it completes.

     LOCK and UNLOCK operations are guaranteed to appear with respect to each
     other strictly in the order specified.

     The use of LOCK and UNLOCK operations generally precludes the need for
     other sorts of memory barrier (but note the exceptions mentioned in the
     subsection "MMIO write barrier").


Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.
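
The one-way permeability of (5) and (6) can be pictured in the same notation as
the earlier examples; this is purely an illustrative sketch, with *A, *B and *C
standing for arbitrary memory accesses:

	*A = a;		/* may appear to migrate to after the LOCK */
	LOCK
	*B = b;		/* guaranteed to stay between LOCK and UNLOCK */
	UNLOCK
	*C = c;		/* may appear to migrate to before the UNLOCK */

The "Locking functions" section later works through the consequences of this
in detail.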


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP barrier pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/PCI/PCI-DMA-mapping.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	P = &B;
			Q = P;
			D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	P = &B;
			Q = P;
			<data dependency barrier>
			D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).
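
Written with the kernel's primitives (which the "Explicit kernel barriers"
section below describes), the fixed sequence might be sketched like this; all
the variable names are illustrative:

	/* CPU 1: publish the new value of B, then the pointer to it */
	B = 4;
	smp_wmb();			/* write barrier */
	P = &B;

	/* CPU 2: load the pointer, then dereference it */
	Q = P;
	smp_read_barrier_depends();	/* data dependency barrier */
	D = *Q;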


Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

	CPU 1		CPU 2
	===============	===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	P = 1;
			Q = P;
			<data dependency barrier>
			D = M[Q];


The data dependency barrier is very important to the RCU system, for example.
See rcu_dereference() in include/linux/rcupdate.h.  This permits the current
target of an RCU'd pointer to be replaced with a new modified target, without
the replacement target appearing to be incompletely initialised.

See also the subsection on "Cache coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier, to make it work correctly.  Consider the following bit of
code:

	q = &a;
	if (p)
		q = &b;
	<data dependency barrier>
	x = *q;

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit by
attempting to predict the outcome in advance.  In such a case what's actually
required is:

	q = &a;
	if (p)
		q = &b;
	<read barrier>
	x = *q;


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

A write barrier should always be paired with a data dependency barrier or read
barrier, though a general barrier would also be viable.  Similarly a read
barrier or a data dependency barrier should always be paired with at least a
write barrier, though, again, a general barrier is viable:

	CPU 1		CPU 2
	===============	===============
	a = 1;
	<write barrier>
	b = 2;		x = b;
			<read barrier>
			y = a;

Or:

	CPU 1		CPU 2
	===============	===============================
	a = 1;
	<write barrier>
	b = &a;		x = b;
			<data dependency barrier>
			y = *x;

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1				CPU 2
	===============			===============
	a = 1;		 }----	 --->{	v = c;
	b = 2;		 }    \	/    {	w = d;
	<write barrier>	      \		<read barrier>
	c = 3;		 }    /	\    {	x = a;
	d = 4;		 }----	 --->{	y = b;
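
Expressed with the kernel's SMP primitives, the first pairing above might be
sketched as follows (a minimal illustration; the variable names are arbitrary):

	/* CPU 1 */			/* CPU 2 */
	a = 1;
	smp_wmb();	/* write barrier */
	b = 2;				x = b;
					smp_rmb();	/* read barrier */
					y = a;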


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	           |
	           | Sequence in which stores are committed to the
	           | memory system by CPU 1
	           V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	   Makes sure all effects --->  \   ddddddddddddddddd   |       |
	   prior to the store of C       \      +-------+       |       |
	   are perceptible to             ----->| B->2  |------>|       |
	   subsequent loads                     +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B ------->--->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B ------->--->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This permits
the actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.
If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier - lesser varieties of compiler barrier do not exist.

The compiler barrier has no direct effect on the CPU, which may then reorder
things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

	TYPE		MANDATORY		SMP CONDITIONAL
	===============	=======================	===========================
	GENERAL		mb()			smp_mb()
	WRITE		wmb()			smp_wmb()
	READ		rmb()			smp_rmb()
	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. a[b] would have to load the value of
b before loading a[b]); however, there is no guarantee in the C specification
that the compiler may not speculate the value of b (eg. guess that it is equal
to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1) tmp = a[b];).  There
is also the problem of a compiler reloading b after having loaded a[b], thus
having a newer copy of b than a[b].  A consensus has not yet been reached about
these problems, however the ACCESS_ONCE macro is a good place to start looking.
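
For example, both compiler problems described in the aside might be addressed
by reading the index exactly once through ACCESS_ONCE(); this is only a sketch
of the idea, not a recommendation for any particular piece of code:

	index = ACCESS_ONCE(b);		/* exactly one real load of b */
	tmp = a[index];			/* the dependent load uses that value */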

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers unnecessarily impose overhead on UP systems.  They may, however, be
used to control MMIO effects on accesses through relaxed memory I/O windows.
These are required even on non-SMP systems as they affect the order in which
memory operations appear to a device by prohibiting both the compiler and the
CPU from reordering them.


There are some more advanced barrier functions:

 (*) set_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it, depending on the function.  It isn't guaranteed to
     insert anything more than a compiler barrier in a UP compilation.


 (*) smp_mb__before_atomic_dec();
 (*) smp_mb__after_atomic_dec();
 (*) smp_mb__before_atomic_inc();
 (*) smp_mb__after_atomic_inc();

     These are for use with atomic add, subtract, increment and decrement
     functions that don't return a value, especially when used for reference
     counting.  These functions do not imply memory barriers.

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


 (*) smp_mb__before_clear_bit(void);
 (*) smp_mb__after_clear_bit(void);

     These are for use similar to the atomic inc/dec barriers.  These are
     typically used for bitwise unlocking operations, so care must be taken as
     there are no implicit memory barriers here either.

     Consider implementing an unlock operation of some nature by clearing a
     locking bit.  The clear_bit() would then need to be barriered like this:

	smp_mb__before_clear_bit();
	clear_bit( ... );

     This prevents memory operations before the clear leaking to after it.  See
     the subsection on "Locking functions" with reference to UNLOCK operation
     implications.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.
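
Putting the acquire and release halves together, a bit-based lock might be
sketched like this (purely illustrative; test_and_set_bit() implies full
memory barriers, as noted in the "Atomic operations" subsection later):

	while (test_and_set_bit(0, &word))	/* LOCK: implied full barriers */
		cpu_relax();

	/* ... critical section ... */

	smp_mb__before_clear_bit();
	clear_bit(0, &word);			/* UNLOCK: barrier supplied above */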


MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.

See the subsection "Locks vs I/O accesses" for more information.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


LOCKING FUNCTIONS
-----------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU

In all cases there are variants on "LOCK" operations and "UNLOCK" operations
for each construct.  These operations all imply certain barriers:

 (1) LOCK operation implication:

     Memory operations issued after the LOCK will be completed after the LOCK
     operation has completed.

     Memory operations issued before the LOCK may be completed after the LOCK
     operation has completed.

 (2) UNLOCK operation implication:

     Memory operations issued before the UNLOCK will be completed before the
     UNLOCK operation has completed.

     Memory operations issued after the UNLOCK may be completed before the
     UNLOCK operation has completed.

 (3) LOCK vs LOCK implication:

     All LOCK operations issued before another LOCK operation will be completed
     before that LOCK operation.

 (4) LOCK vs UNLOCK implication:

     All LOCK operations issued before an UNLOCK operation will be completed
     before the UNLOCK operation.

     All UNLOCK operations issued before a LOCK operation will be completed
     before the LOCK operation.

 (5) Failed conditional LOCK implication:

     Certain variants of the LOCK operation may fail, either due to being
     unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.

[!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
    barriers is that the effects of instructions outside of a critical section
    may seep into the inside of the critical section.

A LOCK followed by an UNLOCK may not be assumed to be a full memory barrier
because it is possible for an access preceding the LOCK to happen after the
LOCK, and an access following the UNLOCK to happen before the UNLOCK, and the
two accesses can themselves then cross:

	*A = a;
	LOCK
	UNLOCK
	*B = b;

may occur as:

	LOCK, STORE *B, STORE *A, UNLOCK

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU locking barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	LOCK
	*C = c;
	*D = d;
	UNLOCK
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	LOCK, {*F,*A}, *E, {*C,*D}, *B, UNLOCK

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	LOCK, *C, *D,	UNLOCK, *E
	*A, *B, *C,	LOCK, *D,	UNLOCK, *E, *F
	*A, *B,		LOCK, *C,	UNLOCK, *D, *E, *F
	*B,		LOCK, *C, *D,	UNLOCK, {*F,*A}, *E


INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (LOCK equivalent) and enable interrupts
(UNLOCK equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some other
means.


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  set_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A write memory barrier is implied by wake_up() and co. if and only if they wake
something up.  The barrier occurs before the task state is cleared, and so sits
between the STORE to indicate the event and the STORE to set TASK_RUNNING:

	CPU 1				CPU 2
	===============================	===============================
	set_current_state();		STORE event_indicated
	  set_mb();			wake_up();
	    STORE current->state	  <write barrier>
	    <general barrier>		  STORE current->state
	LOAD event_indicated

The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();


[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


=================================
INTER-CPU LOCKING BARRIER EFFECTS
=================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


LOCKS VS MEMORY ACCESSES
------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	*A = a;				*E = e;
	LOCK M				LOCK Q
	*B = b;				*F = f;
	*C = c;				*G = g;
	UNLOCK M			UNLOCK Q
	*D = d;				*H = h;

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.
It might, for example, see:

	*E, LOCK M, LOCK Q, *G, *C, *F, *A, *B, UNLOCK Q, *D, *H, UNLOCK M

But it won't see any of:

	*B, *C or *D preceding LOCK M
	*A, *B or *C following UNLOCK M
	*F, *G or *H preceding LOCK Q
	*E, *F or *G following UNLOCK Q


However, if the following occurs:

	CPU 1				CPU 2
	===============================	===============================
	*A = a;
	LOCK M		[1]
	*B = b;
	*C = c;
	UNLOCK M	[1]
	*D = d;				*E = e;
					LOCK M		[2]
					*F = f;
					*G = g;
					UNLOCK M	[2]
					*H = h;

CPU 3 might see:

	*E, LOCK M [1], *C, *B, *A, UNLOCK M [1],
		LOCK M [2], *H, *F, *G, UNLOCK M [2], *D

But assuming CPU 1 gets the lock first, CPU 3 won't see any of:

	*B, *C, *D, *F, *G or *H preceding LOCK M [1]
	*A, *B or *C following UNLOCK M [1]
	*F, *G or *H preceding LOCK M [2]
	*A, *B, *C, *E, *F or *G following UNLOCK M [2]


LOCKS VS I/O ACCESSES
---------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.


Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);


See Documentation/DocBook/deviceiobook.tmpl for more information.


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.
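
As a rough C sketch of the barriered up_yyy() path above (heavily simplified
from the real rwsem implementation; the lock handling and list manipulation
are omitted):

	struct rwsem_waiter *waiter =
		list_entry(sem->waiters.next, struct rwsem_waiter, list);
	struct list_head *next = waiter->list.next;	/* (1) */
	struct task_struct *tsk = waiter->task;		/* (2) */

	smp_mb();		/* make the loads happen first */
	waiter->task = NULL;	/* (3) - the waiter may now vanish */
	wake_up_process(tsk);	/* (4) */
	put_task_struct(tsk);	/* (5) */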


ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation (with the exception of
explicit lock operations, described later).  These include:

	xchg();
	cmpxchg();
	atomic_cmpxchg();
	atomic_inc_return();
	atomic_dec_return();
	atomic_add_return();
	atomic_sub_return();
	atomic_inc_and_test();
	atomic_dec_and_test();
	atomic_sub_and_test();
	atomic_add_negative();
	atomic_add_unless();	/* when succeeds (returns 1) */
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

These are used for such things as implementing LOCK-class and UNLOCK-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.


The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as UNLOCK-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_clear_bit() for instance).


The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic_dec() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.

The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement LOCK-class and UNLOCK-class operations.  These should be used
in preference to other operations when implementing locking primitives, because
their implementations can be optimised on many architectures.
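
For example, a one-bit lock built on these primitives might be sketched as
follows (illustrative only; compare the open-coded version under "CPU memory
barriers" above, which these primitives supersede):

	while (test_and_set_bit_lock(0, &word))	/* LOCK-class acquire */
		cpu_relax();

	/* ... critical section ... */

	clear_bit_unlock(0, &word);		/* UNLOCK-class release */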

[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.
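
For instance, the window in the fragment above could be closed by ending the
interrupt-disabled section with such a synchronous load - a sketch only, and
one that assumes this hypothetical card's address register can safely be read
back:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	q = readw(ADDR);	/* the synchronous read acts as an
				 * implicit I/O barrier */
	LOCAL IRQ ENABLE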
Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.


==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on such
     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the CPU's
     memory map, particularly on those CPUs that don't support alternate I/O
     spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to may
         cause a malfunction - consider the 16550 Rx/Tx serial registers for
         example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.

 (*) readX_relaxed():

     These are similar to readX(), but are not guaranteed to be ordered in any
     way.  Be aware that there is no I/O read barrier available.

 (*) ioreadX(), iowriteX():

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().
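For example, point (1) under "Accessing devices" above leads to the following
pattern.  This is a minimal sketch in which the structure, lock, register
offsets and function name are all hypothetical:

        static void foo_send_command(struct foo_dev *dev, u32 cmd)
        {
                spin_lock(&dev->lock);
                writel(cmd, dev->regs + FOO_CMD);       /* MMIO stores made */
                writel(1, dev->regs + FOO_GO);          /* under the lock */
                mmiowb();       /* make sure the stores reach the device
                                 * before another CPU can take the lock and
                                 * issue MMIO stores of its own */
                spin_unlock(&dev->lock);
        }
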
========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

            <--- CPU --->         :       <----------- Memory ----------->
                                  :
        +--------+    +--------+  :   +--------+    +-----------+
        |        |    |        |  :   |        |    |           |    +--------+
        |  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |--->| Memory |
        |        |    |        |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    |           |    |        |
                                  :                 | Cache     |    +--------+
                                  :                 | Coherency |
                                  :                 | Mechanism |    +--------+
        +--------+    +--------+  :   +--------+    |           |    |        |
        |        |    |        |  :   |        |    |           |    |        |
        |  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |    |        |
        |        |    |        |  :   |        |    |           |    +--------+
        +--------+    +--------+  :   +--------+    +-----------+
                                  :
                                  :

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own cache,
it will still appear as if the full memory access had taken place as far as the
other CPUs are concerned since the cache coherency mechanisms will migrate the
cacheline over to the accessing CPU and propagate the effects upon conflict.
The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.


CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.


Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

                    :
                    :                          +--------+
                    :      +---------+         |        |
        +--------+  : +--->| Cache A |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 1 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache B |<------->|        |
                    :      +---------+         |        |
                    :                          | Memory |
                    :      +---------+         | System |
        +--------+  : +--->| Cache C |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 2 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache D |<------->|        |
                    :      +---------+         |        |
                    :                          +--------+
                    :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that cache
     to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.
Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();                      Make sure change to v is visible before
                                         change to p
        <A:modify v=2>                  v is now in cache A exclusively
        p = &v;
        <B:modify p=&v>                 p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
        ...
                        q = p;
                        x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        x = *q;
                        <C:read *q>     Reads from v before v updated in cache
                        <C:unbusy>
                        <C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        smp_read_barrier_depends()
                        <C:unbusy>
                        <C:commit v=2>
                        x = *q;
                        <C:read *q>     Reads from v after v updated in cache

This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha removes the
need for coordination in the absence of memory barriers.
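Expressed as code, the pattern in the tables above might be sketched like so
(a minimal sketch: the variable and function names are illustrative, CPU 1
runs writer() and CPU 2 runs reader()):

        int u = 0;
        int v = 1;
        int *p = &u;

        void writer(void)               /* on CPU 1 */
        {
                v = 2;
                smp_wmb();      /* commit the new value of v before
                                 * publishing the pointer to it */
                p = &v;
        }

        void reader(void)               /* on CPU 2 */
        {
                int *q, x;

                q = p;
                smp_read_barrier_depends();     /* pairs with the smp_wmb();
                                                 * required on Alpha before
                                                 * the dependent load below */
                x = *q;         /* sees v == 2 if q turned out to be &v */
        }
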
CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned to it
than the usual RAM-directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

        a = *A;
        *B = b;
        c = *C;
        d = *D;
        *E = e;

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

        LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.
So what another CPU, say, might actually observe from the above piece of code
is:

        LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

        (Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

        U = *A;
        *A = V;
        *A = W;
        X = *A;
        *A = Y;
        Z = *A;

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

        U == the original value of *A
        X == W
        Z == Y
        *A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

        U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

        *A = V;
        *A = W;

may be reduced to:

        *A = W;

since, without a write barrier, it can be assumed that the effect of the
storage of V to *A is lost.  Similarly:

        *A = Y;
        Z = *A;

may, without a memory barrier, be reduced to:

        *A = Y;
        Z = Y;

and the LOAD operation never appears outside of the CPU.
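Where this matters, the compiler barrier described earlier - barrier() - may
be interposed to prevent such merging or elision.  A minimal sketch follows;
note that barrier() constrains only the compiler, and the CPU may still
reorder the accesses:

        *A = V;
        barrier();      /* the store of V must now actually be emitted */
        *A = W;

        *A = Y;
        barrier();      /* Z must now be loaded from *A afresh rather than
                         * being copied from Y */
        Z = *A;
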
AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
        Chapter 5.2: Physical Address Space Characteristics
        Chapter 5.4: Caches and Write Buffers
        Chapter 5.5: Data Sharing
        Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
        Chapter 7.1: Memory-Access Ordering
        Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
        Chapter 7.1: Locked Atomic Operations
        Chapter 7.2: Memory Ordering
        Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
        Chapter 8: Memory Models
        Appendix D: Formal Specification of the Memory Models
        Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
        Chapter 5: Memory Accesses and Cacheability
        Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
        Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
        Chapter 8: Memory Models

UltraSPARC Architecture 2005
        Chapter 9: Memory
        Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
        Chapter 8: Memory Models
        Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
        Chapter 3.3: Hardware Considerations for Locks and
                     Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
        Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
        Section 2.6: Speculation
        Section 4.4: Memory Access