1.\" Copyright (c) 2000-2001 John H. Baldwin <jhb@FreeBSD.org> 2.\" 3.\" Redistribution and use in source and binary forms, with or without 4.\" modification, are permitted provided that the following conditions 5.\" are met: 6.\" 1. Redistributions of source code must retain the above copyright 7.\" notice, this list of conditions and the following disclaimer. 8.\" 2. Redistributions in binary form must reproduce the above copyright 9.\" notice, this list of conditions and the following disclaimer in the 10.\" documentation and/or other materials provided with the distribution. 11.\" 12.\" THIS SOFTWARE IS PROVIDED BY THE DEVELOPERS ``AS IS'' AND ANY EXPRESS OR 13.\" IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES 14.\" OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 15.\" IN NO EVENT SHALL THE DEVELOPERS BE LIABLE FOR ANY DIRECT, INDIRECT, 16.\" INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT 17.\" NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 18.\" DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 19.\" THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 20.\" (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF 21.\" THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 22.\" 23.Dd December 16, 2024 24.Dt ATOMIC 9 25.Os 26.Sh NAME 27.Nm atomic_add , 28.Nm atomic_clear , 29.Nm atomic_cmpset , 30.Nm atomic_fcmpset , 31.Nm atomic_fetchadd , 32.Nm atomic_interrupt_fence , 33.Nm atomic_load , 34.Nm atomic_readandclear , 35.Nm atomic_set , 36.Nm atomic_subtract , 37.Nm atomic_store , 38.Nm atomic_thread_fence 39.Nd atomic operations 40.Sh SYNOPSIS 41.In machine/atomic.h 42.Ft void 43.Fn atomic_add_[acq_|rel_]<type> "volatile <type> *p" "<type> v" 44.Ft void 45.Fn atomic_clear_[acq_|rel_]<type> "volatile <type> *p" "<type> v" 46.Ft int 47.Fo atomic_cmpset_[acq_|rel_]<type> 48.Fa "volatile <type> *dst" 49.Fa "<type> old" 50.Fa "<type> new" 51.Fc 52.Ft int 53.Fo atomic_fcmpset_[acq_|rel_]<type> 54.Fa "volatile <type> *dst" 55.Fa "<type> *old" 56.Fa "<type> new" 57.Fc 58.Ft <type> 59.Fn atomic_fetchadd_<type> "volatile <type> *p" "<type> v" 60.Ft void 61.Fn atomic_interrupt_fence "void" 62.Ft <type> 63.Fn atomic_load_[acq_]<type> "const volatile <type> *p" 64.Ft <type> 65.Fn atomic_readandclear_<type> "volatile <type> *p" 66.Ft void 67.Fn atomic_set_[acq_|rel_]<type> "volatile <type> *p" "<type> v" 68.Ft void 69.Fn atomic_subtract_[acq_|rel_]<type> "volatile <type> *p" "<type> v" 70.Ft void 71.Fn atomic_store_[rel_]<type> "volatile <type> *p" "<type> v" 72.Ft <type> 73.Fn atomic_swap_<type> "volatile <type> *p" "<type> v" 74.Ft int 75.Fn atomic_testandclear_<type> "volatile <type> *p" "u_int v" 76.Ft int 77.Fn atomic_testandset_<type> "volatile <type> *p" "u_int v" 78.Ft void 79.Fn atomic_thread_fence_[acq|acq_rel|rel|seq_cst] "void" 80.Sh DESCRIPTION 81Atomic operations are commonly used to implement reference counts and as 82building blocks for synchronization primitives, such as mutexes. 83.Pp 84All of these operations are performed 85.Em atomically 86across multiple threads and in the presence of interrupts, meaning that they 87are performed in an indivisible manner from the perspective of concurrently 88running threads and interrupt handlers. 
.Pp
On all architectures supported by
.Fx ,
ordinary loads and stores of integers in cache-coherent memory are
inherently atomic if the integer is naturally aligned and its size does not
exceed the processor's word size.
However, such loads and stores may be elided from the program by
the compiler, whereas atomic operations are always performed.
.Pp
When atomic operations are performed on cache-coherent memory, all
operations on the same location are totally ordered.
.Pp
When an atomic load is performed on a location in cache-coherent memory,
it reads the entire value that was defined by the last atomic store to
each byte of the location.
An atomic load will never return a value out of thin air.
When an atomic store is performed on a location, no other thread or
interrupt handler will observe a
.Em torn write ,
or partial modification of the location.
.Pp
Except as noted below, the semantics of these operations are almost
identical to the semantics of similarly named C11 atomic operations.
.Ss Types
Most atomic operations act upon a specific
.Fa type .
That type is indicated in the function name.
In contrast to C11 atomic operations,
.Fx Ns 's
atomic operations are performed on ordinary integer types.
The available types are:
.Pp
.Bl -tag -offset indent -width short -compact
.It Li int
unsigned integer
.It Li long
unsigned long integer
.It Li ptr
unsigned integer the size of a pointer
.It Li 32
unsigned 32-bit integer
.It Li 64
unsigned 64-bit integer
.El
.Pp
For example, the function to atomically add two integers is called
.Fn atomic_add_int .
.Pp
Certain architectures also provide operations for types smaller than
.Dq Li int .
.Pp
.Bl -tag -offset indent -width short -compact
.It Li char
unsigned character
.It Li short
unsigned short integer
.It Li 8
unsigned 8-bit integer
.It Li 16
unsigned 16-bit integer
.El
.Pp
These types must not be used in machine-independent code.
.Ss Acquire and Release Operations
By default, a thread's accesses to different memory locations might not be
performed in
.Em program order ,
that is, the order in which the accesses appear in the source code.
To optimize the program's execution, both the compiler and processor might
reorder the thread's accesses.
However, both ensure that their reordering of the accesses is not visible to
the thread.
Otherwise, the traditional memory model that is expected by single-threaded
programs would be violated.
Nonetheless, other threads in a multithreaded program, such as the
.Fx
kernel, might observe the reordering.
Moreover, in some cases, such as the implementation of synchronization between
threads, arbitrary reordering might result in the incorrect execution of the
program.
To constrain the reordering that both the compiler and processor might perform
on a thread's accesses, a programmer can use atomic operations with
.Em acquire
and
.Em release
semantics.
.Pp
Atomic operations on memory have up to three variants.
The first, or
.Em relaxed
variant, performs the operation without imposing any ordering constraints on
accesses to other memory locations.
This variant is the default.
The second variant has acquire semantics, and the third variant has release
semantics.
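.Pp
For example, a statistics counter that is never used to order other memory
accesses can be maintained with the relaxed variant alone; the
.Va stats
variable below is illustrative:
.Bd -literal -offset indent
static struct {
	volatile u_long dropped;	/* event counter */
} stats;

static void
count_dropped_packet(void)
{
	/* Count the event; no ordering of other accesses is implied. */
	atomic_add_long(&stats.dropped, 1);
}
.Ed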
.Pp
An atomic operation can only have
.Em acquire
semantics if it performs a load
from memory.
When an atomic operation has acquire semantics, the load performed as
part of the operation must have
completed before any subsequent load or store (by program order) is
performed.
Conversely, acquire semantics do not require that prior loads or stores have
completed before the load that is part of the atomic operation is performed.
To denote acquire semantics, the suffix
.Dq Li _acq
is inserted into the function name immediately prior to the
.Dq Li _ Ns Aq Fa type
suffix.
For example, to subtract two integers ensuring that the load of
the value from memory is
completed before any subsequent loads and stores are performed, use
.Fn atomic_subtract_acq_int .
.Pp
An atomic operation can only have
.Em release
semantics if it performs a store to memory.
When an atomic operation has release semantics, all prior loads or stores
(by program order) must have completed before the store that is performed
as part of the operation.
Conversely, release semantics do not require that the store performed as
part of the atomic operation
has completed before any subsequent load or store is performed.
To denote release semantics, the suffix
.Dq Li _rel
is inserted into the function name immediately prior to the
.Dq Li _ Ns Aq Fa type
suffix.
For example, to add two long integers ensuring that all prior loads and
stores are completed before the store of the result is performed, use
.Fn atomic_add_rel_long .
.Pp
When a release operation by one thread
.Em synchronizes with
an acquire operation by another thread, usually meaning that the acquire
operation reads the value written by the release operation, then the effects
of all prior stores by the releasing thread must become visible to
subsequent loads by the acquiring thread.
Moreover, the effects of all stores (by other threads) that were visible to
the releasing thread must also become visible to the acquiring thread.
These rules only apply to the synchronizing threads.
Other threads might observe these stores in a different order.
.Pp
In effect, atomic operations with acquire and release semantics establish
one-way barriers to reordering that enable the implementations of
synchronization primitives to express their ordering requirements without
also imposing unnecessary ordering.
For example, for a critical section guarded by a mutex, an acquire operation
when the mutex is locked and a release operation when the mutex is unlocked
will prevent any loads or stores from moving outside of the critical
section.
However, they will not prevent the compiler or processor from moving loads
or stores into the critical section, which does not violate the semantics of
a mutex.
.Ss Architecture-dependent caveats for compare-and-swap
The
.Fn atomic_[f]cmpset_<type>
operations, specifically those without explicitly specified memory
ordering, are defined as relaxed.
Consequently, a thread's accesses to memory locations different from
that of the atomic operation can be reordered in relation to the
atomic operation.
.Pp
However, the implementations on the
.Sy amd64
and
.Sy i386
architectures provide sequentially consistent semantics.
In particular, the reordering mentioned above cannot occur.
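.Pp
Machine-independent code should not rely on this stronger ordering and
should instead request any ordering that it needs explicitly, using the
acquire or release variants.
As a minimal, illustrative sketch (the
.Va flag
variable and the helper function are not part of this interface):
.Bd -literal -offset indent
static volatile u_int flag;

static void
acquire_flag(void)
{
	/* Spin until the flag is atomically changed from 0 to 1. */
	while (atomic_cmpset_acq_int(&flag, 0, 1) == 0)
		continue;	/* acquire ordering requested explicitly */
}
.Ed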
.Pp
On the
.Sy arm64/aarch64
architecture, the operation may include either acquire
semantics on the constituent load or release semantics
on the constituent store.
This means that accesses to other locations that precede the atomic
operation in program order might be observed as executed after the load
that is part of the atomic operation (but not after the store
from the operation, due to the release semantics).
Similarly, accesses that follow the atomic operation might be observed as
executed before the store.
.Ss Thread Fence Operations
Alternatively, a programmer can use atomic thread fence operations to
constrain the reordering of accesses.
In contrast to other atomic operations, fences do not, themselves, access
memory.
.Pp
When a fence has acquire semantics, all prior loads (by program order) must
have completed before any subsequent load or store is performed.
Thus, an acquire fence is a two-way barrier for load operations.
To denote acquire semantics, the suffix
.Dq Li _acq
is appended to the function name, for example,
.Fn atomic_thread_fence_acq .
.Pp
When a fence has release semantics, all prior loads or stores (by program
order) must have completed before any subsequent store operation is
performed.
Thus, a release fence is a two-way barrier for store operations.
To denote release semantics, the suffix
.Dq Li _rel
is appended to the function name, for example,
.Fn atomic_thread_fence_rel .
.Pp
Although
.Fn atomic_thread_fence_acq_rel
implements both acquire and release semantics, it is not a full barrier.
For example, a store prior to the fence (in program order) may be completed
after a load subsequent to the fence.
In contrast,
.Fn atomic_thread_fence_seq_cst
implements a full barrier.
Neither loads nor stores may cross this barrier in either direction.
.Pp
In C11, a release fence by one thread synchronizes with an acquire fence by
another thread when an atomic load that is prior to the acquire fence (by
program order) reads the value written by an atomic store that is subsequent
to the release fence.
In contrast, in
.Fx ,
because of the atomicity of ordinary, naturally
aligned loads and stores, fences can also be synchronized by ordinary loads
and stores.
This simplifies the implementation and use of some synchronization
primitives in
.Fx .
.Pp
Since neither a compiler nor a processor can foresee which (atomic) load
will read the value written by an (atomic) store, the ordering constraints
imposed by fences must be more restrictive than those imposed by acquire
loads and release stores.
Essentially, this is why fences are two-way barriers.
.Pp
Although fences impose more restrictive ordering than acquire loads and
release stores, by separating access from ordering, they can sometimes
facilitate more efficient implementations of synchronization primitives.
For example, they can be used to avoid executing a memory barrier until a
memory access shows that some condition is satisfied.
.Ss Interrupt Fence Operations
The
.Fn atomic_interrupt_fence
function establishes ordering between its call location and any interrupt
handler executing on the same CPU.
It is modeled after the C11 function
.Fn atomic_signal_fence ,
adapted for the kernel environment.
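.Pp
The following minimal sketch illustrates the use of thread fences to pass
data from one thread to another, deferring the acquire barrier until a
relaxed load observes that the data has been published.
The variable and function names are illustrative only:
.Bd -literal -offset indent
static volatile u_int ready;
static int payload;			/* shared data */

static void
producer(void)
{
	payload = 42;			/* ordinary store of the data */
	atomic_thread_fence_rel();	/* order the data store before ... */
	ready = 1;			/* ... the store that publishes it */
}

static int
consumer(void)
{
	while (atomic_load_int(&ready) == 0)
		continue;		/* poll without issuing barriers */
	atomic_thread_fence_acq();	/* order the flag load before ... */
	return (payload);		/* ... the load of the data */
}
.Ed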
.Ss Multiple Processors
In multiprocessor systems, the atomicity of the atomic operations on memory
depends on support for cache coherence in the underlying architecture.
In general, cache coherence on the default memory type,
.Dv VM_MEMATTR_DEFAULT ,
is guaranteed by all architectures that are supported by
.Fx .
For example, cache coherence is guaranteed on write-back memory by the
.Tn amd64
and
.Tn i386
architectures.
However, on some architectures, cache coherence might not be enabled on all
memory types.
To determine if cache coherence is enabled for a non-default memory type,
consult the architecture's documentation.
.Ss Semantics
This section describes the semantics of each operation using a C-like notation.
.Bl -hang
.It Fn atomic_add p v
.Bd -literal -compact
*p += v;
.Ed
.It Fn atomic_clear p v
.Bd -literal -compact
*p &= ~v;
.Ed
.It Fn atomic_cmpset dst old new
.Bd -literal -compact
if (*dst == old) {
	*dst = new;
	return (1);
} else
	return (0);
.Ed
.El
.Pp
Some architectures do not implement the
.Fn atomic_cmpset
functions for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8 ,
and
.Dq Li 16 .
.Bl -hang
.It Fn atomic_fcmpset dst *old new
.El
.Pp
On architectures implementing the
.Em Compare And Swap
operation in hardware, the functionality can be described as
.Bd -literal -offset indent -compact
if (*dst == *old) {
	*dst = new;
	return (1);
} else {
	*old = *dst;
	return (0);
}
.Ed
On architectures which provide a
.Em Load Linked/Store Conditional
primitive, the write to
.Dv *dst
might also fail for several reasons, the most important of which
is a parallel write to the
.Dv *dst
cache line by another CPU.
In this case the
.Fn atomic_fcmpset
function also returns
.Dv false ,
despite
.Dl *old == *dst .
.Pp
Some architectures do not implement the
.Fn atomic_fcmpset
functions for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li 8 ,
and
.Dq Li 16 .
.Bl -hang
.It Fn atomic_fetchadd p v
.Bd -literal -compact
tmp = *p;
*p += v;
return (tmp);
.Ed
.El
.Pp
The
.Fn atomic_fetchadd
functions are only implemented for the types
.Dq Li int ,
.Dq Li long ,
and
.Dq Li 32
and do not have any variants with memory barriers at this time.
.Bl -hang
.It Fn atomic_load p
.Bd -literal -compact
return (*p);
.Ed
.It Fn atomic_readandclear p
.Bd -literal -compact
tmp = *p;
*p = 0;
return (tmp);
.Ed
.El
.Pp
The
.Fn atomic_readandclear
functions are not implemented for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li ptr ,
.Dq Li 8 ,
and
.Dq Li 16
and do not have any variants with memory barriers at this time.
.Bl -hang
.It Fn atomic_set p v
.Bd -literal -compact
*p |= v;
.Ed
.It Fn atomic_subtract p v
.Bd -literal -compact
*p -= v;
.Ed
.It Fn atomic_store p v
.Bd -literal -compact
*p = v;
.Ed
.It Fn atomic_swap p v
.Bd -literal -compact
tmp = *p;
*p = v;
return (tmp);
.Ed
.El
.Pp
The
.Fn atomic_swap
functions are not implemented for the types
.Dq Li char ,
.Dq Li short ,
.Dq Li ptr ,
.Dq Li 8 ,
and
.Dq Li 16
and do not have any variants with memory barriers at this time.
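.Pp
As an example of how the
.Fn atomic_fcmpset
functions are typically used, the following minimal sketch (an illustrative
helper, not part of this interface) increments a counter only while its value
is below a limit, retrying the compare-and-swap until it succeeds:
.Bd -literal -offset indent
static int
bounded_increment(volatile u_int *p, u_int limit)
{
	u_int old;

	old = atomic_load_int(p);
	do {
		if (old >= limit)
			return (0);	/* limit reached; *p unchanged */
		/* On failure, atomic_fcmpset_int() updates 'old'. */
	} while (atomic_fcmpset_int(p, &old, old + 1) == 0);
	return (1);
}
.Ed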
.Bl -hang
.It Fn atomic_testandclear p v
.Bd -literal -compact
bit = 1 << (v % (sizeof(*p) * NBBY));
tmp = (*p & bit) != 0;
*p &= ~bit;
return (tmp);
.Ed
.El
.Bl -hang
.It Fn atomic_testandset p v
.Bd -literal -compact
bit = 1 << (v % (sizeof(*p) * NBBY));
tmp = (*p & bit) != 0;
*p |= bit;
return (tmp);
.Ed
.El
.Pp
The
.Fn atomic_testandset
and
.Fn atomic_testandclear
functions are only implemented for the types
.Dq Li int ,
.Dq Li long ,
.Dq Li ptr ,
.Dq Li 32 ,
and
.Dq Li 64
and generally do not have any variants with memory barriers at this time,
except for
.Fn atomic_testandset_acq_long .
.Pp
The type
.Dq Li 64
is currently not implemented for some of the atomic operations on the
.Tn arm ,
.Tn i386 ,
and
.Tn powerpc
architectures.
.Sh RETURN VALUES
The
.Fn atomic_cmpset
function returns the result of the compare operation.
The
.Fn atomic_fcmpset
function returns
.Dv true
if the operation succeeded.
Otherwise it returns
.Dv false
and sets
.Va *old
to the found value.
The
.Fn atomic_fetchadd ,
.Fn atomic_load ,
.Fn atomic_readandclear ,
and
.Fn atomic_swap
functions return the value at the specified address.
The
.Fn atomic_testandset
and
.Fn atomic_testandclear
functions return the result of the test operation.
.Sh EXAMPLES
This example uses the
.Fn atomic_cmpset_acq_ptr
and
.Fn atomic_set_ptr
functions to obtain a sleep mutex and handle recursion.
Since the
.Va mtx_lock
member of a
.Vt "struct mtx"
is a pointer, the
.Dq Li ptr
type is used.
.Bd -literal
/* Try to obtain mtx_lock once. */
#define _obtain_lock(mp, tid)						\\
	atomic_cmpset_acq_ptr(&(mp)->mtx_lock, MTX_UNOWNED, (tid))

/* Get a sleep lock, deal with recursion inline. */
#define _get_sleep_lock(mp, tid, opts, file, line) do {			\\
	uintptr_t _tid = (uintptr_t)(tid);				\\
									\\
	if (!_obtain_lock(mp, tid)) {					\\
		if (((mp)->mtx_lock & MTX_FLAGMASK) != _tid)		\\
			_mtx_lock_sleep((mp), _tid, (opts), (file), (line));\\
		else {							\\
			atomic_set_ptr(&(mp)->mtx_lock, MTX_RECURSE);	\\
			(mp)->mtx_recurse++;				\\
		}							\\
	}								\\
} while (0)
.Ed
.Sh HISTORY
The
.Fn atomic_add ,
.Fn atomic_clear ,
.Fn atomic_set ,
and
.Fn atomic_subtract
operations were introduced in
.Fx 3.0 .
Initially, these operations were defined on the types
.Dq Li char ,
.Dq Li short ,
.Dq Li int ,
and
.Dq Li long .
.Pp
The
.Fn atomic_cmpset ,
.Fn atomic_load_acq ,
.Fn atomic_readandclear ,
and
.Fn atomic_store_rel
operations were added in
.Fx 5.0 .
Simultaneously, the acquire and release variants were introduced, and
support was added for operation on the types
.Dq Li 8 ,
.Dq Li 16 ,
.Dq Li 32 ,
.Dq Li 64 ,
and
.Dq Li ptr .
.Pp
The
.Fn atomic_fetchadd
operation was added in
.Fx 6.0 .
.Pp
The
.Fn atomic_swap
and
.Fn atomic_testandset
operations were added in
.Fx 10.0 .
.Pp
The
.Fn atomic_testandclear
and
.Fn atomic_thread_fence
operations were added in
.Fx 11.0 .
.Pp
The relaxed variants of
.Fn atomic_load
and
.Fn atomic_store
were added in
.Fx 12.0 .
.Pp
The
.Fn atomic_interrupt_fence
operation was added in
.Fx 13.0 .