Lines Matching +full:only +full:- +full:1 +full:- +full:8 +full:v

1 .\" Copyright (c) 2000-2001 John H. Baldwin <jhb@FreeBSD.org>
6 .\" 1. Redistributions of source code must retain the above copyright
43 .Fn atomic_add_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
45 .Fn atomic_clear_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
59 .Fn atomic_fetchadd_<type> "volatile <type> *p" "<type> v"
67 .Fn atomic_set_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
69 .Fn atomic_subtract_[acq_|rel_]<type> "volatile <type> *p" "<type> v"
71 .Fn atomic_store_[rel_]<type> "volatile <type> *p" "<type> v"
73 .Fn atomic_swap_<type> "volatile <type> *p" "<type> v"
75 .Fn atomic_testandclear_<type> "volatile <type> *p" "u_int v"
77 .Fn atomic_testandset_<type> "volatile <type> *p" "u_int v"
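The declarations above follow one naming pattern: `atomic_<op>_[acq_|rel_]<type>(volatile <type> *p, <type> v)`. A minimal userland sketch of that pattern, assuming C11 `<stdatomic.h>` as a stand-in for the kernel interface (the `sketch_` names are illustrative, not FreeBSD's):

```c
#include <stdatomic.h>

/* Sketch of the relaxed, acquire, and release variants of
 * atomic_add_int using C11 atomics (userland illustration only). */
static inline void
sketch_atomic_add_int(volatile atomic_uint *p, unsigned int v)
{
	atomic_fetch_add_explicit(p, v, memory_order_relaxed);
}

static inline void
sketch_atomic_add_acq_int(volatile atomic_uint *p, unsigned int v)
{
	atomic_fetch_add_explicit(p, v, memory_order_acquire);
}

static inline void
sketch_atomic_add_rel_int(volatile atomic_uint *p, unsigned int v)
{
	atomic_fetch_add_explicit(p, v, memory_order_release);
}
```

The `acq_` and `rel_` infixes map naturally onto `memory_order_acquire` and `memory_order_release`; the plain variant imposes no ordering beyond atomicity.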
92 ordinary loads and stores of integers in cache-coherent memory are
98 When atomic operations are performed on cache-coherent memory, all
101 When an atomic load is performed on a location in cache-coherent memory,
121 .Bl -tag -offset indent -width short -compact
129 unsigned 32-bit integer
131 unsigned 64-bit integer
140 .Bl -tag -offset indent -width short -compact
145 .It Li 8
146 unsigned 8-bit integer
148 unsigned 16-bit integer
151 These types must not be used in machine-independent code.
161 Otherwise, the traditional memory model that is expected by single-threaded
190 An atomic operation can have acquire semantics only if it performs a load
205 An atomic operation can have release semantics only if it performs a store
224 These rules apply only to the synchronizing threads.
228 one-way barriers to reordering that enable the implementations of
246 Thus, an acquire fence is a two-way barrier for load operations.
255 Thus, a release fence is a two-way barrier for store operations.
288 Essentially, this is why fences are two-way barriers.
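The acquire/release fence behavior described above can be sketched with the C11 analogue `atomic_thread_fence()`; this is a userland illustration of the publish/consume idiom, not the kernel's `atomic_thread_fence_acq()`/`atomic_thread_fence_rel()` themselves:

```c
#include <stdatomic.h>

static int payload;		/* ordinary, non-atomic data */
static atomic_int ready;	/* flag published to the consumer */

/* The release fence is a two-way barrier for stores: the store to
 * payload cannot be reordered past the store to ready. */
static void
publish(int value)
{
	payload = value;
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

/* The acquire fence keeps the load of payload from being reordered
 * before the load of ready, so a consumer that sees ready == 1 also
 * sees the published payload. */
static int
consume(void)
{
	if (atomic_load_explicit(&ready, memory_order_relaxed)) {
		atomic_thread_fence(memory_order_acquire);
		return (payload);
	}
	return (-1);
}
```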
310 For example, cache coherence is guaranteed on write-back memory by the
317 To determine if cache coherence is enabled for a non-default memory type,
321 .Bl -hang
322 .It Fn atomic_add p v
323 .Bd -literal -compact
324 *p += v;
326 .It Fn atomic_clear p v
327 .Bd -literal -compact
328 *p &= ~v;
331 .Bd -literal -compact
334 return (1);
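The `*p &= ~v` pseudo-code for `atomic_clear()` and the success/failure return of `atomic_cmpset()` can be sketched in C11 (the `sketch_` names are illustrative, not the kernel API):

```c
#include <stdatomic.h>

/* Sketch of atomic_clear_int: atomically clear the bits in v. */
static inline void
sketch_atomic_clear_int(volatile atomic_uint *p, unsigned int v)
{
	atomic_fetch_and_explicit(p, ~v, memory_order_relaxed); /* *p &= ~v */
}

/* Sketch of atomic_cmpset_int: if *p == cmp, set *p to set and
 * return 1 (non-zero); otherwise leave *p unchanged and return 0. */
static inline int
sketch_atomic_cmpset_int(volatile atomic_uint *p, unsigned int cmp,
    unsigned int set)
{
	return (atomic_compare_exchange_strong_explicit(p, &cmp, set,
	    memory_order_relaxed, memory_order_relaxed));
}
```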
345 .Dq Li 8 ,
348 .Bl -hang
355 .Bd -literal -offset indent -compact
358 return (1);
384 .Dq Li 8 ,
387 .Bl -hang
388 .It Fn atomic_fetchadd p v
389 .Bd -literal -compact
391 *p += v;
398 functions are implemented only for the types
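Unlike `atomic_add()`, `atomic_fetchadd()` returns the value the location held before the addition. A C11 sketch of that semantic (illustrative name, not the kernel interface):

```c
#include <stdatomic.h>

/* Sketch of atomic_fetchadd_int: atomically add v to *p and return
 * the value *p held immediately before the addition. */
static inline unsigned int
sketch_atomic_fetchadd_int(volatile atomic_uint *p, unsigned int v)
{
	return (atomic_fetch_add_explicit(p, v, memory_order_relaxed));
}
```

This is the building block for tickets and statistics counters, where the caller needs its unique pre-increment value.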
404 .Bl -hang
406 .Bd -literal -compact
410 .Bd -literal -compact
423 .Dq Li 8 ,
427 .Bl -hang
428 .It Fn atomic_set p v
429 .Bd -literal -compact
430 *p |= v;
432 .It Fn atomic_subtract p v
433 .Bd -literal -compact
434 *p -= v;
436 .It Fn atomic_store p v
437 .Bd -literal -compact
438 *p = v;
440 .It Fn atomic_swap p v
441 .Bd -literal -compact
443 *p = v;
454 .Dq Li 8 ,
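`atomic_swap()` combines the store (`*p = v`) with a read of the old value. A C11 sketch of the pair (illustrative names):

```c
#include <stdatomic.h>

/* Sketch of atomic_store_int: plain atomic store of v into *p. */
static inline void
sketch_atomic_store_int(volatile atomic_uint *p, unsigned int v)
{
	atomic_store_explicit(p, v, memory_order_relaxed);
}

/* Sketch of atomic_swap_int: atomically replace *p with v and
 * return the previous contents of *p. */
static inline unsigned int
sketch_atomic_swap_int(volatile atomic_uint *p, unsigned int v)
{
	return (atomic_exchange_explicit(p, v, memory_order_relaxed));
}
```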
458 .Bl -hang
459 .It Fn atomic_testandclear p v
460 .Bd -literal -compact
461 bit = 1 << (v % (sizeof(*p) * NBBY));
467 .Bl -hang
468 .It Fn atomic_testandset p v
469 .Bd -literal -compact
470 bit = 1 << (v % (sizeof(*p) * NBBY));
481 functions are implemented only for the types
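The `bit = 1 << (v % (sizeof(*p) * NBBY))` pseudo-code above selects one bit by index, wrapping at the type's width. A C11 sketch of `atomic_testandset()`, with `CHAR_BIT` standing in for the kernel's `NBBY` (the name is illustrative):

```c
#include <stdatomic.h>
#include <limits.h>

/* Sketch of atomic_testandset_int: atomically set bit (v mod the
 * width of *p) and return that bit's previous value (0 or 1). */
static inline int
sketch_atomic_testandset_int(volatile atomic_uint *p, unsigned int v)
{
	unsigned int bit;

	bit = 1u << (v % (sizeof(unsigned int) * CHAR_BIT));
	return ((atomic_fetch_or_explicit(p, bit,
	    memory_order_relaxed) & bit) != 0);
}
```

`atomic_testandclear()` is the same pattern with `atomic_fetch_and_explicit(p, ~bit, ...)` in place of the OR.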
539 .Bd -literal
542 atomic_cmpset_acq_ptr(&(mp)->mtx_lock, MTX_UNOWNED, (tid))
549 if (((mp)->mtx_lock & MTX_FLAGMASK) != _tid) \\
552 atomic_set_ptr(&(mp)->mtx_lock, MTX_RECURSE); \\
553 (mp)->mtx_recurse++; \\
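The `_mtx_obtain_lock()` idiom above acquires the mutex by compare-and-swapping the lock word from unowned to the caller's thread id, with acquire semantics on success. A C11 sketch of that step, assuming an unowned sentinel of 0 (`SKETCH_MTX_UNOWNED` and the function name are illustrative, not FreeBSD's):

```c
#include <stdatomic.h>
#include <stdint.h>

#define SKETCH_MTX_UNOWNED	((uintptr_t)0)	/* assumed sentinel */

/* Sketch of the _mtx_obtain_lock() step: try to swap the lock word
 * from unowned to this thread's id. Acquire ordering on success
 * ensures the critical section's loads/stores are not reordered
 * before the lock is observed to be held. Returns non-zero on
 * success, 0 if another thread already owns the lock. */
static inline int
sketch_mtx_obtain_lock(_Atomic uintptr_t *lockp, uintptr_t tid)
{
	uintptr_t expected = SKETCH_MTX_UNOWNED;

	return (atomic_compare_exchange_strong_explicit(lockp, &expected,
	    tid, memory_order_acquire, memory_order_relaxed));
}
```

On failure the kernel code then checks for recursion or sleeps; the release-semantics store that unlocks is the mirror image of this acquire.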
584 .Dq Li 8 ,