.. SPDX-License-Identifier: GPL-2.0

========================
membarrier() System Call
========================

MEMBARRIER_CMD_{PRIVATE,GLOBAL}_EXPEDITED - Architecture requirements
======================================================================

Memory barriers before updating rq->curr
----------------------------------------

The commands MEMBARRIER_CMD_PRIVATE_EXPEDITED and MEMBARRIER_CMD_GLOBAL_EXPEDITED
require each architecture to have a full memory barrier after coming from
user-space, before updating rq->curr.  This barrier is implied by the sequence
rq_lock(); smp_mb__after_spinlock() in __schedule().  The barrier matches a full
barrier in the proximity of the membarrier system call exit, cf.
membarrier_{private,global}_expedited().
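
As an illustration only (a userspace sketch with made-up names, not kernel
code), the shape of this requirement can be mimicked with C11 atomics: take a
lock standing in for the runqueue lock, issue a full fence the way
smp_mb__after_spinlock() does, and only then publish the new current task::

    #include <stdatomic.h>
    #include <pthread.h>

    struct task { int id; };

    /* Hypothetical stand-ins for the runqueue lock and rq->curr. */
    static pthread_mutex_t rq_lock_analog = PTHREAD_MUTEX_INITIALIZER;
    static struct task * _Atomic rq_curr_analog;

    static void pick_next_analog(struct task *next)
    {
            pthread_mutex_lock(&rq_lock_analog);    /* like rq_lock() */
            /* Full barrier before the update, like smp_mb__after_spinlock(). */
            atomic_thread_fence(memory_order_seq_cst);
            atomic_store_explicit(&rq_curr_analog, next, memory_order_relaxed);
            pthread_mutex_unlock(&rq_lock_analog);
    }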

Memory barriers after updating rq->curr
---------------------------------------

The commands MEMBARRIER_CMD_PRIVATE_EXPEDITED and MEMBARRIER_CMD_GLOBAL_EXPEDITED
require each architecture to have a full memory barrier after updating rq->curr,
before returning to user-space.  The schemes providing this barrier on the various
architectures are as follows.

 - alpha, arc, arm, hexagon, mips rely on the full barrier implied by
   spin_unlock() in finish_lock_switch().

 - arm64 relies on the full barrier implied by switch_to().

 - powerpc, riscv, s390, sparc, x86 rely on the full barrier implied by
   switch_mm(), if mm is not NULL; they rely on the full barrier implied
   by mmdrop(), otherwise.  On powerpc and riscv, switch_mm() relies on
   membarrier_arch_switch_mm().

The barrier matches a full barrier in the proximity of the membarrier system call
entry, cf. membarrier_{private,global}_expedited().
40