/linux/tools/memory-model/
  linux-kernel.def
  linux-kernel.bell
/linux/tools/memory-model/litmus-tests/
  ISA2+pooncerelease+poacquirerelease+poacquireonce.litmus
    6: * This litmus test demonstrates that a release-acquire chain suffices
    8: * that the release-acquire chain suffices is because in all but one
    11: * (AKA non-rf) link, so release-acquire is all that is needed.
  README
    46: and load-acquire replaced with READ_ONCE().
    49: Can a release-acquire chain order a prior store against
    58: Does a release-acquire pair suffice for the load-buffering
    64: and load-acquire replaced with READ_ONCE().
    75: in one process, and use an acquire load followed by a pair of
    80: acquire load followed by a pair of spin_is_locked() calls
    91: As below, but with a release-acquire chain.
    134: As below, but without the smp_wmb() and acquire load.
    137: Can a smp_wmb(), instead of a release, and an acquire order
    157: Is the ordering provided by a release-acquire chain sufficient
    [all …]
  S+fencewmbonceonce+poacquireonce.litmus
    6: * Can a smp_wmb(), instead of a release, and an acquire order a prior
  LB+poacquireonce+pooncerelease.litmus
    6: * Does a release-acquire pair suffice for the load-buffering litmus
  S+poonceonces.litmus
    6: * Starting with a two-process release-acquire chain ordering P0()'s
  ISA2+poonceonces.litmus
    6: * Given a release-acquire chain ordering the first process's store
/linux/rust/kernel/sync/atomic/
  internal.rs
    205: fn read[acquire](a: &AtomicRepr<Self>) -> Self {
    224: fn xchg[acquire, release, relaxed](a: &AtomicRepr<Self>, v: Self) -> Self {
    235: fn try_cmpxchg[acquire, release, relaxed](
    260: fn fetch_add[acquire, release, relaxed](a: &AtomicRepr<Self>, v: Self::Delta) -> Self {
/linux/Documentation/litmus-tests/atomic/
  cmpxchg-fail-unordered-2.litmus
    7: * an acquire release operation. (In contrast, a successful cmpxchg()
    8: * does act as both an acquire and a release operation.)
  Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
    1: C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
    7: * stronger than a normal acquire: both the read and write parts of
  cmpxchg-fail-ordered-2.litmus
    7: * operation have acquire ordering.
/linux/rust/pin-init/examples/
  mutex.rs
    34: pub fn acquire(&self) -> SpinLockGuard<'_> {    [in acquire(), method]
    93: let mut sguard = self.spin_lock.acquire();    [in lock()]
    101: sguard = self.spin_lock.acquire();    [in lock()]
    134: let sguard = self.mtx.spin_lock.acquire();    [in drop()]
/linux/Documentation/litmus-tests/
  README
    15: Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
    17: stronger than a normal acquire: both the read and write parts of
    29: Demonstrate that a failing cmpxchg() operation acts as an acquire
    38: acquire operation.
/linux/Documentation/locking/
  futex-requeue-pi.rst
    91: to be able to acquire the rt_mutex before returning to user space.
    93: acquire the rt_mutex as it would open a race window between the
    99: allow the requeue code to acquire an uncontended rt_mutex on behalf
    115: requeueing, futex_requeue() attempts to acquire the requeue target
    127: tasks as it can acquire the lock for, which in the majority of cases
    129: either pthread_cond_broadcast() or pthread_cond_signal() acquire the
  ww-mutex-design.rst
    64: trying to acquire locks doesn't grab a new reservation id, but keeps the one it
    66: acquire context. Furthermore the acquire context keeps track of debugging state
    67: to catch w/w mutex interface abuse. An acquire context is representing a
    71: w/w mutexes, since it is required to initialize the acquire context. The lock
    74: Furthermore there are three different class of w/w lock acquire functions:
    99: * Functions to only acquire a single w/w mutex, which results in the exact same
    103: Again this is not strictly required. But often you only want to acquire a
    104: single lock in which case it's pointless to set up an acquire context (and so
    119: Three different ways to acquire locks within the same w/w class. Common
    344: (1) Waiters with an acquire context are sorted by stamp order; waiters
    [all …]
  mutex-design.rst
    40: (i) fastpath: tries to atomically acquire the lock by cmpxchg()ing the owner with
    54: to acquire the lock spinning on a local variable. It avoids expensive
    97: - Point-of-acquire tracking, symbolic lookup of function names,
    115: acquire the mutex and assume that the mutex_unlock() context is not using
/linux/drivers/net/ethernet/broadcom/bnx2x/
  bnx2x_vfpf.c
    226: struct vfpf_acquire_tlv *req = &bp->vf2pf_mbox->req.acquire;    [in bnx2x_vfpf_acquire()]
    1365: struct vfpf_acquire_tlv *acquire)    [in bnx2x_vf_mbx_is_windows_vm(), argument]
    1372: if (!acquire->bulletin_addr ||    [in bnx2x_vf_mbx_is_windows_vm()]
    1373: acquire->resc_request.num_mc_filters == 32 ||    [in bnx2x_vf_mbx_is_windows_vm()]
    1374: ((acquire->vfdev_info.vf_os & VF_OS_MASK) ==    [in bnx2x_vf_mbx_is_windows_vm()]
    1393: if (bnx2x_vf_mbx_is_windows_vm(bp, &mbx->msg->req.acquire))    [in bnx2x_vf_mbx_acquire_chk_dorq()]
    1403: struct vfpf_acquire_tlv *acquire = &mbx->msg->req.acquire;    [in bnx2x_vf_mbx_acquire(), local]
    1408: vf->abs_vfid, acquire->vfdev_info.vf_id, acquire->vfdev_info.vf_os,    [in bnx2x_vf_mbx_acquire()]
    1409: acquire->resc_request.num_rxqs, acquire->resc_request.num_txqs,    [in bnx2x_vf_mbx_acquire()]
    1410: acquire->resc_request.num_sbs, acquire->resc_request.num_mac_filters,    [in bnx2x_vf_mbx_acquire()]
    [all …]
/linux/tools/memory-model/Documentation/
  herd-representation.txt
  glossary.txt
    31: An example special acquire operation is smp_load_acquire(),
    33: acquire loads.
    35: When an acquire load returns the value stored by a release store
    36: to that same variable, (in other words, the acquire load "reads
    38: store "happen before" any operations following that load acquire.
/linux/drivers/net/ethernet/intel/e1000e/
  phy.c
    292: ret_val = hw->phy.ops.acquire(hw);    [in e1000e_read_phy_reg_m88()]
    317: ret_val = hw->phy.ops.acquire(hw);    [in e1000e_write_phy_reg_m88()]
    364: if (!hw->phy.ops.acquire)    [in __e1000e_read_phy_reg_igp()]
    367: ret_val = hw->phy.ops.acquire(hw);    [in __e1000e_read_phy_reg_igp()]
    431: if (!hw->phy.ops.acquire)    [in __e1000e_write_phy_reg_igp()]
    434: ret_val = hw->phy.ops.acquire(hw);    [in __e1000e_write_phy_reg_igp()]
    499: if (!hw->phy.ops.acquire)    [in __e1000_read_kmrn_reg()]
    502: ret_val = hw->phy.ops.acquire(hw);    [in __e1000_read_kmrn_reg()]
    572: if (!hw->phy.ops.acquire)    [in __e1000_write_kmrn_reg()]
    575: ret_val = hw->phy.ops.acquire(hw);    [in __e1000_write_kmrn_reg()]
    [all …]
/linux/drivers/media/dvb-frontends/
  as102_fe.h
    14: int (*stream_ctrl)(void *priv, int acquire, uint32_t elna_cfg);
/linux/drivers/gpu/drm/nouveau/include/nvkm/core/
  memory.h
    37: void __iomem *(*acquire)(struct nvkm_memory *);    [member]
    73: #define nvkm_kmap(o) (o)->func->acquire(o)
  gpuobj.h
    27: void *(*acquire)(struct nvkm_gpuobj *);    [member]
/linux/Documentation/RCU/
  UP.rst
    60: callback function must acquire this same lock. In this case, if
    129: like spin_lock_bh() to acquire the lock. Please note that
    140: callbacks acquire locks directly. However, a great many RCU
    141: callbacks do acquire locks *indirectly*, for example, via
|