Lines Matching +full:sync +full:- +full:write

14  *      - Redistributions of source code must retain the above
18 * - Redistributions in binary form must reproduce the above
41 are looking for barriers to use with cache-coherent multi-threaded
47 - CPU attached address space (the CPU memory could be a range of things:
48 cached/uncached/non-temporal CPU DRAM, uncached MMIO space in another
51 is not guaranteed to see a write from another CPU then it is also
52 OK for the DMA device to not see the write after the barrier.
53 - A DMA initiator on a bus. For instance a PCI-E device issuing
57 happens if a MemRd TLP is sent in via PCI-E relative to a CPU WRITE to the
80 memory types or non-temporal stores are required to use SFENCE in their own
88 #define udma_to_device_barrier() asm volatile("sync" ::: "memory")
90 #define udma_to_device_barrier() asm volatile("sync" ::: "memory")
117 from the device - e.g. by reading an MMIO register or seeing that CPU memory is
134 #define udma_from_device_barrier() asm volatile("sync" ::: "memory")
167 wqe->addr = ...;
168 wqe->flags = ...;
170 wqe->valid = 1;
174 /* Promptly flush writes to MMIO Write Combining memory.
175 This should be used after a write to WC memory. This is both a barrier
184 This must also act as a barrier that prevents write combining, e.g.
195 PCI-E MemWr TLPs from the CPU.
202 #define mmio_flush_writes() asm volatile("sync" ::: "memory")
204 #define mmio_flush_writes() asm volatile("sync" ::: "memory")
223 /* Prevent WC writes from being re-ordered relative to other MMIO
224 writes. This should be used before a write to WC memory.
226 This must act as a barrier to prevent write re-ordering from different
236 PCI-E MemWr TLPs from the CPU.
252 /* Write Combining Spinlock primitive
254 Any access to a multi-value WC region must ensure that multiple CPUs do not
255 write to the same values concurrently; these macros make that
279 * to force-flush the WC buffers quickly, and this SFENCE can be in mmio_wc_spinunlock()