These are barriers for DMA. They order the CPU's accesses to memory against
accesses made by a DMA-capable device, and are distinct from the ordinary SMP
barriers and atomics, which are looking for barriers to use with
cache-coherent multi-threaded consistency between CPUs.

The two sides being ordered are:

 - CPU attached address space (the CPU memory could be a range of things:
   cached/uncached/non-temporal CPU DRAM, uncached MMIO space in another
   device, and so on). The ordering is only relative to the local CPU's view
   of the system. Eg if the local CPU is not guaranteed to see a write from
   another CPU then it is also OK for the DMA device not to see that write
   after the barrier.

 - A DMA initiator on a bus. For instance a PCI-E device issuing MemRd/MemWr
   TLPs.

The ordering guarantee is always stated in bus terms, eg what happens if a
MemRd TLP is sent in via PCI-E relative to a CPU WRITE to the same memory
location.

To make things very clear, each narrow use is given a name below, and the
proper name should be used in provider code as a form of documentation.

udma_to_device_barrier():

   Ensure that the device's view of memory matches the CPU's view of memory.
   This should be used before any operation that triggers the device to begin
   doing DMA, such as a device doorbell ring.

   This is required to fence writes created by the libibverbs user. Those
   writes could be to any CPU-mapped memory object with any cacheability mode.

   NOTE: x86 has historically used a weaker semantic for this barrier, and
   only fenced normal stores to normal memory. libibverbs users using other
   memory types or non-temporal stores are required to use SFENCE in their
   own code prior to calling verbs to start a DMA.
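
For illustration, a minimal sketch of the "ring the doorbell" step this
describes, using udma_to_device_barrier() from this header. The sketch
assumes it lives inside the rdma-core tree where this header is available as
<util/udma_barrier.h>; struct my_ctx, struct my_wqe and the doorbell layout
are made-up names, and the doorbell store is open-coded for brevity:

   #include <stdint.h>
   #include <endian.h>
   #include <util/udma_barrier.h>

   /* Hypothetical provider objects, for illustration only. */
   struct my_wqe { uint64_t addr; uint32_t len; uint32_t flags; };
   struct my_ctx {
           struct my_wqe *sq;          /* send queue in normal CPU memory */
           volatile uint32_t *db_reg;  /* MMIO doorbell register */
   };

   static void post_and_ring(struct my_ctx *ctx, const struct my_wqe *wqe,
                             uint32_t idx)
   {
           ctx->sq[idx] = *wqe;        /* WQE now sits in DMA-visible memory */

           /* Every store above must be visible to the device before the
            * doorbell write that tells it to start fetching the WQE. */
           udma_to_device_barrier();

           *ctx->db_reg = htobe32(idx);   /* open-coded MMIO doorbell write */
   }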

udma_from_device_barrier():

   Ensure that ordered stores from the device are observable by the CPU. This
   only makes sense after something that observes an ordered store from the
   device - eg by reading an MMIO register or seeing that CPU memory is now
   visibly updated.

   For instance, this would be used after testing a valid bit in memory that
   is a DMA target, to ensure that the following reads see the data written
   before the MemWr TLP that set the valid bit.
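
A hedged sketch of that receive-side pattern, assuming it pairs with
udma_from_device_barrier() from this header; struct my_cqe and its fields are
made up:

   #include <stdint.h>
   #include <string.h>
   #include <util/udma_barrier.h>

   struct my_cqe {                 /* hypothetical completion entry layout */
           uint8_t  data[60];
           uint32_t flags;         /* device sets MY_CQE_VALID last */
   };
   #define MY_CQE_VALID 0x1u

   static int poll_one(volatile struct my_cqe *cqe, void *dest, size_t len)
   {
           if (!(cqe->flags & MY_CQE_VALID))
                   return 0;       /* device has not written this entry yet */

           /* Valid bit observed; fence so the reads below see the data the
            * device wrote before it set the bit. */
           udma_from_device_barrier();

           memcpy(dest, (const void *)cqe->data, len);
           return 1;
   }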

udma_ordering_write_barrier():

   Order writes to CPU memory so that a DMA device cannot view writes after
   the barrier without also seeing all writes before the barrier. This does
   not guarantee any writes are visible to DMA.

   This would be used, for instance, with a WQE that carries a valid bit:
   write all the other fields first, then the valid bit, so the DMA device
   cannot observe a set valid bit together with stale data.

   Compared to udma_to_device_barrier() this barrier is not required to fence
   anything but normal stores to normal malloc memory. Usage should be:

     wqe->addr = ...;
     wqe->flags = ...;
     udma_ordering_write_barrier();
     wqe->valid = 1;
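
The same usage written out as a compilable sketch; struct my_wqe and its
field names are hypothetical:

   #include <stdint.h>
   #include <util/udma_barrier.h>

   struct my_wqe {                 /* hypothetical WQE in DMA-visible memory */
           uint64_t addr;
           uint32_t len;
           uint32_t flags;
           uint32_t valid;         /* the device checks this field last */
   };

   static void fill_wqe(struct my_wqe *wqe, uint64_t addr, uint32_t len)
   {
           wqe->addr  = addr;
           wqe->len   = len;
           wqe->flags = 0;

           /* The device must never observe valid == 1 without also seeing
            * the stores above. */
           udma_ordering_write_barrier();

           wqe->valid = 1;
   }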

mmio_flush_writes():

   Promptly flush writes to MMIO Write Combining memory.
   This should be used after a write to WC memory. It is both a barrier and a
   hint to the CPU to flush any buffers, to reduce the latency to TLP
   generation.

   This is not required to have any effect on CPU memory.

   If done while holding a lock, the ordering of MMIO writes across CPUs must
   be guaranteed to follow the natural ordering implied by the lock.

   This must also act as a barrier that prevents write combining, eg

     *wc_mem = 1;
     mmio_flush_writes();
     *wc_mem = 2;

   must always produce the two MemWr TLPs '1' and '2'; without the barrier
   the CPU is allowed to produce a single TLP '2'.

   Note that there is no order guarantee for writes to WC memory without
   barriers.

   This is intended to be used in conjunction with WC memory to generate
   large PCI-E MemWr TLPs from the CPU.
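
The "large TLP" point is why providers push whole WQEs through WC mappings.
A small sketch, assuming a 64-byte WC-mapped buffer (the pointer name is made
up):

   #include <stdint.h>
   #include <util/udma_barrier.h>

   /* Push one 64-byte WQE through a WC-mapped buffer so the CPU can emit it
    * as a single large MemWr TLP, then flush promptly so it does not linger
    * in the write-combining buffers. */
   static void wc_push_wqe(volatile uint64_t *wc_buf, const uint64_t *wqe)
   {
           for (int i = 0; i < 8; i++)     /* 8 x 8 bytes = 64 bytes */
                   wc_buf[i] = wqe[i];

           mmio_flush_writes();            /* generate the TLP now */
   }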

mmio_wc_start():

   Prevent WC writes from being re-ordered relative to other MMIO writes.
   This should be used before a write to WC memory.

   This must act as a barrier to prevent write re-ordering across different
   memory types, eg a normal MMIO store followed by a store to WC memory must
   reach the device as two TLPs in that order.

   This is intended to be used in conjunction with WC memory to generate
   large PCI-E MemWr TLPs from the CPU.
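
Putting the two WC barriers together, the usual provider pattern looks
roughly like the sketch below; bf_reg stands for a hypothetical WC-mapped
doorbell buffer:

   #include <string.h>
   #include <util/udma_barrier.h>

   /* Write a WQE through a WC doorbell buffer: open the WC section so the
    * copy cannot be re-ordered ahead of earlier MMIO writes, then flush so
    * the device sees it promptly, ideally as one large MemWr TLP. */
   static void bf_post(void *bf_reg, const void *wqe, size_t len)
   {
           mmio_wc_start();
           memcpy(bf_reg, wqe, len);
           mmio_flush_writes();
   }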

mmio_ordered_writes_hack():

   We currently lack writel() style macros that universally guarantee MMIO
   writes happen in order, the way the kernel's do. Worse, many providers
   haphazardly open code writes to MMIO memory, omitting even volatile. Until
   that is fixed, this barrier is a stand-in to indicate places where MMIO
   writes should be switched to some future writel.
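
A guess at how such a marker might be placed today; the register layout and
values are made up, and whether this matches any particular provider is an
assumption:

   #include <stdint.h>
   #include <endian.h>
   #include <util/udma_barrier.h>

   /* Two open-coded MMIO stores the device must see in this order; the
    * stand-in barrier marks where a future ordered writel() belongs. */
   static void write_db_pair(volatile uint32_t *db, uint32_t lo, uint32_t hi)
   {
           db[0] = htole32(lo);
           mmio_ordered_writes_hack();     /* keep the two stores ordered */
           db[1] = htole32(hi);
   }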

mmio_wc_spinlock() / mmio_wc_spinunlock():

   Any access to a multi-value WC region must ensure that multiple CPUs do
   not write to the same values concurrently; these helpers make that
   straightforward.

   The spinlock guarantees that the WC writes issued within the critical
   section are made visible as TLPs to the device, and that the device sees
   the TLPs in the order in which the spinlock is acquired; combining WC
   writes across different critical sections is not permitted.

   Use of these helpers allows the fencing inside the spinlock to be combined
   with the fencing required for DMA.
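
A hedged usage sketch; the buffer, its size and the plain memcpy are
placeholders:

   #include <pthread.h>
   #include <string.h>
   #include <util/udma_barrier.h>

   #define BF_SIZE 64              /* hypothetical WC buffer size */

   /* Copy one WQE into a WC doorbell buffer shared between threads. */
   static void bf_copy(pthread_spinlock_t *lock, void *wc_buf,
                       const void *wqe, size_t len)
   {
           mmio_wc_spinlock(lock);      /* lock + order WC vs earlier MMIO */
           memcpy(wc_buf, wqe, len <= BF_SIZE ? len : BF_SIZE);
           mmio_wc_spinunlock(lock);    /* flush the WC writes, then unlock */
   }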

Inside mmio_wc_spinlock(), on x86 the serialization within the spin lock is
already enough to strongly order WC and other memory types, so no extra
barrier is needed on that architecture. Inside mmio_wc_spinunlock(), it is
possible that on x86 the atomic in the lock is strong enough to force-flush
the WC buffers quickly, so the SFENCE could arguably be skipped there as
well; the flush is still issued explicitly.
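
Taken together, those two notes suggest an implementation along these lines;
a sketch under the assumption that pthread.h and the WC barriers above are
already available at this point in the header, not necessarily the exact
rdma-core code:

   #include <pthread.h>

   static inline void mmio_wc_spinlock(pthread_spinlock_t *lock)
   {
           pthread_spin_lock(lock);
   #if !defined(__i386__) && !defined(__x86_64__)
           /* On x86 the serialization inside the spin lock already orders
            * WC against other memory types; elsewhere open an explicit WC
            * section. */
           mmio_wc_start();
   #endif
   }

   static inline void mmio_wc_spinunlock(pthread_spinlock_t *lock)
   {
           /* On x86 the atomic in the lock may already force the WC buffers
            * out quickly, but flush explicitly so the TLP is generated
            * promptly on every architecture. */
           mmio_flush_writes();
           pthread_spin_unlock(lock);
   }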