.. Copyright 2001 Matthew Wilcox
..
..     This documentation is free software; you can redistribute
..     it and/or modify it under the terms of the GNU General Public
..     License as published by the Free Software Foundation; either
..     version 2 of the License, or (at your option) any later
..     version.

===============================
Bus-Independent Device Accesses
===============================

:Author: Matthew Wilcox
:Author: Alan Cox

Introduction
============

Linux provides an API which abstracts performing IO across all busses
and devices, allowing device drivers to be written independently of bus
type.

Memory Mapped IO
================

Getting Access to the Device
----------------------------

The most widely supported form of IO is memory mapped IO. That is, a
part of the CPU's address space is interpreted not as accesses to
memory, but as accesses to a device. Some architectures define devices
to be at a fixed address, but most have some method of discovering
devices. The PCI bus walk is a good example of such a scheme. This
document does not cover how to receive such an address, but assumes you
are starting with one. Physical addresses are of type unsigned long.

This address should not be used directly. Instead, to get an address
suitable for passing to the accessor functions described below, you
should call ioremap(). An address suitable for accessing
the device will be returned to you.

After you've finished using the device (say, in your module's exit
routine), call iounmap() in order to return the address
space to the kernel. Most architectures allocate new address space each
time you call ioremap(), and they can run out unless you
call iounmap().
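
A minimal sketch of this mapping lifecycle (the names and error handling
below are illustrative, not taken from a real driver)::

    #include <linux/io.h>

    static void __iomem *foo_regs;      /* token used by the accessors below */

    static int foo_map_device(phys_addr_t phys_base, size_t size)
    {
        /* map the device registers; ioremap() returns NULL on failure */
        foo_regs = ioremap(phys_base, size);
        if (!foo_regs)
            return -ENOMEM;
        return 0;
    }

    static void foo_unmap_device(void)
    {
        /* hand the virtual address space back to the kernel */
        iounmap(foo_regs);
    }
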

Accessing the device
--------------------

The part of the interface most used by drivers is reading and writing
memory-mapped registers on the device. Linux provides interfaces to read
and write 8-bit, 16-bit, 32-bit and 64-bit quantities. Due to a
historical accident, these are named byte, word, long and quad accesses.
Both read and write accesses are supported; there is no prefetch support
at this time.

The functions are named readb(), readw(), readl(), readq(),
readb_relaxed(), readw_relaxed(), readl_relaxed(), readq_relaxed(),
writeb(), writew(), writel() and writeq().
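
For example, once the device has been mapped as above, individual registers
can be read and written through the returned ``__iomem`` pointer (the
register offsets and bit names here are hypothetical)::

    u32 status;
    u16 ctrl;

    status = readl(foo_regs + FOO_STATUS);          /* 32-bit read */
    ctrl   = readw(foo_regs + FOO_CTRL);            /* 16-bit read */
    writew(ctrl & ~FOO_CTRL_ENABLE, foo_regs + FOO_CTRL);  /* 16-bit write */
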

Some devices (such as framebuffers) would like to use larger transfers than
8 bytes at a time. For these devices, the memcpy_toio(),
memcpy_fromio() and memset_io() functions are
provided. Do not use memset or memcpy on IO addresses; they are not
guaranteed to copy data in order.
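
As a hedged illustration (buffer names, offset and length are hypothetical),
a driver might initialise and then fill a block of device memory like this::

    /* zero a region of device memory, then copy a kernel buffer into it */
    memset_io(foo_regs + FOO_SRAM_OFFSET, 0, FOO_SRAM_SIZE);
    memcpy_toio(foo_regs + FOO_SRAM_OFFSET, src_buf, src_len);

    /* and read it back into kernel memory */
    memcpy_fromio(dst_buf, foo_regs + FOO_SRAM_OFFSET, src_len);
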

The read and write functions are defined to be ordered. That is, the
compiler is not permitted to reorder the I/O sequence. When the ordering
can be compiler optimised, you can use __readb() and friends to
indicate the relaxed ordering. Use this with care.

While the basic functions are defined to be synchronous with respect to
each other and ordered with respect to each other, the busses the devices
sit on may themselves have asynchronicity. In particular many authors
are burned by the fact that PCI bus writes are posted asynchronously. A
driver author must issue a read from the same device to ensure that
writes have occurred in the specific cases the author cares about. This
kind of property cannot be hidden from driver writers in the API. In some
cases, the read used to flush the device may be expected to fail (if the
card is resetting, for example). In that case, the read should be done
from config space, which is guaranteed to soft-fail if the card doesn't
respond.

The following is an example of flushing a write to a device when the
driver would like to ensure the write's effects are visible prior to
continuing execution::

    static inline void
    qla1280_disable_intrs(struct scsi_qla_host *ha)
    {
        struct device_reg *reg;

        reg = ha->iobase;
        /* disable risc and host interrupts */
        WRT_REG_WORD(&reg->ictrl, 0);
        /*
         * The following read will ensure that the above write
         * has been received by the device before we return from this
         * function.
         */
        RD_REG_WORD(&reg->ictrl);
        ha->flags.ints_enabled = 0;
    }

PCI ordering rules also guarantee that PIO read responses arrive after any
outstanding DMA writes from that bus, since for some devices the result of
a readb() call may signal to the driver that a DMA transaction is
complete. In many cases, however, the driver may want to indicate that the
next readb() call has no relation to any previous DMA writes
performed by the device. The driver can use readb_relaxed() for
these cases, although only some platforms will honor the relaxed
semantics. Using the relaxed read functions will provide significant
performance benefits on platforms that support them. The qla2xxx driver
provides examples of how to use readX_relaxed(). In many cases, a majority
of the driver's readX() calls can safely be converted to readX_relaxed()
calls, since only a few will indicate or depend on DMA completion.
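
As a hedged sketch (the register names and completion logic are
hypothetical, not taken from a real driver), an interrupt handler might use
the relaxed accessor for a register that has no relation to DMA, while
keeping the ordered accessor for the one that signals DMA completion::

    static irqreturn_t foo_irq(int irq, void *data)
    {
        struct foo_dev *foo = data;
        u32 cause, done;

        /* no DMA buffer is inspected based on this value: relaxed is fine */
        cause = readl_relaxed(foo->regs + FOO_IRQ_CAUSE);
        if (!cause)
            return IRQ_NONE;

        /* this read tells us a DMA write to memory has completed, so the
         * ordered readl() is required before touching the DMA buffer */
        done = readl(foo->regs + FOO_DMA_DONE);
        if (done)
            foo_process_dma_buffer(foo);

        writel(cause, foo->regs + FOO_IRQ_CAUSE);   /* ack the interrupt */
        return IRQ_HANDLED;
    }
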

Port Space Accesses
===================

Port Space Explained
--------------------

Another form of IO commonly supported is Port Space. This is a range of
addresses separate from the normal memory address space. Access to these
addresses is generally not as fast as accesses to the memory mapped
addresses, and it also has a potentially smaller address space.

Unlike memory mapped IO, no preparation is required to access port
space.

Accessing Port Space
--------------------

Accesses to this space are provided through a set of functions which
allow 8-bit, 16-bit and 32-bit accesses; also known as byte, word and
long. These functions are inb(), inw(),
inl(), outb(), outw() and
outl().

Some variants are provided for these functions. Some devices require
that accesses to their ports are slowed down. This functionality is
provided by appending a ``_p`` to the end of the function.
There are also equivalents to memcpy. The ins() and
outs() functions copy bytes, words or longs to and from the given
port.
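
A brief sketch, assuming the driver already knows its port numbers (the
base address and register layout below are hypothetical)::

    #define FOO_PORT_BASE   0x0240                  /* hypothetical base */
    #define FOO_PORT_DATA   (FOO_PORT_BASE + 0)
    #define FOO_PORT_CTRL   (FOO_PORT_BASE + 1)

    u8 ctrl;

    outb(0x01, FOO_PORT_DATA);          /* write one byte to the data port */
    ctrl = inb(FOO_PORT_CTRL);          /* read one byte from the control port */
    outb_p(ctrl | 0x80, FOO_PORT_CTRL); /* slowed-down ("_p") variant */
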

__iomem pointer tokens
======================

The data type for an MMIO address is an ``__iomem`` qualified pointer, such as
``void __iomem *reg``. On most architectures it is a regular pointer that
points to a virtual memory address and can be offset or dereferenced, but in
portable code, it must only be passed from and to functions that explicitly
operate on an ``__iomem`` token, in particular the ioremap() and
readl()/writel() functions. The 'sparse' semantic code checker can be used to
verify that this is done correctly.
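
For example, a driver would typically keep the token in its private data and
pass it only to the MMIO accessors (a hedged sketch; the structure and
helper are made up for illustration)::

    struct foo_dev {
        void __iomem *regs;     /* annotated so sparse can check its use */
    };

    static u32 foo_read_id(struct foo_dev *foo)
    {
        return readl(foo->regs + 0x0);
    }

Running sparse over the driver (for example with ``make C=1``) will warn if
the ``__iomem`` token is mixed with regular pointers.
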

While on most architectures, ioremap() creates a page table entry for an
uncached virtual address pointing to the physical MMIO address, some
architectures require special instructions for MMIO, and the ``__iomem`` pointer
just encodes the physical address or an offsettable cookie that is interpreted
by readl()/writel().

Differences between I/O access functions
========================================

readq(), readl(), readw(), readb(), writeq(), writel(), writew(), writeb()

  These are the most generic accessors, providing serialization against other
  MMIO accesses and DMA accesses as well as fixed endianness for accessing
  little-endian PCI devices and on-chip peripherals. Portable device drivers
  should generally use these for any access to ``__iomem`` pointers.

  Note that posted writes are not strictly ordered against a spinlock, see
  Documentation/driver-api/io_ordering.rst.

readq_relaxed(), readl_relaxed(), readw_relaxed(), readb_relaxed(),
writeq_relaxed(), writel_relaxed(), writew_relaxed(), writeb_relaxed()

  On architectures that require an expensive barrier for serializing against
  DMA, these "relaxed" versions of the MMIO accessors only serialize against
  each other, but contain a less expensive barrier operation. A device driver
  might use these in a particularly performance sensitive fast path, with a
  comment that explains why the usage in a specific location is safe without
  the extra barriers.

  See memory-barriers.txt for a more detailed discussion on the precise ordering
  guarantees of the non-relaxed and relaxed versions.

ioread64(), ioread32(), ioread16(), ioread8(),
iowrite64(), iowrite32(), iowrite16(), iowrite8()

  These are an alternative to the normal readl()/writel() functions, with almost
  identical behavior, but they can also operate on ``__iomem`` tokens returned
  for mapping PCI I/O space with pci_iomap() or ioport_map(). On architectures
  that require special instructions for I/O port access, this adds a small
  overhead for an indirect function call implemented in lib/iomap.c, while on
  other architectures, these are simply aliases.
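
  As a hedged sketch of how these pair with pci_iomap() (the BAR number and
  register offset are hypothetical)::

      void __iomem *p;
      u32 val;

      p = pci_iomap(pdev, 0, 0);          /* map BAR 0, whole length */
      if (!p)
          return -ENOMEM;                 /* or other error handling */

      val = ioread32(p + 0x10);           /* works for MMIO and I/O BARs */
      iowrite32(val | 1, p + 0x10);

      pci_iounmap(pdev, p);
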

ioread64be(), ioread32be(), ioread16be()
iowrite64be(), iowrite32be(), iowrite16be()

  These behave in the same way as the ioread32()/iowrite32() family, but with
  reversed byte order, for accessing devices with big-endian MMIO registers.
  Device drivers that can operate on either big-endian or little-endian
  registers may have to implement a custom wrapper function that picks one or
  the other depending on which device was found.

  Note: On some architectures, the normal readl()/writel() functions
  traditionally assume that devices are the same endianness as the CPU, while
  using a hardware byte-reverse on the PCI bus when running a big-endian kernel.
  Drivers that use readl()/writel() this way are generally not portable, but
  tend to be limited to a particular SoC.
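
  One possible shape for such a wrapper (a sketch; the ``big_endian`` flag
  and the surrounding structure are made up for illustration)::

      static u32 foo_read32(struct foo_dev *foo, unsigned int offset)
      {
          if (foo->big_endian)
              return ioread32be(foo->regs + offset);
          return ioread32(foo->regs + offset);
      }
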

hi_lo_readq(), lo_hi_readq(), hi_lo_readq_relaxed(), lo_hi_readq_relaxed(),
ioread64_lo_hi(), ioread64_hi_lo(), ioread64be_lo_hi(), ioread64be_hi_lo(),
hi_lo_writeq(), lo_hi_writeq(), hi_lo_writeq_relaxed(), lo_hi_writeq_relaxed(),
iowrite64_lo_hi(), iowrite64_hi_lo(), iowrite64be_lo_hi(), iowrite64be_hi_lo()

  Some device drivers have 64-bit registers that cannot be accessed atomically
  on 32-bit architectures but allow two consecutive 32-bit accesses instead.
  Since it depends on the particular device which of the two halves has to be
  accessed first, a helper is provided for each combination of 64-bit accessors
  with either low/high or high/low word ordering. A device driver must include
  either <linux/io-64-nonatomic-lo-hi.h> or <linux/io-64-nonatomic-hi-lo.h> to
  get the function definitions along with helpers that redirect the normal
  readq()/writeq() to them on architectures that do not provide 64-bit access
  natively.
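
  For a device documented as "low word first", a driver might, as a sketch,
  simply do::

      #include <linux/io-64-nonatomic-lo-hi.h>

      /*
       * readq()/writeq() are now available everywhere; on architectures
       * without native 64-bit MMIO they are split into two 32-bit
       * accesses, low word first.
       */
      u64 counter = readq(foo->regs + FOO_COUNTER);   /* hypothetical offset */
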

__raw_readq(), __raw_readl(), __raw_readw(), __raw_readb(),
__raw_writeq(), __raw_writel(), __raw_writew(), __raw_writeb()

  These are low-level MMIO accessors without barriers or byteorder changes and
  architecture specific behavior. Accesses are usually atomic in the sense that
  a four-byte __raw_readl() does not get split into individual byte loads, but
  multiple consecutive accesses can be combined on the bus. In portable code, it
  is only safe to use these to access memory behind a device bus but not MMIO
  registers, as there are no ordering guarantees with regard to other MMIO
  accesses or even spinlocks. The byte order is generally the same as for normal
  memory, so unlike the other functions, these can be used to copy data between
  kernel memory and device memory.

inl(), inw(), inb(), outl(), outw(), outb()

  PCI I/O port resources traditionally require separate helpers as they are
  implemented using special instructions on the x86 architecture. On most other
  architectures, these are mapped to readl()/writel() style accessors
  internally, usually pointing to a fixed area in virtual memory. Instead of an
  ``__iomem`` pointer, the address is a 32-bit integer token to identify a port
  number. PCI requires I/O port access to be non-posted, meaning that an outb()
  must complete before the following code executes, while a normal writeb() may
  still be in progress. On architectures that correctly implement this, I/O port
  access is therefore ordered against spinlocks. Many non-x86 PCI host bridge
  implementations and CPU architectures however fail to implement non-posted I/O
  space on PCI, so they can end up being posted on such hardware.

  In some architectures, the I/O port number space has a 1:1 mapping to
  ``__iomem`` pointers, but this is not recommended and device drivers should
  not rely on that for portability. Similarly, an I/O port number as described
  in a PCI base address register may not correspond to the port number as seen
  by a device driver. Portable drivers need to read the port number for the
  resource provided by the kernel.

  There are no direct 64-bit I/O port accessors, but pci_iomap() in combination
  with ioread64/iowrite64 can be used instead.

inl_p(), inw_p(), inb_p(), outl_p(), outw_p(), outb_p()

  On ISA devices that require specific timing, the _p versions of the I/O
  accessors add a small delay. On architectures that do not have ISA buses,
  these are aliases to the normal inb/outb helpers.

readsq, readsl, readsw, readsb
writesq, writesl, writesw, writesb
ioread64_rep, ioread32_rep, ioread16_rep, ioread8_rep
iowrite64_rep, iowrite32_rep, iowrite16_rep, iowrite8_rep
insl, insw, insb, outsl, outsw, outsb

  These are helpers that access the same address multiple times, usually to copy
  data between kernel memory byte stream and a FIFO buffer. Unlike the normal
  MMIO accessors, these do not perform a byteswap on big-endian kernels, so the
  first byte in the FIFO register corresponds to the first byte in the memory
  buffer regardless of the architecture.
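
  For example (a sketch; the register offsets and word count are
  hypothetical), draining a 32-bit wide receive FIFO into a kernel buffer::

      /* read 'words' 32-bit values from the same FIFO register */
      readsl(foo->regs + FOO_RX_FIFO, rx_buf, words);

      /* push a transmit buffer out, one 32-bit value at a time */
      writesl(foo->regs + FOO_TX_FIFO, tx_buf, words);
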

Device memory mapping modes
===========================

Some architectures support multiple modes for mapping device memory.
ioremap_*() variants provide a common abstraction around these
architecture-specific modes, with a shared set of semantics.

ioremap() is the most common mapping type, and is applicable to typical device
memory (e.g. I/O registers). Other modes can offer weaker or stronger
guarantees, if supported by the architecture. From most to least common, they
are as follows:

ioremap()
---------

The default mode, suitable for most memory-mapped devices, e.g. control
registers. Memory mapped using ioremap() has the following characteristics:

* Uncached - CPU-side caches are bypassed, and all reads and writes are handled
  directly by the device
* No speculative operations - the CPU may not issue a read or write to this
  memory, unless the instruction that does so has been reached in committed
  program flow.
* No reordering - The CPU may not reorder accesses to this memory mapping with
  respect to each other. On some architectures, this relies on barriers in
  readl_relaxed()/writel_relaxed().
* No repetition - The CPU may not issue multiple reads or writes for a single
  program instruction.
* No write-combining - Each I/O operation results in one discrete read or write
  being issued to the device, and multiple writes are not combined into larger
  writes. This may or may not be enforced when using __raw I/O accessors or
  pointer dereferences.
* Non-executable - The CPU is not allowed to speculate instruction execution
  from this memory (it probably goes without saying, but you're also not
  allowed to jump into device memory).

On many platforms and buses (e.g. PCI), writes issued through ioremap()
mappings are posted, which means that the CPU does not wait for the write to
actually reach the target device before retiring the write instruction.

On many platforms, I/O accesses must be aligned with respect to the access
size; failure to do so will result in an exception or unpredictable results.

ioremap_wc()
------------

Maps I/O memory as normal memory with write combining. Unlike ioremap(),

* The CPU may speculatively issue reads from the device that the program
  didn't actually execute, and may choose to basically read whatever it wants.
* The CPU may reorder operations as long as the result is consistent from the
  program's point of view.
* The CPU may write to the same location multiple times, even when the program
  issued a single write.
* The CPU may combine several writes into a single larger write.

This mode is typically used for video framebuffers, where it can increase
performance of writes. It can also be used for other blocks of memory in
devices (e.g. buffers or shared memory), but care must be taken as accesses are
not guaranteed to be ordered with respect to normal ioremap() MMIO register
accesses without explicit barriers.

On a PCI bus, it is usually safe to use ioremap_wc() on MMIO areas marked as
``IORESOURCE_PREFETCH``, but it may not be used on those without the flag.
For on-chip devices, there is no corresponding flag, but a driver can use
ioremap_wc() on a device that is known to be safe.
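
As a hedged sketch, a PCI framebuffer driver might map a prefetchable BAR
with write combining (the BAR number is hypothetical)::

    void __iomem *fb;

    /* only use write combining if the BAR is marked prefetchable */
    if (pci_resource_flags(pdev, 1) & IORESOURCE_PREFETCH)
        fb = ioremap_wc(pci_resource_start(pdev, 1),
                        pci_resource_len(pdev, 1));
    else
        fb = ioremap(pci_resource_start(pdev, 1),
                     pci_resource_len(pdev, 1));
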

ioremap_wt()
------------

Maps I/O memory as normal memory with write-through caching. Like ioremap_wc(),
but also,

* The CPU may cache writes issued to and reads from the device, and serve reads
  from that cache.

This mode is sometimes used for video framebuffers, where drivers still expect
writes to reach the device in a timely manner (and not be stuck in the CPU
cache), but reads may be served from the cache for efficiency. However, it is
rarely useful these days, as framebuffer drivers usually perform writes only,
for which ioremap_wc() is more efficient (as it doesn't needlessly trash the
cache). Most drivers should not use this.

ioremap_np()
------------

Like ioremap(), but explicitly requests non-posted write semantics. On some
architectures and buses, ioremap() mappings have posted write semantics, which
means that writes can appear to "complete" from the point of view of the
CPU before the written data actually arrives at the target device. Writes are
still ordered with respect to other writes and reads from the same device, but
due to the posted write semantics, this is not the case with respect to other
devices. ioremap_np() explicitly requests non-posted semantics, which means
that the write instruction will not appear to complete until the device has
received (and to some platform-specific extent acknowledged) the written data.

This mapping mode primarily exists to cater for platforms with bus fabrics that
require this particular mapping mode to work correctly. These platforms set the
``IORESOURCE_MEM_NONPOSTED`` flag for a resource that requires ioremap_np()
semantics and portable drivers should use an abstraction that automatically
selects it where appropriate (see the `Higher-level ioremap abstractions`_
section below).

The bare ioremap_np() is only available on some architectures; on others, it
always returns NULL. Drivers should not normally use it, unless they are
platform-specific or they derive benefit from non-posted writes where
supported, and can fall back to ioremap() otherwise. The normal approach to
ensure posted write completion is to do a dummy read after a write as
explained in `Accessing the device`_, which works with ioremap() on all
platforms.

ioremap_np() should never be used for PCI drivers. PCI memory space writes are
always posted, even on architectures that otherwise implement ioremap_np().
Using ioremap_np() for PCI BARs will at best result in posted write semantics,
and at worst result in complete breakage.

Note that non-posted write semantics are orthogonal to CPU-side ordering
guarantees. A CPU may still choose to issue other reads or writes before a
non-posted write instruction retires. See the previous section on MMIO access
functions for details on the CPU side of things.

ioremap_uc()
------------

ioremap_uc() is only meaningful on old x86-32 systems with the PAT extension,
and on ia64 with its slightly unconventional ioremap() behavior; everywhere
else, ioremap_uc() returns NULL.

Portable drivers should avoid the use of ioremap_uc() and use ioremap()
instead.

ioremap_cache()
---------------

ioremap_cache() effectively maps I/O memory as normal RAM. CPU write-back
caches can be used, and the CPU is free to treat the device as if it were a
block of RAM. This should never be used for device memory which has side
effects of any kind, or which does not return the data previously written on
read.

It should also not be used for actual RAM, as the returned pointer is an
``__iomem`` token. memremap() can be used for mapping normal RAM that is outside
of the linear kernel memory area to a regular pointer.

Portable drivers should avoid the use of ioremap_cache().

Architecture example
--------------------

Here is how the above modes map to memory attribute settings on the ARM64
architecture:

+------------------------+--------------------------------------------+
| API                    | Memory region type and cacheability        |
+------------------------+--------------------------------------------+
| ioremap_np()           | Device-nGnRnE                              |
+------------------------+--------------------------------------------+
| ioremap()              | Device-nGnRE                               |
+------------------------+--------------------------------------------+
| ioremap_uc()           | (not implemented)                          |
+------------------------+--------------------------------------------+
| ioremap_wc()           | Normal-Non Cacheable                       |
+------------------------+--------------------------------------------+
| ioremap_wt()           | (not implemented; fallback to ioremap)     |
+------------------------+--------------------------------------------+
| ioremap_cache()        | Normal-Write-Back Cacheable                |
+------------------------+--------------------------------------------+

Higher-level ioremap abstractions
=================================

Instead of using the above raw ioremap() modes, drivers are encouraged to use
higher-level APIs. These APIs may implement platform-specific logic to
automatically choose an appropriate ioremap mode on any given bus, allowing for
a platform-agnostic driver to work on those platforms without any special
cases. At the time of this writing, the following ioremap() wrappers have such
logic:

devm_ioremap_resource()

  Can automatically select ioremap_np() over ioremap() according to platform
  requirements, if the ``IORESOURCE_MEM_NONPOSTED`` flag is set on the struct
  resource. Uses devres to automatically unmap the resource when the driver
  probe() function fails or a device is unbound from its driver. A usage
  sketch follows at the end of this section.

  Documented in Documentation/driver-api/driver-model/devres.rst.

of_address_to_resource()

  Automatically sets the ``IORESOURCE_MEM_NONPOSTED`` flag for platforms that
  require non-posted writes for certain buses (see the nonposted-mmio and
  posted-mmio device tree properties).

of_iomap()

  Maps the resource described in a ``reg`` property in the device tree, doing
  all required translations. Automatically selects ioremap_np() according to
  platform requirements, as above.

pci_ioremap_bar(), pci_ioremap_wc_bar()

  Maps the resource described in a PCI base address without having to extract
  the physical address first.

pci_iomap(), pci_iomap_wc()

  Like pci_ioremap_bar()/pci_ioremap_wc_bar(), but also works on I/O space when
  used together with ioread32()/iowrite32() and similar accessors.

pcim_iomap()

  Like pci_iomap(), but uses devres to automatically unmap the resource when
  the driver probe() function fails or a device is unbound from its driver.

  Documented in Documentation/driver-api/driver-model/devres.rst.

Not using these wrappers may make drivers unusable on certain platforms with
stricter rules for mapping I/O memory.
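
As mentioned under devm_ioremap_resource() above, here is a minimal sketch of
that pattern in a platform driver probe() function (the register write at the
end is hypothetical)::

    static int foo_probe(struct platform_device *pdev)
    {
        struct resource *res;
        void __iomem *base;

        res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
        base = devm_ioremap_resource(&pdev->dev, res);
        if (IS_ERR(base))
            return PTR_ERR(base);

        /*
         * 'base' is automatically unmapped when probe() fails or the
         * device is unbound, and an appropriate mapping mode (for
         * example ioremap_np()) is chosen from the resource flags.
         */
        writel(0, base + 0x0);

        return 0;
    }
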

Generalizing Access to System and I/O Memory
============================================

.. kernel-doc:: include/linux/iosys-map.h
   :doc: overview

.. kernel-doc:: include/linux/iosys-map.h
   :internal:

Public Functions Provided
=========================

.. kernel-doc:: arch/x86/include/asm/io.h
   :internal:
520