==============
Data Integrity
==============

1. Introduction
===============

Modern filesystems feature checksumming of data and metadata to
protect against data corruption.  However, the detection of the
corruption is done at read time, which could potentially be months
after the data was written.  At that point the original data that the
application tried to write is most likely lost.

The solution is to ensure that the disk is actually storing what the
application meant it to.  Recent additions to both the SCSI family of
protocols (SBC Data Integrity Field, SCC protection proposal) as well
as SATA/T13 (External Path Protection) try to remedy this by adding
support for appending integrity metadata to an I/O.  The integrity
metadata (or protection information in SCSI terminology) includes a
checksum for each sector as well as an incrementing counter that
ensures the individual sectors are written in the right order.  Some
protection schemes also verify that the I/O is written to the right
place on disk.

Current storage controllers and devices implement various protective
measures, for instance checksumming and scrubbing.  But these
technologies are working in their own isolated domains or at best
between adjacent nodes in the I/O path.  The interesting thing about
DIF and the other integrity extensions is that the protection format
is well defined and every node in the I/O path can verify the
integrity of the I/O and reject it if corruption is detected.  This
allows not only corruption prevention but also isolation of the point
of failure.

2. The Data Integrity Extensions
================================

As written, the protocol extensions only protect the path between
controller and storage device.  However, many controllers actually
allow the operating system to interact with the integrity metadata
(IMD).  We have been working with several FC/SAS HBA vendors to enable
the protection information to be transferred to and from their
controllers.

The SCSI Data Integrity Field works by appending 8 bytes of protection
information to each sector.  The data + integrity metadata is stored
in 520 byte sectors on disk.  Data + IMD are interleaved when
transferred between the controller and target.  The T13 proposal is
similar.

Because it is highly inconvenient for operating systems to deal with
520 (and 4104) byte sectors, we approached several HBA vendors and
encouraged them to allow separation of the data and integrity metadata
scatter-gather lists.

The controller will interleave the buffers on write and split them on
read.  This means that Linux can DMA the data buffers to and from
host memory without changes to the page cache.

Also, the 16-bit CRC checksum mandated by both the SCSI and SATA specs
is somewhat heavy to compute in software.  Benchmarks found that
calculating this checksum had a significant impact on system
performance for a number of workloads.  Some controllers allow a
lighter-weight checksum to be used when interfacing with the operating
system.  Emulex, for instance, supports the TCP/IP checksum instead.
The IP checksum received from the OS is converted to the 16-bit CRC
when writing and vice versa.  This allows the integrity metadata to be
generated by Linux or the application at very low cost (comparable to
software RAID5).

The IP checksum is weaker than the CRC in terms of detecting bit
errors.  However, the strength is really in the separation of the data
buffers and the integrity metadata.  These two distinct buffers must
match up for an I/O to complete.

The separation of the data and integrity metadata buffers as well as
the choice of checksums is referred to as the Data Integrity
Extensions.  As these extensions are outside the scope of the protocol
bodies (T10, T13), Oracle and its partners are trying to standardize
them within the Storage Networking Industry Association.

3. Kernel Changes
=================

The data integrity framework in Linux enables protection information
to be pinned to I/Os and sent to/received from controllers that
support it.

The advantage to the integrity extensions in SCSI and SATA is that
they enable us to protect the entire path from application to storage
device.  However, at the same time this is also the biggest
disadvantage.  It means that the protection information must be in a
format that can be understood by the disk.

Generally Linux/POSIX applications are agnostic to the intricacies of
the storage devices they are accessing.  The virtual filesystem switch
and the block layer make things like hardware sector size and
transport protocols completely transparent to the application.

However, this level of detail is required when preparing the
protection information to send to a disk.  Consequently, the very
concept of an end-to-end protection scheme is a layering violation.
It is completely unreasonable for an application to be aware of
whether it is accessing a SCSI or SATA disk.

The data integrity support implemented in Linux attempts to hide this
from the application.  As far as the application (and to some extent
the kernel) is concerned, the integrity metadata is opaque information
that's attached to the I/O.

The current implementation allows the block layer to automatically
generate the protection information for any I/O.  Eventually the
intent is to move the integrity metadata calculation to userspace for
user data.  Metadata and other I/O that originates within the kernel
will still use the automatic generation interface.

Some storage devices allow each hardware sector to be tagged with a
16-bit value.  The owner of this tag space is the owner of the block
device, i.e. the filesystem in most cases.  The filesystem can use
this extra space to tag sectors as it sees fit.  Because the tag
space is limited, the block interface allows tagging bigger chunks by
way of interleaving.  This way, 8*16 bits of information can be
attached to a typical 4KB filesystem block.

This also means that applications such as fsck and mkfs will need
access to manipulate the tags from user space.  A passthrough
interface for this is being worked on.


4. Block Layer Implementation Details
=====================================

4.1 Bio
-------

The data integrity patches add a new field to struct bio when
CONFIG_BLK_DEV_INTEGRITY is enabled.  bio_integrity(bio) returns a
pointer to a struct bip which contains the bio integrity payload.
Essentially a bip is a trimmed down struct bio which holds a bio_vec
containing the integrity metadata and the required housekeeping
information (bvec pool, vector count, etc.)

A kernel subsystem can enable data integrity protection on a bio by
calling bio_integrity_alloc(bio).  This will allocate and attach the
bip to the bio.

Individual pages containing integrity metadata can subsequently be
attached using bio_integrity_add_page().

bio_free() will automatically free the bip.


4.2 Block Device
----------------

Block devices can set up the integrity information in the integrity
sub-structure of the queue_limits structure.

Layered block devices will need to pick a profile that's appropriate
for all subdevices.  queue_limits_stack_integrity() can help with that.  DM
and MD linear, RAID0 and RAID1 are currently supported.  RAID4/5/6
will require extra work due to the application tag.
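
As a sketch, a driver for a DIF-capable device might describe its
protection format roughly like this.  The field names follow struct
blk_integrity as embedded in struct queue_limits in recent kernels;
consult include/linux/blk-integrity.h for the authoritative
definition:

```c
/* Sketch only: describing a DIF format with a 16-bit CRC guard tag. */
struct queue_limits lim = { };

lim.integrity.csum_type = BLK_INTEGRITY_CSUM_CRC; /* 16-bit T10 CRC */
lim.integrity.tuple_size = 8;	/* bytes of PI per protection interval */
lim.integrity.tag_size = 2;	/* application tag bytes available */
lim.integrity.interval_exp = 9;	/* one tuple per 2^9 = 512-byte interval */
```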


5.0 Block Layer Integrity API
=============================

5.1 Normal Filesystem
---------------------

    The normal filesystem is unaware that the underlying block device
    is capable of sending/receiving integrity metadata.  The IMD will
    be automatically generated by the block layer at submit_bio() time
    in case of a WRITE.  A READ request will cause the I/O integrity
    to be verified upon completion.

    IMD generation and verification can be toggled using the::

      /sys/block/<bdev>/integrity/write_generate

    and::

      /sys/block/<bdev>/integrity/read_verify

    flags.


5.2 Integrity-Aware Filesystem
------------------------------

    A filesystem that is integrity-aware can prepare I/Os with IMD
    attached.  It can also use the application tag space if this is
    supported by the block device.


    `bool bio_integrity_prep(bio);`

      To generate IMD for WRITE and to set up buffers for READ, the
      filesystem must call bio_integrity_prep(bio).

      Prior to calling this function, the bio data direction and start
      sector must be set, and the bio should have all data pages
      added.  It is up to the caller to ensure that the bio does not
      change while I/O is in progress.  If the preparation fails, the
      bio is completed with an error.
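
    A hypothetical write path illustrating the calling convention; the
    helper name fs_submit_write is ours, not a kernel API, and error
    handling is trimmed:

```c
/* Sketch of an integrity-aware write submission. */
static void fs_submit_write(struct bio *bio, sector_t sector)
{
	bio->bi_opf = REQ_OP_WRITE;		/* direction first... */
	bio->bi_iter.bi_sector = sector;	/* ...and the start sector */

	/* ...all data pages must already be added with bio_add_page()... */

	if (!bio_integrity_prep(bio))
		return;	/* prep failed; the bio was completed with an error */

	submit_bio(bio);
}
```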


5.3 Passing Existing Integrity Metadata
---------------------------------------

    Filesystems that either generate their own integrity metadata or
    are capable of transferring IMD from user space can use the
    following calls:


    `struct bip * bio_integrity_alloc(bio, gfp_mask, nr_pages);`

      Allocates the bio integrity payload and hangs it off of the bio.
      nr_pages indicates how many pages of protection data need to be
      stored in the integrity bio_vec list (similar to bio_alloc()).

      The integrity payload will be freed at bio_free() time.


    `int bio_integrity_add_page(bio, page, len, offset);`

      Attaches a page containing integrity metadata to an existing
      bio.  The bio must have an existing bip,
      i.e. bio_integrity_alloc() must have been called.  For a WRITE,
      the integrity metadata in the pages must be in a format
      understood by the target device with the notable exception that
      the sector numbers will be remapped as the request traverses the
      I/O stack.  This implies that the pages added using this call
      will be modified during I/O!  The first reference tag in the
      integrity metadata must have a value of bip->bip_sector.

      Pages can be added using bio_integrity_add_page() as long as
      there is room in the bip bio_vec array (nr_pages).

      Upon completion of a READ operation, the attached pages will
      contain the integrity metadata received from the storage device.
      It is up to the receiver to process them and verify data
      integrity upon completion.
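
    Putting the two calls together, a filesystem that generates its
    own IMD might attach it roughly as follows.  fs_attach_pi is a
    hypothetical helper, not a kernel function:

```c
/* Sketch: attach one page of caller-prepared protection information
 * to a bio before submission. */
static int fs_attach_pi(struct bio *bio, struct page *pi_page,
			unsigned int pi_len)
{
	struct bio_integrity_payload *bip;

	bip = bio_integrity_alloc(bio, GFP_NOIO, 1);
	if (IS_ERR(bip))
		return PTR_ERR(bip);

	/* The PI must already be in the device's format; reference tags
	 * start at the bio's start sector and are remapped in flight,
	 * so the page contents may be modified during I/O. */
	if (bio_integrity_add_page(bio, pi_page, pi_len, 0) < pi_len)
		return -ENOMEM;

	return 0;	/* the payload is freed at bio_free() time */
}
```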


----------------------------------------------------------------------

2007-12-24 Martin K. Petersen <martin.petersen@oracle.com>
