.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd April 7, 2023
.Dt ZPOOLCONCEPTS 7
.Os
.
.Sh NAME
.Nm zpoolconcepts
.Nd overview of ZFS storage pools
.
.Sh DESCRIPTION
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices,
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width "special"
.It Sy disk
A block device, typically located under
.Pa /dev .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa sda
is equivalent to
.Pa /dev/sda .
When given a whole disk, ZFS automatically labels the disk, if necessary.
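.Pp
For example, assuming
.Pa sda
is an unused whole disk, the following two commands are equivalent ways of
creating a single-disk pool on it:
.Dl # Nm zpool Cm create Ar pool sda
.Dl # Nm zpool Cm create Ar pool /dev/sda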
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
This feature is intended primarily for experimental purposes, as the fault
tolerance of a file is only as good as the file system on which it resides.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with
.Em N No disks of size Em X No can hold Em X No bytes and can withstand Em N-1
devices failing without losing data.
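.Pp
For example, assuming three unused disks, the following creates a three-way
mirror that can withstand the failure of any two of them:
.Dl # Nm zpool Cm create Ar pool Sy mirror Ar sda sdb sdc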
.It Sy raidz , raidz1 , raidz2 , raidz3
A distributed-parity layout, similar to RAID-5/6, with improved distribution of
parity, and which does not suffer from the RAID-5/6
.Qq write hole ,
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group, though not
necessarily in a consistent stripe width.
.Pp
A raidz group can have single, double, or triple parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with
.Em N No disks of size Em X No with Em P No parity disks can hold approximately
.Em (N-P)*X No bytes and can withstand Em P No devices failing without losing data .
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
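.Pp
For example, assuming six unused disks of equal size, the following creates a
double-parity raidz group that can withstand any two of them failing and
provides roughly the usable capacity of four of them:
.Dl # Nm zpool Cm create Ar pool Sy raidz2 Ar sda sdb sdc sdd sde sdf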
.It Sy draid , draid1 , draid2 , draid3
A variant of raidz that provides integrated distributed hot spares, allowing
for faster resilvering, while retaining the benefits of raidz.
A dRAID vdev is constructed from multiple internal raidz groups, each with
.Em D No data devices and Em P No parity devices .
These groups are distributed over all of the children in order to fully
utilize the available disk performance.
.Pp
Unlike raidz, dRAID uses a fixed stripe width (padding as necessary with
zeros) to allow fully sequential resilvering.
This fixed stripe width significantly affects both usable capacity and IOPS.
For example, with the default
.Em D=8 No and Em 4 KiB No disk sectors the minimum allocation size is Em 32 KiB .
If using compression, this relatively large allocation size can reduce the
effective compression ratio.
When using ZFS volumes (zvols) and dRAID, the default of the
.Sy volblocksize
property is increased to account for the allocation size.
If a dRAID pool will hold a significant number of small blocks, it is
recommended to also add a mirrored
.Sy special
vdev to store those blocks.
.Pp
In terms of I/O, performance is similar to raidz since, for any read, all
.Em D No data disks must be accessed .
Delivered random IOPS can be reasonably approximated as
.Sy floor((N-S)/(D+P))*single_drive_IOPS .
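For example, a
.Sy draid2
vdev with 24 children, 8 data disks per redundancy group, and 2 distributed
spares can be expected to deliver roughly
.Em floor((24-2)/(8+2)) No = Em 2
times the random IOPS of a single drive.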
.Pp
Like raidz, a dRAID can have single, double, or triple parity.
The
.Sy draid1 ,
.Sy draid2 ,
and
.Sy draid3
types can be used to specify the parity level.
The
.Sy draid
vdev type is an alias for
.Sy draid1 .
.Pp
A dRAID with
.Em N No disks of size Em X , D No data disks per redundancy group , Em P
.No parity level, and Em S No distributed hot spares can hold approximately
.Em (N-S)*(D/(D+P))*X No bytes and can withstand Em P
devices failing without losing data.
.It Sy draid Ns Oo Ar parity Oc Ns Oo Sy \&: Ns Ar data Ns Sy d Oc Ns Oo Sy \&: Ns Ar children Ns Sy c Oc Ns Oo Sy \&: Ns Ar spares Ns Sy s Oc
A non-default dRAID configuration can be specified by appending one or more
of the following optional arguments to the
.Sy draid
keyword:
.Bl -tag -compact -width "children"
.It Ar parity
The parity level (1-3).
.It Ar data
The number of data devices per redundancy group.
In general, a smaller value of
.Em D No will increase IOPS, improve the compression ratio ,
and speed up resilvering at the expense of total usable capacity.
Defaults to
.Em 8 , No unless Em N-P-S No is less than Em 8 .
.It Ar children
The expected number of children.
Useful as a cross-check when listing a large number of devices.
An error is returned when the provided number of children differs.
.It Ar spares
The number of distributed hot spares.
Defaults to zero.
.El
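.Pp
For example, assuming eleven unused disks, the following creates a
double-parity dRAID vdev with 4 data disks per redundancy group and a single
distributed spare:
.Dl # Nm zpool Cm create Ar pool Sy draid2:4d:11c:1s Ar sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk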
.It Sy spare
A pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy dedup
A device solely dedicated for deduplication tables.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one dedup device is specified, then
allocations are load-balanced between those devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file blocks.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one special device is specified, then
allocations are load-balanced between those devices.
.Pp
For more information on special allocations, see the
.Sx Special Allocation Class
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested arbitrarily.
A mirror, raidz or draid virtual device can only be created with files or disks.
Mirrors of mirrors or other such combinations are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line,
separated by whitespace.
Keywords like
.Sy mirror No and Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates a pool with two root vdevs,
each a mirror of two disks:
.Dl # Nm zpool Cm create Ar mypool Sy mirror Ar sda sdb Sy mirror Ar sdc sdd
.
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data are checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states:
.Sy online , degraded , No or Sy faulted .
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device,
is potentially impacted by the state of its associated vdevs
or component devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs are in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices are in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet -compact
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs are in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices are in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet -compact
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported while a device is unavailable, then the device is
identified by a unique identifier instead of its path, since the path was
never correct in the first place.
.El
.Pp
Checksum errors represent events where a disk returned data that was expected
to be correct, but was not.
In other words, these are instances of silent data corruption.
The checksum errors are reported in
.Nm zpool Cm status
and
.Nm zpool Cm events .
When a block is stored redundantly, a damaged block may be reconstructed
(e.g. from raidz parity or a mirrored copy).
In this case, ZFS reports the checksum error against the disks that contained
damaged data.
If a block cannot be reconstructed (e.g. due to three disks being damaged
in a raidz2 group), it is not possible to determine which disks were silently
corrupted.
In this case, checksum errors are reported for all disks on which the block
is stored.
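.Pp
For example, per-device read, write, and checksum error counts for a pool can
be inspected with:
.Dl # Nm zpool Cm status Fl v Ar pool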
.Pp
If a device is removed and later re-attached to the system,
ZFS attempts to bring the device online automatically.
Device attachment detection is hardware-dependent
and might not be supported on all platforms.
.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Dl # Nm zpool Cm create Ar pool Sy mirror Ar sda sdb Sy spare Ar sdc sdd
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration and remains there until the original
device is replaced.
At that point, the hot spare becomes available again if another device fails.
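.Pp
For example, a hot spare can be added to and later removed from an existing
pool with:
.Dl # Nm zpool Cm add Ar pool Sy spare Ar sde
.Dl # Nm zpool Cm remove Ar pool sde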
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which could lead to
data corruption.
.Pp
Shared spares add some risk.
If the pools are imported on different hosts,
and both pools suffer a device failure at the same time,
both could attempt to use the spare at the same time.
This may not be detected, resulting in data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
The
.Sy draid
vdev type provides distributed hot spares.
These hot spares are named after the dRAID vdev they're a part of
.Po Sy draid1 Ns - Ns Ar 2 Ns - Ns Ar 3 No specifies spare Ar 3 No of vdev Ar 2 ,
.No which is a single parity dRAID Pc
and may only be used by that dRAID vdev.
Otherwise, they behave the same as normal hot spares.
.Pp
Spares cannot replace log devices.
.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 2
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Dl # Nm zpool Cm create Ar pool sda sdb Sy log Ar sdc
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached, and removed.
In addition, log devices are imported and exported as part of the pool
that contains them.
Mirrored log devices can be removed by specifying the top-level mirror vdev.
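.Pp
For example, a dedicated log device can be added to and removed from an
existing pool with:
.Dl # Nm zpool Cm add Ar pool Sy log Ar sdd
.Dl # Nm zpool Cm remove Ar pool sdd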
.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Dl # Nm zpool Cm create Ar pool sda sdb Sy cache Ar sdc sdd
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is persistent across reboots, and is restored
asynchronously in L2ARC when the pool is imported (persistent L2ARC).
This can be disabled by setting
.Sy l2arc_rebuild_enabled Ns = Ns Sy 0 .
For cache devices smaller than
.Em 1 GiB ,
ZFS does not write the metadata structures
required for rebuilding the L2ARC, to conserve space.
This can be changed with
.Sy l2arc_rebuild_blocks_min_l2size .
The cache device header
.Pq Em 512 B
is updated even if no metadata structures are written.
Setting
.Sy l2arc_headroom Ns = Ns Sy 0
will result in scanning the full-length ARC lists for cacheable content to be
written in L2ARC (persistent ARC).
If a cache device is added with
.Nm zpool Cm add ,
its label and header will be overwritten and its contents will not be
restored in L2ARC, even if the device was previously part of the pool.
If a cache device is onlined with
.Nm zpool Cm online ,
its contents will be restored in L2ARC.
This is useful in case of memory pressure,
where the contents of the cache device are not fully restored in L2ARC.
The user can offline and then online the cache device again when there is less
memory pressure, to fully restore its contents to L2ARC.
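.Pp
For example, a cache device can be taken offline and later brought back online
to retry restoring its contents into L2ARC:
.Dl # Nm zpool Cm offline Ar pool sdc
.Dl # Nm zpool Cm online Ar pool sdc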
.
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions
.Pq like Nm zfs Cm destroy ,
an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, certain operations are not allowed while a pool has a checkpoint:
specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's GUID.
Adding a new vdev is supported, but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Dl # Nm zpool Cm checkpoint Ar pool
.Pp
To later rewind to its checkpointed state, you need to first export it and
then rewind it during import:
.Dl # Nm zpool Cm export Ar pool
.Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar pool
.Pp
To discard the checkpoint from a pool:
.Dl # Nm zpool Cm checkpoint Fl d Ar pool
.Pp
Dataset reservations (controlled by the
.Sy reservation No and Sy refreservation
properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.
.Ss Special Allocation Class
Allocations in the special class are dedicated to specific block types.
By default, this includes all metadata, the indirect blocks of user data, and
any deduplication tables.
The class can also be provisioned to accept small file blocks.
.Pp
A pool must always have at least one normal
.Pq non- Ns Sy dedup Ns /- Ns Sy special
vdev before
other devices can be assigned to the special class.
If the
.Sy special
class becomes full, then allocations intended for it
will spill back into the normal class.
.Pp
Deduplication tables can be excluded from the special class by unsetting the
.Sy zfs_ddt_data_is_special
ZFS module parameter.
.Pp
Inclusion of small file blocks in the special class is opt-in.
Each dataset can control the size of small file blocks allowed
in the special class by setting the
.Sy special_small_blocks
property to nonzero.
See
.Xr zfsprops 7
for more info on this property.
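.Pp
For example, assuming four unused disks, the following creates a mirrored pool
with a mirrored special vdev, and then opts an example dataset into storing
file blocks of up to 32 KiB in the special class:
.Dl # Nm zpool Cm create Ar pool Sy mirror Ar sda sdb Sy special mirror Ar sdc sdd
.Dl # Nm zfs Cm set Sy special_small_blocks Ns = Ns Ar 32K pool/dataset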