1.\"
2.\" CDDL HEADER START
3.\"
4.\" The contents of this file are subject to the terms of the
5.\" Common Development and Distribution License (the "License").
6.\" You may not use this file except in compliance with the License.
7.\"
8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9.\" or http://www.opensolaris.org/os/licensing.
10.\" See the License for the specific language governing permissions
11.\" and limitations under the License.
12.\"
13.\" When distributing Covered Code, include this CDDL HEADER in each
14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15.\" If applicable, add the following below this CDDL HEADER, with the
16.\" fields enclosed by brackets "[]" replaced with your own identifying
17.\" information: Portions Copyright [yyyy] [name of copyright owner]
18.\"
19.\" CDDL HEADER END
20.\"
21.\"
22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
24.\" Copyright 2017 Nexenta Systems, Inc.
25.\" Copyright (c) 2017 Datto Inc.
26.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27.\" Copyright 2021 Joyent, Inc.
28.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
29.\"
.Dd March 30, 2022
.Dt ZPOOL 8
.Os
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl \&?
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl B
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tempname
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm export
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dfmt
.Op Fl F Op Fl n
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Nm
.Cm initialize
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Nm
.Cm iostat
.Op Oo Fl lq Oc | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Nm
.Cm labelclear
.Op Fl f
.Ar device
.Nm
.Cm list
.Op Fl HgLpPv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Ar pool
.Nm
.Cm remove
.Op Fl np
.Ar pool Ar device Ns ...
.Nm
.Cm remove
.Fl s
.Ar pool
.Nm
.Cm replace
.Op Fl f
.Ar pool Ar device Op Ar new_device
.Nm
.Cm resilver
.Ar pool Ns ...
.Nm
.Cm scrub
.Op Fl s | Fl p
.Ar pool Ns ...
.Nm
.Cm trim
.Op Fl d
.Op Fl r Ar rate
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm split
.Op Fl gLlnP
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool newpool
.Nm
.Cm status
.Op Fl DigLpPstvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm sync
.Oo Ar pool Oc Ns ...
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics.
The following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev/dsk .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks.
A disk can be specified by a full path, or it can be a shorthand name
.Po the relative portion of the path under
.Pa /dev/dsk
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa c0t0d0
is equivalent to
.Pa /dev/dsk/c0t0d0s2 .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file.
The use of files as a backing store is strongly discouraged.
It is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part.
A file must be specified by a full path.
.It Sy mirror
A mirror of two or more devices.
Data is replicated in an identical fashion across all components of a mirror.
A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
failing before data integrity is compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity is striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data.
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised.
The minimum number of devices in a raidz group is one more than the number of
parity disks.
The recommended number is between 3 and 9 to help increase performance.
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy dedup
A device dedicated solely for allocating dedup data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one dedup device is specified, then allocations are load-balanced
between devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one special device is specified, then allocations are
load-balanced between devices.
.Pp
For more information on special allocations, see the
.Sx Special Allocation Class
section.
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
.Ed
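.Pp
As a further illustration (device names are examples only), the following
creates a single raidz2 root vdev spanning five disks:
.Bd -literal
# zpool create mypool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
.Ed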
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
Shared spares add some risk.
If the pools are imported on different hosts, and both pools suffer a device
failure at the same time, both could attempt to use the spare at the same time.
This may not be detected, resulting in data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
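.Pp
For example, if the spare
.Pa c2d0
from the pool above had been activated, the in-progress replacement could be
cancelled with:
.Bd -literal
# zpool detach pool c2d0
.Ed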
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 3C
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool c0d0 c1d0 log c2d0
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
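.Pp
For instance, a pool with a mirrored log could be created with (device names
are examples only):
.Bd -literal
# zpool create pool c0d0 c1d0 log mirror c2d0 c3d0
.Ed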
.Pp
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool.
Mirrored devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low-latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions
.Pq e.g. zfs Cm destroy ,
an administrator can checkpoint the pool's state and in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
The checkpoint is automatically discarded upon rewinding.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed;
specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's GUID.
Adding a new vdev is supported but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind to its checkpointed state (which also discards the checkpoint),
you need to first export it and then rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool without rewinding:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Nm reservation
or
.Nm refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any dedup data.
The class can also be provisioned to accept a limited percentage of small file
data blocks.
.Pp
A pool must always have at least one general (non-specified) vdev before
other devices can be assigned to the special class.
If the special class becomes full, then allocations intended for it will spill
back into the normal class.
.Pp
Dedup data can be excluded from the special class by setting the
.Sy zfs_ddt_data_is_special
zfs kernel variable to false (0).
.Pp
Inclusion of small file blocks in the special class is opt-in.
Each dataset can control the size of small file blocks allowed in the special
class by setting the
.Sy special_small_blocks
dataset property.
It defaults to zero so you must opt-in by setting it to a non-zero value.
See
.Xr zfs 8
for more info on setting this property.
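.Pp
For example, a pool with a mirrored special vdev could be created, and a
dataset opted in to small file blocks, with commands along these lines (device
names, the dataset, and the 32K threshold are illustrative only):
.Bd -literal
# zpool create pool c0d0 c1d0 special mirror c2d0 c3d0
# zfs set special_small_blocks=32K pool/fs
.Ed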
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Cm allocated
Amount of storage space used within the pool.
.It Sy bootsize
The size of the system boot partition.
This property can only be set at pool creation time and is read-only once pool
is created.
Setting this property implies using the
.Fl B
option.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size, expressed as a base
.Sy 2
exponent (internally referred to as
.Sy ashift
). Values from 9 to 16, inclusive, are valid; the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk) write size will be set to the specified size,
so this represents a space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than or equal to
the sector size of the underlying disks.
The typical case for setting this property is when performance is important and
the underlying disks use 4KiB sectors but report 512B sectors to the OS (for
compatibility reasons); in that case, set
.Sy ashift=12
(which is 1<<12 = 4096).
When set, this property is used as the default hint value in subsequent vdev
operations (add, attach, and replace).
Changing this value will not modify any existing vdev, not even on disk
replacement; however, it can be used, for instance, to replace a dying
512B-sector disk with a newer 4KiB-sector device: this will probably result in
bad performance but at the same time could prevent loss of data.
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
.It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file is removed.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support BLKDISCARD, such as SSDs, or
file vdevs on which the underlying file system supports hole-punching, to
reclaim unused blocks.
The default setting for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay, allowing smaller ranges to be
aggregated into a few larger ones.
These can then be issued more efficiently to the storage.
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Sy zpool create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 7
man page.
In order to enable this property each host must set a unique hostid.
The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
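.Pp
For example, a settable property such as
.Sy autoexpand
can be supplied at creation time or changed later (pool and device names are
examples only):
.Bd -literal
# zpool create -o autoexpand=on tank mirror c0t0d0 c0t1d0
# zpool set autoexpand=off tank
.Ed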
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl \&?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names.
These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links.
This can be used to look up the current block
device name regardless of the /dev/dsk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display full paths for
.Ar vdev Ns s
instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
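.Pp
For example, a second mirror could be added to an existing pool with (device
and pool names are examples only):
.Bd -literal
# zpool add tank mirror c2t0d0 c2t1d0
.Ed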
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
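.Pp
For example, a single-disk vdev could be converted into a two-way mirror with
(device and pool names are examples only):
.Bd -literal
# zpool attach tank c0t0d0 c1t0d0
.Ed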
.It Xo
.Nm
.Cm checkpoint
.Op Fl d, -discard
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool ,
which can later be restored by
.Nm zpool Cm import --rewind-to-checkpoint .
Rewinding will also discard the checkpoint.
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d, -discard
Discards an existing checkpoint from
.Ar pool
without rewinding.
.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
If multihost is enabled, and the pool has been suspended, this will not
resume I/O.
While the pool was suspended, it may have been imported on
another host, and resuming I/O could result in pool damage.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl B
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tempname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy - ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command attempts to verify that each device specified is accessible and not
currently in use by another subsystem.
However this check is not robust enough
to detect simultaneous attempts to use a new device in different pools, even if
.Sy multihost
is
.Sy enabled .
The
administrator must ensure that simultaneous invocations of any combination of
.Sy zpool replace ,
.Sy zpool create ,
.Sy zpool add ,
or
.Sy zpool labelclear ,
do not refer to the same device.
Using the same device in two pools will
result in pool corruption.
.Pp
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl B
Creates a whole-disk pool with an EFI System Partition to support booting a
system with UEFI firmware.
The default size is 256MB.
To create a boot partition with a custom size, set the
.Sy bootsize
property with the
.Fl o
option.
See the
.Sx Properties
section for details.
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 7
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 7
for a list of valid features that can be set.
.Pp
.Ar value
can either be
.Sy disabled
or
.Sy enabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tempname
Sets the in-core pool name to
.Pa tempname
while the on-disk name will be the name specified as the pool name
.Pa pool .
This will set the default cachefile property to
.Sy none .
This is intended to handle name space collisions when creating pools
for other systems, such as virtual machines or physical machines
whose pools live on network block devices.
.El
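.Pp
For example, the options above can be combined to create a pool with an
explicit mount point and a file system property set on its root dataset
(names and values are examples only):
.Bd -literal
# zpool create -m /export/tank -O compression=on tank mirror c0t0d0 c0t1d0
.Ed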
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
.It Xo
.Nm
.Cm export
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just slices, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
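.Pp
For example, the
.Sy capacity
and
.Sy health
properties could be printed in scripted form with (pool name is an example):
.Bd -literal
# zpool get -H -o name,value capacity,health tank
.Ed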
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to the standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev/dsk .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of the device for each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys are
loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.El
.It Xo
.Nm
.Cm import
.Op Fl Dfmt
.Op Fl F Op Fl n
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that the zpool command will request encryption keys for all
encrypted datasets it attempts to mount as it is bringing the pool
online.
This is equivalent to running
.Nm zfs Cm mount
on each encrypted dataset immediately after the pool is imported.
If any datasets have a
.Sy prompt
keysource this command will block waiting for the key to be entered.
Otherwise, encrypted datasets will be left unavailable until the keys are
loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl t
Used with
.Ar newpool .
Specifies that
.Ar newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set
.Sy cachefile
property to
.Sy none
when not explicitly specified.
.It Fl -rewind-to-checkpoint
Rewinds pool to the checkpointed state.
Once the pool is imported with this flag there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see how the pool would look if they were
to fully rewind.
.El
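.Pp
For example, an exported pool could be imported under a new name with an
alternate root (names are examples only):
.Bd -literal
# zpool import -R /a tank newtank
.Ed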
.It Xo
.Nm
.Cm initialize
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Xc
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
Only leaf data or log devices may be initialized.
.Bl -tag -width Ds
.It Fl c, -cancel
Cancel initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
Suspend initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no suspension will occur on any device.
Initializing can then be resumed by running
.Nm zpool Cm initialize
with no flags on the relevant target devices.
.El
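.Pp
For example, initialization of all eligible devices in a pool could be started,
and later suspended, with (pool name is an example):
.Bd -literal
# zpool initialize tank
# zpool initialize -s tank
.Ed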
.It Xo
.Nm
.Cm iostat
.Op Oo Fl lq Oc | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs.
Physical I/Os may be observed via
.Xr iostat 8 .
If writes are located nearby, they may be merged into a single larger operation.
Additional I/O may be generated depending on the level of vdev redundancy.
To filter output, you may pass in a list of pools, a pool and list of vdevs
in that pool, or a list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If the
.Fl n
flag is specified, the headers are displayed only once; otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot regardless of
whether
.Ar interval
and
.Ar count
are passed.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl i
Display vdev initialization status.
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
path used to open it.
.It Fl n
Print headers only once when passed.
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf vdev's IO.
1708This includes histograms of individual IOs (ind) and aggregate IOs (agg).
1709These stats can be useful for observing how well IO aggregation is working.
1710Note that TRIM IOs may exceed 16M, but will be counted as 16M.
1711.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1714.It Fl y
1715Omit statistics since boot.
1716Normally the first line of output reports the statistics since boot.
1717This option suppresses that first line of output.
1719.It Fl w
1720Display latency histograms:
1721.Pp
1722.Ar total_wait :
1723Total IO time (queuing + disk IO time).
1724.Ar disk_wait :
1725Disk IO time (time reading/writing the disk).
1726.Ar syncq_wait :
1727Amount of time IO spent in synchronous priority queues.
1728Does not include disk time.
1729.Ar asyncq_wait :
1730Amount of time IO spent in asynchronous priority queues.
1731Does not include disk time.
1732.Ar scrub :
1733Amount of time IO spent in scrub queue.
1734Does not include disk time.
1735.It Fl l
1736Include average latency statistics:
1737.Pp
1738.Ar total_wait :
1739Average total IO time (queuing + disk IO time).
1740.Ar disk_wait :
1741Average disk IO time (time reading/writing the disk).
1742.Ar syncq_wait :
1743Average amount of time IO spent in synchronous priority queues.
1744Does not include disk time.
1745.Ar asyncq_wait :
1746Average amount of time IO spent in asynchronous priority queues.
1747Does not include disk time.
1748.Ar scrub :
1749Average queuing time in scrub queue.
1750Does not include disk time.
1751.Ar trim :
1752Average queuing time in trim queue.
1753Does not include disk time.
1754.It Fl q
1755Include active queue statistics.
1756Each priority queue has both pending (
1757.Ar pend )
1758and active (
1759.Ar activ )
1760IOs.
1761Pending IOs are waiting to be issued to the disk, and active IOs have been
1762issued to disk and are waiting for completion.
1763These stats are broken out by priority queue:
1764.Pp
1765.Ar syncq_read/write :
1766Current number of entries in synchronous priority
1767queues.
1768.Ar asyncq_read/write :
1769Current number of entries in asynchronous priority queues.
1770.Ar scrubq_read :
1771Current number of entries in scrub queue.
1772.Ar trimq_write :
1773Current number of entries in trim queue.
1774.Pp
1775All queue statistics are instantaneous measurements of the number of
1776entries in the queues.
1777If you specify an interval, the measurements will be sampled from the end of
1778the interval.
1779.El
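.Pp
As an illustration, a command such as the following would print statistics
for a hypothetical pool
.Em tank
every 5 seconds and exit after 10 reports:
.Bd -literal
# zpool iostat tank 5 10
.Ed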
1780.It Xo
1781.Nm
1782.Cm labelclear
1783.Op Fl f
1784.Ar device
1785.Xc
1786Removes ZFS label information from the specified
1787.Ar device .
1788The
1789.Ar device
1790must not be part of an active pool configuration.
1791.Bl -tag -width Ds
1792.It Fl f
1793Treat exported or foreign devices as inactive.
1794.El
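.Pp
For example, the label of a hypothetical device that was once part of an
exported pool could be cleared with:
.Bd -literal
# zpool labelclear -f c1t3d0
.Ed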
1795.It Xo
1796.Nm
1797.Cm list
1798.Op Fl HgLpPv
1799.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1800.Op Fl T Sy u Ns | Ns Sy d
1801.Oo Ar pool Oc Ns ...
1802.Op Ar interval Op Ar count
1803.Xc
1804Lists the given pools along with a health status and space usage.
1805If no
1806.Ar pool Ns s
1807are specified, all pools in the system are listed.
1808When given an
1809.Ar interval ,
1810the information is printed every
1811.Ar interval
1812seconds until ^C is pressed.
1813If
1814.Ar count
1815is specified, the command exits after
1816.Ar count
1817reports are printed.
1818.Bl -tag -width Ds
1819.It Fl g
1820Display vdev GUIDs instead of the normal device names.
1821These GUIDs can be used in place of device names for the zpool
1822detach/offline/remove/replace commands.
1823.It Fl H
1824Scripted mode.
1825Do not display headers, and separate fields by a single tab instead of arbitrary
1826space.
1827.It Fl o Ar property
1828Comma-separated list of properties to display.
1829See the
1830.Sx Properties
1831section for a list of valid properties.
1832The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation , capacity ,
1834.Cm dedupratio , health , altroot .
1835.It Fl L
1836Display real paths for vdevs resolving all symbolic links.
1837This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
path used to open it.
1839.It Fl p
1840Display numbers in parsable
1841.Pq exact
1842values.
1843.It Fl P
1844Display full paths for vdevs instead of only the last component of
1845the path.
1846This can be used in conjunction with the
1847.Fl L
1848flag.
1849.It Fl T Sy u Ns | Ns Sy d
1850Display a time stamp.
1851Specify
1852.Sy u
1853for a printed representation of the internal representation of time.
1854See
1855.Xr time 2 .
1856Specify
1857.Sy d
1858for standard date format.
1859See
1860.Xr date 1 .
1861.It Fl v
1862Verbose statistics.
1863Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1865.El
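.Pp
As an illustration, a reduced set of columns could be requested for a
hypothetical pool
.Em tank
with a command such as:
.Bd -literal
# zpool list -o name,size,capacity,health tank
.Ed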
1866.It Xo
1867.Nm
1868.Cm offline
1869.Op Fl t
1870.Ar pool Ar device Ns ...
1871.Xc
1872Takes the specified physical device offline.
1873While the
1874.Ar device
1875is offline, no attempt is made to read or write to the device.
1876This command is not applicable to spares.
1877.Bl -tag -width Ds
1878.It Fl t
1879Temporary.
1880Upon reboot, the specified physical device reverts to its previous state.
1881.El
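.Pp
For example, a hypothetical device c0t2d0 in pool
.Em tank
could be taken offline until the next reboot with:
.Bd -literal
# zpool offline -t tank c0t2d0
.Ed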
1882.It Xo
1883.Nm
1884.Cm online
1885.Op Fl e
1886.Ar pool Ar device Ns ...
1887.Xc
1888Brings the specified physical device online.
1889This command is not applicable to spares.
1890.Bl -tag -width Ds
1891.It Fl e
1892Expand the device to use all available space.
1893If the device is part of a mirror or raidz then all devices must be expanded
1894before the new space will become available to the pool.
1895.El
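.Pp
For example, a hypothetical device c0t2d0 in pool
.Em tank
could be brought back online and expanded to use all of its space with:
.Bd -literal
# zpool online -e tank c0t2d0
.Ed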
1896.It Xo
1897.Nm
1898.Cm reguid
1899.Ar pool
1900.Xc
1901Generates a new unique identifier for the pool.
1902You must ensure that all devices in this pool are online and healthy before
1903performing this action.
1904.It Xo
1905.Nm
1906.Cm reopen
1907.Ar pool
1908.Xc
1909Reopen all the vdevs associated with the pool.
1910.It Xo
1911.Nm
1912.Cm remove
1913.Op Fl np
1914.Ar pool Ar device Ns ...
1915.Xc
1916Removes the specified device from the pool.
This command currently only supports removing hot spares, cache devices, log
devices, and mirrored top-level vdevs (mirrors of leaf devices); raidz vdevs
cannot be removed.
.Pp
1920Removing a top-level vdev reduces the total amount of space in the storage pool.
1921The specified device will be evacuated by copying all allocated space from it to
1922the other devices in the pool.
1923In this case, the
1924.Nm zpool Cm remove
1925command initiates the removal and returns, while the evacuation continues in
1926the background.
1927The removal progress can be monitored with
.Nm zpool Cm status .
This feature must be enabled to be used; see
.Xr zpool-features 7 .
1931.Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
Non-log devices and data devices that are part of a mirrored configuration
can instead be removed using the
.Nm zpool Cm detach
command.
1938.Bl -tag -width Ds
1939.It Fl n
1940Do not actually perform the removal ("no-op").
1941Instead, print the estimated amount of memory that will be used by the
1942mapping table after the removal completes.
1943This is nonzero only for top-level vdevs.
1946.It Fl p
1947Used in conjunction with the
1948.Fl n
1949flag, displays numbers as parsable (exact) values.
1950.El
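.Pp
For example, the memory cost of removing a hypothetical mirrored top-level
vdev
.Sy mirror-1
from pool
.Em tank
could be estimated, without performing the removal, with:
.Bd -literal
# zpool remove -np tank mirror-1
.Ed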
1951.It Xo
1952.Nm
1953.Cm remove
1954.Fl s
1955.Ar pool
1956.Xc
1957Stops and cancels an in-progress removal of a top-level vdev.
1958.It Xo
1959.Nm
1960.Cm replace
1961.Op Fl f
1962.Ar pool Ar device Op Ar new_device
1963.Xc
1964Replaces
.Ar device
1966with
1967.Ar new_device .
1968This is equivalent to attaching
1969.Ar new_device ,
1970waiting for it to resilver, and then detaching
.Ar device .
1972.Pp
1973The size of
1974.Ar new_device
1975must be greater than or equal to the minimum size of all the devices in a mirror
1976or raidz configuration.
1977.Pp
1978.Ar new_device
1979is required if the pool is not redundant.
1980If
1981.Ar new_device
1982is not specified, it defaults to
.Ar device .
1984This form of replacement is useful after an existing disk has failed and has
1985been physically replaced.
1986In this case, the new disk may have the same
1987.Pa /dev/dsk
1988path as the old device, even though it is actually a different disk.
1989ZFS recognizes this.
1990.Bl -tag -width Ds
1991.It Fl f
1992Forces use of
1993.Ar new_device ,
even if it appears to be in use.
1995Not all devices can be overridden in this manner.
1996.El
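.Pp
For example, after a failed disk in a hypothetical pool
.Em tank
has been physically replaced and keeps the same device path, the
one-argument form could be used:
.Bd -literal
# zpool replace tank c0t0d0
.Ed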
1997.It Xo
1998.Nm
1999.Cm resilver
2000.Ar pool Ns ...
2001.Xc
2002Starts a resilver.
2003If an existing resilver is already running it will be restarted from the
2004beginning.
2005Any drives that were scheduled for a deferred resilver will be added to the
2006new one.
2007This requires the
2008.Sy resilver_defer
2009feature.
2010.It Xo
2011.Nm
2012.Cm scrub
2013.Op Fl s | Fl p
2014.Ar pool Ns ...
2015.Xc
2016Begins a scrub or resumes a paused scrub.
2017The scrub examines all data in the specified pools to verify that it checksums
2018correctly.
2019For replicated
2020.Pq mirror or raidz
2021devices, ZFS automatically repairs any damage discovered during the scrub.
2022The
2023.Nm zpool Cm status
2024command reports the progress of the scrub and summarizes the results of the
2025scrub upon completion.
2026.Pp
2027Scrubbing and resilvering are very similar operations.
2028The difference is that resilvering only examines data that ZFS knows to be out
2029of date
2030.Po
2031for example, when attaching a new device to a mirror or replacing an existing
2032device
2033.Pc ,
2034whereas scrubbing examines all data to discover silent errors due to hardware
2035faults or disk failure.
2036.Pp
2037Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2038one at a time.
2039If a scrub is paused, the
2040.Nm zpool Cm scrub
command resumes it.
2042If a resilver is in progress, ZFS does not allow a scrub to be started until the
2043resilver completes.
2044.Pp
2045Note that, due to changes in pool data on a live system, it is possible for
2046scrubs to progress slightly beyond 100% completion.
2047During this period, no completion time estimate will be provided.
2048.Bl -tag -width Ds
2049.It Fl s
2050Stop scrubbing.
2053.It Fl p
2054Pause scrubbing.
2055Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused scrub,
the scrub remains paused until it is resumed, even after the pool is
imported again.
Once resumed, the scrub picks up from the place where it was last
checkpointed to disk.
2060To resume a paused scrub issue
2061.Nm zpool Cm scrub
2062again.
2063.El
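.Pp
For example, on a hypothetical pool
.Em tank ,
a scrub could be started, paused, and later resumed with:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed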
2064.It Xo
2065.Nm
2066.Cm set
2067.Ar property Ns = Ns Ar value
2068.Ar pool
2069.Xc
2070Sets the given property on the specified pool.
2071See the
2072.Sx Properties
2073section for more information on what properties can be set and acceptable
2074values.
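.Pp
For example, the
.Sy autoexpand
property could be enabled on a hypothetical pool
.Em tank
with:
.Bd -literal
# zpool set autoexpand=on tank
.Ed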
2075.It Xo
2076.Nm
2077.Cm split
2078.Op Fl gLlnP
2079.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2080.Op Fl R Ar root
2081.Ar pool newpool
2082.Xc
2083Splits devices off
2084.Ar pool
2085creating
2086.Ar newpool .
2087All vdevs in
2088.Ar pool
2089must be mirrors.
2090At the time of the split,
2091.Ar newpool
2092will be a replica of
2093.Ar pool .
2094.Bl -tag -width Ds
2095.It Fl g
2096Display vdev GUIDs instead of the normal device names.
2097These GUIDs can be used in place of device names for the zpool
2098detach/offline/remove/replace commands.
2099.It Fl L
2100Display real paths for vdevs resolving all symbolic links.
2101This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
2103path used to open it.
2104.It Fl l
2105Indicates that this command will request encryption keys for all encrypted
2106datasets it attempts to mount as it is bringing the new pool online.
2107Note that if any datasets have a
2108.Sy keylocation
2109of
2110.Sy prompt
2111this command will block waiting for the keys to be entered.
2112Without this flag encrypted datasets will be left unavailable and unmounted
2113until the keys are loaded.
2114.It Fl n
Do a dry run; do not actually perform the split.
2116Print out the expected configuration of
2117.Ar newpool .
2118.It Fl P
2119Display full paths for vdevs instead of only the last component of
2120the path.
2121This can be used in conjunction with the
2122.Fl L
2123flag.
2124.It Fl o Ar property Ns = Ns Ar value
2125Sets the specified property for
2126.Ar newpool .
2127See the
2128.Sx Properties
2129section for more information on the available pool properties.
2130.It Fl R Ar root
2131Set
2132.Sy altroot
2133for
2134.Ar newpool
2135to
2136.Ar root
2137and automatically import it.
2138.El
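.Pp
For example, assuming every vdev in a hypothetical pool
.Em tank
is a two-way mirror, one half of each mirror could be split off into a new
pool named
.Em tank2
with:
.Bd -literal
# zpool split tank tank2
.Ed
.Pp
Without
.Fl R ,
the new pool is left exported and can subsequently be imported with
.Nm zpool Cm import .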
2139.It Xo
2140.Nm
2141.Cm status
2142.Op Fl DigLpPstvx
2143.Op Fl T Sy u Ns | Ns Sy d
2144.Oo Ar pool Oc Ns ...
2145.Op Ar interval Op Ar count
2146.Xc
2147Displays the detailed health status for the given pools.
2148If no
2149.Ar pool
2150is specified, then the status of each pool in the system is displayed.
2151For more information on pool and device health, see the
2152.Sx Device Failure and Recovery
2153section.
2154.Pp
2155If a scrub or resilver is in progress, this command reports the percentage done
2156and the estimated time to completion.
2157Both of these are only approximate, because the amount of data in the pool and
2158the other workloads on the system can change.
2159.Bl -tag -width Ds
2160.It Fl D
2161Display a histogram of deduplication statistics, showing the allocated
2162.Pq physically present on disk
2163and referenced
2164.Pq logically referenced in the pool
2165block counts and sizes by reference count.
2166.It Fl g
2167Display vdev GUIDs instead of the normal device names.
2168These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl i
Display vdev initialization status.
2170.It Fl L
2171Display real paths for vdevs resolving all symbolic links.
2172This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
2174path used to open it.
2175.It Fl p
2176Display numbers in parsable (exact) values.
2177.It Fl P
2178Display full paths for vdevs instead of only the last component of
2179the path.
2180This can be used in conjunction with the
2181.Fl L
2182flag.
2183.It Fl s
Display the number of leaf vdev slow IOs.
2185This is the number of IOs that didn't complete in
2186.Sy zio_slow_io_ms
2187milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, just that they
took an unreasonably long amount of time.
2190This may indicate a problem with the underlying storage.
2191.It Fl t
2192Display vdev TRIM status.
2193.It Fl T Sy u Ns | Ns Sy d
2194Display a time stamp.
2195Specify
2196.Sy u
2197for a printed representation of the internal representation of time.
2198See
2199.Xr time 2 .
2200Specify
2201.Sy d
2202for standard date format.
2203See
2204.Xr date 1 .
2205.It Fl v
2206Displays verbose data error information, printing out a complete list of all
2207data errors since the last complete pool scrub.
2208.It Fl x
2209Only display status for pools that are exhibiting errors or are otherwise
2210unavailable.
2211Warnings about pools not using the latest on-disk format will not be included.
2212.El
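.Pp
For example, the status of only those pools that are exhibiting errors or
are otherwise unavailable could be requested with:
.Bd -literal
# zpool status -x
.Ed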
2213.It Xo
2214.Nm
2215.Cm sync
2216.Oo Ar pool Oc Ns ...
2217.Xc
2218Forces all in-core dirty data to be written to the primary pool storage and
2219not the ZIL.
2220It will also update administrative information including quota reporting.
2221Without arguments,
2222.Nm zpool Cm sync
2223will sync all pools on the system.
2224Otherwise, it will only sync the specified
2225.Ar pool .
2226.It Xo
2227.Nm
2228.Cm trim
2229.Op Fl d
2230.Op Fl r Ar rate
2231.Op Fl c | Fl s
2232.Ar pool
2233.Op Ar device Ns ...
2234.Xc
2235Initiates an immediate on-demand TRIM operation for all of the free space in
2236a pool.
2237This operation informs the underlying storage devices of all blocks
2238in the pool which are no longer allocated and allows thinly provisioned
2239devices to reclaim the space.
2240.Pp
2241A manual on-demand TRIM operation can be initiated irrespective of the
2242.Sy autotrim
2243pool property setting.
2244See the documentation for the
2245.Sy autotrim
2246property above for the types of vdev devices which can be trimmed.
2247.Bl -tag -width Ds
.It Fl d, -secure
2249Causes a secure TRIM to be initiated.
2250When performing a secure TRIM, the device guarantees that data stored on the
2251trimmed blocks has been erased.
2252This requires support from the device and is not supported by all SSDs.
.It Fl r, -rate Ar rate
2254Controls the rate at which the TRIM operation progresses.
2255Without this option TRIM is executed as quickly as possible.
2256The rate, expressed in bytes per second, is applied on a per-vdev basis and
2257may be set differently for each leaf vdev.
2258.It Fl c, -cancel
2259Cancel trimming on the specified devices, or all eligible devices if none
2260are specified.
2261If one or more target devices are invalid or are not currently being
2262trimmed, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
2264Suspend trimming on the specified devices, or all eligible devices if none
2265are specified.
2266If one or more target devices are invalid or are not currently being
2267trimmed, the command will fail and no suspension will occur on any device.
2268Trimming can then be resumed by running
2269.Nm zpool Cm trim
2270with no flags on the relevant target devices.
2271.El
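.Pp
For example, a TRIM of all free space in a hypothetical pool
.Em tank
could be started, and later suspended, with:
.Bd -literal
# zpool trim tank
# zpool trim -s tank
.Ed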
2272.It Xo
2273.Nm
2274.Cm upgrade
2275.Xc
2276Displays pools which do not have all supported features enabled and pools
2277formatted using a legacy ZFS version number.
2278These pools can continue to be used, but some features may not be available.
2279Use
2280.Nm zpool Cm upgrade Fl a
2281to enable all features on all pools.
2282.It Xo
2283.Nm
2284.Cm upgrade
2285.Fl v
2286.Xc
2287Displays legacy ZFS versions supported by the current software.
2288See
2289.Xr zpool-features 7
for a description of the feature flags supported by the current software.
2291.It Xo
2292.Nm
2293.Cm upgrade
2294.Op Fl V Ar version
2295.Fl a Ns | Ns Ar pool Ns ...
2296.Xc
2297Enables all supported features on the given pool.
2298Once this is done, the pool will no longer be accessible on systems that do not
2299support feature flags.
2300See
2301.Xr zpool-features 7
2302for details on compatibility with systems that support feature flags, but do not
2303support all features enabled on the pool.
2304.Bl -tag -width Ds
2305.It Fl a
2306Enables all supported features on all pools.
2307.It Fl V Ar version
2308Upgrade to the specified legacy version.
2309If the
2310.Fl V
2311flag is specified, no features will be enabled on the pool.
2312This option can only be used to increase the version number up to the last
2313supported legacy version number.
2314.El
2315.El
2316.Sh EXIT STATUS
2317The following exit values are returned:
2318.Bl -tag -width Ds
2319.It Sy 0
2320Successful completion.
2321.It Sy 1
2322An error occurred.
2323.It Sy 2
2324Invalid command line options were specified.
2325.El
2326.Sh EXAMPLES
2327.Bl -tag -width Ds
2328.It Sy Example 1 No Creating a RAID-Z Storage Pool
2329The following command creates a pool with a single raidz root vdev that
2330consists of six disks.
2331.Bd -literal
2332# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
2333.Ed
2334.It Sy Example 2 No Creating a Mirrored Storage Pool
2335The following command creates a pool with two mirrors, where each mirror
2336contains two disks.
2337.Bd -literal
2338# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
2339.Ed
2340.It Sy Example 3 No Creating a ZFS Storage Pool by Using Slices
2341The following command creates an unmirrored pool using two disk slices.
2342.Bd -literal
2343# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
2344.Ed
2345.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2346The following command creates an unmirrored pool using files.
2347While not recommended, a pool based on files can be useful for experimental
2348purposes.
2349.Bd -literal
2350# zpool create tank /path/to/file/a /path/to/file/b
2351.Ed
2352.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2353The following command adds two mirrored disks to the pool
2354.Em tank ,
2355assuming the pool is already made up of two-way mirrors.
2356The additional space is immediately available to any datasets within the pool.
2357.Bd -literal
2358# zpool add tank mirror c1t0d0 c1t1d0
2359.Ed
2360.It Sy Example 6 No Listing Available ZFS Storage Pools
2361The following command lists all available pools on the system.
2362In this case, the pool
2363.Em zion
2364is faulted due to a missing device.
2365The results from this command are similar to the following:
2366.Bd -literal
2367# zpool list
2368NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2369rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
2370tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
2371zion       -      -      -      -         -      -      -  FAULTED -
2372.Ed
2373.It Sy Example 7 No Destroying a ZFS Storage Pool
2374The following command destroys the pool
2375.Em tank
2376and any datasets contained within.
2377.Bd -literal
2378# zpool destroy -f tank
2379.Ed
2380.It Sy Example 8 No Exporting a ZFS Storage Pool
2381The following command exports the devices in pool
2382.Em tank
2383so that they can be relocated or later imported.
2384.Bd -literal
2385# zpool export tank
2386.Ed
2387.It Sy Example 9 No Importing a ZFS Storage Pool
2388The following command displays available pools, and then imports the pool
2389.Em tank
2390for use on the system.
2391The results from this command are similar to the following:
2392.Bd -literal
2393# zpool import
2394  pool: tank
2395    id: 15451357997522795478
2396 state: ONLINE
2397action: The pool can be imported using its name or numeric identifier.
2398config:
2399
2400        tank        ONLINE
2401          mirror    ONLINE
2402            c1t2d0  ONLINE
2403            c1t3d0  ONLINE
2404
2405# zpool import tank
2406.Ed
2407.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
2409the software.
2410.Bd -literal
2411# zpool upgrade -a
2412This system is currently running ZFS version 2.
2413.Ed
2414.It Sy Example 11 No Managing Hot Spares
2415The following command creates a new pool with an available hot spare:
2416.Bd -literal
2417# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
2418.Ed
2419.Pp
2420If one of the disks were to fail, the pool would be reduced to the degraded
2421state.
2422The failed device can be replaced using the following command:
2423.Bd -literal
2424# zpool replace tank c0t0d0 c0t3d0
2425.Ed
2426.Pp
2427Once the data has been resilvered, the spare is automatically removed and is
2428made available for use should another device fail.
2429The hot spare can be permanently removed from the pool using the following
2430command:
2431.Bd -literal
2432# zpool remove tank c0t2d0
2433.Ed
2434.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
2436mirrors and mirrored log devices:
2437.Bd -literal
2438# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
2439  c4d0 c5d0
2440.Ed
2441.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2442The following command adds two disks for use as cache devices to a ZFS storage
2443pool:
2444.Bd -literal
2445# zpool add pool cache c2d0 c3d0
2446.Ed
2447.Pp
2448Once added, the cache devices gradually fill with content from main memory.
2449Depending on the size of your cache devices, it could take over an hour for
2450them to fill.
2451Capacity and reads can be monitored using the
2452.Cm iostat
2453option as follows:
2454.Bd -literal
2455# zpool iostat -v pool 5
2456.Ed
2457.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2458The following commands remove the mirrored log device
2459.Sy mirror-2
2460and mirrored top-level data device
2461.Sy mirror-1 .
2462.Pp
2463Given this configuration:
2464.Bd -literal
2465  pool: tank
2466 state: ONLINE
2467 scrub: none requested
2468config:
2469
2470         NAME        STATE     READ WRITE CKSUM
2471         tank        ONLINE       0     0     0
2472           mirror-0  ONLINE       0     0     0
2473             c6t0d0  ONLINE       0     0     0
2474             c6t1d0  ONLINE       0     0     0
2475           mirror-1  ONLINE       0     0     0
2476             c6t2d0  ONLINE       0     0     0
2477             c6t3d0  ONLINE       0     0     0
2478         logs
2479           mirror-2  ONLINE       0     0     0
2480             c4t0d0  ONLINE       0     0     0
2481             c4t1d0  ONLINE       0     0     0
2482.Ed
2483.Pp
2484The command to remove the mirrored log
2485.Sy mirror-2
2486is:
2487.Bd -literal
2488# zpool remove tank mirror-2
2489.Ed
2490.Pp
2491The command to remove the mirrored data
2492.Sy mirror-1
2493is:
2494.Bd -literal
2495# zpool remove tank mirror-1
2496.Ed
2497.It Sy Example 15 No Displaying expanded space on a device
2498The following command displays the detailed information for the pool
2499.Em data .
This pool is composed of a single raidz vdev where one of its devices
2501increased its capacity by 10GB.
2502In this example, the pool will not be able to utilize this extra capacity until
2503all the devices under the raidz vdev have been expanded.
2504.Bd -literal
2505# zpool list -v data
2506NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2507data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
2508  raidz1    23.9G  14.6G  9.30G    48%         -
2509    c1t1d0      -      -      -      -         -
2510    c1t2d0      -      -      -      -       10G
2511    c1t3d0      -      -      -      -         -
2512.Ed
2513.El
2514.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2516.It Ev ZPOOL_VDEV_NAME_GUID
2517Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
2519This behavior is identical to the
2520.Nm zpool status -g
2521command line option.
2524.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2525Cause
2526.Nm zpool
2527subcommands to follow links for vdev names by default.
2528This behavior is identical to the
2529.Nm zpool status -L
2530command line option.
2533.It Ev ZPOOL_VDEV_NAME_PATH
2534Cause
2535.Nm zpool
2536subcommands to output full vdev path names by default.
2537This behavior is identical to the
2538.Nm zpool status -P
2539command line option.
2540.El
2541.Sh INTERFACE STABILITY
2542.Sy Evolving
2543.Sh SEE ALSO
2544.Xr attributes 7 ,
2545.Xr zpool-features 7 ,
2546.Xr zfs 8