1.\"
2.\" CDDL HEADER START
3.\"
4.\" The contents of this file are subject to the terms of the
5.\" Common Development and Distribution License (the "License").
6.\" You may not use this file except in compliance with the License.
7.\"
8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9.\" or http://www.opensolaris.org/os/licensing.
10.\" See the License for the specific language governing permissions
11.\" and limitations under the License.
12.\"
13.\" When distributing Covered Code, include this CDDL HEADER in each
14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15.\" If applicable, add the following below this CDDL HEADER, with the
16.\" fields enclosed by brackets "[]" replaced with your own identifying
17.\" information: Portions Copyright [yyyy] [name of copyright owner]
18.\"
19.\" CDDL HEADER END
20.\"
21.\"
22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
24.\" Copyright 2017 Nexenta Systems, Inc.
25.\" Copyright (c) 2017 Datto Inc.
26.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27.\" Copyright 2021 Joyent, Inc.
28.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
29.\" Copyright 2024 OmniOS Community Edition (OmniOSce) Association.
30.\"
31.Dd May 8, 2024
32.Dt ZPOOL 8
33.Os
34.Sh NAME
35.Nm zpool
36.Nd configure ZFS storage pools
37.Sh SYNOPSIS
38.Nm
39.Fl \&?
40.Nm
41.Cm add
42.Op Fl fgLnP
43.Oo Fl o Ar property Ns = Ns Ar value Oc
44.Ar pool vdev Ns ...
45.Nm
46.Cm attach
47.Op Fl f
48.Oo Fl o Ar property Ns = Ns Ar value Oc
49.Ar pool device new_device
50.Nm
51.Cm checkpoint
52.Op Fl d, -discard
53.Ar pool
54.Nm
55.Cm clear
56.Ar pool
57.Op Ar device
58.Nm
59.Cm create
60.Op Fl dfn
61.Op Fl B
62.Op Fl m Ar mountpoint
63.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
64.Oo Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value Oc Ns ...
65.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
66.Op Fl R Ar root
67.Op Fl t Ar tempname
68.Ar pool vdev Ns ...
69.Nm
70.Cm destroy
71.Op Fl f
72.Ar pool
73.Nm
74.Cm detach
75.Ar pool device
76.Nm
77.Cm export
78.Op Fl f
79.Ar pool Ns ...
80.Nm
81.Cm get
82.Op Fl Hp
83.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
84.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
85.Ar pool Ns ...
86.Nm
87.Cm history
88.Op Fl il
89.Oo Ar pool Oc Ns ...
90.Nm
91.Cm import
92.Op Fl D
93.Op Fl d Ar dir
94.Nm
95.Cm import
96.Fl a
97.Op Fl DflmN
98.Op Fl F Op Fl n
99.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
100.Op Fl o Ar mntopts
101.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
102.Op Fl R Ar root
103.Nm
104.Cm import
105.Op Fl Dfmt
106.Op Fl F Op Fl n
107.Op Fl -rewind-to-checkpoint
108.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
109.Op Fl o Ar mntopts
110.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
111.Op Fl R Ar root
112.Ar pool Ns | Ns Ar id
113.Op Ar newpool
114.Nm
115.Cm initialize
116.Op Fl c | Fl s
117.Ar pool
118.Op Ar device Ns ...
119.Nm
120.Cm iostat
121.Op Oo Fl lq Oc | Ns Fl rw
122.Op Fl T Sy u Ns | Ns Sy d
123.Op Fl ghHLnpPvy
124.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
125.Op Ar interval Op Ar count
126.Nm
127.Cm labelclear
128.Op Fl f
129.Ar device
130.Nm
131.Cm list
132.Op Fl HgLpPv
133.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
134.Op Fl T Sy u Ns | Ns Sy d
135.Oo Ar pool Oc Ns ...
136.Op Ar interval Op Ar count
137.Nm
138.Cm offline
139.Op Fl t
140.Ar pool Ar device Ns ...
141.Nm
142.Cm online
143.Op Fl e
144.Ar pool Ar device Ns ...
145.Nm
146.Cm reguid
147.Ar pool
148.Nm
149.Cm reopen
150.Ar pool
151.Nm
152.Cm remove
153.Op Fl np
154.Ar pool Ar device Ns ...
155.Nm
156.Cm remove
157.Fl s
158.Ar pool
159.Nm
160.Cm replace
161.Op Fl f
162.Ar pool Ar device Op Ar new_device
163.Nm
164.Cm resilver
165.Ar pool Ns ...
166.Nm
167.Cm scrub
168.Op Fl s | Fl p
169.Ar pool Ns ...
170.Nm
171.Cm trim
172.Op Fl d
173.Op Fl r Ar rate
174.Op Fl c | Fl s
175.Ar pool
176.Op Ar device Ns ...
177.Nm
178.Cm set
179.Ar property Ns = Ns Ar value
180.Ar pool
181.Nm
182.Cm split
183.Op Fl gLlnP
184.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
185.Op Fl R Ar root
186.Ar pool newpool
187.Nm
188.Cm status
189.Op Fl DigLpPstvx
190.Op Fl T Sy u Ns | Ns Sy d
191.Oo Ar pool Oc Ns ...
192.Op Ar interval Op Ar count
193.Nm
194.Cm sync
195.Oo Ar pool Oc Ns ...
196.Nm
197.Cm upgrade
198.Nm
199.Cm upgrade
200.Fl v
201.Nm
202.Cm upgrade
203.Op Fl V Ar version
204.Fl a Ns | Ns Ar pool Ns ...
205.Sh DESCRIPTION
206The
207.Nm
208command configures ZFS storage pools.
209A storage pool is a collection of devices that provides physical storage and
210data replication for ZFS datasets.
211All datasets within a storage pool share the same space.
212See
213.Xr zfs 8
214for information on managing datasets.
215.Ss Virtual Devices (vdevs)
216A "virtual device" describes a single device or a collection of devices
217organized according to certain performance and fault characteristics.
218The following virtual devices are supported:
219.Bl -tag -width Ds
220.It Sy disk
221A block device, typically located under
222.Pa /dev/dsk .
223ZFS can use individual slices or partitions, though the recommended mode of
224operation is to use whole disks.
225A disk can be specified by a full path, or it can be a shorthand name
226.Po the relative portion of the path under
227.Pa /dev/dsk
228.Pc .
229A whole disk can be specified by omitting the slice or partition designation.
230For example,
231.Pa c0t0d0
232is equivalent to
233.Pa /dev/dsk/c0t0d0s2 .
234When given a whole disk, ZFS automatically labels the disk, if necessary.
235.It Sy file
236A regular file.
237The use of files as a backing store is strongly discouraged.
238It is designed primarily for experimental purposes, as the fault tolerance of a
239file is only as good as the file system of which it is a part.
240A file must be specified by a full path.
241.It Sy mirror
242A mirror of two or more devices.
243Data is replicated in an identical fashion across all components of a mirror.
244A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
245failing before data integrity is compromised.
246.It Sy raidz , raidz1 , raidz2 , raidz3
247A variation on RAID-5 that allows for better distribution of parity and
248eliminates the RAID-5
249.Qq write hole
250.Pq in which data and parity become inconsistent after a power loss .
251Data and parity is striped across all disks within a raidz group.
252.Pp
253A raidz group can have single-, double-, or triple-parity, meaning that the
254raidz group can sustain one, two, or three failures, respectively, without
255losing any data.
256The
257.Sy raidz1
258vdev type specifies a single-parity raidz group; the
259.Sy raidz2
260vdev type specifies a double-parity raidz group; and the
261.Sy raidz3
262vdev type specifies a triple-parity raidz group.
263The
264.Sy raidz
265vdev type is an alias for
266.Sy raidz1 .
267.Pp
268A raidz group with N disks of size X with P parity disks can hold approximately
269(N-P)*X bytes and can withstand P device(s) failing before data integrity is
270compromised.
271The minimum number of devices in a raidz group is one more than the number of
272parity disks.
273The recommended number is between 3 and 9 to help increase performance.
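.Pp
For example, the following would create a double-parity raidz group
spanning five disks
.Pq the device names are illustrative :
.Bd -literal
# zpool create pool raidz2 c0d0 c1d0 c2d0 c3d0 c4d0
.Ed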
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool.
For more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
devices.
Log devices can be mirrored.
However, raidz vdev types are not supported for the intent log.
For more information, see the
.Sx Intent Log
section.
.It Sy dedup
A device dedicated solely for allocating dedup data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one dedup device is specified, then allocations are load-balanced
between devices.
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file data.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one special device is specified, then allocations are
load-balanced between devices.
.Pp
For more information on special allocations, see the
.Sx Special Allocation Class
section.
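.Pp
For example, a pool with a mirrored special class, matching the
redundancy of the normal class, might be created as follows
.Pq device names are illustrative :
.Bd -literal
# zpool create pool mirror c0d0 c1d0 special mirror c2d0 c3d0
.Ed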
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path since the path was never
correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
If there is more than one spare that could be used as a replacement then they
are tried in order of increasing capacity so that the smallest available spare
that can replace the failed device is used.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
Shared spares add some risk.
If the pools are imported on different hosts, and both pools suffer a device
failure at the same time, both could attempt to use the spare at the same time.
This may not be detected, resulting in data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
.Pp
Spares cannot replace log devices.
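.Pp
A spare can also be added to an existing pool with the
.Nm zpool Cm add
command; for example:
.Bd -literal
# zpool add pool spare c4d0
.Ed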
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 3C
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool c0d0 c1d0 log c2d0
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool.
Mirrored devices can be removed by specifying the top-level mirror vdev.
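.Pp
For example, a log device could be added to an existing pool with:
.Bd -literal
# zpool add pool log c3d0
.Ed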
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media.
Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
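.Pp
Cache devices can likewise be added to an existing pool; for example:
.Bd -literal
# zpool add pool cache c4d0
.Ed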
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions
.Pq e.g. Nm zfs Cm destroy ,
an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
The checkpoint is automatically discarded upon rewinding.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed:
specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's guid.
Adding a new vdev is supported, but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind to its checkpointed state (which also discards the checkpoint),
you need to first export it and then rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool without rewinding:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Sy reservation
or
.Sy refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any dedup data.
The class can also be provisioned to accept a limited percentage of small file
data blocks.
.Pp
A pool must always have at least one general (non-special) vdev before
other devices can be assigned to the special class.
If the special class becomes full, then allocations intended for it will spill
back into the normal class.
.Pp
Dedup data can be excluded from the special class by setting the
.Sy zfs_ddt_data_is_special
zfs kernel variable to false (0).
.Pp
Inclusion of small file blocks in the special class is opt-in.
Each dataset can control the size of small file blocks allowed in the special
class by setting the
.Sy special_small_blocks
dataset property.
It defaults to zero, so you must opt in by setting it to a non-zero value.
See
.Xr zfs 8
for more info on setting this property.
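.Pp
For example, assuming a dataset named pool/fs, small file blocks of up
to 32K could be admitted to the special class with:
.Bd -literal
# zfs set special_small_blocks=32K pool/fs
.Ed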
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy allocated
Amount of storage space used within the pool.
.It Sy bootsize
The size of the system boot partition.
This property can only be set at pool creation time and is read-only once the
pool is created.
Setting this property implies using the
.Fl B
option.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift
).
Values from 9 to 16, inclusive, are valid; also, the value 0 (the default)
means to auto-detect using the kernel's block layer and a ZFS internal
exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk) write size will be set to the specified size,
so this represents a space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than or equal to
the sector size of the underlying disks.
The typical case for setting this property is when performance is important and
the underlying disks use 4KiB sectors but report 512B sectors to the OS (for
compatibility reasons); in that case, set
.Sy ashift=12
(which is 1<<12 = 4096).
When set, this property is used as the default hint value in subsequent vdev
operations (add, attach and replace).
Changing this value will not modify any existing vdev, not even on disk
replacement; however, it can be used, for instance, to replace a dying
512B-sector disk with a newer 4KiB-sector device: this will probably result in
bad performance but at the same time could prevent loss of data.
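.Pp
For example, a pool whose disks use 4KiB physical sectors could be
created with:
.Bd -literal
# zpool create -o ashift=12 pool c0d0 c1d0
.Ed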
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
.It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file is removed.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on ,
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support BLKDISCARD, such as SSDs, or
file vdevs on which the underlying file system supports hole-punching, to
reclaim unused blocks.
The default setting for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay, allowing smaller ranges to be
aggregated into a few larger ones.
These can then be issued more efficiently to the storage.
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
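.Pp
For example, automatic trimming can be enabled on an existing pool, or
an on-demand TRIM started, with:
.Bd -literal
# zpool set autotrim=on pool
# zpool trim pool
.Ed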
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 7
man page.
In order to enable this property, each host must set a unique hostid.
The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl \&?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names.
These GUIDs can be used in place of
device names for the zpool detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links.
This can be used to look up the current block
device name regardless of the /dev/dsk/ path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual addition can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display real paths for
.Ar vdev Ns s
instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
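.Pp
For example, the following would show the configuration that would
result from adding a new mirror to a pool, without making any changes:
.Bd -literal
# zpool add -n pool mirror c2d0 c3d0
.Ed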
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
998.El
999.It Xo
1000.Nm
1001.Cm checkpoint
1002.Op Fl d, -discard
1003.Ar pool
1004.Xc
1005Checkpoints the current state of
1006.Ar pool
1007, which can be later restored by
1008.Nm zpool Cm import --rewind-to-checkpoint .
1009Rewinding will also discard the checkpoint.
1010The existence of a checkpoint in a pool prohibits the following
1011.Nm zpool
1012commands:
1013.Cm remove ,
1014.Cm attach ,
1015.Cm detach ,
1016.Cm split ,
1017and
1018.Cm reguid .
1019In addition, it may break reservation boundaries if the pool lacks free
1020space.
1021The
1022.Nm zpool Cm status
1023command indicates the existence of a checkpoint or the progress of discarding a
1024checkpoint from a pool.
1025The
1026.Nm zpool Cm list
1027command reports how much space the checkpoint takes from the pool.
1028.Bl -tag -width Ds
1029.It Fl d, -discard
1030Discards an existing checkpoint from
1031.Ar pool
1032without rewinding.
1033.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
If multihost is enabled, and the pool has been suspended, this will not
resume I/O.
While the pool was suspended, it may have been imported on
another host, and resuming I/O could result in pool damage.
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl B
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tempname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy - ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command attempts to verify that each device specified is accessible and not
currently in use by another subsystem.
However, this check is not robust enough
to detect simultaneous attempts to use a new device in different pools, even if
.Sy multihost
is
.Sy enabled .
The
administrator must ensure that simultaneous invocations of any combination of
.Sy zpool replace ,
.Sy zpool create ,
.Sy zpool add ,
or
.Sy zpool labelclear
do not refer to the same device.
Using the same device in two pools will
result in pool corruption.
.Pp
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl B
Creates a whole-disk pool with an EFI System Partition to support booting the
system with UEFI firmware.
The default size is 256MB.
To create a boot partition with a custom size, set the
.Sy bootsize
property with the
.Fl o
option.
See the
.Sx Properties
section for details.
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 7
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 7
for a list of valid features that can be set.
.Pp
.Ar value
can either be
.Sy disabled
or
.Sy enabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tempname
Sets the in-core pool name to
.Ar tempname
while the on-disk name will be the name specified as the pool name
.Ar pool .
This will set the default cachefile property to
.Sy none .
This is intended to handle name space collisions when creating pools
for other systems, such as virtual machines or physical machines
whose pools live on network block devices.
.El
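.Pp
For example, the following would create a mirrored pool with an
explicit mount point
.Pq names are illustrative :
.Bd -literal
# zpool create -m /export/pool pool mirror c0d0 c1d0
.Ed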
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
.It Xo
.Nm
.Cm export
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just slices, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
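.Pp
For example, to retrieve the capacity and health of a pool in scripted
mode:
.Bd -literal
# zpool get -H -o name,value capacity,health pool
.Ed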
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which, in addition to the standard format,
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev/dsk .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool, including the name of the pool, a numeric identifier, and
the vdev layout and current health of each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag encrypted datasets will be left unavailable until the keys are
loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.El
.It Xo
.Nm
.Cm import
.Op Fl Dfmt
.Op Fl F Op Fl n
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that the zpool command will request encryption keys for all
encrypted datasets it attempts to mount as it is bringing the pool
online.
This is equivalent to running
.Nm zfs Cm mount
on each encrypted dataset immediately after the pool is imported.
If any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the key to be entered.
Otherwise, encrypted datasets will be left unavailable until the keys are
loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl t
Used with
.Ar newpool .
Specifies that
.Ar newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set the
.Sy cachefile
property to
.Sy none
when not explicitly specified.
.It Fl -rewind-to-checkpoint
Rewinds the pool to the checkpointed state.
Once the pool is imported with this flag there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see how the pool would look if they were
to fully rewind.
.El
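.Pp
For example, a pool could be imported under a temporary name, leaving
its on-disk name unchanged
.Pq the temporary name is illustrative :
.Bd -literal
# zpool import -t pool tmppool
.Ed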
.It Xo
.Nm
.Cm initialize
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Xc
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
Only leaf data or log devices may be initialized.
.Bl -tag -width Ds
.It Fl c, -cancel
Cancel initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
Suspend initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no suspension will occur on any device.
Initializing can then be resumed by running
.Nm zpool Cm initialize
with no flags on the relevant target devices.
.El
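.Pp
For example, to start initializing every eligible device in a pool, and
later suspend it:
.Bd -literal
# zpool initialize pool
# zpool initialize -s pool
.Ed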
.It Xo
.Nm
.Cm iostat
.Op Oo Fl lq Oc | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools/vdevs.
Physical I/Os may be observed via
.Xr iostat 8 .
If writes are located nearby, they may be merged into a single larger operation.
Additional I/O may be generated depending on the level of vdev redundancy.
To filter output, you may pass in a list of pools, a pool and list of vdevs
in that pool, or a list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If the
.Fl n
flag is specified the headers are displayed only once, otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot regardless of
whether
.Ar interval
and
.Ar count
are passed.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl i
Display vdev initialization status.
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl L
Display real paths for vdevs resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
path used to open it.
.It Fl n
Print headers only once when passed.
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of
the path.
This can be used in conjunction with the
.Fl L
flag.
1710.It Fl r
Print request size histograms for the leaf vdev's IO.
1712This includes histograms of individual IOs (ind) and aggregate IOs (agg).
1713These stats can be useful for observing how well IO aggregation is working.
1714Note that TRIM IOs may exceed 16M, but will be counted as 16M.
1715.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in
addition to the pool-wide statistics.
1718.It Fl y
1719Omit statistics since boot.
1720Normally the first line of output reports the statistics since boot.
1721This option suppresses that first line of output.
1723.It Fl w
1724Display latency histograms:
1725.Pp
1726.Ar total_wait :
1727Total IO time (queuing + disk IO time).
1728.Ar disk_wait :
1729Disk IO time (time reading/writing the disk).
1730.Ar syncq_wait :
1731Amount of time IO spent in synchronous priority queues.
1732Does not include disk time.
1733.Ar asyncq_wait :
1734Amount of time IO spent in asynchronous priority queues.
1735Does not include disk time.
1736.Ar scrub :
1737Amount of time IO spent in scrub queue.
1738Does not include disk time.
1739.It Fl l
1740Include average latency statistics:
1741.Pp
1742.Ar total_wait :
1743Average total IO time (queuing + disk IO time).
1744.Ar disk_wait :
1745Average disk IO time (time reading/writing the disk).
1746.Ar syncq_wait :
1747Average amount of time IO spent in synchronous priority queues.
1748Does not include disk time.
1749.Ar asyncq_wait :
1750Average amount of time IO spent in asynchronous priority queues.
1751Does not include disk time.
1752.Ar scrub :
1753Average queuing time in scrub queue.
1754Does not include disk time.
1755.Ar trim :
1756Average queuing time in trim queue.
1757Does not include disk time.
1758.It Fl q
1759Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
1765Pending IOs are waiting to be issued to the disk, and active IOs have been
1766issued to disk and are waiting for completion.
1767These stats are broken out by priority queue:
1768.Pp
1769.Ar syncq_read/write :
1770Current number of entries in synchronous priority
1771queues.
1772.Ar asyncq_read/write :
1773Current number of entries in asynchronous priority queues.
1774.Ar scrubq_read :
1775Current number of entries in scrub queue.
1776.Ar trimq_write :
1777Current number of entries in trim queue.
1778.Pp
1779All queue statistics are instantaneous measurements of the number of
1780entries in the queues.
1781If you specify an interval, the measurements will be sampled from the end of
1782the interval.
1783.El
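.Pp
For example, the following command prints per-vdev statistics,
including average latency and queue depths, every 5 seconds until
interrupted
.Pq the pool name is illustrative :
.Bd -literal
# zpool iostat -v -lq tank 5
.Ed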
1784.It Xo
1785.Nm
1786.Cm labelclear
1787.Op Fl f
1788.Ar device
1789.Xc
1790Removes ZFS label information from the specified
1791.Ar device .
1792The
1793.Ar device
1794must not be part of an active pool configuration.
1795.Bl -tag -width Ds
1796.It Fl f
1797Treat exported or foreign devices as inactive.
1798.El
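.Pp
For example, the following command clears the label from a device that
was part of an exported pool
.Pq the device name is illustrative :
.Bd -literal
# zpool labelclear -f c0t3d0
.Ed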
1799.It Xo
1800.Nm
1801.Cm list
1802.Op Fl HgLpPv
1803.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1804.Op Fl T Sy u Ns | Ns Sy d
1805.Oo Ar pool Oc Ns ...
1806.Op Ar interval Op Ar count
1807.Xc
1808Lists the given pools along with a health status and space usage.
1809If no
1810.Ar pool Ns s
1811are specified, all pools in the system are listed.
1812When given an
1813.Ar interval ,
1814the information is printed every
1815.Ar interval
1816seconds until ^C is pressed.
1817If
1818.Ar count
1819is specified, the command exits after
1820.Ar count
1821reports are printed.
1822.Bl -tag -width Ds
1823.It Fl g
1824Display vdev GUIDs instead of the normal device names.
1825These GUIDs can be used in place of device names for the zpool
1826detach/offline/remove/replace commands.
1827.It Fl H
1828Scripted mode.
1829Do not display headers, and separate fields by a single tab instead of arbitrary
1830space.
1831.It Fl o Ar property
1832Comma-separated list of properties to display.
1833See the
1834.Sx Properties
1835section for a list of valid properties.
1836The default list is
.Cm name , size , allocated , free , checkpoint , expandsize ,
.Cm fragmentation , capacity , dedupratio , health , altroot .
1839.It Fl L
1840Display real paths for vdevs resolving all symbolic links.
1841This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
path used to open it.
1843.It Fl p
1844Display numbers in parsable
1845.Pq exact
1846values.
1847.It Fl P
1848Display full paths for vdevs instead of only the last component of
1849the path.
1850This can be used in conjunction with the
1851.Fl L
1852flag.
1853.It Fl T Sy u Ns | Ns Sy d
1854Display a time stamp.
1855Specify
1856.Sy u
1857for a printed representation of the internal representation of time.
1858See
1859.Xr time 2 .
1860Specify
1861.Sy d
1862for standard date format.
1863See
1864.Xr date 1 .
1865.It Fl v
1866Verbose statistics.
1867Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1869.El
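.Pp
For example, the following command prints the name, size, and health of
every pool in a script-friendly form:
.Bd -literal
# zpool list -H -o name,size,health
.Ed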
1870.It Xo
1871.Nm
1872.Cm offline
1873.Op Fl t
1874.Ar pool Ar device Ns ...
1875.Xc
1876Takes the specified physical device offline.
1877While the
1878.Ar device
1879is offline, no attempt is made to read or write to the device.
1880This command is not applicable to spares.
1881.Bl -tag -width Ds
1882.It Fl t
1883Temporary.
1884Upon reboot, the specified physical device reverts to its previous state.
1885.El
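.Pp
For example, the following command takes a device offline until the
next reboot
.Pq the pool and device names are illustrative :
.Bd -literal
# zpool offline -t tank c0t2d0
.Ed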
1886.It Xo
1887.Nm
1888.Cm online
1889.Op Fl e
1890.Ar pool Ar device Ns ...
1891.Xc
1892Brings the specified physical device online.
1893This command is not applicable to spares.
1894.Bl -tag -width Ds
1895.It Fl e
1896Expand the device to use all available space.
1897If the device is part of a mirror or raidz then all devices must be expanded
1898before the new space will become available to the pool.
1899.El
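.Pp
For example, the following command brings a device back online and
expands it to use all available space
.Pq the pool and device names are illustrative :
.Bd -literal
# zpool online -e tank c0t2d0
.Ed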
1900.It Xo
1901.Nm
1902.Cm reguid
1903.Ar pool
1904.Xc
1905Generates a new unique identifier for the pool.
1906You must ensure that all devices in this pool are online and healthy before
1907performing this action.
1908.It Xo
1909.Nm
1910.Cm reopen
1911.Ar pool
1912.Xc
1913Reopen all the vdevs associated with the pool.
1914.It Xo
1915.Nm
1916.Cm remove
1917.Op Fl np
1918.Ar pool Ar device Ns ...
1919.Xc
1920Removes the specified device from the pool.
This command currently only supports removing hot spares, cache
devices, log devices, and mirrored top-level vdevs
.Pq mirrors of leaf devices ;
raidz vdevs cannot be removed.
.Pp
1924Removing a top-level vdev reduces the total amount of space in the storage pool.
1925The specified device will be evacuated by copying all allocated space from it to
1926the other devices in the pool.
1927In this case, the
1928.Nm zpool Cm remove
1929command initiates the removal and returns, while the evacuation continues in
1930the background.
1931The removal progress can be monitored with
.Nm zpool Cm status .
This feature must be enabled to be used; see
.Xr zpool-features 7 .
.Pp
A mirrored top-level device
.Pq log or data
can be removed by specifying the top-level mirror itself.
Non-log devices or data devices that are part of a mirrored
configuration can be removed using the
.Nm zpool Cm detach
command.
1942.Bl -tag -width Ds
1943.It Fl n
1944Do not actually perform the removal ("no-op").
1945Instead, print the estimated amount of memory that will be used by the
1946mapping table after the removal completes.
1947This is nonzero only for top-level vdevs.
1950.It Fl p
1951Used in conjunction with the
1952.Fl n
1953flag, displays numbers as parsable (exact) values.
1954.El
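.Pp
For example, the following command estimates, without performing the
removal, the memory that the mapping table would use after removing a
mirrored top-level vdev
.Pq the pool and vdev names are illustrative :
.Bd -literal
# zpool remove -np tank mirror-1
.Ed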
1955.It Xo
1956.Nm
1957.Cm remove
1958.Fl s
1959.Ar pool
1960.Xc
1961Stops and cancels an in-progress removal of a top-level vdev.
1962.It Xo
1963.Nm
1964.Cm replace
1965.Op Fl f
.Ar pool Ar old_device Op Ar new_device
1967.Xc
1968Replaces
1969.Ar old_device
1970with
1971.Ar new_device .
1972This is equivalent to attaching
1973.Ar new_device ,
1974waiting for it to resilver, and then detaching
1975.Ar old_device .
1976.Pp
1977The size of
1978.Ar new_device
1979must be greater than or equal to the minimum size of all the devices in a mirror
1980or raidz configuration.
1981.Pp
1982.Ar new_device
1983is required if the pool is not redundant.
1984If
1985.Ar new_device
1986is not specified, it defaults to
1987.Ar old_device .
1988This form of replacement is useful after an existing disk has failed and has
1989been physically replaced.
1990In this case, the new disk may have the same
1991.Pa /dev/dsk
1992path as the old device, even though it is actually a different disk.
1993ZFS recognizes this.
1994.Bl -tag -width Ds
1995.It Fl f
1996Forces use of
1997.Ar new_device ,
even if it appears to be in use.
1999Not all devices can be overridden in this manner.
2000.El
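.Pp
For example, the following command replaces a disk in place after it
has been physically swapped, relying on
.Ar new_device
defaulting to
.Ar old_device
.Pq the pool and device names are illustrative :
.Bd -literal
# zpool replace tank c0t0d0
.Ed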
2001.It Xo
2002.Nm
2003.Cm resilver
2004.Ar pool Ns ...
2005.Xc
2006Starts a resilver.
If an existing resilver is already running, it will be restarted from
the beginning.
2009Any drives that were scheduled for a deferred resilver will be added to the
2010new one.
2011This requires the
2012.Sy resilver_defer
2013feature.
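.Pp
For example, the following command restarts any in-progress resilver on
the pool
.Em tank
from the beginning
.Pq the pool name is illustrative :
.Bd -literal
# zpool resilver tank
.Ed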
2014.It Xo
2015.Nm
2016.Cm scrub
2017.Op Fl s | Fl p
2018.Ar pool Ns ...
2019.Xc
2020Begins a scrub or resumes a paused scrub.
2021The scrub examines all data in the specified pools to verify that it checksums
2022correctly.
2023For replicated
2024.Pq mirror or raidz
2025devices, ZFS automatically repairs any damage discovered during the scrub.
2026The
2027.Nm zpool Cm status
2028command reports the progress of the scrub and summarizes the results of the
2029scrub upon completion.
2030.Pp
2031Scrubbing and resilvering are very similar operations.
2032The difference is that resilvering only examines data that ZFS knows to be out
2033of date
2034.Po
2035for example, when attaching a new device to a mirror or replacing an existing
2036device
2037.Pc ,
2038whereas scrubbing examines all data to discover silent errors due to hardware
2039faults or disk failure.
2040.Pp
2041Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2042one at a time.
2043If a scrub is paused, the
2044.Nm zpool Cm scrub
command resumes it.
2046If a resilver is in progress, ZFS does not allow a scrub to be started until the
2047resilver completes.
2048.Pp
2049Note that, due to changes in pool data on a live system, it is possible for
2050scrubs to progress slightly beyond 100% completion.
2051During this period, no completion time estimate will be provided.
2052.Bl -tag -width Ds
2053.It Fl s
2054Stop scrubbing.
2057.It Fl p
2058Pause scrubbing.
2059Scrub pause state and progress are periodically synced to disk.
If the system is restarted or the pool is exported during a paused
scrub, the scrub remains paused until it is resumed, even after a
subsequent import.
Once resumed, the scrub picks up from the place where it was last
checkpointed to disk.
To resume a paused scrub, issue
.Nm zpool Cm scrub
again.
2067.El
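.Pp
For example, the following commands start, pause, and then resume a
scrub of the pool
.Em tank
.Pq the pool name is illustrative :
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed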
2068.It Xo
2069.Nm
2070.Cm set
2071.Ar property Ns = Ns Ar value
2072.Ar pool
2073.Xc
2074Sets the given property on the specified pool.
2075See the
2076.Sx Properties
2077section for more information on what properties can be set and acceptable
2078values.
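.Pp
For example, the following command enables the
.Sy autoexpand
property on the pool
.Em tank
.Pq the pool name is illustrative :
.Bd -literal
# zpool set autoexpand=on tank
.Ed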
2079.It Xo
2080.Nm
2081.Cm split
2082.Op Fl gLlnP
2083.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2084.Op Fl R Ar root
2085.Ar pool newpool
2086.Xc
2087Splits devices off
.Ar pool ,
2089creating
2090.Ar newpool .
2091All vdevs in
2092.Ar pool
2093must be mirrors.
2094At the time of the split,
2095.Ar newpool
2096will be a replica of
2097.Ar pool .
2098.Bl -tag -width Ds
2099.It Fl g
2100Display vdev GUIDs instead of the normal device names.
2101These GUIDs can be used in place of device names for the zpool
2102detach/offline/remove/replace commands.
2103.It Fl L
2104Display real paths for vdevs resolving all symbolic links.
2105This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
2107path used to open it.
2108.It Fl l
2109Indicates that this command will request encryption keys for all encrypted
2110datasets it attempts to mount as it is bringing the new pool online.
2111Note that if any datasets have a
2112.Sy keylocation
2113of
2114.Sy prompt
2115this command will block waiting for the keys to be entered.
2116Without this flag encrypted datasets will be left unavailable and unmounted
2117until the keys are loaded.
2118.It Fl n
Perform a dry run; do not actually perform the split.
2120Print out the expected configuration of
2121.Ar newpool .
2122.It Fl P
2123Display full paths for vdevs instead of only the last component of
2124the path.
2125This can be used in conjunction with the
2126.Fl L
2127flag.
2128.It Fl o Ar property Ns = Ns Ar value
2129Sets the specified property for
2130.Ar newpool .
2131See the
2132.Sx Properties
2133section for more information on the available pool properties.
2134.It Fl R Ar root
2135Set
2136.Sy altroot
2137for
2138.Ar newpool
2139to
2140.Ar root
2141and automatically import it.
2142.El
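.Pp
For example, the following commands preview and then perform a split of
the mirrored pool
.Em tank
into a new pool
.Em tank2
.Pq the pool names are illustrative :
.Bd -literal
# zpool split -n tank tank2
# zpool split tank tank2
.Ed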
2143.It Xo
2144.Nm
2145.Cm status
2146.Op Fl DigLpPstvx
2147.Op Fl T Sy u Ns | Ns Sy d
2148.Oo Ar pool Oc Ns ...
2149.Op Ar interval Op Ar count
2150.Xc
2151Displays the detailed health status for the given pools.
2152If no
2153.Ar pool
2154is specified, then the status of each pool in the system is displayed.
2155For more information on pool and device health, see the
2156.Sx Device Failure and Recovery
2157section.
2158.Pp
2159If a scrub or resilver is in progress, this command reports the percentage done
2160and the estimated time to completion.
2161Both of these are only approximate, because the amount of data in the pool and
2162the other workloads on the system can change.
2163.Bl -tag -width Ds
2164.It Fl D
2165Display a histogram of deduplication statistics, showing the allocated
2166.Pq physically present on disk
2167and referenced
2168.Pq logically referenced in the pool
2169block counts and sizes by reference count.
2170.It Fl g
2171Display vdev GUIDs instead of the normal device names.
2172These GUIDs can be used in place of device names for the zpool
2173detach/offline/remove/replace commands.
2174.It Fl L
2175Display real paths for vdevs resolving all symbolic links.
2176This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
2178path used to open it.
2179.It Fl p
2180Display numbers in parsable (exact) values.
2181.It Fl P
2182Display full paths for vdevs instead of only the last component of
2183the path.
2184This can be used in conjunction with the
2185.Fl L
2186flag.
2187.It Fl s
Display the number of leaf vdev slow IOs.
This is the number of IOs that didn't complete in
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, just that
they took an unreasonably long amount of time.
2194This may indicate a problem with the underlying storage.
2195.It Fl t
2196Display vdev TRIM status.
2197.It Fl T Sy u Ns | Ns Sy d
2198Display a time stamp.
2199Specify
2200.Sy u
2201for a printed representation of the internal representation of time.
2202See
2203.Xr time 2 .
2204Specify
2205.Sy d
2206for standard date format.
2207See
2208.Xr date 1 .
2209.It Fl v
2210Displays verbose data error information, printing out a complete list of all
2211data errors since the last complete pool scrub.
2212.It Fl x
2213Only display status for pools that are exhibiting errors or are otherwise
2214unavailable.
2215Warnings about pools not using the latest on-disk format will not be included.
2216.El
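.Pp
For example, the following command reports only unhealthy pools, along
with verbose data error information:
.Bd -literal
# zpool status -xv
.Ed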
2217.It Xo
2218.Nm
2219.Cm sync
2220.Oo Ar pool Oc Ns ...
2221.Xc
2222Forces all in-core dirty data to be written to the primary pool storage and
2223not the ZIL.
2224It will also update administrative information including quota reporting.
2225Without arguments,
2226.Nm zpool Cm sync
2227will sync all pools on the system.
2228Otherwise, it will only sync the specified
2229.Ar pool .
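.Pp
For example, the following command syncs only the pool
.Em tank
.Pq the pool name is illustrative :
.Bd -literal
# zpool sync tank
.Ed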
2230.It Xo
2231.Nm
2232.Cm trim
2233.Op Fl d
2234.Op Fl r Ar rate
2235.Op Fl c | Fl s
2236.Ar pool
2237.Op Ar device Ns ...
2238.Xc
2239Initiates an immediate on-demand TRIM operation for all of the free space in
2240a pool.
2241This operation informs the underlying storage devices of all blocks
2242in the pool which are no longer allocated and allows thinly provisioned
2243devices to reclaim the space.
2244.Pp
2245A manual on-demand TRIM operation can be initiated irrespective of the
2246.Sy autotrim
2247pool property setting.
2248See the documentation for the
2249.Sy autotrim
2250property above for the types of vdev devices which can be trimmed.
2251.Bl -tag -width Ds
.It Fl d, -secure
2253Causes a secure TRIM to be initiated.
2254When performing a secure TRIM, the device guarantees that data stored on the
2255trimmed blocks has been erased.
2256This requires support from the device and is not supported by all SSDs.
.It Fl r, -rate Ar rate
2258Controls the rate at which the TRIM operation progresses.
2259Without this option TRIM is executed as quickly as possible.
2260The rate, expressed in bytes per second, is applied on a per-vdev basis and
2261may be set differently for each leaf vdev.
2262.It Fl c, -cancel
2263Cancel trimming on the specified devices, or all eligible devices if none
2264are specified.
2265If one or more target devices are invalid or are not currently being
2266trimmed, the command will fail and no cancellation will occur on any device.
.It Fl s, -suspend
2268Suspend trimming on the specified devices, or all eligible devices if none
2269are specified.
2270If one or more target devices are invalid or are not currently being
2271trimmed, the command will fail and no suspension will occur on any device.
2272Trimming can then be resumed by running
2273.Nm zpool Cm trim
2274with no flags on the relevant target devices.
2275.El
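.Pp
For example, the following commands start a TRIM of all free space in
the pool
.Em tank
and later cancel it
.Pq the pool name is illustrative :
.Bd -literal
# zpool trim tank
# zpool trim -c tank
.Ed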
2276.It Xo
2277.Nm
2278.Cm upgrade
2279.Xc
2280Displays pools which do not have all supported features enabled and pools
2281formatted using a legacy ZFS version number.
2282These pools can continue to be used, but some features may not be available.
2283Use
2284.Nm zpool Cm upgrade Fl a
2285to enable all features on all pools.
2286.It Xo
2287.Nm
2288.Cm upgrade
2289.Fl v
2290.Xc
2291Displays legacy ZFS versions supported by the current software.
2292See
2293.Xr zpool-features 7
for a description of the feature-flag features supported by the current
software.
2295.It Xo
2296.Nm
2297.Cm upgrade
2298.Op Fl V Ar version
2299.Fl a Ns | Ns Ar pool Ns ...
2300.Xc
2301Enables all supported features on the given pool.
2302Once this is done, the pool will no longer be accessible on systems that do not
2303support feature flags.
2304See
2305.Xr zpool-features 7
2306for details on compatibility with systems that support feature flags, but do not
2307support all features enabled on the pool.
2308.Bl -tag -width Ds
2309.It Fl a
2310Enables all supported features on all pools.
2311.It Fl V Ar version
2312Upgrade to the specified legacy version.
2313If the
2314.Fl V
2315flag is specified, no features will be enabled on the pool.
2316This option can only be used to increase the version number up to the last
2317supported legacy version number.
2318.El
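.Pp
For example, the following command enables all supported features on
the pool
.Em tank
.Pq the pool name is illustrative :
.Bd -literal
# zpool upgrade tank
.Ed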
2319.El
2320.Sh EXIT STATUS
2321The following exit values are returned:
2322.Bl -tag -width Ds
2323.It Sy 0
2324Successful completion.
2325.It Sy 1
2326An error occurred.
2327.It Sy 2
2328Invalid command line options were specified.
2329.El
2330.Sh EXAMPLES
2331.Bl -tag -width Ds
2332.It Sy Example 1 No Creating a RAID-Z Storage Pool
2333The following command creates a pool with a single raidz root vdev that
2334consists of six disks.
2335.Bd -literal
2336# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
2337.Ed
2338.It Sy Example 2 No Creating a Mirrored Storage Pool
2339The following command creates a pool with two mirrors, where each mirror
2340contains two disks.
2341.Bd -literal
2342# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
2343.Ed
2344.It Sy Example 3 No Creating a ZFS Storage Pool by Using Slices
2345The following command creates an unmirrored pool using two disk slices.
2346.Bd -literal
2347# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
2348.Ed
2349.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2350The following command creates an unmirrored pool using files.
2351While not recommended, a pool based on files can be useful for experimental
2352purposes.
2353.Bd -literal
2354# zpool create tank /path/to/file/a /path/to/file/b
2355.Ed
2356.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2357The following command adds two mirrored disks to the pool
2358.Em tank ,
2359assuming the pool is already made up of two-way mirrors.
2360The additional space is immediately available to any datasets within the pool.
2361.Bd -literal
2362# zpool add tank mirror c1t0d0 c1t1d0
2363.Ed
2364.It Sy Example 6 No Listing Available ZFS Storage Pools
2365The following command lists all available pools on the system.
2366In this case, the pool
2367.Em zion
2368is faulted due to a missing device.
2369The results from this command are similar to the following:
2370.Bd -literal
2371# zpool list
2372NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2373rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
2374tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
2375zion       -      -      -      -         -      -      -  FAULTED -
2376.Ed
2377.It Sy Example 7 No Destroying a ZFS Storage Pool
2378The following command destroys the pool
2379.Em tank
2380and any datasets contained within.
2381.Bd -literal
2382# zpool destroy -f tank
2383.Ed
2384.It Sy Example 8 No Exporting a ZFS Storage Pool
2385The following command exports the devices in pool
2386.Em tank
2387so that they can be relocated or later imported.
2388.Bd -literal
2389# zpool export tank
2390.Ed
2391.It Sy Example 9 No Importing a ZFS Storage Pool
2392The following command displays available pools, and then imports the pool
2393.Em tank
2394for use on the system.
2395The results from this command are similar to the following:
2396.Bd -literal
2397# zpool import
2398  pool: tank
2399    id: 15451357997522795478
2400 state: ONLINE
2401action: The pool can be imported using its name or numeric identifier.
2402config:
2403
2404        tank        ONLINE
2405          mirror    ONLINE
2406            c1t2d0  ONLINE
2407            c1t3d0  ONLINE
2408
2409# zpool import tank
2410.Ed
2411.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
2412The following command upgrades all ZFS Storage pools to the current version of
2413the software.
2414.Bd -literal
2415# zpool upgrade -a
2416This system is currently running ZFS version 2.
2417.Ed
2418.It Sy Example 11 No Managing Hot Spares
2419The following command creates a new pool with an available hot spare:
2420.Bd -literal
2421# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
2422.Ed
2423.Pp
2424If one of the disks were to fail, the pool would be reduced to the degraded
2425state.
2426The failed device can be replaced using the following command:
2427.Bd -literal
2428# zpool replace tank c0t0d0 c0t3d0
2429.Ed
2430.Pp
2431Once the data has been resilvered, the spare is automatically removed and is
2432made available for use should another device fail.
2433The hot spare can be permanently removed from the pool using the following
2434command:
2435.Bd -literal
2436# zpool remove tank c0t2d0
2437.Ed
2438.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two
two-way
2440mirrors and mirrored log devices:
2441.Bd -literal
2442# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
2443  c4d0 c5d0
2444.Ed
2445.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2446The following command adds two disks for use as cache devices to a ZFS storage
2447pool:
2448.Bd -literal
2449# zpool add pool cache c2d0 c3d0
2450.Ed
2451.Pp
2452Once added, the cache devices gradually fill with content from main memory.
2453Depending on the size of your cache devices, it could take over an hour for
2454them to fill.
2455Capacity and reads can be monitored using the
2456.Cm iostat
subcommand as follows:
2458.Bd -literal
2459# zpool iostat -v pool 5
2460.Ed
2461.It Sy Example 14 No Removing a Mirrored top-level (Log or Data) Device
2462The following commands remove the mirrored log device
2463.Sy mirror-2
2464and mirrored top-level data device
2465.Sy mirror-1 .
2466.Pp
2467Given this configuration:
2468.Bd -literal
2469  pool: tank
2470 state: ONLINE
2471 scrub: none requested
2472config:
2473
2474         NAME        STATE     READ WRITE CKSUM
2475         tank        ONLINE       0     0     0
2476           mirror-0  ONLINE       0     0     0
2477             c6t0d0  ONLINE       0     0     0
2478             c6t1d0  ONLINE       0     0     0
2479           mirror-1  ONLINE       0     0     0
2480             c6t2d0  ONLINE       0     0     0
2481             c6t3d0  ONLINE       0     0     0
2482         logs
2483           mirror-2  ONLINE       0     0     0
2484             c4t0d0  ONLINE       0     0     0
2485             c4t1d0  ONLINE       0     0     0
2486.Ed
2487.Pp
2488The command to remove the mirrored log
2489.Sy mirror-2
2490is:
2491.Bd -literal
2492# zpool remove tank mirror-2
2493.Ed
2494.Pp
2495The command to remove the mirrored data
2496.Sy mirror-1
2497is:
2498.Bd -literal
2499# zpool remove tank mirror-1
2500.Ed
2501.It Sy Example 15 No Displaying expanded space on a device
2502The following command displays the detailed information for the pool
2503.Em data .
This pool consists of a single raidz vdev where one of its devices
increased its capacity by 10GB.
2506In this example, the pool will not be able to utilize this extra capacity until
2507all the devices under the raidz vdev have been expanded.
2508.Bd -literal
2509# zpool list -v data
2510NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2511data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
2512  raidz1    23.9G  14.6G  9.30G    48%         -
2513    c1t1d0      -      -      -      -         -
2514    c1t2d0      -      -      -      -       10G
2515    c1t3d0      -      -      -      -         -
2516.Ed
2517.El
2518.Sh ENVIRONMENT VARIABLES
2519.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
2520.It Ev ZPOOL_VDEV_NAME_GUID
2521Cause
.Nm zpool
subcommands to output vdev GUIDs by default.
2523This behavior is identical to the
.Nm zpool Cm status Fl g
2525command line option.
2528.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2529Cause
2530.Nm zpool
2531subcommands to follow links for vdev names by default.
2532This behavior is identical to the
.Nm zpool Cm status Fl L
2534command line option.
2537.It Ev ZPOOL_VDEV_NAME_PATH
2538Cause
2539.Nm zpool
2540subcommands to output full vdev path names by default.
2541This behavior is identical to the
.Nm zpool Cm status Fl P
2543command line option.
2544.El
2545.Sh INTERFACE STABILITY
2546.Sy Evolving
2547.Sh SEE ALSO
2548.Xr attributes 7 ,
2549.Xr zpool-features 7 ,
2550.Xr zfs 8
2551