1.\"
2.\" CDDL HEADER START
3.\"
4.\" The contents of this file are subject to the terms of the
5.\" Common Development and Distribution License (the "License").
6.\" You may not use this file except in compliance with the License.
7.\"
8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9.\" or http://www.opensolaris.org/os/licensing.
10.\" See the License for the specific language governing permissions
11.\" and limitations under the License.
12.\"
13.\" When distributing Covered Code, include this CDDL HEADER in each
14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15.\" If applicable, add the following below this CDDL HEADER, with the
16.\" fields enclosed by brackets "[]" replaced with your own identifying
17.\" information: Portions Copyright [yyyy] [name of copyright owner]
18.\"
19.\" CDDL HEADER END
20.\"
21.\"
22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23.\" Copyright (c) 2012, 2017 by Delphix. All rights reserved.
24.\" Copyright 2017 Nexenta Systems, Inc.
25.\" Copyright (c) 2017 Datto Inc.
26.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27.\" Copyright 2021 Joyent, Inc.
28.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
29.\" Copyright 2024 OmniOS Community Edition (OmniOSce) Association.
30.\" Copyright 2025 Hans Rosenfeld
31.\"
32.Dd September 6, 2025
33.Dt ZPOOL 8
34.Os
35.Sh NAME
36.Nm zpool
37.Nd configure ZFS storage pools
38.Sh SYNOPSIS
39.Nm
40.Fl \&?
41.Nm
42.Cm add
43.Op Fl fgLnP
44.Oo Fl o Ar property Ns = Ns Ar value Oc
45.Ar pool vdev Ns ...
46.Nm
47.Cm attach
48.Op Fl f
49.Oo Fl o Ar property Ns = Ns Ar value Oc
50.Ar pool device new_device
51.Nm
52.Cm checkpoint
53.Op Fl d , -discard
54.Ar pool
55.Nm
56.Cm clear
57.Ar pool
58.Op Ar device
59.Nm
60.Cm create
61.Op Fl dfn
62.Op Fl B
63.Op Fl m Ar mountpoint
64.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
65.Oo Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value Oc Ns ...
66.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
67.Op Fl R Ar root
68.Op Fl t Ar tempname
69.Ar pool vdev Ns ...
70.Nm
71.Cm destroy
72.Op Fl f
73.Ar pool
74.Nm
75.Cm detach
76.Ar pool device
77.Nm
78.Cm export
79.Op Fl f
80.Ar pool Ns ...
81.Nm
82.Cm get
83.Op Fl Hp
84.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
85.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
86.Ar pool Ns ...
87.Nm
88.Cm history
89.Op Fl il
90.Oo Ar pool Oc Ns ...
91.Nm
92.Cm import
93.Op Fl D
94.Op Fl d Ar dir
95.Nm
96.Cm import
97.Fl a
98.Op Fl DflmN
99.Op Fl F Op Fl n
100.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
101.Op Fl o Ar mntopts
102.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
103.Op Fl R Ar root
104.Nm
105.Cm import
106.Op Fl Dfmt
107.Op Fl F Op Fl n
108.Op Fl -rewind-to-checkpoint
109.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
110.Op Fl o Ar mntopts
111.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
112.Op Fl R Ar root
113.Ar pool Ns | Ns Ar id
114.Op Ar newpool
115.Nm
116.Cm initialize
117.Op Fl c | Fl s
118.Ar pool
119.Op Ar device Ns ...
120.Nm
121.Cm iostat
122.Op Oo Fl lq Oc | Ns Fl rw
123.Op Fl T Sy u Ns | Ns Sy d
124.Op Fl ghHLnpPvy
125.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
126.Op Ar interval Op Ar count
127.Nm
128.Cm labelclear
129.Op Fl f
130.Ar device
131.Nm
132.Cm list
133.Op Fl HgLpPv
134.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
135.Op Fl T Sy u Ns | Ns Sy d
136.Oo Ar pool Oc Ns ...
137.Op Ar interval Op Ar count
138.Nm
139.Cm offline
140.Op Fl t
141.Ar pool Ar device Ns ...
142.Nm
143.Cm online
144.Op Fl e
145.Ar pool Ar device Ns ...
146.Nm
147.Cm reguid
148.Ar pool
149.Nm
150.Cm reopen
151.Ar pool
152.Nm
153.Cm remove
154.Op Fl np
155.Ar pool Ar device Ns ...
156.Nm
157.Cm remove
158.Fl s
159.Ar pool
160.Nm
161.Cm replace
162.Op Fl f
163.Ar pool Ar device Op Ar new_device
164.Nm
165.Cm resilver
166.Ar pool Ns ...
167.Nm
168.Cm scrub
169.Op Fl s | Fl p
170.Ar pool Ns ...
171.Nm
172.Cm trim
173.Op Fl d
174.Op Fl r Ar rate
175.Op Fl c | Fl s
176.Ar pool
177.Op Ar device Ns ...
178.Nm
179.Cm set
180.Ar property Ns = Ns Ar value
181.Ar pool
182.Nm
183.Cm split
184.Op Fl gLlnP
185.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
186.Op Fl R Ar root
187.Ar pool newpool
188.Nm
189.Cm status
190.Op Fl DigLpPstvx
191.Op Fl T Sy u Ns | Ns Sy d
192.Oo Ar pool Oc Ns ...
193.Op Ar interval Op Ar count
194.Nm
195.Cm sync
196.Oo Ar pool Oc Ns ...
197.Nm
198.Cm upgrade
199.Nm
200.Cm upgrade
201.Fl v
202.Nm
203.Cm upgrade
204.Op Fl V Ar version
205.Fl a Ns | Ns Ar pool Ns ...
206.Sh DESCRIPTION
207The
208.Nm
209command configures ZFS storage pools.
210A storage pool is a collection of devices that provides physical storage and
211data replication for ZFS datasets.
212All datasets within a storage pool share the same space.
213See
214.Xr zfs 8
215for information on managing datasets.
216.Ss Virtual Devices (vdevs)
217A "virtual device" describes a single device or a collection of devices
218organized according to certain performance and fault characteristics.
219The following virtual devices are supported:
220.Bl -tag -width Ds
221.It Sy disk
222A block device, typically located under
223.Pa /dev/dsk .
224ZFS can use individual slices or partitions, though the recommended mode of
225operation is to use whole disks.
226A disk can be specified by a full path, or it can be a shorthand name
227.Po the relative portion of the path under
228.Pa /dev/dsk
229.Pc .
230A whole disk can be specified by omitting the slice or partition designation.
231For example,
232.Pa c0t0d0
233is equivalent to
234.Pa /dev/dsk/c0t0d0s2 .
235When given a whole disk, ZFS automatically labels the disk, if necessary.
236.It Sy file
237A regular file.
238The use of files as a backing store is strongly discouraged.
239It is designed primarily for experimental purposes, as the fault tolerance of a
240file is only as good as the file system of which it is a part.
241A file must be specified by a full path.
242.It Sy mirror
243A mirror of two or more devices.
244Data is replicated in an identical fashion across all components of a mirror.
245A mirror with N disks of size X can hold X bytes and can withstand (N-1) devices
246failing before data integrity is compromised.
247.It Sy raidz , raidz1 , raidz2 , raidz3
248A variation on RAID-5 that allows for better distribution of parity and
249eliminates the RAID-5
250.Qq write hole
251.Pq in which data and parity become inconsistent after a power loss .
252Data and parity is striped across all disks within a raidz group.
253.Pp
254A raidz group can have single-, double-, or triple-parity, meaning that the
255raidz group can sustain one, two, or three failures, respectively, without
256losing any data.
257The
258.Sy raidz1
259vdev type specifies a single-parity raidz group; the
260.Sy raidz2
261vdev type specifies a double-parity raidz group; and the
262.Sy raidz3
263vdev type specifies a triple-parity raidz group.
264The
265.Sy raidz
266vdev type is an alias for
267.Sy raidz1 .
268.Pp
269A raidz group with N disks of size X with P parity disks can hold approximately
270(N-P)*X bytes and can withstand P device(s) failing before data integrity is
271compromised.
272The minimum number of devices in a raidz group is one more than the number of
273parity disks.
274The recommended number is between 3 and 9 to help increase performance.
275.It Sy spare
276A special pseudo-vdev which keeps track of available hot spares for a pool.
277For more information, see the
278.Sx Hot Spares
279section.
280.It Sy log
281A separate intent log device.
282If more than one log device is specified, then writes are load-balanced between
283devices.
284Log devices can be mirrored.
285However, raidz vdev types are not supported for the intent log.
286For more information, see the
287.Sx Intent Log
288section.
289.It Sy dedup
290A device dedicated solely for allocating dedup data.
291The redundancy of this device should match the redundancy of the other normal
292devices in the pool.
293If more than one dedup device is specified, then allocations are load-balanced
294between devices.
295.It Sy special
296A device dedicated solely for allocating various kinds of internal metadata,
297and optionally small file data.
298The redundancy of this device should match the redundancy of the other normal
299devices in the pool.
300If more than one special device is specified, then allocations are
301load-balanced between devices.
302.Pp
303For more information on special allocations, see the
304.Sx Special Allocation Class
305section.
306.It Sy cache
307A device used to cache storage pool data.
308A cache device cannot be configured as a mirror or raidz group.
309For more information, see the
310.Sx Cache Devices
311section.
312.El
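.Pp
As a worked example of the raidz capacity formula above, a
.Sy raidz2
group of six disks of size 1TB (N=6, P=2) can hold approximately
(6-2)*1TB = 4TB and can withstand any two of the six disks failing.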
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks.
Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices.
As new virtual devices are added, ZFS automatically places data on the newly
available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace.
The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins.
For example, the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
All metadata and data is checksummed, and ZFS automatically repairs bad data
from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
A single case of bit corruption can render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted.
An online pool has all devices operating normally.
A degraded pool is one in which one or more devices have failed, but the data is
still available due to a redundant configuration.
A faulted pool has corrupted metadata, or one or more faulted devices, and
insufficient replicas to continue functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported while a device is unavailable, the device is identified
by a unique identifier instead of its path, since the path was never valid in
the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically.
Device attach detection is hardware-dependent and might not be supported on all
platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare.
If there is more than one spare that could be used as a replacement, then they
are tried in order of increasing capacity so that the smallest available spare
that can replace the failed device is used.
To create a pool with hot spares, specify a
.Sy spare
vdev with any number of devices.
For example,
.Bd -literal
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command.
Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced.
At this point, the hot spare becomes available again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported, since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
Shared spares add some risk.
If the pools are imported on different hosts, and both pools suffer a device
failure at the same time, both could attempt to use the spare at the same time.
This may not be detected, resulting in data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
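.Pp
For example, an in-progress replacement by the hot spare
.Pa c2d0
.Pq the device name is illustrative
can be cancelled with:
.Bd -literal
# zpool detach pool c2d0
.Ed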
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions.
For instance, databases often require their transactions to be on stable storage
devices when returning from a system call.
NFS and other applications can also use
.Xr fsync 3C
to ensure data stability.
By default, the intent log is allocated from blocks within the main pool.
However, it might be possible to get better performance using separate intent
log devices such as NVRAM or a dedicated disk.
For example:
.Bd -literal
# zpool create pool c0d0 c1d0 log c2d0
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored.
See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool.
Mirrored log devices can be removed by specifying the top-level mirror vdev.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk.
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low-latency media.
Using cache devices provides the greatest performance improvement for random
read workloads of mostly static content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices.
For example:
.Bd -literal
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Pool checkpoint
Before starting critical procedures that include destructive actions
.Pq e.g. Nm zfs Cm destroy ,
an administrator can checkpoint the pool's state and, in the case of a
mistake or failure, rewind the entire pool back to the checkpoint.
The checkpoint is automatically discarded upon rewinding.
Otherwise, the checkpoint can be discarded when the procedure has completed
successfully.
.Pp
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to vdev
configuration.
Thus, while a pool has a checkpoint, certain operations are not allowed:
specifically, vdev removal/attach/detach, mirror splitting, and
changing the pool's GUID.
Adding a new vdev is supported, but in the case of a rewind it will have to be
added again.
Finally, users of this feature should keep in mind that scrubs in a pool that
has a checkpoint do not repair checkpointed data.
.Pp
To create a checkpoint for a pool:
.Bd -literal
# zpool checkpoint pool
.Ed
.Pp
To later rewind the pool to its checkpointed state (which also discards the
checkpoint), first export the pool and then rewind it during import:
.Bd -literal
# zpool export pool
# zpool import --rewind-to-checkpoint pool
.Ed
.Pp
To discard the checkpoint from a pool without rewinding:
.Bd -literal
# zpool checkpoint -d pool
.Ed
.Pp
Dataset reservations (controlled by the
.Sy reservation
or
.Sy refreservation
zfs properties) may be unenforceable while a checkpoint exists, because the
checkpoint is allowed to consume the dataset's reservation.
Finally, data that is part of the checkpoint but has been freed in the
current state of the pool won't be scanned during a scrub.
.Ss Special Allocation Class
The allocations in the special class are dedicated to specific block types.
By default this includes all metadata, the indirect blocks of user data, and
any dedup data.
The class can also be provisioned to accept a limited percentage of small file
data blocks.
.Pp
A pool must always have at least one general (non-special) vdev before
other devices can be assigned to the special class.
If the special class becomes full, then allocations intended for it will spill
back into the normal class.
.Pp
Dedup data can be excluded from the special class by setting the
.Sy zfs_ddt_data_is_special
zfs kernel variable to false (0).
.Pp
Inclusion of small file blocks in the special class is opt-in.
Each dataset can control the size of small file blocks allowed in the special
class by setting the
.Sy special_small_blocks
dataset property.
It defaults to zero, so you must opt in by setting it to a non-zero value.
See
.Xr zfs 8
for more info on setting this property.
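.Pp
As an illustrative sketch
.Pq the pool and device names are placeholders ,
a pool with a mirrored special vdev can be created with:
.Bd -literal
# zpool create pool mirror c0d0 c1d0 special mirror c2d0 c3d0
.Ed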
.Ss Properties
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy allocated
Amount of storage space used within the pool.
.It Sy bootsize
The size of the system boot partition.
This property can only be set at pool creation time and is read-only once the
pool is created.
Setting this property implies using the
.Fl B
option.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
Uninitialized space consists of any space on an EFI labeled vdev which has not
been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Sy ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift ) .
Values from 9 to 16, inclusive, are valid; additionally, the value 0 (the
default) means to auto-detect using the kernel's block layer and a ZFS
internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk) write size will be set to the specified size,
so this represents a space vs. performance trade-off.
For optimal performance, the pool sector size should be greater than or equal
to the sector size of the underlying disks.
The typical case for setting this property is when performance is important and
the underlying disks use 4KiB sectors but report 512B sectors to the OS (for
compatibility reasons); in that case, set
.Sy ashift Ns = Ns Sy 12
(which is 1<<12 = 4096).
When set, this property is used as the default hint value in subsequent vdev
operations (add, attach and replace).
Changing this value will not modify any existing vdev, not even on disk
replacement; however, it can be used, for instance, to replace a dying
512B-sector disk with a newer 4KiB-sector device: this will probably result in
bad performance but at the same time could prevent loss of data.
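.Pp
For example, a pool on a disk with 4KiB sectors
.Pq the pool and device names are illustrative
could be created with:
.Bd -literal
# zpool create -o ashift=12 tank c2t0d0
.Ed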
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
.It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file is removed.
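.Pp
For example, a pool could be created with a custom cache file and later
imported from it
.Pq the path and names are illustrative :
.Bd -literal
# zpool create -o cachefile=/etc/zfs/alt.cache tank c0t0d0
# zpool import -c /etc/zfs/alt.cache tank
.Ed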
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies.
If the reference count for a deduplicated block increases above this number, a
new ditto copy of this block is automatically stored.
The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared.
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on ,
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support BLKDISCARD, such as SSDs, or
file vdevs on which the underlying file system supports hole-punching, to
reclaim unused blocks.
The default setting for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay, allowing smaller ranges to be
aggregated into a few larger ones.
These can then be issued more efficiently to the storage.
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices, it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
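.Pp
For example, automatic TRIM could be enabled on an existing pool
.Pq the pool name is illustrative
with:
.Bd -literal
# zpool set autotrim=on tank
.Ed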
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active, it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Sy zpool create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs-module-parameters 7
man page.
In order to enable this property, each host must set a unique hostid.
The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool, this property will no longer have a
value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl \&?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fgLnP
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool.
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
The behavior of the
.Fl f
option, and the device checks performed, are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl g
Display
.Ar vdev
GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl L
Display real paths for
.Ar vdev Ns s
resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
path used to open it.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual addition of devices can still fail due to insufficient privileges or
device sharing.
.It Fl P
Display full paths for
.Ar vdev Ns s
instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
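.Pp
For example, a mirrored pair of disks could be added to an existing pool
.Pq the pool and device names are illustrative
with:
.Bd -literal
# zpool add tank mirror c2t0d0 c2t1d0
.Ed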
.It Xo
.Nm
.Cm attach
.Op Fl f
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.El
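.Pp
For example, the single disk
.Pa c0t0d0
could be converted into a two-way mirror
.Pq the pool and device names are illustrative
with:
.Bd -literal
# zpool attach tank c0t0d0 c1t0d0
.Ed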
.It Xo
.Nm
.Cm checkpoint
.Op Fl d , -discard
.Ar pool
.Xc
Checkpoints the current state of
.Ar pool ,
which can later be restored by
.Nm zpool Cm import --rewind-to-checkpoint .
Rewinding will also discard the checkpoint.
The existence of a checkpoint in a pool prohibits the following
.Nm zpool
commands:
.Cm remove ,
.Cm attach ,
.Cm detach ,
.Cm split ,
and
.Cm reguid .
In addition, it may break reservation boundaries if the pool lacks free
space.
The
.Nm zpool Cm status
command indicates the existence of a checkpoint or the progress of discarding a
checkpoint from a pool.
The
.Nm zpool Cm list
command reports how much space the checkpoint takes from the pool.
.Bl -tag -width Ds
.It Fl d , -discard
Discards an existing checkpoint from
.Ar pool
without rewinding.
.El
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool.
If no arguments are specified, all device errors within the pool are cleared.
If one or more devices is specified, only those errors associated with the
specified device or devices are cleared.
If multihost is enabled, and the pool has been suspended, this will not
resume I/O.
While the pool was suspended, it may have been imported on
another host, and resuming I/O could result in pool damage.
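.Pp
For example, errors associated with a single device
.Pq the pool and device names are illustrative
could be cleared with:
.Bd -literal
# zpool clear tank c0t0d0
.Ed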
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl B
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Op Fl t Ar tempname
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line.
The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy - ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command attempts to verify that each device specified is accessible and not
currently in use by another subsystem.
However, this check is not robust enough
to detect simultaneous attempts to use a new device in different pools, even if
.Sy multihost
is
.Sy enabled .
The
administrator must ensure that simultaneous invocations of any combination of
.Sy zpool replace ,
.Sy zpool create ,
.Sy zpool add ,
or
.Sy zpool labelclear ,
do not refer to the same device.
Using the same device in two pools will
result in pool corruption.
.Pp
There are some uses, such as being currently mounted, or specified as the
dedicated dump device, that prevent a device from ever being used by ZFS.
Other uses, such as having a preexisting UFS file system, can be overridden with
the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent.
An attempt to combine redundant and non-redundant storage in a single pool, or
to mix disks and files, results in an error unless
.Fl f
is specified.
The use of differently sized devices within a single raidz or mirror group is
also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted.
This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl B
Creates a whole-disk pool with an EFI System partition to support booting the
system with UEFI firmware.
The default size is 256MB.
To create a boot partition with a custom size, set the
.Sy bootsize
property with the
.Fl o
option.
See the
.Sx Properties
section for details.
.It Fl d
Do not enable any features on the new pool.
Individual features can be enabled by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option.
See
.Xr zpool-features 7
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level.
Not all devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset.
The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified.
The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 8 .
.It Fl n
Displays the configuration that would be used without actually creating the
pool.
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl o Cm feature@ Ns Ar feature Ns = Ns Ar value
Sets the given pool feature.
See
.Xr zpool-features 7
for a list of valid features that can be set.
.Pp
.Ar value
can either be
.Sy disabled
or
.Sy enabled .
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool.
See the
.Sx Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.It Fl t Ar tempname
Sets the in-core pool name to
.Ar tempname
while the on-disk name will be the name specified as the pool name
.Ar pool .
This will set the default cachefile property to
.Sy none .
This is intended to handle namespace collisions when creating pools
for other systems, such as virtual machines or physical machines
whose pools live on network block devices.
.El
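.Pp
For example, a bootable whole-disk pool with an EFI System partition
.Pq the pool and device names are illustrative
could be created with:
.Bd -literal
# zpool create -B rpool c0t0d0
.Ed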
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use.
This command tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror.
The operation is refused if there are no other valid replicas of the data.
.It Xo
.Nm
.Cm export
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system.
All devices are marked as exported, but are still considered in use by other
subsystems.
The devices can be moved between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted.
A pool cannot be exported if it has a shared spare that is currently being
used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just slices, so that ZFS can label the disks with
portable EFI labels.
Otherwise, disk drivers on platforms of different endianness will not recognize
the disks.
.Bl -tag -width Ds
.It Fl f
Forcefully unmount all datasets, using the
.Nm zfs Cm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used.
This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
These properties are displayed with the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of arbitrary
space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns \&, Ns Sy property Ns \&, Ns Sy value Ns \&, Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
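.Pp
For example, just the capacity of a pool could be printed in scripted form
.Pq the pool name is illustrative
with:
.Bd -literal
# zpool get -H -o value capacity tank
.Ed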
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which, in addition to the standard format,
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Xc
Lists pools available to import.
If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev/dsk .
The
.Fl d
option can be specified multiple times, and all directories are searched.
If the device appears to be part of an exported pool, this command displays a
summary of the pool with the name of the pool, a numeric identifier, as well as
the vdev layout and current health of each device or file.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DflmN
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Xc
Imports all pools found in the search directories.
Identical to the previous command, except that all pools with a sufficient
number of devices available are imported.
Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that this command will request encryption keys for all encrypted
datasets it attempts to mount as it is bringing the pool online.
Note that if any datasets have a
.Sy keylocation
of
.Sy prompt
this command will block waiting for the keys to be entered.
Without this flag, encrypted datasets will be left unavailable until the keys
are loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.El
.It Xo
.Nm
.Cm import
.Op Fl Dfmt
.Op Fl F Op Fl n
.Op Fl -rewind-to-checkpoint
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool.
A pool can be identified by its name or the numeric identifier.
If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active.
It cannot be determined if this was a failed export, or whether the device is
really in use from another host.
To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property.
This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool.
The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool.
Attempt to return the pool to an importable state by discarding the last few
transactions.
Not all damaged pools can be recovered by using this option.
If successful, the data from the discarded transactions is irretrievably lost.
This option is ignored if the pool is importable or already imported.
.It Fl l
Indicates that the zpool command will request encryption keys for all
encrypted datasets it attempts to mount as it is bringing the pool
online.
This is equivalent to running
.Nm zfs Cm mount
on each encrypted dataset immediately after the pool is imported.
If any datasets have a
.Sy keylocation
of
.Sy prompt ,
this command will block waiting for the key to be entered.
Otherwise, encrypted datasets will be left unavailable until the keys are
loaded.
.It Fl m
Allows a pool to import when there is a missing log device.
Recent transactions can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option.
Determines whether a non-importable pool can be made importable again, but does
not actually perform the pool recovery.
For more details about pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool.
See
.Xr zfs 8
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool.
See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.It Fl t
Used with
.Ar newpool .
Specifies that
.Ar newpool
is temporary.
Temporary pool names last until export.
Ensures that the original pool name will be used in all label updates and
therefore is retained upon export.
Will also set the
.Sy cachefile
property to
.Sy none
when not explicitly specified.
.It Fl -rewind-to-checkpoint
Rewinds the pool to the checkpointed state.
Once the pool is imported with this flag, there is no way to undo the rewind.
All changes and data that were written after the checkpoint are lost!
The only exception is when the
.Sy readonly
mounting option is enabled.
In this case, the checkpointed state of the pool is opened and an
administrator can see how the pool would look if they were
to fully rewind.
.El
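.Pp
For example, a foreign pool could be imported for inspection under an
alternate root, leaving the system's usual mount points untouched
.Pq the path and pool name are illustrative :
.Bd -literal
# zpool import -R /a tank
.Ed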
.It Xo
.Nm
.Cm initialize
.Op Fl c | Fl s
.Ar pool
.Op Ar device Ns ...
.Xc
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
Only leaf data or log devices may be initialized.
.Bl -tag -width Ds
.It Fl c , -cancel
Cancel initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no cancellation will occur on any device.
.It Fl s , -suspend
Suspend initializing on the specified devices, or all eligible devices if none
are specified.
If one or more target devices are invalid or are not currently being
initialized, the command will fail and no suspension will occur on any device.
Initializing can then be resumed by running
.Nm zpool Cm initialize
with no flags on the relevant target devices.
.El
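.Pp
For example, initialization of all eligible devices in a pool could be started
and later suspended
.Pq the pool name is illustrative
with:
.Bd -literal
# zpool initialize tank
# zpool initialize -s tank
.Ed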
1627.It Xo
1628.Nm
1629.Cm iostat
1630.Op Oo Fl lq Oc | Ns Fl rw
1631.Op Fl T Sy u Ns | Ns Sy d
1632.Op Fl ghHLnpPvy
1633.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
1634.Op Ar interval Op Ar count
1635.Xc
1636Displays I/O statistics for the given pools/vdevs.
1637Physical I/Os may be observed via
1638.Xr iostat 8 .
1639If writes are located nearby, they may be merged into a single larger operation.
1640Additional I/O may be generated depending on the level of vdev redundancy.
1641To filter output, you may pass in a list of pools, a pool and list of vdevs
1642in that pool, or a list of any vdevs from any pool.
1643If no items are specified, statistics for every pool in the system are shown.
1644When given an
1645.Ar interval ,
1646the statistics are printed every
1647.Ar interval
1648seconds until ^C is pressed.
1649If
1650.Fl n
1651flag is specified the headers are displayed only once, otherwise they are
1652displayed periodically.
1653If
1654.Ar count
1655is specified, the command exits after
1656.Ar count
1657reports are printed.
1658The first report printed is always the statistics since boot regardless of
1659whether
1660.Ar interval
1661and
1662.Ar count
1663are passed.
1664Also note that the units of
1665.Sy K ,
1666.Sy M ,
1667.Sy G ...
1668that are printed in the report are in base 1024.
1669To get the raw values, use the
1670.Fl p
1671flag.
1672.Bl -tag -width Ds
1673.It Fl T Sy u Ns | Ns Sy d
1674Display a time stamp.
1675Specify
1676.Sy u
1677for a printed representation of the internal representation of time.
1678See
1679.Xr time 2 .
1680Specify
1681.Sy d
1682for standard date format.
1683See
1684.Xr date 1 .
1685.It Fl g
1686Display vdev GUIDs instead of the normal device names.
1687These GUIDs can be used in place of device names for the zpool
1688detach/offline/remove/replace commands.
1689.It Fl H
1690Scripted mode.
1691Do not display headers, and separate fields by a single tab instead of
1692arbitrary space.
1693.It Fl L
1694Display real paths for vdevs resolving all symbolic links.
1695This can be used to look up the current block device name regardless of the
1696.Pa /dev/dsk/
1697path used to open it.
1698.It Fl n
Print headers only once, rather than before each report.
1700.It Fl p
1701Display numbers in parsable (exact) values.
1702Time values are in nanoseconds.
1703.It Fl P
1704Display full paths for vdevs instead of only the last component of
1705the path.
1706This can be used in conjunction with the
1707.Fl L
1708flag.
1709.It Fl r
Print request size histograms for the leaf vdev's IO.
1711This includes histograms of individual IOs (ind) and aggregate IOs (agg).
1712These stats can be useful for observing how well IO aggregation is working.
1713Note that TRIM IOs may exceed 16M, but will be counted as 16M.
1714.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1717.It Fl y
1718Omit statistics since boot.
1719Normally the first line of output reports the statistics since boot.
1720This option suppresses that first line of output.
1722.It Fl w
1723Display latency histograms:
1724.Pp
1725.Ar total_wait :
1726Total IO time (queuing + disk IO time).
1727.Ar disk_wait :
1728Disk IO time (time reading/writing the disk).
1729.Ar syncq_wait :
1730Amount of time IO spent in synchronous priority queues.
1731Does not include disk time.
1732.Ar asyncq_wait :
1733Amount of time IO spent in asynchronous priority queues.
1734Does not include disk time.
1735.Ar scrub :
1736Amount of time IO spent in scrub queue.
1737Does not include disk time.
1738.It Fl l
1739Include average latency statistics:
1740.Pp
1741.Ar total_wait :
1742Average total IO time (queuing + disk IO time).
1743.Ar disk_wait :
1744Average disk IO time (time reading/writing the disk).
1745.Ar syncq_wait :
1746Average amount of time IO spent in synchronous priority queues.
1747Does not include disk time.
1748.Ar asyncq_wait :
1749Average amount of time IO spent in asynchronous priority queues.
1750Does not include disk time.
1751.Ar scrub :
1752Average queuing time in scrub queue.
1753Does not include disk time.
1754.Ar trim :
1755Average queuing time in trim queue.
1756Does not include disk time.
1757.It Fl q
1758Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
IOs.
1764Pending IOs are waiting to be issued to the disk, and active IOs have been
1765issued to disk and are waiting for completion.
1766These stats are broken out by priority queue:
1767.Pp
1768.Ar syncq_read/write :
1769Current number of entries in synchronous priority
1770queues.
1771.Ar asyncq_read/write :
1772Current number of entries in asynchronous priority queues.
1773.Ar scrubq_read :
1774Current number of entries in scrub queue.
1775.Ar trimq_write :
1776Current number of entries in trim queue.
1777.Pp
1778All queue statistics are instantaneous measurements of the number of
1779entries in the queues.
1780If you specify an interval, the measurements will be sampled from the end of
1781the interval.
1782.El
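.Pp
For example, the following would display verbose statistics for the pool
.Em tank
every 5 seconds, exiting after 3 reports:
.Bd -literal
# zpool iostat -v tank 5 3
.Ed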
1783.It Xo
1784.Nm
1785.Cm labelclear
1786.Op Fl f
1787.Ar device
1788.Xc
1789Removes ZFS label information from the specified
1790.Ar device .
1791The
1792.Ar device
1793must not be part of an active pool configuration.
1794.Bl -tag -width Ds
1795.It Fl f
1796Treat exported or foreign devices as inactive.
1797.El
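.Pp
For example, the following would clear the label from the disk c0t2d0, which
must no longer be part of an active pool configuration:
.Bd -literal
# zpool labelclear c0t2d0
.Ed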
1798.It Xo
1799.Nm
1800.Cm list
1801.Op Fl HgLpPv
1802.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
1803.Op Fl T Sy u Ns | Ns Sy d
1804.Oo Ar pool Oc Ns ...
1805.Op Ar interval Op Ar count
1806.Xc
1807Lists the given pools along with a health status and space usage.
1808If no
1809.Ar pool Ns s
1810are specified, all pools in the system are listed.
1811When given an
1812.Ar interval ,
1813the information is printed every
1814.Ar interval
1815seconds until ^C is pressed.
1816If
1817.Ar count
1818is specified, the command exits after
1819.Ar count
1820reports are printed.
1821.Bl -tag -width Ds
1822.It Fl g
1823Display vdev GUIDs instead of the normal device names.
1824These GUIDs can be used in place of device names for the zpool
1825detach/offline/remove/replace commands.
1826.It Fl H
1827Scripted mode.
1828Do not display headers, and separate fields by a single tab instead of arbitrary
1829space.
1830.It Fl o Ar property
1831Comma-separated list of properties to display.
1832See the
1833.Sx Properties
1834section for a list of valid properties.
1835The default list is
.Cm name , size , allocated , free , checkpoint , expandsize , fragmentation , capacity ,
1837.Cm dedupratio , health , altroot .
1838.It Fl L
1839Display real paths for vdevs resolving all symbolic links.
1840This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
path used to open it.
1842.It Fl p
1843Display numbers in parsable
1844.Pq exact
1845values.
1846.It Fl P
1847Display full paths for vdevs instead of only the last component of
1848the path.
1849This can be used in conjunction with the
1850.Fl L
1851flag.
1852.It Fl T Sy u Ns | Ns Sy d
1853Display a time stamp.
1854Specify
1855.Sy u
1856for a printed representation of the internal representation of time.
1857See
1858.Xr time 2 .
1859Specify
1860.Sy d
1861for standard date format.
1862See
1863.Xr date 1 .
1864.It Fl v
1865Verbose statistics.
1866Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
1868.El
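.Pp
For example, the following would print only the name and health of each pool
on the system in a script-friendly form:
.Bd -literal
# zpool list -H -o name,health
.Ed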
1869.It Xo
1870.Nm
1871.Cm offline
1872.Op Fl t
1873.Ar pool Ar device Ns ...
1874.Xc
1875Takes the specified physical device offline.
1876While the
1877.Ar device
1878is offline, no attempt is made to read or write to the device.
1879This command is not applicable to spares.
1880.Bl -tag -width Ds
1881.It Fl t
1882Temporary.
1883Upon reboot, the specified physical device reverts to its previous state.
1884.El
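.Pp
For example, the following would take the device c0t1d0 in the pool
.Em tank
offline until the next reboot:
.Bd -literal
# zpool offline -t tank c0t1d0
.Ed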
1885.It Xo
1886.Nm
1887.Cm online
1888.Op Fl e
1889.Ar pool Ar device Ns ...
1890.Xc
1891Brings the specified physical device online.
1892This command is not applicable to spares.
1893.Bl -tag -width Ds
1894.It Fl e
1895Expand the device to use all available space.
1896If the device is part of a mirror or raidz then all devices must be expanded
1897before the new space will become available to the pool.
1898.El
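.Pp
For example, the following would bring the device c0t1d0 in the pool
.Em tank
online and expand it to use all available space:
.Bd -literal
# zpool online -e tank c0t1d0
.Ed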
1899.It Xo
1900.Nm
1901.Cm reguid
1902.Ar pool
1903.Xc
1904Generates a new unique identifier for the pool.
1905You must ensure that all devices in this pool are online and healthy before
1906performing this action.
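.Pp
For example, the following would generate a new unique identifier for the
pool
.Em tank :
.Bd -literal
# zpool reguid tank
.Ed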
1907.It Xo
1908.Nm
1909.Cm reopen
1910.Ar pool
1911.Xc
1912Reopen all the vdevs associated with the pool.
1913.It Xo
1914.Nm
1915.Cm remove
1916.Op Fl np
1917.Ar pool Ar device Ns ...
1918.Xc
1919Removes the specified device from the pool.
This command currently supports removing only hot spares, cache and log
devices, and mirrored top-level vdevs (a mirror of leaf devices), but not
raidz vdevs.
.Pp
1923Removing a top-level vdev reduces the total amount of space in the storage pool.
1924The specified device will be evacuated by copying all allocated space from it to
1925the other devices in the pool.
1926In this case, the
1927.Nm zpool Cm remove
1928command initiates the removal and returns, while the evacuation continues in
1929the background.
1930The removal progress can be monitored with
1931.Nm zpool Cm status .
This feature must be enabled to be used; see
.Xr zpool-features 7 .
1934.Pp
A mirrored top-level device (log or data) can be removed by specifying the
top-level mirror itself.
1937Non-log devices or data devices that are part of a mirrored configuration can be
1938removed using the
1939.Nm zpool Cm detach
1940command.
1941.Bl -tag -width Ds
1942.It Fl n
1943Do not actually perform the removal ("no-op").
1944Instead, print the estimated amount of memory that will be used by the
1945mapping table after the removal completes.
1946This is nonzero only for top-level vdevs.
1949.It Fl p
1950Used in conjunction with the
1951.Fl n
1952flag, displays numbers as parsable (exact) values.
1953.El
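.Pp
For example, the following would print the memory expected to be used by the
mapping table if the top-level mirror vdev
.Sy mirror-1
were removed from the pool
.Em tank ,
without performing the removal:
.Bd -literal
# zpool remove -np tank mirror-1
.Ed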
1954.It Xo
1955.Nm
1956.Cm remove
1957.Fl s
1958.Ar pool
1959.Xc
1960Stops and cancels an in-progress removal of a top-level vdev.
1961.It Xo
1962.Nm
1963.Cm replace
1964.Op Fl f
1965.Ar pool Ar device Op Ar new_device
1966.Xc
Replaces
.Ar device
with
.Ar new_device .
1971This is equivalent to attaching
1972.Ar new_device ,
1973waiting for it to resilver, and then detaching
.Ar device .
1975.Pp
1976The size of
1977.Ar new_device
1978must be greater than or equal to the minimum size of all the devices in a mirror
1979or raidz configuration.
1980.Pp
1981.Ar new_device
1982is required if the pool is not redundant.
1983If
1984.Ar new_device
1985is not specified, it defaults to
.Ar device .
1987This form of replacement is useful after an existing disk has failed and has
1988been physically replaced.
1989In this case, the new disk may have the same
1990.Pa /dev/dsk
1991path as the old device, even though it is actually a different disk.
1992ZFS recognizes this.
1993.Bl -tag -width Ds
1994.It Fl f
1995Forces use of
1996.Ar new_device ,
even if it appears to be in use.
1998Not all devices can be overridden in this manner.
1999.El
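.Pp
For example, after physically replacing a failed disk that retains the same
.Pa /dev/dsk
path, the following would begin resilvering it in place:
.Bd -literal
# zpool replace tank c0t0d0
.Ed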
2000.It Xo
2001.Nm
2002.Cm resilver
2003.Ar pool Ns ...
2004.Xc
2005Starts a resilver.
2006If an existing resilver is already running it will be restarted from the
2007beginning.
2008Any drives that were scheduled for a deferred resilver will be added to the
2009new one.
2010This requires the
2011.Sy resilver_defer
2012feature.
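.Pp
For example, the following would start a resilver of the pool
.Em tank ,
restarting from the beginning any resilver already in progress:
.Bd -literal
# zpool resilver tank
.Ed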
2013.It Xo
2014.Nm
2015.Cm scrub
2016.Op Fl s | Fl p
2017.Ar pool Ns ...
2018.Xc
2019Begins a scrub or resumes a paused scrub.
2020The scrub examines all data in the specified pools to verify that it checksums
2021correctly.
2022For replicated
2023.Pq mirror or raidz
2024devices, ZFS automatically repairs any damage discovered during the scrub.
2025The
2026.Nm zpool Cm status
2027command reports the progress of the scrub and summarizes the results of the
2028scrub upon completion.
2029.Pp
2030Scrubbing and resilvering are very similar operations.
2031The difference is that resilvering only examines data that ZFS knows to be out
2032of date
2033.Po
2034for example, when attaching a new device to a mirror or replacing an existing
2035device
2036.Pc ,
2037whereas scrubbing examines all data to discover silent errors due to hardware
2038faults or disk failure.
2039.Pp
2040Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
2041one at a time.
If a scrub is paused, the
.Nm zpool Cm scrub
command resumes it.
2045If a resilver is in progress, ZFS does not allow a scrub to be started until the
2046resilver completes.
2047.Pp
2048Note that, due to changes in pool data on a live system, it is possible for
2049scrubs to progress slightly beyond 100% completion.
2050During this period, no completion time estimate will be provided.
2051.Bl -tag -width Ds
2052.It Fl s
2053Stop scrubbing.
2056.It Fl p
2057Pause scrubbing.
2058Scrub pause state and progress are periodically synced to disk.
2059If the system is restarted or pool is exported during a paused scrub,
2060even after import, scrub will remain paused until it is resumed.
2061Once resumed the scrub will pick up from the place where it was last
2062checkpointed to disk.
2063To resume a paused scrub issue
2064.Nm zpool Cm scrub
2065again.
2066.El
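.Pp
For example, the following would start a scrub of the pool
.Em tank ,
pause it, and later resume it:
.Bd -literal
# zpool scrub tank
# zpool scrub -p tank
# zpool scrub tank
.Ed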
2067.It Xo
2068.Nm
2069.Cm set
2070.Ar property Ns = Ns Ar value
2071.Ar pool
2072.Xc
2073Sets the given property on the specified pool.
2074See the
2075.Sx Properties
2076section for more information on what properties can be set and acceptable
2077values.
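.Pp
For example, the following would set the
.Sy autoexpand
property on the pool
.Em tank :
.Bd -literal
# zpool set autoexpand=on tank
.Ed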
2078.It Xo
2079.Nm
2080.Cm split
2081.Op Fl gLlnP
2082.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
2083.Op Fl R Ar root
2084.Ar pool newpool
2085.Xc
2086Splits devices off
2087.Ar pool
2088creating
2089.Ar newpool .
2090All vdevs in
2091.Ar pool
2092must be mirrors.
2093At the time of the split,
2094.Ar newpool
2095will be a replica of
2096.Ar pool .
2097.Bl -tag -width Ds
2098.It Fl g
2099Display vdev GUIDs instead of the normal device names.
2100These GUIDs can be used in place of device names for the zpool
2101detach/offline/remove/replace commands.
2102.It Fl L
2103Display real paths for vdevs resolving all symbolic links.
2104This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
2106path used to open it.
2107.It Fl l
2108Indicates that this command will request encryption keys for all encrypted
2109datasets it attempts to mount as it is bringing the new pool online.
2110Note that if any datasets have a
2111.Sy keylocation
2112of
2113.Sy prompt
2114this command will block waiting for the keys to be entered.
2115Without this flag encrypted datasets will be left unavailable and unmounted
2116until the keys are loaded.
2117.It Fl n
Do a dry run; do not actually perform the split.
2119Print out the expected configuration of
2120.Ar newpool .
2121.It Fl P
2122Display full paths for vdevs instead of only the last component of
2123the path.
2124This can be used in conjunction with the
2125.Fl L
2126flag.
2127.It Fl o Ar property Ns = Ns Ar value
2128Sets the specified property for
2129.Ar newpool .
2130See the
2131.Sx Properties
2132section for more information on the available pool properties.
2133.It Fl R Ar root
2134Set
2135.Sy altroot
2136for
2137.Ar newpool
2138to
2139.Ar root
2140and automatically import it.
2141.El
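.Pp
For example, the following would split one device from each mirror in the
pool
.Em tank
into a new pool named
.Em tank2 :
.Bd -literal
# zpool split tank tank2
.Ed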
2142.It Xo
2143.Nm
2144.Cm status
2145.Op Fl DigLpPstvx
2146.Op Fl T Sy u Ns | Ns Sy d
2147.Oo Ar pool Oc Ns ...
2148.Op Ar interval Op Ar count
2149.Xc
2150Displays the detailed health status for the given pools.
2151If no
2152.Ar pool
2153is specified, then the status of each pool in the system is displayed.
2154For more information on pool and device health, see the
2155.Sx Device Failure and Recovery
2156section.
2157.Pp
2158If a scrub or resilver is in progress, this command reports the percentage done
2159and the estimated time to completion.
2160Both of these are only approximate, because the amount of data in the pool and
2161the other workloads on the system can change.
2162.Bl -tag -width Ds
2163.It Fl D
2164Display a histogram of deduplication statistics, showing the allocated
2165.Pq physically present on disk
2166and referenced
2167.Pq logically referenced in the pool
2168block counts and sizes by reference count.
2169.It Fl g
2170Display vdev GUIDs instead of the normal device names.
2171These GUIDs can be used in place of device names for the zpool
2172detach/offline/remove/replace commands.
2173.It Fl i
2174Display vdev initialization status.
2175.It Fl L
2176Display real paths for vdevs resolving all symbolic links.
2177This can be used to look up the current block device name regardless of the
.Pa /dev/dsk/
2179path used to open it.
2180.It Fl p
2181Display numbers in parsable (exact) values.
2182.It Fl P
2183Display full paths for vdevs instead of only the last component of
2184the path.
2185This can be used in conjunction with the
2186.Fl L
2187flag.
2188.It Fl s
Display the number of leaf vdev slow IOs.
This is the number of IOs that didn't complete in
.Sy zio_slow_io_ms
milliseconds (default 30 seconds).
This does not necessarily mean the IOs failed to complete, just that they
took an unreasonably long time.
2195This may indicate a problem with the underlying storage.
2196.It Fl t
2197Display vdev TRIM status.
2198.It Fl T Sy u Ns | Ns Sy d
2199Display a time stamp.
2200Specify
2201.Sy u
2202for a printed representation of the internal representation of time.
2203See
2204.Xr time 2 .
2205Specify
2206.Sy d
2207for standard date format.
2208See
2209.Xr date 1 .
2210.It Fl v
2211Displays verbose data error information, printing out a complete list of all
2212data errors since the last complete pool scrub.
2213.It Fl x
2214Only display status for pools that are exhibiting errors or are otherwise
2215unavailable.
2216Warnings about pools not using the latest on-disk format will not be included.
2217.El
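.Pp
For example, when all pools are healthy, the output is similar to the
following:
.Bd -literal
# zpool status -x
all pools are healthy
.Ed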
2218.It Xo
2219.Nm
2220.Cm sync
2221.Oo Ar pool Oc Ns ...
2222.Xc
2223Forces all in-core dirty data to be written to the primary pool storage and
2224not the ZIL.
2225It will also update administrative information including quota reporting.
2226Without arguments,
2227.Nm zpool Cm sync
2228will sync all pools on the system.
2229Otherwise, it will only sync the specified
2230.Ar pool .
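.Pp
For example, the following would sync only the pool
.Em tank :
.Bd -literal
# zpool sync tank
.Ed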
2231.It Xo
2232.Nm
2233.Cm trim
2234.Op Fl d
2235.Op Fl r Ar rate
2236.Op Fl c | Fl s
2237.Ar pool
2238.Op Ar device Ns ...
2239.Xc
2240Initiates an immediate on-demand TRIM operation for all of the free space in
2241a pool.
2242This operation informs the underlying storage devices of all blocks
2243in the pool which are no longer allocated and allows thinly provisioned
2244devices to reclaim the space.
2245.Pp
2246A manual on-demand TRIM operation can be initiated irrespective of the
2247.Sy autotrim
2248pool property setting.
2249See the documentation for the
2250.Sy autotrim
2251property above for the types of vdev devices which can be trimmed.
2252.Bl -tag -width Ds
.It Fl d , -secure
2254Causes a secure TRIM to be initiated.
2255When performing a secure TRIM, the device guarantees that data stored on the
2256trimmed blocks has been erased.
2257This requires support from the device and is not supported by all SSDs.
.It Fl r , -rate Ar rate
2259Controls the rate at which the TRIM operation progresses.
2260Without this option TRIM is executed as quickly as possible.
2261The rate, expressed in bytes per second, is applied on a per-vdev basis and
2262may be set differently for each leaf vdev.
2263.It Fl c , -cancel
2264Cancel trimming on the specified devices, or all eligible devices if none
2265are specified.
2266If one or more target devices are invalid or are not currently being
2267trimmed, the command will fail and no cancellation will occur on any device.
.It Fl s , -suspend
2269Suspend trimming on the specified devices, or all eligible devices if none
2270are specified.
2271If one or more target devices are invalid or are not currently being
2272trimmed, the command will fail and no suspension will occur on any device.
2273Trimming can then be resumed by running
2274.Nm zpool Cm trim
2275with no flags on the relevant target devices.
2276.El
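.Pp
For example, the following would begin trimming all eligible vdevs in the
pool
.Em tank :
.Bd -literal
# zpool trim tank
.Ed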
2277.It Xo
2278.Nm
2279.Cm upgrade
2280.Xc
2281Displays pools which do not have all supported features enabled and pools
2282formatted using a legacy ZFS version number.
2283These pools can continue to be used, but some features may not be available.
2284Use
2285.Nm zpool Cm upgrade Fl a
2286to enable all features on all pools.
2287.It Xo
2288.Nm
2289.Cm upgrade
2290.Fl v
2291.Xc
2292Displays legacy ZFS versions supported by the current software.
2293See
2294.Xr zpool-features 7
for a description of the feature flags supported by the current software.
2296.It Xo
2297.Nm
2298.Cm upgrade
2299.Op Fl V Ar version
2300.Fl a Ns | Ns Ar pool Ns ...
2301.Xc
2302Enables all supported features on the given pool.
2303Once this is done, the pool will no longer be accessible on systems that do not
2304support feature flags.
2305See
2306.Xr zpool-features 7
2307for details on compatibility with systems that support feature flags, but do not
2308support all features enabled on the pool.
2309.Bl -tag -width Ds
2310.It Fl a
2311Enables all supported features on all pools.
2312.It Fl V Ar version
2313Upgrade to the specified legacy version.
2314If the
2315.Fl V
2316flag is specified, no features will be enabled on the pool.
2317This option can only be used to increase the version number up to the last
2318supported legacy version number.
2319.El
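.Pp
For example, assuming legacy version 28 is supported, the following would
upgrade the pool
.Em tank
to that version without enabling any feature flags:
.Bd -literal
# zpool upgrade -V 28 tank
.Ed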
2320.El
2321.Sh EXIT STATUS
2322The following exit values are returned:
2323.Bl -tag -width Ds
2324.It Sy 0
2325Successful completion.
2326.It Sy 1
2327An error occurred.
2328.It Sy 2
2329Invalid command line options were specified.
2330.El
2331.Sh EXAMPLES
2332.Bl -tag -width Ds
2333.It Sy Example 1 No Creating a RAID-Z Storage Pool
2334The following command creates a pool with a single raidz root vdev that
2335consists of six disks.
2336.Bd -literal
2337# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
2338.Ed
2339.It Sy Example 2 No Creating a Mirrored Storage Pool
2340The following command creates a pool with two mirrors, where each mirror
2341contains two disks.
2342.Bd -literal
2343# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
2344.Ed
2345.It Sy Example 3 No Creating a ZFS Storage Pool by Using Slices
2346The following command creates an unmirrored pool using two disk slices.
2347.Bd -literal
2348# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
2349.Ed
2350.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
2351The following command creates an unmirrored pool using files.
2352While not recommended, a pool based on files can be useful for experimental
2353purposes.
2354.Bd -literal
2355# zpool create tank /path/to/file/a /path/to/file/b
2356.Ed
2357.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
2358The following command adds two mirrored disks to the pool
2359.Em tank ,
2360assuming the pool is already made up of two-way mirrors.
2361The additional space is immediately available to any datasets within the pool.
2362.Bd -literal
2363# zpool add tank mirror c1t0d0 c1t1d0
2364.Ed
2365.It Sy Example 6 No Listing Available ZFS Storage Pools
2366The following command lists all available pools on the system.
2367In this case, the pool
2368.Em zion
2369is faulted due to a missing device.
2370The results from this command are similar to the following:
2371.Bd -literal
2372# zpool list
2373NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2374rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
2375tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
2376zion       -      -      -      -         -      -      -  FAULTED -
2377.Ed
2378.It Sy Example 7 No Destroying a ZFS Storage Pool
2379The following command destroys the pool
2380.Em tank
2381and any datasets contained within.
2382.Bd -literal
2383# zpool destroy -f tank
2384.Ed
2385.It Sy Example 8 No Exporting a ZFS Storage Pool
2386The following command exports the devices in pool
2387.Em tank
2388so that they can be relocated or later imported.
2389.Bd -literal
2390# zpool export tank
2391.Ed
2392.It Sy Example 9 No Importing a ZFS Storage Pool
2393The following command displays available pools, and then imports the pool
2394.Em tank
2395for use on the system.
2396The results from this command are similar to the following:
2397.Bd -literal
2398# zpool import
2399  pool: tank
2400    id: 15451357997522795478
2401 state: ONLINE
2402action: The pool can be imported using its name or numeric identifier.
2403config:
2404
2405        tank        ONLINE
2406          mirror    ONLINE
2407            c1t2d0  ONLINE
2408            c1t3d0  ONLINE
2409
2410# zpool import tank
2411.Ed
2412.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
2414the software.
2415.Bd -literal
2416# zpool upgrade -a
2417This system is currently running ZFS version 2.
2418.Ed
2419.It Sy Example 11 No Managing Hot Spares
2420The following command creates a new pool with an available hot spare:
2421.Bd -literal
2422# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
2423.Ed
2424.Pp
2425If one of the disks were to fail, the pool would be reduced to the degraded
2426state.
2427The failed device can be replaced using the following command:
2428.Bd -literal
2429# zpool replace tank c0t0d0 c0t3d0
2430.Ed
2431.Pp
2432Once the data has been resilvered, the spare is automatically removed and is
2433made available for use should another device fail.
2434The hot spare can be permanently removed from the pool using the following
2435command:
2436.Bd -literal
2437# zpool remove tank c0t2d0
2438.Ed
2439.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
2441mirrors and mirrored log devices:
2442.Bd -literal
2443# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
2444  c4d0 c5d0
2445.Ed
2446.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
2447The following command adds two disks for use as cache devices to a ZFS storage
2448pool:
2449.Bd -literal
2450# zpool add pool cache c2d0 c3d0
2451.Ed
2452.Pp
2453Once added, the cache devices gradually fill with content from main memory.
2454Depending on the size of your cache devices, it could take over an hour for
2455them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
2459.Bd -literal
2460# zpool iostat -v pool 5
2461.Ed
.It Sy Example 14 No Removing a Mirrored Top-Level (Log or Data) Device
2463The following commands remove the mirrored log device
2464.Sy mirror-2
2465and mirrored top-level data device
2466.Sy mirror-1 .
2467.Pp
2468Given this configuration:
2469.Bd -literal
2470  pool: tank
2471 state: ONLINE
2472 scrub: none requested
2473config:
2474
2475         NAME        STATE     READ WRITE CKSUM
2476         tank        ONLINE       0     0     0
2477           mirror-0  ONLINE       0     0     0
2478             c6t0d0  ONLINE       0     0     0
2479             c6t1d0  ONLINE       0     0     0
2480           mirror-1  ONLINE       0     0     0
2481             c6t2d0  ONLINE       0     0     0
2482             c6t3d0  ONLINE       0     0     0
2483         logs
2484           mirror-2  ONLINE       0     0     0
2485             c4t0d0  ONLINE       0     0     0
2486             c4t1d0  ONLINE       0     0     0
2487.Ed
2488.Pp
2489The command to remove the mirrored log
2490.Sy mirror-2
2491is:
2492.Bd -literal
2493# zpool remove tank mirror-2
2494.Ed
2495.Pp
2496The command to remove the mirrored data
2497.Sy mirror-1
2498is:
2499.Bd -literal
2500# zpool remove tank mirror-1
2501.Ed
.It Sy Example 15 No Displaying Expanded Space on a Device
2503The following command displays the detailed information for the pool
2504.Em data .
This pool consists of a single raidz vdev where one of its devices
increased its capacity by 10GB.
2507In this example, the pool will not be able to utilize this extra capacity until
2508all the devices under the raidz vdev have been expanded.
2509.Bd -literal
2510# zpool list -v data
2511NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
2512data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
2513  raidz1    23.9G  14.6G  9.30G    48%         -
2514    c1t1d0      -      -      -      -         -
2515    c1t2d0      -      -      -      -       10G
2516    c1t3d0      -      -      -      -         -
2517.Ed
2518.El
2519.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
2521.It Ev ZPOOL_VDEV_NAME_GUID
2522Cause
2523.Nm zpool
subcommands to output vdev GUIDs by default.
2525This behavior is identical to the
2526.Nm zpool status -g
2527command line option.
2530.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
2531Cause
2532.Nm zpool
2533subcommands to follow links for vdev names by default.
2534This behavior is identical to the
2535.Nm zpool status -L
2536command line option.
2539.It Ev ZPOOL_VDEV_NAME_PATH
2540Cause
2541.Nm zpool
2542subcommands to output full vdev path names by default.
2543This behavior is identical to the
2544.Nm zpool status -P
2545command line option.
2546.El
2547.Sh INTERFACE STABILITY
2548.Sy Evolving
2549.Sh SEE ALSO
2550.Xr attributes 7 ,
2551.Xr zpool-features 7 ,
2552.Xr zfs 8
2553