.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2013 by Delphix. All rights reserved.
.\" Copyright 2016 Nexenta Systems, Inc.
.\"
.Dd February 15, 2016
.Dt ZPOOL 1M
.Os
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl \?
.Nm
.Cm add
.Op Fl fn
.Ar pool vdev Ns ...
.Nm
.Cm attach
.Op Fl f
.Ar pool device new_device
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool vdev Ns ...
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Nm
.Cm detach
.Ar pool device
.Nm
.Cm export
.Op Fl f
.Ar pool Ns ...
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Nm
.Cm import
.Fl a
.Op Fl DfmN
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Nm
.Cm import
.Op Fl Dfm
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Nm
.Cm iostat
.Op Fl v
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm list
.Op Fl Hpv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm offline
.Op Fl t
.Ar pool Ar device Ns ...
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Nm
.Cm reguid
.Ar pool
.Nm
.Cm reopen
.Ar pool
.Nm
.Cm remove
.Ar pool Ar device Ns ...
.Nm
.Cm replace
.Op Fl f
.Ar pool Ar device Op Ar new_device
.Nm
.Cm scrub
.Op Fl s
.Ar pool Ns ...
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Nm
.Cm status
.Op Fl Dvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Nm
.Cm upgrade
.Nm
.Cm upgrade
.Fl v
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools. A storage pool is a collection of devices
that provides physical storage and data replication for ZFS datasets. All
datasets within a storage pool share the same space. See
.Xr zfs 1M
for information on managing datasets.
.Ss Virtual Devices (vdevs)
A "virtual device" describes a single device or a collection of devices
organized according to certain performance and fault characteristics. The
following virtual devices are supported:
.Bl -tag -width Ds
.It Sy disk
A block device, typically located under
.Pa /dev/dsk .
ZFS can use individual slices or partitions, though the recommended mode of
operation is to use whole disks. A disk can be specified by a full path, or it
can be a shorthand name
.Po the relative portion of the path under
.Pa /dev/dsk
.Pc .
A whole disk can be specified by omitting the slice or partition designation.
For example,
.Pa c0t0d0
is equivalent to
.Pa /dev/dsk/c0t0d0s2 .
When given a whole disk, ZFS automatically labels the disk, if necessary.
.It Sy file
A regular file. The use of files as a backing store is strongly discouraged. It
is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part. A file must be
specified by a full path.
.It Sy mirror
A mirror of two or more devices. Data is replicated in an identical fashion
across all components of a mirror. A mirror with N disks of size X can hold X
bytes and can withstand (N-1) devices failing before data integrity is
compromised.
.It Sy raidz , raidz1 , raidz2 , raidz3
A variation on RAID-5 that allows for better distribution of parity and
eliminates the RAID-5
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group.
.Pp
A raidz group can have single-, double-, or triple-parity, meaning that the
raidz group can sustain one, two, or three failures, respectively, without
losing any data. The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group. The
.Sy raidz
vdev type is an alias for
.Sy raidz1 .
.Pp
A raidz group with N disks of size X with P parity disks can hold approximately
(N-P)*X bytes and can withstand P device(s) failing before data integrity is
compromised. The minimum number of devices in a raidz group is one more than
the number of parity disks. The recommended number is between 3 and 9 to help
increase performance.
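.Pp
For example, the following command creates a double-parity raidz group of five
disks
.Pq device names are illustrative :
.Bd -literal
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0
.Ed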
.It Sy spare
A special pseudo-vdev which keeps track of available hot spares for a pool. For
more information, see the
.Sx Hot Spares
section.
.It Sy log
A separate intent log device. If more than one log device is specified, then
writes are load-balanced between devices. Log devices can be mirrored. However,
raidz vdev types are not supported for the intent log. For more information,
see the
.Sx Intent Log
section.
.It Sy cache
A device used to cache storage pool data. A cache device cannot be
configured as a mirror or raidz group. For more information, see the
.Sx Cache Devices
section.
.El
.Pp
Virtual devices cannot be nested, so a mirror or raidz virtual device can only
contain files or disks. Mirrors of mirrors
.Pq or other combinations
are not allowed.
.Pp
A pool can have any number of virtual devices at the top of the configuration
.Po known as
.Qq root vdevs
.Pc .
Data is dynamically distributed across all top-level devices to balance data
among devices. As new virtual devices are added, ZFS automatically places data
on the newly available devices.
.Pp
Virtual devices are specified one at a time on the command line, separated by
whitespace. The keywords
.Sy mirror
and
.Sy raidz
are used to distinguish where a group ends and another begins. For example,
the following creates two root vdevs, each a mirror of two disks:
.Bd -literal
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
.Ed
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption. All metadata and data is checksummed, and ZFS automatically repairs
bad data from a good copy when corruption is detected.
.Pp
In order to take advantage of these features, a pool must make use of some form
of redundancy, using either mirrored or raidz groups. While ZFS supports
running in a non-redundant configuration, where each root vdev is simply a disk
or file, this is strongly discouraged. A single case of bit corruption can
render some or all of your data unavailable.
.Pp
A pool's health status is described by one of three states: online, degraded,
or faulted. An online pool has all devices operating normally. A degraded pool
is one in which one or more devices have failed, but the data is still
available due to a redundant configuration. A faulted pool has corrupted
metadata, or one or more faulted devices, and insufficient replicas to continue
functioning.
.Pp
The health of the top-level vdev, such as a mirror or raidz device, is
potentially impacted by the state of its associated vdevs, or component
devices. A top-level vdev or component device is in one of the following
states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline. Sufficient replicas exist to continue
functioning.
.Pp
One or more component devices is in the degraded or faulted state, but
sufficient replicas exist to continue functioning. The underlying conditions
are as follows:
.Bl -bullet
.It
The number of checksum errors exceeds acceptable levels and the device is
degraded as an indication that something may be wrong. ZFS continues to use the
device as necessary.
.It
The number of I/O errors exceeds acceptable levels. The device could not be
marked as faulted because there are insufficient replicas to continue
functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline. Insufficient replicas exist to continue
functioning.
.Pp
One or more component devices is in the faulted state, and insufficient
replicas exist to continue functioning. The underlying conditions are as
follows:
.Bl -bullet
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running. Device removal
detection is hardware-dependent and may not be supported on all platforms.
.It Sy UNAVAIL
The device could not be opened. If a pool is imported when a device was
unavailable, then the device will be identified by a unique identifier instead
of its path since the path was never correct in the first place.
.El
.Pp
If a device is removed and later re-attached to the system, ZFS attempts
to put the device online automatically. Device attach detection is
hardware-dependent and might not be supported on all platforms.
.Ss Hot Spares
ZFS allows devices to be associated with pools as
.Qq hot spares .
These devices are not actively used in the pool, but when an active device
fails, it is automatically replaced by a hot spare. To create a pool with hot
spares, specify a
.Sy spare
vdev with any number of devices. For example,
.Bd -literal
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
.Ed
.Pp
Spares can be shared across multiple pools, and can be added with the
.Nm zpool Cm add
command and removed with the
.Nm zpool Cm remove
command. Once a spare replacement is initiated, a new
.Sy spare
vdev is created within the configuration that will remain there until the
original device is replaced. At this point, the hot spare becomes available
again if another device fails.
.Pp
If a pool has a shared spare that is currently being used, the pool cannot be
exported since other pools may use this shared spare, which may lead to
potential data corruption.
.Pp
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration, and is removed from the spare list of all active
pools.
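.Pp
For example, an in-progress replacement by the hypothetical spare
.Pa c2d0
can be cancelled with:
.Bd -literal
# zpool detach pool c2d0
.Ed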
.Pp
Spares cannot replace log devices.
.Ss Intent Log
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous
transactions. For instance, databases often require their transactions to be on
stable storage devices when returning from a system call. NFS and other
applications can also use
.Xr fsync 3C
to ensure data stability. By default, the intent log is allocated from blocks
within the main pool. However, it might be possible to get better performance
using separate intent log devices such as NVRAM or a dedicated disk. For
example:
.Bd -literal
# zpool create pool c0d0 c1d0 log c2d0
.Ed
.Pp
Multiple log devices can also be specified, and they can be mirrored. See the
.Sx EXAMPLES
section for an example of mirroring multiple log devices.
.Pp
Log devices can be added, replaced, attached, detached, and imported and
exported as part of the larger pool. Mirrored log devices can be removed by
specifying the top-level mirror for the log.
.Ss Cache Devices
Devices can be added to a storage pool as
.Qq cache devices .
These devices provide an additional layer of caching between main memory and
disk. For read-heavy workloads, where the working set size is much larger than
what can be cached in main memory, using cache devices allows much more of this
working set to be served from low latency media. Using cache devices provides
the greatest performance improvement for random read-workloads of mostly static
content.
.Pp
To create a pool with cache devices, specify a
.Sy cache
vdev with any number of devices. For example:
.Bd -literal
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
.Ed
.Pp
Cache devices cannot be mirrored or part of a raidz configuration. If a read
error is encountered on a cache device, that read I/O is reissued to the
original storage pool device, which might be part of a mirrored or raidz
configuration.
.Pp
The content of the cache devices is considered volatile, as is the case with
other system caches.
.Ss Properties
Each pool has several properties associated with it. Some properties are
read-only statistics while others are configurable and change the behavior of
the pool.
.Pp
The following are read-only properties:
.Bl -tag -width Ds
.It Sy available
Amount of storage available within the pool. This property can also be referred
to by its shortened column name,
.Sy avail .
.It Sy capacity
Percentage of pool space used. This property can also be referred to by its
shortened column name,
.Sy cap .
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.  Uninitialized space consists of
any space on an EFI labeled vdev which has not been brought online
.Po e.g., using
.Nm zpool Cm online Fl e
.Pc .
This space occurs when a LUN is dynamically expanded.
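.Pp
For example, the following command brings the expanded capacity of a
hypothetical device online:
.Bd -literal
# zpool online -e pool c0t0d0
.Ed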
.It Sy fragmentation
The amount of fragmentation in the pool.
.It Sy free
The amount of free space available in the pool.
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed. Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy health
The current health of the pool. Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy guid
A unique identifier for the pool.
.It Sy size
Total size of the storage pool.
.It Sy unsupported@ Ns Em feature_guid
Information about unsupported features that are enabled on the pool. See
.Xr zpool-features 5
for details.
.It Sy used
Amount of storage space used within the pool.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool. The physical space can be different from the total amount of
space that any contained datasets can actually use. The amount of space used in
a raidz configuration depends on the characteristics of the data being
written. In addition, ZFS reserves some space for internal accounting
that the
.Xr zfs 1M
command takes into account, but the
.Nm
command does not. For non-full pools of a reasonable size, these effects should
be invisible. For small pools, or pools that are close to being completely
full, these discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory. If set, this directory is prepended to any mount
points within the pool. This can be used when examining an unknown pool where
the mount points cannot be trusted, or in an alternate boot environment, where
the typical paths are not valid.
.Sy altroot
is not a persistent property. It is valid only while the system is up. Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
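.Pp
For example, an unknown pool can be examined under an alternate root with a
command such as:
.Bd -literal
# zpool import -R /a pool
.Ed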
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode. This property can also be referred
to by its shortened column name,
.Sy rdonly .
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown. If set to
.Sy on ,
the pool will be resized according to the size of the expanded device. If the
device is part of a mirror or raidz then all devices within that mirror/raidz
group must be expanded before the new space is made available to the pool. The
default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement. If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command. If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced. The default
behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
.It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
Identifies the default bootable dataset for the root pool. This property is
expected to be set mainly by the installation and upgrade programs.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls where the pool configuration is cached. Discovering
all pools on system startup requires a cached copy of the configuration data
that is stored on the root file system. All pools in this cache are
automatically imported when the system boots. Some environments, such as
install and clustering, need to cache this information in a different location
so that pools are not automatically imported. Setting this property caches the
pool configuration in a different location that can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the special value
.Sy none
creates a temporary pool that is never cached, and the special value
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file. Because the kernel destroys and
recreates this file when pools are added and removed, care should be taken when
attempting to access this file. When the last pool using a
.Sy cachefile
is exported or destroyed, the file is removed.
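.Pp
For example, the following commands cache a pool's configuration in an
alternate file
.Pq path illustrative
and later import from that cache:
.Bd -literal
# zpool set cachefile=/etc/zfs/alternate.cache pool
# zpool import -c /etc/zfs/alternate.cache pool
.Ed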
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.  An administrator
can provide additional information about a pool using this property.
.It Sy dedupditto Ns = Ns Ar number
Threshold for the number of block ditto copies. If the reference count for a
deduplicated block increases above this number, a new ditto copy of this block
is automatically stored. The default setting is
.Sy 0
which causes no ditto copies to be created for deduplicated blocks. The minimum
legal nonzero setting is
.Sy 100 .
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset. See
.Xr zfs 1M
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure. This
condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool. The behavior of
such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared. This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices. Any write requests that have yet to be committed to disk would be
blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state. See
.Xr zpool-features 5
for details on feature states.
.It Sy listsnaps Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option. The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool. This can be increased, but never
decreased. The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility. Once feature flags are enabled on a pool, this
property will no longer have a value.
.El
.Ss Subcommands
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools. The
following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl \?
.Xc
Displays a help message.
.It Xo
.Nm
.Cm add
.Op Fl fn
.Ar pool vdev Ns ...
.Xc
Adds the specified virtual devices to the given pool. The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section. The behavior of the
.Fl f
option, and the device checks performed are described in the
.Nm zpool Cm create
subcommand.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level. Not all
devices can be overridden in this manner.
.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s .
The actual pool creation can still fail due to insufficient privileges or
device sharing.
.El
.It Xo
.Nm
.Cm attach
.Op Fl f
.Ar pool device new_device
.Xc
Attaches
.Ar new_device
to the existing
.Ar device .
The existing device cannot be part of a raidz configuration. If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on. In either case,
.Ar new_device
begins to resilver immediately.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use. Not all devices can be overridden in this
manner.
.El
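.Pp
For example, the following command converts a single-disk pool into a two-way
mirror by attaching a second disk
.Pq device names are illustrative :
.Bd -literal
# zpool attach tank c0t0d0 c0t1d0
.Ed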
.It Xo
.Nm
.Cm clear
.Ar pool
.Op Ar device
.Xc
Clears device errors in a pool. If no arguments are specified, all device
errors within the pool are cleared. If one or more devices is specified, only
those errors associated with the specified device or devices are cleared.
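.Pp
For example, the following command clears the errors associated with a single
hypothetical device:
.Bd -literal
# zpool clear tank c0t0d0
.Ed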
.It Xo
.Nm
.Cm create
.Op Fl dfn
.Op Fl m Ar mountpoint
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool vdev Ns ...
.Xc
Creates a new storage pool containing the virtual devices specified on the
command line. The pool name must begin with a letter, and can only contain
alphanumeric characters as well as underscore
.Pq Qq Sy _ ,
dash
.Pq Qq Sy - ,
and period
.Pq Qq Sy \&. .
The pool names
.Sy mirror ,
.Sy raidz ,
.Sy spare
and
.Sy log
are reserved, as are names beginning with the pattern
.Sy c[0-9] .
The
.Ar vdev
specification is described in the
.Sx Virtual Devices
section.
.Pp
The command verifies that each device specified is accessible and not currently
in use by another subsystem. There are some uses, such as being currently
mounted, or specified as the dedicated dump device, that prevent a device from
ever being used by ZFS. Other uses, such as having a preexisting UFS file
system, can be overridden with the
.Fl f
option.
.Pp
The command also checks that the replication strategy for the pool is
consistent. An attempt to combine redundant and non-redundant storage in a
single pool, or to mix disks and files, results in an error unless
.Fl f
is specified. The use of differently sized devices within a single raidz or
mirror group is also flagged as an error unless
.Fl f
is specified.
.Pp
Unless the
.Fl R
option is specified, the default mount point is
.Pa / Ns Ar pool .
The mount point must not exist or must be empty, or else the root dataset
cannot be mounted. This can be overridden with the
.Fl m
option.
.Pp
By default all supported features are enabled on the new pool unless the
.Fl d
option is specified.
.Bl -tag -width Ds
.It Fl d
Do not enable any features on the new pool. Individual features can be enabled
by setting their corresponding properties to
.Sy enabled
with the
.Fl o
option. See
.Xr zpool-features 5
for details about feature properties.
.It Fl f
Forces use of
.Ar vdev Ns s ,
even if they appear in use or specify a conflicting replication level. Not all
devices can be overridden in this manner.
.It Fl m Ar mountpoint
Sets the mount point for the root dataset. The default mount point is
.Pa /pool
or
.Pa altroot/pool
if
.Ar altroot
is specified. The mount point must be an absolute path,
.Sy legacy ,
or
.Sy none .
For more information on dataset mount points, see
.Xr zfs 1M .
.It Fl n
Displays the configuration that would be used without actually creating the
pool. The actual pool creation can still fail due to insufficient privileges or
device sharing.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties. See the
.Sx Properties
section for a list of valid properties that can be set.
.It Fl O Ar file-system-property Ns = Ns Ar value
Sets the given file system properties in the root file system of the pool. See
the
.Sx Properties
section of
.Xr zfs 1M
for a list of valid properties that can be set.
.It Fl R Ar root
Equivalent to
.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
.El
.It Xo
.Nm
.Cm destroy
.Op Fl f
.Ar pool
.Xc
Destroys the given pool, freeing up any devices for other use. This command
tries to unmount any active datasets before destroying the pool.
.Bl -tag -width Ds
.It Fl f
Forces any active datasets contained within the pool to be unmounted.
.El
.It Xo
.Nm
.Cm detach
.Ar pool device
.Xc
Detaches
.Ar device
from a mirror. The operation is refused if there are no other valid replicas of
the data.
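.Pp
For example, the following command detaches one side of a hypothetical two-way
mirror:
.Bd -literal
# zpool detach tank c0t1d0
.Ed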
.It Xo
.Nm
.Cm export
.Op Fl f
.Ar pool Ns ...
.Xc
Exports the given pools from the system. All devices are marked as exported,
but are still considered in use by other subsystems. The devices can be moved
between systems
.Pq even those of different endianness
and imported as long as a sufficient number of devices are present.
.Pp
Before exporting the pool, all datasets within the pool are unmounted. A pool
cannot be exported if it has a shared spare that is currently being used.
.Pp
For pools to be portable, you must give the
.Nm
command whole disks, not just slices, so that ZFS can label the disks with
portable EFI labels. Otherwise, disk drivers on platforms of different
endianness will not recognize the disks.
.Bl -tag -width Ds
.It Fl f
Forcefully unmount all datasets, using the
.Nm unmount Fl f
command.
.Pp
This command will forcefully export the pool even if it has a shared spare that
is currently being used. This may lead to potential data corruption.
.El
.It Xo
.Nm
.Cm get
.Op Fl Hp
.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns ...
.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns ...
.Ar pool Ns ...
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s). These properties are displayed with
the following fields:
.Bd -literal
        name          Name of storage pool
        property      Property name
        value         Property value
        source        Property source, either 'default' or 'local'.
.Ed
.Pp
See the
.Sx Properties
section for more information on the available pool properties.
.Bl -tag -width Ds
.It Fl H
Scripted mode. Do not display headers, and separate fields by a single tab
instead of arbitrary space.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns , Ns Sy property Ns , Ns Sy value Ns , Ns Sy source
is the default value.
.It Fl p
Display numbers in parsable (exact) values.
.El
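.Pp
For example, the following script-friendly command prints only the value of a
single property:
.Bd -literal
# zpool get -H -o value health tank
.Ed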
.It Xo
.Nm
.Cm history
.Op Fl il
.Oo Ar pool Oc Ns ...
.Xc
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.Bl -tag -width Ds
.It Fl i
Displays internally logged ZFS events in addition to user initiated events.
.It Fl l
Displays log records in long format, which in addition to the standard format
includes the user name, the hostname, and the zone in which the operation was
performed.
.El
.It Xo
.Nm
.Cm import
.Op Fl D
.Op Fl d Ar dir
.Xc
Lists pools available to import. If the
.Fl d
option is not specified, this command searches for devices in
.Pa /dev/dsk .
The
.Fl d
option can be specified multiple times, and all directories are searched. If the
device appears to be part of an exported pool, this command displays a summary
of the pool with the name of the pool, a numeric identifier, as well as the vdev
layout and current health of the device for each device or file. Destroyed
pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, are not listed unless the
.Fl D
option is specified.
.Pp
The numeric identifier is unique, and can be used instead of the pool name when
multiple exported pools of the same name are available.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property. This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times.
.It Fl D
Lists destroyed pools only.
.El
.It Xo
.Nm
.Cm import
.Fl a
.Op Fl DfmN
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Xc
Imports all pools found in the search directories. Identical to the previous
command, except that all pools with a sufficient number of devices available are
imported. Destroyed pools, pools that were previously destroyed with the
.Nm zpool Cm destroy
command, will not be imported unless the
.Fl D
option is specified.
.Bl -tag -width Ds
.It Fl a
Searches for and imports all pools found.
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property. This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times. This option is incompatible with the
.Fl c
option.
.It Fl D
Imports destroyed pools only. The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool. Attempt to return the pool to an
importable state by discarding the last few transactions. Not all damaged pools
can be recovered by using this option. If successful, the data from the
discarded transactions is irretrievably lost. This option is ignored if the pool
is importable or already imported.
.It Fl m
Allows a pool to be imported when there is a missing log device. Recent transactions
can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option. Determines whether a non-importable pool can be made importable
again, but does not actually perform the pool recovery. For more details about
pool recovery mode, see the
.Fl F
option, above.
.It Fl N
Import the pool without mounting any file systems.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool. See
.Xr zfs 1M
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool. See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.El
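.Pp
For example, the following command imports all pools whose devices are found in
an alternate directory
.Pq path illustrative :
.Bd -literal
# zpool import -d /mydevs -a
.Ed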
.It Xo
.Nm
.Cm import
.Op Fl Dfm
.Op Fl F Op Fl n
.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir
.Op Fl o Ar mntopts
.Oo Fl o Ar property Ns = Ns Ar value Oc Ns ...
.Op Fl R Ar root
.Ar pool Ns | Ns Ar id
.Op Ar newpool
.Xc
Imports a specific pool. A pool can be identified by its name or the numeric
identifier. If
.Ar newpool
is specified, the pool is imported using the name
.Ar newpool .
Otherwise, it is imported with the same name as its exported name.
.Pp
If a device is removed from a system without running
.Nm zpool Cm export
first, the device appears as potentially active. It cannot be determined if
this was a failed export, or whether the device is really in use from another
host. To import a pool in this state, the
.Fl f
option is required.
.Bl -tag -width Ds
.It Fl c Ar cachefile
Reads configuration from the given
.Ar cachefile
that was created with the
.Sy cachefile
pool property. This
.Ar cachefile
is used instead of searching for devices.
.It Fl d Ar dir
Searches for devices or files in
.Ar dir .
The
.Fl d
option can be specified multiple times. This option is incompatible with the
.Fl c
option.
.It Fl D
Imports a destroyed pool. The
.Fl f
option is also required.
.It Fl f
Forces import, even if the pool appears to be potentially active.
.It Fl F
Recovery mode for a non-importable pool. Attempt to return the pool to an
importable state by discarding the last few transactions. Not all damaged pools
can be recovered by using this option. If successful, the data from the
discarded transactions is irretrievably lost. This option is ignored if the pool
is importable or already imported.
.It Fl m
Allows a pool to be imported when there is a missing log device. Recent transactions
can be lost because the log device will be discarded.
.It Fl n
Used with the
.Fl F
recovery option. Determines whether a non-importable pool can be made importable
again, but does not actually perform the pool recovery. For more details about
pool recovery mode, see the
.Fl F
option, above.
.It Fl o Ar mntopts
Comma-separated list of mount options to use when mounting datasets within the
pool. See
.Xr zfs 1M
for a description of dataset properties and mount options.
.It Fl o Ar property Ns = Ns Ar value
Sets the specified property on the imported pool. See the
.Sx Properties
section for more information on the available pool properties.
.It Fl R Ar root
Sets the
.Sy cachefile
property to
.Sy none
and the
.Sy altroot
property to
.Ar root .
.El
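.Pp
For example, the following command imports the exported pool
.Em tank
under the new name
.Em newtank :
.Bd -literal
# zpool import tank newtank
.Ed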
.It Xo
.Nm
.Cm iostat
.Op Fl v
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays I/O statistics for the given pools. When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed. If no
.Ar pool Ns s
are specified, statistics for every pool in the system are shown. If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
.Bl -tag -width Ds
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp. Specify
.Sy u
for a printed representation of the internal representation of time. See
.Xr time 2 .
Specify
.Sy d
for standard date format. See
.Xr date 1 .
.It Fl v
Verbose statistics. Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.El
.It Xo
.Nm
.Cm list
.Op Fl Hpv
.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns ...
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Lists the given pools along with a health status and space usage. If no
.Ar pool Ns s
are specified, all pools in the system are listed. When given an
.Ar interval ,
the information is printed every
.Ar interval
seconds until ^C is pressed. If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
.Bl -tag -width Ds
.It Fl H
Scripted mode. Do not display headers, and separate fields by a single tab
instead of arbitrary space.
.It Fl o Ar property
Comma-separated list of properties to display. See the
.Sx Properties
section for a list of valid properties. The default list is
.Sy name , size , used , available , fragmentation , expandsize , capacity ,
.Sy dedupratio , health , altroot .
.It Fl p
Display numbers in parsable
.Pq exact
values.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp. Specify
.Sy u
for a printed representation of the internal representation of time. See
.Xr time 2 .
Specify
.Sy d
for standard date format. See
.Xr date 1 .
.It Fl v
Verbose statistics. Reports usage statistics for individual vdevs within the
pool, in addition to the pool-wide statistics.
.El
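.Pp
For example, the following script-friendly command lists only pool names and
sizes:
.Bd -literal
# zpool list -H -o name,size
.Ed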
.It Xo
.Nm
.Cm offline
.Op Fl t
.Ar pool Ar device Ns ...
.Xc
Takes the specified physical device offline. While the
.Ar device
is offline, no attempt is made to read or write to the device. This command is
not applicable to spares.
.Bl -tag -width Ds
.It Fl t
Temporary. Upon reboot, the specified physical device reverts to its previous
state.
.El
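.Pp
For example, the following command takes a hypothetical device offline until
the next reboot:
.Bd -literal
# zpool offline -t tank c0t0d0
.Ed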
.It Xo
.Nm
.Cm online
.Op Fl e
.Ar pool Ar device Ns ...
.Xc
Brings the specified physical device online. This command is not applicable to
spares.
.Bl -tag -width Ds
.It Fl e
Expand the device to use all available space. If the device is part of a mirror
or raidz then all devices must be expanded before the new space will become
available to the pool.
.El
.It Xo
.Nm
.Cm reguid
.Ar pool
.Xc
Generates a new unique identifier for the pool. You must ensure that all devices
in this pool are online and healthy before performing this action.
.It Xo
.Nm
.Cm reopen
.Ar pool
.Xc
Reopens all the vdevs associated with the pool.
.It Xo
.Nm
.Cm remove
.Ar pool Ar device Ns ...
.Xc
Removes the specified device from the pool. This command currently only supports
removing hot spares, cache, and log devices. A mirrored log device can be
removed by specifying the top-level mirror for the log. Non-log devices that are
part of a mirrored configuration can be removed using the
.Nm zpool Cm detach
command. Non-redundant and raidz devices cannot be removed from a pool.
.It Xo
.Nm
.Cm replace
.Op Fl f
.Ar pool Ar device Op Ar new_device
.Xc
Replaces
.Ar device
with
.Ar new_device .
This is equivalent to attaching
.Ar new_device ,
waiting for it to resilver, and then detaching
.Ar device .
.Pp
The size of
.Ar new_device
must be greater than or equal to the minimum size of all the devices in a mirror
or raidz configuration.
.Pp
.Ar new_device
is required if the pool is not redundant. If
.Ar new_device
is not specified, it defaults to
.Ar device .
This form of replacement is useful after an existing disk has failed and has
been physically replaced. In this case, the new disk may have the same
.Pa /dev/dsk
path as the old device, even though it is actually a different disk. ZFS
recognizes this.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use. Not all devices can be overridden in this
manner.
.El
.It Xo
.Nm
.Cm scrub
.Op Fl s
.Ar pool Ns ...
.Xc
Begins a scrub. The scrub examines all data in the specified pools to verify
that it checksums correctly. For replicated
.Pq mirror or raidz
devices, ZFS automatically repairs any damage discovered during the scrub. The
.Nm zpool Cm status
command reports the progress of the scrub and summarizes the results of the
scrub upon completion.
.Pp
Scrubbing and resilvering are very similar operations. The difference is that
resilvering only examines data that ZFS knows to be out of date
.Po
for example, when attaching a new device to a mirror or replacing an existing
device
.Pc ,
whereas scrubbing examines all data to discover silent errors due to hardware
faults or disk failure.
.Pp
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
one at a time. If a scrub is already in progress, the
.Nm zpool Cm scrub
command terminates it and starts a new scrub. If a resilver is in progress, ZFS
does not allow a scrub to be started until the resilver completes.
.Bl -tag -width Ds
.It Fl s
Stop scrubbing.
.El
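.Pp
For example, a scrub can be started, and its progress then checked, with:
.Bd -literal
# zpool scrub tank
# zpool status tank
.Ed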
.It Xo
.Nm
.Cm set
.Ar property Ns = Ns Ar value
.Ar pool
.Xc
Sets the given property on the specified pool. See the
.Sx Properties
section for more information on what properties can be set and acceptable
values.
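.Pp
For example, the following command enables automatic pool expansion:
.Bd -literal
# zpool set autoexpand=on tank
.Ed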
.It Xo
.Nm
.Cm status
.Op Fl Dvx
.Op Fl T Sy u Ns | Ns Sy d
.Oo Ar pool Oc Ns ...
.Op Ar interval Op Ar count
.Xc
Displays the detailed health status for the given pools. If no
.Ar pool
is specified, then the status of each pool in the system is displayed. For more
information on pool and device health, see the
.Sx Device Failure and Recovery
section.
.Pp
If a scrub or resilver is in progress, this command reports the percentage done
and the estimated time to completion. Both of these are only approximate,
because the amount of data in the pool and the other workloads on the system can
change.
.Bl -tag -width Ds
.It Fl D
Display a histogram of deduplication statistics, showing the allocated
.Pq physically present on disk
and referenced
.Pq logically referenced in the pool
block counts and sizes by reference count.
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp. Specify
.Sy u
for a printed representation of the internal representation of time. See
.Xr time 2 .
Specify
.Sy d
for standard date format. See
.Xr date 1 .
.It Fl v
Displays verbose data error information, printing out a complete list of all
data errors since the last complete pool scrub.
.It Fl x
Only display status for pools that are exhibiting errors or are otherwise
unavailable. Warnings about pools not using the latest on-disk format will not
be included.
.El
.It Xo
.Nm
.Cm upgrade
.Xc
Displays pools which do not have all supported features enabled and pools
formatted using a legacy ZFS version number. These pools can continue to be
used, but some features may not be available. Use
.Nm zpool Cm upgrade Fl a
to enable all features on all pools.
.It Xo
.Nm
.Cm upgrade
.Fl v
.Xc
Displays legacy ZFS versions supported by the current software. See
.Xr zpool-features 5
for a description of feature flags features supported by the current software.
.It Xo
.Nm
.Cm upgrade
.Op Fl V Ar version
.Fl a Ns | Ns Ar pool Ns ...
.Xc
Enables all supported features on the given pool. Once this is done, the pool
will no longer be accessible on systems that do not support feature flags. See
.Xr zpool-features 5
for details on compatibility with systems that support feature flags, but do not
support all features enabled on the pool.
.Bl -tag -width Ds
.It Fl a
Enables all supported features on all pools.
.It Fl V Ar version
Upgrade to the specified legacy version. If the
.Fl V
flag is specified, no features will be enabled on the pool. This option can only
be used to increase the version number up to the last supported legacy version
number.
.El
.El
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -width Ds
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.Sh EXAMPLES
.Bl -tag -width Ds
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
.Bd -literal
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
.Ed
.It Sy Example 2 No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
.Bd -literal
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
.Ed
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Slices
The following command creates an unmirrored pool using two disk slices.
.Bd -literal
# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
.Ed
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files. While not
recommended, a pool based on files can be useful for experimental purposes.
.Bd -literal
# zpool create tank /path/to/file/a /path/to/file/b
.Ed
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Em tank ,
assuming the pool is already made up of two-way mirrors. The additional space
is immediately available to any datasets within the pool.
.Bd -literal
# zpool add tank mirror c1t0d0 c1t1d0
.Ed
.It Sy Example 6 No Listing Available ZFS Storage Pools
The following command lists all available pools on the system. In this case,
the pool
.Em zion
is faulted due to a missing device. The results from this command are similar
to the following:
.Bd -literal
# zpool list
NAME    SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G    33%         -    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G    48%         -    32%  1.00x  ONLINE  -
zion       -      -      -      -         -      -      -  FAULTED -
.Ed
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
.Em tank
and any datasets contained within.
.Bd -literal
# zpool destroy -f tank
.Ed
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Em tank
so that they can be relocated or later imported.
.Bd -literal
# zpool export tank
.Ed
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Em tank
for use on the system. The results from this command are similar to the
following:
.Bd -literal
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            c1t2d0  ONLINE
            c1t3d0  ONLINE

# zpool import tank
.Ed
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of
the software.
.Bd -literal
# zpool upgrade -a
This system is currently running ZFS version 2.
.Ed
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Bd -literal
# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
.Ed
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state. The failed device can be replaced using the following command:
.Bd -literal
# zpool replace tank c0t0d0 c0t3d0
.Ed
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available should another device fail. The hot spare can be permanently
removed from the pool using the following command:
.Bd -literal
# zpool remove tank c0t2d0
.Ed
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Bd -literal
# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \e
  c4d0 c5d0
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache c2d0 c3d0
.Ed
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill. Capacity and reads can be monitored using the
.Cm iostat
option as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored Log Device
The following command removes the mirrored log device
.Sy mirror-2 .
Given this configuration:
.Bd -literal
  pool: tank
 state: ONLINE
 scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             c6t0d0  ONLINE       0     0     0
             c6t1d0  ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             c6t2d0  ONLINE       0     0     0
             c6t3d0  ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             c4t0d0  ONLINE       0     0     0
             c4t1d0  ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.It Sy Example 15 No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Em data .
This pool consists of a single raidz vdev where one of its devices
increased its capacity by 10GB. In this example, the pool will not be able to
utilize this extra capacity until all the devices under the raidz vdev have
been expanded.
.Bd -literal
# zpool list -v data
NAME         SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G    48%         -
    c1t1d0      -      -      -      -         -
    c1t2d0      -      -      -      -       10G
    c1t3d0      -      -      -      -         -
.Ed
.El
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
.Xr zfs 1M ,
.Xr attributes 5 ,
.Xr zpool-features 5