1.\" 2.\" CDDL HEADER START 3.\" 4.\" The contents of this file are subject to the terms of the 5.\" Common Development and Distribution License (the "License"). 6.\" You may not use this file except in compliance with the License. 7.\" 8.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9.\" or http://www.opensolaris.org/os/licensing. 10.\" See the License for the specific language governing permissions 11.\" and limitations under the License. 12.\" 13.\" When distributing Covered Code, include this CDDL HEADER in each 14.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. 15.\" If applicable, add the following below this CDDL HEADER, with the 16.\" fields enclosed by brackets "[]" replaced with your own identifying 17.\" information: Portions Copyright [yyyy] [name of copyright owner] 18.\" 19.\" CDDL HEADER END 20.\" 21.\" 22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. 23.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. 24.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. 25.\" Copyright (c) 2017 Datto Inc. 26.\" Copyright (c) 2018 George Melikov. All Rights Reserved. 27.\" Copyright 2017 Nexenta Systems, Inc. 28.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. 29.\" 30.Dd August 9, 2019 31.Dt ZPOOL 8 32.Os 33.Sh NAME 34.Nm zpool 35.Nd configure ZFS storage pools 36.Sh SYNOPSIS 37.Nm 38.Fl ?V 39.Nm 40.Cm version 41.Nm 42.Cm <subcommand> 43.Op Ar <args> 44.Sh DESCRIPTION 45The 46.Nm 47command configures ZFS storage pools. 48A storage pool is a collection of devices that provides physical storage and 49data replication for ZFS datasets. 50All datasets within a storage pool share the same space. 51See 52.Xr zfs 8 53for information on managing datasets. 54.Pp 55For an overview of creating and managing ZFS storage pools see the 56.Xr zpoolconcepts 8 57manual page. 58.Sh SUBCOMMANDS 59All subcommands that modify state are logged persistently to the pool in their 60original form. 61.Pp 62The 63.Nm 64command provides subcommands to create and destroy storage pools, add capacity 65to storage pools, and provide information about the storage pools. 66The following subcommands are supported: 67.Bl -tag -width Ds 68.It Xo 69.Nm 70.Fl ? 71.Xc 72Displays a help message. 73.It Xo 74.Nm 75.Fl V, -version 76.Xc 77An alias for the 78.Nm zpool Cm version 79subcommand. 80.It Xo 81.Nm 82.Cm version 83.Xc 84Displays the software version of the 85.Nm 86userland utility and the zfs kernel module. 87.El 88.Ss Creation 89.Bl -tag -width Ds 90.It Xr zpool-create 8 91Creates a new storage pool containing the virtual devices specified on the 92command line. 93.It Xr zpool-initialize 8 94Begins initializing by writing to all unallocated regions on the specified 95devices, or all eligible devices in the pool if no individual devices are 96specified. 97.El 98.Ss Destruction 99.Bl -tag -width Ds 100.It Xr zpool-destroy 8 101Destroys the given pool, freeing up any devices for other use. 102.It Xr zpool-labelclear 8 103Removes ZFS label information from the specified 104.Ar device . 105.El 106.Ss Virtual Devices 107.Bl -tag -width Ds 108.It Xo 109.Xr zpool-attach 8 / 110.Xr zpool-detach 8 111.Xc 112Increases or decreases redundancy by 113.Cm attach Ns -ing or 114.Cm detach Ns -ing a device on an existing vdev (virtual device). 115.It Xo 116.Xr zpool-add 8 / 117.Xr zpool-remove 8 118.Xc 119Adds the specified virtual devices to the given pool, 120or removes the specified device from the pool. 
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which
decreases its redundancy).
.El
.Ss Properties
Available pool properties are listed in the
.Xr zpoolprops 8
manual page.
.Bl -tag -width Ds
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.It Xo
.Xr zpool-get 8 /
.Xr zpool-set 8
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
.El
.Ss Monitoring
.Bl -tag -width Ds
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/Os may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare.
For more information about the subclasses and event payloads that can be
generated, see the
.Xr zfs-events 5
man page.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool
is specified.
.El
.Ss Maintenance
.Bl -tag -width Ds
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import --rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in
a pool.
This operation informs the underlying storage devices of all blocks in the
pool which are no longer allocated and allows thinly provisioned devices to
reclaim the space.
.It Xr zpool-sync 8
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Sy zpool sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manages the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.El
.Ss Fault Resolution
.Bl -tag -width Ds
.It Xo
.Xr zpool-offline 8 /
.Xr zpool-online 8
.Xc
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver.
If an existing resilver is already running it will be restarted from the
beginning.
.It Xr zpool-reopen 8
Reopens all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.El
.Ss Import & Export
.Bl -tag -width Ds
.It Xr zpool-import 8
Makes disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
.El
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -width Ds
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
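.Pp
For instance (a sketch; the long option below is deliberately invalid), the
exit status can be inspected from the shell:
.Bd -literal
# zpool list --invalid-option 2>/dev/null; echo $?
2
.Ed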
.El
.Sh EXAMPLES
.Bl -tag -width Ds
.It Sy Example 1 No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks.
.Bd -literal
# zpool create tank raidz sda sdb sdc sdd sde sdf
.Ed
.It Sy Example 2 No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks.
.Bd -literal
# zpool create tank mirror sda sdb mirror sdc sdd
.Ed
.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
The following command creates an unmirrored pool using two disk partitions.
.Bd -literal
# zpool create tank sda1 sdb2
.Ed
.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Bd -literal
# zpool create tank /path/to/file/a /path/to/file/b
.Ed
.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Em tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Bd -literal
# zpool add tank mirror sda sdb
.Ed
.It Sy Example 6 No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Em zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.It Sy Example 7 No Destroying a ZFS Storage Pool
The following command destroys the pool
.Em tank
and any datasets contained within.
.Bd -literal
# zpool destroy -f tank
.Ed
.It Sy Example 8 No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Em tank
so that they can be relocated or later imported.
.Bd -literal
# zpool export tank
.Ed
.It Sy Example 9 No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Em tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal
# zpool import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

# zpool import tank
.Ed
.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software.
.Bd -literal
# zpool upgrade -a
This system is currently running ZFS version 2.
.Ed
.It Sy Example 11 No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Bd -literal
# zpool create tank mirror sda sdb spare sdc
.Ed
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Bd -literal
# zpool replace tank sda sdd
.Ed
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Bd -literal
# zpool remove tank sdc
.Ed
.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Bd -literal
# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
    sde sdf
.Ed
.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Bd -literal
# zpool add pool cache sdc sdd
.Ed
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Bd -literal
# zpool iostat -v pool 5
.Ed
.It Sy Example 14 No Removing a Mirrored Top-Level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal
   pool: tank
  state: ONLINE
  scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             sda     ONLINE       0     0     0
             sdb     ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             sdc     ONLINE       0     0     0
             sdd     ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             sde     ONLINE       0     0     0
             sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Sy mirror-2
is:
.Bd -literal
# zpool remove tank mirror-2
.Ed
.Pp
The command to remove the mirrored data
.Sy mirror-1
is:
.Bd -literal
# zpool remove tank mirror-1
.Ed
.It Sy Example 15 No Displaying Expanded Space on a Device
The following command displays the detailed information for the pool
.Em data .
This pool is composed of a single raidz vdev where one of its devices
increased its capacity by 10GB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal
# zpool list -v data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.It Sy Example 16 No Adding Output Columns
Additional columns can be added to the
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output with the
.Fl c
option.
.Bd -literal
# zpool status -c vendor,model,size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

# zpool iostat -vc size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.El
.Sh ENVIRONMENT VARIABLES
.Bl -tag -width "ZFS_ABORT"
.It Ev ZFS_ABORT
Cause
.Nm zpool
to dump core on exit for the purposes of running
.Sy ::findleaks .
.El
.Bl -tag -width "ZFS_COLOR"
.It Ev ZFS_COLOR
Use ANSI color in
.Nm zpool status
output.
.El
.Bl -tag -width "ZPOOL_IMPORT_PATH"
.It Ev ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm zpool
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
.El
.Bl -tag -width "ZPOOL_IMPORT_UDEV_TIMEOUT_MS"
.It Ev ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool import
will wait for an expected device to be available.
.El
.Bl -tag -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
.It Ev ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress the warning about non-native vdev ashift in
.Nm zpool status .
The value is not used; only the presence or absence of the variable matters.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
.It Ev ZPOOL_VDEV_NAME_GUID
Cause
.Nm zpool
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool status -g
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm zpool
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool status -L
command line option.
.El
.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
.It Ev ZPOOL_VDEV_NAME_PATH
Cause
.Nm zpool
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool status -P
command line option.
.El
.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
.It Ev ZFS_VDEV_DEVID_OPT_OUT
Older ZFS on Linux implementations had issues when attempting to display pool
config VDEV names if a
.Sy devid
NVP value was present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a devid
value in the config and
.Nm zpool status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding them on
.Nm zpool create
or
.Nm zpool add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.El
.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
.It Ev ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool status/iostat
with the
.Fl c
option.
Normally, only unprivileged users are allowed to run
.Fl c .
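.Pp
For instance, a root shell might enable the
.Fl c
scripts for a single invocation as follows (a sketch; the
.Sy size
script is assumed to be installed in the default
.Pa /etc/zfs/zpool.d
directory):
.Bd -literal
# ZPOOL_SCRIPTS_AS_ROOT=1 zpool status -c size
.Ed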
.El
.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
.It Ev ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool status/iostat
with the
.Fl c
option.
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.El
.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
.It Ev ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool status/iostat
with the
.Fl c
option.
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool status/iostat -c .
.El
.Sh INTERFACE STABILITY
.Sy Evolving
.Sh SEE ALSO
.Xr zfs-events 5 ,
.Xr zfs-module-parameters 5 ,
.Xr zpool-features 5 ,
.Xr zed 8 ,
.Xr zfs 8 ,
.Xr zpool-add 8 ,
.Xr zpool-attach 8 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-clear 8 ,
.Xr zpool-create 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-events 8 ,
.Xr zpool-export 8 ,
.Xr zpool-get 8 ,
.Xr zpool-history 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-iostat 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-list 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-online 8 ,
.Xr zpool-reguid 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-reopen 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-set 8 ,
.Xr zpool-split 8 ,
.Xr zpool-status 8 ,
.Xr zpool-sync 8 ,
.Xr zpool-trim 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpool-wait 8 ,
.Xr zpoolconcepts 8 ,
.Xr zpoolprops 8