.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd March 16, 2022
.Dt ZPOOL 8
.Os
.
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?V
.Nm
.Cm version
.Nm
.Cm subcommand
.Op Ar arguments
.
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Pp
For an overview of creating and managing ZFS storage pools, see the
.Xr zpoolconcepts 7
manual page.
.
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?\&
.Xc
Displays a help message.
.It Xo
.Nm
.Fl V , -version
.Xc
.It Xo
.Nm
.Cm version
.Xc
Displays the software version of the
.Nm
userland utility and the ZFS kernel module.
.El
.
.Ss Creation
.Bl -tag -width Ds
.It Xr zpool-create 8
Creates a new storage pool containing the virtual devices specified on the
command line.
.It Xr zpool-initialize 8
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.El
.
.Ss Destruction
.Bl -tag -width Ds
.It Xr zpool-destroy 8
Destroys the given pool, freeing up any devices for other use.
.It Xr zpool-labelclear 8
Removes ZFS label information from the specified
.Ar device .
.El
.
.Ss Virtual Devices
.Bl -tag -width Ds
.It Xo
.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
.Xc
Converts a non-redundant disk into a mirror, or increases
the redundancy level of an existing mirror
.Cm ( attach Ns ), or performs the inverse operation (
.Cm detach Ns ).
.It Xo
.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
.Xc
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which decreases
its redundancy).
.El
.
.Ss Properties
Available pool properties are listed in the
.Xr zpoolprops 7
manual page.
.Bl -tag -width Ds
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.It Xo
.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
.El
.
.Ss Monitoring
.Bl -tag -width Ds
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O operations may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
daemon and used to automate administrative tasks such as replacing a failed
device with a hot spare.
That manual page also describes the subclasses and event payloads
that can be generated.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.El
.
.Ss Maintenance
.Bl -tag -width Ds
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool ,
which can later be restored by
.Nm zpool Cm import Fl -rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in a
pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.It Xr zpool-sync 8
Forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manages the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.El
.
.Ss Fault Resolution
.Bl -tag -width Ds
.It Xo
.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
.Xc
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver.
If an existing resilver is already running, it will be restarted from the
beginning.
.It Xr zpool-reopen 8
Reopens all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.El
.
.Ss Import & Export
.Bl -tag -width Ds
.It Xr zpool-import 8
Makes disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
.El
.
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.
.Sh EXAMPLES
.\" Examples 1, 2, 3, 4, 12, 13 are shared with zpool-create.8.
.\" Examples 6, 14 are shared with zpool-add.8.
.\" Examples 7, 16 are shared with zpool-list.8.
.\" Example 8 is shared with zpool-destroy.8.
.\" Example 9 is shared with zpool-export.8.
.\" Example 10 is shared with zpool-import.8.
.\" Example 11 is shared with zpool-upgrade.8.
.\" Example 15 is shared with zpool-remove.8.
.\" Example 17 is shared with zpool-status.8.
.\" Examples 14, 17 are also shared with zpool-iostat.8.
.\" Make sure to update them omnidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
.
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
.
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
.
.Ss Example 5 : No Making a non-mirrored ZFS Storage Pool mirrored
The following command converts an existing single device
.Ar sda
into a mirror by attaching a second device to it,
.Ar sdb .
.Dl # Nm zpool Cm attach Ar tank Pa sda sdb
.
.Ss Example 6 : No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
.
.Ss Example 7 : No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.
.Ss Example 8 : No Destroying a ZFS Storage Pool
The following command destroys the pool
.Ar tank
and any datasets contained within:
.Dl # Nm zpool Cm destroy Fl f Ar tank
.
.Ss Example 9 : No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Ar tank
so that they can be relocated or later imported:
.Dl # Nm zpool Cm export Ar tank
.
.Ss Example 10 : No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Ar tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

.No # Nm zpool Cm import Ar tank
.Ed
.
.Ss Example 11 : No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm upgrade Fl a
This system is currently running ZFS version 2.
.Ed
.
.Ss Example 12 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Dl # Nm zpool Cm replace Ar tank Pa sda sdd
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Dl # Nm zpool Cm remove Ar tank Pa sdc
.
.Ss Example 13 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.
.Ss Example 14 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.
.Ss Example 15 : No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
 scrub: none requested
config:

         NAME        STATE     READ WRITE CKSUM
         tank        ONLINE       0     0     0
           mirror-0  ONLINE       0     0     0
             sda     ONLINE       0     0     0
             sdb     ONLINE       0     0     0
           mirror-1  ONLINE       0     0     0
             sdc     ONLINE       0     0     0
             sdd     ONLINE       0     0     0
         logs
           mirror-2  ONLINE       0     0     0
             sde     ONLINE       0     0     0
             sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Ar mirror-2 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
The command to remove the mirrored data
.Ar mirror-1 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-1
.
.Ss Example 16 : No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Ar data .
This pool consists of a single raidz vdev where one of its devices
increased its capacity by 10 GiB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list Fl v Ar data
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data      23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1  23.9G  14.6G  9.30G         -    48%
    sda       -      -      -         -      -
    sdb       -      -      -       10G      -
    sdc       -      -      -         -      -
.Ed
.
.Ss Example 17 : No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

.No # Nm zpool Cm iostat Fl vc Pa size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -compact -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
.It Sy ZFS_ABORT
Cause
.Nm
to dump core on exit for the purposes of running
.Sy ::findleaks .
.It Sy ZFS_COLOR
Use ANSI color in
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output.
.It Sy ZPOOL_AUTO_POWER_ON_SLOT
Automatically attempt to turn on a drive's enclosure slot power when
running the
.Nm zpool Cm online
or
.Nm zpool Cm clear
commands.
This has the same effect as passing the
.Fl -power
option to those commands.
.It Sy ZPOOL_POWER_ON_SLOT_TIMEOUT_MS
The maximum time in milliseconds to wait for a slot power sysfs value
to return the correct value after writing it.
For example, after writing "on" to the sysfs enclosure slot power_control file,
it can take some time for the enclosure to power up the slot and return
"on" if you read back the power_control value.
Defaults to 30 seconds (30000 ms) if not set.
.It Sy ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool Cm import .
.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool Cm import
will wait for an expected device to be available.
.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress the warning about non-native vdev ashift in
.Nm zpool Cm status .
The value is not used; only the presence or absence of the variable matters.
.It Sy ZPOOL_VDEV_NAME_GUID
Cause
.Nm
subcommands to output vdev GUIDs by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.It Sy ZPOOL_VDEV_NAME_PATH
Cause
.Nm
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.It Sy ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a
.Sy devid
value in the config and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding them on
.Nm zpool Cm create
or
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.It Sy ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
Normally, only unprivileged users are allowed to run
.Fl c .
.It Sy ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.It Sy ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
.\" Shared with zfs.8
.It Sy ZFS_MODULE_TIMEOUT
Time, in seconds, to wait for
.Pa /dev/zfs
to appear.
Defaults to
.Sy 10 ,
max
.Sy 600 Pq 10 minutes .
If
.Pf < Sy 0 ,
wait forever; if
.Sy 0 ,
don't wait.
.El
.
.Sh INTERFACE STABILITY
.Sy Evolving
.
.Sh SEE ALSO
.Xr zfs 4 ,
.Xr zpool-features 7 ,
.Xr zpoolconcepts 7 ,
.Xr zpoolprops 7 ,
.Xr zed 8 ,
.Xr zfs 8 ,
.Xr zpool-add 8 ,
.Xr zpool-attach 8 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-clear 8 ,
.Xr zpool-create 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-events 8 ,
.Xr zpool-export 8 ,
.Xr zpool-get 8 ,
.Xr zpool-history 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-iostat 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-list 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-online 8 ,
.Xr zpool-reguid 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-reopen 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-set 8 ,
.Xr zpool-split 8 ,
.Xr zpool-status 8 ,
.Xr zpool-sync 8 ,
.Xr zpool-trim 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpool-wait 8