.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd February 14, 2024
.Dt ZPOOL 8
.Os
.
.Sh NAME
.Nm zpool
.Nd configure ZFS storage pools
.Sh SYNOPSIS
.Nm
.Fl ?V
.Nm
.Cm version
.Op Fl j
.Nm
.Cm subcommand
.Op Ar arguments
.
.Sh DESCRIPTION
The
.Nm
command configures ZFS storage pools.
A storage pool is a collection of devices that provides physical storage and
data replication for ZFS datasets.
All datasets within a storage pool share the same space.
See
.Xr zfs 8
for information on managing datasets.
.Pp
For an overview of creating and managing ZFS storage pools see the
.Xr zpoolconcepts 7
manual page.
.
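As a brief illustration of the command form described above, a typical session
creates a pool, inspects its health, and destroys it when it is no longer
needed
.Pq the device names here are placeholders :
.Bd -literal -compact -offset Ds
# zpool create tank mirror sda sdb
# zpool status tank
# zpool destroy tank
.Ed
.Pp
Each of these subcommands is described in detail in its own manual page and in
the EXAMPLES section below.
.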
.Sh SUBCOMMANDS
All subcommands that modify state are logged persistently to the pool in their
original form.
.Pp
The
.Nm
command provides subcommands to create and destroy storage pools, add capacity
to storage pools, and provide information about the storage pools.
The following subcommands are supported:
.Bl -tag -width Ds
.It Xo
.Nm
.Fl ?\&
.Xc
Displays a help message.
.It Xo
.Nm
.Fl V , -version
.Xc
.It Xo
.Nm
.Cm version
.Op Fl j
.Xc
Displays the software version of the
.Nm
userland utility and the ZFS kernel module.
Use the
.Fl j
option to output in JSON format.
.El
.
.Ss Creation
.Bl -tag -width Ds
.It Xr zpool-create 8
Creates a new storage pool containing the virtual devices specified on the
command line.
.It Xr zpool-initialize 8
Begins initializing by writing to all unallocated regions on the specified
devices, or all eligible devices in the pool if no individual devices are
specified.
.El
.
.Ss Destruction
.Bl -tag -width Ds
.It Xr zpool-destroy 8
Destroys the given pool, freeing up any devices for other use.
.It Xr zpool-labelclear 8
Removes ZFS label information from the specified
.Ar device .
.El
.
.Ss Virtual Devices
.Bl -tag -width Ds
.It Xo
.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8
.Xc
Converts a non-redundant disk into a mirror, or increases
the redundancy level of an existing mirror
.Cm ( attach Ns ), or performs the inverse operation (
.Cm detach Ns ).
.It Xo
.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8
.Xc
Adds the specified virtual devices to the given pool,
or removes the specified device from the pool.
.It Xr zpool-replace 8
Replaces an existing device (which may be faulted) with a new one.
.It Xr zpool-split 8
Creates a new pool by splitting all mirrors in an existing pool (which decreases
its redundancy).
.El
.
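For example, a device attached to form a mirror can later be detached again,
returning the vdev to a single device
.Pq the device names here are placeholders :
.Bd -literal -compact -offset Ds
# zpool attach tank sda sdb
# zpool detach tank sdb
.Ed
.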
.Ss Properties
Available pool properties are listed in the
.Xr zpoolprops 7
manual page.
.Bl -tag -width Ds
.It Xr zpool-list 8
Lists the given pools along with a health status and space usage.
.It Xo
.Xr zpool-get 8 Ns / Ns Xr zpool-set 8
.Xc
Retrieves the given list of properties
.Po
or all properties if
.Sy all
is used
.Pc
for the specified storage pool(s).
.El
.
.Ss Monitoring
.Bl -tag -width Ds
.It Xr zpool-status 8
Displays the detailed health status for the given pools.
.It Xr zpool-iostat 8
Displays logical I/O statistics for the given pools/vdevs.
Physical I/O operations may be observed via
.Xr iostat 1 .
.It Xr zpool-events 8
Lists all recent events generated by the ZFS kernel modules.
These events are consumed by the
.Xr zed 8
and used to automate administrative tasks such as replacing a failed device
with a hot spare.
That manual page also describes the subclasses and event payloads
that can be generated.
.It Xr zpool-history 8
Displays the command history of the specified pool(s) or all pools if no pool is
specified.
.El
.
.Ss Maintenance
.Bl -tag -width Ds
.It Xr zpool-prefetch 8
Prefetches specific types of pool data.
.It Xr zpool-scrub 8
Begins a scrub or resumes a paused scrub.
.It Xr zpool-checkpoint 8
Checkpoints the current state of
.Ar pool ,
which can be later restored by
.Nm zpool Cm import Fl -rewind-to-checkpoint .
.It Xr zpool-trim 8
Initiates an immediate on-demand TRIM operation for all of the free space in a
pool.
This operation informs the underlying storage devices of all blocks
in the pool which are no longer allocated and allows thinly provisioned
devices to reclaim the space.
.It Xr zpool-sync 8
This command forces all in-core dirty data to be written to the primary
pool storage and not the ZIL.
It will also update administrative information, including quota reporting.
Without arguments,
.Nm zpool Cm sync
will sync all pools on the system.
Otherwise, it will sync only the specified pool(s).
.It Xr zpool-upgrade 8
Manages the on-disk format version of storage pools.
.It Xr zpool-wait 8
Waits until all background activity of the given types has ceased in the given
pool.
.El
.
.Ss Fault Resolution
.Bl -tag -width Ds
.It Xo
.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8
.Xc
Takes the specified physical device offline or brings it online.
.It Xr zpool-resilver 8
Starts a resilver.
If an existing resilver is already running it will be restarted from the
beginning.
.It Xr zpool-reopen 8
Reopens all the vdevs associated with the pool.
.It Xr zpool-clear 8
Clears device errors in a pool.
.El
.
.Ss Import & Export
.Bl -tag -width Ds
.It Xr zpool-import 8
Makes disks containing ZFS storage pools available for use on the system.
.It Xr zpool-export 8
Exports the given pools from the system.
.It Xr zpool-reguid 8
Generates a new unique identifier for the pool.
.El
.
.Sh EXIT STATUS
The following exit values are returned:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 0
Successful completion.
.It Sy 1
An error occurred.
.It Sy 2
Invalid command line options were specified.
.El
.
.Sh EXAMPLES
.\" Examples 1, 2, 3, 4, 12, 13 are shared with zpool-create.8.
.\" Examples 6, 14 are shared with zpool-add.8.
.\" Examples 7, 16 are shared with zpool-list.8.
.\" Examples 8 are shared with zpool-destroy.8.
.\" Examples 9 are shared with zpool-export.8.
.\" Examples 10 are shared with zpool-import.8.
.\" Examples 11 are shared with zpool-upgrade.8.
.\" Examples 15 are shared with zpool-remove.8.
.\" Examples 17 are shared with zpool-status.8.
.\" Examples 14, 17 are also shared with zpool-iostat.8.
.\" Make sure to update them omnidirectionally
.Ss Example 1 : No Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that
consists of six disks:
.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
.
.Ss Example 2 : No Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror
contains two disks:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
.
.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
The following command creates a non-redundant pool using two disk partitions:
.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
.
.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files
The following command creates a non-redundant pool using files.
While not recommended, a pool based on files can be useful for experimental
purposes.
.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b
.
.Ss Example 5 : No Making a non-mirrored ZFS Storage Pool mirrored
The following command converts an existing single device
.Ar sda
into a mirror by attaching a second device to it,
.Ar sdb .
.Dl # Nm zpool Cm attach Ar tank Pa sda sdb
.
.Ss Example 6 : No Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool
.Ar tank ,
assuming the pool is already made up of two-way mirrors.
The additional space is immediately available to any datasets within the pool.
.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
.
.Ss Example 7 : No Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
In this case, the pool
.Ar zion
is faulted due to a missing device.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
zion       -      -      -         -      -      -      -  FAULTED -
.Ed
.
.Ss Example 8 : No Destroying a ZFS Storage Pool
The following command destroys the pool
.Ar tank
and any datasets contained within:
.Dl # Nm zpool Cm destroy Fl f Ar tank
.
.Ss Example 9 : No Exporting a ZFS Storage Pool
The following command exports the devices in pool
.Ar tank
so that they can be relocated or later imported:
.Dl # Nm zpool Cm export Ar tank
.
.Ss Example 10 : No Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool
.Ar tank
for use on the system.
The results from this command are similar to the following:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm import
  pool: tank
    id: 15451357997522795478
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror    ONLINE
            sda     ONLINE
            sdb     ONLINE

.No # Nm zpool Cm import Ar tank
.Ed
.
.Ss Example 11 : No Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS storage pools to the current version of
the software:
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm upgrade Fl a
This system is currently running ZFS version 2.
.Ed
.
.Ss Example 12 : No Managing Hot Spares
The following command creates a new pool with an available hot spare:
.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc
.Pp
If one of the disks were to fail, the pool would be reduced to the degraded
state.
The failed device can be replaced using the following command:
.Dl # Nm zpool Cm replace Ar tank Pa sda sdd
.Pp
Once the data has been resilvered, the spare is automatically removed and is
made available for use should another device fail.
The hot spare can be permanently removed from the pool using the following
command:
.Dl # Nm zpool Cm remove Ar tank Pa sdc
.
.Ss Example 13 : No Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two two-way
mirrors and mirrored log devices:
.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf
.
.Ss Example 14 : No Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage
pool:
.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
.Pp
Once added, the cache devices gradually fill with content from main memory.
Depending on the size of your cache devices, it could take over an hour for
them to fill.
Capacity and reads can be monitored using the
.Cm iostat
subcommand as follows:
.Dl # Nm zpool Cm iostat Fl v Ar pool 5
.
.Ss Example 15 : No Removing a Mirrored top-level (Log or Data) Device
The following commands remove the mirrored log device
.Sy mirror-2
and mirrored top-level data device
.Sy mirror-1 .
.Pp
Given this configuration:
.Bd -literal -compact -offset Ds
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
.Ed
.Pp
The command to remove the mirrored log
.Ar mirror-2 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-2
.Pp
The command to remove the mirrored data
.Ar mirror-1 No is :
.Dl # Nm zpool Cm remove Ar tank mirror-1
.
.Ss Example 16 : No Displaying expanded space on a device
The following command displays the detailed information for the pool
.Ar data .
This pool consists of a single raidz vdev where one of its devices
increased its capacity by 10 GiB.
In this example, the pool will not be able to utilize this extra capacity until
all the devices under the raidz vdev have been expanded.
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm list Fl v Ar data
NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
  raidz1    23.9G  14.6G  9.30G         -    48%
    sda         -      -      -         -      -
    sdb         -      -      -       10G      -
    sdc         -      -      -         -      -
.Ed
.
.Ss Example 17 : No Adding output columns
Additional columns can be added to the
.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c .
.Bd -literal -compact -offset Ds
.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size
   NAME     STATE  READ WRITE CKSUM vendor  model        size
   tank     ONLINE 0    0     0
   mirror-0 ONLINE 0    0     0
   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T

.No # Nm zpool Cm iostat Fl vc Pa size
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write  size
----------  -----  -----  -----  -----  -----  -----  ----
rpool       14.6G  54.9G      4     55   250K  2.69M
  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
----------  -----  -----  -----  -----  -----  -----  ----
.Ed
.
.Sh ENVIRONMENT VARIABLES
.Bl -tag -compact -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
.It Sy ZFS_ABORT
Cause
.Nm
to dump core on exit for the purposes of running
.Sy ::findleaks .
.It Sy ZFS_COLOR
Use ANSI color in
.Nm zpool Cm status
and
.Nm zpool Cm iostat
output.
.It Sy ZPOOL_AUTO_POWER_ON_SLOT
Automatically attempt to turn on a drive's enclosure slot power when
running the
.Nm zpool Cm online
or
.Nm zpool Cm clear
commands.
This has the same effect as passing the
.Fl -power
option to those commands.
.It Sy ZPOOL_POWER_ON_SLOT_TIMEOUT_MS
The maximum time in milliseconds to wait for a slot power sysfs value
to return the correct value after writing it.
For example, after writing "on" to the sysfs enclosure slot power_control file,
it can take some time for the enclosure to power up the slot and return
"on" if you read back the power_control value.
Defaults to 30 seconds (30000ms) if not set.
.It Sy ZPOOL_IMPORT_PATH
The search path for devices or files to use with the pool.
This is a colon-separated list of directories in which
.Nm
looks for device nodes and files.
Similar to the
.Fl d
option in
.Nm zpool import .
.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS
The maximum time in milliseconds that
.Nm zpool import
will wait for an expected device to be available.
.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
If set, suppress the warning about non-native vdev ashift in
.Nm zpool Cm status .
The value is not used, only the presence or absence of the variable matters.
.It Sy ZPOOL_VDEV_NAME_GUID
Cause
.Nm
subcommands to output vdev guids by default.
This behavior is identical to the
.Nm zpool Cm status Fl g
command line option.
.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS
Cause
.Nm
subcommands to follow links for vdev names by default.
This behavior is identical to the
.Nm zpool Cm status Fl L
command line option.
.It Sy ZPOOL_VDEV_NAME_PATH
Cause
.Nm
subcommands to output full vdev path names by default.
This behavior is identical to the
.Nm zpool Cm status Fl P
command line option.
.It Sy ZFS_VDEV_DEVID_OPT_OUT
Older OpenZFS implementations had issues when attempting to display pool
config vdev names if a
.Sy devid
NVP value is present in the pool's config.
.Pp
For example, a pool that originated on the illumos platform would have a
.Sy devid
value in the config and
.Nm zpool Cm status
would fail when listing the config.
This would also be true for future Linux-based pools.
.Pp
A pool can be stripped of any
.Sy devid
values on import or prevented from adding
them on
.Nm zpool Cm create
or
.Nm zpool Cm add
by setting
.Sy ZFS_VDEV_DEVID_OPT_OUT .
.Pp
.It Sy ZPOOL_SCRIPTS_AS_ROOT
Allow a privileged user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
Normally, only unprivileged users are allowed to run
.Fl c .
.It Sy ZPOOL_SCRIPTS_PATH
The search path for scripts when running
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
This is a colon-separated list of directories and overrides the default
.Pa ~/.zpool.d
and
.Pa /etc/zfs/zpool.d
search paths.
.It Sy ZPOOL_SCRIPTS_ENABLED
Allow a user to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
If
.Sy ZPOOL_SCRIPTS_ENABLED
is not set, it is assumed that the user is allowed to run
.Nm zpool Cm status Ns / Ns Cm iostat Fl c .
.\" Shared with zfs.8
.It Sy ZFS_MODULE_TIMEOUT
Time, in seconds, to wait for
.Pa /dev/zfs
to appear.
Defaults to
.Sy 10 ,
max
.Sy 600 Pq 10 minutes .
If
.Pf < Sy 0 ,
wait forever; if
.Sy 0 ,
don't wait.
.El
.
.Sh INTERFACE STABILITY
.Sy Evolving
.
.Sh SEE ALSO
.Xr zfs 4 ,
.Xr zpool-features 7 ,
.Xr zpoolconcepts 7 ,
.Xr zpoolprops 7 ,
.Xr zed 8 ,
.Xr zfs 8 ,
.Xr zpool-add 8 ,
.Xr zpool-attach 8 ,
.Xr zpool-checkpoint 8 ,
.Xr zpool-clear 8 ,
.Xr zpool-create 8 ,
.Xr zpool-destroy 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-events 8 ,
.Xr zpool-export 8 ,
.Xr zpool-get 8 ,
.Xr zpool-history 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-iostat 8 ,
.Xr zpool-labelclear 8 ,
.Xr zpool-list 8 ,
.Xr zpool-offline 8 ,
.Xr zpool-online 8 ,
.Xr zpool-prefetch 8 ,
.Xr zpool-reguid 8 ,
.Xr zpool-remove 8 ,
.Xr zpool-reopen 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8 ,
.Xr zpool-scrub 8 ,
.Xr zpool-set 8 ,
.Xr zpool-split 8 ,
.Xr zpool-status 8 ,
.Xr zpool-sync 8 ,
.Xr zpool-trim 8 ,
.Xr zpool-upgrade 8 ,
.Xr zpool-wait 8