.\" SPDX-License-Identifier: CDDL-1.0
.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org>
.\" Copyright (c) 2023, Klara Inc.
.\"
.Dd November 18, 2024
.Dt ZPOOLPROPS 7
.Os
.
.Sh NAME
.Nm zpoolprops
.Nd properties of ZFS storage pools
.
.Sh DESCRIPTION
Each pool has several properties associated with it.
Some properties are read-only statistics while others are configurable and
change the behavior of the pool.
.Pp
User properties have no effect on ZFS behavior.
Use them to annotate pools in a way that is meaningful in your environment.
For more information about user properties, see the
.Sx User Properties
section.
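.Pp
Properties are retrieved with the
.Nm zpool Cm get
command; for example, assuming a pool named
.Ar tank :
.Bd -literal -compact
# zpool get all tank
# zpool get capacity,health tank
.Ed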
.Pp
The following are read-only properties:
.Bl -tag -width "unsupported@guid"
.It Sy allocated
Amount of storage used within the pool.
See
.Sy fragmentation
and
.Sy free
for more information.
.It Sy bcloneratio
The ratio of the total amount of storage that would be required to store all
the cloned blocks without cloning to the actual storage used.
The
.Sy bcloneratio
property is calculated as:
.Pp
.Sy ( ( bclonesaved + bcloneused ) * 100 ) / bcloneused
.It Sy bclonesaved
The amount of additional storage that would be required if block cloning
was not used.
.It Sy bcloneused
The amount of storage used by cloned blocks.
.It Sy capacity
Percentage of pool space used.
This property can also be referred to by its shortened column name,
.Sy cap .
.It Sy dedupcached
Total size of the deduplication table currently loaded into the ARC.
See
.Xr zpool-prefetch 8 .
.It Sy dedup_table_size
Total on-disk size of the deduplication table.
.It Sy expandsize
Amount of uninitialized space within the pool or device that can be used to
increase the total capacity of the pool.
On whole-disk vdevs, this is the space beyond the end of the GPT –
typically occurring when a LUN is dynamically expanded
or a disk replaced with a larger one.
On partition vdevs, this is the space appended to the partition after it was
added to the pool – most likely by resizing it in-place.
The space can be claimed for the pool by bringing it online with
.Sy autoexpand=on
or using
.Nm zpool Cm online Fl e .
.It Sy fragmentation
The amount of fragmentation in the pool.
As the amount of space
.Sy allocated
increases, it becomes more difficult to locate
.Sy free
space.
This may result in lower write performance compared to pools with more
unfragmented free space.
.It Sy free
The amount of free space available in the pool.
By contrast, the
.Xr zfs 8
.Sy available
property describes how much new data can be written to ZFS filesystems/volumes.
The zpool
.Sy free
property is not generally useful for this purpose, and can be substantially more
than the zfs
.Sy available
space.
This discrepancy is due to several factors, including raidz parity;
zfs reservation, quota, refreservation, and refquota properties; and space set
aside by
.Sy spa_slop_shift
(see
.Xr zfs 4
for more information).
.It Sy freeing
After a file system or snapshot is destroyed, the space it was using is
returned to the pool asynchronously.
.Sy freeing
is the amount of space remaining to be reclaimed.
Over time
.Sy freeing
will decrease while
.Sy free
increases.
.It Sy guid
A unique identifier for the pool.
.It Sy health
The current health of the pool.
Health can be one of
.Sy ONLINE , DEGRADED , FAULTED , OFFLINE , REMOVED , UNAVAIL .
.It Sy last_scrubbed_txg
Indicates the transaction group (TXG) up to which the most recent scrub
operation has checked and repaired the dataset.
This provides insight into the data integrity status of the pool at
a specific point in time.
.Xr zpool-scrub 8
can utilize this property to scan only data that has changed since the last
scrub completed, when given the
.Fl C
flag.
This property is not updated when performing an error scrub with the
.Fl e
flag.
.It Sy leaked
Space not released while
.Sy freeing
due to corruption, now permanently leaked into the pool.
.It Sy load_guid
A unique identifier for the pool.
Unlike the
.Sy guid
property, this identifier is generated every time we load the pool (i.e. does
not persist across imports/exports) and never changes while the pool is loaded
(even if a
.Sy reguid
operation takes place).
.It Sy size
Total size of the storage pool.
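.Pp
For example, the read-only space properties can be inspected together,
assuming a pool named
.Ar tank :
.Bd -literal -compact
# zpool get size,allocated,free,capacity,fragmentation tank
.Ed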
.It Sy unsupported@ Ns Em guid
Information about unsupported features that are enabled on the pool.
See
.Xr zpool-features 7
for details.
.El
.Pp
The space usage properties report actual physical space available to the
storage pool.
The physical space can be different from the total amount of space that any
contained datasets can actually use.
The amount of space used in a raidz configuration depends on the characteristics
of the data being written.
In addition, ZFS reserves some space for internal accounting that the
.Xr zfs 8
command takes into account, but the
.Nm
command does not.
For non-full pools of a reasonable size, these effects should be invisible.
For small pools, or pools that are close to being completely full, these
discrepancies may become more noticeable.
.Pp
The following property can be set at creation time and import time:
.Bl -tag -width Ds
.It Sy altroot
Alternate root directory.
If set, this directory is prepended to any mount points within the pool.
This can be used when examining an unknown pool where the mount points cannot be
trusted, or in an alternate boot environment, where the typical paths are not
valid.
.Sy altroot
is not a persistent property.
It is valid only while the system is up.
Setting
.Sy altroot
defaults to using
.Sy cachefile Ns = Ns Sy none ,
though this may be overridden using an explicit setting.
.El
.Pp
The following property can be set only at import time:
.Bl -tag -width Ds
.It Sy readonly Ns = Ns Sy on Ns | Ns Sy off
If set to
.Sy on ,
the pool will be imported in read-only mode.
This property can also be referred to by its shortened column name,
.Sy rdonly .
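.Pp
For example, to import a pool read-only, assuming a pool named
.Ar tank :
.Bd -literal -compact
# zpool import -o readonly=on tank
.Ed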
.El
.Pp
The following properties can be set at creation time and import time, and later
changed with the
.Nm zpool Cm set
command:
.Bl -tag -width Ds
.It Sy ashift Ns = Ns Ar ashift
Pool sector size exponent, to the power of
.Sy 2
(internally referred to as
.Sy ashift ) .
Values from 9 to 16, inclusive, are valid; also, the
value 0 (the default) means to auto-detect using the kernel's block
layer and a ZFS internal exception list.
I/O operations will be aligned to the specified size boundaries.
Additionally, the minimum (disk)
write size will be set to the specified size, so this represents a
space/performance trade-off.
For optimal performance, the pool sector size should be greater than
or equal to the sector size of the underlying disks.
The typical case for setting this property is when
performance is important and the underlying disks use 4KiB sectors but
report 512B sectors to the OS (for compatibility reasons); in that
case, set
.Sy ashift Ns = Ns Sy 12
(which is
.Sy 1<<12 No = Sy 4096 ) .
When set, this property is
used as the default hint value in subsequent vdev operations (add,
attach and replace).
Changing this value will not modify any existing
vdev, not even on disk replacement; however, it can be used, for
instance, to replace a dying 512B-sector disk with a newer 4KiB-sector
device: this will probably result in bad performance but at the
same time could prevent loss of data.
.It Sy autoexpand Ns = Ns Sy on Ns | Ns Sy off
Controls automatic pool expansion when the underlying LUN is grown.
If set to
.Sy on ,
the pool will be resized according to the size of the expanded device.
If the device is part of a mirror or raidz then all devices within that
mirror/raidz group must be expanded before the new space is made available to
the pool.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy expand .
.It Sy autoreplace Ns = Ns Sy on Ns | Ns Sy off
Controls automatic device replacement.
If set to
.Sy off ,
device replacement must be initiated by the administrator by using the
.Nm zpool Cm replace
command.
If set to
.Sy on ,
any new device, found in the same physical location as a device that previously
belonged to the pool, is automatically formatted and replaced.
The default behavior is
.Sy off .
This property can also be referred to by its shortened column name,
.Sy replace .
Autoreplace can also be used with virtual disks (like device
mapper) provided that you use the /dev/disk/by-vdev paths set up by
vdev_id.conf.
See the
.Xr vdev_id 8
manual page for more details.
Autoreplace and autoonline require the ZFS Event Daemon be configured and
running.
See the
.Xr zed 8
manual page for more details.
.It Sy autotrim Ns = Ns Sy on Ns | Ns Sy off
When set to
.Sy on ,
space which has been recently freed, and is no longer allocated by the pool,
will be periodically trimmed.
This allows block device vdevs which support
BLKDISCARD, such as SSDs, or file vdevs on which the underlying file system
supports hole-punching, to reclaim unused blocks.
The default value for this property is
.Sy off .
.Pp
Automatic TRIM does not immediately reclaim blocks after a free.
Instead, it will optimistically delay allowing smaller ranges to be aggregated
into a few larger ones.
These can then be issued more efficiently to the storage.
TRIM on L2ARC devices is enabled by setting
.Sy l2arc_trim_ahead > 0 .
.Pp
Be aware that automatic trimming of recently freed data blocks can put
significant stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
For lower-end devices it is often possible to achieve most of the benefits
of automatic trimming by running an on-demand (manual) TRIM periodically
using the
.Nm zpool Cm trim
command.
.It Sy bootfs Ns = Ns Sy (unset) Ns | Ns Ar pool Ns Op / Ns Ar dataset
Identifies the default bootable dataset for the root pool.
This property is expected to be set mainly by the installation and upgrade
programs.
Not all Linux distribution boot processes use the bootfs property.
.It Sy cachefile Ns = Ns Ar path Ns | Ns Sy none
Controls where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data that is stored on the root file system.
All pools in this cache are automatically imported when the system boots.
Some environments, such as install and clustering, need to cache this
information in a different location so that pools are not automatically
imported.
Setting this property caches the pool configuration in a different location that
can later be imported with
.Nm zpool Cm import Fl c .
Setting it to the value
.Sy none
creates a temporary pool that is never cached, and the
.Qq
.Pq empty string
uses the default location.
.Pp
Multiple pools can share the same cache file.
Because the kernel destroys and recreates this file when pools are added and
removed, care should be taken when attempting to access this file.
When the last pool using a
.Sy cachefile
is exported or destroyed, the file will be empty.
.It Sy comment Ns = Ns Ar text
A text string consisting of printable ASCII characters that will be stored
such that it is available even if the pool becomes faulted.
An administrator can provide additional information about a pool using this
property.
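.Pp
For example, assuming a pool named
.Ar tank :
.Bd -literal -compact
# zpool set comment="Production pool, rack 12" tank
# zpool get comment tank
.Ed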
.It Sy compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
Specifies that the pool maintain compatibility with specific feature sets.
When set to
.Sy off
(or unset) compatibility is disabled (all features may be enabled); when set to
.Sy legacy
no features may be enabled.
When set to a comma-separated list of filenames
(each filename may either be an absolute path, or relative to
.Pa /etc/zfs/compatibility.d
or
.Pa /usr/share/zfs/compatibility.d )
the lists of requested features are read from those files, separated by
whitespace and/or commas.
Only features present in all files may be enabled.
.Pp
See
.Xr zpool-features 7 ,
.Xr zpool-create 8
and
.Xr zpool-upgrade 8
for more information on the operation of compatibility feature sets.
.It Sy dedup_table_quota Ns = Ns Ar number Ns | Ns Sy none Ns | Ns Sy auto
This property sets a limit on the on-disk size of the pool's dedup table.
Entries will not be added to the dedup table once this size is reached;
if a dedup table already exists and is larger than this size, existing
entries will not be removed as part of setting this property.
Existing entries will still have their reference counts updated.
.Pp
The actual size limit of the table may be above or below the quota,
depending on the actual on-disk size of the entries (which may be
approximated for purposes of calculating the quota).
That is, setting a quota size of 1M may result in the maximum size being
slightly below, or slightly above, that value.
Set to
.Sy none
to disable.
In automatic mode, which is the default, the size of a dedicated dedup vdev
is used as the quota limit.
.Pp
The
.Sy dedup_table_quota
property works for both legacy and fast dedup tables.
.It Sy dedupditto Ns = Ns Ar number
This property is deprecated and no longer has any effect.
.It Sy delegation Ns = Ns Sy on Ns | Ns Sy off
Controls whether a non-privileged user is granted access based on the dataset
permissions defined on the dataset.
See
.Xr zfs 8
for more information on ZFS delegated administration.
.It Sy failmode Ns = Ns Sy wait Ns | Ns Sy continue Ns | Ns Sy panic
Controls the system behavior in the event of catastrophic pool failure.
This condition is typically a result of a loss of connectivity to the underlying
storage device(s) or a failure of all devices within the pool.
The behavior of such an event is determined as follows:
.Bl -tag -width "continue"
.It Sy wait
Blocks all I/O access until the device connectivity is recovered and the errors
are cleared with
.Nm zpool Cm clear .
This is the default behavior.
.It Sy continue
Returns
.Er EIO
to any new write I/O requests but allows reads to any of the remaining healthy
devices.
Any write requests that have yet to be committed to disk would be blocked.
.It Sy panic
Prints out a message to the console and generates a system crash dump.
.El
.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
The value of this property is the current state of
.Ar feature_name .
The only valid value when setting this property is
.Sy enabled
which moves
.Ar feature_name
to the enabled state.
See
.Xr zpool-features 7
for details on feature states.
.It Sy listsnapshots Ns = Ns Sy on Ns | Ns Sy off
Controls whether information about snapshots associated with this pool is
output when
.Nm zfs Cm list
is run without the
.Fl t
option.
The default value is
.Sy off .
This property can also be referred to by its shortened name,
.Sy listsnaps .
.It Sy multihost Ns = Ns Sy on Ns | Ns Sy off
Controls whether a pool activity check should be performed during
.Nm zpool Cm import .
When a pool is determined to be active it cannot be imported, even with the
.Fl f
option.
This property is intended to be used in failover configurations
where multiple hosts have access to a pool on shared storage.
.Pp
Multihost provides protection on import only.
It does not protect against an
individual device being used in multiple pools, regardless of the type of vdev.
See the discussion under
.Nm zpool Cm create .
.Pp
When this property is on, periodic writes to storage occur to show the pool is
in use.
See
.Sy zfs_multihost_interval
in the
.Xr zfs 4
manual page.
In order to enable this property each host must set a unique hostid.
See
.Xr genhostid 1 ,
.Xr zgenhostid 8 ,
and
.Xr spl 4
for additional details.
The default value is
.Sy off .
.It Sy version Ns = Ns Ar version
The current on-disk version of the pool.
This can be increased, but never decreased.
The preferred method of updating pools is with the
.Nm zpool Cm upgrade
command, though this property can be used when a specific version is needed for
backwards compatibility.
Once feature flags are enabled on a pool this property will no longer have a
value.
.El
.
.Ss User Properties
In addition to the standard native properties, ZFS supports arbitrary user
properties.
User properties have no effect on ZFS behavior, but applications or
administrators can use them to annotate pools.
.Pp
User property names must contain a colon
.Pq Qq Sy \&:
character to distinguish them from native properties.
They may contain lowercase letters, numbers, and the following punctuation
characters: colon
.Pq Qq Sy \&: ,
dash
.Pq Qq Sy - ,
period
.Pq Qq Sy \&. ,
and underscore
.Pq Qq Sy _ .
The expected convention is that the property name is divided into two portions
such as
.Ar module : Ns Ar property ,
but this namespace is not enforced by ZFS.
User property names can be at most 255 characters, and cannot begin with a dash
.Pq Qq Sy - .
.Pp
When making programmatic use of user properties, it is strongly suggested to use
a reversed DNS domain name for the
.Ar module
component of property names to reduce the chance that two
independently-developed packages use the same property name for different
purposes.
.Pp
The values of user properties are arbitrary strings and
are never validated.
All of the commands that operate on properties
.Po Nm zpool Cm list ,
.Nm zpool Cm get ,
.Nm zpool Cm set ,
and so forth
.Pc
can be used to manipulate both native properties and user properties.
Use
.Nm zpool Cm set Ar name Ns =
to clear a user property.
Property values are limited to 8192 bytes.
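.Pp
For example, the
.Ar module : Ns Ar property
convention might be used as follows, assuming a pool named
.Ar tank
and a hypothetical property
.Sy com.example:backup-policy :
.Bd -literal -compact
# zpool set com.example:backup-policy=daily tank
# zpool get com.example:backup-policy tank
# zpool set com.example:backup-policy= tank
.Ed
The last command clears the property by setting it to the empty string.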