1.\" 2.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved. 3.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved. 4.\" Copyright (c) 2019 Datto Inc. 5.\" The contents of this file are subject to the terms of the Common Development 6.\" and Distribution License (the "License"). You may not use this file except 7.\" in compliance with the License. You can obtain a copy of the license at 8.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0. 9.\" 10.\" See the License for the specific language governing permissions and 11.\" limitations under the License. When distributing Covered Code, include this 12.\" CDDL HEADER in each file and include the License file at 13.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this 14.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your 15.\" own identifying information: 16.\" Portions Copyright [yyyy] [name of copyright owner] 17.\" 18.Dd July 21, 2023 19.Dt ZFS 4 20.Os 21. 22.Sh NAME 23.Nm zfs 24.Nd tuning of the ZFS kernel module 25. 26.Sh DESCRIPTION 27The ZFS module supports these parameters: 28.Bl -tag -width Ds 29.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 30Maximum size in bytes of the dbuf cache. 31The target size is determined by the MIN versus 32.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd 33of the target ARC size. 34The behavior of the dbuf cache and its associated settings 35can be observed via the 36.Pa /proc/spl/kstat/zfs/dbufstats 37kstat. 38. 39.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 40Maximum size in bytes of the metadata dbuf cache. 41The target size is determined by the MIN versus 42.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th 43of the target ARC size. 44The behavior of the metadata dbuf cache and its associated settings 45can be observed via the 46.Pa /proc/spl/kstat/zfs/dbufstats 47kstat. 48. 49.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint 50The percentage over 51.Sy dbuf_cache_max_bytes 52when dbufs must be evicted directly. 53. 54.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint 55The percentage below 56.Sy dbuf_cache_max_bytes 57when the evict thread stops evicting dbufs. 58. 59.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint 60Set the size of the dbuf cache 61.Pq Sy dbuf_cache_max_bytes 62to a log2 fraction of the target ARC size. 63. 64.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint 65Set the size of the dbuf metadata cache 66.Pq Sy dbuf_metadata_cache_max_bytes 67to a log2 fraction of the target ARC size. 68. 69.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint 70Set the size of the mutex array for the dbuf cache. 71When set to 72.Sy 0 73the array is dynamically sized based on total system memory. 74. 75.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint 76dnode slots allocated in a single operation as a power of 2. 77The default value minimizes lock contention for the bulk operation performed. 78. 79.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint 80Limit the amount we can prefetch with one call to this amount in bytes. 81This helps to limit the amount of memory that can be used by prefetching. 82. 83.It Sy ignore_hole_birth Pq int 84Alias for 85.Sy send_holes_without_birth_time . 86. 87.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int 88Turbo L2ARC warm-up. 89When the L2ARC is cold the fill interval will be set as fast as possible. 90. 
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and is only applicable in related situations.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 2 Pq u64
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is off,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature, some MRU buffers will still be present
in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Max write bytes per interval.
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Metaslab granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each disk
before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group biasing based on their vdevs' over- or under-utilization
relative to the pool.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy metaslab_force_ganging_pct Ns = Ns Sy 3 Ns % Pq uint
For blocks that could be forced to be a gang block (due to
.Sy metaslab_force_ganging ) ,
force this many of them to be gang blocks.
.
.It Sy zfs_ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP data block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy zfs_ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP indirect block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
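.Pp
As a hedged sketch of setting a default like this persistently (assuming the
conventional
.Pa /etc/modprobe.d
mechanism for module options on Linux), keep in mind that the value is a power
of two, so the default of 15 corresponds to 2^15 = 32 KiB blocks:
.Bd -literal -compact
# /etc/modprobe.d/zfs.conf (hypothetical example file)
# Use 16 KiB (2^14) data and indirect blocks for newly created DDT ZAPs.
options zfs zfs_ddt_zap_default_bs=14 zfs_ddt_zap_default_ibs=14
.Ed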
.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
Default dnode block size as a power of 2.
.
.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
Default dnode indirect block size as a power of 2.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
Default lower limit for metaslab size.
.
.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
Default upper limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
Maximum ashift used when optimizing for logical \[->] physical sector size on
new top-level vdevs.
May be increased up to
.Sy ASHIFT_MAX Po 16 Pc ,
but this may negatively impact pool space efficiency.
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_preload_limit Ns = Ns Sy 10 Pq uint
Maximum number of metaslabs per group to preload.
.
.It Sy metaslab_preload_pct Ns = Ns Sy 50 Pq uint
Percentage of CPUs to run a metaslab preload taskq.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy reference_history Ns = Ns Sy 3 Pq uint
Maximum reference holders being tracked when
.Sy reference_tracking_enable
is active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool
import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
Maximum micro ZAP size.
A micro ZAP is upgraded to a fat ZAP, once it grows beyond the specified size.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
since the last time have not completed in time to satisfy the demand request,
i.e. the prefetch depth did not cover the read latency or the pool got saturated.
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables the use of scatter/gather lists for ARC buffers.
When disabled, all allocations are forced to be linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that the limit is instead computed as
.Sy zfs_arc_dnode_limit_percent
of the ARC meta buffers.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC meta buffers that can be consumed by dnodes.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
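.Pp
A minimal sketch for checking how close dnode metadata is to its ceiling
(assuming the
.Pa /proc/spl/kstat/zfs/arcstats
kstat exposes dnode accounting fields under these names):
.Bd -literal -compact
# Compare current dnode bytes in the ARC against the effective limit.
grep -E '^(dnode_size|arc_dnode_limit)' /proc/spl/kstat/zfs/arcstats
.Ed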
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a nonzero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
Under Linux, half of system memory will be used as the limit.
Under
.Fx ,
the larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
.
.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
Balance between metadata and data on ghost hits.
Values above 100 increase metadata caching by proportionally reducing the effect
of ghost data hits on the target data/metadata rate.
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
Minimum time prefetched blocks are locked in the ARC.
.
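.Pp
For the ARC sizing parameters above
.Pq Sy zfs_arc_max No and Sy zfs_arc_min ,
the following is a hedged runtime example (Linux sysfs path assumed, the value
chosen purely for illustration):
.Bd -literal -compact
# Cap the ARC at 8 GiB; per the caveats above this cannot be set back
# to 0 while the module is loaded, and shrinking below the current ARC
# size only takes effect under memory pressure.
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
.Ed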
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to the number of CPUs,
but that was not proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts the ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
The reclamation process continues until the ARC size returns below the
target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_FILE_PAGES ) ,
where that percent may exceed
.Sy 100 .
This only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
.Pp
The default limit of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512 KiB No and Sy all_system_memory/64 .
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 5 Ns % Pq uint
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
Minimum percent of obsolete bytes in vdev mapping required to attempt to
condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete
mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
.
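.Pp
For example (using the paths given above):
.Bd -literal -compact
# Read the internal debug log, then clear it.
cat /proc/spl/kstat/zfs/dbgmsg
echo 0 > /proc/spl/kstat/zfs/dbgmsg
.Ed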
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
.
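.Pp
As a worked example of the guideline above (the operation rate is assumed for
illustration, not measured): for a pool expected to sustain roughly 20,000
write operations per second, a smooth delay curve would suggest a value of
about 1,000,000,000 / 20,000 = 50,000.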
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay and deadman zevents (which report slow I/O operations) to this
many per second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more flushing occurs, destroying log
blocks quicker as they become obsolete faster, which leaves fewer blocks
to be read during import time after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation where we are flushing all our metaslabs every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
It effectively limits the maximum number of unflushed per-TXG spacemap logs
that need to be read after an unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is
available.
This only applies on Linux.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_max_max Ns = Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
This parameter takes precedence over
.Sy zfs_dirty_data_max_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy min(physical_ram/4, 4GiB) ,
or
.Sy min(physical_ram/4, 1GiB)
for 32-bit systems.
.
.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed as a percentage of physical RAM.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
The parameter
.Sy zfs_dirty_data_max_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
Determines the dirty space limit, expressed as a percentage of all memory.
Once this limit is exceeded, new writes are halted until space frees up.
The parameter
.Sy zfs_dirty_data_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Subject to
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
Start syncing out a transaction group if there's at least this much dirty data
.Pq as a percentage of Sy zfs_dirty_data_max .
This should be less than
.Sy zfs_vdev_async_write_active_min_dirty_percent .
.
.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
Write operations are throttled when approaching the limit until log data is
cleared out after transaction group sync.
Because of some overhead, it should be set to at least 2 times the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput .
It also should be smaller than the size of the slog device if slog is present.
.Pp
Defaults to
.Sy zfs_dirty_data_max*2
.
.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
preallocated for a file in order to guarantee that later writes will not
run out of space.
Instead,
.Xr fallocate 2
space preallocation only checks that sufficient space is currently available
in the pool or the user's project quota allocation,
and then creates a sparse file of the requested size.
The requested space is multiplied by
.Sy zfs_fallocate_reserve_percent
to allow additional space for indirect blocks and other internal metadata.
Setting this to
.Sy 0
disables support for
.Xr fallocate 2
and causes it to return
.Sy EOPNOTSUPP .
.
.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
Select a fletcher 4 implementation.
.Pp
Supported selectors are:
.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
.No and Sy aarch64_neon .
All except
.Sy fastest No and Sy scalar
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of fletcher 4 are available, the
.Sy fastest
will be chosen using a micro benchmark.
Selecting
.Sy scalar
results in the original CPU-based calculation being used.
Selecting any option other than
.Sy fastest No or Sy scalar
results in vector instructions
from the respective CPU instruction set being used.
.
.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
Select a BLAKE3 implementation.
.Pp
Supported selectors are:
.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
All except
.Sy cycle , fastest No and Sy generic
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of BLAKE3 are available, the
.Sy fastest
will be chosen using a micro benchmark.
You can see the benchmark results by reading this kstat file:
.Pa /proc/spl/kstat/zfs/chksum_bench .
.
.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable the processing of the free_bpobj object.
.
.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
Maximum number of blocks freed in a single TXG.
.
.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
Maximum number of dedup blocks freed in a single TXG.
.
.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
Maximum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
Minimum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
When the pool has more than this much dirty data, use
.Sy zfs_vdev_async_write_max_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
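.Pp
As a worked illustration of the interpolation (using the default values of the
related tunables): with
.Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 ,
this maximum of 60%,
.Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 ,
and
.Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 ,
a pool that is 45% dirty sits halfway between the two thresholds and therefore
gets an async write limit of roughly 2 + 0.5 \(mu (10 \- 2) = 6 active
operations per device.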
.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
When the pool has less than this much dirty data, use
.Sy zfs_vdev_async_write_min_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly
interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
Maximum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
Minimum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.Pp
Lower values are associated with better latency on rotational media but poorer
resilver performance.
The default value of
.Sy 2
was chosen as a compromise.
A value of
.Sy 3
has been shown to improve resilver performance further at a cost of
further increasing latency.
.
.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
Maximum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
Minimum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
The maximum number of I/O operations active to each device.
Ideally, this will be at least the sum of each queue's
.Sy max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
Timeout value to wait before determining a device is missing
during import.
This is helpful for transient missing paths due
to links being briefly removed and recreated in response to
udev events.
.
.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
Maximum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
Minimum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
Maximum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
Minimum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
Maximum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
Minimum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
Maximum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
Minimum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
Maximum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
Minimum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
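.Pp
A small sketch for reviewing the current per-queue limits in one go (Linux
sysfs layout assumed):
.Bd -literal -compact
# Print every per-queue min/max active limit with its current value.
grep . /sys/module/zfs/parameters/zfs_vdev_*_active
.Ed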
.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
Maximum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
Minimum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
the number of concurrently-active I/O operations is limited to
.Sy zfs_*_min_active ,
unless the vdev is "idle".
When there are no interactive I/O operations active (synchronous or otherwise),
and
.Sy zfs_vdev_nia_delay
operations have completed since the last interactive operation,
then the vdev is considered to be "idle",
and the number of concurrently-active non-interactive operations is increased to
.Sy zfs_*_max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
Some HDDs tend to prioritize sequential I/O so strongly that concurrent
random I/O latency reaches several seconds.
On some HDDs this happens even if sequential I/O operations
are submitted one at a time, and so setting
.Sy zfs_*_max_active Ns = Sy 1
does not help.
To prevent non-interactive I/O, like scrub,
from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit
operations can be sent
while there are outstanding incomplete interactive operations.
This enforced wait ensures the HDD services the interactive I/O
within a reasonable amount of time.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq uint
Maximum number of queued allocations per top-level vdev expressed as
a percentage of
.Sy zfs_vdev_async_write_max_active ,
which allows the system to detect devices that are more capable
of handling allocations and to allocate more blocks to those devices.
This allows for dynamic allocation distribution when devices are imbalanced,
as fuller devices will tend to be slower than empty devices.
.Pp
Also see
.Sy zio_dva_throttle_enabled .
.
.It Sy zfs_vdev_def_queue_depth Ns = Ns Sy 32 Pq uint
Default queue depth for each vdev IO allocator.
Higher values allow for better coalescing of sequential writes before sending
them to the disk, but can increase transaction commit times.
.
.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
Defines if the driver should retire on a given error type.
The following options may be bitwise-ored together:
.TS
box;
lbz r l l .
	Value	Name	Description
_
	1	Device	No driver retries on device errors
	2	Transport	No driver retries on transport errors.
	4	Driver	No driver retries on driver errors.
.TE
.
.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
Time before expiring
.Pa .zfs/snapshot .
.
.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow the creation, removal, or renaming of entries in the
.Sy .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled, this functionality works both locally and over NFS exports
which have the
.Em no_root_squash
option set.
.
.It Sy zfs_flags Ns = Ns Sy 0 Pq int
Set additional debugging flags.
The following flags may be bitwise-ored together:
.TS
box;
lbz r l l .
	Value	Name	Description
_
	1	ZFS_DEBUG_DPRINTF	Enable dprintf entries in the debug log.
*	2	ZFS_DEBUG_DBUF_VERIFY	Enable extra dbuf verifications.
*	4	ZFS_DEBUG_DNODE_VERIFY	Enable extra dnode verifications.
	8	ZFS_DEBUG_SNAPNAMES	Enable snapshot name verification.
*	16	ZFS_DEBUG_MODIFY	Check for illegally modified ARC buffers.
	64	ZFS_DEBUG_ZIO_FREE	Enable verification of block frees.
	128	ZFS_DEBUG_HISTOGRAM_VERIFY	Enable extra spacemap histogram verifications.
	256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
	512	ZFS_DEBUG_SET_ERROR	Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
	1024	ZFS_DEBUG_INDIRECT_REMAP	Verify split blocks created by device removal.
	2048	ZFS_DEBUG_TRIM	Verify TRIM ranges are always within the allocatable range tree.
	4096	ZFS_DEBUG_LOG_SPACEMAP	Verify that the log summary is consistent with the spacemap log
			and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
.TE
.Sy \& * No Requires debug build .
.
.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
Enables btree verification.
The following settings are cumulative:
.TS
box;
lbz r l l .
	Value	Description

	1	Verify height.
	2	Verify pointers from children to parent.
	3	Verify element counts.
	4	Verify element order. (expensive)
*	5	Verify unused memory is poisoned. (expensive)
.TE
.Sy \& * No Requires debug build .
.
.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
If destroy encounters an
.Sy EIO
while reading metadata (e.g. indirect blocks),
space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled",
as it is unable to make forward progress.
While in this stalled state, all remaining space to free
from the error-encountering filesystem is "temporarily leaked".
Set this flag to cause it to ignore the
.Sy EIO ,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.
.Pp
The default "stalling" behavior is useful if the storage partially
fails (i.e. some but not all I/O operations fail), and then later recovers.
In this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks.
Note, however, that this case is actually fairly rare.
.Pp
Typically pools either
.Bl -enum -compact -offset 4n -width "1."
.It
fail completely (but perhaps temporarily,
e.g. due to a top-level vdev going offline), or
.It
have localized, permanent errors (e.g. disk returns the wrong data
due to bit flip or firmware bug).
.El
In the former case, this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless.
In the latter, because the error is permanent, the best we can do
is leak the minimum amount of space,
which is what setting this flag will do.
It is therefore reasonable for this flag to normally be set,
but we chose the more conservative approach of not setting it,
so that there is no possibility of
leaking space in the "partial temporary" failure case.
.
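.Pp
Returning to the
.Sy zfs_flags
bits listed above, a hedged example of combining them (Linux runtime path
assumed): enabling both
.Sy ZFS_DEBUG_DPRINTF Pq 1
and
.Sy ZFS_DEBUG_SET_ERROR Pq 512
means writing their bitwise OR, 513:
.Bd -literal -compact
# Turn on dprintf and SET_ERROR entries in the internal debug log.
echo 513 > /sys/module/zfs/parameters/zfs_flags
.Ed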
.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
During a
.Nm zfs Cm destroy
operation using the
.Sy async_destroy
feature,
a minimum of this much time will be spent working on freeing blocks per TXG.
.
.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
Similar to
.Sy zfs_free_min_time_ms ,
but for cleanup of old indirection records for removed vdevs.
.
.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
Largest data block to write to the ZIL.
Larger blocks will be treated as if the dataset being written to had the
.Sy logbias Ns = Ns Sy throughput
property set.
.
.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
Pattern written to vdev free space by
.Xr zpool-initialize 8 .
.
.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Size of writes used by
.Xr zpool-initialize 8 .
This option is used by the test suite.
.
.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
The threshold size (in block pointers) at which we create a new sub-livelist.
Larger sublists are more costly from a memory perspective, but the fewer
sublists there are, the lower the cost of insertion.
.
.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
If the amount of shared space between a snapshot and its clone drops below
this threshold, the clone turns off the livelist and reverts to the old
deletion method.
This is in place because livelists no longer give us a benefit
once a clone has been overwritten enough.
.
.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
Incremented each time an extra ALLOC blkptr is added to a livelist entry while
it is being condensed.
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
Incremented each time livelist condensing is canceled while in
.Fn spa_livelist_condense_sync .
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set, the livelist condense process pauses indefinitely before
executing the synctask \(em
.Fn spa_livelist_condense_sync .
This option is used by the test suite to trigger race conditions.
.
.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
Incremented each time livelist condensing is canceled while in
.Fn spa_livelist_condense_cb .
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set, the livelist condense process pauses indefinitely before
executing the open context condensing work in
.Fn spa_livelist_condense_cb .
This option is used by the test suite to trigger race conditions.
.
.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
The maximum execution time limit that can be set for a ZFS channel program,
specified as a number of Lua instructions.
.
.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
The maximum memory limit that can be set for a ZFS channel program, specified
in bytes.
.
.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
The maximum depth of nested datasets.
1505This value can be tuned temporarily to 1506fix existing datasets that exceed the predefined limit. 1507. 1508.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64 1509The number of past TXGs that the flushing algorithm of the log spacemap 1510feature uses to estimate incoming log blocks. 1511. 1512.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64 1513Maximum number of rows allowed in the summary of the spacemap log. 1514. 1515.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint 1516We currently support block sizes from 1517.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc . 1518The benefits of larger blocks, and thus larger I/O, 1519need to be weighed against the cost of COWing a giant block to modify one byte. 1520Additionally, very large blocks can have an impact on I/O latency, 1521and also potentially on the memory allocator. 1522Therefore, we formerly forbade creating blocks larger than 1M. 1523Larger blocks could be created by changing it, 1524and pools with larger blocks can always be imported and used, 1525regardless of this setting. 1526. 1527.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int 1528Allow datasets received with redacted send/receive to be mounted. 1529Normally disabled because these datasets may be missing key data. 1530. 1531.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64 1532Minimum number of metaslabs to flush per dirty TXG. 1533. 1534.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint 1535Allow metaslabs to keep their active state as long as their fragmentation 1536percentage is no more than this value. 1537An active metaslab that exceeds this threshold 1538will no longer keep its active status allowing better metaslabs to be selected. 1539. 1540.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint 1541Metaslab groups are considered eligible for allocations if their 1542fragmentation metric (measured as a percentage) is less than or equal to 1543this value. 1544If a metaslab group exceeds this threshold then it will be 1545skipped unless all metaslab groups within the metaslab class have also 1546crossed this threshold. 1547. 1548.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint 1549Defines a threshold at which metaslab groups should be eligible for allocations. 1550The value is expressed as a percentage of free space 1551beyond which a metaslab group is always eligible for allocations. 1552If a metaslab group's free space is less than or equal to the 1553threshold, the allocator will avoid allocating to that group 1554unless all groups in the pool have reached the threshold. 1555Once all groups have reached the threshold, all groups are allowed to accept 1556allocations. 1557The default value of 1558.Sy 0 1559disables the feature and causes all metaslab groups to be eligible for 1560allocations. 1561.Pp 1562This parameter allows one to deal with pools having heavily imbalanced 1563vdevs such as would be the case when a new vdev has been added. 1564Setting the threshold to a non-zero percentage will stop allocations 1565from being made to vdevs that aren't filled to the specified percentage 1566and allow lesser filled vdevs to acquire more allocations than they 1567otherwise would under the old 1568.Sy zfs_mg_alloc_failures 1569facility. 1570. 1571.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int 1572If enabled, ZFS will place DDT data into the special allocation class. 1573. 
1574.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int 1575If enabled, ZFS will place user data indirect blocks 1576into the special allocation class. 1577. 1578.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint 1579Historical statistics for this many latest multihost updates will be available 1580in 1581.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost . 1582. 1583.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64 1584Used to control the frequency of multihost writes which are performed when the 1585.Sy multihost 1586pool property is on. 1587This is one of the factors used to determine the 1588length of the activity check during import. 1589.Pp 1590The multihost write period is 1591.Sy zfs_multihost_interval No / Sy leaf-vdevs . 1592On average a multihost write will be issued for each leaf vdev 1593every 1594.Sy zfs_multihost_interval 1595milliseconds. 1596In practice, the observed period can vary with the I/O load 1597and this observed value is the delay which is stored in the uberblock. 1598. 1599.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint 1600Used to control the duration of the activity test on import. 1601Smaller values of 1602.Sy zfs_multihost_import_intervals 1603will reduce the import time but increase 1604the risk of failing to detect an active pool. 1605The total activity check time is never allowed to drop below one second. 1606.Pp 1607On import the activity check waits a minimum amount of time determined by 1608.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals , 1609or the same product computed on the host which last had the pool imported, 1610whichever is greater. 1611The activity check time may be further extended if the value of MMP 1612delay found in the best uberblock indicates actual multihost updates happened 1613at longer intervals than 1614.Sy zfs_multihost_interval . 1615A minimum of 1616.Em 100 ms 1617is enforced. 1618.Pp 1619.Sy 0 No is equivalent to Sy 1 . 1620. 1621.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint 1622Controls the behavior of the pool when multihost write failures or delays are 1623detected. 1624.Pp 1625When 1626.Sy 0 , 1627multihost write failures or delays are ignored. 1628The failures will still be reported to the ZED which depending on 1629its configuration may take action such as suspending the pool or offlining a 1630device. 1631.Pp 1632Otherwise, the pool will be suspended if 1633.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval 1634milliseconds pass without a successful MMP write. 1635This guarantees the activity test will see MMP writes if the pool is imported. 1636.Sy 1 No is equivalent to Sy 2 ; 1637this is necessary to prevent the pool from being suspended 1638due to normal, small I/O latency variations. 1639. 1640.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int 1641Set to disable scrub I/O. 1642This results in scrubs not actually scrubbing data and 1643simply doing a metadata crawl of the pool instead. 1644. 1645.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int 1646Set to disable block prefetching for scrubs. 1647. 1648.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int 1649Disable cache flush operations on disks when writing. 1650Setting this will cause pool corruption on power loss 1651if a volatile out-of-order write cache is enabled. 1652. 1653.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 1654Allow no-operation writes. 
1655The occurrence of nopwrites will further depend on other pool properties 1656.Pq i.a. the checksumming and compression algorithms . 1657. 1658.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int 1659Enable forcing TXG sync to find holes. 1660When enabled forces ZFS to sync data when 1661.Sy SEEK_HOLE No or Sy SEEK_DATA 1662flags are used allowing holes in a file to be accurately reported. 1663When disabled holes will not be reported in recently dirtied files. 1664. 1665.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int 1666The number of bytes which should be prefetched during a pool traversal, like 1667.Nm zfs Cm send 1668or other data crawling operations. 1669. 1670.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint 1671The number of blocks pointed by indirect (non-L0) block which should be 1672prefetched during a pool traversal, like 1673.Nm zfs Cm send 1674or other data crawling operations. 1675. 1676.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64 1677Control percentage of dirtied indirect blocks from frees allowed into one TXG. 1678After this threshold is crossed, additional frees will wait until the next TXG. 1679.Sy 0 No disables this throttle . 1680. 1681.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1682Disable predictive prefetch. 1683Note that it leaves "prescient" prefetch 1684.Pq for, e.g., Nm zfs Cm send 1685intact. 1686Unlike predictive prefetch, prescient prefetch never issues I/O 1687that ends up not being needed, so it can't hurt performance. 1688. 1689.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1690Disable QAT hardware acceleration for SHA256 checksums. 1691May be unset after the ZFS modules have been loaded to initialize the QAT 1692hardware as long as support is compiled in and the QAT driver is present. 1693. 1694.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1695Disable QAT hardware acceleration for gzip compression. 1696May be unset after the ZFS modules have been loaded to initialize the QAT 1697hardware as long as support is compiled in and the QAT driver is present. 1698. 1699.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1700Disable QAT hardware acceleration for AES-GCM encryption. 1701May be unset after the ZFS modules have been loaded to initialize the QAT 1702hardware as long as support is compiled in and the QAT driver is present. 1703. 1704.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 1705Bytes to read per chunk. 1706. 1707.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint 1708Historical statistics for this many latest reads will be available in 1709.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads . 1710. 1711.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int 1712Include cache hits in read history 1713. 1714.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 1715Maximum read segment size to issue when sequentially resilvering a 1716top-level vdev. 1717. 1718.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 1719Automatically start a pool scrub when the last active sequential resilver 1720completes in order to verify the checksums of all blocks which have been 1721resilvered. 1722This is enabled by default and strongly recommended. 1723. 1724.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64 1725Maximum amount of I/O that can be concurrently issued for a sequential 1726resilver per leaf device, given in bytes. 1727. 
1728.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int 1729If an indirect split block contains more than this many possible unique 1730combinations when being reconstructed, consider it too computationally 1731expensive to check them all. 1732Instead, try at most this many randomly selected 1733combinations each time the block is accessed. 1734This allows all segment copies to participate fairly 1735in the reconstruction when all combinations 1736cannot be checked and prevents repeated use of one bad copy. 1737. 1738.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int 1739Set to attempt to recover from fatal errors. 1740This should only be used as a last resort, 1741as it typically results in leaked space, or worse. 1742. 1743.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int 1744Ignore hard I/O errors during device removal. 1745When set, if a device encounters a hard I/O error during the removal process 1746the removal will not be cancelled. 1747This can result in a normally recoverable block becoming permanently damaged 1748and is hence not recommended. 1749This should only be used as a last resort when the 1750pool cannot be returned to a healthy state prior to removing the device. 1751. 1752.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint 1753This is used by the test suite so that it can ensure that certain actions 1754happen while in the middle of a removal. 1755. 1756.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint 1757The largest contiguous segment that we will attempt to allocate when removing 1758a device. 1759If there is a performance problem with attempting to allocate large blocks, 1760consider decreasing this. 1761The default value is also the maximum. 1762. 1763.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int 1764Ignore the 1765.Sy resilver_defer 1766feature, causing an operation that would start a resilver to 1767immediately restart the one in progress. 1768. 1769.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint 1770Resilvers are processed by the sync thread. 1771While resilvering, it will spend at least this much time 1772working on a resilver between TXG flushes. 1773. 1774.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int 1775If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub), 1776even if there were unrepairable errors. 1777Intended to be used during pool repair or recovery to 1778stop resilvering when the pool is next imported. 1779. 1780.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint 1781Scrubs are processed by the sync thread. 1782While scrubbing, it will spend at least this much time 1783working on a scrub between TXG flushes. 1784. 1785.It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint 1786Error blocks to be scrubbed in one txg. 1787. 1788.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint 1789To preserve progress across reboots, the sequential scan algorithm periodically 1790needs to stop metadata scanning and issue all the verification I/O to disk. 1791The frequency of this flushing is determined by this tunable. 1792. 1793.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint 1794This tunable affects how scrub and resilver I/O segments are ordered. 1795A higher number indicates that we care more about how filled in a segment is, 1796while a lower number indicates we care more about the size of the extent without 1797considering the gaps within a segment. 
This value is only tunable upon module insertion.
Changing the value afterwards will have no effect on scrub or resilver
performance.
.
.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint
Determines the order that data will be verified while scrubbing or resilvering:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 1
Data will be verified as sequentially as possible, given the
amount of memory reserved for scrubbing
.Pq see Sy zfs_scan_mem_lim_fact .
This may improve scrub performance if the pool's data is very fragmented.
.It Sy 2
The largest mostly-contiguous chunk of found data will be verified first.
By deferring scrubbing of small segments, we may later find adjacent data
to coalesce and increase the segment size.
.It Sy 0
.No Use strategy Sy 1 No during normal verification
.No and strategy Sy 2 No while taking a checkpoint .
.El
.
.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
If unset, indicates that scrubs and resilvers will gather metadata in
memory before issuing sequential I/O.
Otherwise indicates that the legacy algorithm will be used,
where I/O is initiated as soon as it is discovered.
Unsetting will not affect scrubs or resilvers that are already in progress.
.
.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
Sets the largest gap in bytes between scrub/resilver I/O operations
that will still be considered sequential for sorting purposes.
Changing this value will not
affect scrubs or resilvers that are already in progress.
.
.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
Maximum fraction of RAM used for I/O sorting by the sequential scan algorithm.
This tunable determines the hard limit for I/O sorting memory usage.
When the hard limit is reached, we stop scanning metadata and start issuing
data verification I/O.
This is done until we get below the soft limit.
.
.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
The fraction of the hard limit used to determine the soft limit for I/O sorting
by the sequential scan algorithm.
When we cross this limit from below, no action is taken.
When we cross this limit from above, it is because we are issuing verification
I/O.
In this case (unless the metadata scan is done) we stop issuing verification I/O
and start scanning metadata again until we get to the hard limit.
.
.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When reporting resilver throughput and estimated completion time, use the
performance observed over roughly the last
.Sy zfs_scan_report_txgs
TXGs.
When set to zero, performance is calculated over the time between checkpoints.
.
.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enforce tight memory limits on pool scans when a sequential scan is in progress.
When disabled, the memory limit may be exceeded by fast disks.
.
.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
Freezes a scrub/resilver in progress without actually pausing it.
Intended for testing/debugging.
.
.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum amount of data that can be concurrently issued for scrubs and
resilvers per leaf device, given in bytes.
.
.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow sending of corrupt data (ignore read/checksum errors when sending).
.
.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
Include unmodified spill blocks in the send stream.
Under certain circumstances, previous versions of ZFS could incorrectly
remove the spill block from an existing object.
Including unmodified copies of the spill blocks creates a backwards-compatible
stream which will recreate a spill block if it was incorrectly removed.
.
.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm send
internal queues.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
The maximum number of bytes allowed in
.Nm zfs Cm send Ns 's
internal queues.
.
.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm send
prefetch queue.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes allowed to be prefetched by
.Nm zfs Cm send .
This value must be at least twice the maximum block size in use.
.
.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm receive
queue.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes allowed in the
.Nm zfs Cm receive
queue.
This value must be at least twice the maximum block size in use.
.
.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
The maximum amount of data, in bytes, that
.Nm zfs Cm receive
will write in one DMU transaction.
This is the uncompressed size, even when receiving a compressed send stream.
This setting will not reduce the write size below a single block.
Capped at a maximum of
.Sy 32 MiB .
.
.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
When this variable is set to non-zero, a corrective receive:
.Bl -enum -compact -offset 4n -width "1."
.It
Does not enforce the restriction of source & destination snapshot GUIDs
matching.
.It
If there is an error during healing, the healing receive is not
terminated; instead, it moves on to the next record.
.El
.
.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Setting this variable overrides the default logic for estimating block
sizes when doing a
.Nm zfs Cm send .
The default heuristic is that the average block size
will be the current recordsize.
Override this value if most data in your dataset is not of that size
and you require accurate zfs send size estimates.
.
.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
Flushing of data to disk is done in passes.
Defer frees starting in this pass.
.
.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum memory used for prefetching a checkpoint's space map on each
vdev while discarding the checkpoint.
.
1948.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint 1949Only allow small data blocks to be allocated on the special and dedup vdev 1950types when the available free space percentage on these vdevs exceeds this 1951value. 1952This ensures reserved space is available for pool metadata as the 1953special vdevs approach capacity. 1954. 1955.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint 1956Starting in this sync pass, disable compression (including of metadata). 1957With the default setting, in practice, we don't have this many sync passes, 1958so this has no effect. 1959.Pp 1960The original intent was that disabling compression would help the sync passes 1961to converge. 1962However, in practice, disabling compression increases 1963the average number of sync passes; because when we turn compression off, 1964many blocks' size will change, and thus we have to re-allocate 1965(not overwrite) them. 1966It also increases the number of 1967.Em 128 KiB 1968allocations (e.g. for indirect blocks and spacemaps) 1969because these will not be compressed. 1970The 1971.Em 128 KiB 1972allocations are especially detrimental to performance 1973on highly fragmented systems, which may have very few free segments of this 1974size, 1975and may need to load new metaslabs to satisfy these allocations. 1976. 1977.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint 1978Rewrite new block pointers starting in this pass. 1979. 1980.It Sy zfs_sync_taskq_batch_pct Ns = Ns Sy 75 Ns % Pq int 1981This controls the number of threads used by 1982.Sy dp_sync_taskq . 1983The default value of 1984.Sy 75% 1985will create a maximum of one thread per CPU. 1986. 1987.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint 1988Maximum size of TRIM command. 1989Larger ranges will be split into chunks no larger than this value before 1990issuing. 1991. 1992.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint 1993Minimum size of TRIM commands. 1994TRIM ranges smaller than this will be skipped, 1995unless they're part of a larger range which was chunked. 1996This is done because it's common for these small TRIMs 1997to negatively impact overall performance. 1998. 1999.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2000Skip uninitialized metaslabs during the TRIM process. 2001This option is useful for pools constructed from large thinly-provisioned 2002devices 2003where TRIM operations are slow. 2004As a pool ages, an increasing fraction of the pool's metaslabs 2005will be initialized, progressively degrading the usefulness of this option. 2006This setting is stored when starting a manual TRIM and will 2007persist for the duration of the requested TRIM. 2008. 2009.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint 2010Maximum number of queued TRIMs outstanding per leaf vdev. 2011The number of concurrent TRIM commands issued to the device is controlled by 2012.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active . 2013. 2014.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint 2015The number of transaction groups' worth of frees which should be aggregated 2016before TRIM operations are issued to the device. 2017This setting represents a trade-off between issuing larger, 2018more efficient TRIM operations and the delay 2019before the recently trimmed space is available for use by the device. 2020.Pp 2021Increasing this value will allow frees to be aggregated for a longer time. 
This will result in larger TRIM operations and potentially increased memory
usage.
Decreasing this value will have the opposite effect.
The default of
.Sy 32
was determined to be a reasonable compromise.
.
.It Sy zfs_txg_history Ns = Ns Sy 0 Pq uint
Historical statistics for this many latest TXGs will be available in
.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
.
.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
Flush dirty data to disk at least once every this many seconds (maximum TXG
duration).
.
.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
Max vdev I/O aggregation size.
.
.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
Max vdev I/O aggregation size for non-rotating media.
.
.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
immediately follows its predecessor on rotational vdevs.
.
.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
lacks locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this window that do not immediately follow the previous
operation are incremented by half of this value.
.
.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
The maximum distance from the last queued I/O operation within which
the balancing algorithm considers an operation to have locality.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member on non-rotational vdevs
when I/O operations do not immediately follow one another.
.
.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
lacks locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this window that do not immediately follow the previous
operation are incremented by half of this value.
.
.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
Aggregate read I/O operations if the on-disk gap between them is within this
threshold.
.
.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint
Aggregate write I/O operations if the on-disk gap between them is within this
threshold.
.
.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
Select the raidz parity implementation to use.
.Pp
Variants that don't depend on CPU-specific features
may be selected on module load, as they are supported on all systems.
The remaining options may only be set after the module is loaded,
as they are available only if the implementations are compiled in
and supported on the running system.
.Pp
Once the module is loaded,
.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
will show the available options,
with the currently selected one enclosed in square brackets.
.Pp
.TS
lb l l .
fastest	selected by built-in benchmark
original	original implementation
scalar	scalar implementation
sse2	SSE2 instruction set	64-bit x86
ssse3	SSSE3 instruction set	64-bit x86
avx2	AVX2 instruction set	64-bit x86
avx512f	AVX512F instruction set	64-bit x86
avx512bw	AVX512F & AVX512BW instruction sets	64-bit x86
aarch64_neon	NEON	Aarch64/64-bit ARMv8
aarch64_neonx2	NEON with more unrolling	Aarch64/64-bit ARMv8
powerpc_altivec	Altivec	PowerPC
.TE
.
.It Sy zfs_vdev_scheduler Pq charp
.Sy DEPRECATED .
Prints a warning to the kernel log for compatibility.
.
.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint
Max event queue length.
Events in the queue can be viewed with
.Xr zpool-events 8 .
.
.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
Maximum recent zevent records to retain for duplicate checking.
Setting this to
.Sy 0
disables duplicate detection.
.
.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
Lifespan for a recent ereport that was retained for duplicate checking.
.
.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
The maximum number of taskq entries that are allowed to be cached.
When this limit is exceeded, transaction records (itxs)
will be cleaned synchronously.
.
.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
The number of taskq entries that are pre-populated when the taskq is first
created and are immediately available for use.
.
.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
This controls the number of threads used by
.Sy dp_zil_clean_taskq .
The default value of
.Sy 100%
will create a maximum of one thread per CPU.
.
.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
This sets the maximum block size used by the ZIL.
On very fragmented pools, lowering this
.Pq typically to Sy 36 KiB
can improve performance.
.
.It Sy zil_maxcopied Ns = Ns Sy 7680 Ns B Po 7.5 KiB Pc Pq uint
This sets the maximum number of write bytes logged via WR_COPIED.
It tunes the trade-off between an additional memory copy and possibly worse log
space efficiency, versus additional range lock/unlock operations.
.
.It Sy zil_min_commit_timeout Ns = Ns Sy 5000 Pq u64
This sets the minimum delay, in nanoseconds, for which the ZIL is willing to
delay a block commit while waiting for more records.
If ZIL writes arrive too quickly, the kernel may be unable to sleep for such a
short interval, increasing log latency above that allowed by
.Sy zfs_commit_timeout_pct .
.
.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable the cache flush commands that are normally sent to disk by
the ZIL after an LWB write has completed.
Setting this will cause ZIL corruption on power loss
if a volatile out-of-order write cache is enabled.
.
.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable intent logging replay.
This can be used to recover from a corrupted ZIL.
.
.It Sy zil_slog_bulk Ns = Ns Sy 786432 Ns B Po 768 KiB Pc Pq u64
Limit SLOG write size per commit executed with synchronous priority.
2177Any writes above that will be executed with lower (asynchronous) priority 2178to limit potential SLOG device abuse by single active ZIL writer. 2179. 2180.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int 2181Setting this tunable to zero disables ZIL logging of new 2182.Sy xattr Ns = Ns Sy sa 2183records if the 2184.Sy org.openzfs:zilsaxattr 2185feature is enabled on the pool. 2186This would only be necessary to work around bugs in the ZIL logging or replay 2187code for this record type. 2188The tunable has no effect if the feature is disabled. 2189. 2190.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint 2191Usually, one metaslab from each normal-class vdev is dedicated for use by 2192the ZIL to log synchronous writes. 2193However, if there are fewer than 2194.Sy zfs_embedded_slog_min_ms 2195metaslabs in the vdev, this functionality is disabled. 2196This ensures that we don't set aside an unreasonable amount of space for the 2197ZIL. 2198. 2199.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint 2200Whether heuristic for detection of incompressible data with zstd levels >= 3 2201using LZ4 and zstd-1 passes is enabled. 2202. 2203.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint 2204Minimal uncompressed size (inclusive) of a record before the early abort 2205heuristic will be attempted. 2206. 2207.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int 2208If non-zero, the zio deadman will produce debugging messages 2209.Pq see Sy zfs_dbgmsg_enable 2210for all zios, rather than only for leaf zios possessing a vdev. 2211This is meant to be used by developers to gain 2212diagnostic information for hang conditions which don't involve a mutex 2213or other locking primitive: typically conditions in which a thread in 2214the zio pipeline is looping indefinitely. 2215. 2216.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int 2217When an I/O operation takes more than this much time to complete, 2218it's marked as slow. 2219Each slow operation causes a delay zevent. 2220Slow I/O counters can be seen with 2221.Nm zpool Cm status Fl s . 2222. 2223.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 2224Throttle block allocations in the I/O pipeline. 2225This allows for dynamic allocation distribution when devices are imbalanced. 2226When enabled, the maximum number of pending allocations per top-level vdev 2227is limited by 2228.Sy zfs_vdev_queue_depth_pct . 2229. 2230.It Sy zfs_xattr_compat Ns = Ns 0 Ns | Ns 1 Pq int 2231Control the naming scheme used when setting new xattrs in the user namespace. 2232If 2233.Sy 0 2234.Pq the default on Linux , 2235user namespace xattr names are prefixed with the namespace, to be backwards 2236compatible with previous versions of ZFS on Linux. 2237If 2238.Sy 1 2239.Pq the default on Fx , 2240user namespace xattr names are not prefixed, to be backwards compatible with 2241previous versions of ZFS on illumos and 2242.Fx . 2243.Pp 2244Either naming scheme can be read on this and future versions of ZFS, regardless 2245of this tunable, but legacy ZFS on illumos or 2246.Fx 2247are unable to read user namespace xattrs written in the Linux format, and 2248legacy versions of ZFS on Linux are unable to read user namespace xattrs written 2249in the legacy ZFS format. 2250.Pp 2251An existing xattr with the alternate naming scheme is removed when overwriting 2252the xattr so as to not accumulate duplicates. 2253. 2254.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int 2255Prioritize requeued I/O. 2256. 
2257.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint 2258Percentage of online CPUs which will run a worker thread for I/O. 2259These workers are responsible for I/O work such as compression and 2260checksum calculations. 2261Fractional number of CPUs will be rounded down. 2262.Pp 2263The default value of 2264.Sy 80% 2265was chosen to avoid using all CPUs which can result in 2266latency issues and inconsistent application performance, 2267especially when slower compression and/or checksumming is enabled. 2268. 2269.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint 2270Number of worker threads per taskq. 2271Lower values improve I/O ordering and CPU utilization, 2272while higher reduces lock contention. 2273.Pp 2274If 2275.Sy 0 , 2276generate a system-dependent value close to 6 threads per taskq. 2277. 2278.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2279Do not create zvol device nodes. 2280This may slightly improve startup time on 2281systems with a very large number of zvols. 2282. 2283.It Sy zvol_major Ns = Ns Sy 230 Pq uint 2284Major number for zvol block devices. 2285. 2286.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long 2287Discard (TRIM) operations done on zvols will be done in batches of this 2288many blocks, where block size is determined by the 2289.Sy volblocksize 2290property of a zvol. 2291. 2292.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint 2293When adding a zvol to the system, prefetch this many bytes 2294from the start and end of the volume. 2295Prefetching these regions of the volume is desirable, 2296because they are likely to be accessed immediately by 2297.Xr blkid 8 2298or the kernel partitioner. 2299. 2300.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2301When processing I/O requests for a zvol, submit them synchronously. 2302This effectively limits the queue depth to 2303.Em 1 2304for each I/O submitter. 2305When unset, requests are handled asynchronously by a thread pool. 2306The number of requests which can be handled concurrently is controlled by 2307.Sy zvol_threads . 2308.Sy zvol_request_sync 2309is ignored when running on a kernel that supports block multiqueue 2310.Pq Li blk-mq . 2311. 2312.It Sy zvol_threads Ns = Ns Sy 0 Pq uint 2313The number of system wide threads to use for processing zvol block IOs. 2314If 2315.Sy 0 2316(the default) then internally set 2317.Sy zvol_threads 2318to the number of CPUs present or 32 (whichever is greater). 2319. 2320.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint 2321The number of threads per zvol to use for queuing IO requests. 2322This parameter will only appear if your kernel supports 2323.Li blk-mq 2324and is only read and assigned to a zvol at zvol load time. 2325If 2326.Sy 0 2327(the default) then internally set 2328.Sy zvol_blk_mq_threads 2329to the number of CPUs present. 2330. 2331.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2332Set to 2333.Sy 1 2334to use the 2335.Li blk-mq 2336API for zvols. 2337Set to 2338.Sy 0 2339(the default) to use the legacy zvol APIs. 2340This setting can give better or worse zvol performance depending on 2341the workload. 2342This parameter will only appear if your kernel supports 2343.Li blk-mq 2344and is only read and assigned to a zvol at zvol load time. 2345. 2346.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint 2347If 2348.Sy zvol_use_blk_mq 2349is enabled, then process this number of 2350.Sy volblocksize Ns -sized blocks per zvol thread. 
This tunable can be used to favor better performance for zvol reads (lower
values) or writes (higher values).
If set to
.Sy 0 ,
then the zvol layer will process the maximum number of blocks
per thread that it can.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only applied at each zvol's load time.
.
.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
The queue_depth value for the zvol
.Li blk-mq
interface.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only applied at each zvol's load time.
If
.Sy 0
(the default) then use the kernel's default queue depth.
Values are clamped to the kernel's
.Dv BLKDEV_MIN_RQ
and
.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
limits.
.
.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
Defines the behavior of zvol block devices when
.Sy volmode Ns = Ns Sy default :
.Bl -tag -compact -offset 4n -width "a"
.It Sy 1
.No equivalent to Sy full
.It Sy 2
.No equivalent to Sy dev
.It Sy 3
.No equivalent to Sy none
.El
.
.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Enable strict ZVOL quota enforcement.
Strict quota enforcement may have a performance impact.
.El
.
.Sh ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O requests.
The scheduler determines when and in what order those operations are issued.
The scheduler divides operations into five I/O classes,
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver.
Each queue defines the minimum and maximum number of concurrent operations
that may be issued to the device.
In addition, the device has an aggregate maximum,
.Sy zfs_vdev_max_active .
Note that the sum of the per-queue minima must not exceed the aggregate maximum.
If the sum of the per-queue maxima exceeds the aggregate maximum,
then the number of active operations may reach
.Sy zfs_vdev_max_active ,
in which case no further operations will be issued,
regardless of whether all per-queue minima have been met.
.Pp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers.
Furthermore, physical devices typically have a limit
at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.Pp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied.
Once all are satisfied and the aggregate maximum has not been hit,
the scheduler looks for classes whose maximum has not been satisfied.
Iteration through the I/O classes is done in the order specified above.
No further operations are issued
if the aggregate maximum number of concurrent operations has been hit,
or if there are no operations queued for an I/O class that has not hit its
maximum.
Every time an I/O operation is queued or an operation completes,
the scheduler looks for new operations to issue.
.Pp
In general, smaller
.Sy max_active Ns s
will lead to lower latency of synchronous operations.
Larger
.Sy max_active Ns s
may lead to higher overall throughput, depending on underlying storage.
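.Pp
The selection order described above can be summarized by the following
minimal sketch.
It is only an illustration of the algorithm, not the in-kernel code,
and any limits or queue depths passed to it are hypothetical values rather
than the defaults:
.Bd -literal
# Simplified model of the per-vdev class selection described above.
from dataclasses import dataclass

@dataclass
class IOClass:
    name: str
    min_active: int   # per-class zfs_vdev_*_min_active
    max_active: int   # per-class zfs_vdev_*_max_active
    active: int = 0   # operations currently issued to the device
    queued: int = 0   # operations waiting in this class's queue

def pick_class(classes, total_active, aggregate_max):
    """Return the next class to issue from, or None.
    `classes` is in priority order (sync read, sync write, async read,
    async write, scrub/resilver); `aggregate_max` models zfs_vdev_max_active."""
    if total_active >= aggregate_max:
        return None
    # First, bring every class up to its minimum.
    for c in classes:
        if c.queued and c.active < c.min_active:
            return c
    # Then fill classes up to their maximums, in priority order.
    for c in classes:
        if c.queued and c.active < c.max_active:
            return c
    return None
.Ed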
2435.Pp 2436The ratio of the queues' 2437.Sy max_active Ns s 2438determines the balance of performance between reads, writes, and scrubs. 2439For example, increasing 2440.Sy zfs_vdev_scrub_max_active 2441will cause the scrub or resilver to complete more quickly, 2442but reads and writes to have higher latency and lower throughput. 2443.Pp 2444All I/O classes have a fixed maximum number of outstanding operations, 2445except for the async write class. 2446Asynchronous writes represent the data that is committed to stable storage 2447during the syncing stage for transaction groups. 2448Transaction groups enter the syncing state periodically, 2449so the number of queued async writes will quickly burst up 2450and then bleed down to zero. 2451Rather than servicing them as quickly as possible, 2452the I/O scheduler changes the maximum number of active async write operations 2453according to the amount of dirty data in the pool. 2454Since both throughput and latency typically increase with the number of 2455concurrent operations issued to physical devices, reducing the 2456burstiness in the number of simultaneous operations also stabilizes the 2457response time of operations from other queues, in particular synchronous ones. 2458In broad strokes, the I/O scheduler will issue more concurrent operations 2459from the async write queue as there is more dirty data in the pool. 2460. 2461.Ss Async Writes 2462The number of concurrent operations issued for the async write I/O class 2463follows a piece-wise linear function defined by a few adjustable points: 2464.Bd -literal 2465 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP 2466 ^ | /^ | 2467 | | / | | 2468active | / | | 2469 I/O | / | | 2470count | / | | 2471 | / | | 2472 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP 2473 0|_______^______|_________| 2474 0% | | 100% of \fBzfs_dirty_data_max\fP 2475 | | 2476 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP 2477 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP 2478.Ed 2479.Pp 2480Until the amount of dirty data exceeds a minimum percentage of the dirty 2481data allowed in the pool, the I/O scheduler will limit the number of 2482concurrent operations to the minimum. 2483As that threshold is crossed, the number of concurrent operations issued 2484increases linearly to the maximum at the specified maximum percentage 2485of the dirty data allowed in the pool. 2486.Pp 2487Ideally, the amount of dirty data on a busy pool will stay in the sloped 2488part of the function between 2489.Sy zfs_vdev_async_write_active_min_dirty_percent 2490and 2491.Sy zfs_vdev_async_write_active_max_dirty_percent . 2492If it exceeds the maximum percentage, 2493this indicates that the rate of incoming data is 2494greater than the rate that the backend storage can handle. 2495In this case, we must further throttle incoming writes, 2496as described in the next section. 2497. 2498.Sh ZFS TRANSACTION DELAY 2499We delay transactions when we've determined that the backend storage 2500isn't able to accommodate the rate of incoming writes. 2501.Pp 2502If there is already a transaction waiting, we delay relative to when 2503that transaction will finish waiting. 2504This way the calculated delay time 2505is independent of the number of threads concurrently executing transactions. 2506.Pp 2507If we are the only waiter, wait relative to when the transaction started, 2508rather than the current time. 2509This credits the transaction for "time already served", 2510e.g. reading indirect blocks. 
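.Pp
This bookkeeping can be sketched as follows; the helper is purely
illustrative (it is not the in-kernel interface), and
.Em delay
stands for the per-transaction delay computed from the curve described below:
.Bd -literal
def tx_wait_until(now, tx_start, prev_wait_end, delay):
    """Absolute time at which this transaction may proceed."""
    if prev_wait_end is not None and prev_wait_end > now:
        # Another transaction is already waiting: queue behind it, so
        # the resulting delay is independent of how many threads are
        # concurrently executing transactions.
        return prev_wait_end + delay
    # Sole waiter: measure from when the transaction started rather
    # than from the current time, crediting time already served.
    return max(now, tx_start + delay)
.Ed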
2511.Pp 2512The minimum time for a transaction to take is calculated as 2513.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms) 2514.Pp 2515The delay has two degrees of freedom that can be adjusted via tunables. 2516The percentage of dirty data at which we start to delay is defined by 2517.Sy zfs_delay_min_dirty_percent . 2518This should typically be at or above 2519.Sy zfs_vdev_async_write_active_max_dirty_percent , 2520so that we only start to delay after writing at full speed 2521has failed to keep up with the incoming write rate. 2522The scale of the curve is defined by 2523.Sy zfs_delay_scale . 2524Roughly speaking, this variable determines the amount of delay at the midpoint 2525of the curve. 2526.Bd -literal 2527delay 2528 10ms +-------------------------------------------------------------*+ 2529 | *| 2530 9ms + *+ 2531 | *| 2532 8ms + *+ 2533 | * | 2534 7ms + * + 2535 | * | 2536 6ms + * + 2537 | * | 2538 5ms + * + 2539 | * | 2540 4ms + * + 2541 | * | 2542 3ms + * + 2543 | * | 2544 2ms + (midpoint) * + 2545 | | ** | 2546 1ms + v *** + 2547 | \fBzfs_delay_scale\fP ----------> ******** | 2548 0 +-------------------------------------*********----------------+ 2549 0% <- \fBzfs_dirty_data_max\fP -> 100% 2550.Ed 2551.Pp 2552Note, that since the delay is added to the outstanding time remaining on the 2553most recent transaction it's effectively the inverse of IOPS. 2554Here, the midpoint of 2555.Em 500 us 2556translates to 2557.Em 2000 IOPS . 2558The shape of the curve 2559was chosen such that small changes in the amount of accumulated dirty data 2560in the first three quarters of the curve yield relatively small differences 2561in the amount of delay. 2562.Pp 2563The effects can be easier to understand when the amount of delay is 2564represented on a logarithmic scale: 2565.Bd -literal 2566delay 2567100ms +-------------------------------------------------------------++ 2568 + + 2569 | | 2570 + *+ 2571 10ms + *+ 2572 + ** + 2573 | (midpoint) ** | 2574 + | ** + 2575 1ms + v **** + 2576 + \fBzfs_delay_scale\fP ----------> ***** + 2577 | **** | 2578 + **** + 2579100us + ** + 2580 + * + 2581 | * | 2582 + * + 2583 10us + * + 2584 + + 2585 | | 2586 + + 2587 +--------------------------------------------------------------+ 2588 0% <- \fBzfs_dirty_data_max\fP -> 100% 2589.Ed 2590.Pp 2591Note here that only as the amount of dirty data approaches its limit does 2592the delay start to increase rapidly. 2593The goal of a properly tuned system should be to keep the amount of dirty data 2594out of that range by first ensuring that the appropriate limits are set 2595for the I/O scheduler to reach optimal throughput on the back-end storage, 2596and then by changing the value of 2597.Sy zfs_delay_scale 2598to increase the steepness of the curve. 2599
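.Pp
As a concrete illustration of the formula and charts above, the following
sketch computes the delay for a given amount of dirty data.
It is not the kernel code; it assumes
.Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns %
and
.Sy zfs_delay_scale Ns = Ns Sy 500000 ,
and interprets
.Dq min
and
.Dq max
in the formula as that percentage of
.Sy zfs_dirty_data_max
and
.Sy zfs_dirty_data_max
itself:
.Bd -literal
def tx_delay_ns(dirty, dirty_max, min_dirty_percent=60, delay_scale=500_000):
    """Per-transaction delay in nanoseconds for `dirty` bytes of dirty data."""
    min_dirty = dirty_max * min_dirty_percent // 100
    if dirty <= min_dirty:
        return 0                           # below the threshold: no delay
    delay = delay_scale * (dirty - min_dirty) // (dirty_max - dirty)
    return min(delay, 100_000_000)         # capped at 100 ms

# At the midpoint between "min" and "max" the delay equals zfs_delay_scale:
# roughly 500 us per transaction, i.e. about 2000 IOPS, matching the charts.
dirty_max = 4 << 30                        # e.g. a 4 GiB zfs_dirty_data_max
midpoint = (dirty_max * 60 // 100 + dirty_max) // 2
print(tx_delay_ns(midpoint, dirty_max))    # ~500000
.Ed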