1.\" 2.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved. 3.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved. 4.\" Copyright (c) 2019 Datto Inc. 5.\" The contents of this file are subject to the terms of the Common Development 6.\" and Distribution License (the "License"). You may not use this file except 7.\" in compliance with the License. You can obtain a copy of the license at 8.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0. 9.\" 10.\" See the License for the specific language governing permissions and 11.\" limitations under the License. When distributing Covered Code, include this 12.\" CDDL HEADER in each file and include the License file at 13.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this 14.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your 15.\" own identifying information: 16.\" Portions Copyright [yyyy] [name of copyright owner] 17.\" 18.Dd January 10, 2023 19.Dt ZFS 4 20.Os 21. 22.Sh NAME 23.Nm zfs 24.Nd tuning of the ZFS kernel module 25. 26.Sh DESCRIPTION 27The ZFS module supports these parameters: 28.Bl -tag -width Ds 29.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 30Maximum size in bytes of the dbuf cache. 31The target size is determined by the MIN versus 32.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd 33of the target ARC size. 34The behavior of the dbuf cache and its associated settings 35can be observed via the 36.Pa /proc/spl/kstat/zfs/dbufstats 37kstat. 38. 39.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 40Maximum size in bytes of the metadata dbuf cache. 41The target size is determined by the MIN versus 42.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th 43of the target ARC size. 44The behavior of the metadata dbuf cache and its associated settings 45can be observed via the 46.Pa /proc/spl/kstat/zfs/dbufstats 47kstat. 48. 49.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint 50The percentage over 51.Sy dbuf_cache_max_bytes 52when dbufs must be evicted directly. 53. 54.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint 55The percentage below 56.Sy dbuf_cache_max_bytes 57when the evict thread stops evicting dbufs. 58. 59.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint 60Set the size of the dbuf cache 61.Pq Sy dbuf_cache_max_bytes 62to a log2 fraction of the target ARC size. 63. 64.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint 65Set the size of the dbuf metadata cache 66.Pq Sy dbuf_metadata_cache_max_bytes 67to a log2 fraction of the target ARC size. 68. 69.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint 70Set the size of the mutex array for the dbuf cache. 71When set to 72.Sy 0 73the array is dynamically sized based on total system memory. 74. 75.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint 76dnode slots allocated in a single operation as a power of 2. 77The default value minimizes lock contention for the bulk operation performed. 78. 79.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint 80Limit the amount we can prefetch with one call to this amount in bytes. 81This helps to limit the amount of memory that can be used by prefetching. 82. 83.It Sy ignore_hole_birth Pq int 84Alias for 85.Sy send_holes_without_birth_time . 86. 87.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int 88Turbo L2ARC warm-up. 89When the L2ARC is cold the fill interval will be set as fast as possible. 90. 
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and only applicable in related situations.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 2 Pq u64
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is off,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature, some MRU buffers will still be present
in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Max write bytes per interval.
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Metaslab granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each disk
before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group biasing based on their vdevs' over- or under-utilization
relative to the pool.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
Default dnode block size as a power of 2.
.
.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
Default dnode indirect block size as a power of 2.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
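.Pp
To apply a smaller limit persistently at module load time, a module option
can be used; the sketch below assumes the Linux
.Xr modprobe.d 5
mechanism and an arbitrarily chosen 64 KiB limit:
.Bd -literal -compact
# /etc/modprobe.d/zfs.conf
options zfs zfs_history_output_max=65536
.Ed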
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
Default limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
Maximum ashift used when optimizing for logical \[->] physical sector size on
new top-level vdevs.
May be increased up to
.Sy ASHIFT_MAX Po 16 Pc ,
but this may negatively impact pool space efficiency.
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy reference_history Ns = Ns Sy 3 Pq uint
Maximum reference holders being tracked when reference_tracking_enable is
active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool
import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
Maximum micro ZAP size.
A micro ZAP is upgraded to a fat ZAP once it grows beyond the specified size.
.
.It Sy zfetch_array_rd_sz Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
If prefetching is enabled, disable prefetching for reads larger than this size.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
since last time haven't completed in time to satisfy the demand request, i.e.
prefetch depth didn't cover the read latency or the pool got saturated.
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables the ARC to use scatter/gather lists; unsetting it forces all
allocations to be linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that a percentage of the ARC meta buffers, determined by
.Sy zfs_arc_dnode_limit_percent ,
may be used for dnodes.
.Pp
Also see
.Sy zfs_arc_meta_prune
which serves a similar purpose but is used
when the amount of metadata in the ARC exceeds
.Sy zfs_arc_meta_limit
rather than in response to overall demand for non-metadata.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage that can be consumed by dnodes of ARC meta buffers.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
.
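.Pp
The memory footprint quoted above follows from simple arithmetic
(a sketch using the default value and 8-byte pointers):
.Bd -literal -compact
buckets    = physical_memory / zfs_arc_average_blocksize
           = 1 GiB / 8 KiB = 131072
table size = buckets * 8 B = 1 MiB per 1 GiB of RAM
.Ed
Doubling the assumed average block size therefore halves the hash table
footprint, at the cost of longer hash chains if the real blocks are smaller.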
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a non-zero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
Under Linux, half of system memory will be used as the limit.
Under
.Fx ,
the larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
.
.It Sy zfs_arc_meta_adjust_restarts Ns = Ns Sy 4096 Pq uint
The number of restart passes to make while scanning the ARC, attempting
to free buffers in order to stay below
.Sy zfs_arc_meta_limit .
This value should not need to be tuned but is available to facilitate
performance analysis.
.
.It Sy zfs_arc_meta_limit Ns = Ns Sy 0 Ns B Pq u64
The maximum allowed size in bytes that metadata buffers may
consume in the ARC.
When this limit is reached, metadata buffers will be reclaimed,
even if the overall
.Sy arc_c_max
has not been reached.
It defaults to
.Sy 0 ,
which indicates that a percentage based on
.Sy zfs_arc_meta_limit_percent
of the ARC may be used for metadata.
.Pp
This value may be changed dynamically, except that it must be set to an
explicit value
.Pq cannot be set back to Sy 0 .
.
.It Sy zfs_arc_meta_limit_percent Ns = Ns Sy 75 Ns % Pq u64
Percentage of ARC buffers that can be used for metadata.
.Pp
See also
.Sy zfs_arc_meta_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_meta_min Ns = Ns Sy 0 Ns B Pq u64
The minimum allowed size in bytes that metadata buffers may consume in
the ARC.
.
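.Pp
Together with
.Sy zfs_arc_meta_limit ,
this provides a floor and a ceiling for ARC metadata.
As a sketch of one possible policy (not a recommendation), keeping metadata
between 1 GiB and 4 GiB could be expressed as module options:
.Bd -literal -compact
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_meta_min=1073741824
options zfs zfs_arc_meta_limit=4294967296
.Ed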
.It Sy zfs_arc_meta_prune Ns = Ns Sy 10000 Pq int
The number of dentries and inodes to be scanned looking for entries
which can be dropped.
This may be required when the ARC reaches the
.Sy zfs_arc_meta_limit
because dentries and inodes can pin buffers in the ARC.
Increasing this value will cause the dentry and inode caches
to be pruned more aggressively.
Setting this value to
.Sy 0
will disable pruning the inode and dentry caches.
.
.It Sy zfs_arc_meta_strategy Ns = Ns Sy 1 Ns | Ns 0 Pq uint
Define the strategy for ARC metadata buffer eviction (meta reclaim strategy):
.Bl -tag -compact -offset 4n -width "0 (META_ONLY)"
.It Sy 0 Pq META_ONLY
evict only the ARC metadata buffers
.It Sy 1 Pq BALANCED
additional data buffers may be evicted if required
to evict the required number of metadata buffers.
.El
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to the number of CPUs,
but that was not proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts the ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
The started reclamation process continues until the ARC size returns below the
target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
.
.It Sy zfs_arc_p_min_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_p_min_shift Pq default Sy 4
with the new value.
.Sy arc_p_min_shift No is used as a shift of Sy arc_c
when calculating the minimum
.Sy arc_p No size .
.
.It Sy zfs_arc_p_dampener_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable
.Sy arc_p
adapt dampener, which reduces the maximum single adjustment to
.Sy arc_p .
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_FILE_PAGES ) ,
where that percent may exceed
.Sy 100 .
This only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
.Pp
The default limit of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512 KiB No and Sy all_system_memory/64 .
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 5 Ns % Pq uint
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
Minimum percent of obsolete bytes in vdev mapping required to attempt to
condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete
mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
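.Pp
As an illustration of how the deadman tunables combine (runtime paths assume
the Linux module parameter interface), the following treats an individual I/O
as "hung" after two minutes and re-dispatches it instead of only logging it:
.Bd -literal -compact
# echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode
# echo 120000 > /sys/module/zfs/parameters/zfs_deadman_ziotime_ms
.Ed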
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay and deadman zevents (which report slow I/O operations) to this
many per second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more often flushing occurs, destroying log
blocks quicker as they become obsolete faster, which leaves fewer blocks
to be read during import time after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation in which we are flushing all our metaslabs every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
It effectively limits the maximum number of unflushed per-TXG spacemap logs
that need to be read after an unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is
available.
This only applies on Linux.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_max_max Ns = Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
This parameter takes precedence over
.Sy zfs_dirty_data_max_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy min(physical_ram/4, 4GiB) ,
or
.Sy min(physical_ram/4, 1GiB)
for 32-bit systems.
.
.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed as a percentage of physical RAM.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
The parameter
.Sy zfs_dirty_data_max_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
Determines the dirty space limit, expressed as a percentage of all memory.
Once this limit is exceeded, new writes are halted until space frees up.
The parameter
.Sy zfs_dirty_data_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Subject to
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
Start syncing out a transaction group if there's at least this much dirty data
.Pq as a percentage of Sy zfs_dirty_data_max .
This should be less than
.Sy zfs_vdev_async_write_active_min_dirty_percent .
.
.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
Write operations are throttled when approaching the limit until log data is
cleared out after transaction group sync.
Because of some overhead, it should be set to at least 2 times the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput .
It also should be smaller than the size of the slog device if slog is present.
.Pp
Defaults to
.Sy zfs_dirty_data_max*2
.
.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
preallocated for a file in order to guarantee that later writes will not
run out of space.
Instead,
.Xr fallocate 2
space preallocation only checks that sufficient space is currently available
in the pool or the user's project quota allocation,
and then creates a sparse file of the requested size.
The requested space is multiplied by
.Sy zfs_fallocate_reserve_percent
to allow additional space for indirect blocks and other internal metadata.
Setting this to
.Sy 0
disables support for
.Xr fallocate 2
and causes it to return
.Sy EOPNOTSUPP .
.
.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
Select a fletcher 4 implementation.
.Pp
Supported selectors are:
.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
.No and Sy aarch64_neon .
All except
.Sy fastest No and Sy scalar
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of fletcher 4 are available, the
.Sy fastest
will be chosen using a micro benchmark.
Selecting
.Sy scalar
results in the original CPU-based calculation being used.
Selecting any option other than
.Sy fastest No or Sy scalar
results in vector instructions
from the respective CPU instruction set being used.
.
.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
Select a BLAKE3 implementation.
.Pp
Supported selectors are:
.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
All except
.Sy cycle , fastest No and Sy generic
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of BLAKE3 are available, the
.Sy fastest
will be chosen using a micro benchmark.
You can see the benchmark results by reading this kstat file:
.Pa /proc/spl/kstat/zfs/chksum_bench .
.
.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable the processing of the free_bpobj object.
.
.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
Maximum number of blocks freed in a single TXG.
.
.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
Maximum number of dedup blocks freed in a single TXG.
.
.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
Maximum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
Minimum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
When the pool has more than this much dirty data, use
.Sy zfs_vdev_async_write_max_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
When the pool has less than this much dirty data, use
.Sy zfs_vdev_async_write_min_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
Maximum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
Minimum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.Pp
Lower values are associated with better latency on rotational media but poorer
resilver performance.
The default value of
.Sy 2
was chosen as a compromise.
A value of
.Sy 3
has been shown to improve resilver performance further at a cost of
further increasing latency.
.
.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
Maximum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
Minimum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
The maximum number of I/O operations active to each device.
Ideally, this will be at least the sum of each queue's
.Sy max_active .
.No See Sx ZFS I/O SCHEDULER .
.
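.Pp
With the defaults documented in this manual, the per-queue maxima sum to well
below that value (a worked check using the
.Sy max_active
defaults listed above and below):
.Bd -literal -compact
async_read 3 + async_write 10 + sync_read 10 + sync_write 10
  + scrub 2 + trim 2 + removal 2 + initializing 1 + rebuild 3 = 43
.Ed
So with the defaults, the per-queue limits, rather than this one, are normally
the binding constraint.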
.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
Timeout value to wait before determining a device is missing
during import.
This is helpful for transient missing paths due
to links being briefly removed and recreated in response to
udev events.
.
.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
Maximum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
Minimum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
Maximum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
Minimum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
Maximum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
Minimum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
Maximum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
Minimum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
Maximum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
Minimum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
Maximum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
Minimum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
the number of concurrently-active I/O operations is limited to
.Sy zfs_*_min_active ,
unless the vdev is "idle".
When there are no interactive I/O operations active (synchronous or otherwise),
and
.Sy zfs_vdev_nia_delay
operations have completed since the last interactive operation,
then the vdev is considered to be "idle",
and the number of concurrently-active non-interactive operations is increased to
.Sy zfs_*_max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
Some HDDs tend to prioritize sequential I/O so strongly that concurrent
random I/O latency reaches several seconds.
On some HDDs this happens even if sequential I/O operations
are submitted one at a time, and so setting
.Sy zfs_*_max_active Ns = Sy 1
does not help.
To prevent non-interactive I/O, like scrub,
from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit
operations can be sent
while there are outstanding incomplete interactive operations.
This enforced wait ensures the HDD services the interactive I/O
within a reasonable amount of time.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq uint
Maximum number of queued allocations per top-level vdev expressed as
a percentage of
.Sy zfs_vdev_async_write_max_active ,
which allows the system to detect devices that are more capable
of handling allocations and to allocate more blocks to those devices.
This allows for dynamic allocation distribution when devices are imbalanced,
as fuller devices will tend to be slower than empty devices.
.Pp
Also see
.Sy zio_dva_throttle_enabled .
.
.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
Defines if the driver should retire on a given error type.
The following options may be bitwise-ored together:
.TS
box tab(|);
lbz r l l .
|Value|Name|Description
_
|1|Device|No driver retries on device errors.
|2|Transport|No driver retries on transport errors.
|4|Driver|No driver retries on driver errors.
.TE
.
.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
Time before expiring
.Pa .zfs/snapshot .
.
.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow the creation, removal, or renaming of entries in the
.Sy .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled, this functionality works both locally and over NFS exports
which have the
.Em no_root_squash
option set.
.
.It Sy zfs_flags Ns = Ns Sy 0 Pq int
Set additional debugging flags.
The following flags may be bitwise-ored together:
.TS
box tab(|);
lbz r l l .
|Value|Name|Description
_
|1|ZFS_DEBUG_DPRINTF|Enable dprintf entries in the debug log.
*|2|ZFS_DEBUG_DBUF_VERIFY|Enable extra dbuf verifications.
*|4|ZFS_DEBUG_DNODE_VERIFY|Enable extra dnode verifications.
|8|ZFS_DEBUG_SNAPNAMES|Enable snapshot name verification.
*|16|ZFS_DEBUG_MODIFY|Check for illegally modified ARC buffers.
|64|ZFS_DEBUG_ZIO_FREE|Enable verification of block frees.
|128|ZFS_DEBUG_HISTOGRAM_VERIFY|Enable extra spacemap histogram verifications.
|256|ZFS_DEBUG_METASLAB_VERIFY|Verify space accounting on disk matches in-memory \fBrange_trees\fP.
|512|ZFS_DEBUG_SET_ERROR|Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
|1024|ZFS_DEBUG_INDIRECT_REMAP|Verify split blocks created by device removal.
|2048|ZFS_DEBUG_TRIM|Verify TRIM ranges are always within the allocatable range tree.
|4096|ZFS_DEBUG_LOG_SPACEMAP|Verify that the log summary is consistent with the spacemap log
|||and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
.TE
.Sy \& * No Requires debug build .
.
.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
Enables btree verification.
The following settings are cumulative:
.TS
box tab(|);
lbz r l l .
|Value|Description

|1|Verify height.
|2|Verify pointers from children to parent.
|3|Verify element counts.
|4|Verify element order. (expensive)
*|5|Verify unused memory is poisoned. (expensive)
.TE
.Sy \& * No Requires debug build .
.
.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
If destroy encounters an
.Sy EIO
while reading metadata (e.g. indirect blocks),
space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled",
as it is unable to make forward progress.
While in this stalled state, all remaining space to free
from the error-encountering filesystem is "temporarily leaked".
Set this flag to cause it to ignore the
.Sy EIO ,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.
.Pp
The default "stalling" behavior is useful if the storage partially
fails (i.e. some but not all I/O operations fail), and then later recovers.
In this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks.
Note, however, that this case is actually fairly rare.
.Pp
Typically pools either
.Bl -enum -compact -offset 4n -width "1."
.It
fail completely (but perhaps temporarily,
e.g. due to a top-level vdev going offline), or
.It
have localized, permanent errors (e.g. disk returns the wrong data
due to bit flip or firmware bug).
.El
In the former case, this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless.
In the latter, because the error is permanent, the best we can do
is leak the minimum amount of space,
which is what setting this flag will do.
It is therefore reasonable for this flag to normally be set,
but we chose the more conservative approach of not setting it,
so that there is no possibility of
leaking space in the "partial temporary" failure case.
.
.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
During a
.Nm zfs Cm destroy
operation using the
.Sy async_destroy
feature,
a minimum of this much time will be spent working on freeing blocks per TXG.
.
.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
Similar to
.Sy zfs_free_min_time_ms ,
but for cleanup of old indirection records for removed vdevs.
.
.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
Largest data block to write to the ZIL.
Larger blocks will be treated as if the dataset being written to had the
.Sy logbias Ns = Ns Sy throughput
property set.
.
.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
Pattern written to vdev free space by
.Xr zpool-initialize 8 .
.
.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Size of writes used by
.Xr zpool-initialize 8 .
This option is used by the test suite.
.
.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
The threshold size (in block pointers) at which we create a new sub-livelist.
Larger sublists are more costly from a memory perspective, but the fewer
sublists there are, the lower the cost of insertion.
.
.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
If the amount of shared space between a snapshot and its clone drops below
this threshold, the clone turns off the livelist and reverts to the old
deletion method.
This is in place because livelists no longer give us a benefit
once a clone has been overwritten enough.
.
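.Pp
Like most of the tunables in this manual, this parameter can usually be
inspected and changed on a live system through
.Pa /sys/module/zfs/parameters ,
or set persistently as a ZFS module option.
The following is only an illustrative sketch; the value shown is arbitrary,
and the exact file name under
.Pa /etc/modprobe.d
varies between distributions:
.Bd -literal -compact
# Read the current value.
cat /sys/module/zfs/parameters/zfs_livelist_min_percent_shared

# Change it for the running system (reverts when the module is reloaded).
echo 50 > /sys/module/zfs/parameters/zfs_livelist_min_percent_shared

# Apply it on every module load (example path).
echo "options zfs zfs_livelist_min_percent_shared=50" >> /etc/modprobe.d/zfs.conf
.Ed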
1516.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int 1517Incremented each time an extra ALLOC blkptr is added to a livelist entry while 1518it is being condensed. 1519This option is used by the test suite to track race conditions. 1520. 1521.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int 1522Incremented each time livelist condensing is canceled while in 1523.Fn spa_livelist_condense_sync . 1524This option is used by the test suite to track race conditions. 1525. 1526.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int 1527When set, the livelist condense process pauses indefinitely before 1528executing the synctask \(em 1529.Fn spa_livelist_condense_sync . 1530This option is used by the test suite to trigger race conditions. 1531. 1532.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int 1533Incremented each time livelist condensing is canceled while in 1534.Fn spa_livelist_condense_cb . 1535This option is used by the test suite to track race conditions. 1536. 1537.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int 1538When set, the livelist condense process pauses indefinitely before 1539executing the open context condensing work in 1540.Fn spa_livelist_condense_cb . 1541This option is used by the test suite to trigger race conditions. 1542. 1543.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64 1544The maximum execution time limit that can be set for a ZFS channel program, 1545specified as a number of Lua instructions. 1546. 1547.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64 1548The maximum memory limit that can be set for a ZFS channel program, specified 1549in bytes. 1550. 1551.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int 1552The maximum depth of nested datasets. 1553This value can be tuned temporarily to 1554fix existing datasets that exceed the predefined limit. 1555. 1556.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64 1557The number of past TXGs that the flushing algorithm of the log spacemap 1558feature uses to estimate incoming log blocks. 1559. 1560.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64 1561Maximum number of rows allowed in the summary of the spacemap log. 1562. 1563.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint 1564We currently support block sizes from 1565.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc . 1566The benefits of larger blocks, and thus larger I/O, 1567need to be weighed against the cost of COWing a giant block to modify one byte. 1568Additionally, very large blocks can have an impact on I/O latency, 1569and also potentially on the memory allocator. 1570Therefore, we formerly forbade creating blocks larger than 1M. 1571Larger blocks could be created by changing it, 1572and pools with larger blocks can always be imported and used, 1573regardless of this setting. 1574. 1575.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int 1576Allow datasets received with redacted send/receive to be mounted. 1577Normally disabled because these datasets may be missing key data. 1578. 1579.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64 1580Minimum number of metaslabs to flush per dirty TXG. 1581. 1582.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint 1583Allow metaslabs to keep their active state as long as their fragmentation 1584percentage is no more than this value. 
1585An active metaslab that exceeds this threshold 1586will no longer keep its active status allowing better metaslabs to be selected. 1587. 1588.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint 1589Metaslab groups are considered eligible for allocations if their 1590fragmentation metric (measured as a percentage) is less than or equal to 1591this value. 1592If a metaslab group exceeds this threshold then it will be 1593skipped unless all metaslab groups within the metaslab class have also 1594crossed this threshold. 1595. 1596.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint 1597Defines a threshold at which metaslab groups should be eligible for allocations. 1598The value is expressed as a percentage of free space 1599beyond which a metaslab group is always eligible for allocations. 1600If a metaslab group's free space is less than or equal to the 1601threshold, the allocator will avoid allocating to that group 1602unless all groups in the pool have reached the threshold. 1603Once all groups have reached the threshold, all groups are allowed to accept 1604allocations. 1605The default value of 1606.Sy 0 1607disables the feature and causes all metaslab groups to be eligible for 1608allocations. 1609.Pp 1610This parameter allows one to deal with pools having heavily imbalanced 1611vdevs such as would be the case when a new vdev has been added. 1612Setting the threshold to a non-zero percentage will stop allocations 1613from being made to vdevs that aren't filled to the specified percentage 1614and allow lesser filled vdevs to acquire more allocations than they 1615otherwise would under the old 1616.Sy zfs_mg_alloc_failures 1617facility. 1618. 1619.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int 1620If enabled, ZFS will place DDT data into the special allocation class. 1621. 1622.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int 1623If enabled, ZFS will place user data indirect blocks 1624into the special allocation class. 1625. 1626.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint 1627Historical statistics for this many latest multihost updates will be available 1628in 1629.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost . 1630. 1631.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64 1632Used to control the frequency of multihost writes which are performed when the 1633.Sy multihost 1634pool property is on. 1635This is one of the factors used to determine the 1636length of the activity check during import. 1637.Pp 1638The multihost write period is 1639.Sy zfs_multihost_interval No / Sy leaf-vdevs . 1640On average a multihost write will be issued for each leaf vdev 1641every 1642.Sy zfs_multihost_interval 1643milliseconds. 1644In practice, the observed period can vary with the I/O load 1645and this observed value is the delay which is stored in the uberblock. 1646. 1647.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint 1648Used to control the duration of the activity test on import. 1649Smaller values of 1650.Sy zfs_multihost_import_intervals 1651will reduce the import time but increase 1652the risk of failing to detect an active pool. 1653The total activity check time is never allowed to drop below one second. 1654.Pp 1655On import the activity check waits a minimum amount of time determined by 1656.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals , 1657or the same product computed on the host which last had the pool imported, 1658whichever is greater. 
The activity check time may be further extended if the value of MMP
delay found in the best uberblock indicates actual multihost updates happened
at longer intervals than
.Sy zfs_multihost_interval .
A minimum of
.Em 100 ms
is enforced.
.Pp
.Sy 0 No is equivalent to Sy 1 .
.
.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
Controls the behavior of the pool when multihost write failures or delays are
detected.
.Pp
When
.Sy 0 ,
multihost write failures or delays are ignored.
The failures will still be reported to the ZED which, depending on
its configuration, may take action such as suspending the pool or offlining a
device.
.Pp
Otherwise, the pool will be suspended if
.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval
milliseconds pass without a successful MMP write.
This guarantees the activity test will see MMP writes if the pool is imported.
.Sy 1 No is equivalent to Sy 2 ;
this is necessary to prevent the pool from being suspended
due to normal, small I/O latency variations.
.
.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
Set to disable scrub I/O.
This results in scrubs not actually scrubbing data and
simply doing a metadata crawl of the pool instead.
.
.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Set to disable block prefetching for scrubs.
.
.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable cache flush operations on disks when writing.
Setting this will cause pool corruption on power loss
if a volatile out-of-order write cache is enabled.
.
.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Allow no-operation writes.
The occurrence of nopwrites will further depend on other pool properties
.Pq i.a. the checksumming and compression algorithms .
.
.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable forcing TXG sync to find holes.
When enabled, this forces ZFS to sync data when
.Sy SEEK_HOLE No or Sy SEEK_DATA
flags are used, allowing holes in a file to be accurately reported.
When disabled, holes will not be reported in recently dirtied files.
.
.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int
The number of bytes which should be prefetched during a pool traversal, like
.Nm zfs Cm send
or other data crawling operations.
.
.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint
The number of blocks pointed to by an indirect (non-L0) block which should be
prefetched during a pool traversal, like
.Nm zfs Cm send
or other data crawling operations.
.
.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64
Control percentage of dirtied indirect blocks from frees allowed into one TXG.
After this threshold is crossed, additional frees will wait until the next TXG.
.Sy 0 No disables this throttle .
.
.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable predictive prefetch.
Note that it leaves "prescient" prefetch
.Pq for, e.g., Nm zfs Cm send
intact.
Unlike predictive prefetch, prescient prefetch never issues I/O
that ends up not being needed, so it can't hurt performance.
.
.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable QAT hardware acceleration for SHA256 checksums.
1739May be unset after the ZFS modules have been loaded to initialize the QAT 1740hardware as long as support is compiled in and the QAT driver is present. 1741. 1742.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1743Disable QAT hardware acceleration for gzip compression. 1744May be unset after the ZFS modules have been loaded to initialize the QAT 1745hardware as long as support is compiled in and the QAT driver is present. 1746. 1747.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1748Disable QAT hardware acceleration for AES-GCM encryption. 1749May be unset after the ZFS modules have been loaded to initialize the QAT 1750hardware as long as support is compiled in and the QAT driver is present. 1751. 1752.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 1753Bytes to read per chunk. 1754. 1755.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint 1756Historical statistics for this many latest reads will be available in 1757.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads . 1758. 1759.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int 1760Include cache hits in read history 1761. 1762.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 1763Maximum read segment size to issue when sequentially resilvering a 1764top-level vdev. 1765. 1766.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 1767Automatically start a pool scrub when the last active sequential resilver 1768completes in order to verify the checksums of all blocks which have been 1769resilvered. 1770This is enabled by default and strongly recommended. 1771. 1772.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64 1773Maximum amount of I/O that can be concurrently issued for a sequential 1774resilver per leaf device, given in bytes. 1775. 1776.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int 1777If an indirect split block contains more than this many possible unique 1778combinations when being reconstructed, consider it too computationally 1779expensive to check them all. 1780Instead, try at most this many randomly selected 1781combinations each time the block is accessed. 1782This allows all segment copies to participate fairly 1783in the reconstruction when all combinations 1784cannot be checked and prevents repeated use of one bad copy. 1785. 1786.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int 1787Set to attempt to recover from fatal errors. 1788This should only be used as a last resort, 1789as it typically results in leaked space, or worse. 1790. 1791.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int 1792Ignore hard I/O errors during device removal. 1793When set, if a device encounters a hard I/O error during the removal process 1794the removal will not be cancelled. 1795This can result in a normally recoverable block becoming permanently damaged 1796and is hence not recommended. 1797This should only be used as a last resort when the 1798pool cannot be returned to a healthy state prior to removing the device. 1799. 1800.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint 1801This is used by the test suite so that it can ensure that certain actions 1802happen while in the middle of a removal. 1803. 1804.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint 1805The largest contiguous segment that we will attempt to allocate when removing 1806a device. 
1807If there is a performance problem with attempting to allocate large blocks, 1808consider decreasing this. 1809The default value is also the maximum. 1810. 1811.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int 1812Ignore the 1813.Sy resilver_defer 1814feature, causing an operation that would start a resilver to 1815immediately restart the one in progress. 1816. 1817.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint 1818Resilvers are processed by the sync thread. 1819While resilvering, it will spend at least this much time 1820working on a resilver between TXG flushes. 1821. 1822.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int 1823If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub), 1824even if there were unrepairable errors. 1825Intended to be used during pool repair or recovery to 1826stop resilvering when the pool is next imported. 1827. 1828.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint 1829Scrubs are processed by the sync thread. 1830While scrubbing, it will spend at least this much time 1831working on a scrub between TXG flushes. 1832. 1833.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint 1834To preserve progress across reboots, the sequential scan algorithm periodically 1835needs to stop metadata scanning and issue all the verification I/O to disk. 1836The frequency of this flushing is determined by this tunable. 1837. 1838.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint 1839This tunable affects how scrub and resilver I/O segments are ordered. 1840A higher number indicates that we care more about how filled in a segment is, 1841while a lower number indicates we care more about the size of the extent without 1842considering the gaps within a segment. 1843This value is only tunable upon module insertion. 1844Changing the value afterwards will have no effect on scrub or resilver 1845performance. 1846. 1847.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint 1848Determines the order that data will be verified while scrubbing or resilvering: 1849.Bl -tag -compact -offset 4n -width "a" 1850.It Sy 1 1851Data will be verified as sequentially as possible, given the 1852amount of memory reserved for scrubbing 1853.Pq see Sy zfs_scan_mem_lim_fact . 1854This may improve scrub performance if the pool's data is very fragmented. 1855.It Sy 2 1856The largest mostly-contiguous chunk of found data will be verified first. 1857By deferring scrubbing of small segments, we may later find adjacent data 1858to coalesce and increase the segment size. 1859.It Sy 0 1860.No Use strategy Sy 1 No during normal verification 1861.No and strategy Sy 2 No while taking a checkpoint . 1862.El 1863. 1864.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int 1865If unset, indicates that scrubs and resilvers will gather metadata in 1866memory before issuing sequential I/O. 1867Otherwise indicates that the legacy algorithm will be used, 1868where I/O is initiated as soon as it is discovered. 1869Unsetting will not affect scrubs or resilvers that are already in progress. 1870. 1871.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int 1872Sets the largest gap in bytes between scrub/resilver I/O operations 1873that will still be considered sequential for sorting purposes. 1874Changing this value will not 1875affect scrubs or resilvers that are already in progress. 1876. 
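.Pp
Because changes to this tunable (and to several other scan tunables above) do
not affect a scrub or resilver that is already running, a new value is
typically applied by restarting the scan.
This is only a sketch; the pool name and value are examples:
.Bd -literal -compact
zpool scrub -s tank
echo 4194304 > /sys/module/zfs/parameters/zfs_scan_max_ext_gap
zpool scrub tank
.Ed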
.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
Maximum fraction of RAM used for I/O sorting by the sequential scan algorithm.
This tunable determines the hard limit for I/O sorting memory usage.
When the hard limit is reached, we stop scanning metadata and start issuing
data verification I/O.
This is done until we get below the soft limit.
.
.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fraction of the hard limit used to determine the soft limit for I/O sorting
by the sequential scan algorithm.
When we cross this limit from below, no action is taken.
When we cross this limit from above, it is because we are issuing verification
I/O.
In this case (unless the metadata scan is done), we stop issuing verification I/O
and start scanning metadata again until we get to the hard limit.
.
.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When reporting resilver throughput and estimated completion time, use the
performance observed over roughly the last
.Sy zfs_scan_report_txgs
TXGs.
When set to zero, performance is calculated over the time between checkpoints.
.
.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enforce tight memory limits on pool scans when a sequential scan is in progress.
When disabled, the memory limit may be exceeded by fast disks.
.
.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
Freezes a scrub/resilver in progress without actually pausing it.
Intended for testing/debugging.
.
.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum amount of data that can be concurrently issued for scrubs and
resilvers per leaf device, given in bytes.
.
.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow sending of corrupt data (ignore read/checksum errors when sending).
.
.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
Include unmodified spill blocks in the send stream.
Under certain circumstances, previous versions of ZFS could incorrectly
remove the spill block from an existing object.
Including unmodified copies of the spill blocks creates a backwards-compatible
stream which will recreate a spill block if it was incorrectly removed.
.
.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm send
internal queues.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
The maximum number of bytes allowed in
.Nm zfs Cm send Ns 's
internal queues.
.
.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm send
prefetch queue.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes that will be prefetched by
.Nm zfs Cm send .
This value must be at least twice the maximum block size in use.
.
.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm receive
queue.
The fill fraction controls the timing with which internal threads are woken up.
.
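.Pp
For example, because
.Sy zfs_send_queue_length
(and the analogous receive queue length described below) must be at least
twice the largest block size in use, a dataset using
.Sy recordsize Ns = Ns Sy 16M
would need both queues to be at least 32 MiB (33554432 bytes).
The values below are only illustrative:
.Bd -literal -compact
echo 33554432 > /sys/module/zfs/parameters/zfs_send_queue_length
echo 33554432 > /sys/module/zfs/parameters/zfs_recv_queue_length
.Ed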
.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes allowed in the
.Nm zfs Cm receive
queue.
This value must be at least twice the maximum block size in use.
.
.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
The maximum amount of data, in bytes, that
.Nm zfs Cm receive
will write in one DMU transaction.
This is the uncompressed size, even when receiving a compressed send stream.
This setting will not reduce the write size below a single block.
Capped at a maximum of
.Sy 32 MiB .
.
.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
When this variable is set to non-zero, a corrective receive:
.Bl -enum -compact -offset 4n -width "1."
.It
Does not enforce the restriction of source & destination snapshot GUIDs
matching.
.It
If there is an error during healing, the healing receive is not
terminated; instead, it moves on to the next record.
.El
.
.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Setting this variable overrides the default logic for estimating block
sizes when doing a
.Nm zfs Cm send .
The default heuristic is that the average block size
will be the current recordsize.
Override this value if most data in your dataset is not of that size
and you require accurate zfs send size estimates.
.
.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
Flushing of data to disk is done in passes.
Defer frees starting in this pass.
.
.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum memory used for prefetching a checkpoint's space map on each
vdev while discarding the checkpoint.
.
.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint
Only allow small data blocks to be allocated on the special and dedup vdev
types when the available free space percentage on these vdevs exceeds this
value.
This ensures reserved space is available for pool metadata as the
special vdevs approach capacity.
.
.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint
Starting in this sync pass, disable compression (including of metadata).
With the default setting, in practice, we don't have this many sync passes,
so this has no effect.
.Pp
The original intent was that disabling compression would help the sync passes
to converge.
However, in practice, disabling compression increases
the average number of sync passes, because when we turn compression off,
many blocks' size will change, and thus we have to re-allocate
(not overwrite) them.
It also increases the number of
.Em 128 KiB
allocations (e.g. for indirect blocks and spacemaps)
because these will not be compressed.
The
.Em 128 KiB
allocations are especially detrimental to performance
on highly fragmented systems, which may have very few free segments of this
size,
and may need to load new metaslabs to satisfy these allocations.
.
.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint
Rewrite new block pointers starting in this pass.
.
.It Sy zfs_sync_taskq_batch_pct Ns = Ns Sy 75 Ns % Pq int
This controls the number of threads used by
.Sy dp_sync_taskq .
The default value of
.Sy 75%
will create a maximum of one thread per CPU.
.
.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Maximum size of TRIM command.
Larger ranges will be split into chunks no larger than this value before
issuing.
.
.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
Minimum size of TRIM commands.
TRIM ranges smaller than this will be skipped,
unless they're part of a larger range which was chunked.
This is done because it's common for these small TRIMs
to negatively impact overall performance.
.
.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Skip uninitialized metaslabs during the TRIM process.
This option is useful for pools constructed from large thinly-provisioned
devices
where TRIM operations are slow.
As a pool ages, an increasing fraction of the pool's metaslabs
will be initialized, progressively degrading the usefulness of this option.
This setting is stored when starting a manual TRIM and will
persist for the duration of the requested TRIM.
.
.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
Maximum number of queued TRIMs outstanding per leaf vdev.
The number of concurrent TRIM commands issued to the device is controlled by
.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
.
.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
The number of transaction groups' worth of frees which should be aggregated
before TRIM operations are issued to the device.
This setting represents a trade-off between issuing larger,
more efficient TRIM operations and the delay
before the recently trimmed space is available for use by the device.
.Pp
Increasing this value will allow frees to be aggregated for a longer time.
This will result in larger TRIM operations and potentially increased memory
usage.
Decreasing this value will have the opposite effect.
The default of
.Sy 32
was determined to be a reasonable compromise.
.
.It Sy zfs_txg_history Ns = Ns Sy 0 Pq uint
Historical statistics for this many latest TXGs will be available in
.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
.
.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
Flush dirty data to disk at least once every this many seconds (maximum TXG
duration).
.
.It Sy zfs_vdev_aggregate_trim Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Allow TRIM I/O operations to be aggregated.
This is normally not helpful because the extents to be trimmed
will already have been aggregated by the metaslab.
This option is provided for debugging and performance analysis.
.
.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
Max vdev I/O aggregation size.
.
.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
Max vdev I/O aggregation size for non-rotating media.
.
.It Sy zfs_vdev_cache_bshift Ns = Ns Sy 16 Po 64 KiB Pc Pq uint
Shift size to inflate reads to.
.
.It Sy zfs_vdev_cache_max Ns = Ns Sy 16384 Ns B Po 16 KiB Pc Pq uint
Inflate reads smaller than this value to meet the
.Sy zfs_vdev_cache_bshift
size
.Pq default Sy 64 KiB .
.
.It Sy zfs_vdev_cache_size Ns = Ns Sy 0 Pq uint
Total size of the per-disk cache in bytes.
.Pp
Currently this feature is disabled, as it has been found to not be helpful
for performance and in some cases harmful.
.
.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
immediately follows its predecessor on rotational vdevs.
.
.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
lacks locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this distance that do not immediately follow the previous
operation are incremented by half.
.
.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
The maximum distance from the last queued I/O operation within which
the balancing algorithm considers an operation to have locality.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member on non-rotational vdevs
when I/O operations do not immediately follow one another.
.
.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
lacks locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this distance that do not immediately follow the previous
operation are incremented by half.
.
.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
Aggregate read I/O operations if the on-disk gap between them is within this
threshold.
.
.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint
Aggregate write I/O operations if the on-disk gap between them is within this
threshold.
.
.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
Select the raidz parity implementation to use.
.Pp
Variants that don't depend on CPU-specific features
may be selected on module load, as they are supported on all systems.
The remaining options may only be set after the module is loaded,
as they are available only if the implementations are compiled in
and supported on the running system.
.Pp
Once the module is loaded,
.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
will show the available options,
with the currently selected one enclosed in square brackets.
.Pp
.TS
lb l l .
fastest	selected by built-in benchmark
original	original implementation
scalar	scalar implementation
sse2	SSE2 instruction set	64-bit x86
ssse3	SSSE3 instruction set	64-bit x86
avx2	AVX2 instruction set	64-bit x86
avx512f	AVX512F instruction set	64-bit x86
avx512bw	AVX512F & AVX512BW instruction sets	64-bit x86
aarch64_neon	NEON	Aarch64/64-bit ARMv8
aarch64_neonx2	NEON with more unrolling	Aarch64/64-bit ARMv8
powerpc_altivec	Altivec	PowerPC
.TE
.
.It Sy zfs_vdev_scheduler Pq charp
.Sy DEPRECATED .
Prints warning to kernel log for compatibility.
.
.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint
Max event queue length.
Events in the queue can be viewed with
.Xr zpool-events 8 .
.
.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
Maximum recent zevent records to retain for duplicate checking.
Setting this to
.Sy 0
disables duplicate detection.
.
.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
Lifespan for a recent ereport that was retained for duplicate checking.
.
.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
The maximum number of taskq entries that are allowed to be cached.
When this limit is exceeded, transaction records (itxs)
will be cleaned synchronously.
.
.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
The number of taskq entries that are pre-populated when the taskq is first
created and are immediately available for use.
.
.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
This controls the number of threads used by
.Sy dp_zil_clean_taskq .
The default value of
.Sy 100%
will create a maximum of one thread per CPU.
.
.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
This sets the maximum block size used by the ZIL.
On very fragmented pools, lowering this
.Pq typically to Sy 36 KiB
can improve performance.
.
.It Sy zil_min_commit_timeout Ns = Ns Sy 5000 Pq u64
This sets the minimum delay in nanoseconds that the ZIL is willing to wait
before committing a block, in the hope of accumulating more records.
If ZIL writes are too fast, the kernel may be unable to sleep for such a short
interval, increasing log latency above what is allowed by
.Sy zfs_commit_timeout_pct .
.
.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable the cache flush commands that are normally sent to disk by
the ZIL after an LWB write has completed.
Setting this will cause ZIL corruption on power loss
if a volatile out-of-order write cache is enabled.
.
.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable intent logging replay.
Can be useful for recovery from a corrupted ZIL.
.
.It Sy zil_slog_bulk Ns = Ns Sy 786432 Ns B Po 768 KiB Pc Pq u64
Limit SLOG write size per commit executed with synchronous priority.
Any writes above that size will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
.
.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int
Setting this tunable to zero disables ZIL logging of new
.Sy xattr Ns = Ns Sy sa
records if the
.Sy org.openzfs:zilsaxattr
feature is enabled on the pool.
This would only be necessary to work around bugs in the ZIL logging or replay
code for this record type.
The tunable has no effect if the feature is disabled.
.
.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint
Usually, one metaslab from each normal-class vdev is dedicated for use by
the ZIL to log synchronous writes.
However, if there are fewer than
.Sy zfs_embedded_slog_min_ms
metaslabs in the vdev, this functionality is disabled.
This ensures that we don't set aside an unreasonable amount of space for the
ZIL.
.
.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint
Whether the heuristic for detection of incompressible data with zstd levels >= 3,
using LZ4 and zstd-1 passes, is enabled.
.
2264.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint 2265Minimal uncompressed size (inclusive) of a record before the early abort 2266heuristic will be attempted. 2267. 2268.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int 2269If non-zero, the zio deadman will produce debugging messages 2270.Pq see Sy zfs_dbgmsg_enable 2271for all zios, rather than only for leaf zios possessing a vdev. 2272This is meant to be used by developers to gain 2273diagnostic information for hang conditions which don't involve a mutex 2274or other locking primitive: typically conditions in which a thread in 2275the zio pipeline is looping indefinitely. 2276. 2277.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int 2278When an I/O operation takes more than this much time to complete, 2279it's marked as slow. 2280Each slow operation causes a delay zevent. 2281Slow I/O counters can be seen with 2282.Nm zpool Cm status Fl s . 2283. 2284.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 2285Throttle block allocations in the I/O pipeline. 2286This allows for dynamic allocation distribution when devices are imbalanced. 2287When enabled, the maximum number of pending allocations per top-level vdev 2288is limited by 2289.Sy zfs_vdev_queue_depth_pct . 2290. 2291.It Sy zfs_xattr_compat Ns = Ns 0 Ns | Ns 1 Pq int 2292Control the naming scheme used when setting new xattrs in the user namespace. 2293If 2294.Sy 0 2295.Pq the default on Linux , 2296user namespace xattr names are prefixed with the namespace, to be backwards 2297compatible with previous versions of ZFS on Linux. 2298If 2299.Sy 1 2300.Pq the default on Fx , 2301user namespace xattr names are not prefixed, to be backwards compatible with 2302previous versions of ZFS on illumos and 2303.Fx . 2304.Pp 2305Either naming scheme can be read on this and future versions of ZFS, regardless 2306of this tunable, but legacy ZFS on illumos or 2307.Fx 2308are unable to read user namespace xattrs written in the Linux format, and 2309legacy versions of ZFS on Linux are unable to read user namespace xattrs written 2310in the legacy ZFS format. 2311.Pp 2312An existing xattr with the alternate naming scheme is removed when overwriting 2313the xattr so as to not accumulate duplicates. 2314. 2315.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int 2316Prioritize requeued I/O. 2317. 2318.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint 2319Percentage of online CPUs which will run a worker thread for I/O. 2320These workers are responsible for I/O work such as compression and 2321checksum calculations. 2322Fractional number of CPUs will be rounded down. 2323.Pp 2324The default value of 2325.Sy 80% 2326was chosen to avoid using all CPUs which can result in 2327latency issues and inconsistent application performance, 2328especially when slower compression and/or checksumming is enabled. 2329. 2330.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint 2331Number of worker threads per taskq. 2332Lower values improve I/O ordering and CPU utilization, 2333while higher reduces lock contention. 2334.Pp 2335If 2336.Sy 0 , 2337generate a system-dependent value close to 6 threads per taskq. 2338. 2339.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2340Do not create zvol device nodes. 2341This may slightly improve startup time on 2342systems with a very large number of zvols. 2343. 2344.It Sy zvol_major Ns = Ns Sy 230 Pq uint 2345Major number for zvol block devices. 2346. 
.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
Discard (TRIM) operations on zvols will be done in batches of this
many blocks, where block size is determined by the
.Sy volblocksize
property of a zvol.
.
.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
When adding a zvol to the system, prefetch this many bytes
from the start and end of the volume.
Prefetching these regions of the volume is desirable,
because they are likely to be accessed immediately by
.Xr blkid 8
or the kernel partitioner.
.
.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When processing I/O requests for a zvol, submit them synchronously.
This effectively limits the queue depth to
.Em 1
for each I/O submitter.
When unset, requests are handled asynchronously by a thread pool.
The number of requests which can be handled concurrently is controlled by
.Sy zvol_threads .
.Sy zvol_request_sync
is ignored when running on a kernel that supports block multiqueue
.Pq Li blk-mq .
.
.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
The number of system-wide threads to use for processing zvol block IOs.
If
.Sy 0
(the default), then internally set
.Sy zvol_threads
to the number of CPUs present or 32 (whichever is greater).
.
.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
The number of threads per zvol to use for queuing IO requests.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only read and assigned to a zvol at zvol load time.
If
.Sy 0
(the default), then internally set
.Sy zvol_blk_mq_threads
to the number of CPUs present.
.
.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Set to
.Sy 1
to use the
.Li blk-mq
API for zvols.
Set to
.Sy 0
(the default) to use the legacy zvol APIs.
This setting can give better or worse zvol performance depending on
the workload.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only read and assigned to a zvol at zvol load time.
.
.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
If
.Sy zvol_use_blk_mq
is enabled, then process this number of
.Sy volblocksize Ns -sized blocks per zvol thread.
This tunable can be used to favor better performance for zvol reads (lower
values) or writes (higher values).
If set to
.Sy 0 ,
then the zvol layer will process the maximum number of blocks
per thread that it can.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only applied at each zvol's load time.
.
.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
The queue_depth value for the zvol
.Li blk-mq
interface.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only applied at each zvol's load time.
If
.Sy 0
(the default), then use the kernel's default queue depth.
Values are clamped to the kernel's
.Dv BLKDEV_MIN_RQ
and
.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
limits.
.
.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
Defines the behaviour of zvol block devices when
.Sy volmode Ns = Ns Sy default :
.Bl -tag -compact -offset 4n -width "a"
.It Sy 1
.No equivalent to Sy full
.It Sy 2
.No equivalent to Sy dev
.It Sy 3
.No equivalent to Sy none
.El
.
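.Pp
Tunables such as
.Sy zvol_use_blk_mq
and
.Sy zvol_blk_mq_blocks_per_thread
above are only read when a zvol is instantiated, so they are typically set as
module options to ensure they are in place before a pool containing zvols is
imported.
This is only a sketch; the file name and values are examples:
.Bd -literal -compact
# /etc/modprobe.d/zfs.conf
options zfs zvol_use_blk_mq=1
options zfs zvol_blk_mq_blocks_per_thread=16
.Ed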
2450.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2451Enable strict ZVOL quota enforcement. 2452The strict quota enforcement may have a performance impact. 2453.El 2454. 2455.Sh ZFS I/O SCHEDULER 2456ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O operations. 2457The scheduler determines when and in what order those operations are issued. 2458The scheduler divides operations into five I/O classes, 2459prioritized in the following order: sync read, sync write, async read, 2460async write, and scrub/resilver. 2461Each queue defines the minimum and maximum number of concurrent operations 2462that may be issued to the device. 2463In addition, the device has an aggregate maximum, 2464.Sy zfs_vdev_max_active . 2465Note that the sum of the per-queue minima must not exceed the aggregate maximum. 2466If the sum of the per-queue maxima exceeds the aggregate maximum, 2467then the number of active operations may reach 2468.Sy zfs_vdev_max_active , 2469in which case no further operations will be issued, 2470regardless of whether all per-queue minima have been met. 2471.Pp 2472For many physical devices, throughput increases with the number of 2473concurrent operations, but latency typically suffers. 2474Furthermore, physical devices typically have a limit 2475at which more concurrent operations have no 2476effect on throughput or can actually cause it to decrease. 2477.Pp 2478The scheduler selects the next operation to issue by first looking for an 2479I/O class whose minimum has not been satisfied. 2480Once all are satisfied and the aggregate maximum has not been hit, 2481the scheduler looks for classes whose maximum has not been satisfied. 2482Iteration through the I/O classes is done in the order specified above. 2483No further operations are issued 2484if the aggregate maximum number of concurrent operations has been hit, 2485or if there are no operations queued for an I/O class that has not hit its 2486maximum. 2487Every time an I/O operation is queued or an operation completes, 2488the scheduler looks for new operations to issue. 2489.Pp 2490In general, smaller 2491.Sy max_active Ns s 2492will lead to lower latency of synchronous operations. 2493Larger 2494.Sy max_active Ns s 2495may lead to higher overall throughput, depending on underlying storage. 2496.Pp 2497The ratio of the queues' 2498.Sy max_active Ns s 2499determines the balance of performance between reads, writes, and scrubs. 2500For example, increasing 2501.Sy zfs_vdev_scrub_max_active 2502will cause the scrub or resilver to complete more quickly, 2503but reads and writes to have higher latency and lower throughput. 2504.Pp 2505All I/O classes have a fixed maximum number of outstanding operations, 2506except for the async write class. 2507Asynchronous writes represent the data that is committed to stable storage 2508during the syncing stage for transaction groups. 2509Transaction groups enter the syncing state periodically, 2510so the number of queued async writes will quickly burst up 2511and then bleed down to zero. 2512Rather than servicing them as quickly as possible, 2513the I/O scheduler changes the maximum number of active async write operations 2514according to the amount of dirty data in the pool. 2515Since both throughput and latency typically increase with the number of 2516concurrent operations issued to physical devices, reducing the 2517burstiness in the number of simultaneous operations also stabilizes the 2518response time of operations from other queues, in particular synchronous ones. 
2519In broad strokes, the I/O scheduler will issue more concurrent operations 2520from the async write queue as there is more dirty data in the pool. 2521. 2522.Ss Async Writes 2523The number of concurrent operations issued for the async write I/O class 2524follows a piece-wise linear function defined by a few adjustable points: 2525.Bd -literal 2526 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP 2527 ^ | /^ | 2528 | | / | | 2529active | / | | 2530 I/O | / | | 2531count | / | | 2532 | / | | 2533 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP 2534 0|_______^______|_________| 2535 0% | | 100% of \fBzfs_dirty_data_max\fP 2536 | | 2537 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP 2538 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP 2539.Ed 2540.Pp 2541Until the amount of dirty data exceeds a minimum percentage of the dirty 2542data allowed in the pool, the I/O scheduler will limit the number of 2543concurrent operations to the minimum. 2544As that threshold is crossed, the number of concurrent operations issued 2545increases linearly to the maximum at the specified maximum percentage 2546of the dirty data allowed in the pool. 2547.Pp 2548Ideally, the amount of dirty data on a busy pool will stay in the sloped 2549part of the function between 2550.Sy zfs_vdev_async_write_active_min_dirty_percent 2551and 2552.Sy zfs_vdev_async_write_active_max_dirty_percent . 2553If it exceeds the maximum percentage, 2554this indicates that the rate of incoming data is 2555greater than the rate that the backend storage can handle. 2556In this case, we must further throttle incoming writes, 2557as described in the next section. 2558. 2559.Sh ZFS TRANSACTION DELAY 2560We delay transactions when we've determined that the backend storage 2561isn't able to accommodate the rate of incoming writes. 2562.Pp 2563If there is already a transaction waiting, we delay relative to when 2564that transaction will finish waiting. 2565This way the calculated delay time 2566is independent of the number of threads concurrently executing transactions. 2567.Pp 2568If we are the only waiter, wait relative to when the transaction started, 2569rather than the current time. 2570This credits the transaction for "time already served", 2571e.g. reading indirect blocks. 2572.Pp 2573The minimum time for a transaction to take is calculated as 2574.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms) 2575.Pp 2576The delay has two degrees of freedom that can be adjusted via tunables. 2577The percentage of dirty data at which we start to delay is defined by 2578.Sy zfs_delay_min_dirty_percent . 2579This should typically be at or above 2580.Sy zfs_vdev_async_write_active_max_dirty_percent , 2581so that we only start to delay after writing at full speed 2582has failed to keep up with the incoming write rate. 2583The scale of the curve is defined by 2584.Sy zfs_delay_scale . 2585Roughly speaking, this variable determines the amount of delay at the midpoint 2586of the curve. 
2587.Bd -literal 2588delay 2589 10ms +-------------------------------------------------------------*+ 2590 | *| 2591 9ms + *+ 2592 | *| 2593 8ms + *+ 2594 | * | 2595 7ms + * + 2596 | * | 2597 6ms + * + 2598 | * | 2599 5ms + * + 2600 | * | 2601 4ms + * + 2602 | * | 2603 3ms + * + 2604 | * | 2605 2ms + (midpoint) * + 2606 | | ** | 2607 1ms + v *** + 2608 | \fBzfs_delay_scale\fP ----------> ******** | 2609 0 +-------------------------------------*********----------------+ 2610 0% <- \fBzfs_dirty_data_max\fP -> 100% 2611.Ed 2612.Pp 2613Note, that since the delay is added to the outstanding time remaining on the 2614most recent transaction it's effectively the inverse of IOPS. 2615Here, the midpoint of 2616.Em 500 us 2617translates to 2618.Em 2000 IOPS . 2619The shape of the curve 2620was chosen such that small changes in the amount of accumulated dirty data 2621in the first three quarters of the curve yield relatively small differences 2622in the amount of delay. 2623.Pp 2624The effects can be easier to understand when the amount of delay is 2625represented on a logarithmic scale: 2626.Bd -literal 2627delay 2628100ms +-------------------------------------------------------------++ 2629 + + 2630 | | 2631 + *+ 2632 10ms + *+ 2633 + ** + 2634 | (midpoint) ** | 2635 + | ** + 2636 1ms + v **** + 2637 + \fBzfs_delay_scale\fP ----------> ***** + 2638 | **** | 2639 + **** + 2640100us + ** + 2641 + * + 2642 | * | 2643 + * + 2644 10us + * + 2645 + + 2646 | | 2647 + + 2648 +--------------------------------------------------------------+ 2649 0% <- \fBzfs_dirty_data_max\fP -> 100% 2650.Ed 2651.Pp 2652Note here that only as the amount of dirty data approaches its limit does 2653the delay start to increase rapidly. 2654The goal of a properly tuned system should be to keep the amount of dirty data 2655out of that range by first ensuring that the appropriate limits are set 2656for the I/O scheduler to reach optimal throughput on the back-end storage, 2657and then by changing the value of 2658.Sy zfs_delay_scale 2659to increase the steepness of the curve. 2660
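.Pp
As a worked illustration of the formula above, assume
.Sy zfs_dirty_data_max
is 4 GiB (an arbitrary example value),
.Sy zfs_delay_min_dirty_percent
is the default 60%, and
.Sy zfs_delay_scale
is the default 500000.
With 3 GiB of dirty data (75% of the maximum):
.Bd -literal -compact
dirty = 3 GiB
min   = 60% of 4 GiB = 2.4 GiB
max   = 4 GiB

min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
         = 500000 ns * (0.6 GiB / 1 GiB)
         = 300000 ns = 300 us       (well below the 100 ms cap)
.Ed
.Pp
At the midpoint of the curve the ratio is 1, giving the 500 us
(2000 IOPS) figure used in the graphs above.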