1.\" 2.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved. 3.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved. 4.\" Copyright (c) 2019 Datto Inc. 5.\" The contents of this file are subject to the terms of the Common Development 6.\" and Distribution License (the "License"). You may not use this file except 7.\" in compliance with the License. You can obtain a copy of the license at 8.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0. 9.\" 10.\" See the License for the specific language governing permissions and 11.\" limitations under the License. When distributing Covered Code, include this 12.\" CDDL HEADER in each file and include the License file at 13.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this 14.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your 15.\" own identifying information: 16.\" Portions Copyright [yyyy] [name of copyright owner] 17.\" 18.Dd January 10, 2023 19.Dt ZFS 4 20.Os 21. 22.Sh NAME 23.Nm zfs 24.Nd tuning of the ZFS kernel module 25. 26.Sh DESCRIPTION 27The ZFS module supports these parameters: 28.Bl -tag -width Ds 29.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 30Maximum size in bytes of the dbuf cache. 31The target size is determined by the MIN versus 32.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd 33of the target ARC size. 34The behavior of the dbuf cache and its associated settings 35can be observed via the 36.Pa /proc/spl/kstat/zfs/dbufstats 37kstat. 38. 39.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 40Maximum size in bytes of the metadata dbuf cache. 41The target size is determined by the MIN versus 42.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th 43of the target ARC size. 44The behavior of the metadata dbuf cache and its associated settings 45can be observed via the 46.Pa /proc/spl/kstat/zfs/dbufstats 47kstat. 48. 49.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint 50The percentage over 51.Sy dbuf_cache_max_bytes 52when dbufs must be evicted directly. 53. 54.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint 55The percentage below 56.Sy dbuf_cache_max_bytes 57when the evict thread stops evicting dbufs. 58. 59.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint 60Set the size of the dbuf cache 61.Pq Sy dbuf_cache_max_bytes 62to a log2 fraction of the target ARC size. 63. 64.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint 65Set the size of the dbuf metadata cache 66.Pq Sy dbuf_metadata_cache_max_bytes 67to a log2 fraction of the target ARC size. 68. 69.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint 70Set the size of the mutex array for the dbuf cache. 71When set to 72.Sy 0 73the array is dynamically sized based on total system memory. 74. 75.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint 76dnode slots allocated in a single operation as a power of 2. 77The default value minimizes lock contention for the bulk operation performed. 78. 79.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint 80Limit the amount we can prefetch with one call to this amount in bytes. 81This helps to limit the amount of memory that can be used by prefetching. 82. 83.It Sy ignore_hole_birth Pq int 84Alias for 85.Sy send_holes_without_birth_time . 86. 87.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int 88Turbo L2ARC warm-up. 89When the L2ARC is cold the fill interval will be set as fast as possible. 90. 
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and only applies during turbo L2ARC warm-up.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 2 Pq u64
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is off,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature, some MRU buffers will still be present
in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Max write bytes per interval.
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Metaslab granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each disk
before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group biasing based on their vdevs' over- or under-utilization
relative to the pool.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
Default dnode block size as a power of 2.
.
.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
Default dnode indirect block size as a power of 2.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
Default limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
Maximum ashift used when optimizing for logical \[->] physical sector size on
new top-level vdevs.
May be increased up to
.Sy ASHIFT_MAX Po 16 Pc ,
but this may negatively impact pool space efficiency.
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy reference_history Ns = Ns Sy 3 Pq uint
Maximum reference holders being tracked when reference_tracking_enable is
active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool
import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
Maximum micro ZAP size.
A micro ZAP is upgraded to a fat ZAP once it grows beyond the specified size.
.
.It Sy zfetch_array_rd_sz Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
If prefetching is enabled, disable prefetching for reads larger than this size.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
since the last time haven't completed in time to satisfy the demand request,
i.e. the prefetch depth didn't cover the read latency or the pool got saturated.
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables the use of scatter/gather lists for ARC data buffers;
if disabled, all allocations are forced to be linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that a percentage based on
.Sy zfs_arc_dnode_limit_percent
of the ARC meta buffers may be used for dnodes.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage that can be consumed by dnodes of ARC meta buffers.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a non-zero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
Under Linux, half of system memory will be used as the limit.
Under
.Fx ,
the larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
.
.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
Balance between metadata and data on ghost hits.
Values above 100 increase metadata caching by proportionally reducing effect
of ghost data hits on target data/metadata rate.
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to the number of CPUs,
but that has not been proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts the ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
The started reclamation process continues until the ARC size returns below the
target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_FILE_PAGES ) ,
where that percent may exceed
.Sy 100 .
This only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
.Pp
The default limit of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512 KiB No and Sy all_system_memory/64 .
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
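.Pp
Like most of the parameters in this list, this can be set persistently
as a module option; on Linux, for example, via a
.Xr modprobe.d 5
fragment (the path below is only illustrative):
.Bd -literal -compact
# /etc/modprobe.d/zfs.conf
options zfs zfs_autoimport_disable=0
.Ed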
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 5 Ns % Pq uint
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
Minimum percent of obsolete bytes in vdev mapping required to attempt to
condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete
mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
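.Pp
On Linux, writable parameters such as this one can also be inspected and
changed at runtime through the module's sysfs entries, for example:
.Bd -literal -compact
# cat /sys/module/zfs/parameters/zfs_deadman_enabled
1
# echo 0 > /sys/module/zfs/parameters/zfs_deadman_enabled
.Ed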
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay and deadman zevents (which report slow I/O operations) to this
many per second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more flushing occurs, destroying log
blocks more quickly as they become obsolete, which leaves fewer blocks
to be read during import time after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation in which we are flushing all our metaslabs every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
It effectively limits the maximum number of unflushed per-TXG spacemap logs
that need to be read after an unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This value is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is
available.
This only applies on Linux.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_max_max Ns = Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
This parameter takes precedence over
.Sy zfs_dirty_data_max_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy min(physical_ram/4, 4GiB) ,
or
.Sy min(physical_ram/4, 1GiB)
for 32-bit systems.
.
.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed as a percentage of physical RAM.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
The parameter
.Sy zfs_dirty_data_max_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
Determines the dirty space limit, expressed as a percentage of all memory.
Once this limit is exceeded, new writes are halted until space frees up.
The parameter
.Sy zfs_dirty_data_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Subject to
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
Start syncing out a transaction group if there's at least this much dirty data
.Pq as a percentage of Sy zfs_dirty_data_max .
This should be less than
.Sy zfs_vdev_async_write_active_min_dirty_percent .
.
.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
Write operations are throttled when approaching the limit until log data is
cleared out after transaction group sync.
Because of some overhead, it should be set at least 2 times the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput .
It also should be smaller than the size of the slog device if slog is present.
.Pp
Defaults to
.Sy zfs_dirty_data_max*2
.
.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
preallocated for a file in order to guarantee that later writes will not
run out of space.
Instead,
.Xr fallocate 2
space preallocation only checks that sufficient space is currently available
in the pool or the user's project quota allocation,
and then creates a sparse file of the requested size.
The requested space is multiplied by
.Sy zfs_fallocate_reserve_percent
to allow additional space for indirect blocks and other internal metadata.
Setting this to
.Sy 0
disables support for
.Xr fallocate 2
and causes it to return
.Sy EOPNOTSUPP .
.
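.Pp
For example, with the default of 110%, an
.Xr fallocate 2
request for 1 GiB only checks that about
.Em 1 GiB No \(mu Em 110% No = Em 1.1 GiB
of space is currently available; no blocks are actually reserved.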
.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
Select a fletcher 4 implementation.
.Pp
Supported selectors are:
.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
.No and Sy aarch64_neon .
All except
.Sy fastest No and Sy scalar
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of fletcher 4 are available, the
.Sy fastest
will be chosen using a micro benchmark.
Selecting
.Sy scalar
results in the original CPU-based calculation being used.
Selecting any option other than
.Sy fastest No or Sy scalar
results in vector instructions
from the respective CPU instruction set being used.
.
.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
Select a BLAKE3 implementation.
.Pp
Supported selectors are:
.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
All except
.Sy cycle , fastest No and Sy generic
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of BLAKE3 are available, the
.Sy fastest
will be chosen using a micro benchmark.
You can see the benchmark results by reading this kstat file:
.Pa /proc/spl/kstat/zfs/chksum_bench .
.
.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable the processing of the free_bpobj object.
.
.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
Maximum number of blocks freed in a single TXG.
.
.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
Maximum number of dedup blocks freed in a single TXG.
.
.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
Maximum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
Minimum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
When the pool has more than this much dirty data, use
.Sy zfs_vdev_async_write_max_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
When the pool has less than this much dirty data, use
.Sy zfs_vdev_async_write_min_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
Maximum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
Minimum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.Pp
Lower values are associated with better latency on rotational media but poorer
resilver performance.
The default value of
.Sy 2
was chosen as a compromise.
A value of
.Sy 3
has been shown to improve resilver performance further at a cost of
further increasing latency.
.
.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
Maximum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
Minimum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
The maximum number of I/O operations active to each device.
Ideally, this will be at least the sum of each queue's
.Sy max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
Timeout value to wait before determining a device is missing
during import.
This is helpful for transient missing paths due
to links being briefly removed and recreated in response to
udev events.
.
.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
Maximum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
Minimum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
Maximum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
Minimum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
Maximum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
Minimum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
Maximum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
Minimum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
Maximum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
Minimum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
Maximum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
Minimum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
the number of concurrently-active I/O operations is limited to
.Sy zfs_*_min_active ,
unless the vdev is "idle".
When there are no interactive I/O operations active (synchronous or otherwise),
and
.Sy zfs_vdev_nia_delay
operations have completed since the last interactive operation,
then the vdev is considered to be "idle",
and the number of concurrently-active non-interactive operations is increased to
.Sy zfs_*_max_active .
.No See Sx ZFS I/O SCHEDULER .
.
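.Pp
As an illustration with the scrub queue defaults above: while interactive
I/O is outstanding, at most
.Sy zfs_vdev_scrub_min_active Pq 1
scrub operation is kept active per device; once the vdev has been "idle" as
described here, the limit rises to
.Sy zfs_vdev_scrub_max_active Pq 2 .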
.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
Some HDDs tend to prioritize sequential I/O so strongly, that concurrent
random I/O latency reaches several seconds.
On some HDDs this happens even if sequential I/O operations
are submitted one at a time, and so setting
.Sy zfs_*_max_active Ns = Sy 1
does not help.
To prevent non-interactive I/O, like scrub,
from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit
operations can be sent while there are outstanding incomplete interactive operations.
This enforced wait ensures the HDD services the interactive I/O
within a reasonable amount of time.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq uint
Maximum number of queued allocations per top-level vdev expressed as
a percentage of
.Sy zfs_vdev_async_write_max_active ,
which allows the system to detect devices that are more capable
of handling allocations and to allocate more blocks to those devices.
This allows for dynamic allocation distribution when devices are imbalanced,
as fuller devices will tend to be slower than empty devices.
.Pp
Also see
.Sy zio_dva_throttle_enabled .
.
.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
Defines if the driver should retire on a given error type.
The following options may be bitwise-ored together:
.TS
box;
lbz r l l .
	Value	Name	Description
_
	1	Device	No driver retries on device errors.
	2	Transport	No driver retries on transport errors.
	4	Driver	No driver retries on driver errors.
.TE
.
.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
Time before expiring
.Pa .zfs/snapshot .
.
.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow the creation, removal, or renaming of entries in the
.Sy .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled, this functionality works both locally and over NFS exports
which have the
.Em no_root_squash
option set.
.
.It Sy zfs_flags Ns = Ns Sy 0 Pq int
Set additional debugging flags.
The following flags may be bitwise-ored together:
.TS
box;
lbz r l l .
	Value	Name	Description
_
	1	ZFS_DEBUG_DPRINTF	Enable dprintf entries in the debug log.
*	2	ZFS_DEBUG_DBUF_VERIFY	Enable extra dbuf verifications.
*	4	ZFS_DEBUG_DNODE_VERIFY	Enable extra dnode verifications.
	8	ZFS_DEBUG_SNAPNAMES	Enable snapshot name verification.
*	16	ZFS_DEBUG_MODIFY	Check for illegally modified ARC buffers.
	64	ZFS_DEBUG_ZIO_FREE	Enable verification of block frees.
	128	ZFS_DEBUG_HISTOGRAM_VERIFY	Enable extra spacemap histogram verifications.
	256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
	512	ZFS_DEBUG_SET_ERROR	Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
	1024	ZFS_DEBUG_INDIRECT_REMAP	Verify split blocks created by device removal.
	2048	ZFS_DEBUG_TRIM	Verify TRIM ranges are always within the allocatable range tree.
	4096	ZFS_DEBUG_LOG_SPACEMAP	Verify that the log summary is consistent with the spacemap log
			and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
.TE
.Sy \& * No Requires debug build .
.
.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
Enables btree verification.
The following settings are cumulative:
.TS
box;
lbz r l l .
	Value	Description
_
	1	Verify height.
	2	Verify pointers from children to parent.
	3	Verify element counts.
	4	Verify element order. (expensive)
*	5	Verify unused memory is poisoned. (expensive)
.TE
.Sy \& * No Requires debug build .
.
.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
If destroy encounters an
.Sy EIO
while reading metadata (e.g. indirect blocks),
space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled",
as it is unable to make forward progress.
While in this stalled state, all remaining space to free
from the error-encountering filesystem is "temporarily leaked".
Set this flag to cause it to ignore the
.Sy EIO ,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.
.Pp
The default "stalling" behavior is useful if the storage partially
fails (i.e. some but not all I/O operations fail), and then later recovers.
In this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks.
Note, however, that this case is actually fairly rare.
.Pp
Typically pools either
.Bl -enum -compact -offset 4n -width "1."
.It
fail completely (but perhaps temporarily,
e.g. due to a top-level vdev going offline), or
.It
have localized, permanent errors (e.g. disk returns the wrong data
due to bit flip or firmware bug).
.El
In the former case, this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless.
In the latter, because the error is permanent, the best we can do
is leak the minimum amount of space,
which is what setting this flag will do.
It is therefore reasonable for this flag to normally be set,
but we chose the more conservative approach of not setting it,
so that there is no possibility of
leaking space in the "partial temporary" failure case.
.
.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
During a
.Nm zfs Cm destroy
operation using the
.Sy async_destroy
feature,
a minimum of this much time will be spent working on freeing blocks per TXG.
.
.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
Similar to
.Sy zfs_free_min_time_ms ,
but for cleanup of old indirection records for removed vdevs.
.
.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
Largest data block to write to the ZIL.
Larger blocks will be treated as if the dataset being written to had the
.Sy logbias Ns = Ns Sy throughput
property set.
.
.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
Pattern written to vdev free space by
.Xr zpool-initialize 8 .
.
.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Size of writes used by
.Xr zpool-initialize 8 .
This option is used by the test suite.
.
.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
The threshold size (in block pointers) at which we create a new sub-livelist.
1432Larger sublists are more costly from a memory perspective but the fewer 1433sublists there are, the lower the cost of insertion. 1434. 1435.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int 1436If the amount of shared space between a snapshot and its clone drops below 1437this threshold, the clone turns off the livelist and reverts to the old 1438deletion method. 1439This is in place because livelists no longer give us a benefit 1440once a clone has been overwritten enough. 1441. 1442.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int 1443Incremented each time an extra ALLOC blkptr is added to a livelist entry while 1444it is being condensed. 1445This option is used by the test suite to track race conditions. 1446. 1447.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int 1448Incremented each time livelist condensing is canceled while in 1449.Fn spa_livelist_condense_sync . 1450This option is used by the test suite to track race conditions. 1451. 1452.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int 1453When set, the livelist condense process pauses indefinitely before 1454executing the synctask \(em 1455.Fn spa_livelist_condense_sync . 1456This option is used by the test suite to trigger race conditions. 1457. 1458.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int 1459Incremented each time livelist condensing is canceled while in 1460.Fn spa_livelist_condense_cb . 1461This option is used by the test suite to track race conditions. 1462. 1463.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int 1464When set, the livelist condense process pauses indefinitely before 1465executing the open context condensing work in 1466.Fn spa_livelist_condense_cb . 1467This option is used by the test suite to trigger race conditions. 1468. 1469.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64 1470The maximum execution time limit that can be set for a ZFS channel program, 1471specified as a number of Lua instructions. 1472. 1473.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64 1474The maximum memory limit that can be set for a ZFS channel program, specified 1475in bytes. 1476. 1477.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int 1478The maximum depth of nested datasets. 1479This value can be tuned temporarily to 1480fix existing datasets that exceed the predefined limit. 1481. 1482.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64 1483The number of past TXGs that the flushing algorithm of the log spacemap 1484feature uses to estimate incoming log blocks. 1485. 1486.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64 1487Maximum number of rows allowed in the summary of the spacemap log. 1488. 1489.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint 1490We currently support block sizes from 1491.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc . 1492The benefits of larger blocks, and thus larger I/O, 1493need to be weighed against the cost of COWing a giant block to modify one byte. 1494Additionally, very large blocks can have an impact on I/O latency, 1495and also potentially on the memory allocator. 1496Therefore, we formerly forbade creating blocks larger than 1 MiB. 1497Larger blocks could be created by changing this tunable, 1498and pools with larger blocks can always be imported and used, 1499regardless of this setting. 1500. 1501.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int 1502Allow datasets received with redacted send/receive to be mounted.
1503Normally disabled because these datasets may be missing key data. 1504. 1505.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64 1506Minimum number of metaslabs to flush per dirty TXG. 1507. 1508.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint 1509Allow metaslabs to keep their active state as long as their fragmentation 1510percentage is no more than this value. 1511An active metaslab that exceeds this threshold 1512will no longer keep its active status allowing better metaslabs to be selected. 1513. 1514.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint 1515Metaslab groups are considered eligible for allocations if their 1516fragmentation metric (measured as a percentage) is less than or equal to 1517this value. 1518If a metaslab group exceeds this threshold then it will be 1519skipped unless all metaslab groups within the metaslab class have also 1520crossed this threshold. 1521. 1522.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint 1523Defines a threshold at which metaslab groups should be eligible for allocations. 1524The value is expressed as a percentage of free space 1525beyond which a metaslab group is always eligible for allocations. 1526If a metaslab group's free space is less than or equal to the 1527threshold, the allocator will avoid allocating to that group 1528unless all groups in the pool have reached the threshold. 1529Once all groups have reached the threshold, all groups are allowed to accept 1530allocations. 1531The default value of 1532.Sy 0 1533disables the feature and causes all metaslab groups to be eligible for 1534allocations. 1535.Pp 1536This parameter allows one to deal with pools having heavily imbalanced 1537vdevs such as would be the case when a new vdev has been added. 1538Setting the threshold to a non-zero percentage will stop allocations 1539from being made to vdevs that aren't filled to the specified percentage 1540and allow lesser filled vdevs to acquire more allocations than they 1541otherwise would under the old 1542.Sy zfs_mg_alloc_failures 1543facility. 1544. 1545.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int 1546If enabled, ZFS will place DDT data into the special allocation class. 1547. 1548.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int 1549If enabled, ZFS will place user data indirect blocks 1550into the special allocation class. 1551. 1552.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint 1553Historical statistics for this many latest multihost updates will be available 1554in 1555.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost . 1556. 1557.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64 1558Used to control the frequency of multihost writes which are performed when the 1559.Sy multihost 1560pool property is on. 1561This is one of the factors used to determine the 1562length of the activity check during import. 1563.Pp 1564The multihost write period is 1565.Sy zfs_multihost_interval No / Sy leaf-vdevs . 1566On average a multihost write will be issued for each leaf vdev 1567every 1568.Sy zfs_multihost_interval 1569milliseconds. 1570In practice, the observed period can vary with the I/O load 1571and this observed value is the delay which is stored in the uberblock. 1572. 1573.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint 1574Used to control the duration of the activity test on import. 
1575Smaller values of 1576.Sy zfs_multihost_import_intervals 1577will reduce the import time but increase 1578the risk of failing to detect an active pool. 1579The total activity check time is never allowed to drop below one second. 1580.Pp 1581On import the activity check waits a minimum amount of time determined by 1582.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals , 1583or the same product computed on the host which last had the pool imported, 1584whichever is greater. 1585The activity check time may be further extended if the value of MMP 1586delay found in the best uberblock indicates actual multihost updates happened 1587at longer intervals than 1588.Sy zfs_multihost_interval . 1589A minimum of 1590.Em 100 ms 1591is enforced. 1592.Pp 1593.Sy 0 No is equivalent to Sy 1 . 1594. 1595.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint 1596Controls the behavior of the pool when multihost write failures or delays are 1597detected. 1598.Pp 1599When 1600.Sy 0 , 1601multihost write failures or delays are ignored. 1602The failures will still be reported to the ZED which depending on 1603its configuration may take action such as suspending the pool or offlining a 1604device. 1605.Pp 1606Otherwise, the pool will be suspended if 1607.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval 1608milliseconds pass without a successful MMP write. 1609This guarantees the activity test will see MMP writes if the pool is imported. 1610.Sy 1 No is equivalent to Sy 2 ; 1611this is necessary to prevent the pool from being suspended 1612due to normal, small I/O latency variations. 1613. 1614.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int 1615Set to disable scrub I/O. 1616This results in scrubs not actually scrubbing data and 1617simply doing a metadata crawl of the pool instead. 1618. 1619.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int 1620Set to disable block prefetching for scrubs. 1621. 1622.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int 1623Disable cache flush operations on disks when writing. 1624Setting this will cause pool corruption on power loss 1625if a volatile out-of-order write cache is enabled. 1626. 1627.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 1628Allow no-operation writes. 1629The occurrence of nopwrites will further depend on other pool properties 1630.Pq i.a. the checksumming and compression algorithms . 1631. 1632.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int 1633Enable forcing TXG sync to find holes. 1634When enabled forces ZFS to sync data when 1635.Sy SEEK_HOLE No or Sy SEEK_DATA 1636flags are used allowing holes in a file to be accurately reported. 1637When disabled holes will not be reported in recently dirtied files. 1638. 1639.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int 1640The number of bytes which should be prefetched during a pool traversal, like 1641.Nm zfs Cm send 1642or other data crawling operations. 1643. 1644.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint 1645The number of blocks pointed by indirect (non-L0) block which should be 1646prefetched during a pool traversal, like 1647.Nm zfs Cm send 1648or other data crawling operations. 1649. 1650.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64 1651Control percentage of dirtied indirect blocks from frees allowed into one TXG. 1652After this threshold is crossed, additional frees will wait until the next TXG. 1653.Sy 0 No disables this throttle . 1654. 
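.Pp
For example, on Linux the current value can be inspected, and the throttle
disabled, at runtime through the module parameter interface
(a sketch only; this uses the
.Pa /sys/module/zfs/parameters
interface referenced elsewhere in this page):
.Bd -literal -compact
# cat /sys/module/zfs/parameters/zfs_per_txg_dirty_frees_percent
30
# echo 0 > /sys/module/zfs/parameters/zfs_per_txg_dirty_frees_percent
.Ed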
1655.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1656Disable predictive prefetch. 1657Note that it leaves "prescient" prefetch 1658.Pq for, e.g., Nm zfs Cm send 1659intact. 1660Unlike predictive prefetch, prescient prefetch never issues I/O 1661that ends up not being needed, so it can't hurt performance. 1662. 1663.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1664Disable QAT hardware acceleration for SHA256 checksums. 1665May be unset after the ZFS modules have been loaded to initialize the QAT 1666hardware as long as support is compiled in and the QAT driver is present. 1667. 1668.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1669Disable QAT hardware acceleration for gzip compression. 1670May be unset after the ZFS modules have been loaded to initialize the QAT 1671hardware as long as support is compiled in and the QAT driver is present. 1672. 1673.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 1674Disable QAT hardware acceleration for AES-GCM encryption. 1675May be unset after the ZFS modules have been loaded to initialize the QAT 1676hardware as long as support is compiled in and the QAT driver is present. 1677. 1678.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 1679Bytes to read per chunk. 1680. 1681.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint 1682Historical statistics for this many latest reads will be available in 1683.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads . 1684. 1685.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int 1686Include cache hits in read history 1687. 1688.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 1689Maximum read segment size to issue when sequentially resilvering a 1690top-level vdev. 1691. 1692.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 1693Automatically start a pool scrub when the last active sequential resilver 1694completes in order to verify the checksums of all blocks which have been 1695resilvered. 1696This is enabled by default and strongly recommended. 1697. 1698.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64 1699Maximum amount of I/O that can be concurrently issued for a sequential 1700resilver per leaf device, given in bytes. 1701. 1702.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int 1703If an indirect split block contains more than this many possible unique 1704combinations when being reconstructed, consider it too computationally 1705expensive to check them all. 1706Instead, try at most this many randomly selected 1707combinations each time the block is accessed. 1708This allows all segment copies to participate fairly 1709in the reconstruction when all combinations 1710cannot be checked and prevents repeated use of one bad copy. 1711. 1712.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int 1713Set to attempt to recover from fatal errors. 1714This should only be used as a last resort, 1715as it typically results in leaked space, or worse. 1716. 1717.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int 1718Ignore hard I/O errors during device removal. 1719When set, if a device encounters a hard I/O error during the removal process 1720the removal will not be cancelled. 1721This can result in a normally recoverable block becoming permanently damaged 1722and is hence not recommended. 1723This should only be used as a last resort when the 1724pool cannot be returned to a healthy state prior to removing the device. 1725. 
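.Pp
As a sketch of the last-resort workflow described above, the flag might be
enabled immediately before retrying the removal of the failing device
(pool and device names here are hypothetical):
.Bd -literal -compact
# echo 1 > /sys/module/zfs/parameters/zfs_removal_ignore_errors
# zpool remove tank sda   # hypothetical pool and device
.Ed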
1726.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint 1727This is used by the test suite so that it can ensure that certain actions 1728happen while in the middle of a removal. 1729. 1730.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint 1731The largest contiguous segment that we will attempt to allocate when removing 1732a device. 1733If there is a performance problem with attempting to allocate large blocks, 1734consider decreasing this. 1735The default value is also the maximum. 1736. 1737.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int 1738Ignore the 1739.Sy resilver_defer 1740feature, causing an operation that would start a resilver to 1741immediately restart the one in progress. 1742. 1743.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint 1744Resilvers are processed by the sync thread. 1745While resilvering, it will spend at least this much time 1746working on a resilver between TXG flushes. 1747. 1748.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int 1749If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub), 1750even if there were unrepairable errors. 1751Intended to be used during pool repair or recovery to 1752stop resilvering when the pool is next imported. 1753. 1754.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint 1755Scrubs are processed by the sync thread. 1756While scrubbing, it will spend at least this much time 1757working on a scrub between TXG flushes. 1758. 1759.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint 1760To preserve progress across reboots, the sequential scan algorithm periodically 1761needs to stop metadata scanning and issue all the verification I/O to disk. 1762The frequency of this flushing is determined by this tunable. 1763. 1764.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint 1765This tunable affects how scrub and resilver I/O segments are ordered. 1766A higher number indicates that we care more about how filled in a segment is, 1767while a lower number indicates we care more about the size of the extent without 1768considering the gaps within a segment. 1769This value is only tunable upon module insertion. 1770Changing the value afterwards will have no effect on scrub or resilver 1771performance. 1772. 1773.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint 1774Determines the order that data will be verified while scrubbing or resilvering: 1775.Bl -tag -compact -offset 4n -width "a" 1776.It Sy 1 1777Data will be verified as sequentially as possible, given the 1778amount of memory reserved for scrubbing 1779.Pq see Sy zfs_scan_mem_lim_fact . 1780This may improve scrub performance if the pool's data is very fragmented. 1781.It Sy 2 1782The largest mostly-contiguous chunk of found data will be verified first. 1783By deferring scrubbing of small segments, we may later find adjacent data 1784to coalesce and increase the segment size. 1785.It Sy 0 1786.No Use strategy Sy 1 No during normal verification 1787.No and strategy Sy 2 No while taking a checkpoint . 1788.El 1789. 1790.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int 1791If unset, indicates that scrubs and resilvers will gather metadata in 1792memory before issuing sequential I/O. 1793Otherwise indicates that the legacy algorithm will be used, 1794where I/O is initiated as soon as it is discovered. 1795Unsetting will not affect scrubs or resilvers that are already in progress. 1796. 
1797.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int 1798Sets the largest gap in bytes between scrub/resilver I/O operations 1799that will still be considered sequential for sorting purposes. 1800Changing this value will not 1801affect scrubs or resilvers that are already in progress. 1802. 1803.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint 1804Maximum fraction of RAM used for I/O sorting by the sequential scan algorithm. 1805This tunable determines the hard limit for I/O sorting memory usage. 1806When the hard limit is reached, we stop scanning metadata and start issuing 1807data verification I/O. 1808This is done until we get below the soft limit. 1809. 1810.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint 1811The fraction of the hard limit used to determine the soft limit for I/O sorting 1812by the sequential scan algorithm. 1813When we cross this limit from below, no action is taken. 1814When we cross this limit from above, it is because we are issuing verification 1815I/O. 1816In this case (unless the metadata scan is done) we stop issuing verification I/O 1817and start scanning metadata again until we get to the hard limit. 1818. 1819.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint 1820When reporting resilver throughput and estimated completion time, use the 1821performance observed over roughly the last 1822.Sy zfs_scan_report_txgs 1823TXGs. 1824When set to zero, performance is calculated over the time between checkpoints. 1825. 1826.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int 1827Enforce tight memory limits on pool scans when a sequential scan is in progress. 1828When disabled, the memory limit may be exceeded by fast disks. 1829. 1830.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int 1831Freezes a scrub/resilver in progress without actually pausing it. 1832Intended for testing/debugging. 1833. 1834.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int 1835Maximum amount of data that can be concurrently issued for scrubs and 1836resilvers per leaf device, given in bytes. 1837. 1838.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int 1839Allow sending of corrupt data (ignore read/checksum errors when sending). 1840. 1841.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int 1842Include unmodified spill blocks in the send stream. 1843Under certain circumstances, previous versions of ZFS could incorrectly 1844remove the spill block from an existing object. 1845Including unmodified copies of the spill blocks creates a backwards-compatible 1846stream which will recreate a spill block if it was incorrectly removed. 1847. 1848.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint 1849The fill fraction of the 1850.Nm zfs Cm send 1851internal queues. 1852The fill fraction controls the timing with which internal threads are woken up. 1853. 1854.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint 1855The maximum number of bytes allowed in 1856.Nm zfs Cm send Ns 's 1857internal queues. 1858. 1859.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint 1860The fill fraction of the 1861.Nm zfs Cm send 1862prefetch queue. 1863The fill fraction controls the timing with which internal threads are woken up. 1864. 1865.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint 1866The maximum number of bytes allowed that will be prefetched by 1867.Nm zfs Cm send .
1868This value must be at least twice the maximum block size in use. 1869. 1870.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint 1871The fill fraction of the 1872.Nm zfs Cm receive 1873queue. 1874The fill fraction controls the timing with which internal threads are woken up. 1875. 1876.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint 1877The maximum number of bytes allowed in the 1878.Nm zfs Cm receive 1879queue. 1880This value must be at least twice the maximum block size in use. 1881. 1882.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint 1883The maximum amount of data, in bytes, that 1884.Nm zfs Cm receive 1885will write in one DMU transaction. 1886This is the uncompressed size, even when receiving a compressed send stream. 1887This setting will not reduce the write size below a single block. 1888Capped at a maximum of 1889.Sy 32 MiB . 1890. 1891.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int 1892When this variable is set to non-zero, a corrective receive: 1893.Bl -enum -compact -offset 4n -width "1." 1894.It 1895Does not enforce the restriction of source & destination snapshot GUIDs 1896matching. 1897.It 1898If there is an error during healing, the healing receive is not 1899terminated; instead, it moves on to the next record. 1900.El 1901. 1902.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint 1903Setting this variable overrides the default logic for estimating block 1904sizes when doing a 1905.Nm zfs Cm send . 1906The default heuristic is that the average block size 1907will be the current recordsize. 1908Override this value if most data in your dataset is not of that size 1909and you require accurate zfs send size estimates. 1910. 1911.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint 1912Flushing of data to disk is done in passes. 1913Defer frees starting in this pass. 1914. 1915.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int 1916Maximum memory used for prefetching a checkpoint's space map on each 1917vdev while discarding the checkpoint. 1918. 1919.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint 1920Only allow small data blocks to be allocated on the special and dedup vdev 1921types when the available free space percentage on these vdevs exceeds this 1922value. 1923This ensures reserved space is available for pool metadata as the 1924special vdevs approach capacity. 1925. 1926.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint 1927Starting in this sync pass, disable compression (including of metadata). 1928With the default setting, in practice, we don't have this many sync passes, 1929so this has no effect. 1930.Pp 1931The original intent was that disabling compression would help the sync passes 1932to converge. 1933However, in practice, disabling compression increases 1934the average number of sync passes, because when we turn compression off, 1935many blocks' sizes will change, and thus we have to re-allocate 1936(not overwrite) them. 1937It also increases the number of 1938.Em 128 KiB 1939allocations (e.g. for indirect blocks and spacemaps) 1940because these will not be compressed. 1941The 1942.Em 128 KiB 1943allocations are especially detrimental to performance 1944on highly fragmented systems, which may have very few free segments of this 1945size, 1946and may need to load new metaslabs to satisfy these allocations. 1947. 1948.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint 1949Rewrite new block pointers starting in this pass.
1950. 1951.It Sy zfs_sync_taskq_batch_pct Ns = Ns Sy 75 Ns % Pq int 1952This controls the number of threads used by 1953.Sy dp_sync_taskq . 1954The default value of 1955.Sy 75% 1956will create a maximum of one thread per CPU. 1957. 1958.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint 1959Maximum size of a TRIM command. 1960Larger ranges will be split into chunks no larger than this value before 1961issuing. 1962. 1963.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint 1964Minimum size of TRIM commands. 1965TRIM ranges smaller than this will be skipped, 1966unless they're part of a larger range which was chunked. 1967This is done because it's common for these small TRIMs 1968to negatively impact overall performance. 1969. 1970.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint 1971Skip uninitialized metaslabs during the TRIM process. 1972This option is useful for pools constructed from large thinly-provisioned 1973devices 1974where TRIM operations are slow. 1975As a pool ages, an increasing fraction of the pool's metaslabs 1976will be initialized, progressively degrading the usefulness of this option. 1977This setting is stored when starting a manual TRIM and will 1978persist for the duration of the requested TRIM. 1979. 1980.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint 1981Maximum number of queued TRIMs outstanding per leaf vdev. 1982The number of concurrent TRIM commands issued to the device is controlled by 1983.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active . 1984. 1985.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint 1986The number of transaction groups' worth of frees which should be aggregated 1987before TRIM operations are issued to the device. 1988This setting represents a trade-off between issuing larger, 1989more efficient TRIM operations and the delay 1990before the recently trimmed space is available for use by the device. 1991.Pp 1992Increasing this value will allow frees to be aggregated for a longer time. 1993This will result in larger TRIM operations and potentially increased memory 1994usage. 1995Decreasing this value will have the opposite effect. 1996The default of 1997.Sy 32 1998was determined to be a reasonable compromise. 1999. 2000.It Sy zfs_txg_history Ns = Ns Sy 0 Pq uint 2001Historical statistics for this many latest TXGs will be available in 2002.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs . 2003. 2004.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint 2005Flush dirty data to disk at least every this many seconds (maximum TXG 2006duration). 2007. 2008.It Sy zfs_vdev_aggregate_trim Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2009Allow TRIM I/O operations to be aggregated. 2010This is normally not helpful because the extents to be trimmed 2011will already have been aggregated by the metaslab. 2012This option is provided for debugging and performance analysis. 2013. 2014.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint 2015Max vdev I/O aggregation size. 2016. 2017.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint 2018Max vdev I/O aggregation size for non-rotating media. 2019. 2020.It Sy zfs_vdev_cache_bshift Ns = Ns Sy 16 Po 64 KiB Pc Pq uint 2021Shift size to inflate reads to. 2022. 2023.It Sy zfs_vdev_cache_max Ns = Ns Sy 16384 Ns B Po 16 KiB Pc Pq uint 2024Inflate reads smaller than this value to meet the 2025.Sy zfs_vdev_cache_bshift 2026size 2027.Pq default Sy 64 KiB . 2028.
2029.It Sy zfs_vdev_cache_size Ns = Ns Sy 0 Pq uint 2030Total size of the per-disk cache in bytes. 2031.Pp 2032Currently this feature is disabled, as it has been found to not be helpful 2033for performance and in some cases harmful. 2034. 2035.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int 2036A number by which the balancing algorithm increments the load calculation for 2037the purpose of selecting the least busy mirror member when an I/O operation 2038immediately follows its predecessor on rotational vdevs 2039for the purpose of making decisions based on load. 2040. 2041.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int 2042A number by which the balancing algorithm increments the load calculation for 2043the purpose of selecting the least busy mirror member when an I/O operation 2044lacks locality as defined by 2045.Sy zfs_vdev_mirror_rotating_seek_offset . 2046Operations within this that are not immediately following the previous operation 2047are incremented by half. 2048. 2049.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int 2050The maximum distance for the last queued I/O operation in which 2051the balancing algorithm considers an operation to have locality. 2052.No See Sx ZFS I/O SCHEDULER . 2053. 2054.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int 2055A number by which the balancing algorithm increments the load calculation for 2056the purpose of selecting the least busy mirror member on non-rotational vdevs 2057when I/O operations do not immediately follow one another. 2058. 2059.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int 2060A number by which the balancing algorithm increments the load calculation for 2061the purpose of selecting the least busy mirror member when an I/O operation 2062lacks 2063locality as defined by the 2064.Sy zfs_vdev_mirror_rotating_seek_offset . 2065Operations within this that are not immediately following the previous operation 2066are incremented by half. 2067. 2068.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint 2069Aggregate read I/O operations if the on-disk gap between them is within this 2070threshold. 2071. 2072.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint 2073Aggregate write I/O operations if the on-disk gap between them is within this 2074threshold. 2075. 2076.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string 2077Select the raidz parity implementation to use. 2078.Pp 2079Variants that don't depend on CPU-specific features 2080may be selected on module load, as they are supported on all systems. 2081The remaining options may only be set after the module is loaded, 2082as they are available only if the implementations are compiled in 2083and supported on the running system. 2084.Pp 2085Once the module is loaded, 2086.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl 2087will show the available options, 2088with the currently selected one enclosed in square brackets. 2089.Pp 2090.TS 2091lb l l . 2092fastest selected by built-in benchmark 2093original original implementation 2094scalar scalar implementation 2095sse2 SSE2 instruction set 64-bit x86 2096ssse3 SSSE3 instruction set 64-bit x86 2097avx2 AVX2 instruction set 64-bit x86 2098avx512f AVX512F instruction set 64-bit x86 2099avx512bw AVX512F & AVX512BW instruction sets 64-bit x86 2100aarch64_neon NEON Aarch64/64-bit ARMv8 2101aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8 2102powerpc_altivec Altivec PowerPC 2103.TE 2104. 
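.Pp
For example, the available and currently selected implementations can be
inspected, and a specific one chosen, at runtime as sketched below.
The listing varies with the CPU and the implementations compiled in,
so the output shown here is illustrative only:
.Bd -literal -compact
# cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
[fastest] original scalar sse2 ssse3 avx2
# echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl   # if avx2 is listed
.Ed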
2105.It Sy zfs_vdev_scheduler Pq charp 2106.Sy DEPRECATED . 2107Prints warning to kernel log for compatibility. 2108. 2109.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint 2110Max event queue length. 2111Events in the queue can be viewed with 2112.Xr zpool-events 8 . 2113. 2114.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int 2115Maximum recent zevent records to retain for duplicate checking. 2116Setting this to 2117.Sy 0 2118disables duplicate detection. 2119. 2120.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int 2121Lifespan for a recent ereport that was retained for duplicate checking. 2122. 2123.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int 2124The maximum number of taskq entries that are allowed to be cached. 2125When this limit is exceeded, transaction records (itxs) 2126will be cleaned synchronously. 2127. 2128.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int 2129The number of taskq entries that are pre-populated when the taskq is first 2130created and are immediately available for use. 2131. 2132.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int 2133This controls the number of threads used by 2134.Sy dp_zil_clean_taskq . 2135The default value of 2136.Sy 100% 2137will create a maximum of one thread per CPU. 2138. 2139.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint 2140This sets the maximum block size used by the ZIL. 2141On very fragmented pools, lowering this 2142.Pq typically to Sy 36 KiB 2143can improve performance. 2144. 2145.It Sy zil_min_commit_timeout Ns = Ns Sy 5000 Pq u64 2146This sets the minimum delay, in nanoseconds, for which the ZIL is willing to delay a block commit 2147while waiting for more records. 2148If ZIL writes are too fast, the kernel may not be able to sleep for such a short interval, 2149increasing log latency above that allowed by 2150.Sy zfs_commit_timeout_pct . 2151. 2152.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int 2153Disable the cache flush commands that are normally sent to disk by 2154the ZIL after an LWB write has completed. 2155Setting this will cause ZIL corruption on power loss 2156if a volatile out-of-order write cache is enabled. 2157. 2158.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int 2159Disable intent logging replay. 2160Replay can be disabled to recover from a corrupted ZIL. 2161. 2162.It Sy zil_slog_bulk Ns = Ns Sy 786432 Ns B Po 768 KiB Pc Pq u64 2163Limit SLOG write size per commit executed with synchronous priority. 2164Any writes above that will be executed with lower (asynchronous) priority 2165to limit potential SLOG device abuse by a single active ZIL writer. 2166. 2167.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int 2168Setting this tunable to zero disables ZIL logging of new 2169.Sy xattr Ns = Ns Sy sa 2170records if the 2171.Sy org.openzfs:zilsaxattr 2172feature is enabled on the pool. 2173This would only be necessary to work around bugs in the ZIL logging or replay 2174code for this record type. 2175The tunable has no effect if the feature is disabled. 2176. 2177.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint 2178Usually, one metaslab from each normal-class vdev is dedicated for use by 2179the ZIL to log synchronous writes. 2180However, if there are fewer than 2181.Sy zfs_embedded_slog_min_ms 2182metaslabs in the vdev, this functionality is disabled. 2183This ensures that we don't set aside an unreasonable amount of space for the 2184ZIL.
2186.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint 2187Whether heuristic for detection of incompressible data with zstd levels >= 3 2188using LZ4 and zstd-1 passes is enabled. 2189. 2190.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint 2191Minimal uncompressed size (inclusive) of a record before the early abort 2192heuristic will be attempted. 2193. 2194.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int 2195If non-zero, the zio deadman will produce debugging messages 2196.Pq see Sy zfs_dbgmsg_enable 2197for all zios, rather than only for leaf zios possessing a vdev. 2198This is meant to be used by developers to gain 2199diagnostic information for hang conditions which don't involve a mutex 2200or other locking primitive: typically conditions in which a thread in 2201the zio pipeline is looping indefinitely. 2202. 2203.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int 2204When an I/O operation takes more than this much time to complete, 2205it's marked as slow. 2206Each slow operation causes a delay zevent. 2207Slow I/O counters can be seen with 2208.Nm zpool Cm status Fl s . 2209. 2210.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int 2211Throttle block allocations in the I/O pipeline. 2212This allows for dynamic allocation distribution when devices are imbalanced. 2213When enabled, the maximum number of pending allocations per top-level vdev 2214is limited by 2215.Sy zfs_vdev_queue_depth_pct . 2216. 2217.It Sy zfs_xattr_compat Ns = Ns 0 Ns | Ns 1 Pq int 2218Control the naming scheme used when setting new xattrs in the user namespace. 2219If 2220.Sy 0 2221.Pq the default on Linux , 2222user namespace xattr names are prefixed with the namespace, to be backwards 2223compatible with previous versions of ZFS on Linux. 2224If 2225.Sy 1 2226.Pq the default on Fx , 2227user namespace xattr names are not prefixed, to be backwards compatible with 2228previous versions of ZFS on illumos and 2229.Fx . 2230.Pp 2231Either naming scheme can be read on this and future versions of ZFS, regardless 2232of this tunable, but legacy ZFS on illumos or 2233.Fx 2234are unable to read user namespace xattrs written in the Linux format, and 2235legacy versions of ZFS on Linux are unable to read user namespace xattrs written 2236in the legacy ZFS format. 2237.Pp 2238An existing xattr with the alternate naming scheme is removed when overwriting 2239the xattr so as to not accumulate duplicates. 2240. 2241.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int 2242Prioritize requeued I/O. 2243. 2244.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint 2245Percentage of online CPUs which will run a worker thread for I/O. 2246These workers are responsible for I/O work such as compression and 2247checksum calculations. 2248Fractional number of CPUs will be rounded down. 2249.Pp 2250The default value of 2251.Sy 80% 2252was chosen to avoid using all CPUs which can result in 2253latency issues and inconsistent application performance, 2254especially when slower compression and/or checksumming is enabled. 2255. 2256.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint 2257Number of worker threads per taskq. 2258Lower values improve I/O ordering and CPU utilization, 2259while higher reduces lock contention. 2260.Pp 2261If 2262.Sy 0 , 2263generate a system-dependent value close to 6 threads per taskq. 2264. 2265.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2266Do not create zvol device nodes. 
2267This may slightly improve startup time on 2268systems with a very large number of zvols. 2269. 2270.It Sy zvol_major Ns = Ns Sy 230 Pq uint 2271Major number for zvol block devices. 2272. 2273.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long 2274Discard (TRIM) operations done on zvols will be done in batches of this 2275many blocks, where block size is determined by the 2276.Sy volblocksize 2277property of a zvol. 2278. 2279.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint 2280When adding a zvol to the system, prefetch this many bytes 2281from the start and end of the volume. 2282Prefetching these regions of the volume is desirable, 2283because they are likely to be accessed immediately by 2284.Xr blkid 8 2285or the kernel partitioner. 2286. 2287.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2288When processing I/O requests for a zvol, submit them synchronously. 2289This effectively limits the queue depth to 2290.Em 1 2291for each I/O submitter. 2292When unset, requests are handled asynchronously by a thread pool. 2293The number of requests which can be handled concurrently is controlled by 2294.Sy zvol_threads . 2295.Sy zvol_request_sync 2296is ignored when running on a kernel that supports block multiqueue 2297.Pq Li blk-mq . 2298. 2299.It Sy zvol_threads Ns = Ns Sy 0 Pq uint 2300The number of system-wide threads to use for processing zvol block I/Os. 2301If 2302.Sy 0 2303(the default), then internally set 2304.Sy zvol_threads 2305to the number of CPUs present or 32 (whichever is greater). 2306. 2307.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint 2308The number of threads per zvol to use for queuing I/O requests. 2309This parameter will only appear if your kernel supports 2310.Li blk-mq 2311and is only read and assigned to a zvol at zvol load time. 2312If 2313.Sy 0 2314(the default), then internally set 2315.Sy zvol_blk_mq_threads 2316to the number of CPUs present. 2317. 2318.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2319Set to 2320.Sy 1 2321to use the 2322.Li blk-mq 2323API for zvols. 2324Set to 2325.Sy 0 2326(the default) to use the legacy zvol APIs. 2327This setting can give better or worse zvol performance depending on 2328the workload. 2329This parameter will only appear if your kernel supports 2330.Li blk-mq 2331and is only read and assigned to a zvol at zvol load time. 2332. 2333.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint 2334If 2335.Sy zvol_use_blk_mq 2336is enabled, then process this number of 2337.Sy volblocksize Ns -sized blocks per zvol thread. 2338This tunable can be used to favor better performance for zvol reads (lower 2339values) or writes (higher values). 2340If set to 2341.Sy 0 , 2342then the zvol layer will process the maximum number of blocks 2343per thread that it can. 2344This parameter will only appear if your kernel supports 2345.Li blk-mq 2346and is only applied at each zvol's load time. 2347. 2348.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint 2349The queue_depth value for the zvol 2350.Li blk-mq 2351interface. 2352This parameter will only appear if your kernel supports 2353.Li blk-mq 2354and is only applied at each zvol's load time. 2355If 2356.Sy 0 2357(the default), then use the kernel's default queue depth. 2358Values are clamped to the kernel's 2359.Dv BLKDEV_MIN_RQ 2360and 2361.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ 2362limits. 2363.
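.Pp
Because the
.Li blk-mq
tunables are only read when a zvol is loaded, they are typically set as module
options before the module (and its zvols) are loaded, for example through a
.Xr modprobe.d 5
file.
The values below are illustrative only:
.Bd -literal -compact
# /etc/modprobe.d/zfs.conf (illustrative values)
options zfs zvol_use_blk_mq=1 zvol_blk_mq_queue_depth=128
.Ed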
2364.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint 2365Defines zvol block devices behaviour when 2366.Sy volmode Ns = Ns Sy default : 2367.Bl -tag -compact -offset 4n -width "a" 2368.It Sy 1 2369.No equivalent to Sy full 2370.It Sy 2 2371.No equivalent to Sy dev 2372.It Sy 3 2373.No equivalent to Sy none 2374.El 2375. 2376.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint 2377Enable strict ZVOL quota enforcement. 2378The strict quota enforcement may have a performance impact. 2379.El 2380. 2381.Sh ZFS I/O SCHEDULER 2382ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O operations. 2383The scheduler determines when and in what order those operations are issued. 2384The scheduler divides operations into five I/O classes, 2385prioritized in the following order: sync read, sync write, async read, 2386async write, and scrub/resilver. 2387Each queue defines the minimum and maximum number of concurrent operations 2388that may be issued to the device. 2389In addition, the device has an aggregate maximum, 2390.Sy zfs_vdev_max_active . 2391Note that the sum of the per-queue minima must not exceed the aggregate maximum. 2392If the sum of the per-queue maxima exceeds the aggregate maximum, 2393then the number of active operations may reach 2394.Sy zfs_vdev_max_active , 2395in which case no further operations will be issued, 2396regardless of whether all per-queue minima have been met. 2397.Pp 2398For many physical devices, throughput increases with the number of 2399concurrent operations, but latency typically suffers. 2400Furthermore, physical devices typically have a limit 2401at which more concurrent operations have no 2402effect on throughput or can actually cause it to decrease. 2403.Pp 2404The scheduler selects the next operation to issue by first looking for an 2405I/O class whose minimum has not been satisfied. 2406Once all are satisfied and the aggregate maximum has not been hit, 2407the scheduler looks for classes whose maximum has not been satisfied. 2408Iteration through the I/O classes is done in the order specified above. 2409No further operations are issued 2410if the aggregate maximum number of concurrent operations has been hit, 2411or if there are no operations queued for an I/O class that has not hit its 2412maximum. 2413Every time an I/O operation is queued or an operation completes, 2414the scheduler looks for new operations to issue. 2415.Pp 2416In general, smaller 2417.Sy max_active Ns s 2418will lead to lower latency of synchronous operations. 2419Larger 2420.Sy max_active Ns s 2421may lead to higher overall throughput, depending on underlying storage. 2422.Pp 2423The ratio of the queues' 2424.Sy max_active Ns s 2425determines the balance of performance between reads, writes, and scrubs. 2426For example, increasing 2427.Sy zfs_vdev_scrub_max_active 2428will cause the scrub or resilver to complete more quickly, 2429but reads and writes to have higher latency and lower throughput. 2430.Pp 2431All I/O classes have a fixed maximum number of outstanding operations, 2432except for the async write class. 2433Asynchronous writes represent the data that is committed to stable storage 2434during the syncing stage for transaction groups. 2435Transaction groups enter the syncing state periodically, 2436so the number of queued async writes will quickly burst up 2437and then bleed down to zero. 
2438Rather than servicing them as quickly as possible, 2439the I/O scheduler changes the maximum number of active async write operations 2440according to the amount of dirty data in the pool. 2441Since both throughput and latency typically increase with the number of 2442concurrent operations issued to physical devices, reducing the 2443burstiness in the number of simultaneous operations also stabilizes the 2444response time of operations from other queues, in particular synchronous ones. 2445In broad strokes, the I/O scheduler will issue more concurrent operations 2446from the async write queue as there is more dirty data in the pool. 2447. 2448.Ss Async Writes 2449The number of concurrent operations issued for the async write I/O class 2450follows a piece-wise linear function defined by a few adjustable points: 2451.Bd -literal 2452 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP 2453 ^ | /^ | 2454 | | / | | 2455active | / | | 2456 I/O | / | | 2457count | / | | 2458 | / | | 2459 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP 2460 0|_______^______|_________| 2461 0% | | 100% of \fBzfs_dirty_data_max\fP 2462 | | 2463 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP 2464 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP 2465.Ed 2466.Pp 2467Until the amount of dirty data exceeds a minimum percentage of the dirty 2468data allowed in the pool, the I/O scheduler will limit the number of 2469concurrent operations to the minimum. 2470As that threshold is crossed, the number of concurrent operations issued 2471increases linearly to the maximum at the specified maximum percentage 2472of the dirty data allowed in the pool. 2473.Pp 2474Ideally, the amount of dirty data on a busy pool will stay in the sloped 2475part of the function between 2476.Sy zfs_vdev_async_write_active_min_dirty_percent 2477and 2478.Sy zfs_vdev_async_write_active_max_dirty_percent . 2479If it exceeds the maximum percentage, 2480this indicates that the rate of incoming data is 2481greater than the rate that the backend storage can handle. 2482In this case, we must further throttle incoming writes, 2483as described in the next section. 2484. 2485.Sh ZFS TRANSACTION DELAY 2486We delay transactions when we've determined that the backend storage 2487isn't able to accommodate the rate of incoming writes. 2488.Pp 2489If there is already a transaction waiting, we delay relative to when 2490that transaction will finish waiting. 2491This way the calculated delay time 2492is independent of the number of threads concurrently executing transactions. 2493.Pp 2494If we are the only waiter, wait relative to when the transaction started, 2495rather than the current time. 2496This credits the transaction for "time already served", 2497e.g. reading indirect blocks. 2498.Pp 2499The minimum time for a transaction to take is calculated as 2500.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms) 2501.Pp 2502The delay has two degrees of freedom that can be adjusted via tunables. 2503The percentage of dirty data at which we start to delay is defined by 2504.Sy zfs_delay_min_dirty_percent . 2505This should typically be at or above 2506.Sy zfs_vdev_async_write_active_max_dirty_percent , 2507so that we only start to delay after writing at full speed 2508has failed to keep up with the incoming write rate. 2509The scale of the curve is defined by 2510.Sy zfs_delay_scale . 
2511Roughly speaking, this variable determines the amount of delay at the midpoint 2512of the curve. 2513.Bd -literal 2514delay 2515 10ms +-------------------------------------------------------------*+ 2516 | *| 2517 9ms + *+ 2518 | *| 2519 8ms + *+ 2520 | * | 2521 7ms + * + 2522 | * | 2523 6ms + * + 2524 | * | 2525 5ms + * + 2526 | * | 2527 4ms + * + 2528 | * | 2529 3ms + * + 2530 | * | 2531 2ms + (midpoint) * + 2532 | | ** | 2533 1ms + v *** + 2534 | \fBzfs_delay_scale\fP ----------> ******** | 2535 0 +-------------------------------------*********----------------+ 2536 0% <- \fBzfs_dirty_data_max\fP -> 100% 2537.Ed 2538.Pp 2539Note, that since the delay is added to the outstanding time remaining on the 2540most recent transaction it's effectively the inverse of IOPS. 2541Here, the midpoint of 2542.Em 500 us 2543translates to 2544.Em 2000 IOPS . 2545The shape of the curve 2546was chosen such that small changes in the amount of accumulated dirty data 2547in the first three quarters of the curve yield relatively small differences 2548in the amount of delay. 2549.Pp 2550The effects can be easier to understand when the amount of delay is 2551represented on a logarithmic scale: 2552.Bd -literal 2553delay 2554100ms +-------------------------------------------------------------++ 2555 + + 2556 | | 2557 + *+ 2558 10ms + *+ 2559 + ** + 2560 | (midpoint) ** | 2561 + | ** + 2562 1ms + v **** + 2563 + \fBzfs_delay_scale\fP ----------> ***** + 2564 | **** | 2565 + **** + 2566100us + ** + 2567 + * + 2568 | * | 2569 + * + 2570 10us + * + 2571 + + 2572 | | 2573 + + 2574 +--------------------------------------------------------------+ 2575 0% <- \fBzfs_dirty_data_max\fP -> 100% 2576.Ed 2577.Pp 2578Note here that only as the amount of dirty data approaches its limit does 2579the delay start to increase rapidly. 2580The goal of a properly tuned system should be to keep the amount of dirty data 2581out of that range by first ensuring that the appropriate limits are set 2582for the I/O scheduler to reach optimal throughput on the back-end storage, 2583and then by changing the value of 2584.Sy zfs_delay_scale 2585to increase the steepness of the curve. 2586
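.Pp
As a worked sketch of the formula above, assume the usual defaults of
.Sy zfs_delay_scale Ns = Ns Sy 500000
and
.Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % ,
with dirty data sitting at the midpoint of the curve,
80% of
.Sy zfs_dirty_data_max :
.Bd -literal -compact
$ awk 'BEGIN {
    scale = 500000      # zfs_delay_scale, in ns (assumed default)
    min = 60; max = 100 # delay threshold and 100% of zfs_dirty_data_max
    dirty = 80          # current dirty data, as a percentage of the max
    d = scale * (dirty - min) / (max - dirty)
    if (d > 100000000) d = 100000000   # capped at 100 ms
    print d " ns of delay, about " 1000000000 / d " IOPS" }'
500000 ns of delay, about 2000 IOPS
.Ed
.Pp
This reproduces the
.Em 500 us
midpoint and roughly
.Em 2000 IOPS
described above.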