1.\" SPDX-License-Identifier: CDDL-1.0 2.\" 3.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved. 4.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved. 5.\" Copyright (c) 2019 Datto Inc. 6.\" Copyright (c) 2023, 2024 Klara, Inc. 7.\" The contents of this file are subject to the terms of the Common Development 8.\" and Distribution License (the "License"). You may not use this file except 9.\" in compliance with the License. You can obtain a copy of the license at 10.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0. 11.\" 12.\" See the License for the specific language governing permissions and 13.\" limitations under the License. When distributing Covered Code, include this 14.\" CDDL HEADER in each file and include the License file at 15.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this 16.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your 17.\" own identifying information: 18.\" Portions Copyright [yyyy] [name of copyright owner] 19.\" 20.\" Copyright (c) 2024, Klara, Inc. 21.\" 22.Dd November 1, 2024 23.Dt ZFS 4 24.Os 25. 26.Sh NAME 27.Nm zfs 28.Nd tuning of the ZFS kernel module 29. 30.Sh DESCRIPTION 31The ZFS module supports these parameters: 32.Bl -tag -width Ds 33.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 34Maximum size in bytes of the dbuf cache. 35The target size is determined by the MIN versus 36.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd 37of the target ARC size. 38The behavior of the dbuf cache and its associated settings 39can be observed via the 40.Pa /proc/spl/kstat/zfs/dbufstats 41kstat. 42. 43.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64 44Maximum size in bytes of the metadata dbuf cache. 45The target size is determined by the MIN versus 46.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th 47of the target ARC size. 48The behavior of the metadata dbuf cache and its associated settings 49can be observed via the 50.Pa /proc/spl/kstat/zfs/dbufstats 51kstat. 52. 53.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint 54The percentage over 55.Sy dbuf_cache_max_bytes 56when dbufs must be evicted directly. 57. 58.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint 59The percentage below 60.Sy dbuf_cache_max_bytes 61when the evict thread stops evicting dbufs. 62. 63.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint 64Set the size of the dbuf cache 65.Pq Sy dbuf_cache_max_bytes 66to a log2 fraction of the target ARC size. 67. 68.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint 69Set the size of the dbuf metadata cache 70.Pq Sy dbuf_metadata_cache_max_bytes 71to a log2 fraction of the target ARC size. 72. 73.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint 74Set the size of the mutex array for the dbuf cache. 75When set to 76.Sy 0 77the array is dynamically sized based on total system memory. 78. 79.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint 80dnode slots allocated in a single operation as a power of 2. 81The default value minimizes lock contention for the bulk operation performed. 82. 83.It Sy dmu_ddt_copies Ns = Ns Sy 3 Pq uint 84Controls the number of copies stored for DeDup Table 85.Pq DDT 86objects. 87Reducing the number of copies to 1 from the previous default of 3 88can reduce the write inflation caused by deduplication. 89This assumes redundancy for this data is provided by the vdev layer. 
.
.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Limits the amount of data that a single prefetch call may cover, in bytes.
This helps to limit the amount of memory that can be used by prefetching.
.
.It Sy ignore_hole_birth Pq int
Alias for
.Sy send_holes_without_birth_time .
.
.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
Turbo L2ARC warm-up.
When the L2ARC is cold the fill interval will be set as fast as possible.
.
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
Minimum feed interval in milliseconds.
Only applicable when
.Sy l2arc_feed_again Ns = Ns Ar 1 .
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 8 Pq u64
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Ns | Ns 2 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is 0,
meaning both MRU and MFU data and metadata are cached.
When this feature is disabled (set to 0), some MRU buffers will
still be present in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Setting it to 1 caches only MFU data and metadata in L2ARC.
.Pp
Setting it to 2 caches all metadata (MRU+MFU) but only MFU data
(i.e. MRU data are not cached).
This can be the right setting to cache as much metadata as possible,
even with a high data turnover.
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take this option into account, so the information provided
by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
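.Pp
For example, on Linux these arcstats can be inspected with:
.Bd -literal -compact
# grep -E '^l2arc_(mru|mfu|prefetch)_asize' /proc/spl/kstat/zfs/arcstats
.Ed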
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
Max write bytes per interval.
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq u64
Metaslab group's per child vdev allocation granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each child
of a top-level vdev before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab groups biasing based on their over- or under-utilization
relative to the metaslab class average.
If disabled, each metaslab group will receive allocations proportional to its
capacity.
.
.It Sy metaslab_perf_bias Ns = Ns Sy 1 Ns | Ns 0 Ns | Ns 2 Pq int
Controls metaslab groups biasing based on their write performance.
Setting to 0 makes all metaslab groups receive fixed amounts of allocations.
Setting to 2 allows faster metaslab groups to allocate more.
Setting to 1 behaves like 2 if the pool is write-bound, and like 0 otherwise.
That is, if the pool is limited by write throughput, then allocate more from
faster metaslab groups, but if not, try to evenly distribute the allocations.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy metaslab_force_ganging_pct Ns = Ns Sy 3 Ns % Pq uint
For blocks that could be forced to be a gang block (due to
.Sy metaslab_force_ganging ) ,
force this many of them to be gang blocks.
.
.It Sy brt_zap_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Controls prefetching BRT records for blocks which are going to be cloned.
.
.It Sy brt_zap_default_bs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
Default BRT ZAP data block size as a power of 2.
Note that changing this after creating a BRT on the pool will not affect
existing BRTs, only newly created ones.
.
.It Sy brt_zap_default_ibs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
Default BRT ZAP indirect block size as a power of 2.
Note that changing this after creating a BRT on the pool will not affect
existing BRTs, only newly created ones.
.
.It Sy ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP data block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP indirect block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
Default dnode block size as a power of 2.
.
.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
Default dnode indirect block size as a power of 2.
.
.It Sy zfs_dio_enabled Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable Direct I/O.
If this setting is 0, then all I/O requests will be directed through the ARC
acting as though the dataset property
.Sy direct
was set to
.Sy disabled .
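.Pp
For example, with Direct I/O enabled module-wide, its use can still be
selected per dataset via the
.Sy direct
dataset property (a sketch on Linux; pool and dataset names are illustrative):
.Bd -literal -compact
# echo 1 > /sys/module/zfs/parameters/zfs_dio_enabled
# zfs set direct=always tank/scratch
.Ed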
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100,000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
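.Pp
For example, with the default
.Sy 25 Ns %
on a system with 64 GiB of memory, metaslab range trees may grow to roughly
.Em 16 GiB
before the least recently used metaslab is unloaded.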
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
Default lower limit for metaslab size.
.
.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
Default upper limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
Maximum ashift used when optimizing for logical \[->] physical sector size on
new top-level vdevs.
May be increased up to
.Sy ASHIFT_MAX Po 16 Pc ,
but this may negatively impact pool space efficiency.
.
.It Sy zfs_vdev_direct_write_verify Ns = Ns Sy Linux 1 | FreeBSD 0 Pq uint
If non-zero, then a Direct I/O write's checksum will be verified every
time the write is issued and before it is committed to the block pointer.
In the event the checksum is not valid then the I/O operation will return EIO.
This module parameter can be used to detect if the
contents of the user's buffer have changed in the process of doing a Direct I/O
write.
It can also help to identify if reported checksum errors are tied to Direct I/O
writes.
Each verify error causes a
.Sy dio_verify_wr
zevent.
Direct Write I/O checksum verify errors can be seen with
.Nm zpool Cm status Fl d .
The default value for this is 1 on Linux, but is 0 for
.Fx
because user pages can be placed under write protection in
.Fx
before the Direct I/O write is issued.
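.Pp
For example, accumulated Direct I/O verify errors for a pool can be
inspected with (the pool name is illustrative):
.Bd -literal -compact
# zpool status -d tank
.Ed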
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_preload_limit Ns = Ns Sy 10 Pq uint
Maximum number of metaslabs per group to preload.
.
.It Sy metaslab_preload_pct Ns = Ns Sy 50 Pq uint
Percentage of CPUs to run a metaslab preload taskq.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy raidz_expand_max_copy_bytes Ns = Ns Sy 160MB Pq ulong
Max amount of memory to use for RAID-Z expansion I/O.
This limits how much I/O can be outstanding at once.
.
.It Sy raidz_expand_max_reflow_bytes Ns = Ns Sy 0 Pq ulong
For testing, pause RAID-Z expansion when reflow amount reaches this value.
.
.It Sy raidz_io_aggregate_rows Ns = Ns Sy 4 Pq ulong
For expanded RAID-Z, aggregate reads that have more rows than this.
.
.It Sy reference_history Ns = Ns Sy 3 Pq uint
Maximum reference holders being tracked when
.Sy reference_tracking_enable
is active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool
import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
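.Pp
As a worked example, with the default
.Sy spa_slop_shift Ns = Ns Sy 5 ,
a 10 TiB pool holds back
.Em 10 TiB/32 No = Em 320 GiB
as slop space (ignoring any additional internal bounds on the slop size).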
.
.It Sy spa_num_allocators Ns = Ns Sy 4 Pq int
Determines the number of block allocators to use per spa instance.
Capped by the number of actual CPUs in the system via
.Sy spa_cpus_per_allocator .
.Pp
Note that setting this value too high could result in performance
degradation and/or excess fragmentation.
The set value only applies to pools imported or created afterwards.
.
.It Sy spa_cpus_per_allocator Ns = Ns Sy 4 Pq int
Determines the minimum number of CPUs in the system required for each
block allocator of an spa instance.
The set value only applies to pools imported or created afterwards.
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
Maximum micro ZAP size.
A "micro" ZAP is upgraded to a "fat" ZAP once it grows beyond the specified
size.
Sizes higher than 128 KiB will be clamped to 128 KiB unless the
.Sy large_microzap
feature is enabled.
.
.It Sy zap_shrink_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, adjacent empty ZAP blocks will be collapsed, reducing disk space.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetch
since last time hasn't completed in time to satisfy the demand request, i.e.
the prefetch depth didn't cover the read latency or the pool got saturated.
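.Pp
For example, for a stream of sequential 128 KiB demand reads, the prefetch
distance doubles through 256 KiB, 512 KiB, 1 MiB and 2 MiB to reach this
4 MiB floor within about five hits, before the slower 1/8-per-hit growth
toward
.Sy zfetch_max_distance
can apply.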
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_reorder Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Requests within this byte distance from the current prefetch stream position
are considered parts of the stream, reordered due to parallel processing.
Such requests do not advance the stream position immediately unless the
.Sy zfetch_hole_shift
fill threshold is reached, but are saved to fill holes in the stream later.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables the use of scatter/gather lists for ARC data buffers.
When disabled, all allocations are forced to be linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that the limit is instead a percentage, given by
.Sy zfs_arc_dnode_limit_percent ,
of the ARC meta buffers that may be used for dnodes.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC meta buffers that can be consumed by dnodes.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
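.Pp
For example, at the default
.Sy 8 KiB
average block size, a machine with 256 GiB of physical memory devotes roughly
.Em 256 MiB
to the buffer hash table.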
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another
sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a non-zero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
The larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
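.Pp
For example, on Linux the ARC could be capped at 8 GiB at runtime, or
persistently across module loads (values illustrative):
.Bd -literal -compact
# echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
# echo "options zfs zfs_arc_max=8589934592" >> /etc/modprobe.d/zfs.conf
.Ed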
.
.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
Balance between metadata and data on ghost hits.
Values above 100 increase metadata caching by proportionally reducing the
effect of ghost data hits on the target data/metadata rate.
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to the number of CPUs,
but that has not been proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts the ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
Once started, the reclamation process continues until the ARC size returns
below the target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_FILE_PAGES ) ,
where that percent may exceed
.Sy 100 .
This
only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 0 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
To reduce OOM risk, this limit is applied for kswapd reclaims only.
.Pp
For example a value of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_shrinker_seeks Ns = Ns Sy 2 Pq int
Relative cost of ARC eviction on Linux, AKA number of seeks needed to
restore an evicted page.
Bigger values make ARC more precious and evictions smaller, compared to
other kernel subsystems.
A value of 4 means parity with the page cache.
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512 KiB No and Sy all_system_memory/64 .
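.Pp
For example, on a system with 64 GiB of memory the default works out to
the bigger of 512 KiB and
.Em 64 GiB/64 No = Em 1 GiB
kept free.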
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 10 Ns % Pq uint
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
Minimum percent of obsolete bytes in vdev mapping required to attempt to
condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete
mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
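.Pp
For example, on Linux:
.Bd -literal -compact
# cat /proc/spl/kstat/zfs/dbgmsg
# echo 0 > /proc/spl/kstat/zfs/dbgmsg
.Ed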
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_events_per_second Ns = Ns Sy 1 Ns /s Pq int
Rate limit deadman zevents (which report hung I/O operations) to this many per
second.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_dedup_log_flush_min_time_ms Ns = Ns Sy 1000 Ns Pq uint
Minimum time to spend on dedup log flush each transaction.
.Pp
At least this long will be spent flushing dedup log entries each transaction,
up to
.Sy zfs_txg_timeout .
This occurs even if doing so would delay the transaction, that is, even if
other I/O completes under this time.
.
.It Sy zfs_dedup_log_flush_entries_min Ns = Ns Sy 100 Ns Pq uint
Flush at least this many entries each transaction.
.Pp
OpenZFS will flush a fraction of the log every TXG, to keep the size
proportional to the ingest rate (see
.Sy zfs_dedup_log_flush_txgs ) .
This sets the minimum for that estimate, which prevents the backlog from
completely draining if the ingest rate falls.
Raising it can force OpenZFS to flush more aggressively, reducing the backlog
to zero more quickly, but can make it less able to back off if log
flushing would compete with other I/O too much.
.
.It Sy zfs_dedup_log_flush_entries_max Ns = Ns Sy UINT_MAX Ns Pq uint
Flush at most this many entries each transaction.
.Pp
Mostly used for debugging purposes.
.
.It Sy zfs_dedup_log_flush_txgs Ns = Ns Sy 100 Ns Pq uint
Target number of TXGs to process the whole dedup log.
.Pp
Every TXG, OpenZFS will process the inverse of this number times the size
of the DDT backlog.
This will keep the backlog at a size roughly equal to the ingest rate
times this value.
This offers a balance between a more efficient DDT log, with better
aggregation, and shorter import times, which increase as the size of the
DDT log increases.
Increasing this value will result in a more efficient DDT log, but longer
import times.
.
.It Sy zfs_dedup_log_cap Ns = Ns Sy UINT_MAX Ns Pq uint
Soft cap for the size of the current dedup log.
.Pp
If the log is larger than this size, we increase the aggressiveness of
the flushing to try to bring it back down to the soft cap.
Setting it will reduce import times, but will reduce the efficiency of
the DDT log, increasing the expected number of I/Os required to flush the same
amount of data.
.
.It Sy zfs_dedup_log_hard_cap Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Whether to treat the log cap as a firm cap or not.
.Pp
When set to 0 (the default),
.Sy zfs_dedup_log_cap
will increase the maximum number of log entries we flush in a given txg.
This will bring the backlog size down towards the cap, but not at the expense
of making TXG syncs take longer.
If this is set to 1, the cap acts more like a hard cap than a soft cap; it will
also increase the minimum number of log entries we flush per TXG.
Enabling it will reduce worst-case import times, at the cost of increased TXG
sync times.
.
.It Sy zfs_dedup_log_flush_flow_rate_txgs Ns = Ns Sy 10 Ns Pq uint
Number of transactions to use to compute the flow rate.
.Pp
OpenZFS will estimate the number of entries changed (ingest rate), the number
of entries flushed (flush rate) and the time spent flushing (flush time rate),
and combine these into an overall "flow rate".
It will use an exponential weighted moving average over some number of recent
transactions to compute these rates.
This sets the number of transactions to compute these averages over.
Setting it higher can help to smooth out the flow rate in the face of spiky
workloads, but will take longer for the flow rate to adjust to a sustained
change in the ingest rate.
.
.It Sy zfs_dedup_log_txg_max Ns = Ns Sy 8 Ns Pq uint
Maximum number of transactions before starting to flush dedup logs.
.Pp
OpenZFS maintains two dedup logs, one receiving new changes, one flushing.
If there is nothing to flush, it will accumulate changes for no more than this
many transactions before switching the logs and starting to flush entries out.
.
.It Sy zfs_dedup_log_mem_max Ns = Ns Sy 0 Ns Pq u64
Max memory to use for dedup logs.
.Pp
OpenZFS will spend no more than this much memory on maintaining the in-memory
dedup log.
Flushing will begin when around half this amount is being spent on logs.
The default value of
.Sy 0
will cause it to be set by
.Sy zfs_dedup_log_mem_max_percent
instead.
.
.It Sy zfs_dedup_log_mem_max_percent Ns = Ns Sy 1 Ns % Pq uint
Max memory to use for dedup logs, as a percentage of total memory.
.Pp
If
.Sy zfs_dedup_log_mem_max
is not set, it will be initialized as a percentage of the total memory in the
system.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
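.Pp
As a worked example, for a pool that sustains roughly 20,000 write
operations per second, the smoothest delay is obtained around
.Sy zfs_delay_scale Ns = Ns Sy 50000
.Pq 10^9/20,000 .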
.
.It Sy zfs_dio_write_verify_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit Direct I/O write verify events to this many per second.
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay zevents (which report slow I/O operations) to this many per
second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active, which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more flushing occurs, destroying log blocks
sooner as they become obsolete, which leaves fewer blocks to be read
during import after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation in which we are flushing all our metaslabs
every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
Tunable limiting the maximum time in TXGs any metaslab may remain unflushed.
It effectively limits the maximum number of unflushed per-TXG spacemap logs
that need to be read after an unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted
synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is
available.
This only applies on Linux.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
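.Pp
For example, on a 64-bit system with 128 GiB of RAM,
.Em 128 GiB/10
exceeds the 4 GiB cap, so the default dirty space limit works out to
.Em 4 GiB .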
.
.It Sy zfs_dirty_data_max_max Ns = Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
This parameter takes precedence over
.Sy zfs_dirty_data_max_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy min(physical_ram/4, 4GiB) ,
or
.Sy min(physical_ram/4, 1GiB)
for 32-bit systems.
.
.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed as a percentage of physical RAM.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
The parameter
.Sy zfs_dirty_data_max_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
Determines the dirty space limit, expressed as a percentage of all memory.
Once this limit is exceeded, new writes are halted until space frees up.
The parameter
.Sy zfs_dirty_data_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Subject to
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
Start syncing out a transaction group if there's at least this much dirty data
.Pq as a percentage of Sy zfs_dirty_data_max .
This should be less than
.Sy zfs_vdev_async_write_active_min_dirty_percent .
.
.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
Write operations are throttled when approaching the limit until log data is
cleared out after transaction group sync.
Because of some overhead, it should be set to at least twice the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput .
It also should be smaller than the size of the slog device if slog is present.
.Pp
Defaults to
.Sy zfs_dirty_data_max*2 .
.
.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
preallocated for a file in order to guarantee that later writes will not
run out of space.
Instead,
.Xr fallocate 2
space preallocation only checks that sufficient space is currently available
in the pool or the user's project quota allocation,
and then creates a sparse file of the requested size.
The requested space is multiplied by
.Sy zfs_fallocate_reserve_percent
to allow additional space for indirect blocks and other internal metadata.
Setting this to
.Sy 0
disables support for
.Xr fallocate 2
and causes it to return
.Sy EOPNOTSUPP .
.
.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
Select a fletcher 4 implementation.
.Pp
Supported selectors are:
.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
.No and Sy aarch64_neon .
All except
.Sy fastest No and Sy scalar
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of fletcher 4 are available, the
.Sy fastest
will be chosen using a micro benchmark.
Selecting
.Sy scalar
results in the original CPU-based calculation being used.
Selecting any option other than
.Sy fastest No or Sy scalar
results in vector instructions
from the respective CPU instruction set being used.
.
.It Sy zfs_bclone_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables access to the block cloning feature.
If this setting is 0, then even if feature@block_cloning is enabled,
using functions and system calls that attempt to clone blocks will act as
though the feature is disabled.
.
.It Sy zfs_bclone_wait_dirty Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set to 1 the FICLONE and FICLONERANGE ioctls wait for dirty data to be
written to disk.
This allows the clone operation to reliably succeed when a file is
modified and then immediately cloned.
For small files this may be slower than making a copy of the file.
Therefore, this setting defaults to 0 which causes a clone operation to
immediately fail when encountering a dirty block.
.
.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
Select a BLAKE3 implementation.
.Pp
Supported selectors are:
.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
All except
.Sy cycle , fastest No and Sy generic
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of BLAKE3 are available, the
.Sy fastest
will be chosen using a micro benchmark.
You can see the benchmark results by reading this kstat file:
.Pa /proc/spl/kstat/zfs/chksum_bench .
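.Pp
For example, on Linux the benchmark results can be read and an
implementation selected explicitly (a sketch; the implementation name is
illustrative):
.Bd -literal -compact
# cat /proc/spl/kstat/zfs/chksum_bench
# echo sse41 > /sys/module/zfs/parameters/zfs_blake3_impl
.Ed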
.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
Maximum number of blocks freed in a single TXG.
.
.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
Maximum number of dedup blocks freed in a single TXG.
.
.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
Maximum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
Minimum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
When the pool has more than this much dirty data, use
.Sy zfs_vdev_async_write_max_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
When the pool has less than this much dirty data, use
.Sy zfs_vdev_async_write_min_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
Maximum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
Minimum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.Pp
Lower values are associated with better latency on rotational media but poorer
resilver performance.
The default value of
.Sy 2
was chosen as a compromise.
A value of
.Sy 3
has been shown to improve resilver performance further at a cost of
further increasing latency.
.
.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
Maximum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
Minimum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
The maximum number of I/O operations active to each device.
Ideally, this will be at least the sum of each queue's
.Sy max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
Timeout value to wait before determining a device is missing
during import.
This is helpful for transient missing paths due
to links being briefly removed and recreated in response to
udev events.
.
.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
Maximum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
Minimum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
Maximum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
Minimum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
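.Pp
The min/max pairs in this family can be adjusted together.
As a hedged sketch (Linux parameter paths assumed, values purely
illustrative), favoring an ongoing device removal over interactive latency
might look like:
.Bd -literal -offset indent
echo 4 > /sys/module/zfs/parameters/zfs_vdev_removal_max_active
echo 2 > /sys/module/zfs/parameters/zfs_vdev_removal_min_active
.Ed
.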
.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
Maximum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
Minimum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
Maximum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
Minimum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
Maximum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
Minimum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
Maximum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
Minimum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
the number of concurrently-active I/O operations is limited to
.Sy zfs_*_min_active ,
unless the vdev is "idle".
When there are no interactive I/O operations active (synchronous or otherwise),
and
.Sy zfs_vdev_nia_delay
operations have completed since the last interactive operation,
then the vdev is considered to be "idle",
and the number of concurrently-active non-interactive operations is increased to
.Sy zfs_*_max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
Some HDDs tend to prioritize sequential I/O so strongly that concurrent
random I/O latency reaches several seconds.
On some HDDs this happens even if sequential I/O operations
are submitted one at a time, and so setting
.Sy zfs_*_max_active Ns = Sy 1
does not help.
To prevent non-interactive I/O, like scrub,
from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit
operations can be sent
while there are outstanding incomplete interactive operations.
This enforced wait ensures the HDD services the interactive I/O
within a reasonable amount of time.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
Defines if the driver should retire a device on a given error type.
The following options may be bitwise-ored together:
.TS
box;
lbz r l l .
	Value	Name	Description
_
	1	Device	No driver retries on device errors.
	2	Transport	No driver retries on transport errors.
	4	Driver	No driver retries on driver errors.
.TE
.
.It Sy zfs_vdev_disk_max_segs Ns = Ns Sy 0 Pq uint
Maximum number of segments to add to a BIO (min 4).
If this is higher than the maximum allowed by the device queue or the kernel
itself, it will be clamped.
Setting it to zero will cause the kernel's ideal size to be used.
This parameter only applies on Linux.
This parameter is ignored if
.Sy zfs_vdev_disk_classic Ns = Ns Sy 1 .
.
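.Pp
Parameters that are read at module load time are best set persistently
through the module options file.
A sketch (file location conventional on Linux, values purely illustrative):
.Bd -literal -offset indent
# /etc/modprobe.d/zfs.conf
options zfs zfs_vdev_disk_classic=0 zfs_vdev_disk_max_segs=0
.Ed
.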
.It Sy zfs_vdev_disk_classic Ns = Ns Sy 0 Ns | Ns 1 Pq uint
If set to 1, OpenZFS will submit IO to Linux using the method it used in 2.2
and earlier.
This "classic" method has known issues with highly fragmented IO requests and
is slower on many workloads, but it has been in use for many years and is known
to be very stable.
If you set this parameter, please also open a bug report explaining why you
did so, including the workload involved and any error messages.
.Pp
This parameter and the classic submission method will be removed once we have
total confidence in the new method.
.Pp
This parameter only applies on Linux, and can only be set at module load time.
.
.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
Time before expiring
.Pa .zfs/snapshot .
.
.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow the creation, removal, or renaming of entries in the
.Sy .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled, this functionality works both locally and over NFS exports
which have the
.Em no_root_squash
option set.
.
.It Sy zfs_snapshot_no_setuid Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to disable
.Em setuid/setgid
support for snapshot mounts triggered by access to the
.Sy .zfs/snapshot
directory by setting the
.Em nosuid
mount option.
.
.It Sy zfs_flags Ns = Ns Sy 0 Pq int
Set additional debugging flags.
The following flags may be bitwise-ored together:
.TS
box;
lbz r l l .
	Value	Name	Description
_
	1	ZFS_DEBUG_DPRINTF	Enable dprintf entries in the debug log.
*	2	ZFS_DEBUG_DBUF_VERIFY	Enable extra dbuf verifications.
*	4	ZFS_DEBUG_DNODE_VERIFY	Enable extra dnode verifications.
	8	ZFS_DEBUG_SNAPNAMES	Enable snapshot name verification.
*	16	ZFS_DEBUG_MODIFY	Check for illegally modified ARC buffers.
	64	ZFS_DEBUG_ZIO_FREE	Enable verification of block frees.
	128	ZFS_DEBUG_HISTOGRAM_VERIFY	Enable extra spacemap histogram verifications.
	256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
	512	ZFS_DEBUG_SET_ERROR	Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
	1024	ZFS_DEBUG_INDIRECT_REMAP	Verify split blocks created by device removal.
	2048	ZFS_DEBUG_TRIM	Verify TRIM ranges are always within the allocatable range tree.
	4096	ZFS_DEBUG_LOG_SPACEMAP	Verify that the log summary is consistent with the spacemap log
			and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
	8192	ZFS_DEBUG_METASLAB_ALLOC	Enable debugging messages when allocations fail.
	16384	ZFS_DEBUG_BRT	Enable BRT-related debugging messages.
	32768	ZFS_DEBUG_RAIDZ_RECONSTRUCT	Enable debugging messages for raidz reconstruction.
	65536	ZFS_DEBUG_DDT	Enable DDT-related debugging messages.
.TE
.Sy \& * No Requires debug build .
.
.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
Enables btree verification.
The following settings are cumulative:
.TS
box;
lbz r l l .
	Value	Description
_
	1	Verify height.
	2	Verify pointers from children to parent.
	3	Verify element counts.
	4	Verify element order. (expensive)
*	5	Verify unused memory is poisoned. (expensive)
.TE
.Sy \& * No Requires debug build .
.
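.Pp
As a sketch of how these debugging bitmasks are composed (values taken from
the tables above; both tunables are runtime-writable on Linux):
.Bd -literal -offset indent
# Enable ZFS_DEBUG_SET_ERROR (512) and ZFS_DEBUG_TRIM (2048).
echo $((512 | 2048)) > /sys/module/zfs/parameters/zfs_flags
# Cumulative btree verification up to element order checks.
echo 4 > /sys/module/zfs/parameters/zfs_btree_verify_intensity
.Ed
.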
.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
If destroy encounters an
.Sy EIO
while reading metadata (e.g. indirect blocks),
space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled",
as it is unable to make forward progress.
While in this stalled state, all remaining space to free
from the error-encountering filesystem is "temporarily leaked".
Set this flag to cause it to ignore the
.Sy EIO ,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.
.Pp
The default "stalling" behavior is useful if the storage partially
fails (i.e. some but not all I/O operations fail), and then later recovers.
In this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks.
Note, however, that this case is actually fairly rare.
.Pp
Typically pools either
.Bl -enum -compact -offset 4n -width "1."
.It
fail completely (but perhaps temporarily,
e.g. due to a top-level vdev going offline), or
.It
have localized, permanent errors (e.g. disk returns the wrong data
due to bit flip or firmware bug).
.El
In the former case, this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless.
In the latter, because the error is permanent, the best we can do
is leak the minimum amount of space,
which is what setting this flag will do.
It is therefore reasonable for this flag to normally be set,
but we chose the more conservative approach of not setting it,
so that there is no possibility of
leaking space in the "partial temporary" failure case.
.
.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
During a
.Nm zfs Cm destroy
operation using the
.Sy async_destroy
feature,
a minimum of this much time will be spent working on freeing blocks per TXG.
.
.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
Similar to
.Sy zfs_free_min_time_ms ,
but for cleanup of old indirection records for removed vdevs.
.
.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
Largest data block to write to the ZIL.
Larger blocks will be treated as if the dataset being written to had the
.Sy logbias Ns = Ns Sy throughput
property set.
.
.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
Pattern written to vdev free space by
.Xr zpool-initialize 8 .
.
.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Size of writes used by
.Xr zpool-initialize 8 .
This option is used by the test suite.
.
.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
The threshold size (in block pointers) at which we create a new sub-livelist.
Larger sublists are more costly from a memory perspective but the fewer
sublists there are, the lower the cost of insertion.
.
.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
If the amount of shared space between a snapshot and its clone drops below
this threshold, the clone turns off the livelist and reverts to the old
deletion method.
This is in place because livelists no longer give us a benefit
once a clone has been overwritten enough.
.
.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
Incremented each time an extra ALLOC blkptr is added to a livelist entry while
it is being condensed.
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
Incremented each time livelist condensing is canceled while in
.Fn spa_livelist_condense_sync .
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set, the livelist condense process pauses indefinitely before
executing the synctask \(em
.Fn spa_livelist_condense_sync .
This option is used by the test suite to trigger race conditions.
.
.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
Incremented each time livelist condensing is canceled while in
.Fn spa_livelist_condense_cb .
This option is used by the test suite to track race conditions.
.
.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
When set, the livelist condense process pauses indefinitely before
executing the open context condensing work in
.Fn spa_livelist_condense_cb .
This option is used by the test suite to trigger race conditions.
.
.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
The maximum execution time limit that can be set for a ZFS channel program,
specified as a number of Lua instructions.
.
.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
The maximum memory limit that can be set for a ZFS channel program, specified
in bytes.
.
.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
The maximum depth of nested datasets.
This value can be tuned temporarily to
fix existing datasets that exceed the predefined limit.
.
.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64
The number of past TXGs that the flushing algorithm of the log spacemap
feature uses to estimate incoming log blocks.
.
.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64
Maximum number of rows allowed in the summary of the spacemap log.
.
.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint
We currently support block sizes from
.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc .
The benefits of larger blocks, and thus larger I/O,
need to be weighed against the cost of COWing a giant block to modify one byte.
Additionally, very large blocks can have an impact on I/O latency,
and also potentially on the memory allocator.
Therefore, we formerly forbade creating blocks larger than 1M.
Larger blocks could be created by changing this tunable,
and pools with larger blocks can always be imported and used,
regardless of this setting.
.Pp
Note that it is still limited by default to
.Ar 1 MiB
on x86_32, because Linux's
3/1 memory split doesn't leave much room for 16M chunks.
.
.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow datasets received with redacted send/receive to be mounted.
Normally disabled because these datasets may be missing key data.
.
.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64
Minimum number of metaslabs to flush per dirty TXG.
.
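.Pp
Metaslab flushing decisions can be traced by enabling the
.Sy ZFS_DEBUG_LOG_SPACEMAP
flag described under
.Sy zfs_flags .
A sketch (Linux paths assumed):
.Bd -literal -offset indent
echo 4096 > /sys/module/zfs/parameters/zfs_flags
cat /proc/spl/kstat/zfs/dbgmsg
.Ed
.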
.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 77 Ns % Pq uint
Allow metaslabs to keep their active state as long as their fragmentation
percentage is no more than this value.
An active metaslab that exceeds this threshold
will no longer keep its active status, allowing better metaslabs to be selected.
.
.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint
Metaslab groups are considered eligible for allocations if their
fragmentation metric (measured as a percentage) is less than or equal to
this value.
If a metaslab group exceeds this threshold, then it will be
skipped unless all metaslab groups within the metaslab class have also
crossed this threshold.
.
.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint
Defines a threshold at which metaslab groups should be eligible for allocations.
The value is expressed as a percentage of free space
beyond which a metaslab group is always eligible for allocations.
If a metaslab group's free space is less than or equal to the
threshold, the allocator will avoid allocating to that group
unless all groups in the pool have reached the threshold.
Once all groups have reached the threshold, all groups are allowed to accept
allocations.
The default value of
.Sy 0
disables the feature and causes all metaslab groups to be eligible for
allocations.
.Pp
This parameter allows one to deal with pools having heavily imbalanced
vdevs such as would be the case when a new vdev has been added.
Setting the threshold to a non-zero percentage will stop allocations
from being made to vdevs that aren't filled to the specified percentage
and allow lesser filled vdevs to acquire more allocations than they
otherwise would under the old
.Sy zfs_mg_alloc_failures
facility.
.
.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
If enabled, ZFS will place DDT data into the special allocation class.
.
.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
If enabled, ZFS will place user data indirect blocks
into the special allocation class.
.
.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint
Historical statistics for this many latest multihost updates will be available
in
.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
.
.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
Used to control the frequency of multihost writes which are performed when the
.Sy multihost
pool property is on.
This is one of the factors used to determine the
length of the activity check during import.
.Pp
The multihost write period is
.Sy zfs_multihost_interval No / Sy leaf-vdevs .
On average a multihost write will be issued for each leaf vdev
every
.Sy zfs_multihost_interval
milliseconds.
In practice, the observed period can vary with the I/O load
and this observed value is the delay which is stored in the uberblock.
.
.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint
Used to control the duration of the activity test on import.
Smaller values of
.Sy zfs_multihost_import_intervals
will reduce the import time but increase
the risk of failing to detect an active pool.
The total activity check time is never allowed to drop below one second.
.Pp
On import the activity check waits a minimum amount of time determined by
.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals ,
or the same product computed on the host which last had the pool imported,
whichever is greater.
The activity check time may be further extended if the value of MMP
delay found in the best uberblock indicates actual multihost updates happened
at longer intervals than
.Sy zfs_multihost_interval .
A minimum of
.Em 100 ms
is enforced.
.Pp
.Sy 0 No is equivalent to Sy 1 .
.
.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
Controls the behavior of the pool when multihost write failures or delays are
detected.
.Pp
When
.Sy 0 ,
multihost write failures or delays are ignored.
The failures will still be reported to the ZED which, depending on
its configuration, may take action such as suspending the pool or offlining a
device.
.Pp
Otherwise, the pool will be suspended if
.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval
milliseconds pass without a successful MMP write.
This guarantees the activity test will see MMP writes if the pool is imported.
.Sy 1 No is equivalent to Sy 2 ;
this is necessary to prevent the pool from being suspended
due to normal, small I/O latency variations.
.
.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
Set to disable scrub I/O.
This results in scrubs not actually scrubbing data and
simply doing a metadata crawl of the pool instead.
.
.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Set to disable block prefetching for scrubs.
.
.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable cache flush operations on disks when writing.
Setting this will cause pool corruption on power loss
if a volatile out-of-order write cache is enabled.
.
.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Allow no-operation writes.
The occurrence of nopwrites will further depend on other pool properties
.Pq i.a. the checksumming and compression algorithms .
.
.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable forcing TXG sync to find holes.
When enabled, forces ZFS to sync data when
.Sy SEEK_HOLE No or Sy SEEK_DATA
flags are used, allowing holes in a file to be accurately reported.
When disabled, holes will not be reported in recently dirtied files.
.
.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int
The number of bytes which should be prefetched during a pool traversal, like
.Nm zfs Cm send
or other data crawling operations.
.
.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint
The number of blocks pointed to by an indirect (non-L0) block which should be
prefetched during a pool traversal, like
.Nm zfs Cm send
or other data crawling operations.
.
.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64
Control percentage of dirtied indirect blocks from frees allowed into one TXG.
After this threshold is crossed, additional frees will wait until the next TXG.
.Sy 0 No disables this throttle .
.
.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable predictive prefetch.
Note that it leaves "prescient" prefetch
.Pq for, e.g., Nm zfs Cm send
intact.
Unlike predictive prefetch, prescient prefetch never issues I/O
that ends up not being needed, so it can't hurt performance.
.
.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable QAT hardware acceleration for SHA256 checksums.
May be unset after the ZFS modules have been loaded to initialize the QAT
hardware as long as support is compiled in and the QAT driver is present.
.
.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable QAT hardware acceleration for gzip compression.
May be unset after the ZFS modules have been loaded to initialize the QAT
hardware as long as support is compiled in and the QAT driver is present.
.
.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable QAT hardware acceleration for AES-GCM encryption.
May be unset after the ZFS modules have been loaded to initialize the QAT
hardware as long as support is compiled in and the QAT driver is present.
.
.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Bytes to read per chunk.
.
.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint
Historical statistics for this many latest reads will be available in
.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads .
.
.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
Include cache hits in read history.
.
.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Maximum read segment size to issue when sequentially resilvering a
top-level vdev.
.
.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Automatically start a pool scrub when the last active sequential resilver
completes in order to verify the checksums of all blocks which have been
resilvered.
This is enabled by default and strongly recommended.
.
.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
Maximum amount of I/O that can be concurrently issued for a sequential
resilver per leaf device, given in bytes.
.
.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int
If an indirect split block contains more than this many possible unique
combinations when being reconstructed, consider it too computationally
expensive to check them all.
Instead, try at most this many randomly selected
combinations each time the block is accessed.
This allows all segment copies to participate fairly
in the reconstruction when all combinations
cannot be checked and prevents repeated use of one bad copy.
.
.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int
Set to attempt to recover from fatal errors.
This should only be used as a last resort,
as it typically results in leaked space, or worse.
.
.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
Ignore hard I/O errors during device removal.
When set, if a device encounters a hard I/O error during the removal process,
the removal will not be canceled.
This can result in a normally recoverable block becoming permanently damaged
and is hence not recommended.
This should only be used as a last resort when the
pool cannot be returned to a healthy state prior to removing the device.
.
.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
This is used by the test suite so that it can ensure that certain actions
happen while in the middle of a removal.
.
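.Pp
Observability tunables such as
.Sy zfs_read_history ,
described above, take effect immediately.
A sketch of enabling and inspecting it (pool name illustrative):
.Bd -literal -offset indent
echo 100 > /sys/module/zfs/parameters/zfs_read_history
cat /proc/spl/kstat/zfs/tank/reads
.Ed
.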
.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The largest contiguous segment that we will attempt to allocate when removing
a device.
If there is a performance problem with attempting to allocate large blocks,
consider decreasing this.
The default value is also the maximum.
.
.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int
Ignore the
.Sy resilver_defer
feature, causing an operation that would start a resilver to
immediately restart the one in progress.
.
.It Sy zfs_resilver_defer_percent Ns = Ns Sy 10 Ns % Pq uint
If the ongoing resilver progress is below this threshold, a new resilver will
restart from scratch instead of being deferred after the current one finishes,
even if the
.Sy resilver_defer
feature is enabled.
.
.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint
Resilvers are processed by the sync thread.
While resilvering, it will spend at least this much time
working on a resilver between TXG flushes.
.
.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
even if there were unrepairable errors.
Intended to be used during pool repair or recovery to
stop resilvering when the pool is next imported.
.
.It Sy zfs_scrub_after_expand Ns = Ns Sy 1 Ns | Ns 0 Pq int
Automatically start a pool scrub after a RAIDZ expansion completes
in order to verify the checksums of all blocks which have been
copied during the expansion.
This is enabled by default and strongly recommended.
.
.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint
Scrubs are processed by the sync thread.
While scrubbing, it will spend at least this much time
working on a scrub between TXG flushes.
.
.It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint
Error blocks to be scrubbed in one TXG.
.
.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint
To preserve progress across reboots, the sequential scan algorithm periodically
needs to stop metadata scanning and issue all the verification I/O to disk.
The frequency of this flushing is determined by this tunable.
.
.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint
This tunable affects how scrub and resilver I/O segments are ordered.
A higher number indicates that we care more about how filled-in a segment is,
while a lower number indicates we care more about the size of the extent without
considering the gaps within a segment.
This value is only tunable upon module insertion.
Changing the value afterwards will have no effect on scrub or resilver
performance.
.
.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint
Determines the order that data will be verified while scrubbing or resilvering:
.Bl -tag -compact -offset 4n -width "a"
.It Sy 1
Data will be verified as sequentially as possible, given the
amount of memory reserved for scrubbing
.Pq see Sy zfs_scan_mem_lim_fact .
This may improve scrub performance if the pool's data is very fragmented.
.It Sy 2
The largest mostly-contiguous chunk of found data will be verified first.
By deferring scrubbing of small segments, we may later find adjacent data
to coalesce and increase the segment size.
.It Sy 0
.No Use strategy Sy 1 No during normal verification
.No and strategy Sy 2 No while taking a checkpoint .
.El
.
.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
If unset, indicates that scrubs and resilvers will gather metadata in
memory before issuing sequential I/O.
Otherwise indicates that the legacy algorithm will be used,
where I/O is initiated as soon as it is discovered.
Unsetting will not affect scrubs or resilvers that are already in progress.
.
.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
Sets the largest gap in bytes between scrub/resilver I/O operations
that will still be considered sequential for sorting purposes.
Changing this value will not
affect scrubs or resilvers that are already in progress.
.
.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
Maximum fraction of RAM used for I/O sorting by the sequential scan algorithm.
This tunable determines the hard limit for I/O sorting memory usage.
When the hard limit is reached we stop scanning metadata and start issuing
data verification I/O.
This is done until we get below the soft limit.
.
.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fraction of the hard limit used to determine the soft limit for I/O sorting
by the sequential scan algorithm.
When we cross this limit from below, no action is taken.
When we cross this limit from above, it is because we are issuing verification
I/O.
In this case (unless the metadata scan is done) we stop issuing verification I/O
and start scanning metadata again until we get to the hard limit.
.
.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When reporting resilver throughput and estimated completion time, use the
performance observed over roughly the last
.Sy zfs_scan_report_txgs
TXGs.
When set to zero, performance is calculated over the time between checkpoints.
.
.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enforce tight memory limits on pool scans when a sequential scan is in progress.
When disabled, the memory limit may be exceeded by fast disks.
.
.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
Freezes a scrub/resilver in progress without actually pausing it.
Intended for testing/debugging.
.
.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum amount of data that can be concurrently issued for scrubs and
resilvers per leaf device, given in bytes.
.
.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow sending of corrupt data (ignore read/checksum errors when sending).
.
.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
Include unmodified spill blocks in the send stream.
Under certain circumstances, previous versions of ZFS could incorrectly
remove the spill block from an existing object.
Including unmodified copies of the spill blocks creates a backwards-compatible
stream which will recreate a spill block if it was incorrectly removed.
.
.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm send
internal queues.
The fill fraction controls the timing with which internal threads are woken up.
.
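.Pp
The fill fractions are expressed as reciprocals: the stored value is the
denominator, so the default of
.Sy 20
corresponds to a fill fraction of 1/20.
A sketch of inspecting the send queue tunables (Linux paths assumed):
.Bd -literal -offset indent
grep . /sys/module/zfs/parameters/zfs_send_*queue*
.Ed
.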
.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
The maximum number of bytes allowed in
.Nm zfs Cm send Ns 's
internal queues.
.
.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm send
prefetch queue.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes that will be prefetched by
.Nm zfs Cm send .
This value must be at least twice the maximum block size in use.
.
.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
The fill fraction of the
.Nm zfs Cm receive
queue.
The fill fraction controls the timing with which internal threads are woken up.
.
.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
The maximum number of bytes allowed in the
.Nm zfs Cm receive
queue.
This value must be at least twice the maximum block size in use.
.
.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
The maximum amount of data, in bytes, that
.Nm zfs Cm receive
will write in one DMU transaction.
This is the uncompressed size, even when receiving a compressed send stream.
This setting will not reduce the write size below a single block.
Capped at a maximum of
.Sy 32 MiB .
.
.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
When this variable is set to non-zero, a corrective receive:
.Bl -enum -compact -offset 4n -width "1."
.It
Does not enforce the restriction of source & destination snapshot GUIDs
matching.
.It
If there is an error during healing, the healing receive is not
terminated; instead, it moves on to the next record.
.El
.
.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Setting this variable overrides the default logic for estimating block
sizes when doing a
.Nm zfs Cm send .
The default heuristic is that the average block size
will be the current recordsize.
Override this value if most data in your dataset is not of that size
and you require accurate zfs send size estimates.
.
.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
Flushing of data to disk is done in passes.
Defer frees starting in this pass.
.
.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum memory used for prefetching a checkpoint's space map on each
vdev while discarding the checkpoint.
.
.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint
Only allow small data blocks to be allocated on the special and dedup vdev
types when the available free space percentage on these vdevs exceeds this
value.
This ensures reserved space is available for pool metadata as the
special vdevs approach capacity.
.
.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint
Starting in this sync pass, disable compression (including of metadata).
With the default setting, in practice, we don't have this many sync passes,
so this has no effect.
.Pp
The original intent was that disabling compression would help the sync passes
to converge.
However, in practice, disabling compression increases
the average number of sync passes, because when we turn compression off,
many blocks' sizes will change, and thus we have to re-allocate
(not overwrite) them.
It also increases the number of
.Em 128 KiB
allocations (e.g. for indirect blocks and spacemaps)
because these will not be compressed.
The
.Em 128 KiB
allocations are especially detrimental to performance
on highly fragmented systems, which may have very few free segments of this
size,
and may need to load new metaslabs to satisfy these allocations.
.
.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint
Rewrite new block pointers starting in this pass.
.
.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Maximum size of a TRIM command.
Larger ranges will be split into chunks no larger than this value before
issuing.
.
.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
Minimum size of TRIM commands.
TRIM ranges smaller than this will be skipped,
unless they're part of a larger range which was chunked.
This is done because it's common for these small TRIMs
to negatively impact overall performance.
.
.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Skip uninitialized metaslabs during the TRIM process.
This option is useful for pools constructed from large thinly-provisioned
devices
where TRIM operations are slow.
As a pool ages, an increasing fraction of the pool's metaslabs
will be initialized, progressively degrading the usefulness of this option.
This setting is stored when starting a manual TRIM and will
persist for the duration of the requested TRIM.
.
.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
Maximum number of queued TRIMs outstanding per leaf vdev.
The number of concurrent TRIM commands issued to the device is controlled by
.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
.
.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
The number of transaction groups' worth of frees which should be aggregated
before TRIM operations are issued to the device.
This setting represents a trade-off between issuing larger,
more efficient TRIM operations and the delay
before the recently trimmed space is available for use by the device.
.Pp
Increasing this value will allow frees to be aggregated for a longer time.
This will result in larger TRIM operations and potentially increased memory
usage.
Decreasing this value will have the opposite effect.
The default of
.Sy 32
was determined to be a reasonable compromise.
.
.It Sy zfs_txg_history Ns = Ns Sy 100 Pq uint
Historical statistics for this many latest TXGs will be available in
.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
.
.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
Flush dirty data to disk at least every this many seconds (maximum TXG
duration).
.
.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
Max vdev I/O aggregation size.
.
.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
Max vdev I/O aggregation size for non-rotating media.
.
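.Pp
The effect of the aggregation limits can be observed in the request size
histograms, which report aggregated I/O separately from individual I/O.
A sketch (pool name illustrative):
.Bd -literal -offset indent
# Print request-size histograms every 5 seconds.
zpool iostat -r tank 5
.Ed
.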
.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
immediately follows its predecessor on rotational vdevs.
.
.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
lacks locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this distance that are not immediately following the
previous operation are incremented by half.
.
.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
The maximum distance for the last queued I/O operation in which
the balancing algorithm considers an operation to have locality.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member on non-rotational vdevs
when I/O operations do not immediately follow one another.
.
.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
lacks locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this distance that are not immediately following the
previous operation are incremented by half.
.
.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
Aggregate read I/O operations if the on-disk gap between them is within this
threshold.
.
.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint
Aggregate write I/O operations if the on-disk gap between them is within this
threshold.
.
.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
Select the raidz parity implementation to use.
.Pp
Variants that don't depend on CPU-specific features
may be selected on module load, as they are supported on all systems.
The remaining options may only be set after the module is loaded,
as they are available only if the implementations are compiled in
and supported on the running system.
.Pp
Once the module is loaded,
.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
will show the available options,
with the currently selected one enclosed in square brackets.
.Pp
.TS
lb l l .
fastest	selected by built-in benchmark
original	original implementation
scalar	scalar implementation
sse2	SSE2 instruction set	64-bit x86
ssse3	SSSE3 instruction set	64-bit x86
avx2	AVX2 instruction set	64-bit x86
avx512f	AVX512F instruction set	64-bit x86
avx512bw	AVX512F & AVX512BW instruction sets	64-bit x86
aarch64_neon	NEON	Aarch64/64-bit ARMv8
aarch64_neonx2	NEON with more unrolling	Aarch64/64-bit ARMv8
powerpc_altivec	Altivec	PowerPC
.TE
.
.It Sy zfs_vdev_scheduler Pq charp
.Sy DEPRECATED .
Prints a warning to the kernel log for compatibility.
.
.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint
Max event queue length.
Events in the queue can be viewed with
.Xr zpool-events 8 .
.
.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
Maximum recent zevent records to retain for duplicate checking.
Setting this to
.Sy 0
disables duplicate detection.
.
.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
Lifespan for a recent ereport that was retained for duplicate checking.
.
.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
The maximum number of taskq entries that are allowed to be cached.
When this limit is exceeded, transaction records (itxs)
will be cleaned synchronously.
.
.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
The number of taskq entries that are pre-populated when the taskq is first
created and are immediately available for use.
.
.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
This controls the number of threads used by
.Sy dp_zil_clean_taskq .
The default value of
.Sy 100%
will create a maximum of one thread per CPU.
.
.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
This sets the maximum block size used by the ZIL.
On very fragmented pools, lowering this
.Pq typically to Sy 36 KiB
can improve performance.
.
.It Sy zil_maxcopied Ns = Ns Sy 7680 Ns B Po 7.5 KiB Pc Pq uint
This sets the maximum number of write bytes logged via WR_COPIED.
It tunes a trade-off between additional memory copying, and possibly worse
log space efficiency, versus additional range lock/unlock operations.
.
.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable the cache flush commands that are normally sent to disk by
the ZIL after an LWB write has completed.
Setting this will cause ZIL corruption on power loss
if a volatile out-of-order write cache is enabled.
.
.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disable intent logging replay.
Replay can be disabled in order to recover from a corrupted ZIL.
.
.It Sy zil_slog_bulk Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
Limit SLOG write size per commit executed with synchronous priority.
Any writes above that will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
.
.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int
Setting this tunable to zero disables ZIL logging of new
.Sy xattr Ns = Ns Sy sa
records if the
.Sy org.openzfs:zilsaxattr
feature is enabled on the pool.
This would only be necessary to work around bugs in the ZIL logging or replay
code for this record type.
The tunable has no effect if the feature is disabled.
.
.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint
Usually, one metaslab from each normal-class vdev is dedicated for use by
the ZIL to log synchronous writes.
However, if there are fewer than
.Sy zfs_embedded_slog_min_ms
metaslabs in the vdev, this functionality is disabled.
This ensures that we don't set aside an unreasonable amount of space for the
ZIL.
.
.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint
Whether the heuristic for detection of incompressible data with zstd levels
>= 3, using LZ4 and zstd-1 passes, is enabled.
.
.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint
Minimal uncompressed size (inclusive) of a record before the early abort
heuristic will be attempted.
.
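.Pp
As a sketch, the early-abort heuristic can be disabled, or its size
threshold raised, at runtime (Linux paths assumed, values illustrative):
.Bd -literal -offset indent
echo 0 > /sys/module/zfs/parameters/zstd_earlyabort_pass
echo 262144 > /sys/module/zfs/parameters/zstd_abort_size
.Ed
.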
.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
If non-zero, the zio deadman will produce debugging messages
.Pq see Sy zfs_dbgmsg_enable
for all zios, rather than only for leaf zios possessing a vdev.
This is meant to be used by developers to gain
diagnostic information for hang conditions which don't involve a mutex
or other locking primitive: typically conditions in which a thread in
the zio pipeline is looping indefinitely.
.
.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
When an I/O operation takes more than this much time to complete,
it's marked as slow.
Each slow operation causes a delay zevent.
Slow I/O counters can be seen with
.Nm zpool Cm status Fl s .
.
.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Throttle block allocations in the I/O pipeline.
This allows for dynamic allocation distribution based on device performance.
.
.It Sy zfs_xattr_compat Ns = Ns Sy 0 Ns | Ns 1 Pq int
Control the naming scheme used when setting new xattrs in the user namespace.
If
.Sy 0
.Pq the default on Linux ,
user namespace xattr names are prefixed with the namespace, to be backwards
compatible with previous versions of ZFS on Linux.
If
.Sy 1
.Pq the default on Fx ,
user namespace xattr names are not prefixed, to be backwards compatible with
previous versions of ZFS on illumos and
.Fx .
.Pp
Either naming scheme can be read on this and future versions of ZFS, regardless
of this tunable, but legacy ZFS on illumos or
.Fx
is unable to read user namespace xattrs written in the Linux format, and
legacy versions of ZFS on Linux are unable to read user namespace xattrs written
in the legacy ZFS format.
.Pp
An existing xattr with the alternate naming scheme is removed when overwriting
the xattr so as to not accumulate duplicates.
.
.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prioritize requeued I/O.
.
.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
Percentage of online CPUs which will run a worker thread for I/O.
These workers are responsible for I/O work such as compression, encryption,
checksum and parity calculations.
A fractional number of CPUs will be rounded down.
.Pp
The default value of
.Sy 80%
was chosen to avoid using all CPUs, which can result in
latency issues and inconsistent application performance,
especially when slower compression and/or checksumming is enabled.
A set value only applies to pools imported or created afterwards.
.
.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
Number of worker threads per taskq.
Higher values improve I/O ordering and CPU utilization,
while lower values reduce lock contention.
.Pp
If
.Sy 0 ,
generate a system-dependent value close to 6 threads per taskq.
A set value only applies to pools imported or created afterwards.
.
.It Sy zio_taskq_write_tpq Ns = Ns Sy 16 Pq uint
Determines the minimum number of threads per write issue taskq.
Higher values improve CPU utilization at high throughput,
while lower values reduce taskq lock contention at high IOPS.
A set value only applies to pools imported or created afterwards.
.
.It Sy zio_taskq_read Ns = Ns Sy fixed,1,8 null scale null Pq charp
Set the queue and thread configuration for the IO read queues.
This is an advanced debugging parameter.
Don't change this unless you understand what it does.
Set values only apply to pools imported or created afterwards.
.
.It Sy zio_taskq_write Ns = Ns Sy sync null scale null Pq charp
Set the queue and thread configuration for the IO write queues.
This is an advanced debugging parameter.
Don't change this unless you understand what it does.
Set values only apply to pools imported or created afterwards.
.
.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Do not create zvol device nodes.
This may slightly improve startup time on
systems with a very large number of zvols.
.
.It Sy zvol_major Ns = Ns Sy 230 Pq uint
Major number for zvol block devices.
.
.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
Discard (TRIM) operations done on zvols will be done in batches of this
many blocks, where block size is determined by the
.Sy volblocksize
property of a zvol.
.
.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
When adding a zvol to the system, prefetch this many bytes
from the start and end of the volume.
Prefetching these regions of the volume is desirable,
because they are likely to be accessed immediately by
.Xr blkid 8
or the kernel partitioner.
.
.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When processing I/O requests for a zvol, submit them synchronously.
This effectively limits the queue depth to
.Em 1
for each I/O submitter.
When unset, requests are handled asynchronously by a thread pool.
The number of requests which can be handled concurrently is controlled by
.Sy zvol_threads .
.Sy zvol_request_sync
is ignored when running on a kernel that supports block multiqueue
.Pq Li blk-mq .
.
.It Sy zvol_num_taskqs Ns = Ns Sy 0 Pq uint
Number of zvol taskqs.
If
.Sy 0
(the default) then scaling is done internally to prefer 6 threads per taskq.
This only applies on Linux.
.
.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
The number of system-wide threads to use for processing zvol block IOs.
If
.Sy 0
(the default) then internally set
.Sy zvol_threads
to the number of CPUs present or 32 (whichever is greater).
.
.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
The number of threads per zvol to use for queuing IO requests.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only read and assigned to a zvol at zvol load time.
If
.Sy 0
(the default) then internally set
.Sy zvol_blk_mq_threads
to the number of CPUs present.
.
.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Set to
.Sy 1
to use the
.Li blk-mq
API for zvols.
Set to
.Sy 0
(the default) to use the legacy zvol APIs.
This setting can give better or worse zvol performance depending on
the workload.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only read and assigned to a zvol at zvol load time.
.
.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
If
.Sy zvol_use_blk_mq
is enabled, then process this number of
.Sy volblocksize Ns -sized blocks per zvol thread.
This tunable can be used to favor better performance for zvol reads (lower
values) or writes (higher values).
If set to
.Sy 0 ,
then the zvol layer will process the maximum number of blocks
per thread that it can.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only applied at each zvol's load time.
.
.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
The queue_depth value for the zvol
.Li blk-mq
interface.
This parameter will only appear if your kernel supports
.Li blk-mq
and is only applied at each zvol's load time.
If
.Sy 0
(the default) then use the kernel's default queue depth.
Values are clamped to the kernel's
.Dv BLKDEV_MIN_RQ
and
.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
limits.
.
.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
Defines zvol block device behavior when
.Sy volmode Ns = Ns Sy default :
.Bl -tag -compact -offset 4n -width "a"
.It Sy 1
.No equivalent to Sy full
.It Sy 2
.No equivalent to Sy dev
.It Sy 3
.No equivalent to Sy none
.El
.
.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint
Enable strict ZVOL quota enforcement.
Strict quota enforcement may have a performance impact.
.El
.
.Sh ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O requests.
The scheduler determines when and in what order those operations are issued.
The scheduler divides operations into five I/O classes,
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver.
Each queue defines the minimum and maximum number of concurrent operations
that may be issued to the device.
In addition, the device has an aggregate maximum,
.Sy zfs_vdev_max_active .
Note that the sum of the per-queue minima must not exceed the aggregate maximum.
If the sum of the per-queue maxima exceeds the aggregate maximum,
then the number of active operations may reach
.Sy zfs_vdev_max_active ,
in which case no further operations will be issued,
regardless of whether all per-queue minima have been met.
.Pp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers.
Furthermore, physical devices typically have a limit
at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.Pp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied.
Once all are satisfied and the aggregate maximum has not been hit,
the scheduler looks for classes whose maximum has not been satisfied.
Iteration through the I/O classes is done in the order specified above.
No further operations are issued
if the aggregate maximum number of concurrent operations has been hit,
or if there are no operations queued for an I/O class that has not hit its
maximum.
Every time an I/O operation is queued or an operation completes,
the scheduler looks for new operations to issue.
.Pp
In general, smaller
.Sy max_active Ns s
will lead to lower latency of synchronous operations.
Larger
.Sy max_active Ns s
may lead to higher overall throughput, depending on underlying storage.
.Pp
The ratio of the queues'
.Sy max_active Ns s
determines the balance of performance between reads, writes, and scrubs.
.Pp
In general, smaller
.Sy max_active Ns s
will lead to lower latency of synchronous operations.
Larger
.Sy max_active Ns s
may lead to higher overall throughput, depending on underlying storage.
.Pp
The ratio of the queues'
.Sy max_active Ns s
determines the balance of performance between reads, writes, and scrubs.
For example, increasing
.Sy zfs_vdev_scrub_max_active
will cause the scrub or resilver to complete more quickly,
but cause reads and writes to have higher latency and lower throughput.
.Pp
All I/O classes have a fixed maximum number of outstanding operations,
except for the async write class.
Asynchronous writes represent the data that is committed to stable storage
during the syncing stage of transaction groups.
Transaction groups enter the syncing state periodically,
so the number of queued async writes will quickly burst up
and then bleed down to zero.
Rather than servicing them as quickly as possible,
the I/O scheduler changes the maximum number of active async write operations
according to the amount of dirty data in the pool.
Since both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of simultaneous operations also stabilizes the
response time of operations from other queues, in particular synchronous ones.
In broad strokes, the I/O scheduler will issue more concurrent operations
from the async write queue as there is more dirty data in the pool.
.
.Ss Async Writes
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points:
.Bd -literal
        |                o---------|  <-- \fBzfs_vdev_async_write_max_active\fP
   ^    |               /^         |
   |    |              / |         |
 active |             /  |         |
  I/O   |            /   |         |
 count  |           /    |         |
        |          /     |         |
        |---------o      |         |  <-- \fBzfs_vdev_async_write_min_active\fP
       0|_________^______|_________|
        0%        |      |       100% of \fBzfs_dirty_data_max\fP
                  |      |
                  |      `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
                  `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
.Ed
.Pp
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum.
As that threshold is crossed, the number of concurrent operations issued
increases linearly to the maximum at the specified maximum percentage
of the dirty data allowed in the pool.
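.Pp
The sloped region is a simple linear interpolation.
The following self-contained C sketch evaluates it;
the thresholds and operation counts are illustrative assumptions,
not necessarily the module's defaults:
.Bd -literal
#include <stdio.h>

/*
 * Piece-wise linear scaling of the async write maximum, as in the
 * figure above.  All values are illustrative.
 */
static int
async_write_max_active(unsigned long long dirty,
    unsigned long long dirty_max, unsigned int min_pct, unsigned int max_pct,
    int min_active, int max_active)
{
        unsigned long long pct = dirty * 100 / dirty_max;

        if (pct <= min_pct)
                return (min_active);
        if (pct >= max_pct)
                return (max_active);
        /* Linear interpolation across the sloped region. */
        return (min_active + (int)((max_active - min_active) *
            (pct - min_pct) / (max_pct - min_pct)));
}

int
main(void)
{
        unsigned int pct;

        /* Slope between 30% and 60% dirty, from 2 to 10 operations. */
        for (pct = 0; pct <= 100; pct += 15)
                printf("%3u%% dirty -> %2d concurrent async writes\en",
                    pct, async_write_max_active(pct, 100, 30, 60, 2, 10));
        return (0);
}
.Ed
.Pp
With these values, 45% dirty data maps to 6 concurrent async write
operations, halfway up the slope.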
.Pp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between
.Sy zfs_vdev_async_write_active_min_dirty_percent
and
.Sy zfs_vdev_async_write_active_max_dirty_percent .
If it exceeds the maximum percentage,
this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle.
In this case, we must further throttle incoming writes,
as described in the next section.
.
.Sh ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.Pp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting.
This way the calculated delay time
is independent of the number of threads concurrently executing transactions.
.Pp
If we are the only waiter, wait relative to when the transaction started,
rather than the current time.
This credits the transaction for "time already served",
e.g. reading indirect blocks.
.Pp
The minimum time for a transaction to take is calculated as
.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
.Pp
The delay has two degrees of freedom that can be adjusted via tunables.
The percentage of dirty data at which we start to delay is defined by
.Sy zfs_delay_min_dirty_percent .
This should typically be at or above
.Sy zfs_vdev_async_write_active_max_dirty_percent ,
so that we only start to delay after writing at full speed
has failed to keep up with the incoming write rate.
The scale of the curve is defined by
.Sy zfs_delay_scale .
Roughly speaking, this variable determines the amount of delay at the midpoint
of the curve.
.Bd -literal
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                            * +
      |                                                           *  |
  4ms +                                                           *  +
      |                                                           *  |
  3ms +                                                          *   +
      |                                                          *   |
  2ms +                                              (midpoint)  *   +
      |                                                    |    **   |
  1ms +                                                    v ***     +
      |             \fBzfs_delay_scale\fP ---------->    ********          |
    0 +-------------------------------------*********----------------+
      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
.Ed
.Pp
Note that, since the delay is added to the outstanding time remaining on the
most recent transaction, it is effectively the inverse of IOPS.
Here, the midpoint of
.Em 500 us
translates to
.Em 2000 IOPS .
The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first three quarters of the curve yield relatively small differences
in the amount of delay.
.Pp
The effects can be easier to understand when the amount of delay is
represented on a logarithmic scale:
.Bd -literal
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                           **  +
      |                                         (midpoint)        ** |
      +                                                    |     **  +
  1ms +                                                    v ****    +
      +             \fBzfs_delay_scale\fP ---------->     *****           +
      |                                         ****                 |
      +                                      ****                    +
100us +                                    **                        +
      +                                   *                          +
      |                                  *                           |
      +                                 *                            +
 10us +                                *                             +
      +                                                              +
      |                                                              |
      +                                                              +
      +--------------------------------------------------------------+
      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
.Ed
.Pp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly.
The goal of a properly tuned system should be to keep the amount of dirty data
out of that range by first ensuring that the appropriate limits are set
for the I/O scheduler to reach optimal throughput on the back-end storage,
and then by changing the value of
.Sy zfs_delay_scale
to increase the steepness of the curve.
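.Pp
As a worked example, the following self-contained C sketch evaluates the
delay curve above for a few dirty-data levels.
The 4 GiB
.Sy zfs_dirty_data_max ,
the 60% value for
.Sy zfs_delay_min_dirty_percent ,
and the 500000 ns scale are illustrative assumptions:
.Bd -literal
#include <stdio.h>

/*
 * min_time = min(zfs_delay_scale * (dirty - min) / (max - dirty), 100 ms),
 * in nanoseconds; "min" is the dirty-data level where delaying begins
 * and "max" is zfs_dirty_data_max.  All values are illustrative.
 */
static unsigned long long
tx_delay_ns(unsigned long long dirty, unsigned long long dirty_max,
    unsigned int min_dirty_pct, unsigned long long delay_scale)
{
        unsigned long long min = dirty_max * min_dirty_pct / 100;
        unsigned long long d;

        if (dirty <= min)
                return (0);
        d = delay_scale * (dirty - min) / (dirty_max - dirty);
        return (d < 100000000ULL ? d : 100000000ULL);   /* 100 ms cap */
}

int
main(void)
{
        unsigned long long dirty_max = 4ULL << 30;      /* 4 GiB */
        unsigned int pct;

        /* Delay begins at 60% dirty; zfs_delay_scale is 500000 ns. */
        for (pct = 60; pct < 100; pct += 5)
                printf("%3u%% dirty -> %6llu us\en", pct,
                    tx_delay_ns(dirty_max * pct / 100, dirty_max,
                    60, 500000) / 1000);
        return (0);
}
.Ed
.Pp
With these values the midpoint of the curve (80% dirty) evaluates to
.Em 500 us ,
matching the plots above.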