1.\"
2.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
4.\" Copyright (c) 2019 Datto Inc.
5.\" Copyright (c) 2023, 2024 Klara, Inc.
6.\" The contents of this file are subject to the terms of the Common Development
7.\" and Distribution License (the "License").  You may not use this file except
8.\" in compliance with the License. You can obtain a copy of the license at
9.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
10.\"
11.\" See the License for the specific language governing permissions and
12.\" limitations under the License. When distributing Covered Code, include this
13.\" CDDL HEADER in each file and include the License file at
14.\" usr/src/OPENSOLARIS.LICENSE.  If applicable, add the following below this
15.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
16.\" own identifying information:
17.\" Portions Copyright [yyyy] [name of copyright owner]
18.\"
19.Dd June 27, 2024
20.Dt ZFS 4
21.Os
22.
23.Sh NAME
24.Nm zfs
25.Nd tuning of the ZFS kernel module
26.
27.Sh DESCRIPTION
The ZFS module supports these parameters:
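.Pp
As an illustrative sketch only (the exact mechanism is platform-specific), on
Linux these parameters are typically exposed through the module parameter
interface, so a tunable can usually be inspected or changed at runtime along
these lines:
.Bd -literal -compact
# cat /sys/module/zfs/parameters/dbuf_cache_max_bytes
# echo 134217728 > /sys/module/zfs/parameters/dbuf_cache_max_bytes
.Ed
.Pp
Not every parameter may be changed on a running system.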
29.Bl -tag -width Ds
30.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
Maximum size in bytes of the dbuf cache.
The target size is the smaller of this value and
.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
of the target ARC size.
35The behavior of the dbuf cache and its associated settings
36can be observed via the
37.Pa /proc/spl/kstat/zfs/dbufstats
38kstat.
39.
40.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
Maximum size in bytes of the metadata dbuf cache.
The target size is the smaller of this value and
.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
of the target ARC size.
45The behavior of the metadata dbuf cache and its associated settings
46can be observed via the
47.Pa /proc/spl/kstat/zfs/dbufstats
48kstat.
49.
50.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint
51The percentage over
52.Sy dbuf_cache_max_bytes
53when dbufs must be evicted directly.
54.
55.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint
56The percentage below
57.Sy dbuf_cache_max_bytes
58when the evict thread stops evicting dbufs.
59.
60.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint
61Set the size of the dbuf cache
62.Pq Sy dbuf_cache_max_bytes
63to a log2 fraction of the target ARC size.
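.Pp
As a worked example (illustrative only) of how this shift interacts with
.Sy dbuf_cache_max_bytes ,
with the default shift of 5 and a target ARC size of 4 GiB:
.Bd -literal -compact
target = MIN(dbuf_cache_max_bytes, arc_target / 2^dbuf_cache_shift)
       = MIN(UINT64_MAX, 4 GiB / 32)
       = 128 MiB
.Ed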
64.
65.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint
66Set the size of the dbuf metadata cache
67.Pq Sy dbuf_metadata_cache_max_bytes
68to a log2 fraction of the target ARC size.
69.
70.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint
71Set the size of the mutex array for the dbuf cache.
72When set to
73.Sy 0
74the array is dynamically sized based on total system memory.
75.
76.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint
77dnode slots allocated in a single operation as a power of 2.
78The default value minimizes lock contention for the bulk operation performed.
79.
80.It Sy dmu_ddt_copies Ns = Ns Sy 3 Pq uint
81Controls the number of copies stored for DeDup Table
82.Pq DDT
83objects.
84Reducing the number of copies to 1 from the previous default of 3
85can reduce the write inflation caused by deduplication.
86This assumes redundancy for this data is provided by the vdev layer.
87If the DDT is damaged, space may be leaked
88.Pq not freed
89when the DDT can not report the correct reference count.
90.
91.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Limit the amount we can prefetch with one call to this many bytes.
93This helps to limit the amount of memory that can be used by prefetching.
94.
95.It Sy ignore_hole_birth Pq int
96Alias for
97.Sy send_holes_without_birth_time .
98.
99.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
100Turbo L2ARC warm-up.
101When the L2ARC is cold the fill interval will be set as fast as possible.
102.
103.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
104Min feed interval in milliseconds.
105Requires
106.Sy l2arc_feed_again Ns = Ns Ar 1
107and only applicable in related situations.
108.
109.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
110Seconds between L2ARC writing.
111.
112.It Sy l2arc_headroom Ns = Ns Sy 8 Pq u64
113How far through the ARC lists to search for L2ARC cacheable content,
114expressed as a multiplier of
115.Sy l2arc_write_max .
116ARC persistence across reboots can be achieved with persistent L2ARC
117by setting this parameter to
118.Sy 0 ,
119allowing the full length of ARC lists to be searched for cacheable content.
120.
121.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
122Scales
123.Sy l2arc_headroom
124by this percentage when L2ARC contents are being successfully compressed
125before writing.
126A value of
127.Sy 100
128disables this feature.
129.
130.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
131Controls whether buffers present on special vdevs are eligible for caching
132into L2ARC.
133If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
134.
135.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Ns | Ns 2 Pq int
136Controls whether only MFU metadata and data are cached from ARC into L2ARC.
137This may be desired to avoid wasting space on L2ARC when reading/writing large
138amounts of data that are not expected to be accessed more than once.
139.Pp
140The default is 0,
141meaning both MRU and MFU data and metadata are cached.
142When turning off this feature (setting it to 0), some MRU buffers will
143still be present in ARC and eventually cached on L2ARC.
144.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
145some prefetched buffers will be cached to L2ARC, and those might later
146transition to MRU, in which case the
147.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
148.Pp
149Setting it to 1 means to L2 cache only MFU data and metadata.
150.Pp
Setting it to 2 means to L2 cache all metadata (MRU+MFU) but
only MFU data (i.e. MRU data are not cached).
This can be the right setting to cache as much metadata as possible
even when the data turnover is high.
154.Pp
155Regardless of
156.Sy l2arc_noprefetch ,
157some MFU buffers might be evicted from ARC,
158accessed later on as prefetches and transition to MRU as prefetches.
159If accessed again they are counted as MRU and the
160.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
161.Pp
162The ARC status of L2ARC buffers when they were first cached in
163L2ARC can be seen in the
164.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
165arcstats when importing the pool or onlining a cache
166device if persistent L2ARC is enabled.
167.Pp
168The
169.Sy evict_l2_eligible_mru
170arcstat does not take into account if this option is enabled as the information
171provided by the
172.Sy evict_l2_eligible_m[rf]u
173arcstats can be used to decide if toggling this option is appropriate
174for the current workload.
175.
176.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
177Percent of ARC size allowed for L2ARC-only headers.
178Since L2ARC buffers are not evicted on memory pressure,
179too many headers on a system with an irrationally large L2ARC
180can render it slow or unusable.
181This parameter limits L2ARC writes and rebuilds to achieve the target.
182.
183.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
184Trims ahead of the current write size
185.Pq Sy l2arc_write_max
186on L2ARC devices by this percentage of write size if we have filled the device.
187If set to
188.Sy 100
189we TRIM twice the space required to accommodate upcoming writes.
190A minimum of
191.Sy 64 MiB
192will be trimmed.
193It also enables TRIM of the whole L2ARC device upon creation
194or addition to an existing pool or if the header of the device is
195invalid upon importing a pool or onlining a cache device.
196A value of
197.Sy 0
198disables TRIM on L2ARC altogether and is the default as it can put significant
199stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
201.
202.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
203Do not write buffers to L2ARC if they were prefetched but not used by
204applications.
205In case there are prefetched buffers in L2ARC and this option
206is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
209This may be beneficial in case the L2ARC device is significantly faster
210in sequential reads than the disks of the pool.
211.Pp
212Use
213.Sy 1
214to disable and
215.Sy 0
216to enable caching/reading prefetches to/from L2ARC.
217.
218.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
219No reads during writes.
220.
221.It Sy l2arc_write_boost Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
222Cold L2ARC devices will have
223.Sy l2arc_write_max
224increased by this amount while they remain cold.
225.
226.It Sy l2arc_write_max Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq u64
227Max write bytes per interval.
228.
229.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
230Rebuild the L2ARC when importing a pool (persistent L2ARC).
231This can be disabled if there are problems importing a pool
232or attaching an L2ARC device (e.g. the L2ARC device is slow
233in reading stored log metadata, or the metadata
234has become somehow fragmented/unusable).
235.
236.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
238The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
239.Pp
240For L2ARC devices less than 1 GiB, the amount of data
241.Fn l2arc_evict
242evicts is significant compared to the amount of restored L2ARC data.
243In this case, do not write log blocks in L2ARC in order not to waste space.
244.
245.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
246Metaslab granularity, in bytes.
247This is roughly similar to what would be referred to as the "stripe size"
248in traditional RAID arrays.
249In normal operation, ZFS will try to write this amount of data to each disk
250before moving on to the next top-level vdev.
251.
252.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
253Enable metaslab group biasing based on their vdevs' over- or under-utilization
254relative to the pool.
255.
256.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
257Make some blocks above a certain size be gang blocks.
258This option is used by the test suite to facilitate testing.
259.
260.It Sy metaslab_force_ganging_pct Ns = Ns Sy 3 Ns % Pq uint
261For blocks that could be forced to be a gang block (due to
262.Sy metaslab_force_ganging ) ,
force this percentage of them to be gang blocks.
264.
265.It Sy brt_zap_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
266Controls prefetching BRT records for blocks which are going to be cloned.
267.
268.It Sy brt_zap_default_bs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
269Default BRT ZAP data block size as a power of 2. Note that changing this after
270creating a BRT on the pool will not affect existing BRTs, only newly created
271ones.
272.
273.It Sy brt_zap_default_ibs Ns = Ns Sy 12 Po 4 KiB Pc Pq int
274Default BRT ZAP indirect block size as a power of 2. Note that changing this
275after creating a BRT on the pool will not affect existing BRTs, only newly
276created ones.
277.
278.It Sy ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
279Default DDT ZAP data block size as a power of 2. Note that changing this after
280creating a DDT on the pool will not affect existing DDTs, only newly created
281ones.
282.
283.It Sy ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
284Default DDT ZAP indirect block size as a power of 2. Note that changing this
285after creating a DDT on the pool will not affect existing DDTs, only newly
286created ones.
287.
288.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
289Default dnode block size as a power of 2.
290.
291.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
292Default dnode indirect block size as a power of 2.
293.
294.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
295When attempting to log an output nvlist of an ioctl in the on-disk history,
296the output will not be stored if it is larger than this size (in bytes).
297This must be less than
298.Sy DMU_MAX_ACCESS Pq 64 MiB .
299This applies primarily to
300.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
301.
302.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
303Prevent log spacemaps from being destroyed during pool exports and destroys.
304.
305.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
306Enable/disable segment-based metaslab selection.
307.
308.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
309When using segment-based metaslab selection, continue allocating
310from the active metaslab until this option's
311worth of buckets have been exhausted.
312.
313.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
314Load all metaslabs during pool import.
315.
316.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
317Prevent metaslabs from being unloaded.
318.
319.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
320Enable use of the fragmentation metric in computing metaslab weights.
321.
322.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
323Maximum distance to search forward from the last offset.
324Without this limit, fragmented pools can see
325.Em >100`000
326iterations and
327.Fn metaslab_block_picker
328becomes the performance limiting factor on high-performance storage.
329.Pp
330With the default setting of
331.Sy 16 MiB ,
332we typically see less than
333.Em 500
334iterations, even with very fragmented
335.Sy ashift Ns = Ns Sy 9
336pools.
337The maximum number of iterations possible is
338.Sy metaslab_df_max_search / 2^(ashift+1) .
339With the default setting of
340.Sy 16 MiB
341this is
342.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
343or
344.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
345.
346.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
347If not searching forward (due to
348.Sy metaslab_df_max_search , metaslab_df_free_pct ,
349.No or Sy metaslab_df_alloc_threshold ) ,
350this tunable controls which segment is used.
351If set, we will use the largest free segment.
352If unset, we will use a segment of at least the requested size.
353.
354.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
355When we unload a metaslab, we cache the size of the largest free chunk.
356We use that cached size to determine whether or not to load a metaslab
357for a given allocation.
358As more frees accumulate in that metaslab while it's unloaded,
359the cached max size becomes less and less accurate.
360After a number of seconds controlled by this tunable,
361we stop considering the cached max size and start
362considering only the histogram instead.
363.
364.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
365When we are loading a new metaslab, we check the amount of memory being used
366to store metaslab range trees.
367If it is over a threshold, we attempt to unload the least recently used metaslab
368to prevent the system from clogging all of its memory with range trees.
369This tunable sets the percentage of total system memory that is the threshold.
370.
371.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
372.Bl -item -compact
373.It
374If unset, we will first try normal allocation.
375.It
376If that fails then we will do a gang allocation.
377.It
378If that fails then we will do a "try hard" gang allocation.
379.It
380If that fails then we will have a multi-layer gang block.
381.El
382.Pp
383.Bl -item -compact
384.It
385If set, we will first try normal allocation.
386.It
387If that fails then we will do a "try hard" allocation.
388.It
389If that fails we will do a gang allocation.
390.It
391If that fails we will do a "try hard" gang allocation.
392.It
393If that fails then we will have a multi-layer gang block.
394.El
395.
396.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
397When not trying hard, we only consider this number of the best metaslabs.
398This improves performance, especially when there are many metaslabs per vdev
399and the allocation can't actually be satisfied
400(so we would otherwise iterate all metaslabs).
401.
402.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
403When a vdev is added, target this number of metaslabs per top-level vdev.
404.
405.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
406Default lower limit for metaslab size.
407.
408.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
409Default upper limit for metaslab size.
410.
411.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
412Maximum ashift used when optimizing for logical \[->] physical sector size on
413new
414top-level vdevs.
415May be increased up to
416.Sy ASHIFT_MAX Po 16 Pc ,
417but this may negatively impact pool space efficiency.
418.
419.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
420Minimum ashift used when creating new top-level vdevs.
421.
422.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
423Minimum number of metaslabs to create in a top-level vdev.
424.
425.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
426Skip label validation steps during pool import.
427Changing is not recommended unless you know what you're doing
428and are recovering a damaged label.
429.
430.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
431Practical upper limit of total metaslabs per top-level vdev.
432.
433.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
434Enable metaslab group preloading.
435.
436.It Sy metaslab_preload_limit Ns = Ns Sy 10 Pq uint
Maximum number of metaslabs per group to preload.
438.
439.It Sy metaslab_preload_pct Ns = Ns Sy 50 Pq uint
Percentage of CPUs to run a metaslab preload taskq.
441.
442.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
443Give more weight to metaslabs with lower LBAs,
444assuming they have greater bandwidth,
445as is typically the case on a modern constant angular velocity disk drive.
446.
447.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
448After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
449reduce unnecessary reloading.
450Note that both this many TXGs and
451.Sy metaslab_unload_delay_ms
452milliseconds must pass before unloading will occur.
453.
454.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
455After a metaslab is used, we keep it loaded for this many milliseconds,
456to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
458.Sy metaslab_unload_delay
459TXGs must pass before unloading will occur.
460.
464.It Sy raidz_expand_max_copy_bytes Ns = Ns Sy 160MB Pq ulong
465Max amount of memory to use for RAID-Z expansion I/O.
466This limits how much I/O can be outstanding at once.
467.
468.It Sy raidz_expand_max_reflow_bytes Ns = Ns Sy 0 Pq ulong
469For testing, pause RAID-Z expansion when reflow amount reaches this value.
470.
471.It Sy raidz_io_aggregate_rows Ns = Ns Sy 4 Pq ulong
472For expanded RAID-Z, aggregate reads that have more rows than this.
473.
474.It Sy reference_history Ns = Ns Sy 3 Pq int
475Maximum reference holders being tracked when reference_tracking_enable is
476active.
477.
478.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
479Track reference holders to
480.Sy refcount_t
481objects (debug builds only).
482.
483.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
484When set, the
485.Sy hole_birth
486optimization will not be used, and all holes will always be sent during a
487.Nm zfs Cm send .
488This is useful if you suspect your datasets are affected by a bug in
489.Sy hole_birth .
490.
491.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
492SPA config file.
493.
494.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
495Multiplication factor used to estimate actual disk consumption from the
496size of data being written.
497The default value is a worst case estimate,
498but lower values may be valid for a given pool depending on its configuration.
499Pool administrators who understand the factors involved
500may wish to specify a more realistic inflation factor,
501particularly if they operate close to quota or capacity limits.
502.
503.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
504Whether to print the vdev tree in the debugging message buffer during pool
505import.
506.
507.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
508Whether to traverse data blocks during an "extreme rewind"
509.Pq Fl X
510import.
511.Pp
512An extreme rewind import normally performs a full traversal of all
513blocks in the pool for verification.
514If this parameter is unset, the traversal skips non-metadata blocks.
515It can be toggled once the
516import has started to stop or start the traversal of non-metadata blocks.
517.
518.It Sy spa_load_verify_metadata  Ns = Ns Sy 1 Ns | Ns 0 Pq int
519Whether to traverse blocks during an "extreme rewind"
520.Pq Fl X
521pool import.
522.Pp
523An extreme rewind import normally performs a full traversal of all
524blocks in the pool for verification.
525If this parameter is unset, the traversal is not performed.
526It can be toggled once the import has started to stop or start the traversal.
527.
528.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
529Sets the maximum number of bytes to consume during pool import to the log2
530fraction of the target ARC size.
531.
532.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
533Normally, we don't allow the last
534.Sy 3.2% Pq Sy 1/2^spa_slop_shift
535of space in the pool to be consumed.
536This ensures that we don't run the pool completely out of space,
537due to unaccounted changes (e.g. to the MOS).
538It also limits the worst-case time to allocate space.
539If we have less than this amount of free space,
540most ZPL operations (e.g. write, create) will return
541.Sy ENOSPC .
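.Pp
As an illustrative example of the rule above, with the default
.Sy spa_slop_shift Ns = Ns Sy 5 ,
a 1 TiB pool reserves approximately:
.Bd -literal -compact
slop = pool_size / 2^spa_slop_shift = 1 TiB / 32 = 32 GiB
.Ed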
542.
543.It Sy spa_num_allocators Ns = Ns Sy 4 Pq int
Determines the number of block allocators to use per spa instance.
545Capped by the number of actual CPUs in the system via
546.Sy spa_cpus_per_allocator .
547.Pp
Note that setting this value too high could result in performance
degradation and/or excess fragmentation.
The new value only applies to pools imported or created after the change.
551.
552.It Sy spa_cpus_per_allocator Ns = Ns Sy 4 Pq int
Determines the minimum number of CPUs in a system per block allocator
per spa instance.
The new value only applies to pools imported or created after the change.
556.
557.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
558Limits the number of on-disk error log entries that will be converted to the
559new format when enabling the
560.Sy head_errlog
561feature.
562The default is to convert all log entries.
563.
564.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
565During top-level vdev removal, chunks of data are copied from the vdev
566which may include free space in order to trade bandwidth for IOPS.
567This parameter determines the maximum span of free space, in bytes,
568which will be included as "unnecessary" data in a chunk of copied data.
569.Pp
570The default value here was chosen to align with
571.Sy zfs_vdev_read_gap_limit ,
572which is a similar concept when doing
573regular reads (but there's no reason it has to be the same).
574.
575.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
576Logical ashift for file-based devices.
577.
578.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
579Physical ashift for file-based devices.
580.
581.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
582If set, when we start iterating over a ZAP object,
583prefetch the entire object (all leaf blocks).
584However, this is limited by
585.Sy dmu_prefetch_max .
586.
587.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
588Maximum micro ZAP size.
589A micro ZAP is upgraded to a fat ZAP, once it grows beyond the specified size.
590.
591.It Sy zap_shrink_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
592If set, adjacent empty ZAP blocks will be collapsed, reducing disk space.
593.
594.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
595Min bytes to prefetch per stream.
596Prefetch distance starts from the demand access size and quickly grows to
597this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
issued since the last time have not completed in time to satisfy the demand
request, i.e. the prefetch depth did not cover the read latency or the pool
got saturated.
601.
602.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
603Max bytes to prefetch per stream.
604.
605.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
606Max bytes to prefetch indirects for per stream.
607.
608.It Sy zfetch_max_reorder Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
609Requests within this byte distance from the current prefetch stream position
610are considered parts of the stream, reordered due to parallel processing.
Such requests do not advance the stream position immediately unless the
.Sy zfetch_hole_shift
fill threshold is reached, but are saved to fill holes in the stream later.
614.
615.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
616Max number of streams per zfetch (prefetch streams per file).
617.
618.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time in seconds before an inactive prefetch stream can be reclaimed.
620.
621.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time in seconds before an inactive prefetch stream can be deleted.
623.
624.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables the ARC to use scatter/gather lists.
When disabled, all allocations are forced to be linear in kernel memory.
627Disabling can improve performance in some code paths
628at the expense of fragmented kernel memory.
629.
630.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
631Maximum number of consecutive memory pages allocated in a single block for
632scatter/gather lists.
633.Pp
634The value of
635.Sy MAX_ORDER
636depends on kernel configuration.
637.
638.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
639This is the minimum allocation size that will use scatter (page-based) ABDs.
640Smaller allocations will use linear ABDs.
641.
642.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
643When the number of bytes consumed by dnodes in the ARC exceeds this number of
644bytes, try to unpin some of it in response to demand for non-metadata.
645This value acts as a ceiling to the amount of dnode metadata, and defaults to
646.Sy 0 ,
in which case a percentage based on
.Sy zfs_arc_dnode_limit_percent
of the ARC meta buffers may be used for dnodes.
.
650.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
651Percentage that can be consumed by dnodes of ARC meta buffers.
652.Pp
653See also
654.Sy zfs_arc_dnode_limit ,
655which serves a similar purpose but has a higher priority if nonzero.
656.
657.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
658Percentage of ARC dnodes to try to scan in response to demand for non-metadata
659when the number of bytes consumed by dnodes exceeds
660.Sy zfs_arc_dnode_limit .
661.
662.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
663The ARC's buffer hash table is sized based on the assumption of an average
664block size of this value.
665This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
666with 8-byte pointers.
667For configurations with a known larger average block size,
668this value can be increased to reduce the memory footprint.
669.
670.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
671When
672.Fn arc_is_overflowing ,
673.Fn arc_get_data_impl
674waits for this percent of the requested amount of data to be evicted.
675For example, by default, for every
676.Em 2 KiB
677that's evicted,
678.Em 1 KiB
679of it may be "reused" by a new allocation.
680Since this is above
681.Sy 100 Ns % ,
682it ensures that progress is made towards getting
683.Sy arc_size No under Sy arc_c .
684Since this is finite, it ensures that allocations can still happen,
685even during the potentially long time that
686.Sy arc_size No is more than Sy arc_c .
687.
688.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
690This batch-style operation prevents entire sub-lists from being evicted at once
691but comes at a cost of additional unlocking and locking.
692.
693.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a non-zero value, it will replace the
695.Sy arc_grow_retry
696value with this value.
697The
698.Sy arc_grow_retry
699.No value Pq default Sy 5 Ns s
700is the number of seconds the ARC will wait before
701trying to resume growth after a memory pressure event.
702.
703.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
704Throttle I/O when free system memory drops below this percentage of total
705system memory.
706Setting this value to
707.Sy 0
708will disable the throttle.
709.
710.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
711Max size of ARC in bytes.
712If
713.Sy 0 ,
714then the max size of ARC is determined by the amount of system memory installed.
715The larger of
716.Sy all_system_memory No \- Sy 1 GiB
717and
718.Sy 5/8 No \(mu Sy all_system_memory
719will be used as the limit.
720This value must be at least
721.Sy 67108864 Ns B Pq 64 MiB .
722.Pp
723This value can be changed dynamically, with some caveats.
724It cannot be set back to
725.Sy 0
726while running, and reducing it below the current ARC size will not cause
727the ARC to shrink without memory pressure to induce shrinking.
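.Pp
As a sketch of the dynamic behaviour described above (assuming the Linux
module parameter interface), the limit could be raised on a running system
with:
.Bd -literal -compact
# echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max   # 16 GiB
.Ed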
728.
729.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
730Balance between metadata and data on ghost hits.
731Values above 100 increase metadata caching by proportionally reducing effect
732of ghost data hits on target data/metadata rate.
733.
734.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
735Min size of ARC in bytes.
736.No If set to Sy 0 , arc_c_min
737will default to consuming the larger of
738.Sy 32 MiB
739and
740.Sy all_system_memory No / Sy 32 .
741.
742.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
743Minimum time prefetched blocks are locked in the ARC.
744.
745.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
746Minimum time "prescient prefetched" blocks are locked in the ARC.
747These blocks are meant to be prefetched fairly aggressively ahead of
748the code that may use them.
749.
750.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
751Number of arc_prune threads.
752.Fx
753does not need more than one.
Linux may theoretically use one per mount point, up to the number of CPUs,
but that has not been proven to be useful.
756.
757.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
758Number of missing top-level vdevs which will be allowed during
759pool import (only in read-only mode).
760.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
762Maximum size in bytes allowed to be passed as
763.Sy zc_nvlist_src_size
764for ioctls on
765.Pa /dev/zfs .
766This prevents a user from causing the kernel to allocate
767an excessive amount of memory.
768When the limit is exceeded, the ioctl fails with
769.Sy EINVAL
770and a description of the error is sent to the
771.Pa zfs-dbgmsg
772log.
773This parameter should not need to be touched under normal circumstances.
774If
775.Sy 0 ,
776equivalent to a quarter of the user-wired memory limit under
777.Fx
778and to
779.Sy 134217728 Ns B Pq 128 MiB
780under Linux.
781.
782.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
783To allow more fine-grained locking, each ARC state contains a series
784of lists for both data and metadata objects.
785Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
787and also applies to other uses of the multilist data structure.
788.Pp
789If
790.Sy 0 ,
791equivalent to the greater of the number of online CPUs and
792.Sy 4 .
793.
794.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
795The ARC size is considered to be overflowing if it exceeds the current
796ARC target size
797.Pq Sy arc_c
798by thresholds determined by this parameter.
799Exceeding by
800.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
801starts ARC reclamation process.
802If that appears insufficient, exceeding by
803.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
804blocks new buffer allocation until the reclaim thread catches up.
The started reclamation process continues until the ARC size returns below the
target size.
807.Pp
808The default value of
809.Sy 8
810causes the ARC to start reclamation if it exceeds the target size by
811.Em 0.2%
812of the target size, and block allocations by
813.Em 0.6% .
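.Pp
Illustrative derivation of those default thresholds:
.Bd -literal -compact
reclaim starts at:  (arc_c >> 8) / 2   = arc_c / 512   ~ 0.2% of arc_c
blocking starts at: (arc_c >> 8) * 1.5 = arc_c * 3/512 ~ 0.6% of arc_c
.Ed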
814.
815.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
816If nonzero, this will update
817.Sy arc_shrink_shift Pq default Sy 7
818with the new value.
819.
820.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
821Percent of pagecache to reclaim ARC to.
822.Pp
823This tunable allows the ZFS ARC to play more nicely
824with the kernel's LRU pagecache.
825It can guarantee that the ARC size won't collapse under scanning
826pressure on the pagecache, yet still allows the ARC to be reclaimed down to
827.Sy zfs_arc_min
828if necessary.
829This value is specified as percent of pagecache size (as measured by
830.Sy NR_FILE_PAGES ) ,
831where that percent may exceed
832.Sy 100 .
833This
834only operates during memory pressure/reclaim.
835.
836.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
837This is a limit on how many pages the ARC shrinker makes available for
838eviction in response to one page allocation attempt.
839Note that in practice, the kernel's shrinker can ask us to evict
840up to about four times this for one allocation attempt.
841To reduce OOM risk, this limit is applied for kswapd reclaims only.
842.Pp
843The default limit of
844.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
845limits the amount of time spent attempting to reclaim ARC memory to
846less than 100 ms per allocation attempt,
847even with a small average compressed block size of ~8 KiB.
848.Pp
849The parameter can be set to 0 (zero) to disable the limit,
850and only applies on Linux.
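.Pp
Illustrative arithmetic behind the figure quoted above, assuming 4 KiB pages:
.Bd -literal -compact
10000 pages * 4 KiB = 40 MiB per shrinker call
40 MiB * ~4 calls   ~ 160 MiB per allocation attempt
.Ed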
851.
852.It Sy zfs_arc_shrinker_seeks Ns = Ns Sy 2 Pq int
Relative cost of ARC eviction on Linux, i.e. the number of seeks needed to
restore an evicted page.
Bigger values make the ARC more precious and evictions smaller, compared to
other kernel subsystems.
A value of 4 means parity with the page cache.
858.
859.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
860The target number of bytes the ARC should leave as free memory on the system.
861If zero, equivalent to the bigger of
862.Sy 512 KiB No and Sy all_system_memory/64 .
863.
864.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
865Disable pool import at module load by ignoring the cache file
866.Pq Sy spa_config_path .
867.
868.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
869Rate limit checksum events to this many per second.
870Note that this should not be set below the ZED thresholds
871(currently 10 checksums over 10 seconds)
872or else the daemon may not trigger any action.
873.
874.It Sy zfs_commit_timeout_pct Ns = Ns Sy 10 Ns % Pq uint
875This controls the amount of time that a ZIL block (lwb) will remain "open"
876when it isn't "full", and it has a thread waiting for it to be committed to
877stable storage.
878The timeout is scaled based on a percentage of the last lwb
879latency to avoid significantly impacting the latency of each individual
880transaction record (itx).
881.
882.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
883Vdev indirection layer (used for device removal) sleeps for this many
884milliseconds during mapping generation.
885Intended for use with the test suite to throttle vdev removal speed.
886.
887.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
888Minimum percent of obsolete bytes in vdev mapping required to attempt to
889condense
890.Pq see Sy zfs_condense_indirect_vdevs_enable .
891Intended for use with the test suite
892to facilitate triggering condensing as needed.
893.
894.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
895Enable condensing indirect vdev mappings.
896When set, attempt to condense indirect vdev mappings
897if the mapping uses more than
898.Sy zfs_condense_min_mapping_bytes
899bytes of memory and if the obsolete space map object uses more than
900.Sy zfs_condense_max_obsolete_bytes
901bytes on-disk.
902The condensing process is an attempt to save memory by removing obsolete
903mappings.
904.
905.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
906Only attempt to condense indirect vdev mappings if the on-disk size
907of the obsolete space map object is greater than this number of bytes
908.Pq see Sy zfs_condense_indirect_vdevs_enable .
909.
910.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
911Minimum size vdev mapping to attempt to condense
912.Pq see Sy zfs_condense_indirect_vdevs_enable .
913.
914.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
915Internally ZFS keeps a small log to facilitate debugging.
916The log is enabled by default, and can be disabled by unsetting this option.
917The contents of the log can be accessed by reading
918.Pa /proc/spl/kstat/zfs/dbgmsg .
919Writing
920.Sy 0
921to the file clears the log.
922.Pp
923This setting does not influence debug prints due to
924.Sy zfs_flags .
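.Pp
For example, following the description above, the log can be read and then
cleared with:
.Bd -literal -compact
# cat /proc/spl/kstat/zfs/dbgmsg
# echo 0 > /proc/spl/kstat/zfs/dbgmsg
.Ed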
925.
926.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
927Maximum size of the internal ZFS debug log.
928.
929.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
930Historically used for controlling what reporting was available under
931.Pa /proc/spl/kstat/zfs .
932No effect.
933.
934.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
935Check time in milliseconds.
936This defines the frequency at which we check for hung I/O requests
937and potentially invoke the
938.Sy zfs_deadman_failmode
939behavior.
940.
941.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
942When a pool sync operation takes longer than
943.Sy zfs_deadman_synctime_ms ,
944or when an individual I/O operation takes longer than
945.Sy zfs_deadman_ziotime_ms ,
946then the operation is considered to be "hung".
947If
948.Sy zfs_deadman_enabled
949is set, then the deadman behavior is invoked as described by
950.Sy zfs_deadman_failmode .
951By default, the deadman is enabled and set to
952.Sy wait
953which results in "hung" I/O operations only being logged.
954The deadman is automatically disabled when a pool gets suspended.
955.
956.It Sy zfs_deadman_events_per_second Ns = Ns Sy 1 Ns /s Pq int
957Rate limit deadman zevents (which report hung I/O operations) to this many per
958second.
959.
960.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
961Controls the failure behavior when the deadman detects a "hung" I/O operation.
962Valid values are:
963.Bl -tag -compact -offset 4n -width "continue"
964.It Sy wait
965Wait for a "hung" operation to complete.
966For each "hung" operation a "deadman" event will be posted
967describing that operation.
968.It Sy continue
969Attempt to recover from a "hung" operation by re-dispatching it
970to the I/O pipeline if possible.
971.It Sy panic
972Panic the system.
973This can be used to facilitate automatic fail-over
974to a properly configured fail-over partner.
975.El
976.
977.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
978Interval in milliseconds after which the deadman is triggered and also
979the interval after which a pool sync operation is considered to be "hung".
980Once this limit is exceeded the deadman will be invoked every
981.Sy zfs_deadman_checktime_ms
982milliseconds until the pool sync completes.
983.
984.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
985Interval in milliseconds after which the deadman is triggered and an
986individual I/O operation is considered to be "hung".
987As long as the operation remains "hung",
988the deadman will be invoked every
989.Sy zfs_deadman_checktime_ms
990milliseconds until the operation completes.
991.
992.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
993Enable prefetching dedup-ed blocks which are going to be freed.
994.
995.It Sy zfs_dedup_log_flush_passes_max Ns = Ns Sy 8 Ns Pq uint
Maximum number of dedup log flush passes (iterations) per transaction.
997.Pp
998At the start of each transaction, OpenZFS will estimate how many entries it
999needs to flush out to keep up with the change rate, taking the amount and time
1000taken to flush on previous txgs into account (see
1001.Sy zfs_dedup_log_flush_flow_rate_txgs ) .
1002It will spread this amount into a number of passes.
1003At each pass, it will use the amount already flushed and the total time taken
1004by flushing and by other IO to recompute how much it should do for the remainder
1005of the txg.
1006.Pp
1007Reducing the max number of passes will make flushing more aggressive, flushing
1008out more entries on each pass.
1009This can be faster, but also more likely to compete with other IO.
1010Increasing the max number of passes will put fewer entries onto each pass,
1011keeping the overhead of dedup changes to a minimum but possibly causing a large
1012number of changes to be dumped on the last pass, which can blow out the txg
1013sync time beyond
1014.Sy zfs_txg_timeout .
1015.
1016.It Sy zfs_dedup_log_flush_min_time_ms Ns = Ns Sy 1000 Ns Pq uint
Minimum time in milliseconds to spend on dedup log flushing each transaction.
1018.Pp
1019At least this long will be spent flushing dedup log entries each transaction,
1020up to
1021.Sy zfs_txg_timeout .
This occurs even if doing so would delay the transaction, that is, even if all
other I/O completes in less time.
1024.
1025.It Sy zfs_dedup_log_flush_entries_min Ns = Ns Sy 1000 Ns Pq uint
1026Flush at least this many entries each transaction.
1027.Pp
1028OpenZFS will estimate how many entries it needs to flush each transaction to
1029keep up with the ingest rate (see
1030.Sy zfs_dedup_log_flush_flow_rate_txgs ) .
1031This sets the minimum for that estimate.
1032Raising it can force OpenZFS to flush more aggressively, keeping the log small
1033and so reducing pool import times, but can make it less able to back off if
1034log flushing would compete with other IO too much.
1035.
1036.It Sy zfs_dedup_log_flush_flow_rate_txgs Ns = Ns Sy 10 Ns Pq uint
1037Number of transactions to use to compute the flow rate.
1038.Pp
1039OpenZFS will estimate how many entries it needs to flush each transaction by
1040monitoring the number of entries changed (ingest rate), number of entries
1041flushed (flush rate) and time spent flushing (flush time rate) and combining
1042these into an overall "flow rate".
1043It will use an exponential weighted moving average over some number of recent
1044transactions to compute these rates.
1045This sets the number of transactions to compute these averages over.
1046Setting it higher can help to smooth out the flow rate in the face of spiky
1047workloads, but will take longer for the flow rate to adjust to a sustained
change in the ingest rate.
1049.
1050.It Sy zfs_dedup_log_txg_max Ns = Ns Sy 8 Ns Pq uint
Max transactions to accumulate before starting to flush dedup logs.
1052.Pp
1053OpenZFS maintains two dedup logs, one receiving new changes, one flushing.
1054If there is nothing to flush, it will accumulate changes for no more than this
1055many transactions before switching the logs and starting to flush entries out.
1056.
1057.It Sy zfs_dedup_log_mem_max Ns = Ns Sy 0 Ns Pq u64
1058Max memory to use for dedup logs.
1059.Pp
1060OpenZFS will spend no more than this much memory on maintaining the in-memory
1061dedup log.
1062Flushing will begin when around half this amount is being spent on logs.
1063The default value of
1064.Sy 0
1065will cause it to be set by
1066.Sy zfs_dedup_log_mem_max_percent
1067instead.
1068.
1069.It Sy zfs_dedup_log_mem_max_percent Ns = Ns Sy 1 Ns % Pq uint
1070Max memory to use for dedup logs, as a percentage of total memory.
1071.Pp
1072If
1073.Sy zfs_dedup_log_mem_max
1074is not set, it will be initialised as a percentage of the total memory in the
1075system.
1076.
1077.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
1078Start to delay each transaction once there is this amount of dirty data,
1079expressed as a percentage of
1080.Sy zfs_dirty_data_max .
1081This value should be at least
1082.Sy zfs_vdev_async_write_active_max_dirty_percent .
1083.No See Sx ZFS TRANSACTION DELAY .
1084.
1085.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
1086This controls how quickly the transaction delay approaches infinity.
1087Larger values cause longer delays for a given amount of dirty data.
1088.Pp
1089For the smoothest delay, this value should be about 1 billion divided
1090by the maximum number of operations per second.
1091This will smoothly handle between ten times and a tenth of this number.
1092.No See Sx ZFS TRANSACTION DELAY .
1093.Pp
1094.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
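.Pp
As a worked example of the guidance above, for a pool expected to sustain
roughly 20000 write operations per second:
.Bd -literal -compact
zfs_delay_scale = 1000000000 / 20000 = 50000
.Ed
.Pp
which would then smoothly cover workloads between roughly 2000 and 200000
operations per second.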
1095.
1096.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
1097Disables requirement for IVset GUIDs to be present and match when doing a raw
1098receive of encrypted datasets.
1099Intended for users whose pools were created with
1100OpenZFS pre-release versions and now have compatibility issues.
1101.
1102.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
1103Maximum number of uses of a single salt value before generating a new one for
1104encrypted datasets.
1105The default value is also the maximum.
1106.
1107.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
1108Size of the znode hashtable used for holds.
1109.Pp
1110Due to the need to hold locks on objects that may not exist yet, kernel mutexes
1111are not created per-object and instead a hashtable is used where collisions
1112will result in objects waiting when there is not actually contention on the
1113same object.
1114.
1115.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
1116Rate limit delay zevents (which report slow I/O operations) to this many per
1117second.
1118.
1119.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
1120Upper-bound limit for unflushed metadata changes to be held by the
1121log spacemap in memory, in bytes.
1122.
1123.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
1124Part of overall system memory that ZFS allows to be used
1125for unflushed metadata changes by the log spacemap, in millionths.
1126.
1127.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
1128Describes the maximum number of log spacemap blocks allowed for each pool.
1129The default value means that the space in all the log spacemaps
1130can add up to no more than
1131.Sy 131072
1132blocks (which means
1133.Em 16 GiB
1134of logical space before compression and ditto blocks,
1135assuming that blocksize is
1136.Em 128 KiB ) .
1137.Pp
1138This tunable is important because it involves a trade-off between import
1139time after an unclean export and the frequency of flushing metaslabs.
1140The higher this number is, the more log blocks we allow when the pool is
1141active which means that we flush metaslabs less often and thus decrease
1142the number of I/O operations for spacemap updates per TXG.
1143At the same time though, that means that in the event of an unclean export,
1144there will be more log spacemap blocks for us to read, inducing overhead
1145in the import time of the pool.
The lower the number, the more the amount of flushing increases, destroying log
blocks more quickly as they become obsolete, which leaves fewer blocks
to be read during import time after a crash.
1149.Pp
1150Each log spacemap block existing during pool import leads to approximately
1151one extra logical I/O issued.
1152This is the reason why this tunable is exposed in terms of blocks rather
1153than space used.
1154.
1155.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
1156If the number of metaslabs is small and our incoming rate is high,
1157we could get into a situation that we are flushing all our metaslabs every TXG.
1158Thus we always allow at least this many log blocks.
1159.
1160.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
1161Tunable used to determine the number of blocks that can be used for
1162the spacemap log, expressed as a percentage of the total number of
1163unflushed metaslabs in the pool.
1164.
1165.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
1166Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
1167It effectively limits maximum number of unflushed per-TXG spacemap logs
1168that need to be read after unclean pool export.
1169.
1170.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1171When enabled, files will not be asynchronously removed from the list of pending
1172unlinks and the space they consume will be leaked.
1173Once this option has been disabled and the dataset is remounted,
1174the pending unlinks will be processed and the freed space returned to the pool.
1175This option is used by the test suite.
1176.
1177.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted
synchronously.
1182Decreasing this value will reduce the time spent in an
1183.Xr unlink 2
1184system call, at the expense of a longer delay before the freed space is
1185available.
1186This only applies on Linux.
1187.
1188.It Sy zfs_dirty_data_max Ns = Pq int
1189Determines the dirty space limit in bytes.
1190Once this limit is exceeded, new writes are halted until space frees up.
1191This parameter takes precedence over
1192.Sy zfs_dirty_data_max_percent .
1193.No See Sx ZFS TRANSACTION DELAY .
1194.Pp
1195Defaults to
1196.Sy physical_ram/10 ,
1197capped at
1198.Sy zfs_dirty_data_max_max .
1199.
1200.It Sy zfs_dirty_data_max_max Ns = Pq int
1201Maximum allowable value of
1202.Sy zfs_dirty_data_max ,
1203expressed in bytes.
1204This limit is only enforced at module load time, and will be ignored if
1205.Sy zfs_dirty_data_max
1206is later changed.
1207This parameter takes precedence over
1208.Sy zfs_dirty_data_max_max_percent .
1209.No See Sx ZFS TRANSACTION DELAY .
1210.Pp
1211Defaults to
1212.Sy min(physical_ram/4, 4GiB) ,
1213or
1214.Sy min(physical_ram/4, 1GiB)
1215for 32-bit systems.
1216.
1217.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
1218Maximum allowable value of
1219.Sy zfs_dirty_data_max ,
1220expressed as a percentage of physical RAM.
1221This limit is only enforced at module load time, and will be ignored if
1222.Sy zfs_dirty_data_max
1223is later changed.
1224The parameter
1225.Sy zfs_dirty_data_max_max
1226takes precedence over this one.
1227.No See Sx ZFS TRANSACTION DELAY .
1228.
1229.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
1230Determines the dirty space limit, expressed as a percentage of all memory.
1231Once this limit is exceeded, new writes are halted until space frees up.
1232The parameter
1233.Sy zfs_dirty_data_max
1234takes precedence over this one.
1235.No See Sx ZFS TRANSACTION DELAY .
1236.Pp
1237Subject to
1238.Sy zfs_dirty_data_max_max .
1239.
1240.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
1241Start syncing out a transaction group if there's at least this much dirty data
1242.Pq as a percentage of Sy zfs_dirty_data_max .
1243This should be less than
1244.Sy zfs_vdev_async_write_active_min_dirty_percent .
1245.
1246.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
1248Write operations are throttled when approaching the limit until log data is
1249cleared out after transaction group sync.
Because of some overhead, it should be set to at least 2 times the size of
1251.Sy zfs_dirty_data_max
1252.No to prevent harming normal write throughput .
1253It also should be smaller than the size of the slog device if slog is present.
1254.Pp
1255Defaults to
1256.Sy zfs_dirty_data_max*2
1257.
1258.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
1259Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1260preallocated for a file in order to guarantee that later writes will not
1261run out of space.
1262Instead,
1263.Xr fallocate 2
1264space preallocation only checks that sufficient space is currently available
1265in the pool or the user's project quota allocation,
1266and then creates a sparse file of the requested size.
1267The requested space is multiplied by
1268.Sy zfs_fallocate_reserve_percent
1269to allow additional space for indirect blocks and other internal metadata.
1270Setting this to
1271.Sy 0
1272disables support for
1273.Xr fallocate 2
1274and causes it to return
1275.Sy EOPNOTSUPP .
1276.
1277.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
1278Select a fletcher 4 implementation.
1279.Pp
1280Supported selectors are:
1281.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
1282.No and Sy aarch64_neon .
1283All except
1284.Sy fastest No and Sy scalar
1285require instruction set extensions to be available,
1286and will only appear if ZFS detects that they are present at runtime.
1287If multiple implementations of fletcher 4 are available, the
1288.Sy fastest
1289will be chosen using a micro benchmark.
1290Selecting
1291.Sy scalar
1292results in the original CPU-based calculation being used.
1293Selecting any option other than
1294.Sy fastest No or Sy scalar
1295results in vector instructions
1296from the respective CPU instruction set being used.
1297.
1298.It Sy zfs_bclone_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1299Enable the experimental block cloning feature.
1300If this setting is 0, then even if feature@block_cloning is enabled,
1301attempts to clone blocks will act as though the feature is disabled.
1302.
1303.It Sy zfs_bclone_wait_dirty Ns = Ns Sy 0 Ns | Ns 1 Pq int
1304When set to 1 the FICLONE and FICLONERANGE ioctls wait for dirty data to be
1305written to disk.
1306This allows the clone operation to reliably succeed when a file is
1307modified and then immediately cloned.
1308For small files this may be slower than making a copy of the file.
1309Therefore, this setting defaults to 0 which causes a clone operation to
1310immediately fail when encountering a dirty block.
1311.
1312.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
1313Select a BLAKE3 implementation.
1314.Pp
1315Supported selectors are:
1316.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
1317All except
1318.Sy cycle , fastest No and Sy generic
1319require instruction set extensions to be available,
1320and will only appear if ZFS detects that they are present at runtime.
1321If multiple implementations of BLAKE3 are available, the
.Sy fastest
will be chosen using a micro benchmark.
You can see the benchmark results by reading this kstat file:
1324.Pa /proc/spl/kstat/zfs/chksum_bench .
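.Pp
For example (the module parameter path assumes the Linux interface), the
benchmark results can be inspected and a specific implementation selected
with something like:
.Bd -literal -compact
# cat /proc/spl/kstat/zfs/chksum_bench
# echo sse41 > /sys/module/zfs/parameters/zfs_blake3_impl
.Ed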
1325.
1326.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1327Enable/disable the processing of the free_bpobj object.
1328.
1329.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
1330Maximum number of blocks freed in a single TXG.
1331.
1332.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
1333Maximum number of dedup blocks freed in a single TXG.
1334.
1335.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
1336Maximum asynchronous read I/O operations active to each device.
1337.No See Sx ZFS I/O SCHEDULER .
1338.
1339.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
Minimum asynchronous read I/O operations active to each device.
1341.No See Sx ZFS I/O SCHEDULER .
1342.
1343.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
1344When the pool has more than this much dirty data, use
1345.Sy zfs_vdev_async_write_max_active
1346to limit active async writes.
1347If the dirty data is between the minimum and maximum,
1348the active I/O limit is linearly interpolated.
1349.No See Sx ZFS I/O SCHEDULER .
1350.
1351.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
1352When the pool has less than this much dirty data, use
1353.Sy zfs_vdev_async_write_min_active
1354to limit active async writes.
1355If the dirty data is between the minimum and maximum,
1356the active I/O limit is linearly
1357interpolated.
1358.No See Sx ZFS I/O SCHEDULER .
1359.
1360.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
1361Maximum asynchronous write I/O operations active to each device.
1362.No See Sx ZFS I/O SCHEDULER .
1363.
1364.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
1365Minimum asynchronous write I/O operations active to each device.
1366.No See Sx ZFS I/O SCHEDULER .
1367.Pp
1368Lower values are associated with better latency on rotational media but poorer
1369resilver performance.
1370The default value of
1371.Sy 2
1372was chosen as a compromise.
1373A value of
1374.Sy 3
1375has been shown to improve resilver performance further at a cost of
1376further increasing latency.
1377.
1378.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
1379Maximum initializing I/O operations active to each device.
1380.No See Sx ZFS I/O SCHEDULER .
1381.
1382.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
1383Minimum initializing I/O operations active to each device.
1384.No See Sx ZFS I/O SCHEDULER .
1385.
1386.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
1387The maximum number of I/O operations active to each device.
1388Ideally, this will be at least the sum of each queue's
1389.Sy max_active .
1390.No See Sx ZFS I/O SCHEDULER .
1391.
1392.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
1393Timeout value to wait before determining a device is missing
1394during import.
1395This is helpful for transient missing paths due
1396to links being briefly removed and recreated in response to
1397udev events.
1398.
1399.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
1400Maximum sequential resilver I/O operations active to each device.
1401.No See Sx ZFS I/O SCHEDULER .
1402.
1403.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
1404Minimum sequential resilver I/O operations active to each device.
1405.No See Sx ZFS I/O SCHEDULER .
1406.
1407.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
1408Maximum removal I/O operations active to each device.
1409.No See Sx ZFS I/O SCHEDULER .
1410.
1411.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
1412Minimum removal I/O operations active to each device.
1413.No See Sx ZFS I/O SCHEDULER .
1414.
1415.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
1416Maximum scrub I/O operations active to each device.
1417.No See Sx ZFS I/O SCHEDULER .
1418.
1419.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
1420Minimum scrub I/O operations active to each device.
1421.No See Sx ZFS I/O SCHEDULER .
1422.
1423.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
1424Maximum synchronous read I/O operations active to each device.
1425.No See Sx ZFS I/O SCHEDULER .
1426.
1427.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
1428Minimum synchronous read I/O operations active to each device.
1429.No See Sx ZFS I/O SCHEDULER .
1430.
1431.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
1432Maximum synchronous write I/O operations active to each device.
1433.No See Sx ZFS I/O SCHEDULER .
1434.
1435.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
1436Minimum synchronous write I/O operations active to each device.
1437.No See Sx ZFS I/O SCHEDULER .
1438.
1439.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
1440Maximum trim/discard I/O operations active to each device.
1441.No See Sx ZFS I/O SCHEDULER .
1442.
1443.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
1444Minimum trim/discard I/O operations active to each device.
1445.No See Sx ZFS I/O SCHEDULER .
1446.
1447.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
1448For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1449the number of concurrently-active I/O operations is limited to
1450.Sy zfs_*_min_active ,
1451unless the vdev is "idle".
1452When there are no interactive I/O operations active (synchronous or otherwise),
1453and
1454.Sy zfs_vdev_nia_delay
1455operations have completed since the last interactive operation,
1456then the vdev is considered to be "idle",
1457and the number of concurrently-active non-interactive operations is increased to
1458.Sy zfs_*_max_active .
1459.No See Sx ZFS I/O SCHEDULER .
1460.
1461.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
Some HDDs tend to prioritize sequential I/O so strongly that concurrent
1463random I/O latency reaches several seconds.
1464On some HDDs this happens even if sequential I/O operations
1465are submitted one at a time, and so setting
1466.Sy zfs_*_max_active Ns = Sy 1
1467does not help.
1468To prevent non-interactive I/O, like scrub,
1469from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit
operations can be sent
1471while there are outstanding incomplete interactive operations.
1472This enforced wait ensures the HDD services the interactive I/O
1473within a reasonable amount of time.
1474.No See Sx ZFS I/O SCHEDULER .
1475.
1476.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq uint
1477Maximum number of queued allocations per top-level vdev expressed as
1478a percentage of
1479.Sy zfs_vdev_async_write_max_active ,
1480which allows the system to detect devices that are more capable
1481of handling allocations and to allocate more blocks to those devices.
1482This allows for dynamic allocation distribution when devices are imbalanced,
1483as fuller devices will tend to be slower than empty devices.
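For example, with the default of 1000% and
.Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 ,
up to 100 allocations may be queued to each top-level vdev.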
1484.Pp
1485Also see
1486.Sy zio_dva_throttle_enabled .
1487.
1488.It Sy zfs_vdev_def_queue_depth Ns = Ns Sy 32 Pq uint
1489Default queue depth for each vdev IO allocator.
1490Higher values allow for better coalescing of sequential writes before sending
1491them to the disk, but can increase transaction commit times.
1492.
1493.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
Defines if the driver should retire (not retry) I/O on a given error type.
1495The following options may be bitwise-ored together:
1496.TS
1497box;
1498lbz r l l .
1499	Value	Name	Description
1500_
1501	1	Device	No driver retries on device errors
1502	2	Transport	No driver retries on transport errors.
1503	4	Driver	No driver retries on driver errors.
1504.TE
1505.
1506.It Sy zfs_vdev_disk_max_segs Ns = Ns Sy 0 Pq uint
1507Maximum number of segments to add to a BIO (min 4).
1508If this is higher than the maximum allowed by the device queue or the kernel
1509itself, it will be clamped.
1510Setting it to zero will cause the kernel's ideal size to be used.
1511This parameter only applies on Linux.
1512This parameter is ignored if
1513.Sy zfs_vdev_disk_classic Ns = Ns Sy 1 .
1514.
1515.It Sy zfs_vdev_disk_classic Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1516If set to 1, OpenZFS will submit IO to Linux using the method it used in 2.2
1517and earlier.
1518This "classic" method has known issues with highly fragmented IO requests and
1519is slower on many workloads, but it has been in use for many years and is known
1520to be very stable.
If you set this parameter, please also open a bug report describing why you
did so, including the workload involved and any error messages.
1523.Pp
1524This parameter and the classic submission method will be removed once we have
1525total confidence in the new method.
1526.Pp
1527This parameter only applies on Linux, and can only be set at module load time.
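.Pp
For example, one way to set it at module load time is through a
.Xr modprobe.d 5
configuration file (a sketch; the file name is arbitrary):
.Bd -literal -compact
# /etc/modprobe.d/zfs.conf
options zfs zfs_vdev_disk_classic=1
.Ed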
1528.
1529.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
Seconds to wait before an automounted snapshot under
.Pa .zfs/snapshot
is unmounted (expired).
1532.
1533.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
1534Allow the creation, removal, or renaming of entries in the
1535.Sy .zfs/snapshot
1536directory to cause the creation, destruction, or renaming of snapshots.
1537When enabled, this functionality works both locally and over NFS exports
1538which have the
1539.Em no_root_squash
1540option set.
1541.
1542.It Sy zfs_flags Ns = Ns Sy 0 Pq int
1543Set additional debugging flags.
1544The following flags may be bitwise-ored together:
1545.TS
1546box;
1547lbz r l l .
1548	Value	Name	Description
1549_
1550	1	ZFS_DEBUG_DPRINTF	Enable dprintf entries in the debug log.
1551*	2	ZFS_DEBUG_DBUF_VERIFY	Enable extra dbuf verifications.
1552*	4	ZFS_DEBUG_DNODE_VERIFY	Enable extra dnode verifications.
1553	8	ZFS_DEBUG_SNAPNAMES	Enable snapshot name verification.
1554*	16	ZFS_DEBUG_MODIFY	Check for illegally modified ARC buffers.
1555	64	ZFS_DEBUG_ZIO_FREE	Enable verification of block frees.
1556	128	ZFS_DEBUG_HISTOGRAM_VERIFY	Enable extra spacemap histogram verifications.
1557	256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1558	512	ZFS_DEBUG_SET_ERROR	Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
1559	1024	ZFS_DEBUG_INDIRECT_REMAP	Verify split blocks created by device removal.
1560	2048	ZFS_DEBUG_TRIM	Verify TRIM ranges are always within the allocatable range tree.
1561	4096	ZFS_DEBUG_LOG_SPACEMAP	Verify that the log summary is consistent with the spacemap log
1562			       and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
1563.TE
1564.Sy \& * No Requires debug build .
1565.
1566.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
1567Enables btree verification.
The following settings are cumulative:
1569.TS
1570box;
1571lbz r l l .
1572	Value	Description
1573
1574	1	Verify height.
1575	2	Verify pointers from children to parent.
1576	3	Verify element counts.
1577	4	Verify element order. (expensive)
1578*	5	Verify unused memory is poisoned. (expensive)
1579.TE
1580.Sy \& * No Requires debug build .
1581.
1582.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
1583If destroy encounters an
1584.Sy EIO
1585while reading metadata (e.g. indirect blocks),
1586space referenced by the missing metadata can not be freed.
1587Normally this causes the background destroy to become "stalled",
1588as it is unable to make forward progress.
1589While in this stalled state, all remaining space to free
1590from the error-encountering filesystem is "temporarily leaked".
1591Set this flag to cause it to ignore the
1592.Sy EIO ,
1593permanently leak the space from indirect blocks that can not be read,
1594and continue to free everything else that it can.
1595.Pp
1596The default "stalling" behavior is useful if the storage partially
1597fails (i.e. some but not all I/O operations fail), and then later recovers.
1598In this case, we will be able to continue pool operations while it is
1599partially failed, and when it recovers, we can continue to free the
1600space, with no leaks.
1601Note, however, that this case is actually fairly rare.
1602.Pp
1603Typically pools either
1604.Bl -enum -compact -offset 4n -width "1."
1605.It
1606fail completely (but perhaps temporarily,
1607e.g. due to a top-level vdev going offline), or
1608.It
1609have localized, permanent errors (e.g. disk returns the wrong data
1610due to bit flip or firmware bug).
1611.El
1612In the former case, this setting does not matter because the
1613pool will be suspended and the sync thread will not be able to make
1614forward progress regardless.
1615In the latter, because the error is permanent, the best we can do
1616is leak the minimum amount of space,
1617which is what setting this flag will do.
1618It is therefore reasonable for this flag to normally be set,
1619but we chose the more conservative approach of not setting it,
1620so that there is no possibility of
1621leaking space in the "partial temporary" failure case.
1622.
1623.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
1624During a
1625.Nm zfs Cm destroy
1626operation using the
1627.Sy async_destroy
1628feature,
1629a minimum of this much time will be spent working on freeing blocks per TXG.
1630.
1631.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
1632Similar to
1633.Sy zfs_free_min_time_ms ,
1634but for cleanup of old indirection records for removed vdevs.
1635.
1636.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
1637Largest data block to write to the ZIL.
1638Larger blocks will be treated as if the dataset being written to had the
1639.Sy logbias Ns = Ns Sy throughput
1640property set.
1641.
1642.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
1643Pattern written to vdev free space by
1644.Xr zpool-initialize 8 .
1645.
1646.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
1647Size of writes used by
1648.Xr zpool-initialize 8 .
1649This option is used by the test suite.
1650.
1651.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
1652The threshold size (in block pointers) at which we create a new sub-livelist.
1653Larger sublists are more costly from a memory perspective but the fewer
1654sublists there are, the lower the cost of insertion.
1655.
1656.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
1657If the amount of shared space between a snapshot and its clone drops below
1658this threshold, the clone turns off the livelist and reverts to the old
1659deletion method.
This is in place because livelists no longer give us a benefit
1661once a clone has been overwritten enough.
1662.
1663.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
1664Incremented each time an extra ALLOC blkptr is added to a livelist entry while
1665it is being condensed.
1666This option is used by the test suite to track race conditions.
1667.
1668.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
1669Incremented each time livelist condensing is canceled while in
1670.Fn spa_livelist_condense_sync .
1671This option is used by the test suite to track race conditions.
1672.
1673.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1674When set, the livelist condense process pauses indefinitely before
1675executing the synctask \(em
1676.Fn spa_livelist_condense_sync .
1677This option is used by the test suite to trigger race conditions.
1678.
1679.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
1680Incremented each time livelist condensing is canceled while in
1681.Fn spa_livelist_condense_cb .
1682This option is used by the test suite to track race conditions.
1683.
1684.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1685When set, the livelist condense process pauses indefinitely before
1686executing the open context condensing work in
1687.Fn spa_livelist_condense_cb .
1688This option is used by the test suite to trigger race conditions.
1689.
1690.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
1691The maximum execution time limit that can be set for a ZFS channel program,
1692specified as a number of Lua instructions.
1693.
1694.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
1695The maximum memory limit that can be set for a ZFS channel program, specified
1696in bytes.
1697.
1698.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
1699The maximum depth of nested datasets.
1700This value can be tuned temporarily to
1701fix existing datasets that exceed the predefined limit.
1702.
1703.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64
1704The number of past TXGs that the flushing algorithm of the log spacemap
1705feature uses to estimate incoming log blocks.
1706.
1707.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64
1708Maximum number of rows allowed in the summary of the spacemap log.
1709.
1710.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint
1711We currently support block sizes from
1712.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc .
1713The benefits of larger blocks, and thus larger I/O,
1714need to be weighed against the cost of COWing a giant block to modify one byte.
1715Additionally, very large blocks can have an impact on I/O latency,
1716and also potentially on the memory allocator.
Therefore, we formerly forbade creating blocks larger than 1 MiB.
Larger blocks could be created by changing this tunable,
1719and pools with larger blocks can always be imported and used,
1720regardless of this setting.
1721.
1722.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int
1723Allow datasets received with redacted send/receive to be mounted.
1724Normally disabled because these datasets may be missing key data.
1725.
1726.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64
1727Minimum number of metaslabs to flush per dirty TXG.
1728.
1729.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint
1730Allow metaslabs to keep their active state as long as their fragmentation
1731percentage is no more than this value.
1732An active metaslab that exceeds this threshold
1733will no longer keep its active status allowing better metaslabs to be selected.
1734.
1735.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint
1736Metaslab groups are considered eligible for allocations if their
1737fragmentation metric (measured as a percentage) is less than or equal to
1738this value.
1739If a metaslab group exceeds this threshold then it will be
1740skipped unless all metaslab groups within the metaslab class have also
1741crossed this threshold.
1742.
1743.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint
1744Defines a threshold at which metaslab groups should be eligible for allocations.
1745The value is expressed as a percentage of free space
1746beyond which a metaslab group is always eligible for allocations.
1747If a metaslab group's free space is less than or equal to the
1748threshold, the allocator will avoid allocating to that group
1749unless all groups in the pool have reached the threshold.
1750Once all groups have reached the threshold, all groups are allowed to accept
1751allocations.
1752The default value of
1753.Sy 0
1754disables the feature and causes all metaslab groups to be eligible for
1755allocations.
1756.Pp
1757This parameter allows one to deal with pools having heavily imbalanced
1758vdevs such as would be the case when a new vdev has been added.
1759Setting the threshold to a non-zero percentage will stop allocations
1760from being made to vdevs that aren't filled to the specified percentage
1761and allow lesser filled vdevs to acquire more allocations than they
1762otherwise would under the old
1763.Sy zfs_mg_alloc_failures
1764facility.
1765.
1766.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1767If enabled, ZFS will place DDT data into the special allocation class.
1768.
1769.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1770If enabled, ZFS will place user data indirect blocks
1771into the special allocation class.
1772.
1773.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint
1774Historical statistics for this many latest multihost updates will be available
1775in
1776.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
1777.
1778.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
1779Used to control the frequency of multihost writes which are performed when the
1780.Sy multihost
1781pool property is on.
1782This is one of the factors used to determine the
1783length of the activity check during import.
1784.Pp
1785The multihost write period is
1786.Sy zfs_multihost_interval No / Sy leaf-vdevs .
1787On average a multihost write will be issued for each leaf vdev
1788every
1789.Sy zfs_multihost_interval
1790milliseconds.
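For example, with the default interval of 1000 ms and a pool of 10 leaf vdevs,
a multihost write is issued somewhere in the pool roughly every 100 ms,
and each individual leaf vdev is written about once per second.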
1791In practice, the observed period can vary with the I/O load
1792and this observed value is the delay which is stored in the uberblock.
1793.
1794.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint
1795Used to control the duration of the activity test on import.
1796Smaller values of
1797.Sy zfs_multihost_import_intervals
1798will reduce the import time but increase
1799the risk of failing to detect an active pool.
1800The total activity check time is never allowed to drop below one second.
1801.Pp
1802On import the activity check waits a minimum amount of time determined by
1803.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals ,
1804or the same product computed on the host which last had the pool imported,
1805whichever is greater.
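With the default values, this minimum is 20 \(mu 1000 ms = 20 s.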
1806The activity check time may be further extended if the value of MMP
1807delay found in the best uberblock indicates actual multihost updates happened
1808at longer intervals than
1809.Sy zfs_multihost_interval .
1810A minimum of
1811.Em 100 ms
1812is enforced.
1813.Pp
1814.Sy 0 No is equivalent to Sy 1 .
1815.
1816.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
1817Controls the behavior of the pool when multihost write failures or delays are
1818detected.
1819.Pp
1820When
1821.Sy 0 ,
1822multihost write failures or delays are ignored.
The failures will still be reported to the ZED, which, depending on
its configuration, may take action such as suspending the pool or offlining a
device.
1826.Pp
1827Otherwise, the pool will be suspended if
1828.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval
1829milliseconds pass without a successful MMP write.
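With the default values, this is 10 \(mu 1000 ms = 10 s.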
1830This guarantees the activity test will see MMP writes if the pool is imported.
1831.Sy 1 No is equivalent to Sy 2 ;
1832this is necessary to prevent the pool from being suspended
1833due to normal, small I/O latency variations.
1834.
1835.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
1836Set to disable scrub I/O.
1837This results in scrubs not actually scrubbing data and
1838simply doing a metadata crawl of the pool instead.
1839.
1840.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
1841Set to disable block prefetching for scrubs.
1842.
1843.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
1844Disable cache flush operations on disks when writing.
1845Setting this will cause pool corruption on power loss
1846if a volatile out-of-order write cache is enabled.
1847.
1848.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1849Allow no-operation writes.
1850The occurrence of nopwrites will further depend on other pool properties
1851.Pq i.a. the checksumming and compression algorithms .
1852.
1853.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int
1854Enable forcing TXG sync to find holes.
When enabled, this forces ZFS to sync data when
.Sy SEEK_HOLE No or Sy SEEK_DATA
flags are used, allowing holes in a file to be accurately reported.
When disabled, holes will not be reported in recently dirtied files.
1859.
1860.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int
1861The number of bytes which should be prefetched during a pool traversal, like
1862.Nm zfs Cm send
1863or other data crawling operations.
1864.
1865.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint
The number of blocks pointed to by an indirect (non-L0) block which should be
1867prefetched during a pool traversal, like
1868.Nm zfs Cm send
1869or other data crawling operations.
1870.
1871.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64
1872Control percentage of dirtied indirect blocks from frees allowed into one TXG.
1873After this threshold is crossed, additional frees will wait until the next TXG.
1874.Sy 0 No disables this throttle .
1875.
1876.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1877Disable predictive prefetch.
1878Note that it leaves "prescient" prefetch
1879.Pq for, e.g., Nm zfs Cm send
1880intact.
1881Unlike predictive prefetch, prescient prefetch never issues I/O
1882that ends up not being needed, so it can't hurt performance.
1883.
1884.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1885Disable QAT hardware acceleration for SHA256 checksums.
1886May be unset after the ZFS modules have been loaded to initialize the QAT
1887hardware as long as support is compiled in and the QAT driver is present.
1888.
1889.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1890Disable QAT hardware acceleration for gzip compression.
1891May be unset after the ZFS modules have been loaded to initialize the QAT
1892hardware as long as support is compiled in and the QAT driver is present.
1893.
1894.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1895Disable QAT hardware acceleration for AES-GCM encryption.
1896May be unset after the ZFS modules have been loaded to initialize the QAT
1897hardware as long as support is compiled in and the QAT driver is present.
1898.
1899.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
1900Bytes to read per chunk.
1901.
1902.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint
1903Historical statistics for this many latest reads will be available in
1904.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads .
1905.
1906.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
Include cache hits in read history.
1908.
1909.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
1910Maximum read segment size to issue when sequentially resilvering a
1911top-level vdev.
1912.
1913.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1914Automatically start a pool scrub when the last active sequential resilver
1915completes in order to verify the checksums of all blocks which have been
1916resilvered.
1917This is enabled by default and strongly recommended.
1918.
1919.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
1920Maximum amount of I/O that can be concurrently issued for a sequential
1921resilver per leaf device, given in bytes.
1922.
1923.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int
1924If an indirect split block contains more than this many possible unique
1925combinations when being reconstructed, consider it too computationally
1926expensive to check them all.
1927Instead, try at most this many randomly selected
1928combinations each time the block is accessed.
1929This allows all segment copies to participate fairly
1930in the reconstruction when all combinations
1931cannot be checked and prevents repeated use of one bad copy.
1932.
1933.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int
1934Set to attempt to recover from fatal errors.
1935This should only be used as a last resort,
1936as it typically results in leaked space, or worse.
1937.
1938.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1939Ignore hard I/O errors during device removal.
1940When set, if a device encounters a hard I/O error during the removal process
1941the removal will not be cancelled.
1942This can result in a normally recoverable block becoming permanently damaged
1943and is hence not recommended.
1944This should only be used as a last resort when the
1945pool cannot be returned to a healthy state prior to removing the device.
1946.
1947.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1948This is used by the test suite so that it can ensure that certain actions
1949happen while in the middle of a removal.
1950.
1951.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
1952The largest contiguous segment that we will attempt to allocate when removing
1953a device.
1954If there is a performance problem with attempting to allocate large blocks,
1955consider decreasing this.
1956The default value is also the maximum.
1957.
1958.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int
1959Ignore the
1960.Sy resilver_defer
1961feature, causing an operation that would start a resilver to
1962immediately restart the one in progress.
1963.
1964.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint
1965Resilvers are processed by the sync thread.
1966While resilvering, it will spend at least this much time
1967working on a resilver between TXG flushes.
1968.
1969.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1970If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
1971even if there were unrepairable errors.
1972Intended to be used during pool repair or recovery to
1973stop resilvering when the pool is next imported.
1974.
1975.It Sy zfs_scrub_after_expand Ns = Ns Sy 1 Ns | Ns 0 Pq int
1976Automatically start a pool scrub after a RAIDZ expansion completes
1977in order to verify the checksums of all blocks which have been
1978copied during the expansion.
1979This is enabled by default and strongly recommended.
1980.
1981.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint
1982Scrubs are processed by the sync thread.
1983While scrubbing, it will spend at least this much time
1984working on a scrub between TXG flushes.
1985.
1986.It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint
1987Error blocks to be scrubbed in one txg.
1988.
1989.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint
1990To preserve progress across reboots, the sequential scan algorithm periodically
1991needs to stop metadata scanning and issue all the verification I/O to disk.
1992The frequency of this flushing is determined by this tunable.
1993.
1994.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint
1995This tunable affects how scrub and resilver I/O segments are ordered.
1996A higher number indicates that we care more about how filled in a segment is,
1997while a lower number indicates we care more about the size of the extent without
1998considering the gaps within a segment.
1999This value is only tunable upon module insertion.
2000Changing the value afterwards will have no effect on scrub or resilver
2001performance.
2002.
2003.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint
2004Determines the order that data will be verified while scrubbing or resilvering:
2005.Bl -tag -compact -offset 4n -width "a"
2006.It Sy 1
2007Data will be verified as sequentially as possible, given the
2008amount of memory reserved for scrubbing
2009.Pq see Sy zfs_scan_mem_lim_fact .
2010This may improve scrub performance if the pool's data is very fragmented.
2011.It Sy 2
2012The largest mostly-contiguous chunk of found data will be verified first.
2013By deferring scrubbing of small segments, we may later find adjacent data
2014to coalesce and increase the segment size.
2015.It Sy 0
2016.No Use strategy Sy 1 No during normal verification
2017.No and strategy Sy 2 No while taking a checkpoint .
2018.El
2019.
2020.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
2021If unset, indicates that scrubs and resilvers will gather metadata in
2022memory before issuing sequential I/O.
2023Otherwise indicates that the legacy algorithm will be used,
2024where I/O is initiated as soon as it is discovered.
2025Unsetting will not affect scrubs or resilvers that are already in progress.
2026.
2027.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
2028Sets the largest gap in bytes between scrub/resilver I/O operations
2029that will still be considered sequential for sorting purposes.
2030Changing this value will not
2031affect scrubs or resilvers that are already in progress.
2032.
2033.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
Maximum fraction of RAM used for I/O sorting by the sequential scan algorithm.
2035This tunable determines the hard limit for I/O sorting memory usage.
2036When the hard limit is reached we stop scanning metadata and start issuing
2037data verification I/O.
2038This is done until we get below the soft limit.
2039.
2040.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
The fraction of the hard limit used to determine the soft limit for I/O sorting
by the sequential scan algorithm.
When we cross this limit from below, no action is taken.
When we cross this limit from above, it is because we are issuing verification
2045I/O.
2046In this case (unless the metadata scan is done) we stop issuing verification I/O
2047and start scanning metadata again until we get to the hard limit.
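.Pp
As a worked example, on a system with 64 GiB of RAM and the default factors,
the hard limit is 64 GiB / 20 = 3.2 GiB of sorting memory,
and the soft limit is a further 1/20 of that, about 164 MiB.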
2048.
2049.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2050When reporting resilver throughput and estimated completion time use the
2051performance observed over roughly the last
2052.Sy zfs_scan_report_txgs
2053TXGs.
When set to zero, performance is calculated over the time between checkpoints.
2055.
2056.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
2057Enforce tight memory limits on pool scans when a sequential scan is in progress.
2058When disabled, the memory limit may be exceeded by fast disks.
2059.
2060.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
2061Freezes a scrub/resilver in progress without actually pausing it.
2062Intended for testing/debugging.
2063.
2064.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
2065Maximum amount of data that can be concurrently issued at once for scrubs and
2066resilvers per leaf device, given in bytes.
2067.
2068.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
2069Allow sending of corrupt data (ignore read/checksum errors when sending).
2070.
2071.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
2072Include unmodified spill blocks in the send stream.
2073Under certain circumstances, previous versions of ZFS could incorrectly
2074remove the spill block from an existing object.
2075Including unmodified copies of the spill blocks creates a backwards-compatible
2076stream which will recreate a spill block if it was incorrectly removed.
2077.
2078.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2079The fill fraction of the
2080.Nm zfs Cm send
2081internal queues.
2082The fill fraction controls the timing with which internal threads are woken up.
2083.
2084.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2085The maximum number of bytes allowed in
2086.Nm zfs Cm send Ns 's
2087internal queues.
2088.
2089.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2090The fill fraction of the
2091.Nm zfs Cm send
2092prefetch queue.
2093The fill fraction controls the timing with which internal threads are woken up.
2094.
2095.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
2096The maximum number of bytes allowed that will be prefetched by
2097.Nm zfs Cm send .
2098This value must be at least twice the maximum block size in use.
2099.
2100.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2101The fill fraction of the
2102.Nm zfs Cm receive
2103queue.
2104The fill fraction controls the timing with which internal threads are woken up.
2105.
2106.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
2107The maximum number of bytes allowed in the
2108.Nm zfs Cm receive
2109queue.
2110This value must be at least twice the maximum block size in use.
2111.
2112.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2113The maximum amount of data, in bytes, that
2114.Nm zfs Cm receive
2115will write in one DMU transaction.
2116This is the uncompressed size, even when receiving a compressed send stream.
2117This setting will not reduce the write size below a single block.
2118Capped at a maximum of
2119.Sy 32 MiB .
2120.
2121.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
When this variable is set to non-zero, a corrective receive:
.Bl -enum -compact -offset 4n -width "1."
.It
Does not enforce the restriction that source and destination snapshot GUIDs
match.
.It
If there is an error during healing, the healing receive is not
terminated; instead it moves on to the next record.
2130.El
2131.
2132.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2133Setting this variable overrides the default logic for estimating block
2134sizes when doing a
2135.Nm zfs Cm send .
2136The default heuristic is that the average block size
2137will be the current recordsize.
Set this tunable if most data in your dataset is not of that size
2139and you require accurate zfs send size estimates.
2140.
2141.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
2142Flushing of data to disk is done in passes.
2143Defer frees starting in this pass.
2144.
2145.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
2146Maximum memory used for prefetching a checkpoint's space map on each
2147vdev while discarding the checkpoint.
2148.
2149.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint
2150Only allow small data blocks to be allocated on the special and dedup vdev
2151types when the available free space percentage on these vdevs exceeds this
2152value.
2153This ensures reserved space is available for pool metadata as the
2154special vdevs approach capacity.
2155.
2156.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint
Starting in this sync pass, disable compression (including metadata).
2158With the default setting, in practice, we don't have this many sync passes,
2159so this has no effect.
2160.Pp
2161The original intent was that disabling compression would help the sync passes
2162to converge.
2163However, in practice, disabling compression increases
2164the average number of sync passes; because when we turn compression off,
2165many blocks' size will change, and thus we have to re-allocate
2166(not overwrite) them.
2167It also increases the number of
2168.Em 128 KiB
2169allocations (e.g. for indirect blocks and spacemaps)
2170because these will not be compressed.
2171The
2172.Em 128 KiB
2173allocations are especially detrimental to performance
2174on highly fragmented systems, which may have very few free segments of this
2175size,
2176and may need to load new metaslabs to satisfy these allocations.
2177.
2178.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint
2179Rewrite new block pointers starting in this pass.
2180.
2181.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Maximum size of TRIM commands.
2183Larger ranges will be split into chunks no larger than this value before
2184issuing.
2185.
2186.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
2187Minimum size of TRIM commands.
2188TRIM ranges smaller than this will be skipped,
2189unless they're part of a larger range which was chunked.
2190This is done because it's common for these small TRIMs
2191to negatively impact overall performance.
2192.
2193.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2194Skip uninitialized metaslabs during the TRIM process.
2195This option is useful for pools constructed from large thinly-provisioned
2196devices
2197where TRIM operations are slow.
2198As a pool ages, an increasing fraction of the pool's metaslabs
2199will be initialized, progressively degrading the usefulness of this option.
2200This setting is stored when starting a manual TRIM and will
2201persist for the duration of the requested TRIM.
2202.
2203.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
2204Maximum number of queued TRIMs outstanding per leaf vdev.
2205The number of concurrent TRIM commands issued to the device is controlled by
2206.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
2207.
2208.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
2209The number of transaction groups' worth of frees which should be aggregated
2210before TRIM operations are issued to the device.
2211This setting represents a trade-off between issuing larger,
2212more efficient TRIM operations and the delay
2213before the recently trimmed space is available for use by the device.
2214.Pp
2215Increasing this value will allow frees to be aggregated for a longer time.
This will result in larger TRIM operations and potentially increased memory
2217usage.
2218Decreasing this value will have the opposite effect.
2219The default of
2220.Sy 32
2221was determined to be a reasonable compromise.
2222.
2223.It Sy zfs_txg_history Ns = Ns Sy 100 Pq uint
2224Historical statistics for this many latest TXGs will be available in
2225.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
2226.
2227.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
Flush dirty data to disk at least once every this many seconds
(maximum TXG duration).
2230.
2231.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2232Max vdev I/O aggregation size.
2233.
2234.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2235Max vdev I/O aggregation size for non-rotating media.
2236.
2237.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation
used to select the least busy mirror member when an I/O operation
immediately follows its predecessor on rotational vdevs.
2242.
2243.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
2244A number by which the balancing algorithm increments the load calculation for
2245the purpose of selecting the least busy mirror member when an I/O operation
2246lacks locality as defined by
2247.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this locality threshold that do not immediately follow the
previous operation are incremented by half.
2250.
2251.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
The maximum distance from the last queued I/O operation within which
the balancing algorithm considers an operation to have locality.
2254.No See Sx ZFS I/O SCHEDULER .
2255.
2256.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
2257A number by which the balancing algorithm increments the load calculation for
2258the purpose of selecting the least busy mirror member on non-rotational vdevs
2259when I/O operations do not immediately follow one another.
2260.
2261.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
2262A number by which the balancing algorithm increments the load calculation for
2263the purpose of selecting the least busy mirror member when an I/O operation
lacks locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this locality threshold that do not immediately follow the
previous operation are incremented by half.
2269.
2270.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
2271Aggregate read I/O operations if the on-disk gap between them is within this
2272threshold.
2273.
2274.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint
2275Aggregate write I/O operations if the on-disk gap between them is within this
2276threshold.
2277.
2278.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
2279Select the raidz parity implementation to use.
2280.Pp
2281Variants that don't depend on CPU-specific features
2282may be selected on module load, as they are supported on all systems.
2283The remaining options may only be set after the module is loaded,
2284as they are available only if the implementations are compiled in
2285and supported on the running system.
2286.Pp
2287Once the module is loaded,
2288.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
2289will show the available options,
2290with the currently selected one enclosed in square brackets.
2291.Pp
2292.TS
2293lb l l .
2294fastest	selected by built-in benchmark
2295original	original implementation
2296scalar	scalar implementation
2297sse2	SSE2 instruction set	64-bit x86
2298ssse3	SSSE3 instruction set	64-bit x86
2299avx2	AVX2 instruction set	64-bit x86
2300avx512f	AVX512F instruction set	64-bit x86
2301avx512bw	AVX512F & AVX512BW instruction sets	64-bit x86
2302aarch64_neon	NEON	Aarch64/64-bit ARMv8
2303aarch64_neonx2	NEON with more unrolling	Aarch64/64-bit ARMv8
2304powerpc_altivec	Altivec	PowerPC
2305.TE
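.Pp
For example, on Linux (illustrative output; the available entries depend on
compiled-in support and the CPU features detected at runtime):
.Bd -literal -compact
# cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
[fastest] original scalar sse2 ssse3 avx2
# echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.Ed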
2306.
2307.It Sy zfs_vdev_scheduler Pq charp
2308.Sy DEPRECATED .
2309Prints warning to kernel log for compatibility.
2310.
2311.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint
2312Max event queue length.
2313Events in the queue can be viewed with
2314.Xr zpool-events 8 .
2315.
2316.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
2317Maximum recent zevent records to retain for duplicate checking.
2318Setting this to
2319.Sy 0
2320disables duplicate detection.
2321.
2322.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
2323Lifespan for a recent ereport that was retained for duplicate checking.
2324.
2325.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
2326The maximum number of taskq entries that are allowed to be cached.
When this limit is exceeded, transaction records (itxs)
2328will be cleaned synchronously.
2329.
2330.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
2331The number of taskq entries that are pre-populated when the taskq is first
2332created and are immediately available for use.
2333.
2334.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
2335This controls the number of threads used by
2336.Sy dp_zil_clean_taskq .
2337The default value of
2338.Sy 100%
will create a maximum of one thread per CPU.
2340.
2341.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2342This sets the maximum block size used by the ZIL.
2343On very fragmented pools, lowering this
2344.Pq typically to Sy 36 KiB
2345can improve performance.
2346.
2347.It Sy zil_maxcopied Ns = Ns Sy 7680 Ns B Po 7.5 KiB Pc Pq uint
2348This sets the maximum number of write bytes logged via WR_COPIED.
It tunes a trade-off between an additional memory copy and possibly worse log
space efficiency versus additional range lock/unlock operations.
2351.
2352.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
2353Disable the cache flush commands that are normally sent to disk by
2354the ZIL after an LWB write has completed.
2355Setting this will cause ZIL corruption on power loss
2356if a volatile out-of-order write cache is enabled.
2357.
2358.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
2359Disable intent logging replay.
2360Can be disabled for recovery from corrupted ZIL.
2361.
2362.It Sy zil_slog_bulk Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
2363Limit SLOG write size per commit executed with synchronous priority.
2364Any writes above that will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
2366.
2367.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int
2368Setting this tunable to zero disables ZIL logging of new
2369.Sy xattr Ns = Ns Sy sa
2370records if the
2371.Sy org.openzfs:zilsaxattr
2372feature is enabled on the pool.
2373This would only be necessary to work around bugs in the ZIL logging or replay
2374code for this record type.
2375The tunable has no effect if the feature is disabled.
2376.
2377.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint
2378Usually, one metaslab from each normal-class vdev is dedicated for use by
2379the ZIL to log synchronous writes.
2380However, if there are fewer than
2381.Sy zfs_embedded_slog_min_ms
2382metaslabs in the vdev, this functionality is disabled.
2383This ensures that we don't set aside an unreasonable amount of space for the
2384ZIL.
2385.
2386.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint
Whether the heuristic for detecting incompressible data with zstd levels >= 3,
using LZ4 and zstd-1 passes, is enabled.
2389.
2390.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint
Minimum uncompressed size (inclusive) of a record before the early abort
2392heuristic will be attempted.
2393.
2394.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
2395If non-zero, the zio deadman will produce debugging messages
2396.Pq see Sy zfs_dbgmsg_enable
2397for all zios, rather than only for leaf zios possessing a vdev.
2398This is meant to be used by developers to gain
2399diagnostic information for hang conditions which don't involve a mutex
2400or other locking primitive: typically conditions in which a thread in
2401the zio pipeline is looping indefinitely.
2402.
2403.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
2404When an I/O operation takes more than this much time to complete,
2405it's marked as slow.
2406Each slow operation causes a delay zevent.
2407Slow I/O counters can be seen with
2408.Nm zpool Cm status Fl s .
2409.
2410.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
2411Throttle block allocations in the I/O pipeline.
2412This allows for dynamic allocation distribution when devices are imbalanced.
2413When enabled, the maximum number of pending allocations per top-level vdev
2414is limited by
2415.Sy zfs_vdev_queue_depth_pct .
2416.
2417.It Sy zfs_xattr_compat Ns = Ns 0 Ns | Ns 1 Pq int
2418Control the naming scheme used when setting new xattrs in the user namespace.
2419If
2420.Sy 0
2421.Pq the default on Linux ,
2422user namespace xattr names are prefixed with the namespace, to be backwards
2423compatible with previous versions of ZFS on Linux.
2424If
2425.Sy 1
2426.Pq the default on Fx ,
2427user namespace xattr names are not prefixed, to be backwards compatible with
2428previous versions of ZFS on illumos and
2429.Fx .
2430.Pp
2431Either naming scheme can be read on this and future versions of ZFS, regardless
2432of this tunable, but legacy ZFS on illumos or
2433.Fx
2434are unable to read user namespace xattrs written in the Linux format, and
2435legacy versions of ZFS on Linux are unable to read user namespace xattrs written
2436in the legacy ZFS format.
2437.Pp
2438An existing xattr with the alternate naming scheme is removed when overwriting
2439the xattr so as to not accumulate duplicates.
2440.
2441.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
2442Prioritize requeued I/O.
2443.
2444.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
2445Percentage of online CPUs which will run a worker thread for I/O.
2446These workers are responsible for I/O work such as compression, encryption,
2447checksum and parity calculations.
A fractional number of CPUs will be rounded down.
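For example, with 6 online CPUs the default of 80% yields 4.8,
which is rounded down to 4 worker threads.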
2449.Pp
2450The default value of
2451.Sy 80%
2452was chosen to avoid using all CPUs which can result in
2453latency issues and inconsistent application performance,
2454especially when slower compression and/or checksumming is enabled.
The set value only applies to pools imported or created afterwards.
2456.
2457.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
2458Number of worker threads per taskq.
2459Higher values improve I/O ordering and CPU utilization,
while lower values reduce lock contention.
The set value only applies to pools imported or created afterwards.
2462.Pp
2463If
2464.Sy 0 ,
2465generate a system-dependent value close to 6 threads per taskq.
The set value only applies to pools imported or created afterwards.
2467.
2468.It Sy zio_taskq_write_tpq Ns = Ns Sy 16 Pq uint
Determines the minimum number of threads per write issue taskq.
Higher values improve CPU utilization on high throughput,
while lower values reduce taskq lock contention on high IOPS.
The set value only applies to pools imported or created afterwards.
2473.
2474.It Sy zio_taskq_read Ns = Ns Sy fixed,1,8 null scale null Pq charp
Set the queue and thread configuration for the I/O read queues.
2476This is an advanced debugging parameter.
2477Don't change this unless you understand what it does.
Set values only apply to pools imported or created afterwards.
2479.
2480.It Sy zio_taskq_write Ns = Ns Sy sync null scale null Pq charp
Set the queue and thread configuration for the I/O write queues.
2482This is an advanced debugging parameter.
2483Don't change this unless you understand what it does.
Set values only apply to pools imported or created afterwards.
2485.
2486.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2487Do not create zvol device nodes.
2488This may slightly improve startup time on
2489systems with a very large number of zvols.
2490.
2491.It Sy zvol_major Ns = Ns Sy 230 Pq uint
2492Major number for zvol block devices.
2493.
2494.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
2495Discard (TRIM) operations done on zvols will be done in batches of this
2496many blocks, where block size is determined by the
2497.Sy volblocksize
2498property of a zvol.
2499.
2500.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2501When adding a zvol to the system, prefetch this many bytes
2502from the start and end of the volume.
2503Prefetching these regions of the volume is desirable,
2504because they are likely to be accessed immediately by
2505.Xr blkid 8
2506or the kernel partitioner.
2507.
2508.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2509When processing I/O requests for a zvol, submit them synchronously.
2510This effectively limits the queue depth to
2511.Em 1
2512for each I/O submitter.
2513When unset, requests are handled asynchronously by a thread pool.
2514The number of requests which can be handled concurrently is controlled by
2515.Sy zvol_threads .
2516.Sy zvol_request_sync
2517is ignored when running on a kernel that supports block multiqueue
2518.Pq Li blk-mq .
2519.
2520.It Sy zvol_num_taskqs Ns = Ns Sy 0 Pq uint
2521Number of zvol taskqs.
2522If
2523.Sy 0
2524(the default) then scaling is done internally to prefer 6 threads per taskq.
2525This only applies on Linux.
2526.
2527.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
The number of system-wide threads to use for processing zvol block I/Os.
2529If
2530.Sy 0
2531(the default) then internally set
2532.Sy zvol_threads
2533to the number of CPUs present or 32 (whichever is greater).
2534.
2535.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
2536The number of threads per zvol to use for queuing IO requests.
2537This parameter will only appear if your kernel supports
2538.Li blk-mq
2539and is only read and assigned to a zvol at zvol load time.
2540If
2541.Sy 0
2542(the default) then internally set
2543.Sy zvol_blk_mq_threads
2544to the number of CPUs present.
2545.
2546.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2547Set to
2548.Sy 1
2549to use the
2550.Li blk-mq
2551API for zvols.
2552Set to
2553.Sy 0
2554(the default) to use the legacy zvol APIs.
2555This setting can give better or worse zvol performance depending on
2556the workload.
2557This parameter will only appear if your kernel supports
2558.Li blk-mq
2559and is only read and assigned to a zvol at zvol load time.
2560.
2561.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
2562If
2563.Sy zvol_use_blk_mq
2564is enabled, then process this number of
2565.Sy volblocksize Ns -sized blocks per zvol thread.
This tunable can be used to favor better performance for zvol reads (lower
2567values) or writes (higher values).
2568If set to
2569.Sy 0 ,
2570then the zvol layer will process the maximum number of blocks
2571per thread that it can.
2572This parameter will only appear if your kernel supports
2573.Li blk-mq
2574and is only applied at each zvol's load time.
2575.
2576.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
2577The queue_depth value for the zvol
2578.Li blk-mq
2579interface.
2580This parameter will only appear if your kernel supports
2581.Li blk-mq
2582and is only applied at each zvol's load time.
2583If
2584.Sy 0
2585(the default) then use the kernel's default queue depth.
2586Values are clamped to the kernel's
2587.Dv BLKDEV_MIN_RQ
2588and
2589.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
2590limits.
2591.
2592.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
Defines zvol block device behavior when
2594.Sy volmode Ns = Ns Sy default :
2595.Bl -tag -compact -offset 4n -width "a"
2596.It Sy 1
2597.No equivalent to Sy full
2598.It Sy 2
2599.No equivalent to Sy dev
2600.It Sy 3
2601.No equivalent to Sy none
2602.El
2603.
2604.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2605Enable strict ZVOL quota enforcement.
2606The strict quota enforcement may have a performance impact.
2607.El
2608.
2609.Sh ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs in order to satisfy and complete
outstanding I/O requests.
2611The scheduler determines when and in what order those operations are issued.
2612The scheduler divides operations into five I/O classes,
2613prioritized in the following order: sync read, sync write, async read,
2614async write, and scrub/resilver.
2615Each queue defines the minimum and maximum number of concurrent operations
2616that may be issued to the device.
2617In addition, the device has an aggregate maximum,
2618.Sy zfs_vdev_max_active .
2619Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2620If the sum of the per-queue maxima exceeds the aggregate maximum,
2621then the number of active operations may reach
2622.Sy zfs_vdev_max_active ,
2623in which case no further operations will be issued,
2624regardless of whether all per-queue minima have been met.
2625.Pp
2626For many physical devices, throughput increases with the number of
2627concurrent operations, but latency typically suffers.
2628Furthermore, physical devices typically have a limit
2629at which more concurrent operations have no
2630effect on throughput or can actually cause it to decrease.
2631.Pp
2632The scheduler selects the next operation to issue by first looking for an
2633I/O class whose minimum has not been satisfied.
2634Once all are satisfied and the aggregate maximum has not been hit,
2635the scheduler looks for classes whose maximum has not been satisfied.
2636Iteration through the I/O classes is done in the order specified above.
2637No further operations are issued
2638if the aggregate maximum number of concurrent operations has been hit,
2639or if there are no operations queued for an I/O class that has not hit its
2640maximum.
2641Every time an I/O operation is queued or an operation completes,
2642the scheduler looks for new operations to issue.
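.Pp
The selection order described above can be sketched as follows.
This is a simplified illustrative model, not the OpenZFS implementation;
the function and variable names are invented for the example.
.Bd -literal
# Illustrative sketch (Python) of the per-vdev selection order.
CLASSES = ["sync_read", "sync_write", "async_read", "async_write", "scrub"]

def next_class(queued, active, min_active, max_active, aggregate_max):
    if sum(active.values()) >= aggregate_max:   # zfs_vdev_max_active reached
        return None
    # First pass: classes that have not yet met their minimum.
    for c in CLASSES:
        if queued[c] and active[c] < min_active[c]:
            return c
    # Second pass: classes that have not yet hit their maximum.
    for c in CLASSES:
        if queued[c] and active[c] < max_active[c]:
            return c
    return None                                 # nothing eligible to issue
.Ed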
2643.Pp
2644In general, smaller
2645.Sy max_active Ns s
2646will lead to lower latency of synchronous operations.
2647Larger
2648.Sy max_active Ns s
2649may lead to higher overall throughput, depending on underlying storage.
2650.Pp
2651The ratio of the queues'
2652.Sy max_active Ns s
2653determines the balance of performance between reads, writes, and scrubs.
2654For example, increasing
2655.Sy zfs_vdev_scrub_max_active
2656will cause the scrub or resilver to complete more quickly,
2657but reads and writes to have higher latency and lower throughput.
2658.Pp
2659All I/O classes have a fixed maximum number of outstanding operations,
2660except for the async write class.
2661Asynchronous writes represent the data that is committed to stable storage
2662during the syncing stage for transaction groups.
2663Transaction groups enter the syncing state periodically,
2664so the number of queued async writes will quickly burst up
2665and then bleed down to zero.
2666Rather than servicing them as quickly as possible,
2667the I/O scheduler changes the maximum number of active async write operations
2668according to the amount of dirty data in the pool.
2669Since both throughput and latency typically increase with the number of
2670concurrent operations issued to physical devices, reducing the
2671burstiness in the number of simultaneous operations also stabilizes the
2672response time of operations from other queues, in particular synchronous ones.
2673In broad strokes, the I/O scheduler will issue more concurrent operations
2674from the async write queue as there is more dirty data in the pool.
2675.
2676.Ss Async Writes
2677The number of concurrent operations issued for the async write I/O class
2678follows a piece-wise linear function defined by a few adjustable points:
2679.Bd -literal
2680       |              o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2681  ^    |             /^         |
2682  |    |            / |         |
2683active |           /  |         |
2684 I/O   |          /   |         |
2685count  |         /    |         |
2686       |        /     |         |
2687       |-------o      |         | <-- \fBzfs_vdev_async_write_min_active\fP
2688      0|_______^______|_________|
2689       0%      |      |       100% of \fBzfs_dirty_data_max\fP
2690               |      |
2691               |      `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2692               `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
2693.Ed
2694.Pp
2695Until the amount of dirty data exceeds a minimum percentage of the dirty
2696data allowed in the pool, the I/O scheduler will limit the number of
2697concurrent operations to the minimum.
2698As that threshold is crossed, the number of concurrent operations issued
2699increases linearly to the maximum at the specified maximum percentage
2700of the dirty data allowed in the pool.
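.Pp
The same function can be sketched as follows.
This is illustrative only, not the OpenZFS implementation;
the numeric arguments are examples standing in for the tunables named above.
.Bd -literal
# Illustrative sketch (Python) of the piece-wise linear scaling of the
# async write queue's max_active with the amount of dirty data.
# The defaults below are examples standing in for
# zfs_vdev_async_write_{min,max}_active and
# zfs_vdev_async_write_active_{min,max}_dirty_percent.
def async_write_max_active(dirty_pct, min_active=2, max_active=10,
                           lo_pct=30, hi_pct=60):
    if dirty_pct <= lo_pct:             # below the lower threshold
        return min_active
    if dirty_pct >= hi_pct:             # at or above the upper threshold
        return max_active
    # Linear interpolation between the two thresholds.
    span = (dirty_pct - lo_pct) / (hi_pct - lo_pct)
    return round(min_active + span * (max_active - min_active))

print(async_write_max_active(10))   # -> 2
print(async_write_max_active(45))   # -> 6
print(async_write_max_active(80))   # -> 10
.Ed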
2701.Pp
2702Ideally, the amount of dirty data on a busy pool will stay in the sloped
2703part of the function between
2704.Sy zfs_vdev_async_write_active_min_dirty_percent
2705and
2706.Sy zfs_vdev_async_write_active_max_dirty_percent .
2707If it exceeds the maximum percentage,
2708this indicates that the rate of incoming data is
2709greater than the rate that the backend storage can handle.
2710In this case, we must further throttle incoming writes,
2711as described in the next section.
2712.
2713.Sh ZFS TRANSACTION DELAY
2714We delay transactions when we've determined that the backend storage
2715isn't able to accommodate the rate of incoming writes.
2716.Pp
2717If there is already a transaction waiting, we delay relative to when
2718that transaction will finish waiting.
2719This way the calculated delay time
2720is independent of the number of threads concurrently executing transactions.
2721.Pp
2722If we are the only waiter, wait relative to when the transaction started,
2723rather than the current time.
2724This credits the transaction for "time already served",
2725e.g. reading indirect blocks.
2726.Pp
2727The minimum time for a transaction to take is calculated as
2728.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
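.Pp
A short worked sketch of this calculation
(illustrative only, not the OpenZFS implementation;
the function name and the unit handling are assumptions):
.Bd -literal
# Illustrative sketch (Python) of the transaction delay curve.
# dirty, delay_min and dirty_max are amounts of dirty data expressed
# here as percentages of zfs_dirty_data_max; zfs_delay_scale is in
# nanoseconds (500000, i.e. 500 us, by default).
def tx_delay_ns(dirty, delay_min, dirty_max, zfs_delay_scale=500000):
    if dirty <= delay_min:
        return 0
    if dirty >= dirty_max:
        return 100_000_000              # hard cap of 100 ms
    raw = zfs_delay_scale * (dirty - delay_min) / (dirty_max - dirty)
    return min(raw, 100_000_000)

# At the midpoint between delay_min and dirty_max the two factors
# cancel, so the delay equals zfs_delay_scale.
print(tx_delay_ns(dirty=75, delay_min=60, dirty_max=90))   # -> 500000.0
.Ed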
2729.Pp
2730The delay has two degrees of freedom that can be adjusted via tunables.
2731The percentage of dirty data at which we start to delay is defined by
2732.Sy zfs_delay_min_dirty_percent .
2733This should typically be at or above
2734.Sy zfs_vdev_async_write_active_max_dirty_percent ,
2735so that we only start to delay after writing at full speed
2736has failed to keep up with the incoming write rate.
2737The scale of the curve is defined by
2738.Sy zfs_delay_scale .
2739Roughly speaking, this variable determines the amount of delay at the midpoint
2740of the curve.
2741.Bd -literal
2742delay
2743 10ms +-------------------------------------------------------------*+
2744      |                                                             *|
2745  9ms +                                                             *+
2746      |                                                             *|
2747  8ms +                                                             *+
2748      |                                                            * |
2749  7ms +                                                            * +
2750      |                                                            * |
2751  6ms +                                                            * +
2752      |                                                            * |
2753  5ms +                                                           *  +
2754      |                                                           *  |
2755  4ms +                                                           *  +
2756      |                                                           *  |
2757  3ms +                                                          *   +
2758      |                                                          *   |
2759  2ms +                                              (midpoint) *    +
2760      |                                                  |    **     |
2761  1ms +                                                  v ***       +
2762      |             \fBzfs_delay_scale\fP ---------->     ********         |
2763    0 +-------------------------------------*********----------------+
2764      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
2765.Ed
2766.Pp
Note that, since the delay is added to the outstanding time remaining on the
most recent transaction, it is effectively the inverse of IOPS.
2769Here, the midpoint of
2770.Em 500 us
2771translates to
2772.Em 2000 IOPS .
2773The shape of the curve
2774was chosen such that small changes in the amount of accumulated dirty data
2775in the first three quarters of the curve yield relatively small differences
2776in the amount of delay.
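.Pp
The conversion is simple arithmetic (shown here as an illustrative one-liner):
.Bd -literal
# 500 us of added delay per transaction limits a single waiter to
# 1 s / 500 us = 2000 transactions per second.
print(1_000_000 / 500)    # -> 2000.0
.Ed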
2777.Pp
2778The effects can be easier to understand when the amount of delay is
2779represented on a logarithmic scale:
2780.Bd -literal
2781delay
2782100ms +-------------------------------------------------------------++
2783      +                                                              +
2784      |                                                              |
2785      +                                                             *+
2786 10ms +                                                             *+
2787      +                                                           ** +
2788      |                                              (midpoint)  **  |
2789      +                                                  |     **    +
2790  1ms +                                                  v ****      +
2791      +             \fBzfs_delay_scale\fP ---------->        *****         +
2792      |                                             ****             |
2793      +                                          ****                +
2794100us +                                        **                    +
2795      +                                       *                      +
2796      |                                      *                       |
2797      +                                     *                        +
2798 10us +                                     *                        +
2799      +                                                              +
2800      |                                                              |
2801      +                                                              +
2802      +--------------------------------------------------------------+
2803      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
2804.Ed
2805.Pp
2806Note here that only as the amount of dirty data approaches its limit does
2807the delay start to increase rapidly.
2808The goal of a properly tuned system should be to keep the amount of dirty data
2809out of that range by first ensuring that the appropriate limits are set
2810for the I/O scheduler to reach optimal throughput on the back-end storage,
2811and then by changing the value of
2812.Sy zfs_delay_scale
2813to increase the steepness of the curve.
2814