1.\"
2.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
4.\" Copyright (c) 2019 Datto Inc.
5.\" The contents of this file are subject to the terms of the Common Development
6.\" and Distribution License (the "License").  You may not use this file except
7.\" in compliance with the License. You can obtain a copy of the license at
8.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
9.\"
10.\" See the License for the specific language governing permissions and
11.\" limitations under the License. When distributing Covered Code, include this
12.\" CDDL HEADER in each file and include the License file at
13.\" usr/src/OPENSOLARIS.LICENSE.  If applicable, add the following below this
14.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
15.\" own identifying information:
16.\" Portions Copyright [yyyy] [name of copyright owner]
17.\"
18.Dd June 1, 2021
19.Dt ZFS 4
20.Os
21.
22.Sh NAME
23.Nm zfs
24.Nd tuning of the ZFS kernel module
25.
26.Sh DESCRIPTION
27The ZFS module supports these parameters:
28.Bl -tag -width Ds
29.It Sy dbuf_cache_max_bytes Ns = Ns Sy ULONG_MAX Ns B Pq ulong
30Maximum size in bytes of the dbuf cache.
31The target size is determined by the MIN versus
32.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
33of the target ARC size.
34The behavior of the dbuf cache and its associated settings
35can be observed via the
36.Pa /proc/spl/kstat/zfs/dbufstats
37kstat.
38.
39.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy ULONG_MAX Ns B Pq ulong
40Maximum size in bytes of the metadata dbuf cache.
41The target size is determined by the MIN versus
42.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
43of the target ARC size.
44The behavior of the metadata dbuf cache and its associated settings
45can be observed via the
46.Pa /proc/spl/kstat/zfs/dbufstats
47kstat.
48.
49.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint
50The percentage over
51.Sy dbuf_cache_max_bytes
52when dbufs must be evicted directly.
53.
54.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint
55The percentage below
56.Sy dbuf_cache_max_bytes
57when the evict thread stops evicting dbufs.
58.
59.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq int
60Set the size of the dbuf cache
61.Pq Sy dbuf_cache_max_bytes
62to a log2 fraction of the target ARC size.
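.Pp
For example, with the default
.Sy dbuf_cache_shift Ns = Ns Sy 5
and an assumed target ARC size of
.Em 4 GiB ,
the dbuf cache target works out to
.Em 4 GiB/2^5 No = Em 128 MiB .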
.
.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq int
Set the size of the dbuf metadata cache
.Pq Sy dbuf_metadata_cache_max_bytes
to a log2 fraction of the target ARC size.
.
.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq int
dnode slots allocated in a single operation as a power of 2.
The default value minimizes lock contention for the bulk operation performed.
.
.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq int
Limit the amount that can be prefetched with one call to this size in bytes.
This helps to limit the amount of memory that can be used by prefetching.
.
.It Sy ignore_hole_birth Pq int
Alias for
.Sy send_holes_without_birth_time .
.
.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
Turbo L2ARC warm-up.
When the L2ARC is cold the fill interval will be set as fast as possible.
.
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq ulong
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and only applicable in related situations.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq ulong
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 2 Pq ulong
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq ulong
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is off,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature, some MRU buffers will still be present
in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq int
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq ulong
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq ulong
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq ulong
Max write bytes per interval.
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq ulong
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq ulong
Metaslab granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each disk
before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group biasing based on their vdevs' over- or under-utilization
relative to the pool.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq ulong
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq ulong
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq int
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq int
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq int
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq int
Default limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy ASHIFT_MAX Po 16 Pc Pq ulong
Maximum ashift used when optimizing for logical \[->] physical sector size on new
top-level vdevs.
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq ulong
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq int
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq int
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq int
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq int
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy reference_history Ns = Ns Sy 3 Pq int
Maximum reference holders being tracked when
.Sy reference_tracking_enable
is active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq int
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
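.Pp
The default corresponds to the worst case of a single-sector maximum-parity
RAID-Z write
.Pq roughly a 4\(mu inflation ,
up to
.Em 3
DVAs per block pointer, and a further factor of
.Em 2
for ditto blocks written by deduplication:
.Em 4 No \(mu Em 3 No \(mu Em 2 No = Em 24 .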
417.
418.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
419Whether to print the vdev tree in the debugging message buffer during pool import.
420.
421.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
422Whether to traverse data blocks during an "extreme rewind"
423.Pq Fl X
424import.
425.Pp
426An extreme rewind import normally performs a full traversal of all
427blocks in the pool for verification.
428If this parameter is unset, the traversal skips non-metadata blocks.
429It can be toggled once the
430import has started to stop or start the traversal of non-metadata blocks.
431.
432.It Sy spa_load_verify_metadata  Ns = Ns Sy 1 Ns | Ns 0 Pq int
433Whether to traverse blocks during an "extreme rewind"
434.Pq Fl X
435pool import.
436.Pp
437An extreme rewind import normally performs a full traversal of all
438blocks in the pool for verification.
439If this parameter is unset, the traversal is not performed.
440It can be toggled once the import has started to stop or start the traversal.
441.
442.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq int
443Sets the maximum number of bytes to consume during pool import to the log2
444fraction of the target ARC size.
445.
446.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
447Normally, we don't allow the last
448.Sy 3.2% Pq Sy 1/2^spa_slop_shift
449of space in the pool to be consumed.
450This ensures that we don't run the pool completely out of space,
451due to unaccounted changes (e.g. to the MOS).
452It also limits the worst-case time to allocate space.
453If we have less than this amount of free space,
454most ZPL operations (e.g. write, create) will return
455.Sy ENOSPC .
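.Pp
For example, with the default
.Sy spa_slop_shift Ns = Ns Sy 5 ,
a 1 TiB pool reserves approximately
.Em 1 TiB/2^5 No = Em 32 GiB
.Pq Em 3.125% , No rounded above to Em 3.2%
as slop space.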
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq int
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq ulong
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq ulong
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zfetch_array_rd_sz Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq ulong
If prefetching is enabled, disable prefetching for reads larger than this size.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
since the last time have not completed in time to satisfy the demand request,
i.e. the prefetch depth did not cover the read latency or the pool got saturated.
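.Pp
For example, for a stream of
.Em 128 KiB
demand reads, the distance doubles on each hit until it reaches this value:
.Bd -literal -compact -offset 4n
128 KiB -> 256 KiB -> 512 KiB -> 1 MiB -> 2 MiB -> 4 MiB
.Ed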
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Controls whether the ARC may use scatter/gather lists;
if disabled, all allocations are forced to be linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq ulong
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that a percentage of the ARC meta buffers,
based on
.Sy zfs_arc_dnode_limit_percent ,
may be used for dnodes.
.Pp
Also see
.Sy zfs_arc_meta_prune
which serves a similar purpose but is used
when the amount of metadata in the ARC exceeds
.Sy zfs_arc_meta_limit
rather than in response to overall demand for non-metadata.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq ulong
Percentage that can be consumed by dnodes of ARC meta buffers.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq ulong
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq int
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq int
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
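.Pp
Expressed as a formula, for a requested allocation of
.Em N
bytes, the wait target is
.Em N No \(mu Sy zfs_arc_eviction_pct Ns / Ns Em 100
bytes evicted.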
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq int
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq int
If set to a nonzero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq ulong
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
Under Linux, half of system memory will be used as the limit.
Under
.Fx ,
the larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
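.Pp
For example, to lower the limit to 4 GiB at runtime
(an illustrative value, not a recommendation):
.Bd -literal -compact -offset 4n
# Linux
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
# FreeBSD (parameters are exposed under the vfs.zfs sysctl tree)
sysctl vfs.zfs.arc_max=4294967296
.Ed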
.
.It Sy zfs_arc_meta_adjust_restarts Ns = Ns Sy 4096 Pq ulong
The number of restart passes to make while scanning the ARC,
attempting to free buffers in order to stay below the
.Sy zfs_arc_meta_limit .
This value should not need to be tuned but is available to facilitate
performance analysis.
.
.It Sy zfs_arc_meta_limit Ns = Ns Sy 0 Ns B Pq ulong
The maximum allowed size in bytes that metadata buffers are allowed to
consume in the ARC.
When this limit is reached, metadata buffers will be reclaimed,
even if the overall
.Sy arc_c_max
has not been reached.
It defaults to
.Sy 0 ,
which indicates that a percentage based on
.Sy zfs_arc_meta_limit_percent
of the ARC may be used for metadata.
.Pp
This value may be changed dynamically, except that it must be set to
an explicit value
.Pq it cannot be set back to Sy 0 .
.
.It Sy zfs_arc_meta_limit_percent Ns = Ns Sy 75 Ns % Pq ulong
Percentage of ARC buffers that can be used for metadata.
.Pp
See also
.Sy zfs_arc_meta_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_meta_min Ns = Ns Sy 0 Ns B Pq ulong
The minimum allowed size in bytes that metadata buffers may consume in
the ARC.
.
.It Sy zfs_arc_meta_prune Ns = Ns Sy 10000 Pq int
The number of dentries and inodes to be scanned looking for entries
which can be dropped.
This may be required when the ARC reaches the
.Sy zfs_arc_meta_limit
because dentries and inodes can pin buffers in the ARC.
Increasing this value will cause the dentry and inode caches
to be pruned more aggressively.
Setting this value to
.Sy 0
will disable pruning the inode and dentry caches.
.
.It Sy zfs_arc_meta_strategy Ns = Ns Sy 1 Ns | Ns 0 Pq int
Define the strategy for ARC metadata buffer eviction (meta reclaim strategy):
.Bl -tag -compact -offset 4n -width "0 (META_ONLY)"
.It Sy 0 Pq META_ONLY
evict only the ARC metadata buffers
.It Sy 1 Pq BALANCED
additional data buffers may be evicted if required
to evict the required number of metadata buffers.
.El
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq ulong
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq int
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq int
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to number of CPUs,
but that was not proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq ulong
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq int
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts the ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
The started reclamation process continues until the ARC size returns
below the target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
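.Pp
With the default shift, the thresholds work out to:
.Bd -literal -compact -offset 4n
reclaim starts: (arc_c >> 8) / 2   = arc_c/512   ~ 0.2% of arc_c
block allocs:   (arc_c >> 8) * 1.5 = arc_c*3/512 ~ 0.6% of arc_c
.Ed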
.
.It Sy zfs_arc_p_min_shift Ns = Ns Sy 0 Pq int
If nonzero, this will update
.Sy arc_p_min_shift Pq default Sy 4
with the new value.
.Sy arc_p_min_shift No is used as a shift of Sy arc_c
when calculating the minimum
.Sy arc_p No size.
.
.It Sy zfs_arc_p_dampener_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable
.Sy arc_p
adapt dampener, which reduces the maximum single adjustment to
.Sy arc_p .
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq int
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_FILE_PAGES ) ,
where that percent may exceed
.Sy 100 .
This only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
.Pp
The default limit of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit;
it only applies on Linux.
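.Pp
The quoted figure follows directly from the page size:
.Bd -literal -compact -offset 4n
10000 pages * 4 KiB            = 40 MiB per shrinker call
40 MiB * 4 (repeated requests) ~ 160 MiB per allocation attempt
.Ed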
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq ulong
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the larger of
.Sy 512 KiB No and Sy all_system_memory/64 .
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 5 Ns % Pq int
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq int
Minimum percent of obsolete bytes in vdev mapping required to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq ulong
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq ulong
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq int
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq int
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq ulong
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq ulong
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq int
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
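.Pp
For example, for a pool expected to sustain about 10`000 operations per second,
the smoothest delay curve would use
.Sy zfs_delay_scale Ns = Ns Em 10^9/10^4 No = Em 100`000 .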
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay and deadman zevents (which report slow I/O operations)
to this many per second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq ulong
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq ulong
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq ulong
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active, which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more flushing occurs, destroying log blocks
more quickly as they become obsolete, which leaves fewer blocks
to be read during import time after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq ulong
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation that we are flushing all our metaslabs every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq ulong
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq ulong
Tunable limiting the maximum time in TXGs any metaslab may remain unflushed.
It effectively limits the maximum number of unflushed per-TXG spacemap logs
that need to be read after an unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than this many blocks
will be deleted asynchronously, while smaller files are deleted synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is available.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
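.Pp
For example, on a system with
.Em 16 GiB
of physical memory the default works out to
.Em 1.6 GiB ,
comfortably below the default
.Sy zfs_dirty_data_max_max
of
.Em 4 GiB
.Pq Em physical_ram/4 .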
.
.It Sy zfs_dirty_data_max_max Ns = Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
This parameter takes precedence over
.Sy zfs_dirty_data_max_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/4 .
.
.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed as a percentage of physical RAM.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
The parameter
.Sy zfs_dirty_data_max_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq int
Determines the dirty space limit, expressed as a percentage of all memory.
Once this limit is exceeded, new writes are halted until space frees up.
The parameter
.Sy zfs_dirty_data_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Subject to
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq int
Start syncing out a transaction group if there's at least this much dirty data
.Pq as a percentage of Sy zfs_dirty_data_max .
This should be less than
.Sy zfs_vdev_async_write_active_min_dirty_percent .
.
.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
Write operations are throttled when approaching the limit until log data is
cleared out after transaction group sync.
Because of some overhead, it should be set to at least twice the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput.
It also should be smaller than the size of the slog device if slog is present.
.Pp
Defaults to
.Sy zfs_dirty_data_max*2 .
.
.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
preallocated for a file in order to guarantee that later writes will not
run out of space.
Instead,
.Xr fallocate 2
space preallocation only checks that sufficient space is currently available
in the pool or the user's project quota allocation,
and then creates a sparse file of the requested size.
The requested space is multiplied by
.Sy zfs_fallocate_reserve_percent
to allow additional space for indirect blocks and other internal metadata.
Setting this to
.Sy 0
disables support for
.Xr fallocate 2
and causes it to return
.Sy EOPNOTSUPP .
.
.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
Select a fletcher 4 implementation.
.Pp
Supported selectors are:
.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
.No and Sy aarch64_neon .
All except
.Sy fastest No and Sy scalar
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of fletcher 4 are available, the
.Sy fastest
will be chosen using a micro benchmark.
Selecting
.Sy scalar
results in the original CPU-based calculation being used.
Selecting any option other than
.Sy fastest No or Sy scalar
results in vector instructions
from the respective CPU instruction set being used.
.
.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable the processing of the free_bpobj object.
.
.It Sy zfs_async_block_max_blocks Ns = Ns Sy ULONG_MAX Po unlimited Pc Pq ulong
Maximum number of blocks freed in a single TXG.
.
.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq ulong
Maximum number of dedup blocks freed in a single TXG.
.
.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq int
Maximum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq int
Minimum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq int
When the pool has more than this much dirty data, use
.Sy zfs_vdev_async_write_max_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq int
When the pool has less than this much dirty data, use
.Sy zfs_vdev_async_write_min_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 30 Pq int
Maximum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq int
Minimum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.Pp
Lower values are associated with better latency on rotational media but poorer
resilver performance.
The default value of
.Sy 2
was chosen as a compromise.
A value of
.Sy 3
has been shown to improve resilver performance further at a cost of
further increasing latency.
.
.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq int
Maximum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq int
Minimum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq int
The maximum number of I/O operations active to each device.
Ideally, this will be at least the sum of each queue's
.Sy max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq int
Maximum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq int
Minimum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq int
Maximum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq int
Minimum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq int
Maximum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq int
Minimum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq int
Maximum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq int
Minimum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq int
Maximum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq int
Minimum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq int
Maximum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq int
Minimum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq int
For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
the number of concurrently-active I/O operations is limited to
.Sy zfs_*_min_active ,
unless the vdev is "idle".
When there are no interactive I/O operations active (synchronous or otherwise),
and
.Sy zfs_vdev_nia_delay
operations have completed since the last interactive operation,
then the vdev is considered to be "idle",
and the number of concurrently-active non-interactive operations is increased to
.Sy zfs_*_max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq int
Some HDDs tend to prioritize sequential I/O so strongly that concurrent
random I/O latency reaches several seconds.
On some HDDs this happens even if sequential I/O operations
are submitted one at a time, and so setting
.Sy zfs_*_max_active Ns = Sy 1
does not help.
To prevent non-interactive I/O, like scrub,
from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit No operations can be sent
while there are outstanding incomplete interactive operations.
This enforced wait ensures the HDD services the interactive I/O
within a reasonable amount of time.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq int
Maximum number of queued allocations per top-level vdev expressed as
a percentage of
.Sy zfs_vdev_async_write_max_active ,
which allows the system to detect devices that are more capable
of handling allocations and to allocate more blocks to those devices.
This allows for dynamic allocation distribution when devices are imbalanced,
as fuller devices will tend to be slower than empty devices.
.Pp
Also see
.Sy zio_dva_throttle_enabled .
.
.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
Time before expiring
.Pa .zfs/snapshot .
.
.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow the creation, removal, or renaming of entries in the
.Sy .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled, this functionality works both locally and over NFS exports
which have the
.Em no_root_squash
option set.
.
.It Sy zfs_flags Ns = Ns Sy 0 Pq int
Set additional debugging flags.
The following flags may be bitwise-ored together:
.TS
box;
lbz r l l .
	Value	Symbolic Name	Description
_
	1	ZFS_DEBUG_DPRINTF	Enable dprintf entries in the debug log.
*	2	ZFS_DEBUG_DBUF_VERIFY	Enable extra dbuf verifications.
*	4	ZFS_DEBUG_DNODE_VERIFY	Enable extra dnode verifications.
	8	ZFS_DEBUG_SNAPNAMES	Enable snapshot name verification.
	16	ZFS_DEBUG_MODIFY	Check for illegally modified ARC buffers.
	64	ZFS_DEBUG_ZIO_FREE	Enable verification of block frees.
	128	ZFS_DEBUG_HISTOGRAM_VERIFY	Enable extra spacemap histogram verifications.
	256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
	512	ZFS_DEBUG_SET_ERROR	Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
	1024	ZFS_DEBUG_INDIRECT_REMAP	Verify split blocks created by device removal.
	2048	ZFS_DEBUG_TRIM	Verify TRIM ranges are always within the allocatable range tree.
	4096	ZFS_DEBUG_LOG_SPACEMAP	Verify that the log summary is consistent with the spacemap log
			       and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
.TE
.Sy \& * No Requires debug build.
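.Pp
For example, on Linux, to enable both extra dbuf verifications and
.Sy SET_ERROR
logging on a debug build, set the bitwise OR of the two flags
.Pq Em 2 No + Em 512 No = Em 514 :
.Bd -literal -compact -offset 4n
echo 514 > /sys/module/zfs/parameters/zfs_flags
.Ed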
1353.
1354.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
1355If destroy encounters an
1356.Sy EIO
1357while reading metadata (e.g. indirect blocks),
1358space referenced by the missing metadata can not be freed.
1359Normally this causes the background destroy to become "stalled",
1360as it is unable to make forward progress.
1361While in this stalled state, all remaining space to free
1362from the error-encountering filesystem is "temporarily leaked".
1363Set this flag to cause it to ignore the
1364.Sy EIO ,
1365permanently leak the space from indirect blocks that can not be read,
1366and continue to free everything else that it can.
1367.Pp
1368The default "stalling" behavior is useful if the storage partially
1369fails (i.e. some but not all I/O operations fail), and then later recovers.
1370In this case, we will be able to continue pool operations while it is
1371partially failed, and when it recovers, we can continue to free the
1372space, with no leaks.
1373Note, however, that this case is actually fairly rare.
1374.Pp
1375Typically pools either
1376.Bl -enum -compact -offset 4n -width "1."
1377.It
1378fail completely (but perhaps temporarily,
1379e.g. due to a top-level vdev going offline), or
1380.It
1381have localized, permanent errors (e.g. disk returns the wrong data
1382due to bit flip or firmware bug).
1383.El
1384In the former case, this setting does not matter because the
1385pool will be suspended and the sync thread will not be able to make
1386forward progress regardless.
1387In the latter, because the error is permanent, the best we can do
1388is leak the minimum amount of space,
1389which is what setting this flag will do.
1390It is therefore reasonable for this flag to normally be set,
1391but we chose the more conservative approach of not setting it,
1392so that there is no possibility of
1393leaking space in the "partial temporary" failure case.
1394.
1395.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq int
1396During a
1397.Nm zfs Cm destroy
1398operation using the
1399.Sy async_destroy
1400feature,
1401a minimum of this much time will be spent working on freeing blocks per TXG.
1402.
1403.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq int
1404Similar to
1405.Sy zfs_free_min_time_ms ,
1406but for cleanup of old indirection records for removed vdevs.
1407.
1408.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq long
1409Largest data block to write to the ZIL.
1410Larger blocks will be treated as if the dataset being written to had the
1411.Sy logbias Ns = Ns Sy throughput
1412property set.
1413.
1414.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq ulong
1415Pattern written to vdev free space by
1416.Xr zpool-initialize 8 .
1417.
1418.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq ulong
1419Size of writes used by
1420.Xr zpool-initialize 8 .
1421This option is used by the test suite.
1422.
1423.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq ulong
1424The threshold size (in block pointers) at which we create a new sub-livelist.
Larger sublists are more costly from a memory perspective, but the fewer
sublists there are, the lower the cost of insertion.
1427.
1428.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
1429If the amount of shared space between a snapshot and its clone drops below
1430this threshold, the clone turns off the livelist and reverts to the old
1431deletion method.
This is in place because livelists no longer give us a benefit
1433once a clone has been overwritten enough.
1434.
1435.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
1436Incremented each time an extra ALLOC blkptr is added to a livelist entry while
1437it is being condensed.
1438This option is used by the test suite to track race conditions.
1439.
1440.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
1441Incremented each time livelist condensing is canceled while in
1442.Fn spa_livelist_condense_sync .
1443This option is used by the test suite to track race conditions.
1444.
1445.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1446When set, the livelist condense process pauses indefinitely before
1447executing the synctask \(em
1448.Fn spa_livelist_condense_sync .
1449This option is used by the test suite to trigger race conditions.
1450.
1451.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
1452Incremented each time livelist condensing is canceled while in
1453.Fn spa_livelist_condense_cb .
1454This option is used by the test suite to track race conditions.
1455.
1456.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1457When set, the livelist condense process pauses indefinitely before
1458executing the open context condensing work in
1459.Fn spa_livelist_condense_cb .
1460This option is used by the test suite to trigger race conditions.
1461.
1462.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq ulong
1463The maximum execution time limit that can be set for a ZFS channel program,
1464specified as a number of Lua instructions.
1465.
1466.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq ulong
1467The maximum memory limit that can be set for a ZFS channel program, specified
1468in bytes.
1469.
1470.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
1471The maximum depth of nested datasets.
1472This value can be tuned temporarily to
1473fix existing datasets that exceed the predefined limit.
1474.
1475.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq ulong
1476The number of past TXGs that the flushing algorithm of the log spacemap
1477feature uses to estimate incoming log blocks.
1478.
1479.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq ulong
1480Maximum number of rows allowed in the summary of the spacemap log.
1481.
1482.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq int
1483We currently support block sizes from
1484.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc .
1485The benefits of larger blocks, and thus larger I/O,
1486need to be weighed against the cost of COWing a giant block to modify one byte.
1487Additionally, very large blocks can have an impact on I/O latency,
1488and also potentially on the memory allocator.
1489Therefore, we formerly forbade creating blocks larger than 1M.
1490Larger blocks could be created by changing it,
1491and pools with larger blocks can always be imported and used,
1492regardless of this setting.
1493.
1494.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int
1495Allow datasets received with redacted send/receive to be mounted.
1496Normally disabled because these datasets may be missing key data.
1497.
1498.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq ulong
1499Minimum number of metaslabs to flush per dirty TXG.
1500.
1501.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq int
1502Allow metaslabs to keep their active state as long as their fragmentation
1503percentage is no more than this value.
1504An active metaslab that exceeds this threshold
will no longer keep its active status, allowing better metaslabs to be selected.
1506.
1507.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq int
1508Metaslab groups are considered eligible for allocations if their
1509fragmentation metric (measured as a percentage) is less than or equal to
1510this value.
1511If a metaslab group exceeds this threshold then it will be
1512skipped unless all metaslab groups within the metaslab class have also
1513crossed this threshold.
1514.
1515.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq int
1516Defines a threshold at which metaslab groups should be eligible for allocations.
1517The value is expressed as a percentage of free space
1518beyond which a metaslab group is always eligible for allocations.
1519If a metaslab group's free space is less than or equal to the
1520threshold, the allocator will avoid allocating to that group
1521unless all groups in the pool have reached the threshold.
1522Once all groups have reached the threshold, all groups are allowed to accept
1523allocations.
1524The default value of
1525.Sy 0
1526disables the feature and causes all metaslab groups to be eligible for allocations.
1527.Pp
1528This parameter allows one to deal with pools having heavily imbalanced
1529vdevs such as would be the case when a new vdev has been added.
1530Setting the threshold to a non-zero percentage will stop allocations
1531from being made to vdevs that aren't filled to the specified percentage
and allow less-filled vdevs to acquire more allocations than they
1533otherwise would under the old
1534.Sy zfs_mg_alloc_failures
1535facility.
1536.
1537.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1538If enabled, ZFS will place DDT data into the special allocation class.
1539.
1540.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1541If enabled, ZFS will place user data indirect blocks
1542into the special allocation class.
1543.
1544.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq int
1545Historical statistics for this many latest multihost updates will be available in
1546.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
1547.
1548.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq ulong
1549Used to control the frequency of multihost writes which are performed when the
1550.Sy multihost
1551pool property is on.
1552This is one of the factors used to determine the
1553length of the activity check during import.
1554.Pp
1555The multihost write period is
1556.Sy zfs_multihost_interval No / Sy leaf-vdevs .
1557On average a multihost write will be issued for each leaf vdev
1558every
1559.Sy zfs_multihost_interval
1560milliseconds.
1561In practice, the observed period can vary with the I/O load
1562and this observed value is the delay which is stored in the uberblock.
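.Pp
For example, with the default interval of 1000 ms and a pool of 8 leaf vdevs,
a multihost write is issued to some leaf vdev every 125 ms on average,
and each individual leaf vdev is written approximately once per second.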
1563.
1564.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint
1565Used to control the duration of the activity test on import.
1566Smaller values of
1567.Sy zfs_multihost_import_intervals
1568will reduce the import time but increase
1569the risk of failing to detect an active pool.
1570The total activity check time is never allowed to drop below one second.
1571.Pp
1572On import the activity check waits a minimum amount of time determined by
1573.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals ,
1574or the same product computed on the host which last had the pool imported,
1575whichever is greater.
1576The activity check time may be further extended if the value of MMP
1577delay found in the best uberblock indicates actual multihost updates happened
1578at longer intervals than
1579.Sy zfs_multihost_interval .
1580A minimum of
1581.Em 100 ms
1582is enforced.
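.Pp
With the defaults, this amounts to 20 \(mu 1000 ms,
i.e. an activity check of at least 20 seconds.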
1583.Pp
1584.Sy 0 No is equivalent to Sy 1 .
1585.
1586.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
1587Controls the behavior of the pool when multihost write failures or delays are
1588detected.
1589.Pp
1590When
1591.Sy 0 ,
1592multihost write failures or delays are ignored.
The failures will still be reported to the ZED which, depending on
its configuration, may take action such as suspending the pool or offlining a
device.
1596.Pp
1597Otherwise, the pool will be suspended if
1598.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval
1599milliseconds pass without a successful MMP write.
1600This guarantees the activity test will see MMP writes if the pool is imported.
1601.Sy 1 No is equivalent to Sy 2 ;
1602this is necessary to prevent the pool from being suspended
1603due to normal, small I/O latency variations.
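.Pp
With the defaults, the pool is suspended if 10 \(mu 1000 ms = 10 s pass
without a successful MMP write.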
1604.
1605.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
1606Set to disable scrub I/O.
1607This results in scrubs not actually scrubbing data and
1608simply doing a metadata crawl of the pool instead.
1609.
1610.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
1611Set to disable block prefetching for scrubs.
1612.
1613.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
1614Disable cache flush operations on disks when writing.
1615Setting this will cause pool corruption on power loss
1616if a volatile out-of-order write cache is enabled.
1617.
1618.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1619Allow no-operation writes.
1620The occurrence of nopwrites will further depend on other pool properties
1621.Pq i.a. the checksumming and compression algorithms .
1622.
1623.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int
1624Enable forcing TXG sync to find holes.
When enabled, forces ZFS to sync data when
.Sy SEEK_HOLE No or Sy SEEK_DATA
flags are used, allowing holes in a file to be accurately reported.
When disabled, holes will not be reported in recently dirtied files.
1629.
1630.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int
1631The number of bytes which should be prefetched during a pool traversal, like
1632.Nm zfs Cm send
1633or other data crawling operations.
1634.
1635.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq int
The number of blocks pointed to by an indirect (non-L0) block which should be
1637prefetched during a pool traversal, like
1638.Nm zfs Cm send
1639or other data crawling operations.
1640.
1641.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 5 Ns % Pq ulong
1642Control percentage of dirtied indirect blocks from frees allowed into one TXG.
1643After this threshold is crossed, additional frees will wait until the next TXG.
1644.Sy 0 No disables this throttle.
1645.
1646.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1647Disable predictive prefetch.
1648Note that it leaves "prescient" prefetch
1649.Pq for, e.g., Nm zfs Cm send
1650intact.
1651Unlike predictive prefetch, prescient prefetch never issues I/O
1652that ends up not being needed, so it can't hurt performance.
1653.
1654.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1655Disable QAT hardware acceleration for SHA256 checksums.
1656May be unset after the ZFS modules have been loaded to initialize the QAT
1657hardware as long as support is compiled in and the QAT driver is present.
1658.
1659.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1660Disable QAT hardware acceleration for gzip compression.
1661May be unset after the ZFS modules have been loaded to initialize the QAT
1662hardware as long as support is compiled in and the QAT driver is present.
1663.
1664.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1665Disable QAT hardware acceleration for AES-GCM encryption.
1666May be unset after the ZFS modules have been loaded to initialize the QAT
1667hardware as long as support is compiled in and the QAT driver is present.
1668.
1669.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq long
1670Bytes to read per chunk.
1671.
1672.It Sy zfs_read_history Ns = Ns Sy 0 Pq int
1673Historical statistics for this many latest reads will be available in
1674.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads .
1675.
1676.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
Include cache hits in read history.
1678.
1679.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq ulong
1680Maximum read segment size to issue when sequentially resilvering a
1681top-level vdev.
1682.
1683.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1684Automatically start a pool scrub when the last active sequential resilver
1685completes in order to verify the checksums of all blocks which have been
1686resilvered.
1687This is enabled by default and strongly recommended.
1688.
1689.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 33554432 Ns B Po 32 MiB Pc Pq ulong
1690Maximum amount of I/O that can be concurrently issued for a sequential
1691resilver per leaf device, given in bytes.
1692.
1693.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int
1694If an indirect split block contains more than this many possible unique
1695combinations when being reconstructed, consider it too computationally
1696expensive to check them all.
1697Instead, try at most this many randomly selected
1698combinations each time the block is accessed.
1699This allows all segment copies to participate fairly
1700in the reconstruction when all combinations
1701cannot be checked and prevents repeated use of one bad copy.
1702.
1703.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int
1704Set to attempt to recover from fatal errors.
1705This should only be used as a last resort,
1706as it typically results in leaked space, or worse.
1707.
1708.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1709Ignore hard I/O errors during device removal.
1710When set, if a device encounters a hard I/O error during the removal process
1711the removal will not be cancelled.
1712This can result in a normally recoverable block becoming permanently damaged
1713and is hence not recommended.
1714This should only be used as a last resort when the
1715pool cannot be returned to a healthy state prior to removing the device.
1716.
1717.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
1718This is used by the test suite so that it can ensure that certain actions
1719happen while in the middle of a removal.
1720.
1721.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
1722The largest contiguous segment that we will attempt to allocate when removing
1723a device.
1724If there is a performance problem with attempting to allocate large blocks,
1725consider decreasing this.
1726The default value is also the maximum.
1727.
1728.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int
1729Ignore the
1730.Sy resilver_defer
1731feature, causing an operation that would start a resilver to
1732immediately restart the one in progress.
1733.
1734.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq int
1735Resilvers are processed by the sync thread.
1736While resilvering, it will spend at least this much time
1737working on a resilver between TXG flushes.
1738.
1739.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1740If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
1741even if there were unrepairable errors.
1742Intended to be used during pool repair or recovery to
1743stop resilvering when the pool is next imported.
1744.
1745.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq int
1746Scrubs are processed by the sync thread.
1747While scrubbing, it will spend at least this much time
1748working on a scrub between TXG flushes.
1749.
1750.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq int
1751To preserve progress across reboots, the sequential scan algorithm periodically
1752needs to stop metadata scanning and issue all the verification I/O to disk.
1753The frequency of this flushing is determined by this tunable.
1754.
1755.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq int
1756This tunable affects how scrub and resilver I/O segments are ordered.
1757A higher number indicates that we care more about how filled in a segment is,
1758while a lower number indicates we care more about the size of the extent without
1759considering the gaps within a segment.
1760This value is only tunable upon module insertion.
1761Changing the value afterwards will have no effect on scrub or resilver performance.
1762.
1763.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq int
1764Determines the order that data will be verified while scrubbing or resilvering:
1765.Bl -tag -compact -offset 4n -width "a"
1766.It Sy 1
1767Data will be verified as sequentially as possible, given the
1768amount of memory reserved for scrubbing
1769.Pq see Sy zfs_scan_mem_lim_fact .
1770This may improve scrub performance if the pool's data is very fragmented.
1771.It Sy 2
1772The largest mostly-contiguous chunk of found data will be verified first.
1773By deferring scrubbing of small segments, we may later find adjacent data
1774to coalesce and increase the segment size.
1775.It Sy 0
1776.No Use strategy Sy 1 No during normal verification
1777.No and strategy Sy 2 No while taking a checkpoint.
1778.El
1779.
1780.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
1781If unset, indicates that scrubs and resilvers will gather metadata in
1782memory before issuing sequential I/O.
1783Otherwise indicates that the legacy algorithm will be used,
1784where I/O is initiated as soon as it is discovered.
1785Unsetting will not affect scrubs or resilvers that are already in progress.
1786.
1787.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
1788Sets the largest gap in bytes between scrub/resilver I/O operations
1789that will still be considered sequential for sorting purposes.
1790Changing this value will not
1791affect scrubs or resilvers that are already in progress.
1792.
1793.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq int
1794Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
1795This tunable determines the hard limit for I/O sorting memory usage.
1796When the hard limit is reached we stop scanning metadata and start issuing
1797data verification I/O.
1798This is done until we get below the soft limit.
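.Pp
For example, the default of 20 caps sorting memory at 1/20th, i.e. 5%,
of physical RAM.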
1799.
1800.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq int
The fraction of the hard limit used to determine the soft limit for I/O sorting
1802by the sequential scan algorithm.
When we cross this limit from below, no action is taken.
When we cross this limit from above, it is because we are issuing verification I/O.
1805In this case (unless the metadata scan is done) we stop issuing verification I/O
1806and start scanning metadata again until we get to the hard limit.
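.Pp
With both this and
.Sy zfs_scan_mem_lim_fact
at their defaults of 20, the soft limit works out to 1/400th, i.e. 0.25%,
of physical RAM.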
1807.
1808.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
1809Enforce tight memory limits on pool scans when a sequential scan is in progress.
1810When disabled, the memory limit may be exceeded by fast disks.
1811.
1812.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
1813Freezes a scrub/resilver in progress without actually pausing it.
1814Intended for testing/debugging.
1815.
1816.It Sy zfs_scan_vdev_limit Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq int
Maximum amount of data that can be concurrently issued for scrubs and
resilvers per leaf device, given in bytes.
1819.
1820.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
1821Allow sending of corrupt data (ignore read/checksum errors when sending).
1822.
1823.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
1824Include unmodified spill blocks in the send stream.
1825Under certain circumstances, previous versions of ZFS could incorrectly
1826remove the spill block from an existing object.
1827Including unmodified copies of the spill blocks creates a backwards-compatible
1828stream which will recreate a spill block if it was incorrectly removed.
1829.
1830.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq int
1831The fill fraction of the
1832.Nm zfs Cm send
1833internal queues.
1834The fill fraction controls the timing with which internal threads are woken up.
1835.
1836.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
1837The maximum number of bytes allowed in
1838.Nm zfs Cm send Ns 's
1839internal queues.
1840.
1841.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq int
1842The fill fraction of the
1843.Nm zfs Cm send
1844prefetch queue.
1845The fill fraction controls the timing with which internal threads are woken up.
1846.
1847.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
1848The maximum number of bytes allowed that will be prefetched by
1849.Nm zfs Cm send .
1850This value must be at least twice the maximum block size in use.
1851.
1852.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq int
1853The fill fraction of the
1854.Nm zfs Cm receive
1855queue.
1856The fill fraction controls the timing with which internal threads are woken up.
1857.
1858.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
1859The maximum number of bytes allowed in the
1860.Nm zfs Cm receive
1861queue.
1862This value must be at least twice the maximum block size in use.
1863.
1864.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
1865The maximum amount of data, in bytes, that
1866.Nm zfs Cm receive
1867will write in one DMU transaction.
1868This is the uncompressed size, even when receiving a compressed send stream.
1869This setting will not reduce the write size below a single block.
1870Capped at a maximum of
1871.Sy 32 MiB .
1872.
1873.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
When this variable is set to non-zero, a corrective receive:
1875.Bl -enum -compact -offset 4n -width "1."
1876.It
1877Does not enforce the restriction of source & destination snapshot GUIDs
1878matching.
1879.It
If there is an error during healing, the healing receive is not
terminated; instead, it moves on to the next record.
1882.El
1883.
1884.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq ulong
1885Setting this variable overrides the default logic for estimating block
1886sizes when doing a
1887.Nm zfs Cm send .
1888The default heuristic is that the average block size
1889will be the current recordsize.
1890Override this value if most data in your dataset is not of that size
1891and you require accurate zfs send size estimates.
1892.
1893.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq int
1894Flushing of data to disk is done in passes.
1895Defer frees starting in this pass.
1896.
1897.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
1898Maximum memory used for prefetching a checkpoint's space map on each
1899vdev while discarding the checkpoint.
1900.
1901.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq int
1902Only allow small data blocks to be allocated on the special and dedup vdev
1903types when the available free space percentage on these vdevs exceeds this value.
1904This ensures reserved space is available for pool metadata as the
1905special vdevs approach capacity.
1906.
1907.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq int
1908Starting in this sync pass, disable compression (including of metadata).
1909With the default setting, in practice, we don't have this many sync passes,
1910so this has no effect.
1911.Pp
1912The original intent was that disabling compression would help the sync passes
1913to converge.
1914However, in practice, disabling compression increases
1915the average number of sync passes; because when we turn compression off,
1916many blocks' size will change, and thus we have to re-allocate
1917(not overwrite) them.
1918It also increases the number of
1919.Em 128 KiB
1920allocations (e.g. for indirect blocks and spacemaps)
1921because these will not be compressed.
1922The
1923.Em 128 KiB
1924allocations are especially detrimental to performance
1925on highly fragmented systems, which may have very few free segments of this size,
1926and may need to load new metaslabs to satisfy these allocations.
1927.
1928.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq int
1929Rewrite new block pointers starting in this pass.
1930.
1931.It Sy zfs_sync_taskq_batch_pct Ns = Ns Sy 75 Ns % Pq int
1932This controls the number of threads used by
1933.Sy dp_sync_taskq .
1934The default value of
1935.Sy 75%
1936will create a maximum of one thread per CPU.
1937.
1938.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Maximum size of a TRIM command.
1940Larger ranges will be split into chunks no larger than this value before issuing.
1941.
1942.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
1943Minimum size of TRIM commands.
1944TRIM ranges smaller than this will be skipped,
1945unless they're part of a larger range which was chunked.
1946This is done because it's common for these small TRIMs
1947to negatively impact overall performance.
1948.
1949.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1950Skip uninitialized metaslabs during the TRIM process.
1951This option is useful for pools constructed from large thinly-provisioned devices
1952where TRIM operations are slow.
1953As a pool ages, an increasing fraction of the pool's metaslabs
1954will be initialized, progressively degrading the usefulness of this option.
1955This setting is stored when starting a manual TRIM and will
1956persist for the duration of the requested TRIM.
1957.
1958.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
1959Maximum number of queued TRIMs outstanding per leaf vdev.
1960The number of concurrent TRIM commands issued to the device is controlled by
1961.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
1962.
1963.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
1964The number of transaction groups' worth of frees which should be aggregated
1965before TRIM operations are issued to the device.
1966This setting represents a trade-off between issuing larger,
1967more efficient TRIM operations and the delay
1968before the recently trimmed space is available for use by the device.
1969.Pp
1970Increasing this value will allow frees to be aggregated for a longer time.
This will result in larger TRIM operations and potentially increased memory usage.
1972Decreasing this value will have the opposite effect.
1973The default of
1974.Sy 32
1975was determined to be a reasonable compromise.
1976.
1977.It Sy zfs_txg_history Ns = Ns Sy 0 Pq int
1978Historical statistics for this many latest TXGs will be available in
1979.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
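.Pp
For example, to retain statistics for the last 100 TXGs of a pool named
.Ar tank
on Linux (a sketch):
.Bd -literal -compact -offset indent
echo 100 > /sys/module/zfs/parameters/zfs_txg_history
cat /proc/spl/kstat/zfs/tank/TXGs
.Ed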
1980.
1981.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq int
1982Flush dirty data to disk at least every this many seconds (maximum TXG duration).
1983.
1984.It Sy zfs_vdev_aggregate_trim Ns = Ns Sy 0 Ns | Ns 1 Pq int
1985Allow TRIM I/O operations to be aggregated.
This is normally not helpful because the extents to be trimmed
will already have been aggregated by the metaslab.
1988This option is provided for debugging and performance analysis.
1989.
1990.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
1991Max vdev I/O aggregation size.
1992.
1993.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
1994Max vdev I/O aggregation size for non-rotating media.
1995.
1996.It Sy zfs_vdev_cache_bshift Ns = Ns Sy 16 Po 64 KiB Pc Pq int
1997Shift size to inflate reads to.
1998.
1999.It Sy zfs_vdev_cache_max Ns = Ns Sy 16384 Ns B Po 16 KiB Pc Pq int
2000Inflate reads smaller than this value to meet the
2001.Sy zfs_vdev_cache_bshift
2002size
2003.Pq default Sy 64 KiB .
2004.
2005.It Sy zfs_vdev_cache_size Ns = Ns Sy 0 Pq int
2006Total size of the per-disk cache in bytes.
2007.Pp
2008Currently this feature is disabled, as it has been found to not be helpful
2009for performance and in some cases harmful.
2010.
2011.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation,
for the purpose of selecting the least busy mirror member,
when an I/O operation immediately follows its predecessor on rotational vdevs.
2016.
2017.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
2018A number by which the balancing algorithm increments the load calculation for
2019the purpose of selecting the least busy mirror member when an I/O operation
2020lacks locality as defined by
2021.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this window that do not immediately follow the previous
operation incur half of this increment.
2024.
2025.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
2026The maximum distance for the last queued I/O operation in which
2027the balancing algorithm considers an operation to have locality.
2028.No See Sx ZFS I/O SCHEDULER .
2029.
2030.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
2031A number by which the balancing algorithm increments the load calculation for
2032the purpose of selecting the least busy mirror member on non-rotational vdevs
2033when I/O operations do not immediately follow one another.
2034.
2035.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
2036A number by which the balancing algorithm increments the load calculation for
2037the purpose of selecting the least busy mirror member when an I/O operation lacks
locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this window that do not immediately follow the previous
operation incur half of this increment.
2042.
2043.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq int
2044Aggregate read I/O operations if the on-disk gap between them is within this
2045threshold.
2046.
2047.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq int
2048Aggregate write I/O operations if the on-disk gap between them is within this
2049threshold.
2050.
2051.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
2052Select the raidz parity implementation to use.
2053.Pp
2054Variants that don't depend on CPU-specific features
2055may be selected on module load, as they are supported on all systems.
2056The remaining options may only be set after the module is loaded,
2057as they are available only if the implementations are compiled in
2058and supported on the running system.
2059.Pp
2060Once the module is loaded,
2061.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
2062will show the available options,
2063with the currently selected one enclosed in square brackets.
2064.Pp
2065.TS
2066lb l l .
2067fastest	selected by built-in benchmark
2068original	original implementation
2069scalar	scalar implementation
2070sse2	SSE2 instruction set	64-bit x86
2071ssse3	SSSE3 instruction set	64-bit x86
2072avx2	AVX2 instruction set	64-bit x86
2073avx512f	AVX512F instruction set	64-bit x86
2074avx512bw	AVX512F & AVX512BW instruction sets	64-bit x86
2075aarch64_neon	NEON	Aarch64/64-bit ARMv8
2076aarch64_neonx2	NEON with more unrolling	Aarch64/64-bit ARMv8
2077powerpc_altivec	Altivec	PowerPC
2078.TE
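.Pp
For example, on Linux the selection can be inspected and changed at runtime
(a sketch; the
.Sy avx2
variant is accepted only if it is compiled in and supported by the running CPU):
.Bd -literal -compact -offset indent
# The current selection is shown in square brackets.
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.Ed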
2079.
2080.It Sy zfs_vdev_scheduler Pq charp
2081.Sy DEPRECATED .
Prints a warning to the kernel log for compatibility.
2083.
2084.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq int
2085Max event queue length.
2086Events in the queue can be viewed with
2087.Xr zpool-events 8 .
2088.
2089.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
2090Maximum recent zevent records to retain for duplicate checking.
2091Setting this to
2092.Sy 0
2093disables duplicate detection.
2094.
2095.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
2096Lifespan for a recent ereport that was retained for duplicate checking.
2097.
2098.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
2099The maximum number of taskq entries that are allowed to be cached.
When this limit is exceeded, transaction records (itxs)
will be cleaned synchronously.
2102.
2103.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
2104The number of taskq entries that are pre-populated when the taskq is first
2105created and are immediately available for use.
2106.
2107.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
2108This controls the number of threads used by
2109.Sy dp_zil_clean_taskq .
2110The default value of
2111.Sy 100%
will create a maximum of one thread per CPU.
2113.
2114.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
2115This sets the maximum block size used by the ZIL.
2116On very fragmented pools, lowering this
2117.Pq typically to Sy 36 KiB
2118can improve performance.
2119.
2120.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
2121Disable the cache flush commands that are normally sent to disk by
2122the ZIL after an LWB write has completed.
2123Setting this will cause ZIL corruption on power loss
2124if a volatile out-of-order write cache is enabled.
2125.
2126.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
2127Disable intent logging replay.
2128Can be disabled for recovery from corrupted ZIL.
2129.
2130.It Sy zil_slog_bulk Ns = Ns Sy 786432 Ns B Po 768 KiB Pc Pq ulong
2131Limit SLOG write size per commit executed with synchronous priority.
2132Any writes above that will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
2134.
2135.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int
2136Setting this tunable to zero disables ZIL logging of new
2137.Sy xattr Ns = Ns Sy sa
2138records if the
2139.Sy org.openzfs:zilsaxattr
2140feature is enabled on the pool.
2141This would only be necessary to work around bugs in the ZIL logging or replay
2142code for this record type.
2143The tunable has no effect if the feature is disabled.
2144.
2145.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq int
2146Usually, one metaslab from each normal-class vdev is dedicated for use by
2147the ZIL to log synchronous writes.
2148However, if there are fewer than
2149.Sy zfs_embedded_slog_min_ms
2150metaslabs in the vdev, this functionality is disabled.
2151This ensures that we don't set aside an unreasonable amount of space for the ZIL.
2152.
2153.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq int
Whether the heuristic that detects incompressible data for zstd levels >= 3
by first attempting LZ4 and zstd-1 passes is enabled.
2156.
2157.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq int
Minimum uncompressed size (inclusive) of a record at which the early abort
heuristic will be attempted.
2160.
2161.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
2162If non-zero, the zio deadman will produce debugging messages
2163.Pq see Sy zfs_dbgmsg_enable
2164for all zios, rather than only for leaf zios possessing a vdev.
2165This is meant to be used by developers to gain
2166diagnostic information for hang conditions which don't involve a mutex
2167or other locking primitive: typically conditions in which a thread in
2168the zio pipeline is looping indefinitely.
2169.
2170.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
2171When an I/O operation takes more than this much time to complete,
2172it's marked as slow.
2173Each slow operation causes a delay zevent.
2174Slow I/O counters can be seen with
2175.Nm zpool Cm status Fl s .
2176.
2177.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
2178Throttle block allocations in the I/O pipeline.
2179This allows for dynamic allocation distribution when devices are imbalanced.
2180When enabled, the maximum number of pending allocations per top-level vdev
2181is limited by
2182.Sy zfs_vdev_queue_depth_pct .
2183.
2184.It Sy zfs_xattr_compat Ns = Ns 0 Ns | Ns 1 Pq int
2185Control the naming scheme used when setting new xattrs in the user namespace.
2186If
2187.Sy 0
2188.Pq the default on Linux ,
2189user namespace xattr names are prefixed with the namespace, to be backwards
2190compatible with previous versions of ZFS on Linux.
2191If
2192.Sy 1
2193.Pq the default on Fx ,
2194user namespace xattr names are not prefixed, to be backwards compatible with
2195previous versions of ZFS on illumos and
2196.Fx .
2197.Pp
2198Either naming scheme can be read on this and future versions of ZFS, regardless
2199of this tunable, but legacy ZFS on illumos or
2200.Fx
is unable to read user namespace xattrs written in the Linux format, and
2202legacy versions of ZFS on Linux are unable to read user namespace xattrs written
2203in the legacy ZFS format.
2204.Pp
2205An existing xattr with the alternate naming scheme is removed when overwriting
2206the xattr so as to not accumulate duplicates.
2207.
2208.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
2209Prioritize requeued I/O.
2210.
2211.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
2212Percentage of online CPUs which will run a worker thread for I/O.
2213These workers are responsible for I/O work such as compression and
2214checksum calculations.
A fractional number of CPUs will be rounded down.
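.Pp
For example, with 8 online CPUs the default of 80% yields 6 worker threads
(6.4, rounded down).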
2216.Pp
2217The default value of
2218.Sy 80%
2219was chosen to avoid using all CPUs which can result in
2220latency issues and inconsistent application performance,
2221especially when slower compression and/or checksumming is enabled.
2222.
2223.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
2224Number of worker threads per taskq.
2225Lower values improve I/O ordering and CPU utilization,
while higher values reduce lock contention.
2227.Pp
2228If
2229.Sy 0 ,
2230generate a system-dependent value close to 6 threads per taskq.
2231.
2232.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2233Do not create zvol device nodes.
2234This may slightly improve startup time on
2235systems with a very large number of zvols.
2236.
2237.It Sy zvol_major Ns = Ns Sy 230 Pq uint
2238Major number for zvol block devices.
2239.
2240.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq ulong
2241Discard (TRIM) operations done on zvols will be done in batches of this
2242many blocks, where block size is determined by the
2243.Sy volblocksize
2244property of a zvol.
2245.
2246.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2247When adding a zvol to the system, prefetch this many bytes
2248from the start and end of the volume.
2249Prefetching these regions of the volume is desirable,
2250because they are likely to be accessed immediately by
2251.Xr blkid 8
2252or the kernel partitioner.
2253.
2254.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2255When processing I/O requests for a zvol, submit them synchronously.
2256This effectively limits the queue depth to
2257.Em 1
2258for each I/O submitter.
2259When unset, requests are handled asynchronously by a thread pool.
2260The number of requests which can be handled concurrently is controlled by
2261.Sy zvol_threads .
2262.Sy zvol_request_sync
2263is ignored when running on a kernel that supports block multiqueue
2264.Pq Li blk-mq .
2265.
2266.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
The number of system-wide threads to use for processing zvol block IOs.
2268If
2269.Sy 0
(the default), then
.Sy zvol_threads
is set internally to the greater of the number of CPUs present and 32.
2273.
2274.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
2275The number of threads per zvol to use for queuing IO requests.
2276This parameter will only appear if your kernel supports
2277.Li blk-mq
2278and is only read and assigned to a zvol at zvol load time.
2279If
2280.Sy 0
(the default), then
.Sy zvol_blk_mq_threads
is set internally to the number of CPUs present.
2284.
2285.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2286Set to
2287.Sy 1
2288to use the
2289.Li blk-mq
2290API for zvols.
2291Set to
2292.Sy 0
2293(the default) to use the legacy zvol APIs.
2294This setting can give better or worse zvol performance depending on
2295the workload.
2296This parameter will only appear if your kernel supports
2297.Li blk-mq
2298and is only read and assigned to a zvol at zvol load time.
2299.
2300.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
2301If
2302.Sy zvol_use_blk_mq
2303is enabled, then process this number of
2304.Sy volblocksize Ns -sized blocks per zvol thread.
This tunable can be used to favor better performance for zvol reads (lower
values) or writes (higher values).
2307If set to
2308.Sy 0 ,
2309then the zvol layer will process the maximum number of blocks
2310per thread that it can.
2311This parameter will only appear if your kernel supports
2312.Li blk-mq
2313and is only applied at each zvol's load time.
2314.
2315.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
2316The queue_depth value for the zvol
2317.Li blk-mq
2318interface.
2319This parameter will only appear if your kernel supports
2320.Li blk-mq
2321and is only applied at each zvol's load time.
2322If
2323.Sy 0
2324(the default) then use the kernel's default queue depth.
2325Values are clamped to the kernel's
2326.Dv BLKDEV_MIN_RQ
2327and
2328.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
2329limits.
2330.
2331.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
Defines the behaviour of zvol block devices when
2333.Sy volmode Ns = Ns Sy default :
2334.Bl -tag -compact -offset 4n -width "a"
2335.It Sy 1
2336.No equivalent to Sy full
2337.It Sy 2
2338.No equivalent to Sy dev
2339.It Sy 3
2340.No equivalent to Sy none
2341.El
2342.El
2343.
2344.Sh ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O requests.
2346The scheduler determines when and in what order those operations are issued.
2347The scheduler divides operations into five I/O classes,
2348prioritized in the following order: sync read, sync write, async read,
2349async write, and scrub/resilver.
2350Each queue defines the minimum and maximum number of concurrent operations
2351that may be issued to the device.
2352In addition, the device has an aggregate maximum,
2353.Sy zfs_vdev_max_active .
2354Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2355If the sum of the per-queue maxima exceeds the aggregate maximum,
2356then the number of active operations may reach
2357.Sy zfs_vdev_max_active ,
2358in which case no further operations will be issued,
2359regardless of whether all per-queue minima have been met.
2360.Pp
2361For many physical devices, throughput increases with the number of
2362concurrent operations, but latency typically suffers.
2363Furthermore, physical devices typically have a limit
2364at which more concurrent operations have no
2365effect on throughput or can actually cause it to decrease.
2366.Pp
2367The scheduler selects the next operation to issue by first looking for an
2368I/O class whose minimum has not been satisfied.
2369Once all are satisfied and the aggregate maximum has not been hit,
2370the scheduler looks for classes whose maximum has not been satisfied.
2371Iteration through the I/O classes is done in the order specified above.
2372No further operations are issued
2373if the aggregate maximum number of concurrent operations has been hit,
2374or if there are no operations queued for an I/O class that has not hit its maximum.
2375Every time an I/O operation is queued or an operation completes,
2376the scheduler looks for new operations to issue.
2377.Pp
2378In general, smaller
2379.Sy max_active Ns s
2380will lead to lower latency of synchronous operations.
2381Larger
2382.Sy max_active Ns s
2383may lead to higher overall throughput, depending on underlying storage.
2384.Pp
2385The ratio of the queues'
2386.Sy max_active Ns s
2387determines the balance of performance between reads, writes, and scrubs.
2388For example, increasing
2389.Sy zfs_vdev_scrub_max_active
2390will cause the scrub or resilver to complete more quickly,
2391but reads and writes to have higher latency and lower throughput.
2392.Pp
2393All I/O classes have a fixed maximum number of outstanding operations,
2394except for the async write class.
2395Asynchronous writes represent the data that is committed to stable storage
2396during the syncing stage for transaction groups.
2397Transaction groups enter the syncing state periodically,
2398so the number of queued async writes will quickly burst up
2399and then bleed down to zero.
2400Rather than servicing them as quickly as possible,
2401the I/O scheduler changes the maximum number of active async write operations
2402according to the amount of dirty data in the pool.
2403Since both throughput and latency typically increase with the number of
2404concurrent operations issued to physical devices, reducing the
2405burstiness in the number of concurrent operations also stabilizes the
response time of operations from other queues, in particular synchronous ones.
2407In broad strokes, the I/O scheduler will issue more concurrent operations
2408from the async write queue as there's more dirty data in the pool.
2409.
2410.Ss Async Writes
2411The number of concurrent operations issued for the async write I/O class
2412follows a piece-wise linear function defined by a few adjustable points:
2413.Bd -literal
2414       |              o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2415  ^    |             /^         |
2416  |    |            / |         |
2417active |           /  |         |
2418 I/O   |          /   |         |
2419count  |         /    |         |
2420       |        /     |         |
2421       |-------o      |         | <-- \fBzfs_vdev_async_write_min_active\fP
2422      0|_______^______|_________|
2423       0%      |      |       100% of \fBzfs_dirty_data_max\fP
2424               |      |
2425               |      `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2426               `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
2427.Ed
2428.Pp
2429Until the amount of dirty data exceeds a minimum percentage of the dirty
2430data allowed in the pool, the I/O scheduler will limit the number of
2431concurrent operations to the minimum.
2432As that threshold is crossed, the number of concurrent operations issued
2433increases linearly to the maximum at the specified maximum percentage
2434of the dirty data allowed in the pool.
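.Pp
As a purely illustrative example: if the minimum were 2 active operations at a
30% threshold and the maximum 10 at a 60% threshold, then at 45% dirty data
(halfway up the slope) the scheduler would issue
.D1 2 + (45 \- 30) / (60 \- 30) \(mu (10 \- 2) = 6
concurrent async write operations.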
2435.Pp
2436Ideally, the amount of dirty data on a busy pool will stay in the sloped
2437part of the function between
2438.Sy zfs_vdev_async_write_active_min_dirty_percent
2439and
2440.Sy zfs_vdev_async_write_active_max_dirty_percent .
2441If it exceeds the maximum percentage,
2442this indicates that the rate of incoming data is
2443greater than the rate that the backend storage can handle.
2444In this case, we must further throttle incoming writes,
2445as described in the next section.
2446.
2447.Sh ZFS TRANSACTION DELAY
2448We delay transactions when we've determined that the backend storage
2449isn't able to accommodate the rate of incoming writes.
2450.Pp
2451If there is already a transaction waiting, we delay relative to when
2452that transaction will finish waiting.
2453This way the calculated delay time
2454is independent of the number of threads concurrently executing transactions.
2455.Pp
2456If we are the only waiter, wait relative to when the transaction started,
2457rather than the current time.
2458This credits the transaction for "time already served",
2459e.g. reading indirect blocks.
2460.Pp
2461The minimum time for a transaction to take is calculated as
2462.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
2463.Pp
2464The delay has two degrees of freedom that can be adjusted via tunables.
2465The percentage of dirty data at which we start to delay is defined by
2466.Sy zfs_delay_min_dirty_percent .
2467This should typically be at or above
2468.Sy zfs_vdev_async_write_active_max_dirty_percent ,
2469so that we only start to delay after writing at full speed
2470has failed to keep up with the incoming write rate.
2471The scale of the curve is defined by
2472.Sy zfs_delay_scale .
2473Roughly speaking, this variable determines the amount of delay at the midpoint of the curve.
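.Pp
At the midpoint of the curve,
.Sy dirty No \- Sy min No equals Sy max No \- Sy dirty ,
so the fraction in the formula above is one and the delay reduces to
.Sy zfs_delay_scale ,
expressed in nanoseconds.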
2474.Bd -literal
2475delay
2476 10ms +-------------------------------------------------------------*+
2477      |                                                             *|
2478  9ms +                                                             *+
2479      |                                                             *|
2480  8ms +                                                             *+
2481      |                                                            * |
2482  7ms +                                                            * +
2483      |                                                            * |
2484  6ms +                                                            * +
2485      |                                                            * |
2486  5ms +                                                           *  +
2487      |                                                           *  |
2488  4ms +                                                           *  +
2489      |                                                           *  |
2490  3ms +                                                          *   +
2491      |                                                          *   |
2492  2ms +                                              (midpoint) *    +
2493      |                                                  |    **     |
2494  1ms +                                                  v ***       +
2495      |             \fBzfs_delay_scale\fP ---------->     ********         |
2496    0 +-------------------------------------*********----------------+
2497      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
2498.Ed
2499.Pp
Note that, since the delay is added to the outstanding time remaining on the
most recent transaction, it is effectively the inverse of IOPS.
2502Here, the midpoint of
2503.Em 500 us
2504translates to
2505.Em 2000 IOPS .
2506The shape of the curve
2507was chosen such that small changes in the amount of accumulated dirty data
2508in the first three quarters of the curve yield relatively small differences
2509in the amount of delay.
2510.Pp
2511The effects can be easier to understand when the amount of delay is
2512represented on a logarithmic scale:
2513.Bd -literal
2514delay
2515100ms +-------------------------------------------------------------++
2516      +                                                              +
2517      |                                                              |
2518      +                                                             *+
2519 10ms +                                                             *+
2520      +                                                           ** +
2521      |                                              (midpoint)  **  |
2522      +                                                  |     **    +
2523  1ms +                                                  v ****      +
2524      +             \fBzfs_delay_scale\fP ---------->        *****         +
2525      |                                             ****             |
2526      +                                          ****                +
2527100us +                                        **                    +
2528      +                                       *                      +
2529      |                                      *                       |
2530      +                                     *                        +
2531 10us +                                     *                        +
2532      +                                                              +
2533      |                                                              |
2534      +                                                              +
2535      +--------------------------------------------------------------+
2536      0%                    <- \fBzfs_dirty_data_max\fP ->               100%
2537.Ed
2538.Pp
2539Note here that only as the amount of dirty data approaches its limit does
2540the delay start to increase rapidly.
2541The goal of a properly tuned system should be to keep the amount of dirty data
2542out of that range by first ensuring that the appropriate limits are set
2543for the I/O scheduler to reach optimal throughput on the back-end storage,
2544and then by changing the value of
2545.Sy zfs_delay_scale
2546to increase the steepness of the curve.
2547