1 .\" SPDX-License-Identifier: CDDL-1.0
11 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
31 .Bl -tag -width Ds
98 Turbo L2ARC warm-up.
175 Percent of ARC size allowed for L2ARC-only headers.
248 of a top-level vdev before moving on to the next top-level vdev.
Enable metaslab group biasing based on their over- or under-utilization
A setting of 1 behaves like 2 if the pool is write-bound, and like 0 otherwise.
if not page-aligned instead of silently falling back to uncached I/O.
When attempting to log an output nvlist of an ioctl in the on-disk history,
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
Enable/disable segment-based metaslab selection.
When using segment-based metaslab selection, continue allocating
becomes the performance limiting factor on high-performance storage.
.Bl -item -compact
If that fails then we will have a multi-layer gang block.
.Bl -item -compact
If that fails then we will have a multi-layer gang block.
When a vdev is added, target this number of metaslabs per top-level vdev.
Maximum ashift used when optimizing for logical \[->] physical sector size on
top-level vdevs.
If non-zero, then a Direct I/O write's checksum will be verified every
Minimum ashift used when creating new top-level vdevs.
Minimum number of metaslabs to create in a top-level vdev.
Practical upper limit of total metaslabs per top-level vdev.
Max amount of memory to use for RAID-Z expansion I/O.
For testing, pause RAID-Z expansion when reflow amount reaches this value.
For expanded RAID-Z, aggregate reads that have more rows than this.
If this parameter is unset, the traversal skips non-metadata blocks.
import has started to stop or start the traversal of non-metadata blocks.
It also limits the worst-case time to allocate space.
Limits the number of on-disk error log entries that will be converted to the
A number of disks in a RAID-Z or dRAID vdev may sit out at the same time, up
Defaults to 600 seconds and a value of zero disables disk sit-outs in general,
How often each RAID-Z and dRAID vdev will check for slow disk outliers.
When performing slow outlier checks for RAID-Z and dRAID vdevs, this value is
problems, but may significantly increase the rate of spurious sit-outs.
of the inter-quartile range (IQR) that is being used in a Tukey's Fence
This is much higher than a normal Tukey's Fence k-value, because the
distribution under consideration is probably an extreme-value distribution,
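.Pp
As a rough sketch of the test assumed here (not the kernel implementation),
a Tukey's Fence with multiplier k flags a disk whose latency exceeds
Q3 + k \(mu IQR:
.Bd -literal
# Python: hedged illustration of a Tukey's Fence outlier test.
import statistics

def is_outlier(latencies, candidate, k):
    # quantiles(n=4) returns the three quartile cut points [Q1, Q2, Q3]
    q1, _, q3 = statistics.quantiles(latencies, n=4)
    return candidate > q3 + k * (q3 - q1)
.Ed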
During top-level vdev removal, chunks of data are copied from the vdev
Logical ashift for file-based devices.
Physical ashift for file-based devices.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
bytes, try to unpin some of it in response to demand for non-metadata.
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
with 8-byte pointers.
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
Each thread will process one sub-list at a time,
until the eviction target is reached or all sub-lists have been processed.
CPUs	Threads
1-4	1
5-8	2
9-15	3
16-31	4
32-63	6
64-95	8
96-127	9
128-159	11
160-191	12
192-223	13
224-255	14
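.Pp
A hedged sketch of how the table above might be consulted, combined with the
batched sub-list eviction described earlier (names and structure are
illustrative, not the kernel's):
.Bd -literal
# Python: pick a thread count from the CPUs/Threads table, then evict in
# batches so no single sub-list is drained at once.
def evict_threads(ncpus):
    table = [(4, 1), (8, 2), (15, 3), (31, 4), (63, 6), (95, 8),
             (127, 9), (159, 11), (191, 12), (223, 13), (255, 14)]
    return next((t for hi, t in table if ncpus <= hi), 14)

def evict(sublists, target, batch):   # batch ~ the per-sub-list limit
    done = 0
    while done < target and any(sublists):
        for sl in sublists:           # move on after each batch
            for _ in range(min(batch, len(sl))):
                done += sl.pop()      # each entry: evictable header size
                if done >= target:
                    return done
    return done
.Ed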
.Sy all_system_memory No \- Sy 1 GiB
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.Pa zfs-dbgmsg
equivalent to a quarter of the user-wired memory limit under
To allow more fine-grained locking, each ARC state contains a series
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
bytes on-disk.
Only attempt to condense indirect vdev mappings if the on-disk size
.Bl -tag -compact -offset 4n -width "continue"
Attempt to recover from a "hung" operation by re-dispatching it
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
Enable prefetching dedup-ed blocks which are going to be freed.
Enabling it will reduce worst-case import times, at the cost of increased TXG
OpenZFS will spend no more than this much memory on maintaining the in-memory
OpenZFS pre-release versions and now have compatibility issues.
are not created per-object and instead a hashtable is used where collisions
Upper-bound limit for unflushed metadata changes to be held by the
This tunable is important because it involves a trade-off between import
It effectively limits the maximum number of unflushed per-TXG spacemap logs
for 32-bit systems.
The upper limit of write-transaction ZIL log data size in bytes.
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
results in the original CPU-based calculation being used.
For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
the number of concurrently-active I/O operations is limited to
and the number of concurrently-active non-interactive operations is increased to
To prevent non-interactive I/O, like scrub,
The following options may be bitwise-ored together:
The following flags may be bitwise-ored together:
256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
16384	ZFS_DEBUG_BRT	Enable BRT-related debugging messages.
65536	ZFS_DEBUG_DDT	Enable DDT-related debugging messages.
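.Pp
For example, flag values taken from the table above combine by bitwise OR:
.Bd -literal
# Python: ORing three zfs_flags values from the table above.
ZFS_DEBUG_METASLAB_VERIFY = 256
ZFS_DEBUG_BRT = 16384
ZFS_DEBUG_DDT = 65536
print(ZFS_DEBUG_METASLAB_VERIFY | ZFS_DEBUG_BRT | ZFS_DEBUG_DDT)  # 82176
.Ed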
from the error-encountering filesystem is "temporarily leaked".
.Bl -enum -compact -offset 4n -width "1."
e.g. due to a top-level vdev going offline), or
.Xr zpool-initialize 8 .
.Xr zpool-initialize 8 .
The threshold size (in block pointers) at which we create a new sub-livelist.
Setting the threshold to a non-zero percentage will stop allocations
.Sy zfs_multihost_interval No / Sy leaf-vdevs .
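For example, with an interval of 1000 ms and ten leaf vdevs, a write is
issued somewhere on the pool every 100 ms, and each individual leaf is
written once per second.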
if a volatile out-of-order write cache is enabled.
Allow no-operation writes.
The number of blocks pointed to by an indirect (non-L0) block which should be
Disable QAT hardware acceleration for AES-GCM encryption.
top-level vdev.
.Bl -tag -compact -offset 4n -width "a"
The largest mostly-contiguous chunk of found data will be verified first.
.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
Including unmodified copies of the spill blocks creates a backwards-compatible
.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
When this variable is set to non-zero, a corrective receive:
.Bl -enum -compact -offset 4n -width "1."
These timestamps can later be used for more granular operations, such as
many blocks' size will change, and thus we have to re-allocate
This option is useful for pools constructed from large thinly-provisioned
This setting represents a trade-off between issuing larger,
Max vdev I/O aggregation size for non-rotating media.
the purpose of selecting the least busy mirror member on non-rotational vdevs
Aggregate read I/O operations if the on-disk gap between them is within this
Aggregate write I/O operations if the on-disk gap between them is within this
Variants that don't depend on CPU-specific features
fastest	selected by built-in benchmark
sse2	SSE2 instruction set	64-bit x86
ssse3	SSSE3 instruction set	64-bit x86
avx2	AVX2 instruction set	64-bit x86
avx512f	AVX512F instruction set	64-bit x86
avx512bw	AVX512F & AVX512BW instruction sets	64-bit x86
aarch64_neon	NEON	Aarch64/64-bit ARMv8
aarch64_neonx2	NEON with more unrolling	Aarch64/64-bit ARMv8
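.Pp
Selection can be inspected or changed at runtime; the following minimal
sketch assumes the parameter documented by this table is
.Sy zfs_vdev_raidz_impl
and reads or sets it via sysfs:
.Bd -literal
# Python: read the available implementations, then select one by name.
from pathlib import Path

param = Path("/sys/module/zfs/parameters/zfs_vdev_raidz_impl")
print(param.read_text())   # the active implementation is shown in brackets
param.write_text("avx2")   # requires root and AVX2-capable hardware
.Ed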
.Xr zpool-events 8 .
The number of taskq entries that are pre-populated when the taskq is first
if a volatile out-of-order write cache is enabled.
using LZ4 and zstd-1 passes is enabled.
If non-zero, the zio deadman will produce debugging messages
infrequently-accessed files, until the kernel's memory pressure mechanism
generate a system-dependent value close to 6 threads per taskq.
.Pq Li blk-mq .
.Li blk-mq
.Li blk-mq
.Li blk-mq
.Sy volblocksize Ns -sized blocks per zvol thread.
.Li blk-mq
.Li blk-mq
.Li blk-mq
.Bl -tag -compact -offset 4n -width "a"
Note that the sum of the per-queue minima must not exceed the aggregate maximum.
If the sum of the per-queue maxima exceeds the aggregate maximum,
regardless of whether all per-queue minima have been met.
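.Pp
A quick sanity check of the first rule, using hypothetical per-queue
(min, max) pairs and an assumed aggregate maximum of 1000:
.Bd -literal
# Python: per-queue minima must fit within the aggregate maximum.
queues = {"sync_read": (10, 10), "sync_write": (10, 10),
          "async_read": (1, 3), "async_write": (2, 10)}  # hypothetical
aggregate_max = 1000  # assumed aggregate limit
assert sum(lo for lo, _ in queues.values()) <= aggregate_max
.Ed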
follows a piece-wise linear function defined by a few adjustable points:
.Bd -literal
(figure: active async write I/O count stays at
\fBzfs_vdev_async_write_min_active\fP below
\fBzfs_vdev_async_write_active_min_dirty_percent\fP, rises linearly with
dirty data, and caps at \fBzfs_vdev_async_write_max_active\fP above
\fBzfs_vdev_async_write_active_max_dirty_percent\fP)
.Ed
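.Pp
A minimal sketch of that ramp, using illustrative values (minimum 1,
maximum 10, between 30% and 60% of dirty data) that are assumptions here,
not the module defaults:
.Bd -literal
# Python: piece-wise linear interpolation between the two dirty-data points.
def async_write_active(dirty_pct, lo=1, hi=10, lo_pct=30, hi_pct=60):
    if dirty_pct <= lo_pct:
        return lo
    if dirty_pct >= hi_pct:
        return hi
    return lo + (hi - lo) * (dirty_pct - lo_pct) // (hi_pct - lo_pct)
.Ed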
.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
.Bd -literal
(figure: delay rises from 0 and approaches 10ms as dirty data nears
\fBzfs_dirty_data_max\fP; the knee of the curve is set by
\fBzfs_delay_scale\fP)
.Ed
.Bd -literal
(figure: the same delay curve on a logarithmic scale up to 100ms, again
governed by \fBzfs_delay_scale\fP and \fBzfs_dirty_data_max\fP)
.Ed
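.Pp
A hedged sketch of the curve shown above, using the formula given earlier
(the 100 ms cap applies as dirty data approaches the maximum):
.Bd -literal
# Python: write-throttle delay, capped at 100 ms (values in nanoseconds).
def tx_delay_ns(dirty, dirty_min, dirty_max, scale):
    if dirty <= dirty_min:
        return 0
    if dirty >= dirty_max:
        return 100_000_000
    return min(scale * (dirty - dirty_min) // (dirty_max - dirty),
               100_000_000)

# e.g. scale=500_000 and dirty at 90% with a 60% floor: 1.5 ms
print(tx_delay_ns(90, 60, 100, 500_000))
.Ed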
for the I/O scheduler to reach optimal throughput on the back-end storage,