Lines matching "trade-off" (search query: +full:trade +full:- +full:off)

1 .\" SPDX-License-Identifier: CDDL-1.0
10 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
32 .Bl -tag -width Ds
103 Turbo L2ARC warm-up.
145 When turning off this feature (setting it to 0), some MRU buffers will
180 Percent of ARC size allowed for L2ARC-only headers.
253 of a top-level vdev before moving on to the next top-level vdev.
256 Enable metaslab groups biasing based on their over- or under-utilization
265 A setting of 1 behaves as 2 if the pool is write-bound, and as 0 otherwise.
316 When attempting to log an output nvlist of an ioctl in the on-disk history,
321 .Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
327 Enable/disable segment-based metaslab selection.
330 When using segment-based metaslab selection, continue allocating
349 becomes the performance limiting factor on high-performance storage.
393 .Bl -item -compact
401 If that fails then we will have a multi-layer gang block.
404 .Bl -item -compact
414 If that fails then we will have a multi-layer gang block.
424 When a vdev is added, target this number of metaslabs per top-level vdev.
433 Maximum ashift used when optimizing for logical \[->] physical sector size on
435 top-level vdevs.
441 If non-zero, then a Direct I/O write's checksum will be verified every
461 Minimum ashift used when creating new top-level vdevs.
464 Minimum number of metaslabs to create in a top-level vdev.
472 Practical upper limit of total metaslabs per top-level vdev.
506 Max amount of memory to use for RAID-Z expansion I/O.
510 For testing, pause RAID-Z expansion when reflow amount reaches this value.
513 For expanded RAID-Z, aggregate reads that have more rows than this.
555 If this parameter is unset, the traversal skips non-metadata blocks.
557 import has started to stop or start the traversal of non-metadata blocks.
579 It also limits the worst-case time to allocate space.
599 Limits the number of on-disk error log entries that will be converted to the
606 During top-level vdev removal, chunks of data are copied from the vdev
607 which may include free space in order to trade bandwidth for IOPS.
617 Logical ashift for file-based devices.
620 Physical ashift for file-based devices.
675 .It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
684 This is the minimum allocation size that will use scatter (page-based) ABDs.
689 bytes, try to unpin some of it in response to demand for non-metadata.
703 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
711 with 8-byte pointers.
734 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
735 This batch-style operation prevents entire sub-lists from being evicted at once
761 .Sy all_system_memory No \- Sy 1 GiB
803 Number of missing top-level vdevs which will be allowed during
804 pool import (only in read-only mode).
816 .Pa zfs-dbgmsg
821 equivalent to a quarter of the user-wired memory limit under
828 To allow more fine-grained locking, each ARC state contains a series
830 Locking is performed at the level of these "sub-lists".
831 This parameter controls the number of sub-lists per ARC state,
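If these lines document the zfs_multilist_num_sublists tunable (an assumption; the .It entry itself is not among the matched lines), the automatic sizing could be overridden as in this minimal sketch:
.Bd -literal
# assumed parameter name; 0, the default, lets ZFS pick a
# value based on the CPU count
echo 8 > /sys/module/zfs/parameters/zfs_multilist_num_sublists
.Ed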
865 .It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
946 bytes on-disk.
951 Only attempt to condense indirect vdev mappings if the on-disk size
1008 .Bl -tag -compact -offset 4n -width "continue"
1014 Attempt to recover from a "hung" operation by re-dispatching it
1018 This can be used to facilitate automatic fail-over
1019 to a properly configured fail-over partner.
1038 Enable prefetching of deduplicated blocks which are going to be freed.
1058 to zero more quickly, but can make it less able to back off if log
1095 Enabling it will reduce worst-case import times, at the cost of increased TXG
1120 OpenZFS will spend no more than this much memory on maintaining the in-memory
1163 OpenZFS pre-release versions and now have compatibility issues.
1174 are not created per-object and instead a hashtable is used where collisions
1183 Upper-bound limit for unflushed metadata changes to be held by the
1201 This tunable is important because it involves a trade-off between import
1230 It effectively limits the maximum number of unflushed per-TXG spacemap logs
1278 for 32-bit systems.
1310 The upper limit of write-transaction ZIL log data size in bytes.
1322 Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1355 results in the original CPU-based calculation being used.
1512 For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1513 the number of concurrently-active I/O operations is limited to
1521 and the number of concurrently-active non-interactive operations is increased to
1532 To prevent non-interactive I/O, like scrub,
1542 The following options may be bitwise-ored together:
1600 The following flags may be bitwise-ored together:
1613 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1620 16384 ZFS_DEBUG_BRT Enable BRT-related debugging messages.
1622 65536 ZFS_DEBUG_DDT Enable DDT-related debugging messages.
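These debug flags combine by bitwise OR into the zfs_flags bitmask. As a sketch of enabling the three messages shown above on Linux:
.Bd -literal
# 256 | 16384 | 65536 = 82176 (flags are OR-ed together)
echo 82176 > /sys/module/zfs/parameters/zfs_flags
.Ed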
1650 from the error-encountering filesystem is "temporarily leaked".
1664 .Bl -enum -compact -offset 4n -width "1."
1667 e.g. due to a top-level vdev going offline), or
1704 .Xr zpool-initialize 8 .
1708 .Xr zpool-initialize 8 .
1712 The threshold size (in block pointers) at which we create a new sub-livelist.
1718 this threshold, the clone turns off the livelist and reverts to the old
1824 Setting the threshold to a non-zero percentage will stop allocations
1851 .Sy zfs_multihost_interval No / Sy leaf-vdevs .
1911 if a volatile out-of-order write cache is enabled.
1914 Allow no-operation writes.
1931 The number of blocks pointed to by an indirect (non-L0) block which should be
1960 Disable QAT hardware acceleration for AES-GCM encryption.
1976 top-level vdev.
2077 .Bl -tag -compact -offset 4n -width "a"
2084 The largest mostly-contiguous chunk of found data will be verified first.
2105 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2112 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2147 Including unmodified copies of the spill blocks creates a backwards-compatible
2150 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2161 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2172 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2194 When this variable is set to non-zero, a corrective receive:
2195 .Bl -enum -compact -offset 4n -width "1."
2236 the average number of sync passes, because when compression is turned off,
2237 many blocks' sizes will change, and thus we have to re-allocate
2267 This option is useful for pools constructed from large thinly-provisioned
2283 This setting represents a trade-off between issuing larger,
2307 Max vdev I/O aggregation size for non-rotating media.
2330 the purpose of selecting the least busy mirror member on non-rotational vdevs
2343 Aggregate read I/O operations if the on-disk gap between them is within this
2347 Aggregate write I/O operations if the on-disk gap between them is within this
2353 Variants that don't depend on CPU-specific features
2366 fastest selected by built-in benchmark
2369 sse2 SSE2 instruction set 64-bit x86
2370 ssse3 SSSE3 instruction set 64-bit x86
2371 avx2 AVX2 instruction set 64-bit x86
2372 avx512f AVX512F instruction set 64-bit x86
2373 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2374 aarch64_neon NEON Aarch64/64-bit ARMv8
2375 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
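The implementation names above appear to come from the zfs_vdev_raidz_impl selection table (an assumption based on the entries shown; the .It line is not among the matches). A minimal sketch of inspecting and pinning an implementation on Linux:
.Bd -literal
# the active implementation is shown in [brackets]
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# pin a specific variant, assuming the CPU supports it
echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.Ed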
2386 .Xr zpool-events 8 .
2403 The number of taskq entries that are pre-populated when the taskq is first
2428 if a volatile out-of-order write cache is enabled.
2450 Usually, one metaslab from each normal-class vdev is dedicated for use by
2460 using LZ4 and zstd-1 passes is enabled.
2467 If non-zero, the zio deadman will produce debugging messages
2534 generate a system-dependent value close to 6 threads per taskq.
2587 .Pq Li blk-mq .
2607 .Li blk-mq
2619 .Li blk-mq
2627 .Li blk-mq
2634 .Sy volblocksize Ns -sized blocks per zvol thread.
2642 .Li blk-mq
2647 .Li blk-mq
2650 .Li blk-mq
2664 .Bl -tag -compact -offset 4n -width "a"
2688 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2689 If the sum of the per-queue maxima exceeds the aggregate maximum,
2693 regardless of whether all per-queue minima have been met.
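For example, with an aggregate zfs_vdev_max_active of 1000 (the usual default, assumed here), per-queue minima summing to a few tens are always satisfiable, but if the per-queue maxima sum to more than 1000, a busy queue can be held below its own configured maximum once the aggregate limit is reached.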
2747 follows a piece-wise linear function defined by a few adjustable points:
2748 .Bd -literal
2749 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2756 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP
2760 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2761 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
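As a worked example of this piece-wise function, assuming defaults of 2 for zfs_vdev_async_write_min_active, 10 for zfs_vdev_async_write_max_active, and dirty-data thresholds of 30% and 60% (none of these values appear in the matched lines), a pool sitting at 45% of zfs_dirty_data_max would run 2 + (10 - 2) * (45 - 30) / (60 - 30) = 6 concurrent asynchronous writes.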
2797 .D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy …
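The matched line is truncated; if the expression follows the standard form min_time = min(zfs_delay_scale * (dirty - min) / (max - dirty), 100ms) (an assumption here), then with zfs_delay_scale = 500,000 ns and a dirty level 20 units above the minimum and 20 units below the maximum, min_time = min(500,000 ns * 20 / 20, 100ms) = 500 us per write.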
2810 .Bd -literal
2812 10ms +-------------------------------------------------------------*+
2831 | \fBzfs_delay_scale\fP ----------> ******** |
2832 0 +-------------------------------------*********----------------+
2833 0% <- \fBzfs_dirty_data_max\fP -> 100%
2849 .Bd -literal
2851 100ms +-------------------------------------------------------------++
2860 + \fBzfs_delay_scale\fP ----------> ***** +
2871 +--------------------------------------------------------------+
2872 0% <- \fBzfs_dirty_data_max\fP -> 100%
2879 for the I/O scheduler to reach optimal throughput on the back-end storage,