Lines Matching +full:armv8 +full:- +full:based
9 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
31 .Bl -tag -width Ds
76 the array is dynamically sized based on total system memory.
102 Turbo L2ARC warm-up.
179 Percent of ARC size allowed for L2ARC-only headers.
252 before moving on to the next top-level vdev.
255 Enable metaslab group biasing based on their vdevs' over- or under-utilization
305 When attempting to log an output nvlist of an ioctl in the on-disk history,
310 .Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
316 Enable/disable segment-based metaslab selection.
319 When using segment-based metaslab selection, continue allocating
338 becomes the performance limiting factor on high-performance storage.
382 .Bl -item -compact
390 If that fails then we will have a multi-layer gang block.
393 .Bl -item -compact
403 If that fails then we will have a multi-layer gang block.
413 When a vdev is added, target this number of metaslabs per top-level vdev.
422 Maximum ashift used when optimizing for logical \[->] physical sector size on
424 top-level vdevs.
430 If non-zero, then a Direct I/O write's checksum will be verified every
450 Minimum ashift used when creating new top-level vdevs.
453 Minimum number of metaslabs to create in a top-level vdev.
461 Practical upper limit of total metaslabs per top-level vdev.
495 Max amount of memory to use for RAID-Z expansion I/O.
499 For testing, pause RAID-Z expansion when reflow amount reaches this value.
502 For expanded RAID-Z, aggregate reads that have more rows than this.
544 If this parameter is unset, the traversal skips non-metadata blocks.
546 import has started to stop or start the traversal of non-metadata blocks.
568 It also limits the worst-case time to allocate space.
588 Limits the number of on-disk error log entries that will be converted to the
595 During top-level vdev removal, chunks of data are copied from the vdev
606 Logical ashift for file-based devices.
609 Physical ashift for file-based devices.
664 .It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
673 This is the minimum allocation size that will use scatter (page-based) ABDs.
678 bytes, try to unpin some of it in response to demand for non-metadata.
681 which indicates that a percentage based on
692 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
697 The ARC's buffer hash table is sized based on the assumption of an average
700 with 8-byte pointers.
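To make the sizing rule above concrete, here is a minimal sketch (not the kernel's exact rounding) of how roughly 1 MiB of hash table per 1 GiB of physical memory falls out of an 8 KiB average block size and 8-byte pointers; the 1 GiB figure is illustrative only:

    #include <stdio.h>

    int
    main(void)
    {
            unsigned long long physmem = 1ULL << 30;              /* 1 GiB, assumed */
            unsigned long long zfs_arc_average_blocksize = 8192;  /* 8 KiB default */
            unsigned long long entries = physmem / zfs_arc_average_blocksize;

            /* 131072 entries x 8-byte pointers = 1 MiB of hash table. */
            printf("%llu entries, %llu bytes\n", entries, entries * 8);
            return (0);
    }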
723 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
724 This batch-style operation prevents entire sub-lists from being evicted at once
750 .Sy all_system_memory No \- Sy 1 GiB
792 Number of missing top-level vdevs which will be allowed during
793 pool import (only in read-only mode).
805 .Pa zfs-dbgmsg
810 equivalent to a quarter of the user-wired memory limit under
817 To allow more fine-grained locking, each ARC state contains a series
819 Locking is performed at the level of these "sub-lists".
820 This parameter controls the number of sub-lists per ARC state,
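As a rough illustration of the sub-list idea described above (not the actual OpenZFS multilist code), the sketch below splits one logical list into several sub-lists, each with its own lock, so that concurrent insertions rarely contend; all names here are made up:

    #include <pthread.h>

    struct node {
            struct node     *next;
            unsigned long    key;
    };

    struct sublist {
            pthread_mutex_t  lock;
            struct node     *head;
    };

    struct multilist {
            unsigned int     num_sublists;  /* number of sub-lists per state */
            struct sublist  *sublists;
    };

    static void
    multilist_insert(struct multilist *ml, struct node *n)
    {
            /* Hash the element to one sub-list and lock only that sub-list. */
            struct sublist *sl = &ml->sublists[n->key % ml->num_sublists];

            pthread_mutex_lock(&sl->lock);
            n->next = sl->head;
            sl->head = n;
            pthread_mutex_unlock(&sl->lock);
    }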
912 The timeout is scaled based on a percentage of the last lwb
935 bytes on-disk.
940 Only attempt to condense indirect vdev mappings if the on-disk size
997 .Bl -tag -compact -offset 4n -width "continue"
1003 Attempt to recover from a "hung" operation by re-dispatching it
1007 This can be used to facilitate automatic fail-over
1008 to a properly configured fail-over partner.
1027 Enable prefetching dedup-ed blocks which are going to be freed.
1094 OpenZFS will spend no more than this much memory on maintaining the in-memory
1137 OpenZFS pre-release versions and now have compatibility issues.
1148 are not created per-object and instead a hashtable is used where collisions
1157 Upper-bound limit for unflushed metadata changes to be held by the
1175 This tunable is important because it involves a trade-off between import
1204 It effectively limits the maximum number of unflushed per-TXG spacemap logs
1252 for 32-bit systems.
1284 The upper limit of write-transaction ZIL log data size in bytes.
1296 Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1329 results in the original CPU-based calculation being used.
1486 For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1487 the number of concurrently-active I/O operations is limited to
1495 and the number of concurrently-active non-interactive operations is increased to
1506 To prevent non-interactive I/O, like scrub,
1515 Maximum number of queued allocations per top-level vdev expressed as
1533 The following options may be bitwise-ored together:
1591 The following flags may be bitwise-ored together:
1604 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
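Since the flag values combine by bitwise OR, a tiny sketch of building such a value follows; only the 256 (ZFS_DEBUG_METASLAB_VERIFY) bit appears in this listing, so the second flag is a placeholder, not a real name or value:

    #include <stdio.h>

    #define ZFS_DEBUG_METASLAB_VERIFY   256  /* value from the table above */
    #define EXAMPLE_PLACEHOLDER_FLAG      2  /* hypothetical second bit */

    int
    main(void)
    {
            unsigned int zfs_flags =
                ZFS_DEBUG_METASLAB_VERIFY | EXAMPLE_PLACEHOLDER_FLAG;

            printf("zfs_flags = %u\n", zfs_flags);  /* prints 258 */
            return (0);
    }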
1637 from the error-encountering filesystem is "temporarily leaked".
1651 .Bl -enum -compact -offset 4n -width "1."
1654 e.g. due to a top-level vdev going offline), or
1691 .Xr zpool-initialize 8 .
1695 .Xr zpool-initialize 8 .
1699 The threshold size (in block pointers) at which we create a new sub-livelist.
1811 Setting the threshold to a non-zero percentage will stop allocations
1838 .Sy zfs_multihost_interval No / Sy leaf-vdevs .
1898 if a volatile out-of-order write cache is enabled.
1901 Allow no-operation writes.
1918 The number of blocks pointed to by an indirect (non-L0) block which should be
1947 Disable QAT hardware acceleration for AES-GCM encryption.
1963 top-level vdev.
2064 .Bl -tag -compact -offset 4n -width "a"
2071 The largest mostly-contiguous chunk of found data will be verified first.
2092 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2099 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2134 Including unmodified copies of the spill blocks creates a backwards-compatible
2137 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2148 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2159 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
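The ^-1 notation in these entries denotes a reciprocal: a stored value of 20 means one twentieth, i.e. 5%. A small sketch of that arithmetic, assuming (purely for illustration) 16 GiB of RAM and that the stored value acts as the divisor of a memory limit such as the scan sort limit above:

    #include <stdio.h>

    int
    main(void)
    {
            unsigned long long ram = 16ULL << 30;  /* 16 GiB, assumed */
            unsigned int fact = 20;                /* 20^-1 == 1/20 == 5% */

            printf("limit = %llu bytes (%.0f%% of RAM)\n",
                ram / fact, 100.0 / fact);
            return (0);
    }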
2181 When this variable is set to non-zero, a corrective receive:
2182 .Bl -enum -compact -offset 4n -width "1."
2224 many blocks' size will change, and thus we have to re-allocate
2254 This option is useful for pools constructed from large thinly-provisioned
2270 This setting represents a trade-off between issuing larger,
2294 Max vdev I/O aggregation size for non-rotating media.
2300 for the purpose of making decisions based on load.
2317 the purpose of selecting the least busy mirror member on non-rotational vdevs
2330 Aggregate read I/O operations if the on-disk gap between them is within this
2334 Aggregate write I/O operations if the on-disk gap between them is within this
2340 Variants that don't depend on CPU-specific features
2353 fastest selected by built-in benchmark
2356 sse2 SSE2 instruction set 64-bit x86
2357 ssse3 SSSE3 instruction set 64-bit x86
2358 avx2 AVX2 instruction set 64-bit x86
2359 avx512f AVX512F instruction set 64-bit x86
2360 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2361 aarch64_neon NEON Aarch64/64-bit ARMv8
2362 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2373 .Xr zpool-events 8 .
2390 The number of taskq entries that are pre-populated when the taskq is first
2415 if a volatile out-of-order write cache is enabled.
2437 Usually, one metaslab from each normal-class vdev is dedicated for use by
2447 using LZ4 and zstd-1 passes is enabled.
2454 If non-zero, the zio deadman will produce debugging messages
2472 When enabled, the maximum number of pending allocations per top-level vdev
2524 generate a system-dependent value close to 6 threads per taskq.
2577 .Pq Li blk-mq .
2597 .Li blk-mq
2609 .Li blk-mq
2617 .Li blk-mq
2624 .Sy volblocksize Ns -sized blocks per zvol thread.
2632 .Li blk-mq
2637 .Li blk-mq
2640 .Li blk-mq
2654 .Bl -tag -compact -offset 4n -width "a"
2678 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2679 If the sum of the per-queue maxima exceeds the aggregate maximum,
2683 regardless of whether all per-queue minima have been met.
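A small worked check of the constraint stated above, using made-up per-queue minimums rather than the shipped defaults:

    #include <stdio.h>

    int
    main(void)
    {
            /* Illustrative *_min_active values, not defaults. */
            unsigned int mins[] = { 10, 10, 1, 2, 3 };
            unsigned int zfs_vdev_max_active = 1000;  /* aggregate maximum */
            unsigned int i, sum = 0;

            for (i = 0; i < sizeof (mins) / sizeof (mins[0]); i++)
                    sum += mins[i];

            /* The sum of per-queue minima must not exceed the aggregate max. */
            printf("sum of minima = %u: %s\n", sum,
                sum <= zfs_vdev_max_active ? "ok" : "too large");
            return (0);
    }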
2737 follows a piece-wise linear function defined by a few adjustable points:
2738 .Bd -literal
2739 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2746 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP
2750 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2751 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
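The ramp in the figure above can be read as a simple clamp-and-interpolate rule. The sketch below illustrates that rule in terms of the named tunables; it is not the scheduler's actual code:

    #include <stdint.h>

    static unsigned int
    async_write_active(uint64_t dirty, uint64_t dirty_max,
        unsigned int min_active, unsigned int max_active,
        unsigned int min_pct, unsigned int max_pct)
    {
            uint64_t pct = dirty * 100 / dirty_max;

            if (pct <= min_pct)                 /* below the ramp */
                    return (min_active);
            if (pct >= max_pct)                 /* past the ramp */
                    return (max_active);

            /* Linear interpolation between the two corner points. */
            return (min_active + (max_active - min_active) *
                (pct - min_pct) / (max_pct - min_pct));
    }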
2787 .D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy …
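The .D1 line above is truncated in this listing; assuming the missing tail is the (max - dirty) denominator with a 100 ms cap, the calculation would look roughly like the sketch below (an illustration, not the actual txg delay code):

    #include <stdint.h>

    #define CAP_NS  (100ULL * 1000 * 1000)   /* assumed 100 ms cap */

    static uint64_t
    delay_min_time(uint64_t dirty, uint64_t min, uint64_t max, uint64_t scale)
    {
            uint64_t t;

            if (dirty <= min)
                    return (0);
            if (dirty >= max)
                    return (CAP_NS);

            /* zfs_delay_scale * (dirty - min) / (max - dirty), capped. */
            t = scale * (dirty - min) / (max - dirty);
            return (t < CAP_NS ? t : CAP_NS);
    }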
2800 .Bd -literal
2802 10ms +-------------------------------------------------------------*+
2821 | \fBzfs_delay_scale\fP ----------> ******** |
2822 0 +-------------------------------------*********----------------+
2823 0% <- \fBzfs_dirty_data_max\fP -> 100%
2839 .Bd -literal
2841 100ms +-------------------------------------------------------------++
2850 + \fBzfs_delay_scale\fP ----------> ***** +
2861 +--------------------------------------------------------------+
2862 0% <- \fBzfs_dirty_data_max\fP -> 100%
2869 for the I/O scheduler to reach optimal throughput on the back-end storage,