1 .\" SPDX-License-Identifier: CDDL-1.0
7 .\" You may not use this file except in compliance with the License.
10 .\" or https://opensource.org/licenses/CDDL-1.0.
28 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
43 .Bl -tag -width "special"
47 ZFS can use individual slices or partitions, though the recommended mode of
48 operation is to use whole disks.
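For example, a simple striped pool over two whole disks
.Pq the device names Ar sda No and Ar sdb No here are placeholders
could be created with:
.Dl # Nm zpool Cm create Ar tank Ar sda sdb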
The use of files as a backing store is strongly discouraged.
.Em N No disks of size Em X No can hold Em X No bytes and can withstand Em N-1
A distributed-parity layout, similar to RAID-5/6, with improved distribution of
parity, and which does not suffer from the RAID-5/6
.Pq in which data and parity become inconsistent after a power loss .
Data and parity are striped across all disks within a raidz group, though not
A raidz group can have single, double, or triple parity, meaning that the
vdev type specifies a single-parity raidz group; the
vdev type specifies a double-parity raidz group; and the
vdev type specifies a triple-parity raidz group.
.Em N No disks of size Em X No with Em P No parity disks can hold approximately
.Em (N-P)*X No bytes and can withstand Em P No devices failing without losing data .
parity disks.
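For example, assuming six equally sized placeholder disks
.Ar sda No through Ar sdf ,
a double-parity raidz group could be created with:
.Dl # Nm zpool Cm create Ar tank Sy raidz2 Ar sda sdb sdc sdd sde sdf
By the formula above, it holds approximately
.Em (6-2)*X No = Em 4*X No bytes and can withstand any two disks failing.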
.Em D No data devices and Em P No parity devices .
.Sy floor((N-S)/(D+P))*single_drive_IOPS .
Like raidz, a dRAID can have single-, double-, or triple-parity.
types can be used to specify the parity level.
.No parity level, and Em S No distributed hot spares can hold approximately
.Em (N-S)*(D/(D+P))*X No bytes and can withstand Em P
.It Sy draid Ns Oo Ar parity Oc Ns Oo Sy \&: Ns Ar data Ns Sy d Oc Ns Oo Sy \&: Ns Ar children Ns Sy c Oc Ns Oo Sy \&: Ns Ar spares Ns Sy s Oc
A non-default dRAID configuration can be specified by appending one or more
.Bl -tag -compact -width "children"
.It Ar parity
The parity level (1-3).
.Em 8 , No unless Em N-P-S No is less than Em 8 .
Useful as a cross-check when listing a large number of devices.
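For example, assuming eleven placeholder disks
.Ar sda No through Ar sdk ,
a single-parity dRAID with four data devices per redundancy group and one
distributed hot spare could be created with:
.Dl # Nm zpool Cm create Ar tank Sy draid1:4d:11c:1s Ar sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk
By the formulas above, it holds approximately
.Em (11-1)*(4/(4+1))*X No = Em 8*X No bytes and delivers roughly
.Em floor((11-1)/(4+1)) No = Em 2 No times the random read IOPS of a single drive.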
A pseudo-vdev which keeps track of available hot spares for a pool.
If more than one log device is specified, then writes are load-balanced between
allocations are load-balanced between those devices.
allocations are load-balanced between those devices.
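For example, assuming placeholder devices, a raidz pool with a mirrored log
device and a mirrored special device could be created with:
.Dl # Nm zpool Cm create Ar tank Sy raidz Ar sda sdb sdc Sy log mirror Ar sdd sde Sy special mirror Ar sdf sdg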
Data is dynamically distributed across all top-level devices to balance data
In order to take advantage of these features, a pool must make use of some form
While ZFS supports running in a non-redundant configuration, where each root
The health of the top-level vdev, such as a mirror or raidz device,
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
One or more top-level vdevs are in the degraded state because one or more
.Bl -bullet -compact
ZFS continues to use the device as necessary.
One or more top-level vdevs are in the faulted state because one or more
.Bl -bullet -compact
prevent further use of the device.
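The current state of each vdev can be inspected with, for example:
.Dl # Nm zpool Cm status Ar tank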
Device removal detection is hardware-dependent and may not be supported on all
(e.g. from raidz parity or a mirrored copy).
If a device is removed and later re-attached to the system,
Device attachment detection is hardware-dependent
exported, since other pools may use this shared spare, which may lead to
both could attempt to use the spare at the same time.
An in-progress spare replacement can be canceled by detaching the hot spare.
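For example, assuming hot spare
.Ar sdc No is actively replacing a device in pool
.Ar tank ,
the in-progress replacement could be canceled with:
.Dl # Nm zpool Cm detach Ar tank sdc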
.Po Sy draid1 Ns - Ns Ar 2 Ns - Ns Ar 3 No specifies spare Ar 3 No of vdev Ar 2 ,
.No which is a single-parity dRAID Pc
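For example, a failed disk
.Pq here the placeholder Ar sdb
could be replaced by the first distributed spare of the first single-parity
dRAID vdev with:
.Dl # Nm zpool Cm replace Ar tank sdb Sy draid1-0-0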
NFS and other applications can also use
Mirrored devices can be removed by specifying the top-level mirror vdev.
For read-heavy workloads, where the working set size is much larger than what
read workloads of mostly static content.
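For example, assuming placeholder device
.Ar sdc ,
a cache device could be added to an existing pool with:
.Dl # Nm zpool Cm add Ar tank Sy cache Ar sdc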
will result in scanning the full-length ARC lists for cacheable content to be
The user can offline and online the cache device when there is less memory
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
.Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar pool
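For example, a checkpoint of pool
.Ar tank No is taken, and later discarded, with:
.Dl # Nm zpool Cm checkpoint Ar tank
.Dl # Nm zpool Cm checkpoint Fl d Ar tank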
.Pq non- Ns Sy dedup Ns /- Ns Sy special
Inclusion of small file or zvol blocks in the special class is opt-in.
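For example, assuming placeholder dataset
.Ar tank/data ,
file blocks up to 32 KiB could be opted in to the special class with:
.Dl # Nm zfs Cm set Sy special_small_blocks Ns = Ns Sy 32K Ar tank/data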