9 .\" or https://opensource.org/licenses/CDDL-1.0.
27 .\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
39 A "virtual device" describes a single device or a collection of devices,
42 .Bl -tag -width "special"
44 A block device, typically located under
68 .Em N No disks of size Em X No can hold Em X No bytes and can withstand Em N-1
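For example, a two-way mirror might be created as follows
(the device names are illustrative):
.Dl # Nm zpool Cm create Ar tank Sy mirror Ar sda sdb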
.It Sy raidz , raidz1 , raidz2 , raidz3
A distributed-parity layout, similar to RAID-5/6, with improved distribution of
parity, and which does not suffer from the RAID-5/6
.Qq write hole
.Pq in which data and parity become inconsistent after a power loss .
The
.Sy raidz1
vdev type specifies a single-parity raidz group; the
.Sy raidz2
vdev type specifies a double-parity raidz group; and the
.Sy raidz3
vdev type specifies a triple-parity raidz group.
A raidz group with
.Em N No disks of size Em X No with Em P No parity disks can hold approximately
.Em (N-P)*X No bytes and can withstand Em P No devices failing without losing data .
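For example, a double-parity raidz group of six disks, holding approximately
four disks' worth of data, might be created as follows
(the device names are illustrative):
.Dl # Nm zpool Cm create Ar tank Sy raidz2 Ar sda sdb sdc sdd sde sdf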
Delivered random IOPS can be reasonably estimated as
.Sy floor((N-S)/(D+P))*single_drive_IOPS .
Like raidz, a dRAID can have single-, double-, or triple-parity.
The
.Sy draid , draid1 , draid2 No and Sy draid3
types can be used to specify the parity level.
A dRAID with
.Em N No disks of size Em X , D No data disks per redundancy group, Em P
.No parity level, and Em S No distributed hot spares can hold approximately
.Em (N-S)*(D/(D+P))*X No bytes and can withstand Em P
devices failing without losing data.
A non-default dRAID configuration can be specified by appending one or more
of the following optional arguments to the
.Sy draid
keyword:
.Bl -tag -compact -width "children"
.It Ar parity
The parity level (1-3).
.It Ar data
The number of data devices per redundancy group; defaults to
.Em 8 , No unless Em N-P-S No is less than Em 8 .
.It Ar children
The expected number of children.
Useful as a cross-check when listing a large number of devices.
.El
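For example, a dRAID vdev with double parity, four data disks per redundancy
group, eleven children, and one distributed spare might be created as follows
(the device names are illustrative):
.Dl # Nm zpool Cm create Ar tank Sy draid2:4d:11c:1s Ar sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk
Per the capacity formula above, such a layout on disks of size
.Em X No holds approximately Em (11-1)*(4/(4+2))*X No = Em 6.7*X No bytes .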
.It Sy spare
A pseudo-vdev which keeps track of available hot spares for a pool.
.It Sy log
A separate intent log device.
If more than one log device is specified, then writes are load-balanced between
the devices.
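For example, a mirrored log device might be added to an existing pool as
follows (the device names are illustrative):
.Dl # Nm zpool Cm add Ar tank Sy log mirror Ar sdg sdh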
.It Sy dedup
A device solely dedicated for deduplication tables.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one dedup device is specified, then
allocations are load-balanced between those devices.
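For example, a mirrored dedup device, matching the redundancy of a mirrored
pool, might be added as follows (the device names are illustrative):
.Dl # Nm zpool Cm add Ar tank Sy dedup mirror Ar sdi sdj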
.It Sy special
A device dedicated solely for allocating various kinds of internal metadata,
and optionally small file blocks.
The redundancy of this device should match the redundancy of the other normal
devices in the pool.
If more than one special device is specified, then
allocations are load-balanced between those devices.
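For example, a mirrored special device might be added as follows
(the device names are illustrative):
.Dl # Nm zpool Cm add Ar tank Sy special mirror Ar sdk sdl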
.It Sy cache
A device used to cache storage pool data.
A cache device cannot be configured as a mirror or raidz group.
.El
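For example, a cache device might be added as follows
(the device name is illustrative):
.Dl # Nm zpool Cm add Ar tank Sy cache Ar sdm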
A mirror, raidz or draid virtual device can only be created with files or disks.
Data is dynamically distributed across all top-level devices to balance data
among devices.
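For example, a pool that dynamically stripes data across two top-level mirrors
might be created as follows (the device names are illustrative):
.Dl # Nm zpool Cm create Ar tank Sy mirror Ar sda sdb Sy mirror Ar sdc sdd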
.Ss Device Failure and Recovery
ZFS supports a rich set of mechanisms for handling device failure and data
corruption.
While ZFS supports running in a non-redundant configuration, where each root
vdev is simply a disk or file, this is strongly discouraged.
The health of the top-level vdev, such as a mirror or raidz device,
is potentially impacted by the state of its associated vdevs or component
devices.
A top-level vdev or component device is in one of the following states:
.Bl -tag -width "DEGRADED"
.It Sy DEGRADED
One or more top-level vdevs is in the degraded state because one or more
component devices are offline.
Sufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet -compact
.It
The number of checksum errors exceeds acceptable levels and the
device is degraded as an indication that something may be wrong.
ZFS continues to use the device as necessary.
.It
The number of I/O errors exceeds acceptable levels.
The device could not be marked as faulted because there are insufficient
replicas to continue functioning.
.El
.It Sy FAULTED
One or more top-level vdevs is in the faulted state because one or more
component devices are offline.
Insufficient replicas exist to continue functioning.
The underlying conditions are as follows:
.Bl -bullet -compact
.It
The device could be opened, but the contents did not match expected values.
.It
The number of I/O errors exceeds acceptable levels and the device is faulted to
prevent further use of the device.
.El
.It Sy OFFLINE
The device was explicitly taken offline by the
.Nm zpool Cm offline
command.
.It Sy ONLINE
The device is online and functioning.
.It Sy REMOVED
The device was physically removed while the system was running.
Device removal detection is hardware-dependent and may not be supported on all
platforms.
.It Sy UNAVAIL
The device could not be opened.
If a pool is imported when a device was unavailable, then the device will be
identified by a unique identifier instead of its path, since the path was never
correct in the first place.
.El
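The state of a pool and its devices can be inspected with
.Nm zpool Cm status ,
and a device can be returned to service with
.Nm zpool Cm online
(the device name below is illustrative):
.Dl # Nm zpool Cm status Ar tank
.Dl # Nm zpool Cm online Ar tank sdb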
If a device is removed and later re-attached to the system,
ZFS attempts to bring the device online automatically.
Device attachment detection is hardware-dependent
and might not be supported on all platforms.
When an active device fails, an available hot spare is automatically
substituted; the spare remains in the configuration until the
original device is replaced.
At this point, the hot spare becomes available again, if another device fails.
If the pools are imported on different hosts,
and both pools suffer a device failure at the same time,
both could attempt to use the spare at the same time, which may result in data
corruption.
An in-progress spare replacement can be cancelled by detaching the hot spare.
If the original faulted device is detached, then the hot spare assumes its
place in the configuration.
A dRAID vdev's distributed hot spares are named by parity level, vdev number,
and spare number
.Po Sy draid1 Ns - Ns Ar 2 Ns - Ns Ar 3 No specifies spare Ar 3 No of vdev Ar 2 ,
which has single parity
.Pc .
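For example, a hot spare might be added to a pool, and a faulted device
replaced with it, as follows (the device names are illustrative):
.Dl # Nm zpool Cm add Ar tank Sy spare Ar sdn
.Dl # Nm zpool Cm replace Ar tank sdb sdn
To cancel an in-progress spare replacement:
.Dl # Nm zpool Cm detach Ar tank sdn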
Mirrored devices can be removed by specifying the top-level mirror vdev.
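For example, assuming the pool's second top-level vdev is named
.Ar mirror-1 :
.Dl # Nm zpool Cm remove Ar tank mirror-1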
For read-heavy workloads, where the working set size is much larger than what
can be cached in main memory, using cache devices allows much more of this
working set to be served from low-latency media.
Using cache devices provides the greatest performance improvement for random
read-workloads of mostly static content.
If a read error is encountered on a cache device, that read I/O is reissued to
the original storage pool device, which might be part of a mirrored or raidz
configuration.
The content of cache devices is persistent across reboots, and is restored
asynchronously in L2ARC when importing the pool.
The cache device header
.Pq 512 B
is updated even if no metadata structures are written or updated.
Setting
.Sy l2arc_headroom Ns = Ns Sy 0
will result in scanning the full-length ARC lists for cacheable content to be
written in L2ARC (persistent ARC).
If a cache device is added with
.Nm zpool Cm add ,
its label and header will be overwritten and its contents will not be
restored in L2ARC, even if the device was previously part of the pool.
If a cache device is onlined with
.Nm zpool Cm online ,
its contents will be restored in L2ARC.
This is useful in case of memory pressure,
where the contents of the cache device are not fully restored in L2ARC.
The user can off- and online the cache device when there is less memory
pressure, in order to fully restore its contents to L2ARC.
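For example, to off- and online a cache device so that its contents are fully
restored in L2ARC (the device name is illustrative):
.Dl # Nm zpool Cm offline Ar tank sdm
.Dl # Nm zpool Cm online Ar tank sdm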
A pool checkpoint can be thought of as a pool-wide snapshot and should be used
with care as it contains every part of the pool's state, from properties to
vdev configuration.
The pool can later be rewound back to the checkpoint at import time:
.Dl # Nm zpool Cm import Fl -rewind-to-checkpoint Ar pool
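A checkpoint is created with
.Nm zpool Cm checkpoint ,
and an existing checkpoint can be discarded with its
.Fl d
flag:
.Dl # Nm zpool Cm checkpoint Ar pool
.Dl # Nm zpool Cm checkpoint Fl d Ar pool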
When an allocation class vdev is full, allocations intended for it spill back
into the normal
.Pq non- Ns Sy dedup Ns /- Ns Sy special
class.
Inclusion of small file blocks in the special class is opt-in.
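Each dataset can opt in via the
.Sy special_small_blocks
property; for example, to allow blocks of 32 KiB and smaller into the special
class (the dataset name is illustrative):
.Dl # Nm zfs Cm set Ar special_small_blocks Ns = Ns Ar 32K pool/fs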