.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd June 28, 2023
.Dt ZPOOL-ATTACH 8
.Os
.
.Sh NAME
.Nm zpool-attach
.Nd attach new device to existing ZFS vdev
.Sh SYNOPSIS
.Nm zpool
.Cm attach
.Op Fl fsw
.Oo Fl o Ar property Ns = Ns Ar value Oc
.Ar pool device new_device
.
.Sh DESCRIPTION
Attaches
.Ar new_device
to the existing
.Ar device .
The behavior differs depending on whether the existing
.Ar device
is a RAID-Z device or a mirror/plain device.
.Pp
If the existing device is a mirror or plain device
.Pq e.g. specified as Qo Li sda Qc or Qq Li mirror-7 ,
the new device will be mirrored with the existing device, a resilver will be
initiated, and the new device will contribute additional redundancy once the
resilver completes.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately and any running scrub is cancelled.
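.Pp
For example, assuming a hypothetical pool
.Ar tank
whose only disk is
.Ar sda ,
the following would attach
.Ar sdb ,
converting the plain device into a two-way mirror and starting a resilver:
.Bd -literal -compact -offset Ds
# zpool attach tank sda sdb
.Ed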
.Pp
If the existing device is a RAID-Z device
.Pq e.g. specified as Qq Ar raidz2-0 ,
the new device will become part of that RAID-Z group.
A "raidz expansion" will be initiated, and once the expansion completes,
the new device will contribute additional space to the RAID-Z group.
The expansion entails reading all allocated space from existing disks in the
RAID-Z group, and rewriting it to the new disks in the RAID-Z group (including
the newly added
.Ar new_device ) .
Its progress can be monitored with
.Nm zpool Cm status .
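.Pp
For example, assuming a hypothetical pool
.Ar tank
with a RAID-Z2 vdev named
.Ar raidz2-0 ,
the following would start an expansion onto a new disk
.Ar sdf
and then report its progress:
.Bd -literal -compact -offset Ds
# zpool attach tank raidz2-0 sdf
# zpool status tank
.Ed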
.Pp
Data redundancy is maintained during and after the expansion.
If a disk fails while the expansion is in progress, the expansion pauses until
the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk
and waiting for reconstruction to complete).
Expansion does not change the number of failures that can be tolerated
without data loss (e.g. a RAID-Z2 is still a RAID-Z2 even after expansion).
A RAID-Z vdev can be expanded multiple times.
.Pp
After the expansion completes, old blocks retain their old data-to-parity
ratio
.Pq e.g. 5-wide RAID-Z2 has 3 data and 2 parity
but distributed among the larger set of disks.
New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide
RAID-Z2 which has been expanded once to 6-wide has 4 data and 2 parity).
However, the vdev's assumed parity ratio does not change, so slightly less
space than is expected may be reported for newly-written blocks, according to
.Nm zfs Cm list ,
.Nm df ,
.Nm ls Fl s ,
and similar tools.
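.Pp
To illustrate with the figures above, for a 5-wide RAID-Z2 expanded once to
6-wide:
.Bd -literal -compact -offset Ds
old blocks: 3 data + 2 parity  ->  3/5 of the raw space holds data (60%)
new blocks: 4 data + 2 parity  ->  4/6 of the raw space holds data (~67%)
.Ed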
.Pp
A pool-wide scrub is initiated at the end of the expansion in order to verify
the checksums of all blocks which have been copied during the expansion.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Xr zpoolprops 7
manual page for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift ,
as shown in the example following this list.
.It Fl s
When attaching to a mirror or plain device, the
.Ar new_device
is reconstructed sequentially to restore redundancy as quickly as possible.
Checksums are not verified during sequential reconstruction, so a scrub is
started when the resilver completes.
.It Fl w
Waits until
.Ar new_device
has finished resilvering or expanding before returning.
.El
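.Pp
As an illustrative combination of these flags, assuming the hypothetical pool
.Ar tank
from the examples above, the following would attach
.Ar sdb
to
.Ar sda
with a sequential resilver, set the
.Sy ashift
property (12 is used here purely as an example value), and wait for the
resilver to finish before returning:
.Bd -literal -compact -offset Ds
# zpool attach -sw -o ashift=12 tank sda sdb
.Ed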
.
.Sh SEE ALSO
.Xr zpool-add 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-online 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8