1.\" SPDX-License-Identifier: CDDL-1.0
2.\"
3.\" CDDL HEADER START
4.\"
5.\" The contents of this file are subject to the terms of the
6.\" Common Development and Distribution License (the "License").
7.\" You may not use this file except in compliance with the License.
8.\"
9.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
10.\" or https://opensource.org/licenses/CDDL-1.0.
11.\" See the License for the specific language governing permissions
12.\" and limitations under the License.
13.\"
14.\" When distributing Covered Code, include this CDDL HEADER in each
15.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
16.\" If applicable, add the following below this CDDL HEADER, with the
17.\" fields enclosed by brackets "[]" replaced with your own identifying
18.\" information: Portions Copyright [yyyy] [name of copyright owner]
19.\"
20.\" CDDL HEADER END
21.\"
22.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
23.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
24.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
25.\" Copyright (c) 2017 Datto Inc.
26.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
27.\" Copyright 2017 Nexenta Systems, Inc.
28.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
29.\"
30.Dd June 28, 2023
31.Dt ZPOOL-ATTACH 8
32.Os
33.
34.Sh NAME
35.Nm zpool-attach
36.Nd attach new device to existing ZFS vdev
37.Sh SYNOPSIS
38.Nm zpool
39.Cm attach
40.Op Fl fsw
41.Oo Fl o Ar property Ns = Ns Ar value Oc
42.Ar pool device new_device
43.
44.Sh DESCRIPTION
Attaches
.Ar new_device
to the existing
.Ar device .
The behavior differs depending on whether the existing
.Ar device
is a RAID-Z device or a mirror/plain device.
.Pp
If the existing device is a mirror or plain device
.Pq e.g. specified as Qo Li sda Qc or Qq Li mirror-7 ,
the new device will be mirrored with the existing device, a resilver will be
initiated, and the new device will contribute additional redundancy once the
resilver completes.
If
.Ar device
is not currently part of a mirrored configuration,
.Ar device
automatically transforms into a two-way mirror of
.Ar device
and
.Ar new_device .
If
.Ar device
is part of a two-way mirror, attaching
.Ar new_device
creates a three-way mirror, and so on.
In either case,
.Ar new_device
begins to resilver immediately and any running scrub is canceled.
.Pp
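For example, assuming a hypothetical pool
.Qq tank
whose data currently sits on the single disk
.Pa sda ,
the following attaches a second disk and converts the vdev into a two-way
mirror:
.Bd -literal -compact -offset Ds
# zpool attach tank sda sdb
.Ed
.Pp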
If the existing device is a RAID-Z device
.Pq e.g. specified as Qq Ar raidz2-0 ,
the new device will become part of that RAID-Z group.
A
.Qq raidz expansion
will be initiated, and once the expansion completes,
the new device will contribute additional space to the RAID-Z group.
The expansion entails reading all allocated space from existing disks in the
RAID-Z group, and rewriting it to the new disks in the RAID-Z group (including
the newly added
.Ar new_device ) .
Its progress can be monitored with
.Nm zpool Cm status .
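.Pp
For example, assuming a hypothetical pool
.Qq tank
containing a RAID-Z2 group named
.Qq raidz2-0 ,
the following adds a disk to that group and begins the expansion, whose
progress can then be watched:
.Bd -literal -compact -offset Ds
# zpool attach tank raidz2-0 sdf
# zpool status tank
.Ed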
.Pp
Data redundancy is maintained during and after the expansion.
If a disk fails while the expansion is in progress, the expansion pauses until
the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk
and waiting for reconstruction to complete).
Expansion does not change the number of failures that can be tolerated
without data loss (e.g. a RAID-Z2 is still a RAID-Z2 even after expansion).
A RAID-Z vdev can be expanded multiple times.
.Pp
After the expansion completes, old blocks retain their old data-to-parity
ratio
.Pq e.g. a 5-wide RAID-Z2 has 3 data and 2 parity
but are now distributed among the larger set of disks.
New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide
RAID-Z2 which has been expanded once to 6-wide has 4 data and 2 parity).
However, the vdev's assumed parity ratio does not change, so slightly less
space than expected may be reported for newly-written blocks, according to
.Nm zfs Cm list ,
.Nm df ,
.Nm ls Fl s ,
and similar tools.
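.Pp
As a rough illustration, assuming space accounting simply deflates raw space
by the original data-to-parity ratio: on a 5-wide RAID-Z2 expanded once to
6-wide, a newly-written stripe of 4 data sectors occupies 6 raw sectors
(4 data + 2 parity), yet is counted as about 6 \(mu 3/5 = 3.6 sectors of
usable space rather than 4, roughly 10% less than expected.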
.Pp
A pool-wide scrub is initiated at the end of the expansion in order to verify
the checksums of all blocks which have been copied during the expansion.
.Bl -tag -width Ds
.It Fl f
Forces use of
.Ar new_device ,
even if it appears to be in use.
Not all devices can be overridden in this manner.
.It Fl o Ar property Ns = Ns Ar value
Sets the given pool properties.
See the
.Xr zpoolprops 7
manual page for a list of valid properties that can be set.
The only property supported at the moment is
.Sy ashift .
.It Fl s
When attaching to a mirror or plain device, the
.Ar new_device
is reconstructed sequentially to restore redundancy as quickly as possible.
Checksums are not verified during sequential reconstruction, so a scrub is
started when the resilver completes.
See the example following this list.
.It Fl w
Waits until
.Ar new_device
has finished resilvering or expanding before returning.
.El
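.Pp
For example, assuming a hypothetical pool
.Qq tank ,
the following attaches
.Pa sdb
to the existing disk
.Pa sda
using sequential reconstruction and waits until the resilver has completed
before returning:
.Bd -literal -compact -offset Ds
# zpool attach -s -w tank sda sdb
.Ed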
.
.Sh SEE ALSO
.Xr zpool-add 8 ,
.Xr zpool-detach 8 ,
.Xr zpool-import 8 ,
.Xr zpool-initialize 8 ,
.Xr zpool-online 8 ,
.Xr zpool-replace 8 ,
.Xr zpool-resilver 8