.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or https://opensource.org/licenses/CDDL-1.0.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
.\" Copyright (c) 2014 Integros [integros.com]
.\" Copyright 2019 Richard Laager. All rights reserved.
.\" Copyright 2018 Nexenta Systems, Inc.
.\" Copyright 2019 Joyent, Inc.
.\"
.Dd March 16, 2022
.Dt ZFS-DESTROY 8
.Os
.
.Sh NAME
.Nm zfs-destroy
.Nd destroy ZFS dataset, snapshots, or bookmark
.Sh SYNOPSIS
.Nm zfs
.Cm destroy
.Op Fl Rfnprv
.Ar filesystem Ns | Ns Ar volume
.Nm zfs
.Cm destroy
.Op Fl Rdnprv
.Ar filesystem Ns | Ns Ar volume Ns @ Ns Ar snap Ns
.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns …
.Nm zfs
.Cm destroy
.Ar filesystem Ns | Ns Ar volume Ns # Ns Ar bookmark
.
.Sh DESCRIPTION
.Bl -tag -width ""
.It Xo
.Nm zfs
.Cm destroy
.Op Fl Rfnprv
.Ar filesystem Ns | Ns Ar volume
.Xc
Destroys the given dataset.
By default, the command unshares any file systems that are currently shared,
unmounts any file systems that are currently mounted, and refuses to destroy a
dataset that has active dependents
.Pq children or clones .
.Bl -tag -width "-R"
.It Fl R
Recursively destroy all dependents, including cloned file systems outside the
target hierarchy.
.It Fl f
Forcibly unmount file systems.
This option has no effect on non-file systems or unmounted file systems.
.It Fl n
Do a dry-run
.Pq Qq No-op
deletion.
No data will be deleted.
This is useful in conjunction with the
.Fl v
or
.Fl p
flags to determine what data would be deleted.
.It Fl p
Print machine-parsable verbose information about the deleted data.
.It Fl r
Recursively destroy all children.
.It Fl v
Print verbose information about the deleted data.
.El
.Pp
Extreme care should be taken when applying either the
.Fl r
or the
.Fl R
options, as they can destroy large portions of a pool and cause unexpected
behavior for mounted file systems in use.
.It Xo
.Nm zfs
.Cm destroy
.Op Fl Rdnprv
.Ar filesystem Ns | Ns Ar volume Ns @ Ns Ar snap Ns
.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns …
.Xc
The given snapshots are destroyed immediately if and only if the
.Nm zfs Cm destroy
command without the
.Fl d
option would have destroyed them.
Such immediate destruction would occur, for example, if a snapshot had no
clones and the user-initiated reference count were zero.
.Pp
If a snapshot does not qualify for immediate destruction, it is marked for
deferred deletion.
In this state, it exists as a usable, visible snapshot until both of the
preconditions listed above are met, at which point it is destroyed.
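.Pp
For example, assuming an illustrative snapshot
.Ar pool/home Ns @ Ns Ar monday
that still has a clone, the following marks it for deferred destruction rather
than failing outright:
.Dl # Nm zfs Cm destroy Fl d Ar pool/home Ns @ Ns Ar monday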
.Pp
An inclusive range of snapshots may be specified by separating the first and
last snapshots with a percent sign.
The first and/or last snapshots may be left blank, in which case the
filesystem's oldest or newest snapshot will be implied.
.Pp
Multiple snapshots
.Pq or ranges of snapshots
of the same filesystem or volume may be specified in a comma-separated list of
snapshots.
Only the snapshot's short name
.Po the part after the
.Sy @
.Pc
should be specified when using a range or comma-separated list to identify
multiple snapshots.
.Bl -tag -width "-R"
.It Fl R
Recursively destroy all clones of these snapshots, including the clones,
snapshots, and children.
If this flag is specified, the
.Fl d
flag will have no effect.
.It Fl d
Destroy immediately.
If a snapshot cannot be destroyed now, mark it for deferred destruction.
.It Fl n
Do a dry-run
.Pq Qq No-op
deletion.
No data will be deleted.
This is useful in conjunction with the
.Fl p
or
.Fl v
flags to determine what data would be deleted.
.It Fl p
Print machine-parsable verbose information about the deleted data.
.It Fl r
Destroy
.Pq or mark for deferred deletion
all snapshots with this name in descendent file systems.
.It Fl v
Print verbose information about the deleted data.
.El
.Pp
Extreme care should be taken when applying either the
.Fl r
or the
.Fl R
options, as they can destroy large portions of a pool and cause unexpected
behavior for mounted file systems in use.
.It Xo
.Nm zfs
.Cm destroy
.Ar filesystem Ns | Ns Ar volume Ns # Ns Ar bookmark
.Xc
The given bookmark is destroyed.
.El
.
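.Pp
As an illustration of range syntax, assuming snapshots
.Ar snap1
through
.Ar snap3
of an illustrative
.Ar pool/home
dataset, the following dry run lists every snapshot the range would destroy
without deleting any data:
.Dl # Nm zfs Cm destroy Fl nv Ar pool/home Ns @ Ns Ar snap1 Ns % Ns Ar snap3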
.Sh EXAMPLES
.\" These are, respectively, examples 3, 10, 15 from zfs.8
.\" Make sure to update them bidirectionally
.Ss Example 1 : No Creating and Destroying Multiple Snapshots
The following command creates snapshots named
.Ar yesterday No of Ar pool/home
and all of its descendent file systems.
Each snapshot is mounted on demand in the
.Pa .zfs/snapshot
directory at the root of its file system.
The second command destroys the newly created snapshots.
.Dl # Nm zfs Cm snapshot Fl r Ar pool/home Ns @ Ns Ar yesterday
.Dl # Nm zfs Cm destroy Fl r Ar pool/home Ns @ Ns Ar yesterday
.
.Ss Example 2 : No Promoting a ZFS Clone
The following commands illustrate how to test out changes to a file system, and
then replace the original file system with the changed one, using clones, clone
promotion, and renaming:
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm create Ar pool/project/production
 populate /pool/project/production with data
.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
 make changes to /pool/project/beta and test them
.No # Nm zfs Cm promote Ar pool/project/beta
.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
 once the legacy version is no longer needed, it can be destroyed
.No # Nm zfs Cm destroy Ar pool/project/legacy
.Ed
.
.Ss Example 3 : No Performing a Rolling Snapshot
The following example shows how to maintain a history of snapshots with a
consistent naming scheme.
To keep a week's worth of snapshots, the user destroys the oldest snapshot,
renames the remaining snapshots, and then creates a new snapshot, as follows:
.Bd -literal -compact -offset Ds
.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago
.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday
.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today
.Ed
.
.Sh SEE ALSO
.Xr zfs-create 8 ,
.Xr zfs-hold 8