/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

/*
 * Virtual Device Labels
 * ---------------------
 *
 * The vdev label serves several distinct purposes:
 *
 *	1. Uniquely identify this device as part of a ZFS pool and confirm its
 *	   identity within the pool.
 *
 *	2. Verify that all the devices given in a configuration are present
 *	   within the pool.
 *
 *	3. Determine the uberblock for the pool.
 *
 *	4. In case of an import operation, determine the configuration of the
 *	   top-level vdev of which it is a part.
 *
 *	5. If an import operation cannot find all the devices in the pool,
 *	   provide enough information to the administrator to determine which
 *	   devices are missing.
 *
 * It is important to note that while the kernel is responsible for writing
 * the label, it only consumes the information in the first three cases.  The
 * information for the last two cases is only consumed in userland when
 * determining the configuration to import a pool.
 *
 *
 * Label Organization
 * ------------------
 *
 * Before describing the contents of the label, it's important to understand
 * how the labels are written and updated with respect to the uberblock.
 *
 * When the pool configuration is altered, either because it was newly created
 * or a device was added, we want to update all the labels such that we can
 * deal with fatal failure at any point.  To this end, each disk has two labels
 * which are updated before and after the uberblock is synced.  Assuming we
 * have labels and an uberblock with the following transaction groups:
 *
 *	  L1	    UB	      L2
 *	+------+  +------+  +------+
 *	|      |  |      |  |      |
 *	| t10  |  | t10  |  | t10  |
 *	|      |  |      |  |      |
 *	+------+  +------+  +------+
 *
 * In this stable state, the labels and the uberblock were all updated within
 * the same transaction group (10).  Each label is mirrored and checksummed, so
 * that we can detect when we fail partway through writing the label.
 *
 * In order to identify which labels are valid, the labels are written in the
 * following manner:
 *
 *	1. For each vdev, update 'L1' to the new label
 *	2. Update the uberblock
 *	3. For each vdev, update 'L2' to the new label
 *
 * Given arbitrary failure, we can determine the correct label to use based on
 * the transaction group.  If we fail after updating L1 but before updating the
 * UB, we will notice that L1's transaction group is greater than the
 * uberblock, so L2 must be valid.  If we fail after writing the uberblock but
 * before writing L2, we will notice that L2's transaction group is less than
 * L1, and therefore L1 is valid.
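 *
 * Schematically, the selection rule reduces to the following sketch, where
 * txg(x) denotes the transaction group recorded in x:
 *
 *	if (txg(L1) > txg(UB))		-> died between steps 1 and 2; use L2
 *	else if (txg(L2) < txg(L1))	-> died between steps 2 and 3; use L1
 *	else				-> all updates completed; use either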
 *
 * Another added complexity is that not every label is updated when the config
 * is synced.  If we add a single device, we do not want to have to re-write
 * every label for every device in the pool.  This means that both L1 and L2
 * may be older than the pool uberblock, because the necessary information is
 * stored on another vdev.
 *
 *
 * On-disk Format
 * --------------
 *
 * The vdev label consists of two distinct parts, and is wrapped within the
 * vdev_label_t structure.  The label includes 8k of padding to permit legacy
 * VTOC disk labels; this padding is otherwise ignored.
 *
 * The first half of the label is a packed nvlist which contains pool-wide
 * properties, per-vdev properties, and configuration information.  It is
 * described in more detail below.
 *
 * The latter half of the label consists of a redundant array of uberblocks.
 * These uberblocks are updated whenever a transaction group is committed,
 * or when the configuration is updated.  When a pool is loaded, we scan each
 * vdev for the 'best' uberblock.
 *
 *
 * Configuration Information
 * -------------------------
 *
 * The nvlist describing the pool and vdev contains the following elements:
 *
 *	version		ZFS on-disk version
 *	name		Pool name
 *	state		Pool state
 *	txg		Transaction group in which this label was written
 *	pool_guid	Unique identifier for this pool
 *	vdev_tree	An nvlist describing the vdev tree.
 *
 * Each leaf device label also contains the following:
 *
 *	top_guid	Unique ID for top-level vdev in which this is contained
 *	guid		Unique ID for the leaf vdev
 *
 * The 'vs' configuration follows the format described in 'spa_config.c'.
 */

#include <sys/zfs_context.h>
#include <sys/spa.h>
#include <sys/spa_impl.h>
#include <sys/dmu.h>
#include <sys/zap.h>
#include <sys/vdev.h>
#include <sys/vdev_impl.h>
#include <sys/uberblock_impl.h>
#include <sys/metaslab.h>
#include <sys/zio.h>
#include <sys/fs/zfs.h>

/*
 * Basic routines to read and write from a vdev label.
 * Used throughout the rest of this file.
 */
uint64_t
vdev_label_offset(uint64_t psize, int l, uint64_t offset)
{
	ASSERT(offset < sizeof (vdev_label_t));
	ASSERT(P2PHASE_TYPED(psize, sizeof (vdev_label_t), uint64_t) == 0);

	return (offset + l * sizeof (vdev_label_t) + (l < VDEV_LABELS / 2 ?
	    0 : psize - VDEV_LABELS * sizeof (vdev_label_t)));
}

/*
 * Returns the vdev label number associated with the given byte offset,
 * or -1 if the offset does not fall within any label.
 */
int
vdev_label_number(uint64_t psize, uint64_t offset)
{
	int l;

	if (offset >= psize - VDEV_LABEL_END_SIZE) {
		offset -= psize - VDEV_LABEL_END_SIZE;
		offset += (VDEV_LABELS / 2) * sizeof (vdev_label_t);
	}
	l = offset / sizeof (vdev_label_t);
	return (l < VDEV_LABELS ? l : -1);
}
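
/*
 * For illustration (a sketch, not normative): with VDEV_LABELS == 4 and a
 * 256K vdev_label_t, the two leading and two trailing labels land at:
 *
 *	L0: vdev_label_offset(psize, 0, 0) == 0
 *	L1: vdev_label_offset(psize, 1, 0) == 256K
 *	L2: vdev_label_offset(psize, 2, 0) == psize - 512K
 *	L3: vdev_label_offset(psize, 3, 0) == psize - 256K
 *
 * vdev_label_number() inverts this mapping for any byte offset that falls
 * within one of the four labels.
 */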

static void
vdev_label_read(zio_t *zio, vdev_t *vd, int l, void *buf, uint64_t offset,
	uint64_t size, zio_done_func_t *done, void *private, int flags)
{
	ASSERT(spa_config_held(zio->io_spa, SCL_STATE_ALL, RW_WRITER) ==
	    SCL_STATE_ALL);
	ASSERT(flags & ZIO_FLAG_CONFIG_WRITER);

	zio_nowait(zio_read_phys(zio, vd,
	    vdev_label_offset(vd->vdev_psize, l, offset),
	    size, buf, ZIO_CHECKSUM_LABEL, done, private,
	    ZIO_PRIORITY_SYNC_READ, flags, B_TRUE));
}

static void
vdev_label_write(zio_t *zio, vdev_t *vd, int l, void *buf, uint64_t offset,
	uint64_t size, zio_done_func_t *done, void *private, int flags)
{
	ASSERT(spa_config_held(zio->io_spa, SCL_ALL, RW_WRITER) == SCL_ALL ||
	    (spa_config_held(zio->io_spa, SCL_CONFIG | SCL_STATE, RW_READER) ==
	    (SCL_CONFIG | SCL_STATE) &&
	    dsl_pool_sync_context(spa_get_dsl(zio->io_spa))));
	ASSERT(flags & ZIO_FLAG_CONFIG_WRITER);

	zio_nowait(zio_write_phys(zio, vd,
	    vdev_label_offset(vd->vdev_psize, l, offset),
	    size, buf, ZIO_CHECKSUM_LABEL, done, private,
	    ZIO_PRIORITY_SYNC_WRITE, flags, B_TRUE));
}
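
/*
 * Both helpers issue asynchronous child I/O under a root zio; callers follow
 * the pattern below (a condensed sketch -- see vdev_label_read_config() for
 * the real thing, including error handling and retry):
 *
 *	zio = zio_root(spa, NULL, NULL, flags);
 *	vdev_label_read(zio, vd, l, buf, offset, size, done, private, flags);
 *	error = zio_wait(zio);
 */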

/*
 * Generate the nvlist representing this vdev's config.
 */
nvlist_t *
vdev_config_generate(spa_t *spa, vdev_t *vd, boolean_t getstats,
    boolean_t isspare, boolean_t isl2cache)
{
	nvlist_t *nv = NULL;

	VERIFY(nvlist_alloc(&nv, NV_UNIQUE_NAME, KM_SLEEP) == 0);

	VERIFY(nvlist_add_string(nv, ZPOOL_CONFIG_TYPE,
	    vd->vdev_ops->vdev_op_type) == 0);
	if (!isspare && !isl2cache)
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_ID, vd->vdev_id)
		    == 0);
	VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_GUID, vd->vdev_guid) == 0);

	if (vd->vdev_path != NULL)
		VERIFY(nvlist_add_string(nv, ZPOOL_CONFIG_PATH,
		    vd->vdev_path) == 0);

	if (vd->vdev_devid != NULL)
		VERIFY(nvlist_add_string(nv, ZPOOL_CONFIG_DEVID,
		    vd->vdev_devid) == 0);

	if (vd->vdev_physpath != NULL)
		VERIFY(nvlist_add_string(nv, ZPOOL_CONFIG_PHYS_PATH,
		    vd->vdev_physpath) == 0);

	if (vd->vdev_fru != NULL)
		VERIFY(nvlist_add_string(nv, ZPOOL_CONFIG_FRU,
		    vd->vdev_fru) == 0);

	if (vd->vdev_nparity != 0) {
		ASSERT(strcmp(vd->vdev_ops->vdev_op_type,
		    VDEV_TYPE_RAIDZ) == 0);

		/*
		 * Make sure someone hasn't managed to sneak a fancy new vdev
		 * into a crufty old storage pool.
		 */
		ASSERT(vd->vdev_nparity == 1 ||
		    (vd->vdev_nparity <= 2 &&
		    spa_version(spa) >= SPA_VERSION_RAIDZ2) ||
		    (vd->vdev_nparity <= 3 &&
		    spa_version(spa) >= SPA_VERSION_RAIDZ3));

		/*
		 * Note that we'll add the nparity tag even on storage pools
		 * that only support a single parity device -- older software
		 * will just ignore it.
		 */
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_NPARITY,
		    vd->vdev_nparity) == 0);
	}

	if (vd->vdev_wholedisk != -1ULL)
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_WHOLE_DISK,
		    vd->vdev_wholedisk) == 0);

	if (vd->vdev_not_present)
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_NOT_PRESENT, 1) == 0);

	if (vd->vdev_isspare)
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_IS_SPARE, 1) == 0);

	if (!isspare && !isl2cache && vd == vd->vdev_top) {
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_METASLAB_ARRAY,
		    vd->vdev_ms_array) == 0);
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_METASLAB_SHIFT,
		    vd->vdev_ms_shift) == 0);
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_ASHIFT,
		    vd->vdev_ashift) == 0);
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_ASIZE,
		    vd->vdev_asize) == 0);
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_IS_LOG,
		    vd->vdev_islog) == 0);
	}

	if (vd->vdev_dtl_smo.smo_object != 0)
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_DTL,
		    vd->vdev_dtl_smo.smo_object) == 0);

	if (vd->vdev_crtxg)
		VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_CREATE_TXG,
		    vd->vdev_crtxg) == 0);

	if (getstats) {
		vdev_stat_t vs;
		vdev_get_stats(vd, &vs);
		VERIFY(nvlist_add_uint64_array(nv, ZPOOL_CONFIG_STATS,
		    (uint64_t *)&vs, sizeof (vs) / sizeof (uint64_t)) == 0);
	}

	if (!vd->vdev_ops->vdev_op_leaf) {
		nvlist_t **child;
		int c;

		ASSERT(!vd->vdev_ishole);

		child = kmem_alloc(vd->vdev_children * sizeof (nvlist_t *),
		    KM_SLEEP);

		for (c = 0; c < vd->vdev_children; c++)
			child[c] = vdev_config_generate(spa, vd->vdev_child[c],
			    getstats, isspare, isl2cache);

		VERIFY(nvlist_add_nvlist_array(nv, ZPOOL_CONFIG_CHILDREN,
		    child, vd->vdev_children) == 0);

		for (c = 0; c < vd->vdev_children; c++)
			nvlist_free(child[c]);

		kmem_free(child, vd->vdev_children * sizeof (nvlist_t *));

	} else {
		if (vd->vdev_offline && !vd->vdev_tmpoffline)
			VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_OFFLINE,
			    B_TRUE) == 0);
		if (vd->vdev_faulted)
			VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_FAULTED,
			    B_TRUE) == 0);
		if (vd->vdev_degraded)
			VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_DEGRADED,
			    B_TRUE) == 0);
		if (vd->vdev_removed)
			VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_REMOVED,
			    B_TRUE) == 0);
		if (vd->vdev_unspare)
			VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_UNSPARE,
			    B_TRUE) == 0);
		if (vd->vdev_ishole)
			VERIFY(nvlist_add_uint64(nv, ZPOOL_CONFIG_IS_HOLE,
			    B_TRUE) == 0);
	}

	return (nv);
}
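
/*
 * For example (schematic only; field values are hypothetical), the recursion
 * above might produce the following for a two-way mirror top-level vdev:
 *
 *	type='mirror' id=0 guid=...
 *	    children[0]: type='disk' guid=... path='/dev/dsk/c0t0d0s0'
 *	    children[1]: type='disk' guid=... path='/dev/dsk/c0t1d0s0'
 */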

/*
 * Generate a view of the top-level vdevs.  If we currently have holes
 * in the namespace, then generate an array which contains a list of holey
 * vdevs.  Additionally, add the number of top-level children that currently
 * exist.
 */
void
vdev_top_config_generate(spa_t *spa, nvlist_t *config)
{
	vdev_t *rvd = spa->spa_root_vdev;
	uint64_t *array;
	uint_t idx;

	array = kmem_alloc(rvd->vdev_children * sizeof (uint64_t), KM_SLEEP);

	idx = 0;
	for (int c = 0; c < rvd->vdev_children; c++) {
		vdev_t *tvd = rvd->vdev_child[c];

		if (tvd->vdev_ishole)
			array[idx++] = c;
	}

	if (idx) {
		VERIFY(nvlist_add_uint64_array(config, ZPOOL_CONFIG_HOLE_ARRAY,
		    array, idx) == 0);
	}

	VERIFY(nvlist_add_uint64(config, ZPOOL_CONFIG_VDEV_CHILDREN,
	    rvd->vdev_children) == 0);

	kmem_free(array, rvd->vdev_children * sizeof (uint64_t));
}

nvlist_t *
vdev_label_read_config(vdev_t *vd)
{
	spa_t *spa = vd->vdev_spa;
	nvlist_t *config = NULL;
	vdev_phys_t *vp;
	zio_t *zio;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_SPECULATIVE;

	ASSERT(spa_config_held(spa, SCL_STATE_ALL, RW_WRITER) == SCL_STATE_ALL);

	if (!vdev_readable(vd))
		return (NULL);

	vp = zio_buf_alloc(sizeof (vdev_phys_t));

retry:
	for (int l = 0; l < VDEV_LABELS; l++) {

		zio = zio_root(spa, NULL, NULL, flags);

		vdev_label_read(zio, vd, l, vp,
		    offsetof(vdev_label_t, vl_vdev_phys),
		    sizeof (vdev_phys_t), NULL, NULL, flags);

		if (zio_wait(zio) == 0 &&
		    nvlist_unpack(vp->vp_nvlist, sizeof (vp->vp_nvlist),
		    &config, 0) == 0)
			break;

		if (config != NULL) {
			nvlist_free(config);
			config = NULL;
		}
	}

	if (config == NULL && !(flags & ZIO_FLAG_TRYHARD)) {
		flags |= ZIO_FLAG_TRYHARD;
		goto retry;
	}

	zio_buf_free(vp, sizeof (vdev_phys_t));

	return (config);
}
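
/*
 * Note the two-pass idiom above: the first pass reads each label with
 * fail-fast, speculative I/O; only if no label yields a valid nvlist do we
 * retry with ZIO_FLAG_TRYHARD.  The label and uberblock writers later in
 * this file use the same pattern.
 */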

/*
 * Determine if a device is in use.  The 'spare_guid' parameter will be filled
 * in with the device guid if this spare is active elsewhere on the system.
 */
static boolean_t
vdev_inuse(vdev_t *vd, uint64_t crtxg, vdev_labeltype_t reason,
    uint64_t *spare_guid, uint64_t *l2cache_guid)
{
	spa_t *spa = vd->vdev_spa;
	uint64_t state, pool_guid, device_guid, txg, spare_pool;
	uint64_t vdtxg = 0;
	nvlist_t *label;

	if (spare_guid)
		*spare_guid = 0ULL;
	if (l2cache_guid)
		*l2cache_guid = 0ULL;

	/*
	 * Read the label, if any, and perform some basic sanity checks.
	 */
	if ((label = vdev_label_read_config(vd)) == NULL)
		return (B_FALSE);

	(void) nvlist_lookup_uint64(label, ZPOOL_CONFIG_CREATE_TXG,
	    &vdtxg);

	if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_STATE,
	    &state) != 0 ||
	    nvlist_lookup_uint64(label, ZPOOL_CONFIG_GUID,
	    &device_guid) != 0) {
		nvlist_free(label);
		return (B_FALSE);
	}

	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_GUID,
	    &pool_guid) != 0 ||
	    nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_TXG,
	    &txg) != 0)) {
		nvlist_free(label);
		return (B_FALSE);
	}

	nvlist_free(label);

	/*
	 * Check to see if this device indeed belongs to the pool it claims to
	 * be a part of.  The only way this is allowed is if the device is a
	 * hot spare (which we check for later on).
	 */
	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    !spa_guid_exists(pool_guid, device_guid) &&
	    !spa_spare_exists(device_guid, NULL, NULL) &&
	    !spa_l2cache_exists(device_guid, NULL))
		return (B_FALSE);

	/*
	 * If the transaction group is zero, then this is an initialized (but
	 * unused) label.  This is only an error if the create transaction
	 * on-disk is the same as the one we're using now, in which case the
	 * user has attempted to add the same vdev multiple times in the same
	 * transaction.
	 */
	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    txg == 0 && vdtxg == crtxg)
		return (B_TRUE);

	/*
	 * Check to see if this is a spare device.  We do an explicit check for
	 * spa_has_spare() here because it may be on our pending list of spares
	 * to add.  We also check if it is an l2cache device.
	 */
	if (spa_spare_exists(device_guid, &spare_pool, NULL) ||
	    spa_has_spare(spa, device_guid)) {
		if (spare_guid)
			*spare_guid = device_guid;

		switch (reason) {
		case VDEV_LABEL_CREATE:
		case VDEV_LABEL_L2CACHE:
			return (B_TRUE);

		case VDEV_LABEL_REPLACE:
			return (!spa_has_spare(spa, device_guid) ||
			    spare_pool != 0ULL);

		case VDEV_LABEL_SPARE:
			return (spa_has_spare(spa, device_guid));
		}
	}

	/*
	 * Check to see if this is an l2cache device.
	 */
	if (spa_l2cache_exists(device_guid, NULL))
		return (B_TRUE);

	/*
	 * If the device is marked ACTIVE, then this device is in use by
	 * another pool on the system.
	 */
	return (state == POOL_STATE_ACTIVE);
}
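
/*
 * For example, 'zpool create' and 'zpool add' ultimately reach
 * vdev_label_init() below with reason == VDEV_LABEL_CREATE; a disk that
 * still carries an ACTIVE label from another pool makes vdev_inuse()
 * return B_TRUE, and the operation fails with EBUSY.
 */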

/*
 * Initialize a vdev label.  We check to make sure each leaf device is not in
 * use, and writable.  We put down an initial label which we will later
 * overwrite with a complete label.  Note that it's important to do this
 * sequentially, not in parallel, so that we catch cases of multiple use of the
 * same leaf vdev in the vdev we're creating -- e.g. mirroring a disk with
 * itself.
 */
int
vdev_label_init(vdev_t *vd, uint64_t crtxg, vdev_labeltype_t reason)
{
	spa_t *spa = vd->vdev_spa;
	nvlist_t *label;
	vdev_phys_t *vp;
	char *pad2;
	uberblock_t *ub;
	zio_t *zio;
	char *buf;
	size_t buflen;
	int error;
	uint64_t spare_guid, l2cache_guid;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL;

	ASSERT(spa_config_held(spa, SCL_ALL, RW_WRITER) == SCL_ALL);

	for (int c = 0; c < vd->vdev_children; c++)
		if ((error = vdev_label_init(vd->vdev_child[c],
		    crtxg, reason)) != 0)
			return (error);

	/* Track the creation time for this vdev */
	vd->vdev_crtxg = crtxg;

	if (!vd->vdev_ops->vdev_op_leaf)
		return (0);

	/*
	 * Dead vdevs cannot be initialized.
	 */
	if (vdev_is_dead(vd))
		return (EIO);

	/*
	 * Determine if the vdev is in use.
	 */
	if (reason != VDEV_LABEL_REMOVE &&
	    vdev_inuse(vd, crtxg, reason, &spare_guid, &l2cache_guid))
		return (EBUSY);

	/*
	 * If this is a request to add or replace a spare or l2cache device
	 * that is in use elsewhere on the system, then we must update the
	 * guid (which was initialized to a random value) to reflect the
	 * actual GUID (which is shared between multiple pools).
	 */
	if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_L2CACHE &&
	    spare_guid != 0ULL) {
		uint64_t guid_delta = spare_guid - vd->vdev_guid;

		vd->vdev_guid += guid_delta;

		for (vdev_t *pvd = vd; pvd != NULL; pvd = pvd->vdev_parent)
			pvd->vdev_guid_sum += guid_delta;

		/*
		 * If this is a replacement, then we want to fall through to
		 * the rest of the code.  If we're adding a spare, then it's
		 * already labeled appropriately and we can just return.
		 */
		if (reason == VDEV_LABEL_SPARE)
			return (0);
		ASSERT(reason == VDEV_LABEL_REPLACE);
	}

	if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_SPARE &&
	    l2cache_guid != 0ULL) {
		uint64_t guid_delta = l2cache_guid - vd->vdev_guid;

		vd->vdev_guid += guid_delta;

		for (vdev_t *pvd = vd; pvd != NULL; pvd = pvd->vdev_parent)
			pvd->vdev_guid_sum += guid_delta;

		/*
		 * If this is a replacement, then we want to fall through to
		 * the rest of the code.  If we're adding an l2cache, then it's
		 * already labeled appropriately and we can just return.
		 */
		if (reason == VDEV_LABEL_L2CACHE)
			return (0);
		ASSERT(reason == VDEV_LABEL_REPLACE);
	}
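
	/*
	 * For example (with hypothetical numbers): if this device's randomly
	 * assigned guid is 1000 but the shared spare guid is 1234, then
	 * guid_delta == 234 above, vd->vdev_guid becomes 1234, and each
	 * ancestor's vdev_guid_sum is corrected by the same delta so that
	 * the pool-wide guid sum stays consistent.
	 */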

	/*
	 * Initialize its label.
	 */
	vp = zio_buf_alloc(sizeof (vdev_phys_t));
	bzero(vp, sizeof (vdev_phys_t));

	/*
	 * Generate a label describing the pool and our top-level vdev.
	 * We mark it as being from txg 0 to indicate that it's not
	 * really part of an active pool just yet.  The labels will
	 * be written again with a meaningful txg by spa_sync().
	 */
	if (reason == VDEV_LABEL_SPARE ||
	    (reason == VDEV_LABEL_REMOVE && vd->vdev_isspare)) {
		/*
		 * For inactive hot spares, we generate a special label that
		 * identifies it as a mutually shared hot spare.  We write the
		 * label if we are adding a hot spare, or if we are removing an
		 * active hot spare (in which case we want to revert the
		 * labels).
		 */
		VERIFY(nvlist_alloc(&label, NV_UNIQUE_NAME, KM_SLEEP) == 0);

		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_VERSION,
		    spa_version(spa)) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_POOL_STATE,
		    POOL_STATE_SPARE) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_GUID,
		    vd->vdev_guid) == 0);
	} else if (reason == VDEV_LABEL_L2CACHE ||
	    (reason == VDEV_LABEL_REMOVE && vd->vdev_isl2cache)) {
		/*
		 * For level 2 ARC devices, add a special label.
		 */
		VERIFY(nvlist_alloc(&label, NV_UNIQUE_NAME, KM_SLEEP) == 0);

		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_VERSION,
		    spa_version(spa)) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_POOL_STATE,
		    POOL_STATE_L2CACHE) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_GUID,
		    vd->vdev_guid) == 0);
	} else {
		label = spa_config_generate(spa, vd, 0ULL, B_FALSE);

		/*
		 * Add our creation time.  This allows us to detect multiple
		 * vdev uses as described above, and automatically expires if
		 * we fail.
		 */
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_CREATE_TXG,
		    crtxg) == 0);
	}

	buf = vp->vp_nvlist;
	buflen = sizeof (vp->vp_nvlist);

	error = nvlist_pack(label, &buf, &buflen, NV_ENCODE_XDR, KM_SLEEP);
	if (error != 0) {
		nvlist_free(label);
		zio_buf_free(vp, sizeof (vdev_phys_t));
		/* EFAULT means nvlist_pack ran out of room */
		return (error == EFAULT ? ENAMETOOLONG : EINVAL);
	}

	/*
	 * Initialize uberblock template.
	 */
	ub = zio_buf_alloc(VDEV_UBERBLOCK_RING);
	bzero(ub, VDEV_UBERBLOCK_RING);
	*ub = spa->spa_uberblock;
	ub->ub_txg = 0;

	/* Initialize the 2nd padding area. */
	pad2 = zio_buf_alloc(VDEV_PAD_SIZE);
	bzero(pad2, VDEV_PAD_SIZE);

	/*
	 * Write everything in parallel.
	 */
retry:
	zio = zio_root(spa, NULL, NULL, flags);

	for (int l = 0; l < VDEV_LABELS; l++) {

		vdev_label_write(zio, vd, l, vp,
		    offsetof(vdev_label_t, vl_vdev_phys),
		    sizeof (vdev_phys_t), NULL, NULL, flags);

		/*
		 * Skip the 1st padding area.
		 * Zero out the 2nd padding area where it might have
		 * leftover data from a previous filesystem format.
		 */
		vdev_label_write(zio, vd, l, pad2,
		    offsetof(vdev_label_t, vl_pad2),
		    VDEV_PAD_SIZE, NULL, NULL, flags);

		vdev_label_write(zio, vd, l, ub,
		    offsetof(vdev_label_t, vl_uberblock),
		    VDEV_UBERBLOCK_RING, NULL, NULL, flags);
	}

	error = zio_wait(zio);

	if (error != 0 && !(flags & ZIO_FLAG_TRYHARD)) {
		flags |= ZIO_FLAG_TRYHARD;
		goto retry;
	}

	nvlist_free(label);
	zio_buf_free(pad2, VDEV_PAD_SIZE);
	zio_buf_free(ub, VDEV_UBERBLOCK_RING);
	zio_buf_free(vp, sizeof (vdev_phys_t));

	/*
	 * If this vdev hasn't been previously identified as a spare, then we
	 * mark it as such only if a) we are labeling it as a spare, or b) it
	 * exists as a spare elsewhere in the system.  Do the same for
	 * level 2 ARC devices.
	 */
	if (error == 0 && !vd->vdev_isspare &&
	    (reason == VDEV_LABEL_SPARE ||
	    spa_spare_exists(vd->vdev_guid, NULL, NULL)))
		spa_spare_add(vd);

	if (error == 0 && !vd->vdev_isl2cache &&
	    (reason == VDEV_LABEL_L2CACHE ||
	    spa_l2cache_exists(vd->vdev_guid, NULL)))
		spa_l2cache_add(vd);

	return (error);
}

/*
 * ==========================================================================
 * uberblock load/sync
 * ==========================================================================
 */

/*
 * For use by zdb and debugging purposes only
 */
uint64_t	ub_max_txg = UINT64_MAX;

/*
 * Consider the following situation: txg is safely synced to disk.  We've
 * written the first uberblock for txg + 1, and then we lose power.  When we
 * come back up, we fail to see the uberblock for txg + 1 because, say,
 * it was on a mirrored device and the replica to which we wrote txg + 1
 * is now offline.  If we then make some changes and sync txg + 1, and then
 * the missing replica comes back, then for a few seconds we'll have two
 * conflicting uberblocks on disk with the same txg.  The solution is simple:
 * among uberblocks with equal txg, choose the one with the latest timestamp.
 */
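
/*
 * For example, if both replicas hold an uberblock for txg 100 but one copy
 * was written a few seconds later than the other, the later timestamp
 * identifies the copy from the second sync of txg 100, and it wins the
 * comparison below.
 */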

static int
vdev_uberblock_compare(uberblock_t *ub1, uberblock_t *ub2)
{
	if (ub1->ub_txg < ub2->ub_txg)
		return (-1);
	if (ub1->ub_txg > ub2->ub_txg)
		return (1);

	if (ub1->ub_timestamp < ub2->ub_timestamp)
		return (-1);
	if (ub1->ub_timestamp > ub2->ub_timestamp)
		return (1);

	return (0);
}

static void
vdev_uberblock_load_done(zio_t *zio)
{
	zio_t *rio = zio->io_private;
	uberblock_t *ub = zio->io_data;
	uberblock_t *ubbest = rio->io_private;

	ASSERT3U(zio->io_size, ==, VDEV_UBERBLOCK_SIZE(zio->io_vd));

	if (zio->io_error == 0 && uberblock_verify(ub) == 0) {
		mutex_enter(&rio->io_lock);
		if (ub->ub_txg <= ub_max_txg &&
		    vdev_uberblock_compare(ub, ubbest) > 0)
			*ubbest = *ub;
		mutex_exit(&rio->io_lock);
	}

	zio_buf_free(zio->io_data, zio->io_size);
}

void
vdev_uberblock_load(zio_t *zio, vdev_t *vd, uberblock_t *ubbest)
{
	spa_t *spa = vd->vdev_spa;
	vdev_t *rvd = spa->spa_root_vdev;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_SPECULATIVE | ZIO_FLAG_TRYHARD;

	if (vd == rvd) {
		ASSERT(zio == NULL);
		spa_config_enter(spa, SCL_ALL, FTAG, RW_WRITER);
		zio = zio_root(spa, NULL, ubbest, flags);
		bzero(ubbest, sizeof (uberblock_t));
	}

	ASSERT(zio != NULL);

	for (int c = 0; c < vd->vdev_children; c++)
		vdev_uberblock_load(zio, vd->vdev_child[c], ubbest);

	if (vd->vdev_ops->vdev_op_leaf && vdev_readable(vd)) {
		for (int l = 0; l < VDEV_LABELS; l++) {
			for (int n = 0; n < VDEV_UBERBLOCK_COUNT(vd); n++) {
				vdev_label_read(zio, vd, l,
				    zio_buf_alloc(VDEV_UBERBLOCK_SIZE(vd)),
				    VDEV_UBERBLOCK_OFFSET(vd, n),
				    VDEV_UBERBLOCK_SIZE(vd),
				    vdev_uberblock_load_done, zio, flags);
			}
		}
	}

	if (vd == rvd) {
		(void) zio_wait(zio);
		spa_config_exit(spa, SCL_ALL, FTAG);
	}
}
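
/*
 * A condensed sketch of how a caller (e.g. spa_load()) finds the best
 * uberblock at pool open; passing the root vdev makes the function take the
 * config lock and wait for the whole tree itself:
 *
 *	uberblock_t ub;
 *	vdev_uberblock_load(NULL, spa->spa_root_vdev, &ub);
 *	if (ub.ub_txg == 0)
 *		... no valid uberblock was found ...
 */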

/*
 * On success, increment root zio's count of good writes.
 * We only get credit for writes to known-visible vdevs; see spa_vdev_add().
 */
static void
vdev_uberblock_sync_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (zio->io_error == 0 && zio->io_vd->vdev_top->vdev_ms_array != 0)
		atomic_add_64(good_writes, 1);
}

/*
 * Write the uberblock to all labels of all leaves of the specified vdev.
 */
static void
vdev_uberblock_sync(zio_t *zio, uberblock_t *ub, vdev_t *vd, int flags)
{
	uberblock_t *ubbuf;
	int n;

	for (int c = 0; c < vd->vdev_children; c++)
		vdev_uberblock_sync(zio, ub, vd->vdev_child[c], flags);

	if (!vd->vdev_ops->vdev_op_leaf)
		return;

	if (!vdev_writeable(vd))
		return;

	n = ub->ub_txg & (VDEV_UBERBLOCK_COUNT(vd) - 1);

	ubbuf = zio_buf_alloc(VDEV_UBERBLOCK_SIZE(vd));
	bzero(ubbuf, VDEV_UBERBLOCK_SIZE(vd));
	*ubbuf = *ub;

	for (int l = 0; l < VDEV_LABELS; l++)
		vdev_label_write(zio, vd, l, ubbuf,
		    VDEV_UBERBLOCK_OFFSET(vd, n), VDEV_UBERBLOCK_SIZE(vd),
		    vdev_uberblock_sync_done, zio->io_private,
		    flags | ZIO_FLAG_DONT_PROPAGATE);

	zio_buf_free(ubbuf, VDEV_UBERBLOCK_SIZE(vd));
}
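
/*
 * The ring slot computation above keeps the most recent uberblocks intact.
 * For example (assuming a 512-byte-sector disk, where the uberblock size is
 * 1K): the 128K ring holds 128 slots, txg N is written to slot N & 127, and
 * the previous 127 uberblocks survive for recovery.
 */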

int
vdev_uberblock_sync_list(vdev_t **svd, int svdcount, uberblock_t *ub, int flags)
{
	spa_t *spa = svd[0]->vdev_spa;
	zio_t *zio;
	uint64_t good_writes = 0;

	zio = zio_root(spa, NULL, &good_writes, flags);

	for (int v = 0; v < svdcount; v++)
		vdev_uberblock_sync(zio, ub, svd[v], flags);

	(void) zio_wait(zio);

	/*
	 * Flush the uberblocks to disk.  This ensures that the odd labels
	 * are no longer needed (because the new uberblocks and the even
	 * labels are safely on disk), so it is safe to overwrite them.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (int v = 0; v < svdcount; v++)
		zio_flush(zio, svd[v]);

	(void) zio_wait(zio);

	return (good_writes >= 1 ? 0 : EIO);
}

/*
 * On success, increment the count of good writes for our top-level vdev.
 */
static void
vdev_label_sync_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (zio->io_error == 0)
		atomic_add_64(good_writes, 1);
}

/*
 * If there weren't enough good writes, indicate failure to the parent.
 */
static void
vdev_label_sync_top_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (*good_writes == 0)
		zio->io_error = EIO;

	kmem_free(good_writes, sizeof (uint64_t));
}

/*
 * We ignore errors for log and cache devices, simply free the private data.
 */
static void
vdev_label_sync_ignore_done(zio_t *zio)
{
	kmem_free(zio->io_private, sizeof (uint64_t));
}

/*
 * Write all even or odd labels to all leaves of the specified vdev.
 */
static void
vdev_label_sync(zio_t *zio, vdev_t *vd, int l, uint64_t txg, int flags)
{
	nvlist_t *label;
	vdev_phys_t *vp;
	char *buf;
	size_t buflen;

	for (int c = 0; c < vd->vdev_children; c++)
		vdev_label_sync(zio, vd->vdev_child[c], l, txg, flags);

	if (!vd->vdev_ops->vdev_op_leaf)
		return;

	if (!vdev_writeable(vd))
		return;

	/*
	 * Generate a label describing the top-level config to which we belong.
	 */
	label = spa_config_generate(vd->vdev_spa, vd, txg, B_FALSE);

	vp = zio_buf_alloc(sizeof (vdev_phys_t));
	bzero(vp, sizeof (vdev_phys_t));

	buf = vp->vp_nvlist;
	buflen = sizeof (vp->vp_nvlist);

	if (nvlist_pack(label, &buf, &buflen, NV_ENCODE_XDR, KM_SLEEP) == 0) {
		for (; l < VDEV_LABELS; l += 2) {
			vdev_label_write(zio, vd, l, vp,
			    offsetof(vdev_label_t, vl_vdev_phys),
			    sizeof (vdev_phys_t),
			    vdev_label_sync_done, zio->io_private,
			    flags | ZIO_FLAG_DONT_PROPAGATE);
		}
	}

	zio_buf_free(vp, sizeof (vdev_phys_t));
	nvlist_free(label);
}

int
vdev_label_sync_list(spa_t *spa, int l, uint64_t txg, int flags)
{
	list_t *dl = &spa->spa_config_dirty_list;
	vdev_t *vd;
	zio_t *zio;
	int error;

	/*
	 * Write the new labels to disk.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (vd = list_head(dl); vd != NULL; vd = list_next(dl, vd)) {
		uint64_t *good_writes = kmem_zalloc(sizeof (uint64_t),
		    KM_SLEEP);

		ASSERT(!vd->vdev_ishole);

		zio_t *vio = zio_null(zio, spa, NULL,
		    (vd->vdev_islog || vd->vdev_aux != NULL) ?
		    vdev_label_sync_ignore_done : vdev_label_sync_top_done,
		    good_writes, flags);
		vdev_label_sync(vio, vd, l, txg, flags);
		zio_nowait(vio);
	}

	error = zio_wait(zio);

	/*
	 * Flush the new labels to disk.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (vd = list_head(dl); vd != NULL; vd = list_next(dl, vd))
		zio_flush(zio, vd);

	(void) zio_wait(zio);

	return (error);
}
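
/*
 * A schematic summary of the ordering that vdev_config_sync() below layers
 * on top of vdev_label_sync_list() and vdev_uberblock_sync_list():
 *
 *	1. Flush the write cache of every disk written in this txg.
 *	2. Sync the even labels (L0, L2) of every dirty vdev, then flush.
 *	3. Sync the uberblocks to the vdevs in svd[], then flush.
 *	4. Sync the odd labels (L1, L3) of every dirty vdev, then flush.
 */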

/*
 * Sync the uberblock and any changes to the vdev configuration.
 *
 * The order of operations is carefully crafted to ensure that
 * if the system panics or loses power at any time, the state on disk
 * is still transactionally consistent.  The in-line comments below
 * describe the failure semantics at each stage.
 *
 * Moreover, vdev_config_sync() is designed to be idempotent: if it fails
 * at any time, you can just call it again, and it will resume its work.
 */
int
vdev_config_sync(vdev_t **svd, int svdcount, uint64_t txg, boolean_t tryhard)
{
	spa_t *spa = svd[0]->vdev_spa;
	uberblock_t *ub = &spa->spa_uberblock;
	vdev_t *vd;
	zio_t *zio;
	int error;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL;

	/*
	 * Normally, we don't want to try too hard to write every label and
	 * uberblock.  If there is a flaky disk, we don't want the rest of the
	 * sync process to block while we retry.  But if we can't write a
	 * single label out, we should retry with ZIO_FLAG_TRYHARD before
	 * bailing out and declaring the pool faulted.
	 */
	if (tryhard)
		flags |= ZIO_FLAG_TRYHARD;

	ASSERT(ub->ub_txg <= txg);

	/*
	 * If this isn't a resync due to I/O errors,
	 * and nothing changed in this transaction group,
	 * and the vdev configuration hasn't changed,
	 * then there's nothing to do.
	 */
	if (ub->ub_txg < txg &&
	    uberblock_update(ub, spa->spa_root_vdev, txg) == B_FALSE &&
	    list_is_empty(&spa->spa_config_dirty_list))
		return (0);

	if (txg > spa_freeze_txg(spa))
		return (0);

	ASSERT(txg <= spa->spa_final_txg);

	/*
	 * Flush the write cache of every disk that's been written to
	 * in this transaction group.  This ensures that all blocks
	 * written in this txg will be committed to stable storage
	 * before any uberblock that references them.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (vd = txg_list_head(&spa->spa_vdev_txg_list, TXG_CLEAN(txg)); vd;
	    vd = txg_list_next(&spa->spa_vdev_txg_list, vd, TXG_CLEAN(txg)))
		zio_flush(zio, vd);

	(void) zio_wait(zio);

	/*
	 * Sync out the even labels (L0, L2) for every dirty vdev.  If the
	 * system dies in the middle of this process, that's OK: all of the
	 * even labels that made it to disk will be newer than any uberblock,
	 * and will therefore be considered invalid.  The odd labels (L1, L3),
	 * which have not yet been touched, will still be valid.  We flush
	 * the new labels to disk to ensure that all even-label updates
	 * are committed to stable storage before the uberblock update.
	 */
	if ((error = vdev_label_sync_list(spa, 0, txg, flags)) != 0)
		return (error);

	/*
	 * Sync the uberblocks to all vdevs in svd[].
	 * If the system dies in the middle of this step, there are two cases
	 * to consider, and the on-disk state is consistent either way:
	 *
	 * (1)	If none of the new uberblocks made it to disk, then the
	 *	previous uberblock will be the newest, and the odd labels
	 *	(which had not yet been touched) will be valid with respect
	 *	to that uberblock.
	 *
	 * (2)	If one or more new uberblocks made it to disk, then they
	 *	will be the newest, and the even labels (which had all
	 *	been successfully committed) will be valid with respect
	 *	to the new uberblocks.
	 */
	if ((error = vdev_uberblock_sync_list(svd, svdcount, ub, flags)) != 0)
		return (error);

	/*
	 * Sync out odd labels for every dirty vdev.  If the system dies
	 * in the middle of this process, the even labels and the new
	 * uberblocks will suffice to open the pool.  The next time
	 * the pool is opened, the first thing we'll do -- before any
	 * user data is modified -- is mark every vdev dirty so that
	 * all labels will be brought up to date.  We flush the new labels
	 * to disk to ensure that all odd-label updates are committed to
	 * stable storage before the next transaction group begins.
	 */
	return (vdev_label_sync_list(spa, 1, txg, flags));
}
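
/*
 * A condensed sketch of the caller's retry policy (spa_sync() is the real
 * caller; details vary): a first attempt with fail-fast I/O, then a retry
 * that sets tryhard so every label write is driven to completion before the
 * pool is declared faulted:
 *
 *	if (vdev_config_sync(svd, svdcount, txg, B_FALSE) != 0)
 *		error = vdev_config_sync(svd, svdcount, txg, B_TRUE);
 */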