/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2012, 2018 by Delphix. All rights reserved.
 * Copyright 2019 Joyent, Inc.
 */

/*
 * Virtual Device Labels
 * ---------------------
 *
 * The vdev label serves several distinct purposes:
 *
 *	1. Uniquely identify this device as part of a ZFS pool and confirm its
 *	   identity within the pool.
 *
 *	2. Verify that all the devices given in a configuration are present
 *	   within the pool.
 *
 *	3. Determine the uberblock for the pool.
 *
 *	4. In case of an import operation, determine the configuration of the
 *	   toplevel vdev of which it is a part.
 *
 *	5. If an import operation cannot find all the devices in the pool,
 *	   provide enough information to the administrator to determine which
 *	   devices are missing.
 *
 * It is important to note that while the kernel is responsible for writing
 * the label, it only consumes the information in the first three cases. The
 * latter information is only consumed in userland when determining the
 * configuration to import a pool.
 *
 *
 * Label Organization
 * ------------------
 *
 * Before describing the contents of the label, it's important to understand
 * how the labels are written and updated with respect to the uberblock.
 *
 * When the pool configuration is altered, either because it was newly created
 * or a device was added, we want to update all the labels such that we can
 * deal with fatal failure at any point. To this end, each disk has two labels
 * which are updated before and after the uberblock is synced. Assuming we
 * have labels and an uberblock with the following transaction groups:
 *
 *	L1		UB		L2
 *	+------+	+------+	+------+
 *	|      |	|      |	|      |
 *	| t10  |	| t10  |	| t10  |
 *	|      |	|      |	|      |
 *	+------+	+------+	+------+
 *
 * In this stable state, the labels and the uberblock were all updated within
 * the same transaction group (10). Each label is mirrored and checksummed, so
 * that we can detect when we fail partway through writing the label.
 *
 * In order to identify which labels are valid, the labels are written in the
 * following manner:
 *
 *	1. For each vdev, update 'L1' to the new label
 *	2. Update the uberblock
 *	3. For each vdev, update 'L2' to the new label
 *
 * Given arbitrary failure, we can determine the correct label to use based on
 * the transaction group. If we fail after updating L1 but before updating the
 * UB, we will notice that L1's transaction group is greater than that of the
 * uberblock, so L2 must be valid. If we fail after writing the uberblock but
 * before writing L2, we will notice that L2's transaction group is less than
 * L1's, and therefore L1 is valid.
 *
 * Another added complexity is that not every label is updated when the config
 * is synced. If we add a single device, we do not want to have to re-write
 * every label for every device in the pool. This means that both L1 and L2
 * may be older than the pool uberblock, because the necessary information is
 * stored on another vdev.
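 *
 * As a worked illustration of the rule above (a sketch, not code from this
 * file), label selection after a crash reduces to:
 *
 *	if (L1.txg > UB.txg)		use L2	(died between steps 1 and 2)
 *	else if (L2.txg < L1.txg)	use L1	(died between steps 2 and 3)
 *	else				either	(stable state)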
 *
 *
 * On-disk Format
 * --------------
 *
 * The vdev label consists of two distinct parts, and is wrapped within the
 * vdev_label_t structure. The label includes 8k of padding to permit legacy
 * VTOC disk labels; this padding is otherwise ignored.
 *
 * The first half of the label is a packed nvlist which contains pool-wide
 * properties, per-vdev properties, and configuration information. It is
 * described in more detail below.
 *
 * The latter half of the label consists of a redundant array of uberblocks.
 * These uberblocks are updated whenever a transaction group is committed,
 * or when the configuration is updated. When a pool is loaded, we scan each
 * vdev for the 'best' uberblock.
 *
 *
 * Configuration Information
 * -------------------------
 *
 * The nvlist describing the pool and vdev contains the following elements:
 *
 *	version		ZFS on-disk version
 *	name		Pool name
 *	state		Pool state
 *	txg		Transaction group in which this label was written
 *	pool_guid	Unique identifier for this pool
 *	vdev_tree	An nvlist describing the vdev tree.
 *	features_for_read
 *			An nvlist of the features necessary for reading the MOS.
 *
 * Each leaf device label also contains the following:
 *
 *	top_guid	Unique ID for top-level vdev in which this is contained
 *	guid		Unique ID for the leaf vdev
 *
 * The 'vs' configuration follows the format described in 'spa_config.c'.
 */

#include <sys/zfs_context.h>
#include <sys/spa.h>
#include <sys/spa_impl.h>
#include <sys/dmu.h>
#include <sys/zap.h>
#include <sys/vdev.h>
#include <sys/vdev_impl.h>
#include <sys/uberblock_impl.h>
#include <sys/metaslab.h>
#include <sys/metaslab_impl.h>
#include <sys/zio.h>
#include <sys/dsl_scan.h>
#include <sys/abd.h>
#include <sys/fs/zfs.h>

/*
 * Basic routines to read and write from a vdev label.
 * Used throughout the rest of this file.
 */
uint64_t
vdev_label_offset(uint64_t psize, int l, uint64_t offset)
{
	ASSERT(offset < sizeof (vdev_label_t));
	ASSERT(P2PHASE_TYPED(psize, sizeof (vdev_label_t), uint64_t) == 0);

	return (offset + l * sizeof (vdev_label_t) + (l < VDEV_LABELS / 2 ?
	    0 : psize - VDEV_LABELS * sizeof (vdev_label_t)));
}

/*
 * Returns the label number associated with the given offset into the device.
 */
int
vdev_label_number(uint64_t psize, uint64_t offset)
{
	int l;

	if (offset >= psize - VDEV_LABEL_END_SIZE) {
		offset -= psize - VDEV_LABEL_END_SIZE;
		offset += (VDEV_LABELS / 2) * sizeof (vdev_label_t);
	}
	l = offset / sizeof (vdev_label_t);
	return (l < VDEV_LABELS ? l : -1);
}
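
/*
 * Worked example of the arithmetic above, assuming VDEV_LABELS == 4 and
 * sizeof (vdev_label_t) == 256K (their values in this codebase): labels
 * 0 and 1 live at offsets 0 and 256K from the start of the device, while
 * labels 2 and 3 live at psize - 512K and psize - 256K.
 */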

static void
vdev_label_read(zio_t *zio, vdev_t *vd, int l, abd_t *buf, uint64_t offset,
    uint64_t size, zio_done_func_t *done, void *private, int flags)
{
	ASSERT(spa_config_held(zio->io_spa, SCL_STATE_ALL, RW_WRITER) ==
	    SCL_STATE_ALL);
	ASSERT(flags & ZIO_FLAG_CONFIG_WRITER);

	zio_nowait(zio_read_phys(zio, vd,
	    vdev_label_offset(vd->vdev_psize, l, offset),
	    size, buf, ZIO_CHECKSUM_LABEL, done, private,
	    ZIO_PRIORITY_SYNC_READ, flags, B_TRUE));
}

void
vdev_label_write(zio_t *zio, vdev_t *vd, int l, abd_t *buf, uint64_t offset,
    uint64_t size, zio_done_func_t *done, void *private, int flags)
{
#ifdef _KERNEL
	/*
	 * This assert is invalid in the user-level ztest MMP code because
	 * the ztest thread is not in dsl_pool_sync_context. ZoL does not
	 * build the user-level code with DEBUG so this is not an issue there.
	 */
	ASSERT(spa_config_held(zio->io_spa, SCL_ALL, RW_WRITER) == SCL_ALL ||
	    (spa_config_held(zio->io_spa, SCL_CONFIG | SCL_STATE, RW_READER) ==
	    (SCL_CONFIG | SCL_STATE) &&
	    dsl_pool_sync_context(spa_get_dsl(zio->io_spa))));
#endif
	ASSERT(flags & ZIO_FLAG_CONFIG_WRITER);

	zio_nowait(zio_write_phys(zio, vd,
	    vdev_label_offset(vd->vdev_psize, l, offset),
	    size, buf, ZIO_CHECKSUM_LABEL, done, private,
	    ZIO_PRIORITY_SYNC_WRITE, flags, B_TRUE));
}

static void
root_vdev_actions_getprogress(vdev_t *vd, nvlist_t *nvl)
{
	spa_t *spa = vd->vdev_spa;

	if (vd != spa->spa_root_vdev)
		return;

	/* provide either current or previous scan information */
	pool_scan_stat_t ps;
	if (spa_scan_get_stats(spa, &ps) == 0) {
		fnvlist_add_uint64_array(nvl,
		    ZPOOL_CONFIG_SCAN_STATS, (uint64_t *)&ps,
		    sizeof (pool_scan_stat_t) / sizeof (uint64_t));
	}

	pool_removal_stat_t prs;
	if (spa_removal_get_stats(spa, &prs) == 0) {
		fnvlist_add_uint64_array(nvl,
		    ZPOOL_CONFIG_REMOVAL_STATS, (uint64_t *)&prs,
		    sizeof (prs) / sizeof (uint64_t));
	}

	pool_checkpoint_stat_t pcs;
	if (spa_checkpoint_get_stats(spa, &pcs) == 0) {
		fnvlist_add_uint64_array(nvl,
		    ZPOOL_CONFIG_CHECKPOINT_STATS, (uint64_t *)&pcs,
		    sizeof (pcs) / sizeof (uint64_t));
	}
}

/*
 * Generate the nvlist representing this vdev's config.
 */
nvlist_t *
vdev_config_generate(spa_t *spa, vdev_t *vd, boolean_t getstats,
    vdev_config_flag_t flags)
{
	nvlist_t *nv = NULL;
	vdev_indirect_config_t *vic = &vd->vdev_indirect_config;

	nv = fnvlist_alloc();

	fnvlist_add_string(nv, ZPOOL_CONFIG_TYPE, vd->vdev_ops->vdev_op_type);
	if (!(flags & (VDEV_CONFIG_SPARE | VDEV_CONFIG_L2CACHE)))
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_ID, vd->vdev_id);
	fnvlist_add_uint64(nv, ZPOOL_CONFIG_GUID, vd->vdev_guid);

	if (vd->vdev_path != NULL)
		fnvlist_add_string(nv, ZPOOL_CONFIG_PATH, vd->vdev_path);

	if (vd->vdev_devid != NULL)
		fnvlist_add_string(nv, ZPOOL_CONFIG_DEVID, vd->vdev_devid);

	if (vd->vdev_physpath != NULL)
		fnvlist_add_string(nv, ZPOOL_CONFIG_PHYS_PATH,
		    vd->vdev_physpath);

	if (vd->vdev_fru != NULL)
		fnvlist_add_string(nv, ZPOOL_CONFIG_FRU, vd->vdev_fru);

	if (vd->vdev_nparity != 0) {
		ASSERT(strcmp(vd->vdev_ops->vdev_op_type,
		    VDEV_TYPE_RAIDZ) == 0);

		/*
		 * Make sure someone hasn't managed to sneak a fancy new vdev
		 * into a crufty old storage pool.
		 */
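		/*
		 * Summarizing the check below: nparity == 1 is accepted by
		 * every version, nparity == 2 requires SPA_VERSION_RAIDZ2,
		 * and nparity == 3 requires SPA_VERSION_RAIDZ3.
		 */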
		ASSERT(vd->vdev_nparity == 1 ||
		    (vd->vdev_nparity <= 2 &&
		    spa_version(spa) >= SPA_VERSION_RAIDZ2) ||
		    (vd->vdev_nparity <= 3 &&
		    spa_version(spa) >= SPA_VERSION_RAIDZ3));

		/*
		 * Note that we'll add the nparity tag even on storage pools
		 * that only support a single parity device -- older software
		 * will just ignore it.
		 */
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_NPARITY, vd->vdev_nparity);
	}

	if (vd->vdev_wholedisk != -1ULL)
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_WHOLE_DISK,
		    vd->vdev_wholedisk);

	if (vd->vdev_not_present && !(flags & VDEV_CONFIG_MISSING))
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_NOT_PRESENT, 1);

	if (vd->vdev_isspare)
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_IS_SPARE, 1);

	if (!(flags & (VDEV_CONFIG_SPARE | VDEV_CONFIG_L2CACHE)) &&
	    vd == vd->vdev_top) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_METASLAB_ARRAY,
		    vd->vdev_ms_array);
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_METASLAB_SHIFT,
		    vd->vdev_ms_shift);
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_ASHIFT, vd->vdev_ashift);
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_ASIZE,
		    vd->vdev_asize);
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_IS_LOG, vd->vdev_islog);
		if (vd->vdev_removing) {
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_REMOVING,
			    vd->vdev_removing);
		}
	}

	if (vd->vdev_dtl_sm != NULL) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_DTL,
		    space_map_object(vd->vdev_dtl_sm));
	}

	if (vic->vic_mapping_object != 0) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_INDIRECT_OBJECT,
		    vic->vic_mapping_object);
	}

	if (vic->vic_births_object != 0) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_INDIRECT_BIRTHS,
		    vic->vic_births_object);
	}

	if (vic->vic_prev_indirect_vdev != UINT64_MAX) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_PREV_INDIRECT_VDEV,
		    vic->vic_prev_indirect_vdev);
	}

	if (vd->vdev_crtxg)
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_CREATE_TXG, vd->vdev_crtxg);

	if (flags & VDEV_CONFIG_MOS) {
		if (vd->vdev_leaf_zap != 0) {
			ASSERT(vd->vdev_ops->vdev_op_leaf);
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_VDEV_LEAF_ZAP,
			    vd->vdev_leaf_zap);
		}

		if (vd->vdev_top_zap != 0) {
			ASSERT(vd == vd->vdev_top);
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_VDEV_TOP_ZAP,
			    vd->vdev_top_zap);
		}
	}

	if (getstats) {
		vdev_stat_t vs;

		vdev_get_stats(vd, &vs);
		fnvlist_add_uint64_array(nv, ZPOOL_CONFIG_VDEV_STATS,
		    (uint64_t *)&vs, sizeof (vs) / sizeof (uint64_t));

		root_vdev_actions_getprogress(vd, nv);

		/*
		 * Note: this can be called from open context
		 * (spa_get_stats()), so we need the rwlock to prevent
		 * the mapping from being changed by condensing.
		 */
		rw_enter(&vd->vdev_indirect_rwlock, RW_READER);
		if (vd->vdev_indirect_mapping != NULL) {
			ASSERT(vd->vdev_indirect_births != NULL);
			vdev_indirect_mapping_t *vim =
			    vd->vdev_indirect_mapping;
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_INDIRECT_SIZE,
			    vdev_indirect_mapping_size(vim));
		}
		rw_exit(&vd->vdev_indirect_rwlock);
		if (vd->vdev_mg != NULL &&
		    vd->vdev_mg->mg_fragmentation != ZFS_FRAG_INVALID) {
			/*
			 * Compute approximately how much memory would be used
			 * for the indirect mapping if this device were to
			 * be removed.
			 *
			 * Note: If the frag metric is invalid, then not
			 * enough metaslabs have been converted to have
			 * histograms.
			 */
			uint64_t seg_count = 0;
			uint64_t to_alloc = vd->vdev_stat.vs_alloc;

			/*
			 * There are the same number of allocated segments
			 * as free segments, so we will have at least one
			 * entry per free segment. However, small free
			 * segments (smaller than vdev_removal_max_span)
			 * will be combined with adjacent allocated segments
			 * as a single mapping.
			 */
			for (int i = 0; i < RANGE_TREE_HISTOGRAM_SIZE; i++) {
				if (1ULL << (i + 1) < vdev_removal_max_span) {
					to_alloc +=
					    vd->vdev_mg->mg_histogram[i] <<
					    (i + 1);
				} else {
					seg_count +=
					    vd->vdev_mg->mg_histogram[i];
				}
			}

			/*
			 * The maximum length of a mapping is
			 * zfs_remove_max_segment, so we need at least one entry
			 * per zfs_remove_max_segment of allocated data.
			 */
			seg_count += to_alloc / zfs_remove_max_segment;
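
			/*
			 * Illustrative numbers only: with 1G allocated and
			 * zfs_remove_max_segment == 16M, the line above adds
			 * at least 1G / 16M == 64 entries, so the value
			 * reported below would be at least 64 *
			 * sizeof (vdev_indirect_mapping_entry_phys_t) bytes.
			 */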
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_INDIRECT_SIZE,
			    seg_count *
			    sizeof (vdev_indirect_mapping_entry_phys_t));
		}
	}

	if (!vd->vdev_ops->vdev_op_leaf) {
		nvlist_t **child;
		int c, idx;

		ASSERT(!vd->vdev_ishole);

		child = kmem_alloc(vd->vdev_children * sizeof (nvlist_t *),
		    KM_SLEEP);

		for (c = 0, idx = 0; c < vd->vdev_children; c++) {
			vdev_t *cvd = vd->vdev_child[c];

			/*
			 * If we're generating an nvlist of removing
			 * vdevs then skip over any device which is
			 * not being removed.
			 */
			if ((flags & VDEV_CONFIG_REMOVING) &&
			    !cvd->vdev_removing)
				continue;

			child[idx++] = vdev_config_generate(spa, cvd,
			    getstats, flags);
		}

		if (idx) {
			fnvlist_add_nvlist_array(nv, ZPOOL_CONFIG_CHILDREN,
			    child, idx);
		}

		for (c = 0; c < idx; c++)
			nvlist_free(child[c]);

		kmem_free(child, vd->vdev_children * sizeof (nvlist_t *));

	} else {
		const char *aux = NULL;

		if (vd->vdev_offline && !vd->vdev_tmpoffline)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_OFFLINE, B_TRUE);
		if (vd->vdev_resilver_txg != 0)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_RESILVER_TXG,
			    vd->vdev_resilver_txg);
		if (vd->vdev_faulted)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_FAULTED, B_TRUE);
		if (vd->vdev_degraded)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_DEGRADED, B_TRUE);
		if (vd->vdev_removed)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_REMOVED, B_TRUE);
		if (vd->vdev_unspare)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_UNSPARE, B_TRUE);
		if (vd->vdev_ishole)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_IS_HOLE, B_TRUE);

		switch (vd->vdev_stat.vs_aux) {
		case VDEV_AUX_ERR_EXCEEDED:
			aux = "err_exceeded";
			break;

		case VDEV_AUX_EXTERNAL:
			aux = "external";
			break;
		}

		if (aux != NULL)
			fnvlist_add_string(nv, ZPOOL_CONFIG_AUX_STATE, aux);

		if (vd->vdev_splitting && vd->vdev_orig_guid != 0LL) {
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_ORIG_GUID,
			    vd->vdev_orig_guid);
		}
	}

	return (nv);
}

/*
 * Generate a view of the top-level vdevs. If we currently have holes
 * in the namespace, then generate an array which contains a list of holey
 * vdevs. Additionally, add the number of top-level children that currently
 * exist.
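 *
 * For example, a root vdev whose three children are { mirror-0, hole,
 * mirror-2 } yields hole_array = [ 1 ] and vdev_children = 3.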
 */
void
vdev_top_config_generate(spa_t *spa, nvlist_t *config)
{
	vdev_t *rvd = spa->spa_root_vdev;
	uint64_t *array;
	uint_t c, idx;

	array = kmem_alloc(rvd->vdev_children * sizeof (uint64_t), KM_SLEEP);

	for (c = 0, idx = 0; c < rvd->vdev_children; c++) {
		vdev_t *tvd = rvd->vdev_child[c];

		if (tvd->vdev_ishole) {
			array[idx++] = c;
		}
	}

	if (idx) {
		VERIFY(nvlist_add_uint64_array(config, ZPOOL_CONFIG_HOLE_ARRAY,
		    array, idx) == 0);
	}

	VERIFY(nvlist_add_uint64(config, ZPOOL_CONFIG_VDEV_CHILDREN,
	    rvd->vdev_children) == 0);

	kmem_free(array, rvd->vdev_children * sizeof (uint64_t));
}

/*
 * Returns the configuration from the label of the given vdev. For vdevs
 * which don't have a txg value stored on their label (i.e. spares/cache)
 * or which have not been completely initialized (txg == 0), just return
 * the configuration from the first valid label we find. Otherwise,
 * find the most up-to-date label that does not exceed the specified
 * 'txg' value.
 */
nvlist_t *
vdev_label_read_config(vdev_t *vd, uint64_t txg)
{
	spa_t *spa = vd->vdev_spa;
	nvlist_t *config = NULL;
	vdev_phys_t *vp;
	abd_t *vp_abd;
	zio_t *zio;
	uint64_t best_txg = 0;
	uint64_t label_txg = 0;
	int error = 0;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_SPECULATIVE;

	ASSERT(spa_config_held(spa, SCL_STATE_ALL, RW_WRITER) == SCL_STATE_ALL);

	if (!vdev_readable(vd))
		return (NULL);

	vp_abd = abd_alloc_linear(sizeof (vdev_phys_t), B_TRUE);
	vp = abd_to_buf(vp_abd);

retry:
	for (int l = 0; l < VDEV_LABELS; l++) {
		nvlist_t *label = NULL;

		zio = zio_root(spa, NULL, NULL, flags);

		vdev_label_read(zio, vd, l, vp_abd,
		    offsetof(vdev_label_t, vl_vdev_phys),
		    sizeof (vdev_phys_t), NULL, NULL, flags);

		if (zio_wait(zio) == 0 &&
		    nvlist_unpack(vp->vp_nvlist, sizeof (vp->vp_nvlist),
		    &label, 0) == 0) {
			/*
			 * Auxiliary vdevs won't have txg values in their
			 * labels and newly added vdevs may not have been
			 * completely initialized so just return the
			 * configuration from the first valid label we
			 * encounter.
			 */
			error = nvlist_lookup_uint64(label,
			    ZPOOL_CONFIG_POOL_TXG, &label_txg);
			if ((error || label_txg == 0) && !config) {
				config = label;
				break;
			} else if (label_txg <= txg && label_txg > best_txg) {
				best_txg = label_txg;
				nvlist_free(config);
				config = fnvlist_dup(label);
			}
		}

		if (label != NULL) {
			nvlist_free(label);
			label = NULL;
		}
	}

	if (config == NULL && !(flags & ZIO_FLAG_TRYHARD)) {
		flags |= ZIO_FLAG_TRYHARD;
		goto retry;
	}

	/*
	 * We found a valid label but it didn't pass txg restrictions.
	 */
	if (config == NULL && label_txg != 0) {
		vdev_dbgmsg(vd, "label discarded as txg is too large "
		    "(%llu > %llu)", (u_longlong_t)label_txg,
		    (u_longlong_t)txg);
	}

	abd_free(vp_abd);

	return (config);
}

/*
 * Determine if a device is in use. The 'spare_guid' parameter will be filled
 * in with the device guid if this spare is active elsewhere on the system.
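 * Returns B_TRUE if the device is in use (and its label should be
 * preserved), and B_FALSE if the device may safely be relabeled.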
 */
static boolean_t
vdev_inuse(vdev_t *vd, uint64_t crtxg, vdev_labeltype_t reason,
    uint64_t *spare_guid, uint64_t *l2cache_guid)
{
	spa_t *spa = vd->vdev_spa;
	uint64_t state, pool_guid, device_guid, txg, spare_pool;
	uint64_t vdtxg = 0;
	nvlist_t *label;

	if (spare_guid)
		*spare_guid = 0ULL;
	if (l2cache_guid)
		*l2cache_guid = 0ULL;

	/*
	 * Read the label, if any, and perform some basic sanity checks.
	 */
	if ((label = vdev_label_read_config(vd, -1ULL)) == NULL)
		return (B_FALSE);

	(void) nvlist_lookup_uint64(label, ZPOOL_CONFIG_CREATE_TXG,
	    &vdtxg);

	if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_STATE,
	    &state) != 0 ||
	    nvlist_lookup_uint64(label, ZPOOL_CONFIG_GUID,
	    &device_guid) != 0) {
		nvlist_free(label);
		return (B_FALSE);
	}

	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_GUID,
	    &pool_guid) != 0 ||
	    nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_TXG,
	    &txg) != 0)) {
		nvlist_free(label);
		return (B_FALSE);
	}

	nvlist_free(label);

	/*
	 * Check to see if this device indeed belongs to the pool it claims to
	 * be a part of. The only way this is allowed is if the device is a hot
	 * spare (which we check for later on).
	 */
	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    !spa_guid_exists(pool_guid, device_guid) &&
	    !spa_spare_exists(device_guid, NULL, NULL) &&
	    !spa_l2cache_exists(device_guid, NULL))
		return (B_FALSE);

	/*
	 * If the transaction group is zero, then this is an initialized (but
	 * unused) label. This is only an error if the create transaction
	 * on-disk is the same as the one we're using now, in which case the
	 * user has attempted to add the same vdev multiple times in the same
	 * transaction.
	 */
	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    txg == 0 && vdtxg == crtxg)
		return (B_TRUE);

	/*
	 * Check to see if this is a spare device. We do an explicit check for
	 * spa_has_spare() here because it may be on our pending list of spares
	 * to add. We also check if it is an l2cache device.
	 */
	if (spa_spare_exists(device_guid, &spare_pool, NULL) ||
	    spa_has_spare(spa, device_guid)) {
		if (spare_guid)
			*spare_guid = device_guid;

		switch (reason) {
		case VDEV_LABEL_CREATE:
		case VDEV_LABEL_L2CACHE:
			return (B_TRUE);

		case VDEV_LABEL_REPLACE:
			return (!spa_has_spare(spa, device_guid) ||
			    spare_pool != 0ULL);

		case VDEV_LABEL_SPARE:
			return (spa_has_spare(spa, device_guid));
		}
	}

	/*
	 * Check to see if this is an l2cache device.
	 */
	if (spa_l2cache_exists(device_guid, NULL))
		return (B_TRUE);

	/*
	 * We can't rely on a pool's state if it's been imported
	 * read-only. Instead we look to see if the pool is marked
	 * read-only in the namespace and set the state to active.
	 */
	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    (spa = spa_by_guid(pool_guid, device_guid)) != NULL &&
	    spa_mode(spa) == FREAD)
		state = POOL_STATE_ACTIVE;

	/*
	 * If the device is marked ACTIVE, then this device is in use by another
	 * pool on the system.
	 */
	return (state == POOL_STATE_ACTIVE);
}

/*
 * Initialize a vdev label. We check to make sure each leaf device is not in
 * use, and writable. We put down an initial label which we will later
 * overwrite with a complete label. Note that it's important to do this
 * sequentially, not in parallel, so that we catch cases of multiple use of the
 * same leaf vdev in the vdev we're creating -- e.g. mirroring a disk with
 * itself.
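 *
 * Returns 0 on success; for a leaf vdev, EIO if the vdev is dead, EBUSY if
 * the device is already in use, and ENAMETOOLONG if the packed config
 * nvlist does not fit in the space reserved for it (EINVAL for other
 * packing failures).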
 */
int
vdev_label_init(vdev_t *vd, uint64_t crtxg, vdev_labeltype_t reason)
{
	spa_t *spa = vd->vdev_spa;
	nvlist_t *label;
	vdev_phys_t *vp;
	abd_t *vp_abd;
	abd_t *pad2;
	uberblock_t *ub;
	abd_t *ub_abd;
	zio_t *zio;
	char *buf;
	size_t buflen;
	int error;
	uint64_t spare_guid, l2cache_guid;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL;

	ASSERT(spa_config_held(spa, SCL_ALL, RW_WRITER) == SCL_ALL);

	for (int c = 0; c < vd->vdev_children; c++)
		if ((error = vdev_label_init(vd->vdev_child[c],
		    crtxg, reason)) != 0)
			return (error);

	/* Track the creation time for this vdev */
	vd->vdev_crtxg = crtxg;

	if (!vd->vdev_ops->vdev_op_leaf || !spa_writeable(spa))
		return (0);

	/*
	 * Dead vdevs cannot be initialized.
	 */
	if (vdev_is_dead(vd))
		return (SET_ERROR(EIO));

	/*
	 * Determine if the vdev is in use.
	 */
	if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_SPLIT &&
	    vdev_inuse(vd, crtxg, reason, &spare_guid, &l2cache_guid))
		return (SET_ERROR(EBUSY));

	/*
	 * If this is a request to add or replace a spare or l2cache device
	 * that is in use elsewhere on the system, then we must update the
	 * guid (which was initialized to a random value) to reflect the
	 * actual GUID (which is shared between multiple pools).
	 */
	if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_L2CACHE &&
	    spare_guid != 0ULL) {
		uint64_t guid_delta = spare_guid - vd->vdev_guid;

		vd->vdev_guid += guid_delta;

		for (vdev_t *pvd = vd; pvd != NULL; pvd = pvd->vdev_parent)
			pvd->vdev_guid_sum += guid_delta;

		/*
		 * If this is a replacement, then we want to fall through to
		 * the rest of the code. If we're adding a spare, then it's
		 * already labeled appropriately and we can just return.
		 */
		if (reason == VDEV_LABEL_SPARE)
			return (0);
		ASSERT(reason == VDEV_LABEL_REPLACE ||
		    reason == VDEV_LABEL_SPLIT);
	}

	if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_SPARE &&
	    l2cache_guid != 0ULL) {
		uint64_t guid_delta = l2cache_guid - vd->vdev_guid;

		vd->vdev_guid += guid_delta;

		for (vdev_t *pvd = vd; pvd != NULL; pvd = pvd->vdev_parent)
			pvd->vdev_guid_sum += guid_delta;

		/*
		 * If this is a replacement, then we want to fall through to
		 * the rest of the code. If we're adding an l2cache, then it's
		 * already labeled appropriately and we can just return.
		 */
		if (reason == VDEV_LABEL_L2CACHE)
			return (0);
		ASSERT(reason == VDEV_LABEL_REPLACE);
	}

	/*
	 * Initialize its label.
	 */
	vp_abd = abd_alloc_linear(sizeof (vdev_phys_t), B_TRUE);
	abd_zero(vp_abd, sizeof (vdev_phys_t));
	vp = abd_to_buf(vp_abd);

	/*
	 * Generate a label describing the pool and our top-level vdev.
	 * We mark it as being from txg 0 to indicate that it's not
	 * really part of an active pool just yet. The labels will
	 * be written again with a meaningful txg by spa_sync().
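	 *
	 * Three flavors of label are generated here: a minimal spare label,
	 * a minimal l2cache label, and a full pool config (the default case
	 * below).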
	 */
	if (reason == VDEV_LABEL_SPARE ||
	    (reason == VDEV_LABEL_REMOVE && vd->vdev_isspare)) {
		/*
		 * For inactive hot spares, we generate a special label that
		 * identifies the device as a mutually shared hot spare. We
		 * write the label if we are adding a hot spare, or if we are
		 * removing an active hot spare (in which case we want to
		 * revert the labels).
		 */
		VERIFY(nvlist_alloc(&label, NV_UNIQUE_NAME, KM_SLEEP) == 0);

		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_VERSION,
		    spa_version(spa)) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_POOL_STATE,
		    POOL_STATE_SPARE) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_GUID,
		    vd->vdev_guid) == 0);
	} else if (reason == VDEV_LABEL_L2CACHE ||
	    (reason == VDEV_LABEL_REMOVE && vd->vdev_isl2cache)) {
		/*
		 * For level 2 ARC devices, add a special label.
		 */
		VERIFY(nvlist_alloc(&label, NV_UNIQUE_NAME, KM_SLEEP) == 0);

		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_VERSION,
		    spa_version(spa)) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_POOL_STATE,
		    POOL_STATE_L2CACHE) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_GUID,
		    vd->vdev_guid) == 0);
	} else {
		uint64_t txg = 0ULL;

		if (reason == VDEV_LABEL_SPLIT)
			txg = spa->spa_uberblock.ub_txg;
		label = spa_config_generate(spa, vd, txg, B_FALSE);

		/*
		 * Add our creation time. This allows us to detect multiple
		 * vdev uses as described above, and it automatically expires
		 * if we fail.
		 */
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_CREATE_TXG,
		    crtxg) == 0);
	}

	buf = vp->vp_nvlist;
	buflen = sizeof (vp->vp_nvlist);

	error = nvlist_pack(label, &buf, &buflen, NV_ENCODE_XDR, KM_SLEEP);
	if (error != 0) {
		nvlist_free(label);
		abd_free(vp_abd);
		/* EFAULT means nvlist_pack ran out of room */
		return (error == EFAULT ? ENAMETOOLONG : EINVAL);
	}

	/*
	 * Initialize uberblock template.
	 */
	ub_abd = abd_alloc_linear(VDEV_UBERBLOCK_RING, B_TRUE);
	abd_zero(ub_abd, VDEV_UBERBLOCK_RING);
	abd_copy_from_buf(ub_abd, &spa->spa_uberblock, sizeof (uberblock_t));
	ub = abd_to_buf(ub_abd);
	ub->ub_txg = 0;

	/* Initialize the 2nd padding area. */
	pad2 = abd_alloc_for_io(VDEV_PAD_SIZE, B_TRUE);
	abd_zero(pad2, VDEV_PAD_SIZE);

	/*
	 * Write everything in parallel.
	 */
retry:
	zio = zio_root(spa, NULL, NULL, flags);

	for (int l = 0; l < VDEV_LABELS; l++) {

		vdev_label_write(zio, vd, l, vp_abd,
		    offsetof(vdev_label_t, vl_vdev_phys),
		    sizeof (vdev_phys_t), NULL, NULL, flags);

		/*
		 * Skip the 1st padding area.
		 * Zero out the 2nd padding area where it might have
		 * leftover data from a previous filesystem format.
		 */
		vdev_label_write(zio, vd, l, pad2,
		    offsetof(vdev_label_t, vl_pad2),
		    VDEV_PAD_SIZE, NULL, NULL, flags);

		vdev_label_write(zio, vd, l, ub_abd,
		    offsetof(vdev_label_t, vl_uberblock),
		    VDEV_UBERBLOCK_RING, NULL, NULL, flags);
	}

	error = zio_wait(zio);

	if (error != 0 && !(flags & ZIO_FLAG_TRYHARD)) {
		flags |= ZIO_FLAG_TRYHARD;
		goto retry;
	}

	nvlist_free(label);
	abd_free(pad2);
	abd_free(ub_abd);
	abd_free(vp_abd);

	/*
	 * If this vdev hasn't been previously identified as a spare, then we
	 * mark it as such only if a) we are labeling it as a spare, or b) it
	 * exists as a spare elsewhere in the system. Do the same for
	 * level 2 ARC devices.
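	 * (spa_spare_add() and spa_l2cache_add() register the vdev in the
	 * global spare and l2cache namespaces, respectively.)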
	 */
	if (error == 0 && !vd->vdev_isspare &&
	    (reason == VDEV_LABEL_SPARE ||
	    spa_spare_exists(vd->vdev_guid, NULL, NULL)))
		spa_spare_add(vd);

	if (error == 0 && !vd->vdev_isl2cache &&
	    (reason == VDEV_LABEL_L2CACHE ||
	    spa_l2cache_exists(vd->vdev_guid, NULL)))
		spa_l2cache_add(vd);

	return (error);
}

/*
 * ==========================================================================
 * uberblock load/sync
 * ==========================================================================
 */

/*
 * Consider the following situation: txg is safely synced to disk. We've
 * written the first uberblock for txg + 1, and then we lose power. When we
 * come back up, we fail to see the uberblock for txg + 1 because, say,
 * it was on a mirrored device and the replica to which we wrote txg + 1
 * is now offline. If we then make some changes and sync txg + 1, and then
 * the missing replica comes back, then for a few seconds we'll have two
 * conflicting uberblocks on disk with the same txg. The solution is simple:
 * among uberblocks with equal txg, choose the one with the latest timestamp.
 */
static int
vdev_uberblock_compare(uberblock_t *ub1, uberblock_t *ub2)
{
	if (ub1->ub_txg < ub2->ub_txg)
		return (-1);
	if (ub1->ub_txg > ub2->ub_txg)
		return (1);

	if (ub1->ub_timestamp < ub2->ub_timestamp)
		return (-1);
	if (ub1->ub_timestamp > ub2->ub_timestamp)
		return (1);

	return (0);
}

struct ubl_cbdata {
	uberblock_t	*ubl_ubbest;	/* Best uberblock */
	vdev_t		*ubl_vd;	/* vdev associated with the above */
};

static void
vdev_uberblock_load_done(zio_t *zio)
{
	vdev_t *vd = zio->io_vd;
	spa_t *spa = zio->io_spa;
	zio_t *rio = zio->io_private;
	uberblock_t *ub = abd_to_buf(zio->io_abd);
	struct ubl_cbdata *cbp = rio->io_private;

	ASSERT3U(zio->io_size, ==, VDEV_UBERBLOCK_SIZE(vd));

	if (zio->io_error == 0 && uberblock_verify(ub) == 0) {
		mutex_enter(&rio->io_lock);
		if (ub->ub_txg <= spa->spa_load_max_txg &&
		    vdev_uberblock_compare(ub, cbp->ubl_ubbest) > 0) {
			/*
			 * Keep track of the vdev in which this uberblock
			 * was found. We will use this information later
			 * to obtain the config nvlist associated with
			 * this uberblock.
			 */
			*cbp->ubl_ubbest = *ub;
			cbp->ubl_vd = vd;
		}
		mutex_exit(&rio->io_lock);
	}

	abd_free(zio->io_abd);
}

static void
vdev_uberblock_load_impl(zio_t *zio, vdev_t *vd, int flags,
    struct ubl_cbdata *cbp)
{
	for (int c = 0; c < vd->vdev_children; c++)
		vdev_uberblock_load_impl(zio, vd->vdev_child[c], flags, cbp);

	if (vd->vdev_ops->vdev_op_leaf && vdev_readable(vd)) {
		for (int l = 0; l < VDEV_LABELS; l++) {
			for (int n = 0; n < VDEV_UBERBLOCK_COUNT(vd); n++) {
				vdev_label_read(zio, vd, l,
				    abd_alloc_linear(VDEV_UBERBLOCK_SIZE(vd),
				    B_TRUE), VDEV_UBERBLOCK_OFFSET(vd, n),
				    VDEV_UBERBLOCK_SIZE(vd),
				    vdev_uberblock_load_done, zio, flags);
			}
		}
	}
}

/*
 * Reads the 'best' uberblock from disk along with its associated
 * configuration. First, we read the uberblock array of each label of each
 * vdev, keeping track of the uberblock with the highest txg in each array.
 * Then, we read the configuration from the same vdev as the best uberblock.
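 *
 * Note that all reads here are issued with ZIO_FLAG_TRYHARD: this path runs
 * at pool import, where retrying a flaky device is preferable to failing
 * the import outright.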
 */
void
vdev_uberblock_load(vdev_t *rvd, uberblock_t *ub, nvlist_t **config)
{
	zio_t *zio;
	spa_t *spa = rvd->vdev_spa;
	struct ubl_cbdata cb;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_SPECULATIVE | ZIO_FLAG_TRYHARD;

	ASSERT(ub);
	ASSERT(config);

	bzero(ub, sizeof (uberblock_t));
	*config = NULL;

	cb.ubl_ubbest = ub;
	cb.ubl_vd = NULL;

	spa_config_enter(spa, SCL_ALL, FTAG, RW_WRITER);
	zio = zio_root(spa, NULL, &cb, flags);
	vdev_uberblock_load_impl(zio, rvd, flags, &cb);
	(void) zio_wait(zio);

	/*
	 * It's possible that the best uberblock was discovered on a label
	 * that has a configuration which was written in a future txg.
	 * Search all labels on this vdev to find the configuration that
	 * matches the txg for our uberblock.
	 */
	if (cb.ubl_vd != NULL) {
		vdev_dbgmsg(cb.ubl_vd, "best uberblock found for spa %s. "
		    "txg %llu", spa->spa_name, (u_longlong_t)ub->ub_txg);

		*config = vdev_label_read_config(cb.ubl_vd, ub->ub_txg);
		if (*config == NULL && spa->spa_extreme_rewind) {
			vdev_dbgmsg(cb.ubl_vd, "failed to read label config. "
			    "Trying again without txg restrictions.");
			*config = vdev_label_read_config(cb.ubl_vd, UINT64_MAX);
		}
		if (*config == NULL) {
			vdev_dbgmsg(cb.ubl_vd, "failed to read label config");
		}
	}
	spa_config_exit(spa, SCL_ALL, FTAG);
}

/*
 * On success, increment root zio's count of good writes.
 * We only get credit for writes to known-visible vdevs; see spa_vdev_add().
 */
static void
vdev_uberblock_sync_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (zio->io_error == 0 && zio->io_vd->vdev_top->vdev_ms_array != 0)
		atomic_inc_64(good_writes);
}

/*
 * Write the uberblock to all labels of all leaves of the specified vdev.
 */
static void
vdev_uberblock_sync(zio_t *zio, uint64_t *good_writes,
    uberblock_t *ub, vdev_t *vd, int flags)
{
	for (uint64_t c = 0; c < vd->vdev_children; c++) {
		vdev_uberblock_sync(zio, good_writes,
		    ub, vd->vdev_child[c], flags);
	}

	if (!vd->vdev_ops->vdev_op_leaf)
		return;

	if (!vdev_writeable(vd))
		return;

	int m = spa_multihost(vd->vdev_spa) ? MMP_BLOCKS_PER_LABEL : 0;
	int n = ub->ub_txg % (VDEV_UBERBLOCK_COUNT(vd) - m);

	/* Copy the uberblock_t into the ABD */
	abd_t *ub_abd = abd_alloc_for_io(VDEV_UBERBLOCK_SIZE(vd), B_TRUE);
	abd_zero(ub_abd, VDEV_UBERBLOCK_SIZE(vd));
	abd_copy_from_buf(ub_abd, ub, sizeof (uberblock_t));

	for (int l = 0; l < VDEV_LABELS; l++)
		vdev_label_write(zio, vd, l, ub_abd,
		    VDEV_UBERBLOCK_OFFSET(vd, n), VDEV_UBERBLOCK_SIZE(vd),
		    vdev_uberblock_sync_done, good_writes,
		    flags | ZIO_FLAG_DONT_PROPAGATE);

	abd_free(ub_abd);
}
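
/*
 * Example of the ring arithmetic above (illustrative values): with
 * VDEV_UBERBLOCK_COUNT(vd) == 128 and multihost off (m == 0), the
 * uberblock for txg 300 lands in slot 300 % 128 == 44 of every label.
 */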

/* Sync the uberblocks to all vdevs in svd[] */
int
vdev_uberblock_sync_list(vdev_t **svd, int svdcount, uberblock_t *ub, int flags)
{
	spa_t *spa = svd[0]->vdev_spa;
	zio_t *zio;
	uint64_t good_writes = 0;

	zio = zio_root(spa, NULL, NULL, flags);

	for (int v = 0; v < svdcount; v++)
		vdev_uberblock_sync(zio, &good_writes, ub, svd[v], flags);

	(void) zio_wait(zio);

	/*
	 * Flush the uberblocks to disk. This ensures that the odd labels
	 * are no longer needed (because the new uberblocks and the even
	 * labels are safely on disk), so it is safe to overwrite them.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (int v = 0; v < svdcount; v++) {
		if (vdev_writeable(svd[v])) {
			zio_flush(zio, svd[v]);
		}
	}

	(void) zio_wait(zio);

	return (good_writes >= 1 ? 0 : EIO);
}

/*
 * On success, increment the count of good writes for our top-level vdev.
 */
static void
vdev_label_sync_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (zio->io_error == 0)
		atomic_inc_64(good_writes);
}

/*
 * If there weren't enough good writes, indicate failure to the parent.
 */
static void
vdev_label_sync_top_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (*good_writes == 0)
		zio->io_error = SET_ERROR(EIO);

	kmem_free(good_writes, sizeof (uint64_t));
}

/*
 * We ignore errors for log and cache devices; simply free the private data.
 */
static void
vdev_label_sync_ignore_done(zio_t *zio)
{
	kmem_free(zio->io_private, sizeof (uint64_t));
}

/*
 * Write all even or odd labels to all leaves of the specified vdev.
 */
static void
vdev_label_sync(zio_t *zio, uint64_t *good_writes,
    vdev_t *vd, int l, uint64_t txg, int flags)
{
	nvlist_t *label;
	vdev_phys_t *vp;
	abd_t *vp_abd;
	char *buf;
	size_t buflen;

	for (int c = 0; c < vd->vdev_children; c++) {
		vdev_label_sync(zio, good_writes,
		    vd->vdev_child[c], l, txg, flags);
	}

	if (!vd->vdev_ops->vdev_op_leaf)
		return;

	if (!vdev_writeable(vd))
		return;

	/*
	 * Generate a label describing the top-level config to which we belong.
	 */
	label = spa_config_generate(vd->vdev_spa, vd, txg, B_FALSE);

	vp_abd = abd_alloc_linear(sizeof (vdev_phys_t), B_TRUE);
	abd_zero(vp_abd, sizeof (vdev_phys_t));
	vp = abd_to_buf(vp_abd);

	buf = vp->vp_nvlist;
	buflen = sizeof (vp->vp_nvlist);

	if (nvlist_pack(label, &buf, &buflen, NV_ENCODE_XDR, KM_SLEEP) == 0) {
		for (; l < VDEV_LABELS; l += 2) {
			vdev_label_write(zio, vd, l, vp_abd,
			    offsetof(vdev_label_t, vl_vdev_phys),
			    sizeof (vdev_phys_t),
			    vdev_label_sync_done, good_writes,
			    flags | ZIO_FLAG_DONT_PROPAGATE);
		}
	}

	abd_free(vp_abd);
	nvlist_free(label);
}
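
/*
 * Write one rank of labels for every vdev on the dirty config list: l == 0
 * writes the even labels (L0, L2), l == 1 the odd labels (L1, L3);
 * vdev_label_sync() above steps through the labels by two.
 */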
int
vdev_label_sync_list(spa_t *spa, int l, uint64_t txg, int flags)
{
	list_t *dl = &spa->spa_config_dirty_list;
	vdev_t *vd;
	zio_t *zio;
	int error;

	/*
	 * Write the new labels to disk.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (vd = list_head(dl); vd != NULL; vd = list_next(dl, vd)) {
		uint64_t *good_writes = kmem_zalloc(sizeof (uint64_t),
		    KM_SLEEP);

		ASSERT(!vd->vdev_ishole);

		zio_t *vio = zio_null(zio, spa, NULL,
		    (vd->vdev_islog || vd->vdev_aux != NULL) ?
		    vdev_label_sync_ignore_done : vdev_label_sync_top_done,
		    good_writes, flags);
		vdev_label_sync(vio, good_writes, vd, l, txg, flags);
		zio_nowait(vio);
	}

	error = zio_wait(zio);

	/*
	 * Flush the new labels to disk.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (vd = list_head(dl); vd != NULL; vd = list_next(dl, vd))
		zio_flush(zio, vd);

	(void) zio_wait(zio);

	return (error);
}

/*
 * Sync the uberblock and any changes to the vdev configuration.
 *
 * The order of operations is carefully crafted to ensure that
 * if the system panics or loses power at any time, the state on disk
 * is still transactionally consistent. The in-line comments below
 * describe the failure semantics at each stage.
 *
 * Moreover, vdev_config_sync() is designed to be idempotent: if it fails
 * at any time, you can just call it again, and it will resume its work.
 */
int
vdev_config_sync(vdev_t **svd, int svdcount, uint64_t txg)
{
	spa_t *spa = svd[0]->vdev_spa;
	uberblock_t *ub = &spa->spa_uberblock;
	int error = 0;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL;

	ASSERT(svdcount != 0);
retry:
	/*
	 * Normally, we don't want to try too hard to write every label and
	 * uberblock. If there is a flaky disk, we don't want the rest of the
	 * sync process to block while we retry. But if we can't write a
	 * single label out, we should retry with ZIO_FLAG_TRYHARD before
	 * bailing out and declaring the pool faulted.
	 */
	if (error != 0) {
		if ((flags & ZIO_FLAG_TRYHARD) != 0)
			return (error);
		flags |= ZIO_FLAG_TRYHARD;
	}

	ASSERT(ub->ub_txg <= txg);

	/*
	 * If this isn't a resync due to I/O errors,
	 * and nothing changed in this transaction group,
	 * and the vdev configuration hasn't changed,
	 * then there's nothing to do.
	 */
	if (ub->ub_txg < txg) {
		boolean_t changed = uberblock_update(ub, spa->spa_root_vdev,
		    txg, spa->spa_mmp.mmp_delay);

		if (!changed && list_is_empty(&spa->spa_config_dirty_list))
			return (0);
	}

	if (txg > spa_freeze_txg(spa))
		return (0);

	ASSERT(txg <= spa->spa_final_txg);

	/*
	 * Flush the write cache of every disk that's been written to
	 * in this transaction group. This ensures that all blocks
	 * written in this txg will be committed to stable storage
	 * before any uberblock that references them.
	 */
	zio_t *zio = zio_root(spa, NULL, NULL, flags);

	for (vdev_t *vd =
	    txg_list_head(&spa->spa_vdev_txg_list, TXG_CLEAN(txg)); vd != NULL;
	    vd = txg_list_next(&spa->spa_vdev_txg_list, vd, TXG_CLEAN(txg)))
		zio_flush(zio, vd);

	(void) zio_wait(zio);

	/*
	 * Sync out the even labels (L0, L2) for every dirty vdev. If the
	 * system dies in the middle of this process, that's OK: all of the
	 * even labels that made it to disk will be newer than any uberblock,
	 * and will therefore be considered invalid. The odd labels (L1, L3),
	 * which have not yet been touched, will still be valid. We flush
	 * the new labels to disk to ensure that all even-label updates
	 * are committed to stable storage before the uberblock update.
	 */
	if ((error = vdev_label_sync_list(spa, 0, txg, flags)) != 0) {
		if ((flags & ZIO_FLAG_TRYHARD) != 0) {
			zfs_dbgmsg("vdev_label_sync_list() returned error %d "
			    "for pool '%s' when syncing out the even labels "
			    "of dirty vdevs", error, spa_name(spa));
		}
		goto retry;
	}

	/*
	 * Sync the uberblocks to all vdevs in svd[].
	 * If the system dies in the middle of this step, there are two cases
	 * to consider, and the on-disk state is consistent either way:
	 *
	 * (1)	If none of the new uberblocks made it to disk, then the
	 *	previous uberblock will be the newest, and the odd labels
	 *	(which had not yet been touched) will be valid with respect
	 *	to that uberblock.
	 *
	 * (2)	If one or more new uberblocks made it to disk, then they
	 *	will be the newest, and the even labels (which had all
	 *	been successfully committed) will be valid with respect
	 *	to the new uberblocks.
	 */
	if ((error = vdev_uberblock_sync_list(svd, svdcount, ub, flags)) != 0) {
		if ((flags & ZIO_FLAG_TRYHARD) != 0) {
			zfs_dbgmsg("vdev_uberblock_sync_list() returned error "
			    "%d for pool '%s'", error, spa_name(spa));
		}
		goto retry;
	}

	if (spa_multihost(spa))
		mmp_update_uberblock(spa, ub);

	/*
	 * Sync out odd labels for every dirty vdev. If the system dies
	 * in the middle of this process, the even labels and the new
	 * uberblocks will suffice to open the pool. The next time
	 * the pool is opened, the first thing we'll do -- before any
	 * user data is modified -- is mark every vdev dirty so that
	 * all labels will be brought up to date. We flush the new labels
	 * to disk to ensure that all odd-label updates are committed to
	 * stable storage before the next transaction group begins.
	 */
	if ((error = vdev_label_sync_list(spa, 1, txg, flags)) != 0) {
		if ((flags & ZIO_FLAG_TRYHARD) != 0) {
			zfs_dbgmsg("vdev_label_sync_list() returned error %d "
			    "for pool '%s' when syncing out the odd labels of "
			    "dirty vdevs", error, spa_name(spa));
		}
		goto retry;
	}

	return (0);
}