/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2012, 2018 by Delphix. All rights reserved.
 * Copyright (c) 2017, Intel Corporation.
 * Copyright 2019 Joyent, Inc.
 */

/*
 * Virtual Device Labels
 * ---------------------
 *
 * The vdev label serves several distinct purposes:
 *
 *	1. Uniquely identify this device as part of a ZFS pool and confirm its
 *	   identity within the pool.
 *
 *	2. Verify that all the devices given in a configuration are present
 *	   within the pool.
 *
 *	3. Determine the uberblock for the pool.
 *
 *	4. In case of an import operation, determine the configuration of the
 *	   toplevel vdev of which it is a part.
 *
 *	5. If an import operation cannot find all the devices in the pool,
 *	   provide enough information to the administrator to determine which
 *	   devices are missing.
 *
 * It is important to note that while the kernel is responsible for writing the
 * label, it only consumes the information in the first three cases.  The
 * latter information is only consumed in userland when determining the
 * configuration to import a pool.
 *
 *
 * Label Organization
 * ------------------
 *
 * Before describing the contents of the label, it's important to understand
 * how the labels are written and updated with respect to the uberblock.
 *
 * When the pool configuration is altered, either because it was newly created
 * or a device was added, we want to update all the labels such that we can
 * deal with fatal failure at any point.  To this end, each disk has two labels
 * which are updated before and after the uberblock is synced.  Assuming we
 * have labels and an uberblock with the following transaction groups:
 *
 *              L1          UB          L2
 *           +------+    +------+    +------+
 *           |      |    |      |    |      |
 *           | t10  |    | t10  |    | t10  |
 *           |      |    |      |    |      |
 *           +------+    +------+    +------+
 *
 * In this stable state, the labels and the uberblock were all updated within
 * the same transaction group (10).  Each label is mirrored and checksummed, so
 * that we can detect when we fail partway through writing the label.
 *
 * In order to identify which labels are valid, the labels are written in the
 * following manner:
 *
 *	1. For each vdev, update 'L1' to the new label
 *	2. Update the uberblock
 *	3. For each vdev, update 'L2' to the new label
 *
 * Given arbitrary failure, we can determine the correct label to use based on
 * the transaction group.  If we fail after updating L1 but before updating the
 * UB, we will notice that L1's transaction group is greater than the
 * uberblock's, so L2 must be valid.  If we fail after writing the uberblock
 * but before writing L2, we will notice that L2's transaction group is less
 * than L1's, and therefore L1 is valid.
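 *
 * For example, suppose we crash between steps 1 and 2 while moving from
 * txg 10 to txg 11:
 *
 *              L1          UB          L2
 *           +------+    +------+    +------+
 *           |      |    |      |    |      |
 *           | t11  |    | t10  |    | t10  |
 *           |      |    |      |    |      |
 *           +------+    +------+    +------+
 *
 * On the next open we see that L1's transaction group (11) exceeds the
 * uberblock's (10), so L2 is the copy that matches the state actually
 * committed to disk.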
 *
 * Another added complexity is that not every label is updated when the config
 * is synced.  If we add a single device, we do not want to have to re-write
 * every label for every device in the pool.  This means that both L1 and L2
 * may be older than the pool uberblock, because the necessary information is
 * stored on another vdev.
 *
 *
 * On-disk Format
 * --------------
 *
 * The vdev label consists of two distinct parts, and is wrapped within the
 * vdev_label_t structure.  The label includes 8k of padding to permit legacy
 * VTOC disk labels; this padding is otherwise ignored.
 *
 * The first half of the label is a packed nvlist which contains pool-wide
 * properties, per-vdev properties, and configuration information.  It is
 * described in more detail below.
 *
 * The latter half of the label consists of a redundant array of uberblocks.
 * These uberblocks are updated whenever a transaction group is committed,
 * or when the configuration is updated.  When a pool is loaded, we scan each
 * vdev for the 'best' uberblock.
 *
 *
 * Configuration Information
 * -------------------------
 *
 * The nvlist describing the pool and vdev contains the following elements:
 *
 *	version		ZFS on-disk version
 *	name		Pool name
 *	state		Pool state
 *	txg		Transaction group in which this label was written
 *	pool_guid	Unique identifier for this pool
 *	vdev_tree	An nvlist describing vdev tree.
 *	features_for_read
 *			An nvlist of the features necessary for reading the MOS.
 *
 * Each leaf device label also contains the following:
 *
 *	top_guid	Unique ID for top-level vdev in which this is contained
 *	guid		Unique ID for the leaf vdev
 *
 * The 'vs' configuration follows the format described in 'spa_config.c'.
 */

#include <sys/zfs_context.h>
#include <sys/spa.h>
#include <sys/spa_impl.h>
#include <sys/dmu.h>
#include <sys/zap.h>
#include <sys/vdev.h>
#include <sys/vdev_impl.h>
#include <sys/uberblock_impl.h>
#include <sys/metaslab.h>
#include <sys/metaslab_impl.h>
#include <sys/zio.h>
#include <sys/dsl_scan.h>
#include <sys/abd.h>
#include <sys/fs/zfs.h>

/*
 * Basic routines to read and write from a vdev label.
 * Used throughout the rest of this file.
 */
uint64_t
vdev_label_offset(uint64_t psize, int l, uint64_t offset)
{
	ASSERT(offset < sizeof (vdev_label_t));
	ASSERT(P2PHASE_TYPED(psize, sizeof (vdev_label_t), uint64_t) == 0);

	return (offset + l * sizeof (vdev_label_t) + (l < VDEV_LABELS / 2 ?
	    0 : psize - VDEV_LABELS * sizeof (vdev_label_t)));
}
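
/*
 * For example, with VDEV_LABELS == 4 and sizeof (vdev_label_t) == 256K
 * (see vdev_impl.h), labels 0 and 1 map to the front of the device
 * (offsets 0 and 256K) while labels 2 and 3 map to the back
 * (psize - 512K and psize - 256K), so copies of the label survive damage
 * to either end of the device.
 */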

/*
 * Returns the label number associated with the given offset.
 */
int
vdev_label_number(uint64_t psize, uint64_t offset)
{
	int l;

	if (offset >= psize - VDEV_LABEL_END_SIZE) {
		offset -= psize - VDEV_LABEL_END_SIZE;
		offset += (VDEV_LABELS / 2) * sizeof (vdev_label_t);
	}
	l = offset / sizeof (vdev_label_t);
	return (l < VDEV_LABELS ? l : -1);
}

static void
vdev_label_read(zio_t *zio, vdev_t *vd, int l, abd_t *buf, uint64_t offset,
    uint64_t size, zio_done_func_t *done, void *private, int flags)
{
	ASSERT(
	    spa_config_held(zio->io_spa, SCL_STATE, RW_READER) == SCL_STATE ||
	    spa_config_held(zio->io_spa, SCL_STATE, RW_WRITER) == SCL_STATE);
	ASSERT(flags & ZIO_FLAG_CONFIG_WRITER);

	zio_nowait(zio_read_phys(zio, vd,
	    vdev_label_offset(vd->vdev_psize, l, offset),
	    size, buf, ZIO_CHECKSUM_LABEL, done, private,
	    ZIO_PRIORITY_SYNC_READ, flags, B_TRUE));
}

void
vdev_label_write(zio_t *zio, vdev_t *vd, int l, abd_t *buf, uint64_t offset,
    uint64_t size, zio_done_func_t *done, void *private, int flags)
{
	ASSERT(
	    spa_config_held(zio->io_spa, SCL_STATE, RW_READER) == SCL_STATE ||
	    spa_config_held(zio->io_spa, SCL_STATE, RW_WRITER) == SCL_STATE);
	ASSERT(flags & ZIO_FLAG_CONFIG_WRITER);

	zio_nowait(zio_write_phys(zio, vd,
	    vdev_label_offset(vd->vdev_psize, l, offset),
	    size, buf, ZIO_CHECKSUM_LABEL, done, private,
	    ZIO_PRIORITY_SYNC_WRITE, flags, B_TRUE));
}
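
/*
 * A minimal usage sketch for the routines above (this mirrors the pattern
 * used by vdev_label_read_config() below; error handling elided):
 *
 *	zio = zio_root(spa, NULL, NULL, flags);
 *	vdev_label_read(zio, vd, l, buf, offset, size, NULL, NULL, flags);
 *	error = zio_wait(zio);
 *
 * Label I/O is always issued under a root zio so the caller can wait on,
 * and if necessary retry, the whole batch at once.
 */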

static void
root_vdev_actions_getprogress(vdev_t *vd, nvlist_t *nvl)
{
	spa_t *spa = vd->vdev_spa;

	if (vd != spa->spa_root_vdev)
		return;

	/* provide either current or previous scan information */
	pool_scan_stat_t ps;
	if (spa_scan_get_stats(spa, &ps) == 0) {
		fnvlist_add_uint64_array(nvl,
		    ZPOOL_CONFIG_SCAN_STATS, (uint64_t *)&ps,
		    sizeof (pool_scan_stat_t) / sizeof (uint64_t));
	}

	pool_removal_stat_t prs;
	if (spa_removal_get_stats(spa, &prs) == 0) {
		fnvlist_add_uint64_array(nvl,
		    ZPOOL_CONFIG_REMOVAL_STATS, (uint64_t *)&prs,
		    sizeof (prs) / sizeof (uint64_t));
	}

	pool_checkpoint_stat_t pcs;
	if (spa_checkpoint_get_stats(spa, &pcs) == 0) {
		fnvlist_add_uint64_array(nvl,
		    ZPOOL_CONFIG_CHECKPOINT_STATS, (uint64_t *)&pcs,
		    sizeof (pcs) / sizeof (uint64_t));
	}
}

/*
 * Generate the nvlist representing this vdev's config.
 */
nvlist_t *
vdev_config_generate(spa_t *spa, vdev_t *vd, boolean_t getstats,
    vdev_config_flag_t flags)
{
	nvlist_t *nv = NULL;
	vdev_indirect_config_t *vic = &vd->vdev_indirect_config;

	nv = fnvlist_alloc();

	fnvlist_add_string(nv, ZPOOL_CONFIG_TYPE, vd->vdev_ops->vdev_op_type);
	if (!(flags & (VDEV_CONFIG_SPARE | VDEV_CONFIG_L2CACHE)))
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_ID, vd->vdev_id);
	fnvlist_add_uint64(nv, ZPOOL_CONFIG_GUID, vd->vdev_guid);

	if (vd->vdev_path != NULL)
		fnvlist_add_string(nv, ZPOOL_CONFIG_PATH, vd->vdev_path);

	if (vd->vdev_devid != NULL)
		fnvlist_add_string(nv, ZPOOL_CONFIG_DEVID, vd->vdev_devid);

	if (vd->vdev_physpath != NULL)
		fnvlist_add_string(nv, ZPOOL_CONFIG_PHYS_PATH,
		    vd->vdev_physpath);

	if (vd->vdev_fru != NULL)
		fnvlist_add_string(nv, ZPOOL_CONFIG_FRU, vd->vdev_fru);

	if (vd->vdev_nparity != 0) {
		ASSERT(strcmp(vd->vdev_ops->vdev_op_type,
		    VDEV_TYPE_RAIDZ) == 0);

		/*
		 * Make sure someone hasn't managed to sneak a fancy new vdev
		 * into a crufty old storage pool.
		 */
		ASSERT(vd->vdev_nparity == 1 ||
		    (vd->vdev_nparity <= 2 &&
		    spa_version(spa) >= SPA_VERSION_RAIDZ2) ||
		    (vd->vdev_nparity <= 3 &&
		    spa_version(spa) >= SPA_VERSION_RAIDZ3));

		/*
		 * Note that we'll add the nparity tag even on storage pools
		 * that only support a single parity device -- older software
		 * will just ignore it.
		 */
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_NPARITY, vd->vdev_nparity);
	}

	if (vd->vdev_wholedisk != -1ULL)
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_WHOLE_DISK,
		    vd->vdev_wholedisk);

	if (vd->vdev_not_present && !(flags & VDEV_CONFIG_MISSING))
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_NOT_PRESENT, 1);

	if (vd->vdev_isspare)
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_IS_SPARE, 1);

	if (!(flags & (VDEV_CONFIG_SPARE | VDEV_CONFIG_L2CACHE)) &&
	    vd == vd->vdev_top) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_METASLAB_ARRAY,
		    vd->vdev_ms_array);
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_METASLAB_SHIFT,
		    vd->vdev_ms_shift);
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_ASHIFT, vd->vdev_ashift);
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_ASIZE,
		    vd->vdev_asize);
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_IS_LOG, vd->vdev_islog);
		if (vd->vdev_removing) {
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_REMOVING,
			    vd->vdev_removing);
		}

		/* zpool command expects alloc class data */
		if (getstats && vd->vdev_alloc_bias != VDEV_BIAS_NONE) {
			const char *bias = NULL;

			switch (vd->vdev_alloc_bias) {
			case VDEV_BIAS_LOG:
				bias = VDEV_ALLOC_BIAS_LOG;
				break;
			case VDEV_BIAS_SPECIAL:
				bias = VDEV_ALLOC_BIAS_SPECIAL;
				break;
			case VDEV_BIAS_DEDUP:
				bias = VDEV_ALLOC_BIAS_DEDUP;
				break;
			default:
				ASSERT3U(vd->vdev_alloc_bias, ==,
				    VDEV_BIAS_NONE);
			}
			fnvlist_add_string(nv, ZPOOL_CONFIG_ALLOCATION_BIAS,
			    bias);
		}
	}
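
	/*
	 * At this point, for a healthy top-level raidz2 vdev, 'nv' holds
	 * roughly the following (values illustrative):
	 *
	 *	type='raidz' id=0 guid=... nparity=2 metaslab_array=...
	 *	metaslab_shift=34 ashift=12 asize=... is_log=0
	 */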

	if (vd->vdev_dtl_sm != NULL) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_DTL,
		    space_map_object(vd->vdev_dtl_sm));
	}

	if (vic->vic_mapping_object != 0) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_INDIRECT_OBJECT,
		    vic->vic_mapping_object);
	}

	if (vic->vic_births_object != 0) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_INDIRECT_BIRTHS,
		    vic->vic_births_object);
	}

	if (vic->vic_prev_indirect_vdev != UINT64_MAX) {
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_PREV_INDIRECT_VDEV,
		    vic->vic_prev_indirect_vdev);
	}

	if (vd->vdev_crtxg)
		fnvlist_add_uint64(nv, ZPOOL_CONFIG_CREATE_TXG, vd->vdev_crtxg);

	if (flags & VDEV_CONFIG_MOS) {
		if (vd->vdev_leaf_zap != 0) {
			ASSERT(vd->vdev_ops->vdev_op_leaf);
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_VDEV_LEAF_ZAP,
			    vd->vdev_leaf_zap);
		}

		if (vd->vdev_top_zap != 0) {
			ASSERT(vd == vd->vdev_top);
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_VDEV_TOP_ZAP,
			    vd->vdev_top_zap);
		}

		if (vd->vdev_resilver_deferred) {
			ASSERT(vd->vdev_ops->vdev_op_leaf);
			ASSERT(spa->spa_resilver_deferred);
			fnvlist_add_boolean(nv, ZPOOL_CONFIG_RESILVER_DEFER);
		}
	}

	if (getstats) {
		vdev_stat_t vs;

		vdev_get_stats(vd, &vs);
		fnvlist_add_uint64_array(nv, ZPOOL_CONFIG_VDEV_STATS,
		    (uint64_t *)&vs, sizeof (vs) / sizeof (uint64_t));

		root_vdev_actions_getprogress(vd, nv);

		/*
		 * Note: this can be called from open context
		 * (spa_get_stats()), so we need the rwlock to prevent
		 * the mapping from being changed by condensing.
		 */
		rw_enter(&vd->vdev_indirect_rwlock, RW_READER);
		if (vd->vdev_indirect_mapping != NULL) {
			ASSERT(vd->vdev_indirect_births != NULL);
			vdev_indirect_mapping_t *vim =
			    vd->vdev_indirect_mapping;
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_INDIRECT_SIZE,
			    vdev_indirect_mapping_size(vim));
		}
		rw_exit(&vd->vdev_indirect_rwlock);
		if (vd->vdev_mg != NULL &&
		    vd->vdev_mg->mg_fragmentation != ZFS_FRAG_INVALID) {
			/*
			 * Compute approximately how much memory would be used
			 * for the indirect mapping if this device were to
			 * be removed.
			 *
			 * Note: If the frag metric is invalid, then not
			 * enough metaslabs have been converted to have
			 * histograms.
			 */
			uint64_t seg_count = 0;
			uint64_t to_alloc = vd->vdev_stat.vs_alloc;

			/*
			 * There are the same number of allocated segments
			 * as free segments, so we will have at least one
			 * entry per free segment.  However, small free
			 * segments (smaller than vdev_removal_max_span)
			 * will be combined with adjacent allocated segments
			 * as a single mapping.
			 */
			for (int i = 0; i < RANGE_TREE_HISTOGRAM_SIZE; i++) {
				if (1ULL << (i + 1) < vdev_removal_max_span) {
					to_alloc +=
					    vd->vdev_mg->mg_histogram[i] <<
					    (i + 1);
				} else {
					seg_count +=
					    vd->vdev_mg->mg_histogram[i];
				}
			}

			/*
			 * The maximum length of a mapping is
			 * zfs_remove_max_segment, so we need at least one entry
			 * per zfs_remove_max_segment of allocated data.
			 */
			seg_count += to_alloc / zfs_remove_max_segment;

			fnvlist_add_uint64(nv, ZPOOL_CONFIG_INDIRECT_SIZE,
			    seg_count *
			    sizeof (vdev_indirect_mapping_entry_phys_t));
		}
	}
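
	/*
	 * For example, assuming the default tunables (a vdev_removal_max_span
	 * of 32K and a zfs_remove_max_segment of SPA_MAXBLOCKSIZE), the
	 * estimate above charges free segments under 32K to to_alloc (they
	 * would be absorbed into neighboring mappings), counts one entry per
	 * larger free segment, and adds one entry per 16M of allocated data.
	 */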

	if (!vd->vdev_ops->vdev_op_leaf) {
		nvlist_t **child;
		int c, idx;

		ASSERT(!vd->vdev_ishole);

		child = kmem_alloc(vd->vdev_children * sizeof (nvlist_t *),
		    KM_SLEEP);

		for (c = 0, idx = 0; c < vd->vdev_children; c++) {
			vdev_t *cvd = vd->vdev_child[c];

			/*
			 * If we're generating an nvlist of removing
			 * vdevs then skip over any device which is
			 * not being removed.
			 */
			if ((flags & VDEV_CONFIG_REMOVING) &&
			    !cvd->vdev_removing)
				continue;

			child[idx++] = vdev_config_generate(spa, cvd,
			    getstats, flags);
		}

		if (idx) {
			fnvlist_add_nvlist_array(nv, ZPOOL_CONFIG_CHILDREN,
			    child, idx);
		}

		for (c = 0; c < idx; c++)
			nvlist_free(child[c]);

		kmem_free(child, vd->vdev_children * sizeof (nvlist_t *));

	} else {
		const char *aux = NULL;

		if (vd->vdev_offline && !vd->vdev_tmpoffline)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_OFFLINE, B_TRUE);
		if (vd->vdev_resilver_txg != 0)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_RESILVER_TXG,
			    vd->vdev_resilver_txg);
		if (vd->vdev_faulted)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_FAULTED, B_TRUE);
		if (vd->vdev_degraded)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_DEGRADED, B_TRUE);
		if (vd->vdev_removed)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_REMOVED, B_TRUE);
		if (vd->vdev_unspare)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_UNSPARE, B_TRUE);
		if (vd->vdev_ishole)
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_IS_HOLE, B_TRUE);

		switch (vd->vdev_stat.vs_aux) {
		case VDEV_AUX_ERR_EXCEEDED:
			aux = "err_exceeded";
			break;

		case VDEV_AUX_EXTERNAL:
			aux = "external";
			break;
		}

		if (aux != NULL)
			fnvlist_add_string(nv, ZPOOL_CONFIG_AUX_STATE, aux);

		if (vd->vdev_splitting && vd->vdev_orig_guid != 0LL) {
			fnvlist_add_uint64(nv, ZPOOL_CONFIG_ORIG_GUID,
			    vd->vdev_orig_guid);
		}
	}

	return (nv);
}

/*
 * Generate a view of the top-level vdevs.  If we currently have holes
 * in the namespace, then generate an array which contains a list of holey
 * vdevs.  Additionally, add the number of top-level children that currently
 * exist.
 */
void
vdev_top_config_generate(spa_t *spa, nvlist_t *config)
{
	vdev_t *rvd = spa->spa_root_vdev;
	uint64_t *array;
	uint_t c, idx;

	array = kmem_alloc(rvd->vdev_children * sizeof (uint64_t), KM_SLEEP);

	for (c = 0, idx = 0; c < rvd->vdev_children; c++) {
		vdev_t *tvd = rvd->vdev_child[c];

		if (tvd->vdev_ishole) {
			array[idx++] = c;
		}
	}

	if (idx) {
		VERIFY(nvlist_add_uint64_array(config, ZPOOL_CONFIG_HOLE_ARRAY,
		    array, idx) == 0);
	}

	VERIFY(nvlist_add_uint64(config, ZPOOL_CONFIG_VDEV_CHILDREN,
	    rvd->vdev_children) == 0);

	kmem_free(array, rvd->vdev_children * sizeof (uint64_t));
}
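
/*
 * For example, a pool of three top-level vdevs whose middle child was
 * removed (e.g. a deleted log device) would carry hole_array=[1] and
 * vdev_children=3 in its config, letting an import rebuild the vdev
 * namespace with the gap preserved.
 */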

/*
 * Returns the configuration from the label of the given vdev.  For vdevs
 * which don't have a txg value stored in their label (i.e. spares/cache),
 * or which have not been completely initialized (txg = 0), just return
 * the configuration from the first valid label we find.  Otherwise,
 * find the most up-to-date label that does not exceed the specified
 * 'txg' value.
 */
nvlist_t *
vdev_label_read_config(vdev_t *vd, uint64_t txg)
{
	spa_t *spa = vd->vdev_spa;
	nvlist_t *config = NULL;
	vdev_phys_t *vp;
	abd_t *vp_abd;
	zio_t *zio;
	uint64_t best_txg = 0;
	uint64_t label_txg = 0;
	int error = 0;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_SPECULATIVE;

	ASSERT(spa_config_held(spa, SCL_STATE_ALL, RW_WRITER) == SCL_STATE_ALL);

	if (!vdev_readable(vd))
		return (NULL);

	vp_abd = abd_alloc_linear(sizeof (vdev_phys_t), B_TRUE);
	vp = abd_to_buf(vp_abd);

retry:
	for (int l = 0; l < VDEV_LABELS; l++) {
		nvlist_t *label = NULL;

		zio = zio_root(spa, NULL, NULL, flags);

		vdev_label_read(zio, vd, l, vp_abd,
		    offsetof(vdev_label_t, vl_vdev_phys),
		    sizeof (vdev_phys_t), NULL, NULL, flags);

		if (zio_wait(zio) == 0 &&
		    nvlist_unpack(vp->vp_nvlist, sizeof (vp->vp_nvlist),
		    &label, 0) == 0) {
			/*
			 * Auxiliary vdevs won't have txg values in their
			 * labels and newly added vdevs may not have been
			 * completely initialized so just return the
			 * configuration from the first valid label we
			 * encounter.
			 */
			error = nvlist_lookup_uint64(label,
			    ZPOOL_CONFIG_POOL_TXG, &label_txg);
			if ((error || label_txg == 0) && !config) {
				config = label;
				break;
			} else if (label_txg <= txg && label_txg > best_txg) {
				best_txg = label_txg;
				nvlist_free(config);
				config = fnvlist_dup(label);
			}
		}

		if (label != NULL) {
			nvlist_free(label);
			label = NULL;
		}
	}

	if (config == NULL && !(flags & ZIO_FLAG_TRYHARD)) {
		flags |= ZIO_FLAG_TRYHARD;
		goto retry;
	}

	/*
	 * We found a valid label but it didn't pass txg restrictions.
	 */
	if (config == NULL && label_txg != 0) {
		vdev_dbgmsg(vd, "label discarded as txg is too large "
		    "(%llu > %llu)", (u_longlong_t)label_txg,
		    (u_longlong_t)txg);
	}

	abd_free(vp_abd);

	return (config);
}
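
/*
 * A typical caller (e.g. vdev_inuse() below) passes a txg of -1ULL to
 * accept any label, then inspects the result, along the lines of:
 *
 *	if ((label = vdev_label_read_config(vd, -1ULL)) == NULL)
 *		return (B_FALSE);
 *	(void) nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_STATE, &state);
 */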

/*
 * Determine if a device is in use.  The 'spare_guid' parameter will be filled
 * in with the device guid if this spare is active elsewhere on the system.
 */
static boolean_t
vdev_inuse(vdev_t *vd, uint64_t crtxg, vdev_labeltype_t reason,
    uint64_t *spare_guid, uint64_t *l2cache_guid)
{
	spa_t *spa = vd->vdev_spa;
	uint64_t state, pool_guid, device_guid, txg, spare_pool;
	uint64_t vdtxg = 0;
	nvlist_t *label;

	if (spare_guid)
		*spare_guid = 0ULL;
	if (l2cache_guid)
		*l2cache_guid = 0ULL;

	/*
	 * Read the label, if any, and perform some basic sanity checks.
	 */
	if ((label = vdev_label_read_config(vd, -1ULL)) == NULL)
		return (B_FALSE);

	(void) nvlist_lookup_uint64(label, ZPOOL_CONFIG_CREATE_TXG,
	    &vdtxg);

	if (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_STATE,
	    &state) != 0 ||
	    nvlist_lookup_uint64(label, ZPOOL_CONFIG_GUID,
	    &device_guid) != 0) {
		nvlist_free(label);
		return (B_FALSE);
	}

	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    (nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_GUID,
	    &pool_guid) != 0 ||
	    nvlist_lookup_uint64(label, ZPOOL_CONFIG_POOL_TXG,
	    &txg) != 0)) {
		nvlist_free(label);
		return (B_FALSE);
	}

	nvlist_free(label);

	/*
	 * Check to see if this device indeed belongs to the pool it claims to
	 * be a part of.  The only way this is allowed is if the device is a hot
	 * spare (which we check for later on).
	 */
	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    !spa_guid_exists(pool_guid, device_guid) &&
	    !spa_spare_exists(device_guid, NULL, NULL) &&
	    !spa_l2cache_exists(device_guid, NULL))
		return (B_FALSE);

	/*
	 * If the transaction group is zero, then this is an initialized (but
	 * unused) label.  This is only an error if the create transaction
	 * on-disk is the same as the one we're using now, in which case the
	 * user has attempted to add the same vdev multiple times in the same
	 * transaction.
	 */
	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    txg == 0 && vdtxg == crtxg)
		return (B_TRUE);

	/*
	 * Check to see if this is a spare device.  We do an explicit check for
	 * spa_has_spare() here because it may be on our pending list of spares
	 * to add.  We also check if it is an l2cache device.
	 */
	if (spa_spare_exists(device_guid, &spare_pool, NULL) ||
	    spa_has_spare(spa, device_guid)) {
		if (spare_guid)
			*spare_guid = device_guid;

		switch (reason) {
		case VDEV_LABEL_CREATE:
		case VDEV_LABEL_L2CACHE:
			return (B_TRUE);

		case VDEV_LABEL_REPLACE:
			return (!spa_has_spare(spa, device_guid) ||
			    spare_pool != 0ULL);

		case VDEV_LABEL_SPARE:
			return (spa_has_spare(spa, device_guid));
		}
	}

	/*
	 * Check to see if this is an l2cache device.
	 */
	if (spa_l2cache_exists(device_guid, NULL))
		return (B_TRUE);

	/*
	 * We can't rely on a pool's state if it's been imported
	 * read-only.  Instead we look to see if the pool is marked
	 * read-only in the namespace and set the state to active.
	 */
	if (state != POOL_STATE_SPARE && state != POOL_STATE_L2CACHE &&
	    (spa = spa_by_guid(pool_guid, device_guid)) != NULL &&
	    spa_mode(spa) == FREAD)
		state = POOL_STATE_ACTIVE;

	/*
	 * If the device is marked ACTIVE, then this device is in use by another
	 * pool on the system.
	 */
	return (state == POOL_STATE_ACTIVE);
}
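
/*
 * In short: a device carrying an ACTIVE pool label is in use, unless it is
 * a shared hot spare or l2cache device, in which case the answer depends on
 * which operation (create, replace, spare, l2cache) is being attempted, as
 * encoded in the switch statement above.
 */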

/*
 * Initialize a vdev label.  We check to make sure each leaf device is not in
 * use and is writable.  We put down an initial label which we will later
 * overwrite with a complete label.  Note that it's important to do this
 * sequentially, not in parallel, so that we catch cases of multiple use of the
 * same leaf vdev in the vdev we're creating -- e.g. mirroring a disk with
 * itself.
 */
int
vdev_label_init(vdev_t *vd, uint64_t crtxg, vdev_labeltype_t reason)
{
	spa_t *spa = vd->vdev_spa;
	nvlist_t *label;
	vdev_phys_t *vp;
	abd_t *vp_abd;
	abd_t *pad2;
	uberblock_t *ub;
	abd_t *ub_abd;
	zio_t *zio;
	char *buf;
	size_t buflen;
	int error;
	uint64_t spare_guid, l2cache_guid;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL;

	ASSERT(spa_config_held(spa, SCL_ALL, RW_WRITER) == SCL_ALL);

	for (int c = 0; c < vd->vdev_children; c++)
		if ((error = vdev_label_init(vd->vdev_child[c],
		    crtxg, reason)) != 0)
			return (error);

	/* Track the creation time for this vdev */
	vd->vdev_crtxg = crtxg;

	if (!vd->vdev_ops->vdev_op_leaf || !spa_writeable(spa))
		return (0);

	/*
	 * Dead vdevs cannot be initialized.
	 */
	if (vdev_is_dead(vd))
		return (SET_ERROR(EIO));

	/*
	 * Determine if the vdev is in use.
	 */
	if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_SPLIT &&
	    vdev_inuse(vd, crtxg, reason, &spare_guid, &l2cache_guid))
		return (SET_ERROR(EBUSY));

	/*
	 * If this is a request to add or replace a spare or l2cache device
	 * that is in use elsewhere on the system, then we must update the
	 * guid (which was initialized to a random value) to reflect the
	 * actual GUID (which is shared between multiple pools).
	 */
	if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_L2CACHE &&
	    spare_guid != 0ULL) {
		uint64_t guid_delta = spare_guid - vd->vdev_guid;

		vd->vdev_guid += guid_delta;

		for (vdev_t *pvd = vd; pvd != NULL; pvd = pvd->vdev_parent)
			pvd->vdev_guid_sum += guid_delta;

		/*
		 * If this is a replacement, then we want to fall through to
		 * the rest of the code.  If we're adding a spare, then it's
		 * already labeled appropriately and we can just return.
		 */
		if (reason == VDEV_LABEL_SPARE)
			return (0);
		ASSERT(reason == VDEV_LABEL_REPLACE ||
		    reason == VDEV_LABEL_SPLIT);
	}

	if (reason != VDEV_LABEL_REMOVE && reason != VDEV_LABEL_SPARE &&
	    l2cache_guid != 0ULL) {
		uint64_t guid_delta = l2cache_guid - vd->vdev_guid;

		vd->vdev_guid += guid_delta;

		for (vdev_t *pvd = vd; pvd != NULL; pvd = pvd->vdev_parent)
			pvd->vdev_guid_sum += guid_delta;

		/*
		 * If this is a replacement, then we want to fall through to
		 * the rest of the code.  If we're adding an l2cache, then it's
		 * already labeled appropriately and we can just return.
		 */
		if (reason == VDEV_LABEL_L2CACHE)
			return (0);
		ASSERT(reason == VDEV_LABEL_REPLACE);
	}
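
	/*
	 * Example of the guid rewrite above: if the device was labeled with
	 * a fresh random guid G but is already known system-wide as spare S,
	 * then guid_delta = S - G; adding the delta to vd->vdev_guid and to
	 * each ancestor's vdev_guid_sum keeps every guid sum on the path to
	 * the root consistent with the new shared guid.
	 */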

	/*
	 * Initialize its label.
	 */
	vp_abd = abd_alloc_linear(sizeof (vdev_phys_t), B_TRUE);
	abd_zero(vp_abd, sizeof (vdev_phys_t));
	vp = abd_to_buf(vp_abd);

	/*
	 * Generate a label describing the pool and our top-level vdev.
	 * We mark it as being from txg 0 to indicate that it's not
	 * really part of an active pool just yet.  The labels will
	 * be written again with a meaningful txg by spa_sync().
	 */
	if (reason == VDEV_LABEL_SPARE ||
	    (reason == VDEV_LABEL_REMOVE && vd->vdev_isspare)) {
		/*
		 * For inactive hot spares, we generate a special label that
		 * identifies it as a mutually shared hot spare.  We write the
		 * label if we are adding a hot spare, or if we are removing an
		 * active hot spare (in which case we want to revert the
		 * labels).
		 */
		VERIFY(nvlist_alloc(&label, NV_UNIQUE_NAME, KM_SLEEP) == 0);

		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_VERSION,
		    spa_version(spa)) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_POOL_STATE,
		    POOL_STATE_SPARE) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_GUID,
		    vd->vdev_guid) == 0);
	} else if (reason == VDEV_LABEL_L2CACHE ||
	    (reason == VDEV_LABEL_REMOVE && vd->vdev_isl2cache)) {
		/*
		 * For level 2 ARC devices, add a special label.
		 */
		VERIFY(nvlist_alloc(&label, NV_UNIQUE_NAME, KM_SLEEP) == 0);

		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_VERSION,
		    spa_version(spa)) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_POOL_STATE,
		    POOL_STATE_L2CACHE) == 0);
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_GUID,
		    vd->vdev_guid) == 0);
	} else {
		uint64_t txg = 0ULL;

		if (reason == VDEV_LABEL_SPLIT)
			txg = spa->spa_uberblock.ub_txg;
		label = spa_config_generate(spa, vd, txg, B_FALSE);

		/*
		 * Add our creation time.  This allows us to detect multiple
		 * vdev uses as described above, and automatically expires if
		 * we fail.
		 */
		VERIFY(nvlist_add_uint64(label, ZPOOL_CONFIG_CREATE_TXG,
		    crtxg) == 0);
	}

	buf = vp->vp_nvlist;
	buflen = sizeof (vp->vp_nvlist);

	error = nvlist_pack(label, &buf, &buflen, NV_ENCODE_XDR, KM_SLEEP);
	if (error != 0) {
		nvlist_free(label);
		abd_free(vp_abd);
		/* EFAULT means nvlist_pack ran out of room */
		return (error == EFAULT ? ENAMETOOLONG : EINVAL);
	}

	/*
	 * Initialize uberblock template.
	 */
	ub_abd = abd_alloc_linear(VDEV_UBERBLOCK_RING, B_TRUE);
	abd_zero(ub_abd, VDEV_UBERBLOCK_RING);
	abd_copy_from_buf(ub_abd, &spa->spa_uberblock, sizeof (uberblock_t));
	ub = abd_to_buf(ub_abd);
	ub->ub_txg = 0;

	/* Initialize the 2nd padding area. */
	pad2 = abd_alloc_for_io(VDEV_PAD_SIZE, B_TRUE);
	abd_zero(pad2, VDEV_PAD_SIZE);

	/*
	 * Write everything in parallel.
	 */
retry:
	zio = zio_root(spa, NULL, NULL, flags);

	for (int l = 0; l < VDEV_LABELS; l++) {

		vdev_label_write(zio, vd, l, vp_abd,
		    offsetof(vdev_label_t, vl_vdev_phys),
		    sizeof (vdev_phys_t), NULL, NULL, flags);

		/*
		 * Skip the 1st padding area.
		 * Zero out the 2nd padding area where it might have
		 * leftover data from a previous filesystem format.
		 */
		vdev_label_write(zio, vd, l, pad2,
		    offsetof(vdev_label_t, vl_pad2),
		    VDEV_PAD_SIZE, NULL, NULL, flags);

		vdev_label_write(zio, vd, l, ub_abd,
		    offsetof(vdev_label_t, vl_uberblock),
		    VDEV_UBERBLOCK_RING, NULL, NULL, flags);
	}

	error = zio_wait(zio);

	if (error != 0 && !(flags & ZIO_FLAG_TRYHARD)) {
		flags |= ZIO_FLAG_TRYHARD;
		goto retry;
	}

	nvlist_free(label);
	abd_free(pad2);
	abd_free(ub_abd);
	abd_free(vp_abd);

	/*
	 * If this vdev hasn't been previously identified as a spare, then we
	 * mark it as such only if a) we are labeling it as a spare, or b) it
	 * exists as a spare elsewhere in the system.  Do the same for
	 * level 2 ARC devices.
	 */
	if (error == 0 && !vd->vdev_isspare &&
	    (reason == VDEV_LABEL_SPARE ||
	    spa_spare_exists(vd->vdev_guid, NULL, NULL)))
		spa_spare_add(vd);

	if (error == 0 && !vd->vdev_isl2cache &&
	    (reason == VDEV_LABEL_L2CACHE ||
	    spa_l2cache_exists(vd->vdev_guid, NULL)))
		spa_l2cache_add(vd);

	return (error);
}
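
/*
 * vdev_label_init() is driven from vdev_create() (pool creation and device
 * add/attach); the recursion above reaches every leaf of the new subtree,
 * and the sequential traversal is what catches a disk mirrored with
 * itself, as noted in the block comment above.
 */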

/*
 * ==========================================================================
 * uberblock load/sync
 * ==========================================================================
 */

/*
 * Consider the following situation: txg is safely synced to disk.  We've
 * written the first uberblock for txg + 1, and then we lose power.  When we
 * come back up, we fail to see the uberblock for txg + 1 because, say,
 * it was on a mirrored device and the replica to which we wrote txg + 1
 * is now offline.  If we then make some changes and sync txg + 1, and then
 * the missing replica comes back, then for a few seconds we'll have two
 * conflicting uberblocks on disk with the same txg.  The solution is simple:
 * among uberblocks with equal txg, choose the one with the latest timestamp.
 */
static int
vdev_uberblock_compare(const uberblock_t *ub1, const uberblock_t *ub2)
{
	int cmp = TREE_CMP(ub1->ub_txg, ub2->ub_txg);

	if (likely(cmp))
		return (cmp);

	cmp = TREE_CMP(ub1->ub_timestamp, ub2->ub_timestamp);
	if (likely(cmp))
		return (cmp);

	/*
	 * If MMP_VALID(ub) && MMP_SEQ_VALID(ub) then the host has an MMP-aware
	 * ZFS, e.g. zfsonlinux >= 0.7.
	 *
	 * If one ub has MMP and the other does not, they were written by
	 * different hosts, which matters for MMP.  So we treat no MMP/no SEQ as
	 * a 0 value.
	 *
	 * Since timestamp and txg are the same if we get this far, either is
	 * acceptable for importing the pool.
	 */
	unsigned int seq1 = 0;
	unsigned int seq2 = 0;

	if (MMP_VALID(ub1) && MMP_SEQ_VALID(ub1))
		seq1 = MMP_SEQ(ub1);

	if (MMP_VALID(ub2) && MMP_SEQ_VALID(ub2))
		seq2 = MMP_SEQ(ub2);

	return (TREE_CMP(seq1, seq2));
}
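
/*
 * For example, two uberblocks at the same txg with equal timestamps, where
 * one carries MMP sequence 7 and the other was written by a non-MMP host
 * (treated as sequence 0), compare in favor of the MMP writer; as noted
 * above, either would be acceptable for import.
 */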

struct ubl_cbdata {
	uberblock_t	*ubl_ubbest;	/* Best uberblock */
	vdev_t		*ubl_vd;	/* vdev associated with the above */
};

static void
vdev_uberblock_load_done(zio_t *zio)
{
	vdev_t *vd = zio->io_vd;
	spa_t *spa = zio->io_spa;
	zio_t *rio = zio->io_private;
	uberblock_t *ub = abd_to_buf(zio->io_abd);
	struct ubl_cbdata *cbp = rio->io_private;

	ASSERT3U(zio->io_size, ==, VDEV_UBERBLOCK_SIZE(vd));

	if (zio->io_error == 0 && uberblock_verify(ub) == 0) {
		mutex_enter(&rio->io_lock);
		if (ub->ub_txg <= spa->spa_load_max_txg &&
		    vdev_uberblock_compare(ub, cbp->ubl_ubbest) > 0) {
			/*
			 * Keep track of the vdev in which this uberblock
			 * was found.  We will use this information later
			 * to obtain the config nvlist associated with
			 * this uberblock.
			 */
			*cbp->ubl_ubbest = *ub;
			cbp->ubl_vd = vd;
		}
		mutex_exit(&rio->io_lock);
	}

	abd_free(zio->io_abd);
}

static void
vdev_uberblock_load_impl(zio_t *zio, vdev_t *vd, int flags,
    struct ubl_cbdata *cbp)
{
	for (int c = 0; c < vd->vdev_children; c++)
		vdev_uberblock_load_impl(zio, vd->vdev_child[c], flags, cbp);

	if (vd->vdev_ops->vdev_op_leaf && vdev_readable(vd)) {
		for (int l = 0; l < VDEV_LABELS; l++) {
			for (int n = 0; n < VDEV_UBERBLOCK_COUNT(vd); n++) {
				vdev_label_read(zio, vd, l,
				    abd_alloc_linear(VDEV_UBERBLOCK_SIZE(vd),
				    B_TRUE), VDEV_UBERBLOCK_OFFSET(vd, n),
				    VDEV_UBERBLOCK_SIZE(vd),
				    vdev_uberblock_load_done, zio, flags);
			}
		}
	}
}

/*
 * Reads the 'best' uberblock from disk along with its associated
 * configuration.  First, we read the uberblock array of each label of each
 * vdev, keeping track of the uberblock with the highest txg in each array.
 * Then, we read the configuration from the same vdev as the best uberblock.
 */
void
vdev_uberblock_load(vdev_t *rvd, uberblock_t *ub, nvlist_t **config)
{
	zio_t *zio;
	spa_t *spa = rvd->vdev_spa;
	struct ubl_cbdata cb;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_SPECULATIVE | ZIO_FLAG_TRYHARD;

	ASSERT(ub);
	ASSERT(config);

	bzero(ub, sizeof (uberblock_t));
	*config = NULL;

	cb.ubl_ubbest = ub;
	cb.ubl_vd = NULL;

	spa_config_enter(spa, SCL_ALL, FTAG, RW_WRITER);
	zio = zio_root(spa, NULL, &cb, flags);
	vdev_uberblock_load_impl(zio, rvd, flags, &cb);
	(void) zio_wait(zio);

	/*
	 * It's possible that the best uberblock was discovered on a label
	 * that has a configuration which was written in a future txg.
	 * Search all labels on this vdev to find the configuration that
	 * matches the txg for our uberblock.
	 */
	if (cb.ubl_vd != NULL) {
		vdev_dbgmsg(cb.ubl_vd, "best uberblock found for spa %s. "
		    "txg %llu", spa->spa_name, (u_longlong_t)ub->ub_txg);

		*config = vdev_label_read_config(cb.ubl_vd, ub->ub_txg);
		if (*config == NULL && spa->spa_extreme_rewind) {
			vdev_dbgmsg(cb.ubl_vd, "failed to read label config. "
			    "Trying again without txg restrictions.");
			*config = vdev_label_read_config(cb.ubl_vd, UINT64_MAX);
		}
		if (*config == NULL) {
			vdev_dbgmsg(cb.ubl_vd, "failed to read label config");
		}
	}
	spa_config_exit(spa, SCL_ALL, FTAG);
}

/*
 * On success, increment root zio's count of good writes.
 * We only get credit for writes to known-visible vdevs; see spa_vdev_add().
 */
static void
vdev_uberblock_sync_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (zio->io_error == 0 && zio->io_vd->vdev_top->vdev_ms_array != 0)
		atomic_inc_64(good_writes);
}

/*
 * Write the uberblock to all labels of all leaves of the specified vdev.
 */
static void
vdev_uberblock_sync(zio_t *zio, uint64_t *good_writes,
    uberblock_t *ub, vdev_t *vd, int flags)
{
	for (uint64_t c = 0; c < vd->vdev_children; c++) {
		vdev_uberblock_sync(zio, good_writes,
		    ub, vd->vdev_child[c], flags);
	}

	if (!vd->vdev_ops->vdev_op_leaf)
		return;

	if (!vdev_writeable(vd))
		return;

	int m = spa_multihost(vd->vdev_spa) ? MMP_BLOCKS_PER_LABEL : 0;
	int n = ub->ub_txg % (VDEV_UBERBLOCK_COUNT(vd) - m);

	/* Copy the uberblock_t into the ABD */
	abd_t *ub_abd = abd_alloc_for_io(VDEV_UBERBLOCK_SIZE(vd), B_TRUE);
	abd_zero(ub_abd, VDEV_UBERBLOCK_SIZE(vd));
	abd_copy_from_buf(ub_abd, ub, sizeof (uberblock_t));

	for (int l = 0; l < VDEV_LABELS; l++)
		vdev_label_write(zio, vd, l, ub_abd,
		    VDEV_UBERBLOCK_OFFSET(vd, n), VDEV_UBERBLOCK_SIZE(vd),
		    vdev_uberblock_sync_done, good_writes,
		    flags | ZIO_FLAG_DONT_PROPAGATE);

	abd_free(ub_abd);
}
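
/*
 * Example of the slot selection above: with the minimum 1K uberblock size,
 * the 128K ring holds VDEV_UBERBLOCK_COUNT(vd) == 128 slots, so txg 300
 * lands in slot 300 % 128 == 44 (a larger ashift means fewer, larger
 * slots).  With multihost enabled, the last MMP_BLOCKS_PER_LABEL slots are
 * excluded from the modulus so the MMP thread can use them.
 */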

/* Sync the uberblocks to all vdevs in svd[] */
int
vdev_uberblock_sync_list(vdev_t **svd, int svdcount, uberblock_t *ub, int flags)
{
	spa_t *spa = svd[0]->vdev_spa;
	zio_t *zio;
	uint64_t good_writes = 0;

	zio = zio_root(spa, NULL, NULL, flags);

	for (int v = 0; v < svdcount; v++)
		vdev_uberblock_sync(zio, &good_writes, ub, svd[v], flags);

	(void) zio_wait(zio);

	/*
	 * Flush the uberblocks to disk.  This ensures that the odd labels
	 * are no longer needed (because the new uberblocks and the even
	 * labels are safely on disk), so it is safe to overwrite them.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (int v = 0; v < svdcount; v++) {
		if (vdev_writeable(svd[v])) {
			zio_flush(zio, svd[v]);
		}
	}

	(void) zio_wait(zio);

	return (good_writes >= 1 ? 0 : EIO);
}

/*
 * On success, increment the count of good writes for our top-level vdev.
 */
static void
vdev_label_sync_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (zio->io_error == 0)
		atomic_inc_64(good_writes);
}

/*
 * If there weren't enough good writes, indicate failure to the parent.
 */
static void
vdev_label_sync_top_done(zio_t *zio)
{
	uint64_t *good_writes = zio->io_private;

	if (*good_writes == 0)
		zio->io_error = SET_ERROR(EIO);

	kmem_free(good_writes, sizeof (uint64_t));
}

/*
 * We ignore errors for log and cache devices, simply free the private data.
 */
static void
vdev_label_sync_ignore_done(zio_t *zio)
{
	kmem_free(zio->io_private, sizeof (uint64_t));
}

/*
 * Write all even or odd labels to all leaves of the specified vdev.
 */
static void
vdev_label_sync(zio_t *zio, uint64_t *good_writes,
    vdev_t *vd, int l, uint64_t txg, int flags)
{
	nvlist_t *label;
	vdev_phys_t *vp;
	abd_t *vp_abd;
	char *buf;
	size_t buflen;

	for (int c = 0; c < vd->vdev_children; c++) {
		vdev_label_sync(zio, good_writes,
		    vd->vdev_child[c], l, txg, flags);
	}

	if (!vd->vdev_ops->vdev_op_leaf)
		return;

	if (!vdev_writeable(vd))
		return;

	/*
	 * Generate a label describing the top-level config to which we belong.
	 */
	label = spa_config_generate(vd->vdev_spa, vd, txg, B_FALSE);

	vp_abd = abd_alloc_linear(sizeof (vdev_phys_t), B_TRUE);
	abd_zero(vp_abd, sizeof (vdev_phys_t));
	vp = abd_to_buf(vp_abd);

	buf = vp->vp_nvlist;
	buflen = sizeof (vp->vp_nvlist);

	if (nvlist_pack(label, &buf, &buflen, NV_ENCODE_XDR, KM_SLEEP) == 0) {
		for (; l < VDEV_LABELS; l += 2) {
			vdev_label_write(zio, vd, l, vp_abd,
			    offsetof(vdev_label_t, vl_vdev_phys),
			    sizeof (vdev_phys_t),
			    vdev_label_sync_done, good_writes,
			    flags | ZIO_FLAG_DONT_PROPAGATE);
		}
	}

	abd_free(vp_abd);
	nvlist_free(label);
}
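
/*
 * Note that with VDEV_LABELS == 4, a single call above writes either the
 * even labels (l == 0: L0 and L2) or the odd labels (l == 1: L1 and L3)
 * of every leaf below 'vd'; a caller picks one half per pass, never both.
 */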

int
vdev_label_sync_list(spa_t *spa, int l, uint64_t txg, int flags)
{
	list_t *dl = &spa->spa_config_dirty_list;
	vdev_t *vd;
	zio_t *zio;
	int error;

	/*
	 * Write the new labels to disk.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (vd = list_head(dl); vd != NULL; vd = list_next(dl, vd)) {
		uint64_t *good_writes = kmem_zalloc(sizeof (uint64_t),
		    KM_SLEEP);

		ASSERT(!vd->vdev_ishole);

		zio_t *vio = zio_null(zio, spa, NULL,
		    (vd->vdev_islog || vd->vdev_aux != NULL) ?
		    vdev_label_sync_ignore_done : vdev_label_sync_top_done,
		    good_writes, flags);
		vdev_label_sync(vio, good_writes, vd, l, txg, flags);
		zio_nowait(vio);
	}

	error = zio_wait(zio);

	/*
	 * Flush the new labels to disk.
	 */
	zio = zio_root(spa, NULL, NULL, flags);

	for (vd = list_head(dl); vd != NULL; vd = list_next(dl, vd))
		zio_flush(zio, vd);

	(void) zio_wait(zio);

	return (error);
}
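
/*
 * vdev_config_sync() below drives this twice per config sync: first with
 * l == 0 (even labels, L0 and L2) before the uberblock update, then with
 * l == 1 (odd labels, L1 and L3) after it, yielding the
 * L0/L2 -> uberblock -> L1/L3 ordering described at the top of this file.
 */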

/*
 * Sync the uberblock and any changes to the vdev configuration.
 *
 * The order of operations is carefully crafted to ensure that
 * if the system panics or loses power at any time, the state on disk
 * is still transactionally consistent.  The in-line comments below
 * describe the failure semantics at each stage.
 *
 * Moreover, vdev_config_sync() is designed to be idempotent: if it fails
 * at any time, you can just call it again, and it will resume its work.
 */
int
vdev_config_sync(vdev_t **svd, int svdcount, uint64_t txg)
{
	spa_t *spa = svd[0]->vdev_spa;
	uberblock_t *ub = &spa->spa_uberblock;
	int error = 0;
	int flags = ZIO_FLAG_CONFIG_WRITER | ZIO_FLAG_CANFAIL;

	ASSERT(svdcount != 0);
retry:
	/*
	 * Normally, we don't want to try too hard to write every label and
	 * uberblock.  If there is a flaky disk, we don't want the rest of the
	 * sync process to block while we retry.  But if we can't write a
	 * single label out, we should retry with ZIO_FLAG_TRYHARD before
	 * bailing out and declaring the pool faulted.
	 */
	if (error != 0) {
		if ((flags & ZIO_FLAG_TRYHARD) != 0)
			return (error);
		flags |= ZIO_FLAG_TRYHARD;
	}

	ASSERT(ub->ub_txg <= txg);

	/*
	 * If this isn't a resync due to I/O errors,
	 * and nothing changed in this transaction group,
	 * and the vdev configuration hasn't changed,
	 * then there's nothing to do.
	 */
	if (ub->ub_txg < txg) {
		boolean_t changed = uberblock_update(ub, spa->spa_root_vdev,
		    txg, spa->spa_mmp.mmp_delay);

		if (!changed && list_is_empty(&spa->spa_config_dirty_list))
			return (0);
	}

	if (txg > spa_freeze_txg(spa))
		return (0);

	ASSERT(txg <= spa->spa_final_txg);

	/*
	 * Flush the write cache of every disk that's been written to
	 * in this transaction group.  This ensures that all blocks
	 * written in this txg will be committed to stable storage
	 * before any uberblock that references them.
	 */
	zio_t *zio = zio_root(spa, NULL, NULL, flags);

	for (vdev_t *vd =
	    txg_list_head(&spa->spa_vdev_txg_list, TXG_CLEAN(txg)); vd != NULL;
	    vd = txg_list_next(&spa->spa_vdev_txg_list, vd, TXG_CLEAN(txg)))
		zio_flush(zio, vd);

	(void) zio_wait(zio);

	/*
	 * Sync out the even labels (L0, L2) for every dirty vdev.  If the
	 * system dies in the middle of this process, that's OK: all of the
	 * even labels that made it to disk will be newer than any uberblock,
	 * and will therefore be considered invalid.  The odd labels (L1, L3),
	 * which have not yet been touched, will still be valid.  We flush
	 * the new labels to disk to ensure that all even-label updates
	 * are committed to stable storage before the uberblock update.
	 */
	if ((error = vdev_label_sync_list(spa, 0, txg, flags)) != 0) {
		if ((flags & ZIO_FLAG_TRYHARD) != 0) {
			zfs_dbgmsg("vdev_label_sync_list() returned error %d "
			    "for pool '%s' when syncing out the even labels "
			    "of dirty vdevs", error, spa_name(spa));
		}
		goto retry;
	}

	/*
	 * Sync the uberblocks to all vdevs in svd[].
	 * If the system dies in the middle of this step, there are two cases
	 * to consider, and the on-disk state is consistent either way:
	 *
	 * (1)	If none of the new uberblocks made it to disk, then the
	 *	previous uberblock will be the newest, and the odd labels
	 *	(which had not yet been touched) will be valid with respect
	 *	to that uberblock.
	 *
	 * (2)	If one or more new uberblocks made it to disk, then they
	 *	will be the newest, and the even labels (which had all
	 *	been successfully committed) will be valid with respect
	 *	to the new uberblocks.
	 */
	if ((error = vdev_uberblock_sync_list(svd, svdcount, ub, flags)) != 0) {
		if ((flags & ZIO_FLAG_TRYHARD) != 0) {
			zfs_dbgmsg("vdev_uberblock_sync_list() returned error "
			    "%d for pool '%s'", error, spa_name(spa));
		}
		goto retry;
	}

	if (spa_multihost(spa))
		mmp_update_uberblock(spa, ub);

	/*
	 * Sync out odd labels for every dirty vdev.  If the system dies
	 * in the middle of this process, the even labels and the new
	 * uberblocks will suffice to open the pool.  The next time
	 * the pool is opened, the first thing we'll do -- before any
	 * user data is modified -- is mark every vdev dirty so that
	 * all labels will be brought up to date.  We flush the new labels
	 * to disk to ensure that all odd-label updates are committed to
	 * stable storage before the next transaction group begins.
	 */
	if ((error = vdev_label_sync_list(spa, 1, txg, flags)) != 0) {
		if ((flags & ZIO_FLAG_TRYHARD) != 0) {
			zfs_dbgmsg("vdev_label_sync_list() returned error %d "
			    "for pool '%s' when syncing out the odd labels of "
			    "dirty vdevs", error, spa_name(spa));
		}
		goto retry;
	}

	return (0);
}