Lines Matching defs:vdev

66 /* maximum scrub/resilver I/O queue per leaf vdev */
70 * When a vdev is added, it will be divided into approximately (but no
76 * Given a vdev type, return the appropriate ops vector.
110 * the vdev's asize rounded to the nearest metaslab. This allows us to
127 * The top-level vdev just returns the allocatable size rounded
134 * The allocatable space for a raidz vdev is N * sizeof(smallest child),
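One way to read the sizing rules at 110-134, sketched as standalone C. This only illustrates the rule stated above (a raidz vdev's allocatable space is the smallest child's size times the number of children, and the top-level vdev rounds down to a metaslab boundary); the helper names and the metaslab-shift parameter are hypothetical, not taken from the source.

    #include <stdint.h>

    /* Hypothetical: allocatable space of a raidz-like vdev, per the rule above. */
    static uint64_t
    example_raidz_asize(const uint64_t *child_asize, int children)
    {
            uint64_t smallest = child_asize[0];

            for (int c = 1; c < children; c++)
                    if (child_asize[c] < smallest)
                            smallest = child_asize[c];

            return (smallest * (uint64_t)children);
    }

    /* Hypothetical: a top-level vdev returns its size rounded down to a metaslab. */
    static uint64_t
    example_round_to_metaslab(uint64_t asize, int ms_shift)
    {
            return ((asize >> ms_shift) << ms_shift);
    }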
153 vdev_lookup_top(spa_t *spa, uint64_t vdev)
159 if (vdev < rvd->vdev_children) {
160 ASSERT(rvd->vdev_child[vdev] != NULL);
161 return (rvd->vdev_child[vdev]);
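The fragments at 153-161 outline vdev_lookup_top(). A minimal reconstruction, assuming the usual config-lock assertion and a NULL return for an out-of-range index, neither of which appears in the matched lines:

    vdev_t *
    vdev_lookup_top(spa_t *spa, uint64_t vdev)
    {
            vdev_t *rvd = spa->spa_root_vdev;

            /* assumed: caller holds the config lock as reader */
            ASSERT(spa_config_held(spa, SCL_ALL, RW_READER) != 0);

            if (vdev < rvd->vdev_children) {
                    ASSERT(rvd->vdev_child[vdev] != NULL);
                    return (rvd->vdev_child[vdev]);
            }

            return (NULL);    /* assumed: no such top-level vdev */
    }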
327 * The root vdev's guid will also be the pool guid,
333 * Any other vdev's guid must be unique within the pool.
358 offsetof(struct vdev, vdev_dtl_node));
367 * Allocate a new vdev. The 'alloctype' is used to control whether we are
368 * creating a new vdev or loading an existing one - the behavior is slightly
389 * If this is a load, get the vdev guid from the nvlist.
413 * The first allocated vdev must be of type 'root'.
419 * Determine whether we're a log vdev.
501 * Retrieve the vdev creation time.
507 * If we're a top-level vdev, try to load the allocation parameters.
531 * If we're a leaf vdev, try to load the DTL object and other state.
600 * vdev_free() implies closing the vdev first. This is simpler than
630 * Remove this vdev from its parent's child list.
637 * Clean up vdev structure.
678 * Transfer top-level vdev state from svd to tvd.
756 * Add a mirror/replacing vdev above an existing vdev.
789 * Remove a 1-way mirror/replacing vdev from the tree.
809 * If cvd will replace mvd as a top-level vdev, preserve mvd's guid.
812 * instead of a different version of the same top-level vdev.
845 * This vdev is not being allocated from yet or is a hole.
894 * If the vdev is being removed we don't activate
989 * vdev label but the first, which we leave alone in case it contains
1010 * this vdev will become parents of the probe io.
1048 * We can't change the vdev state in this context, so we
1157 * If this vdev is not removed, check its fault status. If it's
1177 * the vdev on error.
1197 * the vdev is accessible. If we're faulted, bail.
1303 * vdev open for business.
1324 * If a leaf vdev has a DTL, and seems healthy, then kick off a
1342 * to all of the vdev labels, but not the cached config. The strict check
1347 * /etc/zfs/zpool.cache was readonly at the time. Otherwise, the vdev state
1380 * Determine if this vdev has been split off into another
1406 * If this vdev just became a top-level vdev because its
1408 * vdev guid -- but the label may or may not be on disk yet.
1410 * same top guid, so if we're a top-level vdev, we can
1413 * If we split this vdev off instead, then we also check the
1414 * original pool's guid. We don't want to consider the vdev
1449 * If we were able to open and validate a vdev that was
1538 /* set the reopening flag unless we're taking the vdev offline */
1559 * Reassess parent vdev's health.
1598 * Aim for roughly metaslabs_per_vdev (default 200) metaslabs per vdev.
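A sketch of what line 1598 likely amounts to: pick a power-of-two metaslab size so that asize divided by it lands near metaslabs_per_vdev (200). The loop below stands in for whatever bit-scan helper the source uses, and the source presumably also clamps the shift to a floor; treat both as assumptions.

    uint64_t metaslabs_per_vdev = 200;    /* tunable named in the comment above */

    static void
    example_metaslab_set_size(vdev_t *vd)    /* assumes sys/vdev_impl.h */
    {
            uint64_t target = vd->vdev_asize / metaslabs_per_vdev;
            int shift = 0;

            /* largest shift with 2^shift <= target, i.e. roughly 200 metaslabs */
            while (target >> (shift + 1) != 0)
                    shift++;

            vd->vdev_ms_shift = shift;
    }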
1634 * A vdev's DTL (dirty time log) is the set of transaction groups for which
1635 * the vdev has less than perfect replication. There are four kinds of DTL:
1637 * DTL_MISSING: txgs for which the vdev has no valid copies of the data
1655 * A vdev's DTL_PARTIAL is the union of its children's DTL_PARTIALs, because
1657 * A vdev's DTL_MISSING is a modified union of its children's DTL_MISSINGs,
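For reference, the four DTL kinds that line 1635 refers to. Only DTL_MISSING is named in the matched lines; the other three identifiers below are recalled from vdev_impl.h and the glosses are paraphrased, so verify them against the header:

    typedef enum vdev_dtl_type {
            DTL_MISSING,    /* txgs with no valid copies of the data (line 1637) */
            DTL_PARTIAL,    /* txgs with data present but not fully replicated */
            DTL_SCRUB,      /* txgs tracked while scrub/resilver repairs DTL_MISSING */
            DTL_OUTAGE,     /* txgs during which the device could not be read */
            DTL_TYPES
    } vdev_dtl_type_t;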
1747 * Determine if a resilvering vdev should remove any DTL entries from
1748 * its range. If the vdev was resilvering for the entire duration of the
1750 * vdev is considered partially resilvered and should leave its DTL
1809 * if this vdev should remove any DTLs. We only want to
1857 * If the vdev was resilvering and no longer has any
2012 * Determine whether the specified vdev can be offlined/detached/removed
2030 * whether this results in any DTL outages in the top-level vdev.
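One reading of the check described at 2012-2030, assuming the code temporarily treats the leaf as unreadable, reassesses DTLs, and then asks whether the top-level vdev would have a DTL_OUTAGE. The function name is hypothetical and the vdev_dtl_reassess() argument shape is approximate; only the overall pattern is suggested by the matched lines.

    static boolean_t
    example_dtl_required(vdev_t *vd)    /* assumes sys/vdev_impl.h */
    {
            vdev_t *tvd = vd->vdev_top;
            boolean_t required;

            /* pretend the leaf is gone, then look for an outage at the top level */
            vd->vdev_cant_read = B_TRUE;
            vdev_dtl_reassess(tvd, 0, 0, B_FALSE);
            required = !vdev_dtl_empty(tvd, DTL_OUTAGE);

            /* undo the experiment */
            vd->vdev_cant_read = B_FALSE;
            vdev_dtl_reassess(tvd, 0, 0, B_FALSE);

            return (required);
    }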
2095 * If this is a top-level vdev, initialize its metaslabs.
2104 * If this is a leaf vdev, load its DTL.
2112 * The special vdev case is used for hot spares and l2cache devices. Its
2113 * sole purpose is to set the vdev state for the associated vdev. To do this,
2176 * If the metaslab was not loaded when the vdev
2241 * Remove the metadata associated with this vdev once it's empty.
2264 * Mark the given vdev faulted. A faulted vdev behaves as if the device could
2299 * back off and simply mark the vdev as degraded instead.
2319 * Mark the given vdev degraded. A degraded vdev is purely an indication to the
2320 * user that something is wrong. The vdev continues to operate as normal as far
2337 * If the vdev is already faulted, then don't do anything.
2351 * Online the given vdev.
2454 * then proceed. We check that the vdev's metaslab group
2456 * added this vdev but not yet initialized its metaslabs.
2484 * Offline this device and reopen its top-level vdev.
2485 * If the top-level vdev is a log device then just offline
2487 * vdev becoming unusable, undo it and fail the request.
2525 * Clear the error counts associated with this vdev. Unlike vdev_online() and
2549 * also mark the vdev config dirty, so that the new faulted state is
2625 * the proper locks. Note that we have to get the vdev state
2651 * Get statistics for the given vdev.
2674 * If we're getting stats on the root vdev, aggregate the I/O counts
2740 * (Holes never create vdev children, so all the counters
2746 * one top-level vdev does not imply a root-level error.
2856 * Update the in-core space usage stats for this vdev, its metaslab class,
2857 * and the root vdev.
2873 * factor. We must calculate this here and not at the root vdev
2874 * because the root vdev's psize-to-asize is simply the max of its
2906 * Mark a top-level vdev's config as dirty, placing it on the dirty list
2907 * so that it will be written out next time the vdev configuration is synced.
2908 * If the root vdev is specified (vdev_top == NULL), dirty all top-level vdevs.
2920 * If this is an aux vdev (as with l2cache and spare devices), then we
2921 * update the vdev config manually and set the sync flag.
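The root-vdev convention at 2906-2908 presumably reduces to the recursion sketched below; the list handling is simplified and the aux-vdev path mentioned at 2920-2921 is omitted, so take the field and list names as assumptions beyond what the matched lines show.

    static void
    example_config_dirty(vdev_t *vd)    /* assumes sys/spa_impl.h, sys/vdev_impl.h */
    {
            spa_t *spa = vd->vdev_spa;
            vdev_t *rvd = spa->spa_root_vdev;

            if (vd == rvd) {
                    /* root vdev given: dirty every top-level vdev instead */
                    for (uint64_t c = 0; c < rvd->vdev_children; c++)
                            example_config_dirty(rvd->vdev_child[c]);
                    return;
            }

            /* queue this top-level vdev so the next config sync writes it out */
            if (!list_link_active(&vd->vdev_config_dirty_node))
                    list_insert_head(&spa->spa_config_dirty_list, vd);
    }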
2997 * Mark a top-level vdev's state as dirty, so that the next pass of
3038 * Propagate vdev state up from children to parent.
3063 * device, treat the root vdev as if it were
3081 * Root special: if there is a top-level vdev that cannot be
3083 * vdev's aux state as 'corrupt' rather than 'insufficient
3097 * Set a vdev's state. If this is during an open, we don't update the parent
3121 * If we are setting the vdev state to anything but an open state, then
3135 * If we have brought this vdev back into service, we need
3141 * double-check the state of the vdev before repairing it.
3165 * If we fail to open a vdev during an import or recovery, we
3232 * Check the vdev configuration to ensure that it's capable of supporting
3234 * In addition, only a single top-level vdev is allowed.
3258 * Load the state from the original vdev tree (ovd) which
3260 * vdev was offline or faulted then we transfer that state to the
3261 * device in the current vdev tree (nvd).
3277 * Restore the persistent vdev state
3287 * Determine if a log device has valid content. If the vdev was
3306 * Expand a vdev if possible.
3321 * Split a vdev.