Lines Matching +full:block +full:- +full:diagram
1 // SPDX-License-Identifier: CDDL-1.0
10 * or https://opensource.org/licenses/CDDL-1.0.
66 TRACE_ALLOC_FAILURE = -1ULL,
67 TRACE_TOO_SMALL = -2ULL,
68 TRACE_FORCE_GANG = -3ULL,
69 TRACE_NOT_ALLOCATABLE = -4ULL,
70 TRACE_GROUP_FAILURE = -5ULL,
71 TRACE_ENOSPC = -6ULL,
72 TRACE_CONDENSING = -7ULL,
73 TRACE_VDEV_ERROR = -8ULL,
74 TRACE_DISABLED = -9ULL,
93 * comprised, but we cannot always use it -- legacy pools do not have the
100 * segment-weighted value.
102 * Space-based weight:
105 * +-------+-------+-------+-------+-------+-------+-------+-------+
106 * |PSC1|                  weighted-free space                     |
107 * +-------+-------+-------+-------+-------+-------+-------+-------+
109 * PS - indicates primary and secondary activation
110 * C - indicates activation for claimed block zio
111 * space - the fragmentation-weighted space
113 * Segment-based weight:
116 * +-------+-------+-------+-------+-------+-------+-------+-------+
118 * +-------+-------+-------+-------+-------+-------+-------+-------+
120 * PS - indicates primary and secondary activation
121 * C - indicates activation for claimed block zio
122 * idx - index for the highest bucket in the histogram
123 * count - number of segments in the specified bucket
133 * These macros are only applicable to segment-based weighting.
141 * Per-allocator data structure.
165 * A metaslab class encompasses a category of allocatable top-level vdevs.
166 * Each top-level vdev is associated with a metaslab group which defines
168 * "normal" for data block allocations (i.e. main pool allocations) or "log"
170 * When a block allocation is requested from the SPA it is associated with a
171 * metaslab_class_t, and only top-level vdevs (i.e. metaslab groups) belonging
175 * attempted. Allocating a block is a 3-step process -- select the metaslab
176 * group, select the metaslab, and then allocate the block. The metaslab
177 * class defines the low-level block allocator that will be used as the
179 * to use a block allocator that best suits that class.
218 * Per-allocator data structure.
228 * of a top-level vdev. They are linked together to form a circular linked
258 * A metaslab group that can no longer allocate the minimum block
284 * Each metaslab maintains a set of in-core trees to track metaslab
285 * operations. The in-core free tree (ms_allocatable) contains the list of
291 * where it is safe to update the on-disk space maps. An additional set
292 * of in-core trees is maintained to track deferred frees
293 * (ms_defer). Once a block is freed it will move from the
294 * ms_freed to the ms_defer tree. A deferred free means that a block
296 * transaction groups later. For example, a block that is freed in txg
300 * groups and ensure that no block has been reallocated.
302 * The simplified transition diagram looks like this:
308 * free segment (ms_allocatable) -> ms_allocating[4] -> (write to space map)
310 * | ms_freeing <--- FREE
315 * +-------- ms_defer[2] <-------+-------> (write to space map)
323 * To load the in-core free tree we read the space map from disk. This
326 * segments are represented in-core by the ms_allocatable and are stored
330 * eventually become space-inefficient. When the metaslab's in-core
332 * on-disk representation, we rewrite it in its minimized form. If a
352 * write to metaslab data on-disk (i.e. flushing entries to
393 * subset of ms_allocatable.) It's kept in-core as long as the
488 * -1 if it's not active in an allocator, otherwise set to the allocator
492 boolean_t ms_primary; /* Only valid if ms_allocator is not -1 */
495 * The metaslab block allocators can optionally use a size-ordered
508 txg_node_t ms_txg_node; /* per-txg dirty metaslab links */
538 /* on-disk counterpart of ms_unflushed_txg */