
1 // SPDX-License-Identifier: CDDL-1.0
10 * or https://opensource.org/licenses/CDDL-1.0.
66 TRACE_ALLOC_FAILURE = -1ULL,
67 TRACE_TOO_SMALL = -2ULL,
68 TRACE_FORCE_GANG = -3ULL,
69 TRACE_NOT_ALLOCATABLE = -4ULL,
70 TRACE_GROUP_FAILURE = -5ULL,
71 TRACE_ENOSPC = -6ULL,
72 TRACE_CONDENSING = -7ULL,
73 TRACE_VDEV_ERROR = -8ULL,
74 TRACE_DISABLED = -9ULL,
93 * comprised, but we cannot always use it -- legacy pools do not have the
100 * segment-weighted value.
102 * Space-based weight:
105 * +-------+-------+-------+-------+-------+-------+-------+-------+
106 * |PSC1|                  weighted-free space                     |
107 * +-------+-------+-------+-------+-------+-------+-------+-------+
109 * PS - indicates primary and secondary activation
110 * C - indicates activation for claimed block zio
111 * space - the fragmentation-weighted space
113 * Segment-based weight:
116 * +-------+-------+-------+-------+-------+-------+-------+-------+
118 * +-------+-------+-------+-------+-------+-------+-------+-------+
120 * PS - indicates primary and secondary activation
121 * C - indicates activation for claimed block zio
122 * idx - index for the highest bucket in the histogram
123 * count - number of segments in the specified bucket
133 * These macros are only applicable to segment-based weighting.
141 * Per-allocator data structure.
165 * A metaslab class encompasses a category of allocatable top-level vdevs.
166 * Each top-level vdev is associated with a metaslab group which defines
171 * metaslab_class_t, and only top-level vdevs (i.e. metaslab groups) belonging
175 * attempted. Allocating a block is a 3 step process -- select the metaslab
177 * class defines the low-level block allocator that will be used as the
204 uint64_t mc_space; /* total space (alloc + free) */
218 * Per-allocator data structure.
228 * of a top-level vdev. They are linked together to form a circular linked
284 * Each metaslab maintains a set of in-core trees to track metaslab
285 * operations. The in-core free tree (ms_allocatable) contains the list of
291 * where it is safe to update the on-disk space maps. An additional set
292 * of in-core trees is maintained to track deferred frees
308 * free segment (ms_allocatable) -> ms_allocating[4] -> (write to space map)
310 * | ms_freeing <--- FREE
315 * +-------- ms_defer[2] <-------+-------> (write to space map)
323 * To load the in-core free tree we read the space map from disk. This
324 * object contains a series of alloc and free records that are combined
326 * segments are represented in-core by the ms_allocatable and are stored
330 * eventually become space-inefficient. When the metaslab's in-core
332 * on-disk representation, we rewrite it in its minimized form. If a
352 * write to metaslab data on-disk (i.e. flushing entries to
393 * subset of ms_allocatable.) It's kept in-core as long as the
395 * is unloaded. Its purpose is to aggregate freed ranges to
426 * the spacemap histogram, but that includes ranges that are
429 * weight, we need to remove those ranges.
431 * The ranges in the ms_freed and ms_defer[] range trees are all
436 * Adjacent ranges that are freed in different sync passes of
439 * ranges will be consolidated (represented as one entry) in the
451 * ms_synchist represents the same ranges as ms_freeing +
454 * ms_deferhist[i] represents the same ranges as ms_defer[i],
484 uint64_t ms_alloc_txg; /* last successful alloc (debug only) */
488 * -1 if it's not active in an allocator, otherwise set to the allocator
492 boolean_t ms_primary; /* Only valid if ms_allocator is not -1 */
495 * The metaslab block allocators can optionally use a size-ordered
508 txg_node_t ms_txg_node; /* per-txg dirty metaslab links */
538 /* on-disk counterpart of ms_unflushed_txg */