Lines Matching full:buckets

7  * Allocation in bcache is done in terms of buckets:
17 * of buckets on disk, with a pointer to them in the journal header.
25 * batch this up: We fill up the free_inc list with freshly invalidated buckets,
26 * call prio_write(), and when prio_write() finishes we pull buckets off the
31 * smaller freelist, and buckets on that list are always ready to be used.
33 * There is another freelist, because sometimes we have buckets that we know
35 * priorities to be rewritten. These come from freed btree nodes and buckets
37 * them (because they were overwritten). That's the unused list - buckets on the
54 * buckets are ready.
56 * invalidate_buckets_(lru|fifo)() find buckets that are available to be
116 * Background allocation thread: scans for buckets to be invalidated,
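The comment fragments above describe a two-stage freelist: freshly invalidated buckets accumulate on `free_inc`, and only after `prio_write()` persists the new priorities do they move to the small, always-ready `free` list. A minimal userspace sketch of that handoff, assuming fixed-size ring buffers and a stubbed-out `prio_write()` (the real bcache FIFOs and persistence logic are more involved):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define FIFO_SIZE 8

/* Simplified ring buffer standing in for bcache's fifo type. */
struct fifo {
	size_t head, tail;	/* push at head, pop at tail */
	long data[FIFO_SIZE];
};

static bool fifo_push(struct fifo *f, long v)
{
	if (f->head - f->tail == FIFO_SIZE)
		return false;	/* full */
	f->data[f->head++ % FIFO_SIZE] = v;
	return true;
}

static bool fifo_pop(struct fifo *f, long *v)
{
	if (f->head == f->tail)
		return false;	/* empty */
	*v = f->data[f->tail++ % FIFO_SIZE];
	return true;
}

/* Stand-in for prio_write(): in bcache this writes the updated bucket
 * priorities/generations to disk before invalidated buckets may be reused. */
static void prio_write_stub(void) { }

/* Once prio_write() completes, drain free_inc onto the ready-to-use
 * free list, mirroring the batching described in the comments above. */
static void promote_free_inc(struct fifo *free_inc, struct fifo *free_list)
{
	long b;

	prio_write_stub();
	while (fifo_pop(free_inc, &b))
		if (!fifo_push(free_list, b))
			break;	/* free list is deliberately small */
}
```

Buckets popped from `free_list` after this point are safe to hand out, because their invalidation has already been made durable.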
139 trace_bcache_invalidate(ca, b - ca->buckets); in __bch_invalidate_one_bucket()
151 fifo_push(&ca->free_inc, b - ca->buckets); in bch_invalidate_one_bucket()
155 * Determines what order we're going to reuse buckets, smallest bucket_prio()
220 b = ca->buckets + ca->fifo_last_bucket++; in invalidate_buckets_fifo()
246 b = ca->buckets + n; in invalidate_buckets_random()
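The `invalidate_buckets_fifo()` hit above advances a `fifo_last_bucket` cursor through the bucket array. A hedged sketch of that scan, where `reusable[]` is a hypothetical predicate standing in for bcache's real eligibility checks (pin count, generation headroom, and so on); the wrap-around cursor is the part taken from the source:

```c
#include <stddef.h>

/* Scan at most nbuckets entries starting at *fifo_last_bucket, wrapping
 * to zero at the end of the array, and return the first bucket eligible
 * for invalidation (or -1 if none is). The cursor persists across calls,
 * giving the FIFO reuse order the function name promises. */
static long next_fifo_bucket(const int *reusable, size_t nbuckets,
			     size_t *fifo_last_bucket)
{
	size_t checked;

	for (checked = 0; checked < nbuckets; checked++) {
		size_t b;

		if (*fifo_last_bucket >= nbuckets)
			*fifo_last_bucket = 0;	/* wrap around */
		b = (*fifo_last_bucket)++;
		if (reusable[b])
			return (long)b;
	}
	return -1;	/* nothing invalidatable this pass */
}
```

Because the cursor is kept between calls rather than reset, buckets are reused in roughly the order they were last written, which is the point of the FIFO policy (as opposed to the LRU and random variants also matched above).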
319 * First, we pull buckets off of the unused and free_inc lists, in bch_allocator_thread()
334 * We've run out of free buckets, we need to find some buckets in bch_allocator_thread()
433 b = ca->buckets + r; in bch_bucket_alloc()
495 k->ptr[0] = MAKE_PTR(ca->buckets[b].gen, in __bch_bucket_alloc_set()
525 * We keep multiple buckets open for writes, and try to segregate different
536 * dirty sectors mixed with dirty sectors of cached device, such buckets will
543 * data to the same buckets it'd get invalidated at the same time.
667 * k takes refcounts on the buckets it points to until it's inserted in bch_alloc_sectors()
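The `MAKE_PTR(ca->buckets[b].gen, ...)` hit above shows that pointers snapshot the bucket's generation at allocation time; when a bucket is later invalidated its generation is bumped, so old pointers into it can be detected as stale. A minimal sketch of that check, with illustrative struct and function names (not bcache's actual layout):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct bucket {
	uint8_t gen;	/* incremented each time the bucket is invalidated */
};

/* A pointer records which bucket it refers to and the generation the
 * bucket had when the pointer was created. */
struct bucket_ptr {
	size_t bucket;
	uint8_t gen;
};

static struct bucket_ptr make_bucket_ptr(const struct bucket *buckets, size_t b)
{
	return (struct bucket_ptr){ .bucket = b, .gen = buckets[b].gen };
}

/* A pointer is stale iff the bucket has been invalidated (gen bumped)
 * since the pointer was made; stale pointers must not be dereferenced. */
static bool bucket_ptr_stale(const struct bucket *buckets, struct bucket_ptr p)
{
	return buckets[p.bucket].gen != p.gen;
}
```

This is why, per the last fragment above, a key must hold refcounts on the buckets it points to until it is inserted: the refcount prevents the allocator from invalidating the bucket (and bumping its generation) while the pointer is still in flight.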