Lines Matching full:we

28  *   1) space_info.  This is the ultimate arbiter of how much space we can use.
31 * reservations we care about total_bytes - SUM(space_info->bytes_) when
36 * metadata reservation we have. You can see the comment in the block_rsv
40 * 3) btrfs_calc*_size. These are the worst case calculations we use based
41 * on the number of items we will want to modify. We have one for changing
42 * items, and one for inserting new items. Generally we use these helpers to
48 * We call into either btrfs_reserve_data_bytes() or
49 * btrfs_reserve_metadata_bytes(), depending on which we're looking for, with
50 * num_bytes we want to reserve.
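
A minimal userspace model of the space_info accounting described above; the
field names mirror the SUM(space_info->bytes_) idea from the comment, and it
is an illustrative sketch rather than the kernel structure or its locking.

#include <stdbool.h>
#include <stdint.h>

struct space_info_model {
	uint64_t total_bytes;
	uint64_t bytes_used;
	uint64_t bytes_reserved;
	uint64_t bytes_pinned;
	uint64_t bytes_may_use;
	uint64_t bytes_readonly;
};

/* Space not yet claimed by any of the bytes_ counters. */
static uint64_t unused_bytes(const struct space_info_model *si)
{
	uint64_t used = si->bytes_used + si->bytes_reserved +
			si->bytes_pinned + si->bytes_may_use +
			si->bytes_readonly;

	return used >= si->total_bytes ? 0 : si->total_bytes - used;
}

/* A successful reservation simply consumes bytes_may_use. */
static bool try_reserve(struct space_info_model *si, uint64_t num_bytes)
{
	if (unused_bytes(si) < num_bytes)
		return false;	/* the ticket/flushing machinery takes over */
	si->bytes_may_use += num_bytes;
	return true;
}
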
67 * Assume we are unable to simply make the reservation because we do not have
90 * Check if ->bytes == 0; if it is we got our reservation and we can carry
91 * on, if not return the appropriate error (ENOSPC, but can be EINTR if we
96 * Same as the above, except we add ourselves to the
97 * space_info->priority_tickets, and we do not use ticket->wait, we simply
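
A sketch of the ticket queue mentioned above, assuming a simplified singly
linked list: a ticket is granted when its remaining byte count reaches zero,
and tickets are satisfied strictly in order so a large ticket at the head
blocks the ones behind it. The real struct reserve_ticket, its wait queue and
the normal/priority split live in the btrfs sources; the names here are
illustrative.

#include <stdint.h>

struct ticket_model {
	uint64_t bytes;			/* remaining need; 0 == granted  */
	int error;			/* set instead, e.g. on ENOSPC   */
	struct ticket_model *next;
};

/* Hand newly freed space to queued tickets, strictly in FIFO order. */
static void grant_tickets(struct ticket_model *head, uint64_t *avail)
{
	for (struct ticket_model *t = head; t; t = t->next) {
		if (t->bytes > *avail)
			break;		/* keep ordering: don't skip ahead */
		*avail -= t->bytes;
		t->bytes = 0;		/* ->bytes == 0: wake the waiter   */
	}
}
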
103 * Generally speaking we will have two cases for each state, a "nice" state
104 * and an "ALL THE THINGS" state. In btrfs we delay a lot of work in order to
108 * reclaim space so we can make new reservations.
112 * for example, we would update the inode item at write time to update the
114 * isize or bytes. We keep these delayed items to coalesce these operations
120 * for delayed allocation. We can reclaim some of this space simply by
121 * running delalloc, but usually we need to wait for ordered extents to
125 * We have a block reserve for the outstanding delayed refs space, and every
127 * to reclaim space, but we want to hold this until the end because COW can
128 * churn a lot and we can avoid making some extent tree modifications if we
132 * This state works only in zoned mode. In zoned mode we cannot reuse a
133 * region that was allocated and then freed until we reset the zone, due to
140 * We will skip this the first time through space reservation, because of
141 * overcommit and we don't want to have a lot of useless metadata space when
145 * If we're freeing inodes we're likely freeing checksums, file extent
150 * This will commit the transaction. Historically we had a lot of logic
151 * surrounding whether or not we'd commit the transaction, but this was born
152 * out of a pre-tickets era where we could end up committing the transaction
154 * ticketing system we know if we're not making progress and can error
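
The states described above form an ordered list that the flushers walk
through, mildest first. A rough model of that list, in the order the comment
introduces the states, follows; the names echo enum btrfs_flush_state, but
the exact members and their ordering in a given kernel are defined in
fs/btrfs/space-info.h, so treat this layout as an assumption.

enum flush_state_model {
	MODEL_FLUSH_DELAYED_ITEMS_NR = 1,	/* "nice": flush a few items */
	MODEL_FLUSH_DELAYED_ITEMS,		/* "ALL THE THINGS" variant  */
	MODEL_FLUSH_DELALLOC,
	MODEL_FLUSH_DELALLOC_WAIT,
	MODEL_FLUSH_DELALLOC_FULL,
	MODEL_FLUSH_DELAYED_REFS_NR,
	MODEL_FLUSH_DELAYED_REFS,
	MODEL_RESET_ZONES,			/* zoned mode only           */
	MODEL_ALLOC_CHUNK,
	MODEL_ALLOC_CHUNK_FORCE,
	MODEL_RUN_DELAYED_IPUTS,
	MODEL_COMMIT_TRANS,			/* last resort               */
};
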
160 * Because we hold so many reservations for metadata we will allow you to
165 * You can see the current logic for when we allow overcommit in
186 * after adding space to the filesystem, we need to clear the full flags
364 * "optimal" chunk size based on the fs size. However when we actually in calc_effective_data_chunk_size()
365 * allocate the chunk we will strip this down further, making it no in calc_effective_data_chunk_size()
368 * On the zoned mode, we need to use zone_size (= data_sinfo->chunk_size) in calc_effective_data_chunk_size()
396 * If we have dup, raid1 or raid10 then only half of the free in calc_available_free_space()
398 * doesn't include the parity drive, so we don't have to in calc_available_free_space()
410 * reservation, because we assume that data reservations will == actual in calc_available_free_space()
411 * usage, we could potentially overcommit and then immediately have that in calc_available_free_space()
413 * bind when we get close to filling the file system. in calc_available_free_space()
416 * space. If we are relatively empty this won't affect our ability to in calc_available_free_space()
417 * overcommit much, and if we're very close to full it'll keep us from in calc_available_free_space()
418 * getting into a position where we've given ourselves very little in calc_available_free_space()
426 * If we aren't flushing all things, let us overcommit up to in calc_available_free_space()
427 * 1/2 of the space. If we can flush, don't let us overcommit in calc_available_free_space()
436 * On the zoned mode, we always allocate one zone as one chunk. in calc_available_free_space()
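
Putting the calc_available_free_space() remarks above into a small userspace
model: mirrored profiles only expose half of the unallocated space, and how
much of the remainder we let ourselves overcommit depends on how hard the
caller is allowed to flush. The 1/2 fraction comes from the comment; the
divisor used for the "can flush everything" case is an assumption here.

#include <stdbool.h>
#include <stdint.h>

static uint64_t model_available_free_space(uint64_t unalloc_bytes,
					   bool profile_is_mirrored,
					   bool can_flush_all)
{
	uint64_t avail = unalloc_bytes;

	/* dup/raid1/raid10 store two copies, so only half is usable. */
	if (profile_is_mirrored)
		avail /= 2;

	/*
	 * Limit overcommit: half of what is left if we cannot flush
	 * everything, a smaller slice (assumed 1/8) if we can, since a
	 * flushing caller can afford to wait for reclaim instead.
	 */
	if (can_flush_all)
		avail /= 8;
	else
		avail /= 2;

	return avail;
}
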
478 * This is for space we already have accounted in space_info->bytes_may_use, so
479 * basically when we're returning space from block_rsv's.
644 /* Calc the number of pages we need to flush for space reservation */ in shrink_delalloc()
649 * to_reclaim is set to however much metadata we need to in shrink_delalloc()
651 * exactly. What we really want to do is reclaim full inode's in shrink_delalloc()
653 * here. We will take a fraction of the delalloc bytes for our in shrink_delalloc()
655 * the amount we write to cover an entire dirty extent, which in shrink_delalloc()
667 * If we are doing more ordered than delalloc we need to just wait on in shrink_delalloc()
668 * ordered extents, otherwise we'll waste time trying to flush delalloc in shrink_delalloc()
669 * that likely won't give us the space back we need. in shrink_delalloc()
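
A sketch of the two shrink_delalloc() decisions noted above: per loop we only
target a fraction of the outstanding delalloc, hoping that writeback expands
it to whole dirty extents and releases their metadata reservations, and if
ordered bytes already exceed delalloc bytes we wait on ordered extents
instead of flushing more. The per-loop fraction is an illustrative
assumption.

#include <stdbool.h>
#include <stdint.h>

struct delalloc_state_model {
	uint64_t delalloc_bytes;	/* dirty, not yet written           */
	uint64_t ordered_bytes;		/* written, waiting to be completed */
};

static uint64_t bytes_to_flush_this_loop(const struct delalloc_state_model *s)
{
	return s->delalloc_bytes / 8;	/* assumed fraction per iteration */
}

static bool should_wait_on_ordered(const struct delalloc_state_model *s)
{
	/* More ordered than delalloc: writing more pages won't free space. */
	return s->ordered_bytes > s->delalloc_bytes;
}
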
683 * We need to make sure any outstanding async pages are now in shrink_delalloc()
684 * processed before we continue. This is because things like in shrink_delalloc()
686 * marked clean. We don't use filemap_fdatawrite() for flushing in shrink_delalloc()
687 * because we want to control how many pages we write out at a in shrink_delalloc()
688 * time, thus this is the only safe way to make sure we've in shrink_delalloc()
692 * This exists because we do not want to wait for each in shrink_delalloc()
693 * individual inode to finish its async work, we simply want to in shrink_delalloc()
695 * for all of the async work to catch up. Once we're done with in shrink_delalloc()
696 * that we know we'll have ordered extents for everything and we in shrink_delalloc()
697 * can decide if we wait for that or not. in shrink_delalloc()
699 * If we choose to replace this in the future, make absolutely in shrink_delalloc()
708 * We don't want to wait forever, if we wrote less pages in this in shrink_delalloc()
709 * loop than we have outstanding, only wait for that number of in shrink_delalloc()
710 * pages, otherwise we can wait for all async pages to finish in shrink_delalloc()
731 * If we are flushing for preemption we just want a one-shot of delalloc in shrink_delalloc()
732 * flushing so we can stop flushing if we decide we don't need in shrink_delalloc()
826 * If we have pending delayed iputs then we could free up a in flush_space()
827 * bunch of pinned space, so make sure we run the iputs before in flush_space()
828 * we do our pinned bytes check below. in flush_space()
836 * We don't want to start a new transaction, just attach to the in flush_space()
838 * happening at the moment. Note: we don't use a nostart join in flush_space()
871 * We may be flushing because suddenly we have less space than we had in btrfs_calc_reclaim_metadata_size()
872 * before, and now we're well over-committed based on our current free in btrfs_calc_reclaim_metadata_size()
873 * space. If that's the case add in our overage so we make sure to put in btrfs_calc_reclaim_metadata_size()
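
The overage adjustment described above amounts to a one-line correction: if
our accounted usage already exceeds what the space_info can hold (we were
overcommitted and then lost space), the flusher must reclaim that excess on
top of what the tickets are asking for. A hedged sketch, assuming a simple
used/total pair:

#include <stdint.h>

static uint64_t model_reclaim_target(uint64_t ticket_bytes, uint64_t used,
				     uint64_t total)
{
	uint64_t to_reclaim = ticket_bytes;

	if (used > total)
		to_reclaim += used - total;	/* add in our overage */

	return to_reclaim;
}
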
894 /* If we're just plain full then async reclaim just slows us down. */ in need_preemptive_reclaim()
906 * 128MiB is 1/4 of the maximum global rsv size. If we have less than in need_preemptive_reclaim()
908 * we don't have a lot of things that need flushing. in need_preemptive_reclaim()
914 * We have tickets queued, bail so we don't compete with the async in need_preemptive_reclaim()
921 * If we have over half of the free space occupied by reservations or in need_preemptive_reclaim()
922 * pinned then we want to start flushing. in need_preemptive_reclaim()
924 * We do not do the traditional thing here, which is to say in need_preemptive_reclaim()
929 * because this doesn't quite work how we want. If we had more than 50% in need_preemptive_reclaim()
930 * of the space_info used by bytes_used and we had 0 available we'd just in need_preemptive_reclaim()
931 * constantly run the background flusher. Instead we want it to kick in in need_preemptive_reclaim()
946 * much delalloc we need for the background flusher to kick in. in need_preemptive_reclaim()
960 * If we have more ordered bytes than delalloc bytes then we're either in need_preemptive_reclaim()
961 * doing a lot of DIO, or we simply don't have a lot of delalloc waiting in need_preemptive_reclaim()
965 * nothing, if our reservations are tied up in ordered extents we'll in need_preemptive_reclaim()
970 * block reserves that we would actually be able to directly reclaim in need_preemptive_reclaim()
971 * from. In this case if we're heavy on metadata operations this will in need_preemptive_reclaim()
976 * We want to make sure we truly are maxed out on ordered however, so in need_preemptive_reclaim()
977 * cut ordered in half, and if it's still higher than delalloc then we in need_preemptive_reclaim()
978 * can keep flushing. This is to avoid the case where we start in need_preemptive_reclaim()
979 * flushing, and now delalloc == ordered and we stop preemptively in need_preemptive_reclaim()
980 * flushing when we could still have several gigs of delalloc to flush. in need_preemptive_reclaim()
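
A condensed model of the need_preemptive_reclaim() checks walked through
above. The 128MiB global reserve cutoff, the "over half of the free space"
trigger and the halved-ordered comparison come from the comments; how the
pieces are combined here is a simplification, not the kernel's exact
expression.

#include <stdbool.h>
#include <stdint.h>

struct preempt_model {
	uint64_t free_bytes;		  /* total - SUM(bytes_)            */
	uint64_t pinned_bytes;
	uint64_t may_use_bytes;		  /* all outstanding reservations   */
	uint64_t delayed_refs_rsv_bytes;  /* directly reclaimable reserves  */
	uint64_t global_rsv_bytes;
	uint64_t delalloc_bytes;
	uint64_t ordered_bytes;
	bool tickets_queued;
};

static bool model_need_preemptive_reclaim(const struct preempt_model *m)
{
	const uint64_t min_global_rsv = 128ULL * 1024 * 1024;	/* 128MiB */
	uint64_t thresh = m->free_bytes / 2;
	uint64_t used;

	/* A small global rsv means there is not much worth flushing. */
	if (m->global_rsv_bytes <= min_global_rsv)
		return false;

	/* Don't compete with the ticketed async flusher. */
	if (m->tickets_queued)
		return false;

	/*
	 * Cut ordered in half before comparing with delalloc.  If we are
	 * still ordered-bound, only count the reserves we could reclaim
	 * from directly; otherwise delalloc-backed reservations count too.
	 */
	if (m->ordered_bytes / 2 >= m->delalloc_bytes)
		used = m->pinned_bytes + m->delayed_refs_rsv_bytes;
	else
		used = m->pinned_bytes + m->may_use_bytes;

	/* Kick in once that exceeds half of the remaining free space. */
	return used >= thresh;
}
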
1026 * We've exhausted our flushing, start failing tickets.
1029 * @space_info - the space info we were flushing
1031 * We call this when we've exhausted our flushing ability and haven't made
1033 * order, so if there is a large ticket first and then smaller ones we could
1075 * We're just throwing tickets away, so more flushing may not in maybe_fail_all_tickets()
1076 * trip over btrfs_try_granting_tickets, so we need to call it in maybe_fail_all_tickets()
1077 * here to see if we can make progress with the next ticket in in maybe_fail_all_tickets()
1087 * This is for normal flushers, we can wait all goddamned day if we want to. We
1088 * will loop and continuously try to flush as long as we are making progress.
1089 * We count progress as clearing off tickets each time we have to loop.
1139 * We do not want to empty the system of delalloc unless we're in btrfs_async_reclaim_metadata_space()
1141 * logic before we start doing a FLUSH_DELALLOC_FULL. in btrfs_async_reclaim_metadata_space()
1147 * We don't want to force a chunk allocation until we've tried in btrfs_async_reclaim_metadata_space()
1148 * pretty hard to reclaim space. Think of the case where we in btrfs_async_reclaim_metadata_space()
1150 * to reclaim. We would rather use that than possibly create a in btrfs_async_reclaim_metadata_space()
1154 * around then we can force a chunk allocation. in btrfs_async_reclaim_metadata_space()
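
The worker loop those comments describe can be modelled as below: walk the
states in order, restart from the mildest state whenever a pass clears
tickets off the list (that is what counts as progress), hold the
FLUSH_DELALLOC_FULL / ALLOC_CHUNK_FORCE style states back until we have
looped at least once, and eventually give up and fail the remaining tickets
(see maybe_fail_all_tickets above). The state numbers and the give-up cutoff
are illustrative assumptions.

#define MODEL_FIRST_STATE	1
#define MODEL_LAST_STATE	11	/* assumed: commit the transaction */
#define MODEL_DELALLOC_FULL	5	/* assumed positions of the two    */
#define MODEL_CHUNK_FORCE	9	/* states held back on cycle 0     */

struct reclaim_model {
	int tickets;					/* tickets still waiting */
	void (*run_state)(struct reclaim_model *m, int state);
};

static void model_async_reclaim(struct reclaim_model *m)
{
	int state = MODEL_FIRST_STATE;
	int cycles = 0;

	while (m->tickets > 0) {
		int before = m->tickets;

		/* Hold the heaviest states back until we've looped once. */
		if ((state == MODEL_DELALLOC_FULL ||
		     state == MODEL_CHUNK_FORCE) && cycles == 0) {
			state++;
			continue;
		}

		m->run_state(m, state);

		if (m->tickets < before) {
			state = MODEL_FIRST_STATE;	/* progress: start over */
			continue;
		}

		if (state++ == MODEL_LAST_STATE) {
			state = MODEL_FIRST_STATE;
			if (++cycles > 2) {
				m->tickets = 0;	/* give up, fail the tickets */
				break;
			}
		}
	}
}
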
1177 * This handles pre-flushing of metadata space before we get to the point that
1178 * we need to start blocking threads on tickets. The logic here is different
1180 * much we need to flush, instead it attempts to keep us below the 80% full
1212 * We don't have a precise counter for the metadata being in btrfs_preempt_reclaim_metadata_space()
1213 * reserved for delalloc, so we'll approximate it by subtracting in btrfs_preempt_reclaim_metadata_space()
1215 * amount is higher than the individual reserves, then we can in btrfs_preempt_reclaim_metadata_space()
1226 * We don't want to include the global_rsv in our calculation, in btrfs_preempt_reclaim_metadata_space()
1227 * because that's space we can't touch. Subtract it from the in btrfs_preempt_reclaim_metadata_space()
1233 * We really want to avoid flushing delalloc too much, as it in btrfs_preempt_reclaim_metadata_space()
1257 * We don't want to reclaim everything, just a portion, so scale in btrfs_preempt_reclaim_metadata_space()
1269 /* We only went through once, back off our clamping. */ in btrfs_preempt_reclaim_metadata_space()
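
A sketch of the sizing logic referenced above: estimate the delalloc-backed
metadata reservations by subtraction, keep the untouchable global reserve
out of the calculation, and only ask for a portion of the result per pass.
The named reserves and the 1/4 fraction are assumptions used for
illustration.

#include <stdint.h>

struct preempt_sizes {
	uint64_t bytes_may_use;		/* all outstanding reservations */
	uint64_t global_rsv;		/* space we can't touch         */
	uint64_t delayed_items_rsv;
	uint64_t delayed_refs_rsv;
};

static uint64_t model_preempt_to_reclaim(const struct preempt_sizes *s)
{
	uint64_t named = s->global_rsv + s->delayed_items_rsv +
			 s->delayed_refs_rsv;
	uint64_t delalloc_est;

	/*
	 * There is no precise counter for metadata reserved for delalloc,
	 * so approximate it as "everything that is not one of the named
	 * reserves above".
	 */
	delalloc_est = s->bytes_may_use > named ? s->bytes_may_use - named : 0;

	/* We don't want to reclaim everything, just a portion per pass. */
	return delalloc_est / 4;
}
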
1280 * 1) compression is on and we allocate less space than we reserved
1281 * 2) we are overwriting existing space
1288 * For #2 this is trickier. Once the ordered extent runs we will drop the
1289 * extent in the range we are overwriting, which creates a delayed ref for
1294 * If we are freeing inodes, we want to make sure all delayed iputs have
1301 * This is where we reclaim all of the pinned space generated by running the
1305 * This state works only in zoned mode. We scan the unused block group
1309 * For data we start with alloc chunk force, however we could have been full
1311 * so if we now have space to allocate do the force chunk allocation.
1439 * because we may have only satisfied the priority tickets and still in priority_reclaim_metadata_space()
1440 * left non-priority tickets on the list. We would then have in priority_reclaim_metadata_space()
1461 * Attempt to steal from the global rsv if we can, except if the fs was in priority_reclaim_metadata_space()
1464 * success to the caller if we can steal from the global rsv - this is in priority_reclaim_metadata_space()
1477 * We must run try_granting_tickets here because we could be a large in priority_reclaim_metadata_space()
1491 /* We could have been granted before we got here. */ in priority_reclaim_data_space()
1525 * Delete us from the list. After we unlock the space in wait_reserve_ticket()
1526 * info, we don't want the async reclaim job to reserve in wait_reserve_ticket()
1554 * @flush: how much we can flush
1594 * Check that we can't have an error set if the reservation succeeded, in handle_reserve_ticket()
1622 * If we're heavy on ordered operations then clamping won't help us. We in maybe_clamp_preempt()
1626 * delayed nodes. If we're already more ordered than delalloc then in maybe_clamp_preempt()
1627 * we're keeping up, otherwise we aren't and should probably clamp. in maybe_clamp_preempt()
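
A minimal model of the maybe_clamp_preempt() rule described above: only
ratchet the preemptive-flushing clamp up when delalloc is outpacing ordered
extents, since flushing harder cannot help an ordered-bound workload. The
cap of 8 is an assumption for illustration.

#include <stdint.h>

static unsigned int model_maybe_clamp(unsigned int clamp,
				      uint64_t ordered_bytes,
				      uint64_t delalloc_bytes)
{
	if (ordered_bytes < delalloc_bytes && clamp < 8)
		clamp++;	/* not keeping up: flush more eagerly */
	return clamp;
}
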
1653 * @space_info: space info we want to allocate from
1654 * @orig_bytes: number of bytes we want
1655 * @flush: whether or not we can flush to make our reservation
1679 * BTRFS_RESERVE_FLUSH_EVICT, as we could deadlock because those in __reserve_bytes()
1698 * We don't want NO_FLUSH allocations to jump everybody, they can in __reserve_bytes()
1709 * Carry on if we have enough space (short-circuit) OR call in __reserve_bytes()
1710 * can_overcommit() to ensure we can overcommit to continue. in __reserve_bytes()
1720 * Things are dire, we need to make a reservation so we don't abort. We in __reserve_bytes()
1721 * will let this reservation go through as long as we have actual space in __reserve_bytes()
1733 * If we couldn't make a reservation then setup our reservation ticket in __reserve_bytes()
1736 * If we are a priority flusher then we just need to add our ticket to in __reserve_bytes()
1737 * the list and we will do our own flushing further down. in __reserve_bytes()
1754 * We were forced to add a reserve ticket, so in __reserve_bytes()
1775 * We will do the space reservation dance during log replay, in __reserve_bytes()
1776 * which means we won't have fs_info->fs_root set, so don't do in __reserve_bytes()
1777 * the async reclaim as we will panic. in __reserve_bytes()
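
A condensed model of the __reserve_bytes() flow the comments above walk
through: grant on the spot if the bytes fit or we may overcommit (and nobody
is already queued, so NO_FLUSH callers cannot jump the line), otherwise the
caller gets a ticket, and normal flushers additionally wake the async
reclaim worker unless we are in log replay. The names and return convention
are illustrative; overcommit_room stands in for the available-space
calculation sketched earlier.

#include <stdbool.h>
#include <stdint.h>

enum model_flush { MODEL_NO_FLUSH, MODEL_FLUSH_PRIORITY, MODEL_FLUSH_ALL };

struct reserve_model {
	uint64_t total;
	uint64_t used;			/* SUM(space_info->bytes_)       */
	uint64_t overcommit_room;	/* depends on the flush mode     */
	bool pending_tickets;		/* someone is already waiting    */
	bool log_replay;		/* fs_root is not set up yet     */
	bool async_kicked;		/* models waking the flusher     */
};

/* Returns 0 on success, -1 if the caller must queue a ticket and wait. */
static int model_reserve_bytes(struct reserve_model *m, uint64_t bytes,
			       enum model_flush flush)
{
	bool fits = m->used + bytes <= m->total;
	bool overcommit = bytes <= m->overcommit_room;

	/* Short-circuit: enough space, or we are allowed to overcommit. */
	if (!m->pending_tickets && (fits || overcommit)) {
		m->used += bytes;
		return 0;
	}

	if (flush == MODEL_NO_FLUSH)
		return -1;		/* nothing more we may do */

	/*
	 * Priority flushers flush for themselves further down; normal
	 * flushers queue a ticket and kick the async worker, except
	 * during log replay where fs_root is not available.
	 */
	if (flush == MODEL_FLUSH_ALL && !m->log_replay)
		m->async_kicked = true;

	return -1;			/* wait on the ticket */
}
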
1800 * @space_info: the space_info we're allocating for
1801 * @orig_bytes: number of bytes we want
1802 * @flush: whether or not we can flush to make our reservation
1833 * @bytes: number of bytes we need
1834 * @flush: how we are allowed to flush
1837 * space then we will attempt to flush space as specified by flush.
1860 /* Dump all the space infos when we abort a transaction due to ENOSPC. */
1884 /* It's df, we don't care if it's racy */ in btrfs_account_ro_block_groups_free_space()
1929 * If we claw this back repeatedly, we can still achieve efficient
1951 * Furthermore, we want to avoid doing too much reclaim even if there are good
1953 * holes with writes. So we want to do just enough reclaim to try and stay
1958 * - ratchet up the intensity of reclaim depending on how far we are from
1975 /* If we have no unused space, don't bother, it won't work anyway. */ in calc_dynamic_reclaim_threshold()
1993 * Under "urgent" reclaim, we will reclaim even fresh block groups that have
1994 * recently seen successful allocations, as we are desperate to reclaim
1995 * whatever we can to avoid ENOSPC in a transaction leading to a readonly fs.
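
A sketch of the dynamic block group reclaim threshold those comments
describe: intensity ratchets up as unallocated space shrinks, we skip the
exercise entirely when there is nothing unallocated to give back, and once
unallocated space is critically low reclaim turns "urgent" and even recently
used block groups become candidates. A block group qualifies when its usage
falls below the threshold, so a higher value means more aggressive reclaim;
the percentages are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>

static int model_reclaim_threshold(uint64_t unalloc, uint64_t total)
{
	if (unalloc == 0)
		return 0;	/* nowhere to put relocated data: disabled */

	/* Ratchet up as unallocated space shrinks. */
	if (unalloc < total / 10)
		return 85;	/* urgent: even fairly full groups qualify */
	if (unalloc < total / 4)
		return 60;
	return 30;		/* plenty of room: only mostly empty groups */
}

static bool model_urgent_reclaim(uint64_t unalloc, uint64_t total)
{
	/* Urgent once unallocated space is a small slice of the device. */
	return unalloc < total / 10;
}
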
2039 * In situations where we are very motivated to reclaim (low unalloc) in do_reclaim_sweep()
2042 * If we have any staler groups, we don't touch the fresher ones, but if we in do_reclaim_sweep()
2130 /* Add to any tickets we may have. */ in btrfs_return_free_space()