Lines Matching full:we
27 * 1) space_info. This is the ultimate arbiter of how much space we can use.
30 * reservations we care about total_bytes - SUM(space_info->bytes_) when
35 * metadata reservation we have. You can see the comment in the block_rsv
39 * 3) btrfs_calc*_size. These are the worst case calculations we use based
40 * on the number of items we will want to modify. We have one for changing
41 * items, and one for inserting new items. Generally we use these helpers to
47 * We call into either btrfs_reserve_data_bytes() or
48 * btrfs_reserve_metadata_bytes(), depending on which we're looking for, with
49 * num_bytes we want to reserve.
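
    As a concrete illustration of the calc-then-reserve flow described above, here is a small
    standalone C sketch. The MAX_LEVEL constant, the helper names and the exact worst-case
    multipliers are simplified stand-ins for the kernel's btrfs_calc_metadata_size() /
    btrfs_calc_insert_metadata_size() and btrfs_reserve_metadata_bytes(), not quotes of the
    real definitions.

        /* Simplified model of the worst-case reservation math; illustrative only. */
        #include <stdint.h>
        #include <stdio.h>

        #define MAX_LEVEL 8     /* assumed bound on tree height, like BTRFS_MAX_LEVEL */

        /* Worst case for changing existing items: one node per level may be COWed. */
        static uint64_t calc_metadata_size(uint64_t nodesize, unsigned int num_items)
        {
                return nodesize * MAX_LEVEL * num_items;
        }

        /* Worst case for inserting new items: assume splits double the nodes touched. */
        static uint64_t calc_insert_metadata_size(uint64_t nodesize, unsigned int num_items)
        {
                return nodesize * MAX_LEVEL * 2 * num_items;
        }

        int main(void)
        {
                uint64_t nodesize = 16384;      /* a common btrfs nodesize */

                /* e.g. one item update vs. a 4-item insert (roughly a file create);
                 * the result would be handed to the reservation call as num_bytes. */
                printf("update 1 item:  %llu bytes\n",
                       (unsigned long long)calc_metadata_size(nodesize, 1));
                printf("insert 4 items: %llu bytes\n",
                       (unsigned long long)calc_insert_metadata_size(nodesize, 4));
                return 0;
        }
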
66 * Assume we are unable to simply make the reservation because we do not have
89 * Check if ->bytes == 0; if it does, we got our reservation and we can carry
90 * on; if not, return the appropriate error (ENOSPC, but can be EINTR if we
95 * Same as the above, except we add ourselves to the
96 * space_info->priority_tickets, and we do not use ticket->wait, we simply
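
    To make the ticket handshake sketched above easier to follow, here is a toy
    single-threaded model. The struct and helper names are invented for the illustration;
    the kernel's real reserve_ticket also carries a wait queue and list linkage, and the
    waiter sleeps rather than polling.

        #include <stdint.h>
        #include <stdio.h>
        #include <errno.h>

        /* Toy stand-in for a reservation ticket (illustrative only). */
        struct ticket {
                uint64_t bytes;         /* how much space we still need */
                int error;              /* set by the flusher if it gives up */
        };

        /* Flusher side: hand reclaimed space to the oldest waiting ticket. */
        static void grant(struct ticket *t, uint64_t reclaimed)
        {
                t->bytes -= reclaimed > t->bytes ? t->bytes : reclaimed;
        }

        /* Waiter side: the check made after being woken up. */
        static int ticket_status(const struct ticket *t)
        {
                if (t->error)
                        return t->error;                /* e.g. -ENOSPC */
                return t->bytes == 0 ? 0 : -EAGAIN;     /* 0 means reserved */
        }

        int main(void)
        {
                struct ticket t = { .bytes = 1 << 20, .error = 0 };

                grant(&t, 512 << 10);   /* first flush pass reclaims 512 KiB */
                printf("after pass 1: %d\n", ticket_status(&t));
                grant(&t, 512 << 10);   /* second pass satisfies the ticket */
                printf("after pass 2: %d\n", ticket_status(&t));
                return 0;
        }
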
102 * Generally speaking, we will have two cases for each state, a "nice" state
103 * and an "ALL THE THINGS" state. In btrfs we delay a lot of work in order to
107 * reclaim space so we can make new reservations.
111 * for example, we would update the inode item at write time to update the
113 * isize or bytes. We keep these delayed items to coalesce these operations
119 * for delayed allocation. We can reclaim some of this space simply by
120 * running delalloc, but usually we need to wait for ordered extents to
124 * We have a block reserve for the outstanding delayed refs space, and every
126 * to reclaim space, but we want to hold this until the end because COW can
127 * churn a lot and we can avoid making some extent tree modifications if we
131 * We will skip this the first time through space reservation, because of
132 * overcommit, and because we don't want to have a lot of useless metadata space when
136 * If we're freeing inodes we're likely freeing checksums, file extent
141 * This will commit the transaction. Historically we had a lot of logic
142 * surrounding whether or not we'd commit the transaction, but this was born
143 * out of a pre-tickets era where we could end up committing the transaction
145 * ticketing system we know if we're not making progress and can error
151 * Because we hold so many reservations for metadata we will allow you to
156 * You can see the current logic for when we allow overcommit in
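
    The line above points at the overcommit check; the sketch below shows the general shape
    of that decision, assuming the estimate of "available" space is supplied by the caller.
    It is a simplified model, not the kernel's btrfs_can_overcommit().

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        /*
         * Illustrative overcommit check: metadata reservations are worst case, so a
         * reservation may be allowed to exceed what is strictly free as long as the
         * estimated reclaimable/unallocated space could still back it.
         */
        static bool can_overcommit(uint64_t total_bytes, uint64_t used_bytes,
                                   uint64_t avail_bytes, uint64_t request)
        {
                return used_bytes + request < total_bytes + avail_bytes;
        }

        int main(void)
        {
                /* 10 GiB metadata, 9.5 GiB reserved, 2 GiB deemed available. */
                printf("%d\n", can_overcommit(10ULL << 30, 9728ULL << 20,
                                              2ULL << 30, 1ULL << 30));
                return 0;
        }
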
177 * after adding space to the filesystem, we need to clear the full flags
355 * "optimal" chunk size based on the fs size. However when we actually in calc_effective_data_chunk_size()
356 * allocate the chunk we will strip this down further, making it no in calc_effective_data_chunk_size()
359 * In zoned mode, we need to use zone_size (= data_sinfo->chunk_size) in calc_effective_data_chunk_size()
387 * If we have dup, raid1 or raid10 then only half of the free in calc_available_free_space()
389 * doesn't include the parity drive, so we don't have to in calc_available_free_space()
401 * reservation, because we assume that data reservations will == actual in calc_available_free_space()
402 * usage, we could potentially overcommit and then immediately have that in calc_available_free_space()
404 * bind when we get close to filling the file system. in calc_available_free_space()
407 * space. If we are relatively empty this won't affect our ability to in calc_available_free_space()
408 * overcommit much, and if we're very close to full it'll keep us from in calc_available_free_space()
409 * getting into a position where we've given ourselves very little in calc_available_free_space()
417 * If we aren't flushing all things, let us overcommit up to in calc_available_free_space()
418 * half of the space. If we can flush, don't let us overcommit in calc_available_free_space()
427 * In zoned mode, we always allocate one zone as one chunk. in calc_available_free_space()
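
    The calc_available_free_space() fragments above boil down to: scale the unallocated
    space by the profile's duplication factor, then only trust half of it unless the caller
    may flush everything. The sketch below models just that; the real function has more
    inputs (zoned allocation among them) than this.

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Illustrative only; models the "available for overcommit" estimate. */
        static uint64_t available_for_overcommit(uint64_t unallocated,
                                                 unsigned int dup_factor,
                                                 bool flush_all)
        {
                /* dup/raid1/raid10 store two copies, so only half of the free
                 * unallocated space is really usable for new metadata. */
                uint64_t avail = unallocated / dup_factor;

                /* Only overcommit against half of the estimate unless the caller
                 * is allowed to flush everything to make the reservation good. */
                if (!flush_all)
                        avail >>= 1;
                return avail;
        }

        int main(void)
        {
                printf("%llu MiB\n", (unsigned long long)
                       (available_for_overcommit(8ULL << 30, 2, false) >> 20));
                return 0;
        }
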
469 * This is for space we already have accounted in space_info->bytes_may_use, so
470 * basically when we're returning space from block_rsvs.
637 /* Calc the number of pages we need to flush for space reservation */ in shrink_delalloc()
642 * to_reclaim is set to however much metadata we need to in shrink_delalloc()
644 * exactly. What we really want to do is reclaim full inode's in shrink_delalloc()
646 * here. We will take a fraction of the delalloc bytes for our in shrink_delalloc()
648 * the amount we write to cover an entire dirty extent, which in shrink_delalloc()
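
    The fragments above describe converting a metadata shortfall into an amount of dirty
    data to write back, since freed metadata does not track written data one to one. The
    sketch below shows one way to express that conversion; the specific fraction and the
    page size are assumptions, not the values shrink_delalloc() uses.

        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SHIFT 12   /* assume 4 KiB pages */

        /* Illustrative: how many pages to write back for a metadata shortfall. */
        static uint64_t pages_to_flush(uint64_t to_reclaim, uint64_t delalloc_bytes)
        {
                /* Aim at a healthy fraction of the dirty data (here one half)
                 * rather than at to_reclaim itself, so whole dirty extents get
                 * written and turn into ordered extents we can wait on. */
                uint64_t target = delalloc_bytes / 2;

                if (target < to_reclaim)
                        target = to_reclaim;
                if (target > delalloc_bytes)
                        target = delalloc_bytes;  /* can't flush what isn't dirty */
                return target >> PAGE_SHIFT;
        }

        int main(void)
        {
                /* need 4 MiB of metadata back, 64 MiB of delalloc outstanding */
                printf("%llu pages\n", (unsigned long long)
                       pages_to_flush(4ULL << 20, 64ULL << 20));
                return 0;
        }
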
660 * If we are doing more ordered than delalloc we need to just wait on in shrink_delalloc()
661 * ordered extents, otherwise we'll waste time trying to flush delalloc in shrink_delalloc()
662 * that likely won't give us back the space we need. in shrink_delalloc()
676 * We need to make sure any outstanding async pages are now in shrink_delalloc()
677 * processed before we continue. This is because things like in shrink_delalloc()
679 * marked clean. We don't use filemap_fdatawrite for flushing in shrink_delalloc()
680 * because we want to control how many pages we write out at a in shrink_delalloc()
681 * time, thus this is the only safe way to make sure we've in shrink_delalloc()
685 * This exists because we do not want to wait for each in shrink_delalloc()
686 * individual inode to finish its async work, we simply want to in shrink_delalloc()
688 * for all of the async work to catch up. Once we're done with in shrink_delalloc()
689 * that we know we'll have ordered extents for everything and we in shrink_delalloc()
690 * can decide if we wait for that or not. in shrink_delalloc()
692 * If we choose to replace this in the future, make absolutely in shrink_delalloc()
701 * We don't want to wait forever; if we wrote fewer pages in this in shrink_delalloc()
702 * loop than we have outstanding, only wait for that number of in shrink_delalloc()
703 * pages, otherwise we can wait for all async pages to finish in shrink_delalloc()
724 * If we are flushing for preemption, we just want a one-shot of delalloc in shrink_delalloc()
725 * flushing so we can stop flushing if we decide we don't need in shrink_delalloc()
819 * If we have pending delayed iputs then we could free up a in flush_space()
820 * bunch of pinned space, so make sure we run the iputs before in flush_space()
821 * we do our pinned bytes check below. in flush_space()
829 * We don't want to start a new transaction, just attach to the in flush_space()
831 * happening at the moment. Note: we don't use a nostart join in flush_space()
861 * We may be flushing because suddenly we have less space than we had in btrfs_calc_reclaim_metadata_size()
862 * before, and now we're well over-committed based on our current free in btrfs_calc_reclaim_metadata_size()
863 * space. If that's the case add in our overage so we make sure to put in btrfs_calc_reclaim_metadata_size()
884 /* If we're just plain full then async reclaim just slows us down. */ in need_preemptive_reclaim()
896 * 128MiB is 1/4 of the maximum global rsv size. If we have less than in need_preemptive_reclaim()
898 * we don't have a lot of things that need flushing. in need_preemptive_reclaim()
904 * We have tickets queued, bail so we don't compete with the async in need_preemptive_reclaim()
911 * If we have over half of the free space occupied by reservations or in need_preemptive_reclaim()
912 * pinned then we want to start flushing. in need_preemptive_reclaim()
914 * We do not do the traditional thing here, which is to say in need_preemptive_reclaim()
919 * because this doesn't quite work how we want. If we had more than 50% in need_preemptive_reclaim()
920 * of the space_info used by bytes_used and we had 0 available we'd just in need_preemptive_reclaim()
921 * constantly run the background flusher. Instead we want it to kick in in need_preemptive_reclaim()
936 * much delalloc we need for the background flusher to kick in. in need_preemptive_reclaim()
950 * If we have more ordered bytes than delalloc bytes then we're either in need_preemptive_reclaim()
951 * doing a lot of DIO, or we simply don't have a lot of delalloc waiting in need_preemptive_reclaim()
955 * nothing, if our reservations are tied up in ordered extents we'll in need_preemptive_reclaim()
960 * block reserves that we would actually be able to directly reclaim in need_preemptive_reclaim()
961 * from. In this case if we're heavy on metadata operations this will in need_preemptive_reclaim()
966 * We want to make sure we truly are maxed out on ordered however, so in need_preemptive_reclaim()
967 * cut ordered in half, and if it's still higher than delalloc then we in need_preemptive_reclaim()
968 * can keep flushing. This is to avoid the case where we start in need_preemptive_reclaim()
969 * flushing, and now delalloc == ordered and we stop preemptively in need_preemptive_reclaim()
970 * flushing when we could still have several gigs of delalloc to flush. in need_preemptive_reclaim()
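
    Pulling the need_preemptive_reclaim() fragments together: kick the background flusher
    once reservations and pinned space tie up more than half of the free space, and when
    ordered extents (halved, per the comment above) dominate delalloc, count the block
    reserves we could reclaim directly instead of hoping delalloc flushing will help. The
    sketch below compresses that into one function with invented parameter names; it is a
    model, not the kernel's code.

        #include <stdint.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Illustrative only; not the kernel's need_preemptive_reclaim(). */
        static bool should_preempt(uint64_t free_bytes, uint64_t reserved_bytes,
                                   uint64_t pinned_bytes, uint64_t ordered_bytes,
                                   uint64_t delalloc_bytes, uint64_t block_rsv_bytes)
        {
                uint64_t used = reserved_bytes + pinned_bytes;

                /* Compare against half of ordered so flushing doesn't stop the
                 * moment delalloc == ordered; only once ordered truly dominates
                 * do we count reserves we could reclaim directly instead. */
                if (ordered_bytes / 2 > delalloc_bytes)
                        used += block_rsv_bytes;

                /* Kick the background flusher well before the space is gone:
                 * more than half of the free space tied up is the trigger. */
                return used > free_bytes / 2;
        }

        int main(void)
        {
                printf("%d\n", should_preempt(4ULL << 30, 2ULL << 30, 512ULL << 20,
                                              256ULL << 20, 1ULL << 30, 256ULL << 20));
                return 0;
        }
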
1016 * We've exhausted our flushing; start failing tickets.
1019 * @space_info: the space info we were flushing
1021 * We call this when we've exhausted our flushing ability and haven't made
1023 * order, so if there is a large ticket first and then smaller ones we could
1065 * We're just throwing tickets away, so more flushing may not in maybe_fail_all_tickets()
1066 * trip over btrfs_try_granting_tickets, so we need to call it in maybe_fail_all_tickets()
1067 * here to see if we can make progress with the next ticket in in maybe_fail_all_tickets()
1077 * This is for normal flushers; we can wait all goddamned day if we want to. We
1078 * will loop and continuously try to flush as long as we are making progress.
1079 * We count progress as clearing off tickets each time we have to loop.
1124 * We do not want to empty the system of delalloc unless we're in btrfs_async_reclaim_metadata_space()
1126 * logic before we start doing a FLUSH_DELALLOC_FULL. in btrfs_async_reclaim_metadata_space()
1132 * We don't want to force a chunk allocation until we've tried in btrfs_async_reclaim_metadata_space()
1133 * pretty hard to reclaim space. Think of the case where we in btrfs_async_reclaim_metadata_space()
1135 * to reclaim. We would rather use that than possibly create a in btrfs_async_reclaim_metadata_space()
1139 * around then we can force a chunk allocation. in btrfs_async_reclaim_metadata_space()
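
    The fragments from btrfs_async_reclaim_metadata_space() describe a loop that walks the
    flush states, counting progress as tickets cleared and restarting from the cheap states
    whenever progress is made, while deferring the heaviest steps (full delalloc flush,
    forced chunk allocation) until the gentler ones have failed. A schematic of that loop
    follows; the state names, their order, and the stubbed flush are illustrative only.

        #include <stdbool.h>
        #include <stdio.h>

        /* Illustrative flush states in escalating order (not the kernel's enum). */
        enum flush_state {
                FLUSH_DELAYED_ITEMS,
                FLUSH_DELALLOC,
                FLUSH_DELALLOC_FULL,    /* deferred: only after gentler states fail */
                ALLOC_CHUNK_FORCE,      /* deferred: burns unallocated space */
                COMMIT_TRANS,
                NR_STATES,
        };

        /* Stand-in for "did this flush state satisfy a ticket?" */
        static bool fake_flush(enum flush_state s)
        {
                return s == COMMIT_TRANS;       /* pretend only a commit helps */
        }

        int main(void)
        {
                int tickets = 1;
                enum flush_state state = FLUSH_DELAYED_ITEMS;

                while (tickets > 0) {
                        if (fake_flush(state)) {
                                printf("ticket satisfied at state %d\n", state);
                                tickets--;                      /* progress */
                                state = FLUSH_DELAYED_ITEMS;    /* restart cheap */
                                continue;
                        }
                        if (++state == NR_STATES) {
                                printf("exhausted, would start failing tickets\n");
                                break;
                        }
                }
                return 0;
        }
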
1162 * This handles pre-flushing of metadata space before we get to the point that
1163 * we need to start blocking threads on tickets. The logic here is different
1165 * much we need to flush; instead it attempts to keep us below the 80% full
1197 * We don't have a precise counter for the metadata being in btrfs_preempt_reclaim_metadata_space()
1198 * reserved for delalloc, so we'll approximate it by subtracting in btrfs_preempt_reclaim_metadata_space()
1200 * amount is higher than the individual reserves, then we can in btrfs_preempt_reclaim_metadata_space()
1211 * We don't want to include the global_rsv in our calculation, in btrfs_preempt_reclaim_metadata_space()
1212 * because that's space we can't touch. Subtract it from the in btrfs_preempt_reclaim_metadata_space()
1218 * We really want to avoid flushing delalloc too much, as it in btrfs_preempt_reclaim_metadata_space()
1242 * We don't want to reclaim everything, just a portion, so scale in btrfs_preempt_reclaim_metadata_space()
1254 /* We only went through once, back off our clamping. */ in btrfs_preempt_reclaim_metadata_space()
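
    Condensing the btrfs_preempt_reclaim_metadata_space() fragments above: approximate the
    delalloc reservations as whatever part of bytes_may_use the known block reserves do not
    explain, leave the untouchable global reserve out, and only go after a portion of the
    result each pass. This is a sketch with invented names; the one-quarter portion is an
    illustrative choice, not the kernel's scaling.

        #include <stdint.h>
        #include <stdio.h>

        /* Illustrative estimate of how much to reclaim preemptively. */
        static uint64_t preempt_reclaim_target(uint64_t bytes_may_use,
                                               uint64_t known_block_rsvs,
                                               uint64_t global_rsv)
        {
                /* There is no precise counter for delalloc reservations, so treat
                 * whatever the known reserves do not account for as delalloc. */
                uint64_t est = bytes_may_use > known_block_rsvs ?
                               bytes_may_use - known_block_rsvs : 0;

                /* The global reserve is space we cannot touch at all. */
                est = est > global_rsv ? est - global_rsv : 0;

                /* Don't try to reclaim everything at once, just a portion. */
                return est / 4;
        }

        int main(void)
        {
                printf("%llu MiB\n", (unsigned long long)
                       (preempt_reclaim_target(6ULL << 30, 1ULL << 30,
                                               512ULL << 20) >> 20));
                return 0;
        }
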
1265 * 1) compression is on and we allocate less space than we reserved
1266 * 2) we are overwriting existing space
1273 * For #2 this is trickier. Once the ordered extent runs we will drop the
1274 * extent in the range we are overwriting, which creates a delayed ref for
1279 * If we are freeing inodes, we want to make sure all delayed iputs have
1286 * This is where we reclaim all of the pinned space generated by running the
1290 * For data we start with alloc chunk force, however we could have been full
1292 * so if we now have space to allocate do the force chunk allocation.
1417 * because we may have only satisfied the priority tickets and still in priority_reclaim_metadata_space()
1418 * left non priority tickets on the list. We would then have in priority_reclaim_metadata_space()
1439 * Attempt to steal from the global rsv if we can, except if the fs was in priority_reclaim_metadata_space()
1442 * success to the caller if we can steal from the global rsv - this is in priority_reclaim_metadata_space()
1455 * We must run try_granting_tickets here because we could be a large in priority_reclaim_metadata_space()
1469 /* We could have been granted before we got here. */ in priority_reclaim_data_space()
1504 * Delete us from the list. After we unlock the space in wait_reserve_ticket()
1505 * info, we don't want the async reclaim job to reserve in wait_reserve_ticket()
1533 * @flush: how much we can flush
1573 * Check that we can't have an error set if the reservation succeeded, in handle_reserve_ticket()
1601 * If we're heavy on ordered operations then clamping won't help us. We in maybe_clamp_preempt()
1605 * delayed nodes. If we're already more ordered than delalloc then in maybe_clamp_preempt()
1606 * we're keeping up, otherwise we aren't and should probably clamp. in maybe_clamp_preempt()
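
    The maybe_clamp_preempt() fragments say the clamp should only ratchet up when delalloc
    is outpacing ordered extents, i.e. when background flushing is not keeping up. A small
    sketch of that check follows, assuming a cap on the clamp value; both the cap and the
    function shape are assumptions for the illustration.

        #include <stdint.h>
        #include <stdio.h>

        #define MAX_CLAMP 8     /* assumed upper bound, purely for the sketch */

        /* Bump the preemptive-flush clamp only when we are falling behind, i.e.
         * dirty delalloc is outpacing what has become ordered extents. */
        static unsigned int maybe_clamp(unsigned int clamp, uint64_t ordered_bytes,
                                        uint64_t delalloc_bytes)
        {
                if (ordered_bytes < delalloc_bytes && clamp < MAX_CLAMP)
                        clamp++;
                return clamp;
        }

        int main(void)
        {
                /* far more delalloc than ordered: not keeping up, clamp harder */
                printf("%u\n", maybe_clamp(2, 1ULL << 20, 8ULL << 20));
                return 0;
        }
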
1632 * @space_info: space info we want to allocate from
1633 * @orig_bytes: number of bytes we want
1634 * @flush: whether or not we can flush to make our reservation
1658 * BTRFS_RESERVE_FLUSH_EVICT, as we could deadlock because those in __reserve_bytes()
1677 * We don't want NO_FLUSH allocations to jump everybody, they can in __reserve_bytes()
1688 * Carry on if we have enough space (short-circuit) OR call in __reserve_bytes()
1689 * can_overcommit() to ensure we can overcommit to continue. in __reserve_bytes()
1700 * Things are dire; we need to make a reservation so we don't abort. We in __reserve_bytes()
1701 * will let this reservation go through as long as we have actual space in __reserve_bytes()
1714 * If we couldn't make a reservation then setup our reservation ticket in __reserve_bytes()
1717 * If we are a priority flusher then we just need to add our ticket to in __reserve_bytes()
1718 * the list and we will do our own flushing further down. in __reserve_bytes()
1735 * We were forced to add a reserve ticket, so in __reserve_bytes()
1756 * We will do the space reservation dance during log replay, in __reserve_bytes()
1757 * which means we won't have fs_info->fs_root set, so don't do in __reserve_bytes()
1758 * the async reclaim as we will panic. in __reserve_bytes()
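
    The __reserve_bytes() fragments above describe what happens when the fast path fails:
    queue a ticket (on the priority list for priority flushers, who flush for themselves),
    and kick the async reclaim worker unless we are in log replay, where fs_root is not set
    up yet. The sketch below compresses that into one decision helper with invented names;
    it is a model of the control flow, not the kernel's function.

        #include <stdbool.h>
        #include <stdio.h>

        enum flush_mode { FLUSH_NONE, FLUSH_PRIORITY, FLUSH_ASYNC };

        /* Illustrative decision: where a failed reservation's ticket goes and
         * whether the background flusher should be woken. */
        static enum flush_mode queue_ticket(bool reserved, bool priority_flusher,
                                            bool log_replay, bool *kick_async)
        {
                *kick_async = false;
                if (reserved)
                        return FLUSH_NONE;              /* fast path succeeded */

                if (priority_flusher)
                        return FLUSH_PRIORITY;          /* caller flushes itself */

                /* During log replay the async reclaim worker cannot be relied
                 * on, so don't wake it; otherwise let it work the ticket. */
                *kick_async = !log_replay;
                return FLUSH_ASYNC;
        }

        int main(void)
        {
                bool kick;
                enum flush_mode mode = queue_ticket(false, false, false, &kick);

                printf("mode=%d kick_async=%d\n", mode, kick);
                return 0;
        }
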
1781 * @space_info: the space_info we're allocating for
1782 * @orig_bytes: number of bytes we want
1783 * @flush: whether or not we can flush to make our reservation
1814 * @bytes: number of bytes we need
1815 * @flush: how we are allowed to flush
1818 * space then we will attempt to flush space as specified by flush.
1841 /* Dump all the space infos when we abort a transaction due to ENOSPC. */
1865 /* It's df, we don't care if it's racy */ in btrfs_account_ro_block_groups_free_space()
1910 * If we claw this back repeatedly, we can still achieve efficient
1932 * Furthermore, we want to avoid doing too much reclaim even if there are good
1934 * holes with writes. So we want to do just enough reclaim to try and stay
1939 * - ratchet up the intensity of reclaim depending on how far we are from
1956 /* If we have no unused space, don't bother, it won't work anyway. */ in calc_dynamic_reclaim_threshold()
1974 * Under "urgent" reclaim, we will reclaim even fresh block groups that have
1975 * recently seen successful allocations, as we are desperate to reclaim
1976 * whatever we can to avoid ENOSPC in a transaction leading to a readonly fs.
2021 * In situations where we are very motivated to reclaim (low unalloc) in do_reclaim_sweep()
2024 * If we have any staler groups, we don't touch the fresher ones, but if we in do_reclaim_sweep()