Lines matching the whole word "we"

30  * - Cache the next entry to be emitted to the fiemap buffer, so that we can
35 * buffer is memory mapped to the fiemap target file, we don't deadlock
36 * during btrfs_page_mkwrite(). This is because during fiemap we are locking
40 * if the fiemap buffer is memory mapped to the file we are running fiemap
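The fragments above describe why fiemap entries are staged in a kernel-side buffer: if the user's fiemap buffer is memory mapped over the file being mapped, copying into it can fault and enter btrfs_page_mkwrite() while the extent range is still locked. Below is a minimal userspace model of the "stage while locked, copy out after unlocking" idea; every name and type is a hypothetical stand-in, not the kernel's.

    #include <stdint.h>
    #include <string.h>

    struct fm_entry { uint64_t logical, phys, len; uint32_t flags; };

    /* Hypothetical staging buffer, filled while the extent range is locked. */
    struct fm_cache {
            struct fm_entry entries[256];
            int nr;
    };

    /* Copy staged entries to the destination only after dropping the range
     * lock, so a page fault on the destination cannot deadlock on it. */
    static int flush_entries(struct fm_cache *c, struct fm_entry *ubuf,
                             void (*unlock_range)(void))
    {
            unlock_range();
            memcpy(ubuf, c->entries, c->nr * sizeof(c->entries[0]));
            c->nr = 0;
            return 0;
    }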
53 * the next file extent item we must search for in the inode's subvolume
59 * This matches struct fiemap_extent_info::fi_mapped_extents; we use it
61 * fiemap_fill_next_extent() because we buffer ready fiemap entries in
62 * the @entries array, and we want to stop as soon as we hit the max
86 * Ignore 1 (reached max entries) because we keep track of that in flush_fiemap_cache()
102 * And only when we fail to merge, the cached one will be submitted as
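These two fragments describe one mechanism: the next fiemap entry is kept in a one-slot cache and extended while newly found extents are contiguous with it; only when a merge fails is the cached entry submitted. A sketch of that test follows, with hypothetical names (submit_entry() stands in for the real flush to the fiemap buffer):

    #include <stdbool.h>
    #include <stdint.h>

    struct fm_cache_entry {
            uint64_t offset, phys, len;
            uint32_t flags;
            bool cached;
    };

    void submit_entry(struct fm_cache_entry *cache); /* hypothetical flush */

    static void emit_cached(struct fm_cache_entry *cache, uint64_t offset,
                            uint64_t phys, uint64_t len, uint32_t flags)
    {
            if (cache->cached &&
                cache->offset + cache->len == offset && /* logically contiguous */
                cache->phys + cache->len == phys &&     /* physically contiguous */
                cache->flags == flags) {
                    cache->len += len;                  /* merge */
                    return;
            }
            if (cache->cached)
                    submit_entry(cache);                /* merge failed: submit */
            cache->offset = offset;
            cache->phys = phys;
            cache->len = len;
            cache->flags = flags;
            cache->cached = true;
    }

Merging only when the flags are identical and both the logical and physical ranges are contiguous keeps the reported mapping exact while minimizing the number of entries handed to user space.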
121 * When iterating the extents of the inode, at extent_fiemap(), we may in emit_fiemap_extent()
123 * previous extent we processed. This happens if fiemap is called in emit_fiemap_extent()
125 * after we had to unlock the file range, release the search path, emit in emit_fiemap_extent()
129 * For example, we are in leaf X processing its last item, which is the in emit_fiemap_extent()
139 * last one we processed. So in order not to report overlapping extents in emit_fiemap_extent()
140 * to user space, we trim the length of the previously cached extent and in emit_fiemap_extent()
143 * Upon calling btrfs_next_leaf() we may also find an extent with an in emit_fiemap_extent()
145 * when we had a hole or prealloc extent with several delalloc ranges in in emit_fiemap_extent()
147 * flushed and the resulting ordered extents were completed, so we can in emit_fiemap_extent()
149 * or equal to what we have in cache->offset. We deal with this as in emit_fiemap_extent()
156 * We cached a delalloc range (found in the io tree) for in emit_fiemap_extent()
157 * a hole or prealloc extent and we have now found a in emit_fiemap_extent()
158 * file extent item for the same offset. What we have in emit_fiemap_extent()
160 * we had in the cache and use what we have just found. in emit_fiemap_extent()
165 * The extent range we previously found ends after the in emit_fiemap_extent()
166 * offset of the file extent item we found and that in emit_fiemap_extent()
168 * extent range. So adjust the range we previously found in emit_fiemap_extent()
169 * to end at the offset of the file extent item we have in emit_fiemap_extent()
172 * item we have just found. This corresponds to the case in emit_fiemap_extent()
182 * The offset of the file extent item we have just found in emit_fiemap_extent()
183 * is behind the cached offset. This means we were in emit_fiemap_extent()
184 * processing a hole or prealloc extent for which we in emit_fiemap_extent()
186 * we have in the cache is the last delalloc range we in emit_fiemap_extent()
187 * found while the file extent item we found can be in emit_fiemap_extent()
188 * either for a whole delalloc range we previously in emit_fiemap_extent()
191 * We have two cases here: in emit_fiemap_extent()
195 * current file extent item because we don't want to in emit_fiemap_extent()
201 * end offset of the cached extent. We don't want to in emit_fiemap_extent()
203 * emitted already, so we emit the currently cached in emit_fiemap_extent()
243 * We will need to search again for the end offset of the last in emit_fiemap_extent()
247 * may have been inserted due to a new write, so we don't want in emit_fiemap_extent()
288 * So we must emit it before ending extent_fiemap().
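The emit_fiemap_extent() fragments above walk through what can happen after the file range was unlocked and the tree re-searched: the newly found extent may overlap the cached one. Below is a simplified model of the resulting discard/trim/skip decision; the names are hypothetical and the tail handling of the last case is only noted in a comment:

    #include <stdbool.h>
    #include <stdint.h>

    struct fm_cache_entry {
            uint64_t offset, len;
            bool cached;
    };

    /* Returns true if the newly found extent at [offset, offset + len)
     * should still be processed. */
    static bool reconcile_cache(struct fm_cache_entry *cache,
                                uint64_t offset, uint64_t len)
    {
            if (!cache->cached || cache->offset + cache->len <= offset)
                    return true;                /* no overlap */

            if (cache->offset == offset) {
                    /* Same start: the file extent item found in the tree
                     * wins over the delalloc range cached from the io
                     * tree, so drop the cached entry. */
                    cache->cached = false;
            } else if (cache->offset < offset) {
                    /* Partial overlap: trim the cached range so it ends
                     * where the newly found extent begins. */
                    cache->len = offset - cache->offset;
            } else if (offset + len <= cache->offset + cache->len) {
                    /* The found extent starts behind the cached offset and
                     * is fully covered by ranges already emitted or
                     * cached: skip it. */
                    return false;
            }
            /* Otherwise: emit the cached entry and keep only the part of
             * the found extent beyond the cached end (not modeled here). */
            return true;
    }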
319 * prevent btrfs_next_leaf() freeing it, we want to reuse it to avoid in fiemap_next_leaf_item()
343 * We must set ->start before calling copy_extent_buffer_full(). If we in fiemap_next_leaf_item()
344 * are on sub-pagesize blocksize, we use ->start to determine the offset in fiemap_next_leaf_item()
345 * into the folio where our eb exists, and if we update ->start after in fiemap_next_leaf_item()
347 * different offset in the folio than where we originally copied into. in fiemap_next_leaf_item()
350 /* See the comment at fiemap_search_slot() about why we clone. */ in fiemap_next_leaf_item()
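The fiemap_next_leaf_item() fragments stress an ordering rule: with a sub-pagesize blocksize, an extent buffer's ->start determines its byte offset inside the backing folio, so the reused clone's ->start must be updated before copy_extent_buffer_full() writes into it. A small userspace model of why the order matters; all types here are simplified stand-ins:

    #include <stdint.h>
    #include <string.h>

    #define FOLIO_SIZE 16384u
    #define BLOCK_SIZE 4096u

    struct model_eb {
            uint64_t start;   /* logical start; selects the folio offset */
            uint8_t *folio;   /* backing storage shared by several blocks */
    };

    static uint8_t *eb_data(const struct model_eb *eb)
    {
            return eb->folio + (eb->start & (FOLIO_SIZE - 1));
    }

    /* If dst->start still holds the old value when this runs, the bytes
     * land at the old offset within the folio, not where later reads
     * (which use the new ->start) will look. */
    static void model_copy_full(struct model_eb *dst,
                                const struct model_eb *src)
    {
            memcpy(eb_data(dst), eb_data(src), BLOCK_SIZE);
    }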
404 * We clone the leaf and use it during fiemap. This is because while in fiemap_search_slot()
405 * using the leaf we do expensive things like checking if an extent is in fiemap_search_slot()
407 * other tasks for too long, we use a clone of the leaf. We have locked in fiemap_search_slot()
408 * the file range in the inode's io tree, so we know none of our file in fiemap_search_slot()
409 * extent items can change. This way we avoid blocking other tasks that in fiemap_search_slot()
414 * We also need the private clone because holding a read lock on an in fiemap_search_slot()
416 * when we check if extents are shared, as backref walking may need to in fiemap_search_slot()
417 * lock the same leaf we are processing. in fiemap_search_slot()
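The fiemap_search_slot() fragments give two reasons for working on a private clone of the leaf: per-extent work such as checking whether an extent is shared is expensive and should not block other tasks on a tree lock, and backref walking may need to lock the very leaf being processed, which would deadlock against a lock held on it. A hedged sketch of the pattern, with every type and helper hypothetical:

    struct leaf { int nritems; /* ... */ };

    struct leaf *clone_leaf(const struct leaf *l);
    void free_leaf(struct leaf *l);
    void release_tree_locks(void);
    void check_if_extent_is_shared(const struct leaf *l, int slot);

    static int process_leaf_unlocked(const struct leaf *l)
    {
            struct leaf *clone;

            clone = clone_leaf(l);      /* private, unlocked copy */
            if (!clone)
                    return -1;          /* -ENOMEM in the real code */
            release_tree_locks();       /* other tasks can proceed now */

            for (int slot = 0; slot < clone->nritems; slot++)
                    /* May walk backrefs and read-lock this same leaf in
                     * the tree; safe because we hold no lock on it. */
                    check_if_extent_is_shared(clone, slot);

            free_leaf(clone);
            return 0;
    }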
433 * btree. If @disk_bytenr is 0, we are dealing with a hole, otherwise a prealloc
471 * If this is a prealloc extent, we have to report every section in fiemap_process_hole()
519 * Either we found no delalloc for the whole prealloc extent or we have in fiemap_process_hole()
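The fiemap_process_hole() fragments explain that a range with disk_bytenr == 0 is a hole and otherwise a prealloc extent, and that either kind has to be reported piecewise because parts of it may be delalloc found in the io tree. A simplified model follows; the FE_* flags stand in for the real FIEMAP_EXTENT_* values and both helpers are hypothetical:

    #include <stdbool.h>
    #include <stdint.h>

    #define FE_DELALLOC  0x1
    #define FE_UNWRITTEN 0x2

    /* Hypothetical: next delalloc subrange [*ds, *de] within [cur, end]. */
    bool next_delalloc(uint64_t cur, uint64_t end, uint64_t *ds, uint64_t *de);
    void emit(uint64_t start, uint64_t len, uint32_t flags);

    static void process_hole(uint64_t disk_bytenr, uint64_t start, uint64_t end)
    {
            uint32_t base = disk_bytenr ? FE_UNWRITTEN : 0;
            uint64_t cur = start, ds, de;

            while (cur <= end && next_delalloc(cur, end, &ds, &de)) {
                    if (ds > cur && base)   /* prealloc piece before delalloc */
                            emit(cur, ds - cur, base);
                    emit(ds, de - ds + 1, FE_DELALLOC | base);
                    cur = de + 1;
            }
            if (cur <= end && base)         /* trailing prealloc piece */
                    emit(cur, end - cur + 1, base);
    }

Note that a plain hole (base == 0) only produces entries for its delalloc subranges; the hole itself is not reported.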
567 * Look up the last file extent. We're not using i_size here because in fiemap_find_last_extent_offset()
594 * so first check if we have an inline extent item before checking if we in fiemap_find_last_extent_offset()
606 * case: we have one hole file extent item at slot 0 of a leaf and in fiemap_find_last_extent_offset()
608 * This is because we merge file extent items that represent holes. in fiemap_find_last_extent_offset()
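The fiemap_find_last_extent_offset() fragments describe finding where the last file extent ends: i_size is unreliable because prealloc extents may extend past it, and an inline extent, which can only exist at offset 0, has to be checked first. A rough sketch of the idea, assuming a hypothetical helper in place of the real btree search for the key (ino, EXTENT_DATA, (u64)-1) followed by stepping back one slot:

    #include <stdbool.h>
    #include <stdint.h>

    struct item {
            uint64_t offset;     /* file offset the extent item starts at */
            uint64_t num_bytes;  /* length of a regular/prealloc extent */
            uint64_t ram_bytes;  /* uncompressed length of inline data */
            bool inline_extent;
    };

    /* Hypothetical: extent item with the largest offset <= max_off. */
    bool lookup_prev_extent_item(uint64_t ino, uint64_t max_off,
                                 struct item *it);

    static uint64_t last_extent_end(uint64_t ino)
    {
            struct item it;

            if (!lookup_prev_extent_item(ino, UINT64_MAX, &it))
                    return 0;            /* no extent items at all */
            if (it.inline_extent)        /* inline data: only at offset 0 */
                    return it.ram_bytes;
            return it.offset + it.num_bytes;
    }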
677 * No file extent item found, but we may have delalloc between in extent_fiemap()
712 /* We have an implicit hole (NO_HOLES feature enabled). */ in extent_fiemap()
728 /* We've reached the end of the fiemap range, stop. */ in extent_fiemap()
764 /* We have an explicit hole. */ in extent_fiemap()
770 /* We have a regular extent. */ in extent_fiemap()
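The extent_fiemap() fragments above enumerate the cases the main loop distinguishes: a gap with no item (an implicit hole, possible with the NO_HOLES feature), an explicit hole item with disk_bytenr == 0, and regular or prealloc extents. A compact sketch of that classification over hypothetical, simplified fields:

    #include <stdbool.h>
    #include <stdint.h>

    struct fext {
            uint64_t offset;       /* file offset the item starts at */
            uint64_t disk_bytenr;  /* 0 means an explicit hole item */
            bool prealloc;
    };

    enum ext_kind {
            EXT_IMPLICIT_HOLE,     /* NO_HOLES: the gap left no item */
            EXT_EXPLICIT_HOLE,     /* a hole item written to the tree */
            EXT_PREALLOC,
            EXT_REGULAR,
    };

    static enum ext_kind classify(const struct fext *fe, uint64_t expected)
    {
            if (!fe || fe->offset > expected)
                    return EXT_IMPLICIT_HOLE;
            if (fe->disk_bytenr == 0)
                    return EXT_EXPLICIT_HOLE;
            return fe->prealloc ? EXT_PREALLOC : EXT_REGULAR;
    }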
859 * Must free the path before emitting to the fiemap buffer because we in extent_fiemap()
893 * file range (0 to LLONG_MAX), but that is not enough if we have in btrfs_fiemap()
898 * complete and writeback to start. We also need to wait for ordered in btrfs_fiemap()
902 * if we have delalloc in those ranges. in btrfs_fiemap()
913 * We did an initial flush to avoid holding the inode's lock while in btrfs_fiemap()
915 * extents. Now after we locked the inode we do it again, because it's in btrfs_fiemap()
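The btrfs_fiemap() fragments describe flushing delalloc and waiting for ordered extents twice: once before taking the inode lock, so the lock is not held while waiting for writeback, and again after taking it, because new delalloc and ordered extents may have appeared in between. A sketch of the double-flush pattern, with every helper hypothetical:

    struct ctx;

    int flush_and_wait_ordered(struct ctx *c);
    void lock_inode(struct ctx *c);
    void unlock_inode(struct ctx *c);

    static int fiemap_prepare(struct ctx *c)
    {
            int ret;

            ret = flush_and_wait_ordered(c);  /* lock not held yet */
            if (ret)
                    return ret;

            lock_inode(c);
            /* New writes may have created delalloc while unlocked, so
             * flush again now that the inode is locked. */
            ret = flush_and_wait_ordered(c);
            if (ret)
                    unlock_inode(c);
            return ret;
    }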