// SPDX-License-Identifier: GPL-2.0-only
/*
 * mm/readahead.c - address_space-level file readahead.
 *
 * Copyright (C) 2002, Linus Torvalds
 *
 * 09Apr2002	Andrew Morton
 *		Initial version.
 */

/**
 * DOC: Readahead Overview
 *
 * Readahead is used to read content into the page cache before it is
 * explicitly requested by the application.  Readahead only ever
 * attempts to read folios that are not yet in the page cache.  If a
 * folio is present but not up-to-date, readahead will not try to read
 * it.  In that case a simple ->read_folio() will be requested.
 *
 * Readahead is triggered when an application read request (whether a
 * system call or a page fault) finds that the requested folio is not in
 * the page cache, or that it is in the page cache and has the
 * readahead flag set.  This flag indicates that the folio was read
 * as part of a previous readahead request and now that it has been
 * accessed, it is time for the next readahead.
 *
 * Each readahead request is partly a synchronous read, and partly async
 * readahead.  This is reflected in the struct file_ra_state which
 * contains ->size being the total number of pages, and ->async_size
 * which is the number of pages in the async section.  The readahead
 * flag will be set on the first folio in this async section to trigger
 * a subsequent readahead.  Once a series of sequential reads has been
 * established, there should be no need for a synchronous component and
 * all readahead requests will be fully asynchronous.
 *
 * When either of the triggers causes a readahead, three numbers need
 * to be determined: the start of the region to read, the size of the
 * region, and the size of the async tail.
 *
 * The start of the region is simply the first page address at or after
 * the accessed address, which is not currently populated in the page
 * cache.  This is found with a simple search in the page cache.
 *
 * The size of the async tail is determined by subtracting the size that
 * was explicitly requested from the determined request size, unless
 * this would be less than zero - then zero is used.  NOTE THIS
 * CALCULATION IS WRONG WHEN THE START OF THE REGION IS NOT THE ACCESSED
 * PAGE.  ALSO THIS CALCULATION IS NOT USED CONSISTENTLY.
 *
 * The size of the region is normally determined from the size of the
 * previous readahead which loaded the preceding pages.  This may be
 * discovered from the struct file_ra_state for simple sequential reads,
 * or from examining the state of the page cache when multiple
 * sequential reads are interleaved.  Specifically: where the readahead
 * was triggered by the readahead flag, the size of the previous
 * readahead is assumed to be the number of pages from the triggering
 * page to the start of the new readahead.  In these cases, the size of
 * the previous readahead is scaled, often doubled, for the new
 * readahead, though see get_next_ra_size() for details.
 *
 * If the size of the previous read cannot be determined, the number of
 * preceding pages in the page cache is used to estimate the size of
 * a previous read.  This estimate could easily be misled by random
 * reads being coincidentally adjacent, so it is ignored unless it is
 * larger than the current request, and it is not scaled up, unless it
 * is at the start of the file.
 *
 * In general readahead is accelerated at the start of the file, as
 * reads from there are often sequential.  There are other minor
 * adjustments to the readahead size in various special cases and these
 * are best discovered by reading the code.
 *
 * The above calculation, based on the previous readahead size,
 * determines the size of the readahead, to which any requested read
 * size may be added.
 *
 * Readahead requests are sent to the filesystem using the ->readahead()
 * address space operation, for which mpage_readahead() is a canonical
 * implementation.  ->readahead() should normally initiate reads on all
 * folios, but may fail to read any or all folios without causing an I/O
 * error.  The page cache reading code will issue a ->read_folio() request
 * for any folio which ->readahead() did not read, and only an error
 * from this will be final.
 *
 * ->readahead() will generally call readahead_folio() repeatedly to get
 * each folio from those prepared for readahead.  It may fail to read a
 * folio by:
 *
 * * not calling readahead_folio() sufficiently many times, effectively
 *   ignoring some folios, as might be appropriate if the path to
 *   storage is congested.
 *
 * * failing to actually submit a read request for a given folio,
 *   possibly due to insufficient resources, or
 *
 * * getting an error during subsequent processing of a request.
 *
 * In the last two cases, the folio should be unlocked by the filesystem
 * to indicate that the read attempt has failed.  In the first case the
 * folio will be unlocked by the VFS.
 *
 * Those folios not in the final ``async_size`` of the request should be
 * considered to be important and ->readahead() should not fail them due
 * to congestion or temporary resource unavailability, but should wait
 * for necessary resources (e.g. memory or indexing information) to
 * become available.  Folios in the final ``async_size`` may be
 * considered less urgent and failure to read them is more acceptable.
 * In this case it is best to use filemap_remove_folio() to remove the
 * folios from the page cache as is automatically done for folios that
 * were not fetched with readahead_folio().  This will allow a
 * subsequent synchronous readahead request to try them again.  If they
 * are left in the page cache, then they will be read individually using
 * ->read_folio() which may be less efficient.
 */
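
/*
 * For illustration, the readahead_folio() loop described above commonly
 * looks like the following sketch in a filesystem's ->readahead()
 * implementation (foo_read_folio_async() is a hypothetical helper, not
 * a real API):
 *
 *	static void foo_readahead(struct readahead_control *rac)
 *	{
 *		struct folio *folio;
 *
 *		while ((folio = readahead_folio(rac)) != NULL)
 *			foo_read_folio_async(rac->file, folio);
 *	}
 *
 * Folios the loop never fetches are unlocked and removed by the VFS;
 * folios it does fetch must be unlocked by the filesystem once the read
 * attempt completes or fails, as described above.
 */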

#include <linux/blkdev.h>
#include <linux/kernel.h>
#include <linux/dax.h>
#include <linux/gfp.h>
#include <linux/export.h>
#include <linux/backing-dev.h>
#include <linux/task_io_accounting_ops.h>
#include <linux/pagemap.h>
#include <linux/psi.h>
#include <linux/syscalls.h>
#include <linux/file.h>
#include <linux/mm_inline.h>
#include <linux/blk-cgroup.h>
#include <linux/fadvise.h>
#include <linux/sched/mm.h>

#include "internal.h"

/*
 * Initialise a struct file's readahead state.  Assumes that the caller has
 * memset *ra to zero.
 */
void
file_ra_state_init(struct file_ra_state *ra, struct address_space *mapping)
{
	ra->ra_pages = inode_to_bdi(mapping->host)->ra_pages;
	ra->prev_pos = -1;
}
EXPORT_SYMBOL_GPL(file_ra_state_init);

static void read_pages(struct readahead_control *rac)
{
	const struct address_space_operations *aops = rac->mapping->a_ops;
	struct folio *folio;
	struct blk_plug plug;

	if (!readahead_count(rac))
		return;

	if (unlikely(rac->_workingset))
		psi_memstall_enter(&rac->_pflags);
	blk_start_plug(&plug);

	if (aops->readahead) {
		aops->readahead(rac);
		/*
		 * Clean up the remaining folios.  The sizes in ->ra
		 * may be used to size the next readahead, so make sure
		 * they accurately reflect what happened.
		 */
		while ((folio = readahead_folio(rac)) != NULL) {
			unsigned long nr = folio_nr_pages(folio);

			folio_get(folio);
			rac->ra->size -= nr;
			if (rac->ra->async_size >= nr) {
				rac->ra->async_size -= nr;
				filemap_remove_folio(folio);
			}
			folio_unlock(folio);
			folio_put(folio);
		}
	} else {
		while ((folio = readahead_folio(rac)) != NULL)
			aops->read_folio(rac->file, folio);
	}

	blk_finish_plug(&plug);
	if (unlikely(rac->_workingset))
		psi_memstall_leave(&rac->_pflags);
	rac->_workingset = false;

	BUG_ON(readahead_count(rac));
}
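
/*
 * Worked example of the cleanup loop in read_pages() above (illustrative
 * numbers, not taken from a trace): if ->readahead() was handed 8
 * single-page folios with ->ra->size == 8 and ->ra->async_size == 4, but
 * submitted only the first 5, the loop runs three times, leaving
 * ->ra->size == 5 and ->ra->async_size == 1, and drops the three unread
 * folios from the page cache so a later synchronous readahead can retry
 * them.
 */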

/**
 * page_cache_ra_unbounded - Start unchecked readahead.
 * @ractl: Readahead control.
 * @nr_to_read: The number of pages to read.
 * @lookahead_size: Where to start the next readahead.
 *
 * This function is for filesystems to call when they want to start
 * readahead beyond a file's stated i_size.  This is almost certainly
 * not the function you want to call.  Use page_cache_async_readahead()
 * or page_cache_sync_readahead() instead.
 *
 * Context: File is referenced by caller.  Mutexes may be held by caller.
 * May sleep, but will not reenter filesystem to reclaim memory.
 */
void page_cache_ra_unbounded(struct readahead_control *ractl,
		unsigned long nr_to_read, unsigned long lookahead_size)
{
	struct address_space *mapping = ractl->mapping;
	unsigned long index = readahead_index(ractl);
	gfp_t gfp_mask = readahead_gfp_mask(mapping);
	unsigned long i;

	/*
	 * Partway through the readahead operation, we will have added
	 * locked pages to the page cache, but will not yet have submitted
	 * them for I/O.  Adding another page may need to allocate memory,
	 * which can trigger memory reclaim.  Telling the VM we're in
	 * the middle of a filesystem operation will cause it to not
	 * touch file-backed pages, preventing a deadlock.  Most (all?)
	 * filesystems already specify __GFP_NOFS in their mapping's
	 * gfp_mask, but let's be explicit here.
	 */
	unsigned int nofs = memalloc_nofs_save();

	filemap_invalidate_lock_shared(mapping);
	/*
	 * Preallocate as many pages as we will need.
	 */
	for (i = 0; i < nr_to_read; i++) {
		struct folio *folio = xa_load(&mapping->i_pages, index + i);

		if (folio && !xa_is_value(folio)) {
			/*
			 * Page already present?  Kick off the current batch
			 * of contiguous pages before continuing with the
			 * next batch.  This page may be the one we would
			 * have intended to mark as Readahead, but we don't
			 * have a stable reference to this page, and it's
			 * not worth getting one just for that.
			 */
			read_pages(ractl);
			ractl->_index++;
			i = ractl->_index + ractl->_nr_pages - index - 1;
			continue;
		}

		folio = filemap_alloc_folio(gfp_mask, 0);
		if (!folio)
			break;
		if (filemap_add_folio(mapping, folio, index + i,
					gfp_mask) < 0) {
			folio_put(folio);
			read_pages(ractl);
			ractl->_index++;
			i = ractl->_index + ractl->_nr_pages - index - 1;
			continue;
		}
		if (i == nr_to_read - lookahead_size)
			folio_set_readahead(folio);
		ractl->_workingset |= folio_test_workingset(folio);
		ractl->_nr_pages++;
	}

	/*
	 * Now start the IO.  We ignore I/O errors - if the folio is not
	 * uptodate then the caller will launch read_folio again, and
	 * will then handle the error.
	 */
	read_pages(ractl);
	filemap_invalidate_unlock_shared(mapping);
	memalloc_nofs_restore(nofs);
}
EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
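
/*
 * A sketch of a page_cache_ra_unbounded() call site (the caller is
 * hypothetical; the helpers used are the real ones from linux/pagemap.h):
 *
 *	DEFINE_READAHEAD(ractl, file, &file->f_ra, mapping, index);
 *	page_cache_ra_unbounded(&ractl, 16, 8);
 *
 * This reads 16 pages starting at index without regard to i_size, and
 * sets the readahead flag 8 pages before the end of the batch so that a
 * later access there can trigger the next readahead.
 */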

/*
 * do_page_cache_ra() actually reads a chunk of disk.  It allocates
 * the pages first, then submits them for I/O.  This avoids the very bad
 * behaviour which would occur if page allocations are causing VM writeback.
 * We really don't want to intermingle reads and writes like that.
 */
static void do_page_cache_ra(struct readahead_control *ractl,
		unsigned long nr_to_read, unsigned long lookahead_size)
{
	struct inode *inode = ractl->mapping->host;
	unsigned long index = readahead_index(ractl);
	loff_t isize = i_size_read(inode);
	pgoff_t end_index;	/* The last page we want to read */

	if (isize == 0)
		return;

	end_index = (isize - 1) >> PAGE_SHIFT;
	if (index > end_index)
		return;
	/* Don't read past the page containing the last byte of the file */
	if (nr_to_read > end_index - index)
		nr_to_read = end_index - index + 1;

	page_cache_ra_unbounded(ractl, nr_to_read, lookahead_size);
}

/*
 * Chunk the readahead into 2 megabyte units, so that we don't pin too much
 * memory at once.
 */
void force_page_cache_ra(struct readahead_control *ractl,
		unsigned long nr_to_read)
{
	struct address_space *mapping = ractl->mapping;
	struct file_ra_state *ra = ractl->ra;
	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
	unsigned long max_pages, index;

	if (unlikely(!mapping->a_ops->read_folio && !mapping->a_ops->readahead))
		return;

	/*
	 * If the request exceeds the readahead window, allow the read to
	 * be up to the optimal hardware IO size
	 */
	index = readahead_index(ractl);
	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
	nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
	while (nr_to_read) {
		unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;

		if (this_chunk > nr_to_read)
			this_chunk = nr_to_read;
		ractl->_index = index;
		do_page_cache_ra(ractl, this_chunk, 0);

		index += this_chunk;
		nr_to_read -= this_chunk;
	}
}

/*
 * Set the initial window size: round the request up to the next power of 2,
 * then scale it x4 for small sizes and x2 for medium ones, capping at max.
 * For a 128k (32 page) max ra this gives:
 * 1-2 page = 16k, 3-4 page = 32k, 5-8 page = 64k, > 8 page = 128k initial
 */
static unsigned long get_init_ra_size(unsigned long size, unsigned long max)
{
	unsigned long newsize = roundup_pow_of_two(size);

	if (newsize <= max / 32)
		newsize = newsize * 4;
	else if (newsize <= max / 4)
		newsize = newsize * 2;
	else
		newsize = max;

	return newsize;
}
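
/*
 * Example for get_init_ra_size() with max == 32 pages (128k on 4k pages):
 * a first read of 1-2 pages gets a 4-page initial window, 3-4 pages get
 * 8 pages, 5-8 pages get 16, and anything larger jumps straight to 32.
 */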

/*
 * Get the previous window size, ramp it up, and
 * return it as the new window size.
 */
static unsigned long get_next_ra_size(struct file_ra_state *ra,
				      unsigned long max)
{
	unsigned long cur = ra->size;

	if (cur < max / 16)
		return 4 * cur;
	if (cur <= max / 2)
		return 2 * cur;
	return max;
}
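
/*
 * Example ramp-up from get_next_ra_size() with max == 32: a 1-page window
 * grows 1 -> 4 -> 8 -> 16 -> 32 across successive sequential hits (4x
 * while under max/16, 2x up to max/2, then pinned at max).
 */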

/*
 * On-demand readahead design.
 *
 * The fields in struct file_ra_state represent the most-recently-executed
 * readahead attempt:
 *
 *                        |<----- async_size ---------|
 *     |------------------- size -------------------->|
 *     |==================#===========================|
 *     ^start             ^page marked with PG_readahead
 *
 * To overlap application thinking time and disk I/O time, we do
 * `readahead pipelining': Do not wait until the application consumed all
 * readahead pages and stalled on the missing page at readahead_index;
 * Instead, submit an asynchronous readahead I/O as soon as there are
 * only async_size pages left in the readahead window.  Normally async_size
 * will be equal to size, for maximum pipelining.
 *
 * In interleaved sequential reads, concurrent streams on the same fd can
 * be invalidating each other's readahead state.  So we flag the new readahead
 * page at (start+size-async_size) with PG_readahead, and use it as readahead
 * indicator.  The flag won't be set on already cached pages, to avoid the
 * readahead-for-nothing fuss, saving pointless page cache lookups.
 *
 * prev_pos tracks the last visited byte in the _previous_ read request.
 * It should be maintained by the caller, and will be used for detecting
 * small random reads.  Note that the readahead algorithm checks loosely
 * for sequential patterns.  Hence interleaved reads might be served as
 * sequential ones.
 *
 * There is a special-case: if the first page which the application tries to
 * read happens to be the first page of the file, it is assumed that a linear
 * read is about to happen and the window is immediately set to the initial size
 * based on I/O request size and the max_readahead.
 *
 * The code ramps up the readahead size aggressively at first, but slows down
 * as it approaches max_readahead.
 */
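
/*
 * Worked example of the pipelining described above, assuming a 32-page
 * maximum window: after a readahead with start == 100, size == 16 and
 * async_size == 16, the folio at index 100 carries PG_readahead.  The
 * first access to index 100 therefore triggers page_cache_async_ra(),
 * which advances the window to start == 116, size == 32, async_size == 32
 * and submits that I/O before the reader can stall at index 116.
 */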

/*
 * Count contiguously cached pages from @index-1 to @index-@max.  This
 * count is a conservative estimate of
 *	- the length of the sequential read sequence, or
 *	- the thrashing threshold in memory-tight systems.
 */
static pgoff_t count_history_pages(struct address_space *mapping,
				   pgoff_t index, unsigned long max)
{
	pgoff_t head;

	rcu_read_lock();
	head = page_cache_prev_miss(mapping, index - 1, max);
	rcu_read_unlock();

	return index - 1 - head;
}

/*
 * page cache context based readahead
 */
static int try_context_readahead(struct address_space *mapping,
				 struct file_ra_state *ra,
				 pgoff_t index,
				 unsigned long req_size,
				 unsigned long max)
{
	pgoff_t size;

	size = count_history_pages(mapping, index, max);

	/*
	 * not enough history pages:
	 * it could be a random read
	 */
	if (size <= req_size)
		return 0;

	/*
	 * starts from beginning of file:
	 * it is a strong indication of long-run stream (or whole-file-read)
	 */
	if (size >= index)
		size *= 2;

	ra->start = index;
	ra->size = min(size + req_size, max);
	ra->async_size = 1;

	return 1;
}
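
/*
 * Example: a 2-page read at index 200 finds no matching readahead state,
 * but pages 150-199 are already cached.  count_history_pages() reports 50
 * preceding pages, which exceeds the 2-page request, so
 * try_context_readahead() treats this as an interleaved sequential stream
 * and sets up a min(50 + 2, max)-page window at index 200 with a 1-page
 * async tail.
 */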

static inline int ra_alloc_folio(struct readahead_control *ractl, pgoff_t index,
		pgoff_t mark, unsigned int order, gfp_t gfp)
{
	int err;
	struct folio *folio = filemap_alloc_folio(gfp, order);

	if (!folio)
		return -ENOMEM;
	mark = round_down(mark, 1UL << order);
	if (index == mark)
		folio_set_readahead(folio);
	err = filemap_add_folio(ractl->mapping, folio, index, gfp);
	if (err) {
		folio_put(folio);
		return err;
	}

	ractl->_nr_pages += 1UL << order;
	ractl->_workingset |= folio_test_workingset(folio);
	return 0;
}

void page_cache_ra_order(struct readahead_control *ractl,
		struct file_ra_state *ra, unsigned int new_order)
{
	struct address_space *mapping = ractl->mapping;
	pgoff_t index = readahead_index(ractl);
	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
	pgoff_t mark = index + ra->size - ra->async_size;
	int err = 0;
	gfp_t gfp = readahead_gfp_mask(mapping);

	if (!mapping_large_folio_support(mapping) || ra->size < 4)
		goto fallback;

	limit = min(limit, index + ra->size - 1);

	if (new_order < MAX_PAGECACHE_ORDER) {
		new_order += 2;
		new_order = min_t(unsigned int, MAX_PAGECACHE_ORDER, new_order);
		new_order = min_t(unsigned int, new_order, ilog2(ra->size));
	}

	filemap_invalidate_lock_shared(mapping);
	while (index <= limit) {
		unsigned int order = new_order;

		/* Align with smaller pages if needed */
		if (index & ((1UL << order) - 1))
			order = __ffs(index);
		/* Don't allocate pages past EOF */
		while (index + (1UL << order) - 1 > limit)
			order--;
		err = ra_alloc_folio(ractl, index, mark, order, gfp);
		if (err)
			break;
		index += 1UL << order;
	}

	if (index > limit) {
		ra->size += index - limit - 1;
		ra->async_size += index - limit - 1;
	}

	read_pages(ractl);
	filemap_invalidate_unlock_shared(mapping);

	/*
	 * If there were already pages in the page cache, then we may have
	 * left some gaps.  Let the regular readahead code take care of this
	 * situation.
	 */
	if (!err)
		return;
fallback:
	do_page_cache_ra(ractl, ra->size, ra->async_size);
}
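
/*
 * Example of the order ramp in page_cache_ra_order(): when the marked
 * folio was order 0 (new_order == 0 on entry) and ra->size == 32, the
 * order becomes min(0 + 2, ilog2(32)) == 2, so the window is populated
 * with order-2 (16k on 4k pages) folios, falling back to smaller orders
 * at unaligned indices and near EOF as the loop above shows.
 */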

/*
 * A minimal readahead algorithm for trivial sequential/random reads.
 */
static void ondemand_readahead(struct readahead_control *ractl,
		struct folio *folio, unsigned long req_size)
{
	struct backing_dev_info *bdi = inode_to_bdi(ractl->mapping->host);
	struct file_ra_state *ra = ractl->ra;
	unsigned long max_pages = ra->ra_pages;
	unsigned long add_pages;
	pgoff_t index = readahead_index(ractl);
	pgoff_t expected, prev_index;
	unsigned int order = folio ? folio_order(folio) : 0;

	/*
	 * If the request exceeds the readahead window, allow the read to
	 * be up to the optimal hardware IO size
	 */
	if (req_size > max_pages && bdi->io_pages > max_pages)
		max_pages = min(req_size, bdi->io_pages);

	/*
	 * start of file
	 */
	if (!index)
		goto initial_readahead;

	/*
	 * It's the expected callback index, assume sequential access.
	 * Ramp up sizes, and push forward the readahead window.
	 */
	expected = round_down(ra->start + ra->size - ra->async_size,
			1UL << order);
	if (index == expected || index == (ra->start + ra->size)) {
		ra->start += ra->size;
		ra->size = get_next_ra_size(ra, max_pages);
		ra->async_size = ra->size;
		goto readit;
	}

	/*
	 * Hit a marked folio without valid readahead state.
	 * E.g. interleaved reads.
	 * Query the pagecache for async_size, which normally equals the
	 * readahead size.  Ramp it up and use it as the new readahead size.
	 */
	if (folio) {
		pgoff_t start;

		rcu_read_lock();
		start = page_cache_next_miss(ractl->mapping, index + 1,
				max_pages);
		rcu_read_unlock();

		if (!start || start - index > max_pages)
			return;

		ra->start = start;
		ra->size = start - index;	/* old async_size */
		ra->size += req_size;
		ra->size = get_next_ra_size(ra, max_pages);
		ra->async_size = ra->size;
		goto readit;
	}

	/*
	 * oversize read
	 */
	if (req_size > max_pages)
		goto initial_readahead;

	/*
	 * sequential cache miss
	 * trivial case: (index - prev_index) == 1
	 * unaligned reads: (index - prev_index) == 0
	 */
	prev_index = (unsigned long long)ra->prev_pos >> PAGE_SHIFT;
	if (index - prev_index <= 1UL)
		goto initial_readahead;

	/*
	 * Query the page cache and look for the traces (cached history pages)
	 * that a sequential stream would leave behind.
	 */
	if (try_context_readahead(ractl->mapping, ra, index, req_size,
			max_pages))
		goto readit;

	/*
	 * standalone, small random read
	 * Read as is, and do not pollute the readahead state.
	 */
	do_page_cache_ra(ractl, req_size, 0);
	return;

initial_readahead:
	ra->start = index;
	ra->size = get_init_ra_size(req_size, max_pages);
	ra->async_size = ra->size > req_size ? ra->size - req_size : ra->size;

readit:
	/*
	 * Will this read hit the readahead marker made by itself?
	 * If so, trigger the readahead marker hit now, and merge
	 * the resulting next readahead window into the current one.
	 * Take care of maximum IO pages as above.
	 */
	if (index == ra->start && ra->size == ra->async_size) {
		add_pages = get_next_ra_size(ra, max_pages);
		if (ra->size + add_pages <= max_pages) {
			ra->async_size = add_pages;
			ra->size += add_pages;
		} else {
			ra->size = max_pages;
			ra->async_size = max_pages >> 1;
		}
	}

	ractl->_index = ra->start;
	page_cache_ra_order(ractl, ra, order);
}

void page_cache_sync_ra(struct readahead_control *ractl,
		unsigned long req_count)
{
	bool do_forced_ra = ractl->file && (ractl->file->f_mode & FMODE_RANDOM);

	/*
	 * Even if readahead is disabled, issue this request as readahead
	 * as we'll need it to satisfy the requested range.  The forced
	 * readahead will do the right thing and limit the read to just the
	 * requested range, which we'll set to 1 page for this case.
	 */
	if (!ractl->ra->ra_pages || blk_cgroup_congested()) {
		if (!ractl->file)
			return;
		req_count = 1;
		do_forced_ra = true;
	}

	/* be dumb */
	if (do_forced_ra) {
		force_page_cache_ra(ractl, req_count);
		return;
	}

	ondemand_readahead(ractl, NULL, req_count);
}
EXPORT_SYMBOL_GPL(page_cache_sync_ra);
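
/*
 * Callers normally reach page_cache_sync_ra() through the
 * page_cache_sync_readahead() wrapper in linux/pagemap.h, roughly
 * (sketch of a typical read-miss path):
 *
 *	DEFINE_READAHEAD(ractl, file, ra, mapping, index);
 *	page_cache_sync_ra(&ractl, last_index - index);
 */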

void page_cache_async_ra(struct readahead_control *ractl,
		struct folio *folio, unsigned long req_count)
{
	/* no readahead */
	if (!ractl->ra->ra_pages)
		return;

	/*
	 * Same bit is used for PG_readahead and PG_reclaim.
	 */
	if (folio_test_writeback(folio))
		return;

	folio_clear_readahead(folio);

	if (blk_cgroup_congested())
		return;

	ondemand_readahead(ractl, folio, req_count);
}
EXPORT_SYMBOL_GPL(page_cache_async_ra);

ssize_t ksys_readahead(int fd, loff_t offset, size_t count)
{
	ssize_t ret;
	struct fd f;

	ret = -EBADF;
	f = fdget(fd);
	if (!f.file || !(f.file->f_mode & FMODE_READ))
		goto out;

	/*
	 * The readahead() syscall is intended to run only on files
	 * that can execute readahead.  If readahead is not possible
	 * on this file, then we must return -EINVAL.
	 */
	ret = -EINVAL;
	if (!f.file->f_mapping || !f.file->f_mapping->a_ops ||
	    (!S_ISREG(file_inode(f.file)->i_mode) &&
	    !S_ISBLK(file_inode(f.file)->i_mode)))
		goto out;

	ret = vfs_fadvise(f.file, offset, count, POSIX_FADV_WILLNEED);
out:
	fdput(f);
	return ret;
}

SYSCALL_DEFINE3(readahead, int, fd, loff_t, offset, size_t, count)
{
	return ksys_readahead(fd, offset, count);
}

#if defined(CONFIG_COMPAT) && defined(__ARCH_WANT_COMPAT_READAHEAD)
COMPAT_SYSCALL_DEFINE4(readahead, int, fd, compat_arg_u64_dual(offset), size_t, count)
{
	return ksys_readahead(fd, compat_arg_u64_glue(offset), count);
}
#endif
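
/*
 * Userspace example of the syscall above, prefetching the first 2MB of a
 * file (sketch; error handling omitted and "data.bin" is an arbitrary
 * file name):
 *
 *	int fd = open("data.bin", O_RDONLY);
 *
 *	readahead(fd, 0, 2 * 1024 * 1024);
 *
 * The request is satisfied through vfs_fadvise(POSIX_FADV_WILLNEED) as
 * above, so it populates the page cache without copying data to userspace.
 */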

/**
 * readahead_expand - Expand a readahead request
 * @ractl: The request to be expanded
 * @new_start: The revised start
 * @new_len: The revised size of the request
 *
 * Attempt to expand a readahead request outwards from the current size to the
 * specified size by inserting locked pages before and after the current window
 * to increase the size to the new window.  This may involve the insertion of
 * THPs, in which case the window may get expanded even beyond what was
 * requested.
 *
 * The algorithm will stop if it encounters a conflicting page already in the
 * pagecache and leave a smaller expansion than requested.
 *
 * The caller must check for this by examining the revised @ractl object for a
 * different expansion than was requested.
 */
void readahead_expand(struct readahead_control *ractl,
		      loff_t new_start, size_t new_len)
{
	struct address_space *mapping = ractl->mapping;
	struct file_ra_state *ra = ractl->ra;
	pgoff_t new_index, new_nr_pages;
	gfp_t gfp_mask = readahead_gfp_mask(mapping);

	new_index = new_start / PAGE_SIZE;

	/* Expand the leading edge downwards */
	while (ractl->_index > new_index) {
		unsigned long index = ractl->_index - 1;
		struct folio *folio = xa_load(&mapping->i_pages, index);

		if (folio && !xa_is_value(folio))
			return; /* Folio apparently present */

		folio = filemap_alloc_folio(gfp_mask, 0);
		if (!folio)
			return;
		if (filemap_add_folio(mapping, folio, index, gfp_mask) < 0) {
			folio_put(folio);
			return;
		}
		if (unlikely(folio_test_workingset(folio)) &&
				!ractl->_workingset) {
			ractl->_workingset = true;
			psi_memstall_enter(&ractl->_pflags);
		}
		ractl->_nr_pages++;
		ractl->_index = folio->index;
	}

	new_len += new_start - readahead_pos(ractl);
	new_nr_pages = DIV_ROUND_UP(new_len, PAGE_SIZE);

	/* Expand the trailing edge upwards */
	while (ractl->_nr_pages < new_nr_pages) {
		unsigned long index = ractl->_index + ractl->_nr_pages;
		struct folio *folio = xa_load(&mapping->i_pages, index);

		if (folio && !xa_is_value(folio))
			return; /* Folio apparently present */

		folio = filemap_alloc_folio(gfp_mask, 0);
		if (!folio)
			return;
		if (filemap_add_folio(mapping, folio, index, gfp_mask) < 0) {
			folio_put(folio);
			return;
		}
		if (unlikely(folio_test_workingset(folio)) &&
				!ractl->_workingset) {
			ractl->_workingset = true;
			psi_memstall_enter(&ractl->_pflags);
		}
		ractl->_nr_pages++;
		if (ra) {
			ra->size++;
			ra->async_size++;
		}
	}
}
EXPORT_SYMBOL(readahead_expand);
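
/*
 * Example readahead_expand() use (sketch; a filesystem whose backing store
 * works in 256k granules might round the window out before issuing I/O):
 *
 *	loff_t start = round_down(readahead_pos(ractl), SZ_256K);
 *
 *	readahead_expand(ractl, start, SZ_256K);
 *
 * As documented above, the caller must then re-inspect the ractl (e.g. via
 * readahead_pos() and readahead_length()) to see how much expansion was
 * actually achieved.
 */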