Lines matching full:folios in mm/vmscan.c
(each entry: the source line number, the matching line, and, where shown, the enclosing function as "in <function>()")

100 	/* Can active folios be deactivated as part of reclaim? */
110 /* Can mapped folios be reclaimed? */
113 /* Can folios be swapped as part of reclaim? */
147 /* The file folios on the current node are dangerously low */
159 /* The highest zone to isolate folios for reclaim from */
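
    The five matches above are the comments on fields of struct scan_control,
    the per-reclaim-pass control block at the top of mm/vmscan.c. A condensed
    sketch of just those fields (paraphrased; the full struct carries many
    more members and its layout varies by kernel version):

        struct scan_control {
                /* The highest zone to isolate folios for reclaim from */
                s8 reclaim_idx;

                /* Can mapped folios be reclaimed? */
                unsigned int may_unmap:1;

                /* Can folios be swapped as part of reclaim? */
                unsigned int may_swap:1;

                /* Can active folios be deactivated as part of reclaim? */
                unsigned int may_deactivate:2;

                /* The file folios on the current node are dangerously low */
                unsigned int file_is_tiny:1;

                /* ... */
        };

    may_deactivate is two bits because the anon and file LRUs can be enabled
    for deactivation independently (DEACTIVATE_ANON and DEACTIVATE_FILE).
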
388 * This misses isolated folios which are not accounted for to save counters.
390 * not expected that isolated folios will be a dominating factor.
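
    These two fragments are one comment above zone_reclaimable_pages(), which
    estimates reclaimable memory from per-zone counter snapshots and
    deliberately ignores folios currently isolated off the LRU. A sketch of
    the function as it appears in recent kernels (paraphrased):

        static unsigned long zone_reclaimable_pages(struct zone *zone)
        {
                unsigned long nr;

                /* file folios are reclaimable unconditionally */
                nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
                     zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);

                /* anon folios count only if they can be swapped */
                if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL))
                        nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
                              zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);

                return nr;
        }
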
532 * If there are a lot of dirty/writeback folios then do not in skip_throttle_noprogress()
533 * throttle as throttling will occur when the folios cycle in skip_throttle_noprogress()
571 * writeback to a slow device to excessive referenced folios at the tail in reclaim_throttle()
617 * Account for folios written if tasks are throttled waiting on dirty
618 * folios to clean. If enough folios have been cleaned since throttling
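
    The last two fragments above introduce __acct_reclaim_writeback(): when
    tasks are throttled on VMSCAN_THROTTLE_WRITEBACK, each folio completing
    writeback is accounted, and the sleepers are woken early once enough
    folios have been cleaned. A sketch of the wakeup condition (paraphrased;
    field and counter names as in recent kernels, details may differ):

        void __acct_reclaim_writeback(pg_data_t *pgdat, struct folio *folio,
                                      int nr_throttled)
        {
                unsigned long nr_written;

                node_stat_add_folio(folio, NR_THROTTLED_WRITTEN);
                nr_written = node_page_state(pgdat, NR_THROTTLED_WRITTEN) -
                             READ_ONCE(pgdat->nr_reclaim_start);

                /* a batch of folios cleaned per throttled task is enough */
                if (nr_written > SWAP_CLUSTER_MAX * nr_throttled)
                        wake_up(&pgdat->reclaim_wait[VMSCAN_THROTTLE_WRITEBACK]);
        }
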
664 * the split out folios get added back to folio_list. in writeout()
694 * We no longer attempt to writeback filesystem folios here, other in pageout()
711 * Some data journaling orphaned folios can have in pageout()
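
    The pageout() fragments cover two policies: reclaim no longer submits
    writeback for filesystem folios itself (that is left to kswapd and the
    flusher threads), and a dirty folio whose ->mapping has been torn away by
    data journaling can still be freed if its buffers turn out to be clean. A
    sketch of the orphan branch (paraphrased):

        if (!mapping) {
                /*
                 * Some data journaling orphaned folios can have
                 * folio->mapping == NULL while being dirty with clean buffers.
                 */
                if (folio_test_private(folio)) {
                        if (try_to_free_buffers(folio)) {
                                folio_clear_dirty(folio);
                                pr_info("%s: orphaned folio\n", __func__);
                                return PAGE_CLEAN;
                        }
                }
                return PAGE_KEEP;
        }
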
805 * only page cache folios found in these are zero pages in __remove_mapping()
941 * All mapped folios start out with page table in folio_check_references()
950 * Note: the mark is set for activated folios as well in folio_check_references()
951 * so that recently deactivated but used folios are in folio_check_references()
960 * Activate file-backed executable folios after first usage. in folio_check_references()
968 /* Reclaim if clean, defer dirty folios to writeback */ in folio_check_references()
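
    The folio_check_references() matches map directly onto its decision tree:
    keep and rescan recently referenced folios, activate repeatedly referenced
    or executable file-backed folios, and otherwise reclaim, deferring dirty
    file folios to normal writeback. A condensed sketch (paraphrased; the
    MADV_FREE and swapbacked special cases are omitted):

        referenced_ptes = folio_referenced(folio, 1, sc->target_mem_cgroup,
                                           &vm_flags);
        referenced_folio = folio_test_clear_referenced(folio);

        if (referenced_ptes) {
                /*
                 * All mapped folios start out with page table references
                 * from the instantiating fault, so look twice to see if a
                 * folio is used more than once.
                 *
                 * The mark is set for activated folios as well so that
                 * recently deactivated but used folios are quickly recovered.
                 */
                folio_set_referenced(folio);

                if (referenced_folio || referenced_ptes > 1)
                        return FOLIOREF_ACTIVATE;

                /* Activate file-backed executable folios after first usage. */
                if ((vm_flags & VM_EXEC) && folio_is_file_lru(folio))
                        return FOLIOREF_ACTIVATE;

                return FOLIOREF_KEEP;
        }

        /* Reclaim if clean, defer dirty folios to writeback */
        if (referenced_folio && folio_is_file_lru(folio))
                return FOLIOREF_RECLAIM_CLEAN;

        return FOLIOREF_RECLAIM;
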
982 * Anonymous folios are not handled by flushers and must be written in folio_check_dirty_writeback()
984 * MADV_FREE anonymous folios are put into inactive file list too. in folio_check_dirty_writeback()
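
    folio_check_dirty_writeback() decides whether reclaim may stall on a
    folio's dirty/writeback state. Anon folios are filtered out first, with an
    extra anon test because MADV_FREE folios live on the file LRU. A sketch
    (paraphrased; the trailing ->is_dirty_writeback() callout for filesystems
    is elided):

        static void folio_check_dirty_writeback(struct folio *folio,
                                                bool *dirty, bool *writeback)
        {
                /*
                 * Anonymous folios are not handled by flushers and must be
                 * written from reclaim context; do not stall reclaim on them.
                 * MADV_FREE anonymous folios are put into inactive file list
                 * too, so a further anon test is needed.
                 */
                if (!folio_is_file_lru(folio) ||
                    (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
                        *dirty = false;
                        *writeback = false;
                        return;
                }

                /* By default assume that the folio flags are accurate */
                *dirty = folio_test_dirty(folio);
                *writeback = folio_test_writeback(folio);
        }
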
1040 * Take folios on @demote_folios and attempt to demote them to another node.
1041 * Folios which are not demoted are left on @demote_folios.
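
    These two lines are from the kerneldoc of demote_folio_list(), which moves
    cold folios to a lower memory tier instead of discarding them. A sketch of
    its core (paraphrased; mtc stands in for the migration_target_control
    setup, which is elided, and helper names follow recent kernels):

        static unsigned int demote_folio_list(struct list_head *demote_folios,
                                              struct pglist_data *pgdat)
        {
                int target_nid = next_demotion_node(pgdat->node_id);
                unsigned int nr_succeeded;

                if (list_empty(demote_folios))
                        return 0;

                if (target_nid == NUMA_NO_NODE)
                        return 0;

                /* Demotion ignores all cpuset and mempolicy settings */
                migrate_pages(demote_folios, alloc_demote_folio, NULL,
                              (unsigned long)&mtc, MIGRATE_ASYNC, MR_DEMOTION,
                              &nr_succeeded);

                return nr_succeeded;
        }

    Folios that migrate_pages() fails to move stay on @demote_folios, which is
    what lets shrink_folio_list() splice them back and retry reclaim (see the
    matches near the end of that function below).
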
1163 * folios if the tail of the LRU is all dirty unqueued folios. in shrink_folio_list()
1173 * Treat this folio as congested if folios are cycling in shrink_folio_list()
1174 * through the LRU so quickly that the folios marked in shrink_folio_list()
1186 * of folios under writeback and this folio has both in shrink_folio_list()
1188 * indicates that folios are being queued for I/O but in shrink_folio_list()
1215 * folios are in writeback and there is nothing else to in shrink_folio_list()
1218 * In cases 1) and 2) we activate the folios to get them out of in shrink_folio_list()
1219 * the way while we continue scanning for clean folios on the in shrink_folio_list()
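
    The shrink_folio_list() fragments above belong to the long comment that
    enumerates three situations for a folio found under writeback: (1) kswapd
    sees folios cycling through the LRU faster than the storage completes I/O,
    (2) writeback is in flight but the folio is not yet tagged for immediate
    reclaim, and (3) a legacy memcg case that must wait synchronously. A
    condensed sketch of the dispatch (paraphrased):

        if (folio_test_writeback(folio)) {
                /* Case 1: immediate-reclaim folio cycling under kswapd */
                if (current_is_kswapd() &&
                    folio_test_reclaim(folio) &&
                    test_bit(PGDAT_WRITEBACK, &pgdat->flags)) {
                        stat->nr_immediate += nr_pages;
                        goto activate_locked;
                /* Case 2: mark for reclaim when writeback completes */
                } else if (writeback_throttling_sane(sc) ||
                           !folio_test_reclaim(folio) ||
                           !may_enter_fs(folio, sc->gfp_mask)) {
                        folio_set_reclaim(folio);
                        stat->nr_writeback += nr_pages;
                        goto activate_locked;
                /* Case 3: legacy memcg, wait for writeback synchronously */
                } else {
                        folio_unlock(folio);
                        folio_wait_writeback(folio);
                        /* then reprocess the folio from the top */
                        list_add_tail(&folio->lru, folio_list);
                        continue;
                }
        }
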
1312 * Split partially mapped folios right away. in shrink_folio_list()
1412 * Only kswapd can writeback filesystem folios in shrink_folio_list()
1416 * write folios when we've encountered many in shrink_folio_list()
1417 * dirty folios, and when we've already scanned in shrink_folio_list()
1418 * the rest of the LRU for clean folios and see in shrink_folio_list()
1419 * the same dirty folios again (with the reclaim in shrink_folio_list()
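
    The five fragments above are one comment: only kswapd may write back
    filesystem folios (direct reclaim could overflow the stack through the
    filesystem), and even kswapd does so only after dirty folios are seen
    again with the reclaim flag set. Everyone else marks the folio for
    immediate reclaim once writeback completes. A sketch of the check
    (paraphrased):

        if (folio_test_dirty(folio) &&
            folio_is_file_lru(folio) &&
            (!current_is_kswapd() ||
             !folio_test_reclaim(folio) ||
             !test_bit(PGDAT_DIRTY, &pgdat->flags))) {
                /*
                 * Immediately reclaim when written back: we already have
                 * the folio isolated and know it is dirty.
                 */
                node_stat_mod_folio(folio, NR_VMSCAN_IMMEDIATE, nr_pages);
                folio_set_reclaim(folio);

                goto activate_locked;
        }
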
1509 * Rarely, folios can have buffers and no ->mapping. in shrink_folio_list()
1510 * These are the folios which were not successfully in shrink_folio_list()
1602 /* Migrate folios selected for demotion */ in shrink_folio_list()
1606 /* Folios that could not be demoted are still in @demote_folios */ in shrink_folio_list()
1608 /* Folios which weren't demoted go back on @folio_list */ in shrink_folio_list()
1612 * goto retry to reclaim the undemoted folios in folio_list if in shrink_folio_list()
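
    The demotion pass at the bottom of shrink_folio_list() ties the pieces
    together: demote what it can, then splice the leftovers back and retry
    ordinary reclaim on them. A condensed sketch (paraphrased; the
    proactive-reclaim exception is as in recent kernels):

        /* Migrate folios selected for demotion */
        nr_demoted = demote_folio_list(&demote_folios, pgdat);
        nr_reclaimed += nr_demoted;

        /* Folios which weren't demoted go back on @folio_list */
        if (!list_empty(&demote_folios)) {
                list_splice_init(&demote_folios, folio_list);

                /*
                 * Retry reclaim of the undemoted folios, except under
                 * proactive reclaim, which is not real memory pressure.
                 */
                if (!sc->proactive) {
                        do_demote_pass = false;
                        goto retry;
                }
        }
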
1771 * Do not count skipped folios because that makes the function in isolate_lru_folios()
1772 * return with no isolated folios if the LRU mostly contains in isolate_lru_folios()
1773 * ineligible folios. This causes the VM to not reclaim any in isolate_lru_folios()
1774 * folios, triggering a premature OOM. in isolate_lru_folios()
1806 * Splice any skipped folios to the start of the LRU list. Note that in isolate_lru_folios()
1809 * scanning would soon rescan the same folios to skip and waste lots in isolate_lru_folios()
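
    Both isolate_lru_folios() comments above concern folios skipped because
    they sit in zones above sc->reclaim_idx: they must not count as scanned
    (or an LRU full of ineligible folios would fake an OOM), and they are
    spliced back to the head rather than the tail so the next pass does not
    immediately rescan and re-skip them. A sketch of the splice-back
    (paraphrased):

        if (!list_empty(&folios_skipped)) {
                int zid;

                /* back to the head, not the tail: see the comment above */
                list_splice(&folios_skipped, src);

                for (zid = 0; zid < MAX_NR_ZONES; zid++) {
                        if (!nr_skipped[zid])
                                continue;

                        __count_zid_vm_events(PGSCAN_SKIP, zid,
                                              nr_skipped[zid]);
                        skipped += nr_skipped[zid];
                }
        }
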
1918 * move_folios_to_lru() moves folios from private @list to appropriate LRU list.
2069 * If dirty folios are scanned that are not queued for IO, it in shrink_inactive_list()
2071 * happen when memory pressure pushes dirty folios to the end of in shrink_inactive_list()
2074 * dirty folios grows not through writes but through memory in shrink_inactive_list()
2083 * the kernel flusher here and later waiting on folios in shrink_inactive_list()
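
    The shrink_inactive_list() fragments describe the flusher nudge: if every
    dirty folio taken off the tail was not yet queued for I/O, the flusher
    threads are behind, so wake them; on cgroup v1 the task then throttles by
    waiting on writeback. A sketch (paraphrased):

        /* all the taken dirty folios were unqueued: flushers are asleep */
        if (stat.nr_unqueued_dirty == nr_taken) {
                wakeup_flusher_threads(WB_REASON_VMSCAN);

                /*
                 * For cgroup v1, dirty throttling is achieved by waking up
                 * the kernel flusher here and later waiting on folios which
                 * are in writeback to finish.
                 */
                if (!writeback_throttling_sane(sc))
                        reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
        }
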
2109 * shrink_active_list() moves folios from the active LRU to the inactive LRU.
2114 * If the folios are mostly unmapped, the processing is fast and it is
2116 * the folios are mapped, the processing is slow (folio_referenced()), so
2118 * this, so instead we remove the folios from the LRU while processing them.
2119 * It is safe to rely on the active flag against the non-LRU folios in here
2133 LIST_HEAD(l_hold); /* The folios which were snipped off */ in shrink_active_list()
2180 * Identify referenced, file-backed active folios and in shrink_active_list()
2183 * memory under moderate memory pressure. Anon folios in shrink_active_list()
2185 * IO, plus JVM can create lots of anon VM_EXEC folios, in shrink_active_list()
2201 * Move folios back to the lru list. in shrink_active_list()
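
    shrink_active_list() processes folios off-LRU on a private list (l_hold)
    precisely because folio_referenced() is too slow to run under the LRU
    lock. The VM_EXEC exception the comment describes looks roughly like this
    (paraphrased; statistics and batching trimmed):

        if (folio_referenced(folio, 0, sc->target_mem_cgroup, &vm_flags)) {
                /*
                 * Give referenced, file-backed executable folios one more
                 * trip around the active list; anon folios are skipped
                 * since use-once streaming IO won't evict them and JVMs
                 * create lots of anon VM_EXEC folios.
                 */
                if ((vm_flags & VM_EXEC) && folio_is_file_lru(folio)) {
                        nr_rotated += folio_nr_pages(folio);
                        list_add(&folio->lru, &l_active);
                        continue;
                }
        }

        folio_clear_active(folio);      /* we are de-activating */
        list_add(&folio->lru, &l_inactive);
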
2303 * The inactive_ratio is the target ratio of ACTIVE to INACTIVE folios
2305 * of 3 means 3:1 or 25% of the folios are kept on the inactive list.
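
    The ratio is not a constant: inactive_is_low() scales it with the square
    root of the LRU size, so large workloads keep proportionally smaller
    inactive lists. A sketch with a worked example (paraphrased):

        /* total size of this LRU pair in gigabytes */
        gb = (inactive + active) >> (30 - PAGE_SHIFT);
        if (gb)
                inactive_ratio = int_sqrt(10 * gb);
        else
                inactive_ratio = 1;

        /* e.g. a 10GB LRU gives int_sqrt(100) = 10, i.e. ~9% inactive */
        return inactive * inactive_ratio < active;
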
2552 * nr[0] = anon inactive folios to scan; nr[1] = anon active folios to scan
2553 * nr[2] = file inactive folios to scan; nr[3] = file active folios to scan
2566 /* If we have no swap space, do not bother scanning anon folios. */ in get_scan_count()
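
    The nr[0..3] layout in get_scan_count() follows enum lru_list from
    include/linux/mmzone.h, where the active flag is bit 0 and the file flag
    is bit 1 of the list index:

        enum lru_list {
                LRU_INACTIVE_ANON = LRU_BASE,                       /* 0 */
                LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,            /* 1 */
                LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,            /* 2 */
                LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE, /* 3 */
                LRU_UNEVICTABLE,
                NR_LRU_LISTS
        };
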
3891 struct list_head *head = &lrugen->folios[old_gen][type][zone]; in inc_min_seq()
3904 list_move_tail(&folio->lru, &lrugen->folios[new_gen][type][zone]); in inc_min_seq()
3942 if (!list_empty(&lrugen->folios[gen][type][zone])) in try_to_inc_min_seq()
4490 list_move(&folio->lru, &lrugen->folios[gen][type][zone]); in sort_folio()
4497 list_move(&folio->lru, &lrugen->folios[gen][type][zone]); in sort_folio()
4512 list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]); in sort_folio()
4527 list_move(&folio->lru, &lrugen->folios[gen][type][zone]); in sort_folio()
4593 struct list_head *head = &lrugen->folios[gen][type][zone]; in scan_folios()
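
    Every lrugen->folios[gen][type][zone] access above indexes the same 3-D
    array of list heads in struct lru_gen_folio (include/linux/mmzone.h): one
    list per generation, per type (anon/file), per zone. Trimmed to the fields
    these matches touch (comments paraphrased):

        struct lru_gen_folio {
                /* the aging increments the youngest generation number */
                unsigned long max_seq;
                /* the eviction increments the oldest generation numbers */
                unsigned long min_seq[ANON_AND_FILE];
                /* the multi-gen LRU lists, lazily sorted on eviction */
                struct list_head folios[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
                /* the multi-gen LRU sizes, eventually consistent */
                long nr_pages[MAX_NR_GENS][ANON_AND_FILE][MAX_NR_ZONES];
                /* ... */
        };
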
4644 * There might not be eligible folios due to reclaim_idx. Check the in scan_folios()
4760 /* retry folios that may have missed folio_rotate_reclaimable() */ in evict_folios()
4767 /* don't add rejected folios to the oldest generation */ in evict_folios()
4859 /* stop scanning this lruvec as it's low on cold folios */ in get_nr_to_scan()
5072 * Unmapped clean folios are already prioritized. Scanning for more of in lru_gen_shrink_node()
5125 if (!list_empty(&lrugen->folios[gen][type][zone])) in state_is_valid()
5170 struct list_head *head = &lruvec->lrugen.folios[gen][type][zone]; in drain_evictable()
5719 INIT_LIST_HEAD(&lrugen->folios[gen][type][zone]); in lru_gen_init_lruvec()
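
    Initialization simply walks every (generation, type, zone) slot. A sketch
    of lru_gen_init_lruvec() (paraphrased; timestamp and min_seq setup
    elided):

        void lru_gen_init_lruvec(struct lruvec *lruvec)
        {
                int gen, type, zone;
                struct lru_gen_folio *lrugen = &lruvec->lrugen;

                lrugen->max_seq = MIN_NR_GENS + 1;
                /* ... generation timestamps and min_seq elided ... */

                for_each_gen_type_zone(gen, type, zone)
                        INIT_LIST_HEAD(&lrugen->folios[gen][type][zone]);
        }
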
7850 * check_move_unevictable_folios - Move evictable folios to appropriate zone
7852 * @fbatch: Batch of lru folios to check.
7854 * Checks folios for evictability, if an evictable folio is in the unevictable
7856 * should be only used for lru folios.
7866 struct folio *folio = fbatch->folios[i]; in check_move_unevictable_folios()
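
    The final match is the rescue loop of check_move_unevictable_folios():
    every folio in the batch that has become evictable again is pulled off
    the unevictable list and returned to the appropriate LRU. A condensed
    sketch (paraphrased; large-folio accounting and statistics trimmed):

        void check_move_unevictable_folios(struct folio_batch *fbatch)
        {
                struct lruvec *lruvec = NULL;
                int i;

                for (i = 0; i < fbatch->nr; i++) {
                        struct folio *folio = fbatch->folios[i];

                        if (!folio_test_lru(folio) ||
                            !folio_test_unevictable(folio))
                                continue;

                        lruvec = folio_lruvec_relock_irq(folio, lruvec);
                        if (folio_evictable(folio)) {
                                lruvec_del_folio(lruvec, folio);
                                folio_clear_unevictable(folio);
                                lruvec_add_folio(lruvec, folio);
                        }
                }

                if (lruvec)
                        unlock_page_lruvec_irq(lruvec);
        }
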