truncate.c @ 664b0bae0b87f69bc9deb098f5e0158b9cf18e04 → truncate.c @ b93b016313b3ba8003c3b8bb71f569af91f19fc7

 /*
  * mm/truncate.c - code for taking down pages from address_spaces
  *
  * Copyright (C) 2002, Linus Torvalds
  *
  * 10Sep2002	Andrew Morton
  *		Initial version.
  */
--- 22 unchanged lines hidden (view full) ---
  * lock.
  */
 static inline void __clear_shadow_entry(struct address_space *mapping,
 				pgoff_t index, void *entry)
 {
 	struct radix_tree_node *node;
 	void **slot;
 
-	if (!__radix_tree_lookup(&mapping->page_tree, index, &node, &slot))
+	if (!__radix_tree_lookup(&mapping->i_pages, index, &node, &slot))
 		return;
 	if (*slot != entry)
 		return;
-	__radix_tree_replace(&mapping->page_tree, node, slot, NULL,
+	__radix_tree_replace(&mapping->i_pages, node, slot, NULL,
 			     workingset_update_node);
 	mapping->nrexceptional--;
 }
 
 static void clear_shadow_entry(struct address_space *mapping, pgoff_t index,
 			       void *entry)
 {
-	spin_lock_irq(&mapping->tree_lock);
+	xa_lock_irq(&mapping->i_pages);
 	__clear_shadow_entry(mapping, index, entry);
-	spin_unlock_irq(&mapping->tree_lock);
+	xa_unlock_irq(&mapping->i_pages);
 }
 
 /*
  * Unconditionally remove exceptional entries. Usually called from truncate
  * path. Note that the pagevec may be altered by this function by removing
  * exceptional entries similar to what pagevec_remove_exceptionals does.
  */
 static void truncate_exceptional_pvec_entries(struct address_space *mapping,
--- 12 unchanged lines hidden (view full) ---
 			break;
 
 	if (j == pagevec_count(pvec))
 		return;
 
 	dax = dax_mapping(mapping);
 	lock = !dax && indices[j] < end;
 	if (lock)
-		spin_lock_irq(&mapping->tree_lock);
+		xa_lock_irq(&mapping->i_pages);
 
 	for (i = j; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
 		pgoff_t index = indices[i];
 
 		if (!radix_tree_exceptional_entry(page)) {
 			pvec->pages[j++] = page;
 			continue;
--- 6 unchanged lines hidden (view full) ---
 			dax_delete_mapping_entry(mapping, index);
 			continue;
 		}
 
 		__clear_shadow_entry(mapping, index, page);
 	}
 
 	if (lock)
-		spin_unlock_irq(&mapping->tree_lock);
+		xa_unlock_irq(&mapping->i_pages);
 	pvec->nr = j;
 }
 
 /*
  * Invalidate exceptional entry if easily possible. This handles exceptional
  * entries for invalidate_inode_pages().
  */
 static int invalidate_exceptional_entry(struct address_space *mapping,
--- 399 unchanged lines hidden (view full) ---
 
 	if (nrpages || nrexceptional) {
 		/*
 		 * As truncation uses a lockless tree lookup, cycle
 		 * the tree lock to make sure any ongoing tree
 		 * modification that does not see AS_EXITING is
 		 * completed before starting the final truncate.
 		 */
-		spin_lock_irq(&mapping->tree_lock);
-		spin_unlock_irq(&mapping->tree_lock);
+		xa_lock_irq(&mapping->i_pages);
+		xa_unlock_irq(&mapping->i_pages);
 
 		truncate_inode_pages(mapping, 0);
 	}
 }
 EXPORT_SYMBOL(truncate_inode_pages_final);
 
 /**
  * invalidate_mapping_pages - Invalidate all the unlocked pages of one inode
--- 91 unchanged lines hidden (view full) ---
 	unsigned long flags;
 
 	if (page->mapping != mapping)
 		return 0;
 
 	if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
 		return 0;
 
-	spin_lock_irqsave(&mapping->tree_lock, flags);
+	xa_lock_irqsave(&mapping->i_pages, flags);
 	if (PageDirty(page))
 		goto failed;
 
 	BUG_ON(page_has_private(page));
 	__delete_from_page_cache(page, NULL);
-	spin_unlock_irqrestore(&mapping->tree_lock, flags);
+	xa_unlock_irqrestore(&mapping->i_pages, flags);
 
 	if (mapping->a_ops->freepage)
 		mapping->a_ops->freepage(page);
 
 	put_page(page);	/* pagecache ref */
 	return 1;
 failed:
-	spin_unlock_irqrestore(&mapping->tree_lock, flags);
+	xa_unlock_irqrestore(&mapping->i_pages, flags);
 	return 0;
 }
 
 static int do_launder_page(struct address_space *mapping, struct page *page)
 {
 	if (!PageDirty(page))
 		return 0;
 	if (page->mapping != mapping || mapping->a_ops->launder_page == NULL)
--- 265 unchanged lines hidden ---
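The "lookup, re-check, replace with NULL" sequence in __clear_shadow_entry only clears the slot if it still holds the entry the caller saw earlier, since the lockless lookup path means the slot may have been repopulated in the meantime. A minimal userspace sketch of that compare-and-verify idiom, with a plain array standing in for the page-cache tree (toy_mapping, toy_clear_shadow_entry, and NSLOTS are hypothetical names, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

#define NSLOTS 8

/* Toy stand-in for an address_space: a fixed array of slots where a
 * non-NULL pointer represents a shadow (exceptional) entry. */
struct toy_mapping {
	void *slots[NSLOTS];
	unsigned long nrexceptional;	/* count of shadow entries */
};

/* Clear slots[index] only if it still holds the entry the caller saw
 * earlier -- mirroring __clear_shadow_entry's lookup, re-check, then
 * replace-with-NULL sequence.  Returns 1 if the entry was cleared. */
static int toy_clear_shadow_entry(struct toy_mapping *map,
				  size_t index, void *entry)
{
	if (index >= NSLOTS || map->slots[index] == NULL)
		return 0;		/* lookup failed: nothing there */
	if (map->slots[index] != entry)
		return 0;		/* slot was repopulated meanwhile */
	map->slots[index] = NULL;
	map->nrexceptional--;
	return 1;
}
```

A second call with the same stale entry returns 0, which is exactly why the re-check matters: clearing unconditionally could wipe out a fresh entry installed by another path.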
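truncate_exceptional_pvec_entries compacts the pagevec in place with two cursors: j scans forward to the first exceptional entry (the fast path returns if there is none), then i scans ahead while j becomes the write cursor for entries that are kept, and finally nr is trimmed to j. A sketch of that two-pointer compaction over a toy vector, where odd values play the role of exceptional entries (toy_pvec and the helper names are illustrative, not the kernel's pagevec API):

```c
#include <assert.h>

#define TOY_PVEC_SIZE 15

/* Toy pagevec: odd values stand in for exceptional (shadow/DAX) entries,
 * even values for ordinary pages. */
struct toy_pvec {
	unsigned int nr;
	unsigned long entries[TOY_PVEC_SIZE];
};

static int toy_is_exceptional(unsigned long e)
{
	return e & 1;	/* stands in for radix_tree_exceptional_entry() */
}

/* In-place compaction as in truncate_exceptional_pvec_entries(): j is the
 * write cursor for entries we keep, i scans ahead; exceptional entries are
 * handled (here: simply dropped) instead of being copied down. */
static void toy_drop_exceptional(struct toy_pvec *pvec)
{
	unsigned int i, j;

	for (j = 0; j < pvec->nr; j++)
		if (toy_is_exceptional(pvec->entries[j]))
			break;		/* fast path: nothing before j moves */
	for (i = j; i < pvec->nr; i++) {
		if (!toy_is_exceptional(pvec->entries[i]))
			pvec->entries[j++] = pvec->entries[i];
	}
	pvec->nr = j;	/* trim to the entries actually kept */
}
```

The relative order of the kept entries is preserved and no second buffer is needed, which is the point of the idiom in the kernel code.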
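The back-to-back lock/unlock in truncate_inode_pages_final takes the lock purely as a barrier: acquiring it cannot succeed until every critical section that was already inside has finished, so any in-flight modification that missed AS_EXITING is known to be complete afterwards. A hedged userspace sketch of the same lock-cycling idiom using a POSIX mutex (the thread choreography and names here are illustrative, not the kernel mechanism itself):

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int in_critical;	/* set once the updater holds tree_lock */
static int update_done;		/* guarded by tree_lock */

/* Models an in-flight tree modification that raced with AS_EXITING. */
static void *updater(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&tree_lock);
	atomic_store(&in_critical, 1);
	usleep(10000);			/* linger inside the critical section */
	update_done = 1;		/* publish result before unlocking */
	pthread_mutex_unlock(&tree_lock);
	return NULL;
}

/* Cycle the lock as truncate_inode_pages_final() does: the lock/unlock
 * pair cannot complete until the updater's critical section has finished,
 * so update_done is guaranteed visible afterwards. */
static int lock_cycle_barrier_demo(void)
{
	pthread_t t;
	int seen;

	pthread_create(&t, NULL, updater, NULL);
	while (!atomic_load(&in_critical))
		;			/* wait until updater is inside the lock */
	pthread_mutex_lock(&tree_lock);	/* blocks until updater unlocks */
	pthread_mutex_unlock(&tree_lock);
	seen = update_done;		/* safe: critical section is over */
	pthread_join(t, NULL);
	return seen;
}
```

Without the cycle, reading update_done right after the spin-wait could observe 0; with it, the read is ordered after the updater's unlock. Build with -lpthread.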