Lines Matching defs:pages
85 * This small page list array covers either 8 pages or 64kB worth of pages -
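A minimal sketch of that bound, assuming btop() from <sys/param.h> and MIN from
<sys/sysmacros.h>; the constant name here is hypothetical, not the one seg_vn.c uses:

    /* Whichever is smaller: 8 pages or 64kB worth of pages. */
    #define SMALL_PLIST_PAGES   MIN(8, btop(64 * 1024))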
343 * forcing COW faults from vnode to amp and mapping amp pages instead of vnode
344 * pages. Replication amp is assigned to a segment when it gets its first
357 * amp pages are used instead of vnode pages as long as the segment has a very
597 * on swapfs pages.
615 * If segment may need private pages, reserve them now.
679 * until we fault the pages in.
908 * share the underlying pages if nothing
1137 * If either segment has private pages, create a new merged anon
1298 * Segment has private pages, can data structures
1418 * Segment has private pages, can data structures
1494 * Duplicate all the pages in the segment. This may break COW sharing for a
1518 * pages. They will be relocated into larger
1519 * pages at fault time.
1679 * softlocked pages do not end up as copy on write
1680 * pages. This would cause problems where one
1688 * sharing on pages that could possibly be
1691 * In addition, if any pages have been marked that they
1702 * because some pages are still stuck in the
1706 * pages]. Note, we have the writer's lock so
1779 * callback function to invoke free_vp_pages() for only those pages actually
1810 * those pages actually processed by the HAT
1861 pgcnt_t opages; /* old segment size in pages */
1862 pgcnt_t npages; /* new segment size in pages */
1863 pgcnt_t dpages; /* pages being deleted (unmapped) */
1878 * Fail the unmap if pages are SOFTLOCKed through this mapping.
1887 * means locked pages are still in use.
2043 * freeing its pages purge all entries from
2155 * freeing its pages purge all entries from
2309 * freeing its pages purge all entries from
2431 * Be sure to unlock pages. XXX Why do things get freed instead
2470 * freeing its pages purge all entries from
2654 * Release all the pages in the NULL-terminated ppp list
2677 * anonymous pages and loading up the translations.
2707 uint_t vpprot, /* access allowed to object pages */
2815 * Handle pages that have been marked for migration
2836 * segment), its pages must be locked. If this
2974 * Handle pages that have been marked for migration
3100 * Handle pages that have been marked for migration
3135 * Relocate a bunch of smaller targ pages into one large repl page. All targ
3136 * pages must be complete pages smaller than replacement pages.
3138 * complete large pages locked SHARED.
3214 * Check if all pages in ppa array are complete pages smaller than szc pages and
3219 * If all pages are properly aligned, attempt to upgrade their locks
3223 * Return 1 if all pages are aligned and locked exclusively.
3225 * If all pages in ppa array happen to be physically contiguous to make one
3227 * size to szc and set *pszc to szc. Return 1 with pages locked shared.
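A minimal sketch of the alignment/contiguity test those lines describe, built on the
standard page_get_pagecnt(), page_pptonum(), and IS_P2ALIGNED() interfaces; the real
segvn_full_szcpages() additionally handles lock upgrades and p_szc races, so this
helper is an assumption, not the actual routine:

    /*
     * Sketch only: return 1 if ppa[] holds one physically contiguous,
     * properly aligned run of pages big enough for size code szc.
     */
    static int
    ppa_contig_aligned(page_t **ppa, uint_t szc)
    {
            pgcnt_t pages = page_get_pagecnt(szc);
            pfn_t pfn = page_pptonum(ppa[0]);
            pgcnt_t i;

            if (!IS_P2ALIGNED(pfn, pages))
                    return (0);
            for (i = 1; i < pages; i++) {
                    if (page_pptonum(ppa[i]) != pfn + i)
                            return (0);
            }
            return (1);
    }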
3273 * p_szc changed means we don't have all pages
3318 * system reclaimed pages from cachelist targ pages will be physically
3320 * pages without any relocations.
3322 * hat_pageunload() the target pages first.
3345 * Create physically contiguous pages for [vp, off] - [vp, off +
3351 * If physically contiguous pages already exist for this range return 1 without
3364 pgcnt_t pages = btop(pgsz);
3393 * downsize will be set to 1 only if we fail to lock pages. This will
3397 * where we can't get large pages anyway.
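A hedged sketch of the "already contiguous" early return mentioned above;
vp_pages_already_large() is a hypothetical helper built on the standard page_lookup()
interface, whereas the real code verifies every constituent page:

    static int
    vp_pages_already_large(vnode_t *vp, u_offset_t off, uint_t szc)
    {
            page_t *pp = page_lookup(vp, off, SE_SHARED);
            int ret = 0;

            if (pp != NULL) {
                    /* Large enough and aligned: nothing to fill. */
                    if (pp->p_szc >= szc &&
                        IS_P2ALIGNED(page_pptonum(pp), page_get_pagecnt(szc)))
                            ret = 1;
                    page_unlock(pp);
            }
            return (ret);
    }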
3410 * pages in the range we handle and they are all
3431 IS_P2ALIGNED(pfn, pages)) {
3437 page_create_putback(pages);
3481 * now that pages are locked SE_EXCL. Any file
3482 * truncation will wait until the pages are
3519 ASSERT(pgidx < pages);
3584 ASSERT(pgidx < pages);
3590 * Remove pages from done_pplist; it's not needed anymore.
3615 ASSERT(P2PHASE(pfn, pages) == pgidx);
3636 for (i = 0; i < pages; i++) {
3644 ppa[pages] = NULL;
3652 for (i = 0; i < pages; i++) {
3666 * Do the cleanup. Unlock target pages we didn't relocate. They are
3667 * linked on targ_pplist by root pages. Reassemble unused replacement
3668 * and io pages back to pplist.
3724 * At this point all pages are either on done_pplist or
3778 page_create_putback(pages);
3785 #define SEGVN_RESTORE_SOFTLOCK_VP(type, pages) \
3788 -(pages)); \
3791 #define SEGVN_UPDATE_MODBITS(ppa, pages, rw, prot, vpprot) \
3794 for (i = 0; i < (pages); i++) { \
3801 for (i = 0; i < (pages); i++) { \
3834 pgcnt_t pages = btop(pgsz);
3835 pgcnt_t maxpages = pages;
3836 size_t ppasize = (pages + 1) * sizeof (page_t *);
3870 ASSERT(enable_mbit_wa == 0); /* no mbit simulations with large pages */
3921 for (; a < lpgeaddr; a += pgsz, off += pgsz, aindx += pages) {
3936 pages = btop(pgsz);
3964 pages = maxpages;
3991 pages);
4045 for (i = 0; i < pages; i++) {
4070 page_create_putback(pages);
4072 SEGVN_RESTORE_SOFTLOCK_VP(type, pages);
4119 for (i = 0; i < pages; i++) {
4126 page_create_putback(pages);
4128 SEGVN_RESTORE_SOFTLOCK_VP(type, pages);
4175 SEGVN_RESTORE_SOFTLOCK_VP(type, pages);
4183 * swapfs pages.
4192 for (i = 0; i < pages; i++) {
4209 * all its constituent pages are locked
4231 IS_P2ALIGNED(pfn, pages)) {
4235 for (i = 0; i < pages; i++) {
4247 * All pages are of the szc we need and they are
4256 page_create_putback(pages);
4259 page_migrate(seg, a, ppa, pages);
4261 SEGVN_UPDATE_MODBITS(ppa, pages, rw,
4268 for (i = 0; i < pages; i++) {
4300 page_create_putback(pages);
4302 for (i = 0; i < pages; i++) {
4321 !IS_P2ALIGNED(pfn, pages)) ||
4329 * all pages EXCL. Size down.
4338 page_create_putback(pages);
4341 for (i = 0; i < pages; i++) {
4357 page_create_putback(pages);
4359 SEGVN_UPDATE_MODBITS(ppa, pages, rw,
4367 for (i = 0; i < pages; i++) {
4375 for (i = 0; i < pages; i++) {
4388 * segvn_full_szcpages() upgraded the pages' szc.
4391 ASSERT(IS_P2ALIGNED(pfn, pages));
4400 * locked all constituent pages. Call
4425 page_create_putback(pages);
4427 SEGVN_UPDATE_MODBITS(ppa, pages, rw,
4433 for (i = 0; i < pages; i++) {
4453 for (i = 0; i < pages; i++) {
4476 SEGVN_UPDATE_MODBITS(ppa, pages, rw, prot, vpprot);
4480 for (i = 0; i < pages; i++) {
4493 for (i = 0; i < pages; i++) {
4505 vpage += pages;
4545 pages = btop(pgsz);
4629 * it will use smaller pages if that fails.
4643 pgcnt_t pages = btop(pgsz);
4662 ASSERT(enable_mbit_wa == 0); /* no mbit simulations with large pages */
4704 for (; a < lpgeaddr; a += pgsz, aindx += pages) {
4720 pages = btop(pgsz);
4721 ASSERT(IS_P2ALIGNED(aindx, pages));
4727 pages);
4740 -pages);
4758 * Handle pages that have been marked for migration
4761 page_migrate(seg, a, ppa, pages);
4776 for (i = 0; i < pages; i++)
4780 vpage += pages;
4793 * pages with this process has allocated a larger page and we
4794 * need to retry with larger pages. So do a size up
4795 * operation. This relies on the fact that large pages are
4801 * have relocated locked pages.
4831 pages = btop(pgsz);
4898 int fltadvice = 1; /* set to free behind pages for sequential access */
4911 * If we will need some non-anonymous pages
4912 * Call VOP_GETPAGE over the range of non-anonymous pages
4916 * to load up translations and handle anonymous pages
4918 * Load up translation to any additional pages in page list not
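The shape of that VOP_GETPAGE step, as a hedged fragment; vp_off, vp_len, plp, plsz,
arw, and cred are assumed from the surrounding fault code, and error/anon handling is
omitted:

    uint_t vpprot;          /* access allowed to object pages */
    int err;

    /* Ask the file system for the non-anonymous pages. */
    err = VOP_GETPAGE(vp, (offset_t)vp_off, vp_len, &vpprot,
        plp, plsz, seg, addr, arw, cred, NULL);
    if (err == 0 && plp[0] != NULL) {
            /* ... load translations for the returned pages ... */
    }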
5228 * are faulting on, free behind all pages in the segment and put
5283 * Skip pages that are free or have an
5352 * from being changed. It's ok to miss pages,
5388 * of pages if
5396 * Ask VOP_GETPAGE to return adjacent pages
5406 * Need to get some non-anonymous pages.
5412 * pages in the requested range. We have to
5443 * pages in addition to (possibly) having some adjacent pages.
5459 * more threads from creating different private pages for the
5508 /* Didn't get pages from the underlying fs so we're done */
5513 * Now handle any other pages in the list returned.
5562 * creating private pages (i.e., allocating anon slots)
5564 * to additional pages returned by the underlying
5582 * Skip mapping read ahead pages marked
5610 * This routine is used to start I/O on pages asynchronously. XXX it will
5611 * only create PAGESIZE pages. At fault time they will be relocated into
5612 * larger pages.
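A hedged sketch of starting that asynchronous I/O: a VOP_GETPAGE call with a NULL page
list asks the file system to kick off read-ahead without returning pages (off and cred
are assumed from context):

    /* Start I/O only; no pages are returned or mapped here. */
    (void) VOP_GETPAGE(vp, (offset_t)off, PAGESIZE, NULL, NULL, 0,
        seg, addr, S_OTHER, cred, NULL);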
5702 * means locked pages are still in use.
5782 * array to keep a record of the pages for which we have reserved
5788 * modified any pages, we can release the swap space.
5834 * of pages required.
6042 * and we don't allow write access to any copy-on-write pages
6043 * that might be around or to prevent write access to pages
6157 * means locked pages are still in use.
6247 * means locked pages are still in use.
6358 pgcnt_t pages;
6412 pages = btop(pgsz);
6420 for (; a < ea; a += pgsz, an_idx += pages) {
6430 an_idx, pages) == pages);
6455 vpage = (vpage == NULL) ? NULL : vpage + pages;
6680 * For MAP_NORESERVE, only allocate swap reserve for pages
6988 * Check to see if either of the pages addr or addr + delta
7054 * Swap the pages of seg out to secondary storage, returning the
7058 * VOP_PUTPAGE() for all newly-unmapped pages, to push them out to the
7066 * only succeed in swapping out pages for the last sharer of the
7086 * Find pages unmapped by our caller and force them
7103 * pages as well as anon pages. Is this appropriate here?
7171 * dispose skips large pages so try to demote first.
7214 * XXX: Can we ever encounter modified pages
7239 * XXX - Anonymous pages should not be sync'ed out at all.
7268 * means locked pages are still in use.
7276 * flush all pages from seg cache
7339 * No attributes, no anonymous pages and MS_INVALIDATE flag
7385 * See if any of these pages are locked -- if so, then we
7405 * invalidate large pages.
7409 * pages it is no big deal.
7419 * Avoid writing out to disk ISM's large pages
7421 * of anon slots of such pages.
7440 * Note ISM pages are created large so (vp, off)'s
7464 * Determine if we have data corresponding to pages in the
7489 return (len); /* no anonymous pages created yet */
7623 * Lock down (or unlock) pages mapped by this segment.
7625 * XXX only creates PAGESIZE pages if anon slots are not initialized.
7626 * At fault time they will be relocated into larger pages.
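A minimal sketch of the per-page pinning step behind that lock operation, using
page_pp_lock()/page_pp_unlock(); op and the already-locked page pp are assumed from
context, and the real code also faults pages in and does locked-memory accounting:

    if (op == MC_LOCK) {
            /* Pin the page; 0, 0 = no COW claim, not a kernel pin. */
            if (page_pp_lock(pp, 0, 0) == 0)
                    return (EAGAIN);        /* p_lckcnt would overflow */
    } else {
            page_pp_unlock(pp, 0, 0);
    }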
7762 /* Only count sysV pages once for locked memory */
7799 * Loop over all pages in the range. Process if we're locking and
7996 /* sysV pages should be locked */
8045 * Set advice from user for specified pages
8093 * Large pages are assumed to be turned on only when accesses to the
8120 * means locked pages are still in use.
8193 * If we purged pages on a MAP_NORESERVE mapping, we
8260 * Mark any existing pages in given range for
8354 * boundaries for large pages
8399 * Mark any existing pages in given range for
8568 * There is one kind of inheritance that can be specified for pages:
8656 * and the advice from the segment itself to the individual pages.
8660 * Start by calculating the number of pages we must allocate to
8694 * Dump the pages belonging to this segvn segment.
8764 * Lock/Unlock anon pages over a given range. Return shadow list. This routine
8765 * uses global segment pcache to cache shadow lists (i.e. pp arrays) of pages
8771 * set of pages and lead to long hash chains that decrease pcache lookup
8779 * pages via segvn_fault() and pagelock'd pages via this routine. But pagelock
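The cache-first pattern those fragments describe, as a hedged sketch; seg_plookup()
and seg_pinsert() are the generic pcache entry points, but the argument lists here
are approximate, and segvn_reclaim is the driver's reclaim callback:

    /* Try the pcache first: a hit reuses a cached shadow list. */
    pplist = seg_plookup(seg, amp, addr, len, rw, flags);
    if (pplist != NULL)
            return (0);

    /* Miss: lock the pages, build pplist, then cache it. */
    /* ... */
    (void) seg_pinsert(seg, amp, addr, len, wlen, pplist,
        rw, flags, segvn_reclaim);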
8886 * be freed anyway unless all constituent pages of a large
9064 * pages.
9186 * try to find pages in segment page cache
9255 * it into pcache. For large enough pages it's a big overhead to
9319 * We must never use seg_pcache for COW pages
9423 * purge any cached pages in the I/O page cache
9582 * XXX only creates PAGESIZE pages if anon slots are not initialized.
9583 * At fault time they will be relocated into larger pages.
9652 spgcnt_t pages = btop(len);
9663 pages--;
9664 while (pages-- > 0) {
9722 * established to per vnode mapping per lgroup amp pages instead of to vnode
9723 * pages. There's one amp per vnode text mapping per lgroup. Many processes
9850 * entries to avoid replication of the same file pages more
9894 * We want to pick a replica with pages on main thread's (t_tid = 1,