Lines Matching defs:page
54 * Types of page locking supported by page_lock & friends.
62 * For requesting that page_lock reclaim the page from the free list.
65 P_RECLAIM, /* reclaim page from free list */
66 P_NO_RECLAIM /* DON'T reclaim the page */
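
A minimal sketch of how these reclaim flags are passed, assuming the traditional illumos/Solaris prototype int page_lock(page_t *, se_t, kmutex_t *, reclaim_t):

	if (page_lock(pp, SE_EXCL, (kmutex_t *)NULL, P_RECLAIM)) {
		/* pp is now exclusively locked and, if it was sitting
		 * on the free list, has been reclaimed from it */
		page_unlock(pp);
	}
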
108 * We use ? : instead of #if because <vm/page.h> is included everywhere;
127 * pp may be the root of a large page, and many low order bits will be 0.
129 * possible page sizes.
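
The point being made: an #if requires its controlling symbol to be a preprocessor define at every site that includes <vm/page.h>, whereas ? : only needs a compile-time constant that the compiler folds away. A made-up illustration (EXAMPLE_LARGE is not a real kernel symbol):

	/* "#if EXAMPLE_LARGE ... #endif" fails to preprocess wherever
	 * EXAMPLE_LARGE is not a cpp symbol; the ? : form below folds
	 * to a constant in either configuration. */
	#define	EXAMPLE_HASHSZ	(EXAMPLE_LARGE ? 4096 : 256)
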
150 * Each physical page has a page structure, which is used to maintain
151 * these pages as a cache. A page can be found via a hashed lookup
152 * based on the [vp, offset]. If a page has a [vp, offset] identity,
155 * is on, then the page is also on a doubly linked circular free
157 * are held, then the page is currently being read in (exclusive p_selock)
160 * the page is being brought in from its backing store, then other processes
161 * will wait for the i/o to complete before attaching to the page since it
164 * Each page structure has the locks described below along with
167 * p_selock This is a per-page shared/exclusive lock that is
169 * lock for each page. The "shared" lock is normally
172 * a page (e.g., while reading in pages). The appropriate
174 * to a page structure (e.g., during i/o).
197 * page structure. It is always held while the page
199 * even though a page may be only `shared' locked
201 * change anyway. Normally, the page must be
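
A sketch of that shared/exclusive discipline; page_tryupgrade() is the traditional interface for promoting an already-held shared lock to exclusive in place (an assumption worth checking against the prototypes later in this header):

	if (page_lock(pp, SE_SHARED, NULL, P_NO_RECLAIM)) {
		/* identity fields (p_vnode, p_offset) are stable here */
		if (page_tryupgrade(pp)) {
			/* now SE_EXCL: identity may be changed */
		}
		page_unlock(pp);	/* drops shared or exclusive */
	}
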
224 * The following fields are protected by hat layer lock(s). When a page
239 * The page structure is used to represent and control the system's
241 * page that is not permanently allocated. For example, the pages that
242 * hold the page structures are permanently held by the kernel
243 * and hence do not need page structures to track them. The array
244 * of page structures is allocated early on in the kernel's life and
247 * Each page structure may simultaneously appear on several linked lists.
248 * The lists are: hash list, free or in i/o list, and a vnode's page list.
252 * The hash list is used to quickly find a page when the page's vnode and
253 * offset within the vnode are known. Each page that is hashed is
265 * `p_next' and `p_prev' fields. When a page is involved in some sort
270 * are anchored in architecture dependent ways (to handle page coloring etc.).
274 * `p_vpnext' and `p_vpprev'. The field `p_offset' contains a page's
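
A sketch of the hashed lookup the hash list supports, modeled on the traditional PAGE_HASH_SEARCH loop; page_hash[] and PAGE_HASH_FUNC() are assumed to be the global table and hash macro defined elsewhere in this header:

	page_t *pp;

	for (pp = page_hash[PAGE_HASH_FUNC(vp, off)]; pp != NULL;
	    pp = pp->p_hash) {
		if (pp->p_vnode == vp && pp->p_offset == off)
			break;	/* found the [vp, offset] identity */
	}
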
281 * Again, each of the lists that a page can appear on is protected by a
286 * In addition to the list locks, each page structure contains a
292 * Removing a page structure from one of the lists requires holding
293 * the appropriate list lock and the page's p_selock. A page may be
298 * there are two cases: In the first case, the page structure in question
299 * is known ahead of time (e.g., when the page is to be added or removed
300 * from a list). In the second case, the page structure is not known and
303 * When adding a known page to one of the lists or removing it, first the
304 * page must be exclusively locked (since at least one of its fields
306 * third the page inserted or deleted, and finally the list lock dropped.
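
A hypothetical helper showing that ordering for the known-page case (the unlink of the circular list is simplified; real list manipulation goes through page_sub()-style macros):

	static void
	example_list_del(page_t *pp, kmutex_t *list_lock)
	{
		ASSERT(PAGE_EXCL(pp));		/* 1. page locked SE_EXCL */
		mutex_enter(list_lock);		/* 2. then the list lock */
		pp->p_prev->p_next = pp->p_next; /* 3. unlink the page */
		pp->p_next->p_prev = pp->p_prev;
		mutex_exit(list_lock);		/* 4. drop the list lock */
	}
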
308 * The more interesting case occurs when the particular page structure
310 * page_lookup(), it is not known if a page with the desired (vnode and
312 * acquired, the hash list searched, and if the desired page is found
315 * if some other process was trying to remove the page from the list.
317 * locked the page, and be spinning waiting to acquire the lock protecting
319 * and is waiting to acquire the page lock, a deadlock occurs.
322 * search the list, and if the desired page is found either use
324 * list lock to page_lock(). If page_lock() cannot acquire the page's
329 * If the list lock was dropped before the attempt at locking the page
330 * was made, checks would have to be made to ensure that the page had
332 * the interval between dropping the list lock and acquiring the page
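
A sketch of the page_lookup()-style pattern this describes: on failure page_lock() has already dropped the hash mutex before sleeping, while on success the caller still holds it (PAGE_HASH_MUTEX() and PAGE_HASH_SEARCH() are assumed from elsewhere in this header):

	page_t *pp;
	kmutex_t *phm = PAGE_HASH_MUTEX(index);

	mutex_enter(phm);
	PAGE_HASH_SEARCH(index, pp, vp, off);	/* as sketched above */
	if (pp != NULL) {
		if (page_lock(pp, SE_SHARED, phm, P_RECLAIM)) {
			mutex_exit(phm);  /* success: phm is still ours */
			/* pp locked; [vp, offset] identity is stable */
		} else {
			/* page_lock() dropped phm before sleeping; the
			 * page may have changed identity: retry lookup */
		}
	} else {
		mutex_exit(phm);
	}
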
349 * vph_mutex[]'s protect the v_pages field and the vp page chains.
351 * First lock the page, then the hash chain, then the vnode chain. When
424 * involves moving the contents and identity of a page to another, free page.
425 * To relocate a page, the SE_EXCL lock must be obtained. The way to prevent
426 * a page from being relocated is to hold the SE_SHARED lock (the SE_EXCL
427 * lock must not be held indefinitely). If the page is going to be held
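
Correspondingly, a holder that needs a page's contents to stay put can pin it with the shared lock; page_relocate() requires SE_EXCL and so cannot proceed until the pin is dropped (a sketch, not the full rule set):

	if (page_lock(pp, SE_SHARED, NULL, P_NO_RECLAIM)) {
		/* physical contents cannot be relocated while held */
		page_unlock(pp);
	}
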
440 * Memory delete and page locking.
442 * The set of all usable pages is managed using the global page list as
448 * page_t virtual address space is remapped to a page (or pages) of
451 * memory delete to collect the page and so that part of the page list is
452 * prevented from being deleted. If the page is referenced outside of one
460 * Page size (p_szc field) and page locking.
476 * large page or prevent hat_pageunload() by holding hat level lock that
480 * page one can either use the same method as used for changing p_szc of
486 * demotes the page. This mechanism relies on the fact that any code that
487 * needs to prevent p_szc of a file system large page from changing either
489 * least SHARED and calls page_szc_lock() or uses hat level page locks.
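
A sketch of that discipline for file system pages; in the traditional implementation page_szc_lock() returns NULL when the page is already small (p_szc == 0), otherwise the mutex serializing size-code changes:

	kmutex_t *mtx;

	ASSERT(PAGE_LOCKED(pp));	/* at least SE_SHARED, per above */
	mtx = page_szc_lock(pp);
	if (mtx != NULL) {
		/* p_szc cannot change under us; examine or demote here */
		mutex_exit(mtx);
	}
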
498 typedef struct page {
499 u_offset_t p_offset; /* offset into vnode for this page */
500 struct vnode *p_vnode; /* vnode that this page is named by */
501 selock_t p_selock; /* shared/exclusive lock on the page */
505 struct page *p_hash; /* hash by [vnode, offset] */
506 struct page *p_vpnext; /* next page in vnode list */
507 struct page *p_vpprev; /* prev page in vnode list */
508 struct page *p_next; /* next page in free/intrans lists */
509 struct page *p_prev; /* prev page in free/intrans lists */
510 ushort_t p_lckcnt; /* number of locks on page data */
512 kcondvar_t p_cv; /* page struct's condition var */
515 volatile uchar_t p_szc; /* page size code */
525 uchar_t p_toxic; /* page has an unrecoverable error */
527 pfn_t p_pagenum; /* physical page number */
544 uint64_t p_msresv_2; /* page allocation debugging */
550 #define devpage page
568 * (page offset from "off" and the low 3 bits of "vp" which are zero for
579 * We use ? : instead of #if because <vm/page.h> is included everywhere;
623 * The page hash value is re-hashed to an index for the ph_mutex array.
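
Illustratively, the re-hash just folds the page hash index down into the smaller mutex array; the shift below is a made-up value, not the kernel's, though ph_mutex[] and PH_TABLE_SIZE are taken from this header:

	#define	EXAMPLE_PH_MUTEX(index) \
		(&ph_mutex[((index) ^ ((index) >> 8)) & (PH_TABLE_SIZE - 1)])
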
669 extern pad_mutex_t page_llocks[]; /* page logical lock mutex */
824 kmutex_t *page_se_mutex(struct page *);
825 kmutex_t *page_szc_lock(struct page *);
826 int page_szc_lock_assert(struct page *pp);
858 * hw_page_array[] is configured with hardware supported page sizes by
884 /* page_get_replacement page flags */
885 #define PGR_SAMESZC 0x1 /* only look for page size same as orig */
886 #define PGR_NORELOC 0x2 /* allocate a P_NORELOC page */
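
A sketch of how these flags reach the allocator, assuming the traditional prototype page_t *page_get_replacement_page(page_t *, struct lgrp *, int):

	page_t *new_pp;

	/* same size code as pp, no lgroup preference */
	new_pp = page_get_replacement_page(pp, NULL, PGR_SAMESZC);
	if (new_pp == NULL) {
		/* no suitable replacement; caller must back off */
	}
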
930 #define P_MIGRATE 0x20 /* Migrate page on next touch */
933 #define P_RAF 0x04 /* page retired at free */
967 * error(s) have occurred on a given page. The flags are cleared with
971 * When an error occurs on a page, p_toxic is set to record the error. The
978 * should only be cleared while holding the page exclusively locked.
980 * within the page capture logic and thus to set or clear the bit, that mutex
981 * needs to be held. The page does not need to be locked but the page_clrtoxic
984 * large pages such that if we are unlocking a page and the PR_CAPTURE bit is
985 * set, we will only try to capture the page if the current thread's T_CAPTURING
987 * the page even though the PR_CAPTURE bit is set.
992 * A page must be exclusively locked to be retired. Pages can be retired if
995 * Once a page has been successfully retired it is zeroed, attached to the
999 * fail to lock the page, unless SE_RETIRED is passed as an argument.
1005 #define PR_MCE 0x01 /* page has seen two or more CEs */
1006 #define PR_UE 0x02 /* page has an unhandled UE */
1007 #define PR_UE_SCRUBBED 0x04 /* page has seen a UE but was cleaned */
1008 #define PR_FMA 0x08 /* A DE wants this page retired */
1009 #define PR_CAPTURE 0x10 /* page is hashed on page_capture_hash[] */
1011 #define PR_MSG 0x40 /* message(s) already printed for this page */
1012 #define PR_RETIRED 0x80 /* This page has been retired */
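
A sketch of the path that sets these bits and retires a page, assuming the traditional page_settoxic()/page_retire() interfaces, where page_retire() takes a physical address and a PR_* reason:

	page_settoxic(pp, PR_UE);	/* record the unhandled UE */
	(void) page_retire(ptob((uint64_t)pp->p_pagenum), PR_UE);
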
1034 * kpm large page description.
1047 * window the large page mapping is broken up into smaller PAGESIZE
1050 * with the small page size. In non-vac conflict mode kp_refcntc is
1070 * kpm small page description.
1076 * is already available on a per page basis.
1078 * The state about how a kpm page is mapped and whether it is ready to go
1081 * - kp_mapped == 1 the page is mapped cacheable
1082 * - kp_mapped == 2 the page is mapped non-cacheable
1093 uchar_t mapped: 4; /* page mapped small */
1095 uchar_t mapped: 4; /* page mapped small */
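
Read back as a sketch, the kp_mapped states above decode as follows (the meaning of state 0, unmapped, is an assumption here):

	switch (ksp->kp_mapped) {
	case 0:	/* not (yet) mapped */			break;
	case 1:	/* mapped cacheable */			break;
	case 2:	/* mapped non-cacheable */		break;
	}
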
1126 page_t *pages, *epages; /* [from, to] in page array */
1127 pfn_t pages_base, pages_end; /* [from, to] in page numbers */
1131 uint64_t pagespa, epagespa; /* [from, to] page array physical */
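
These bounds support pfn-to-page_t translation; a hypothetical walk over the memseg list, essentially what page_numtopp_nolock() does without its caching (the memsegs global and the next link are assumed from the surrounding declarations):

	static page_t *
	example_pfn_to_page(pfn_t pfnum)
	{
		struct memseg *seg;

		for (seg = memsegs; seg != NULL; seg = seg->next) {
			if (pfnum >= seg->pages_base &&
			    pfnum < seg->pages_end)
				return (seg->pages +
				    (pfnum - seg->pages_base));
		}
		return (NULL);	/* pfn not in any segment */
	}
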
1170 * page capture related info:
1171 * The page capture routines allow us to asynchronously capture given pages
1176 * Subsystems using page capture must register a callback before attempting
1177 * to capture a page. A duration of -1 will indicate that we will never give
1178 * up while trying to capture a page and will only stop trying to capture the
1179 * given page once we have successfully captured it. Thus the user needs to be
1182 * For now, only /dev/physmem and page retire use the page capture interface
1183 * and only a single request can be outstanding for a given page. Thus, if
1184 * /dev/physmem wants a page and page retire also wants the same page, only
1185 * the page retire request will be honored until the point in time that the
1186 * page is actually retired, at which point in time, subsequent requests by
1206 /* capture this page asynchronously. (in HZ) */
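
A sketch of the registration just described; the PC_PHYSMEM index, the page_trycapture() call, and the callback body are assumptions based on the traditional interface:

	static int
	example_capture_cb(page_t *pp, void *datap, uint_t flags)
	{
		/* pp arrives exclusively locked; consume or release it */
		page_unlock(pp);
		return (0);
	}

	/* duration of -1: never give up on a requested page */
	page_capture_register_callback(PC_PHYSMEM, (clock_t)-1,
	    example_capture_cb);
	(void) page_trycapture(pp, 0, 0, NULL);
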