
75  *	page, indexed by page number.  Each structure
84 * and offset to which this page belongs (for pageout),
91 * The queue lock for a page depends on the value of its queue field and is
97 * (B) the page busy lock.
101 * (O) the object that the page belongs to.
102 * (Q) the page's queue lock.
105 * page's contents and identity (i.e., its <object, pindex> tuple) as
107 * the page structure, the busy lock lacks some of the features available
110 * detected, and an attempt to xbusy a busy page or sbusy an xbusy page
112 * vm_page_sleep_if_busy() can be used to sleep until the page's busy
113 * state changes, after which the caller must re-lookup the page and
117 * The valid field is protected by the page busy lock (B) and object
120 * These must be protected with the busy lock to prevent page-in or
121 * creation races. Page invalidation generally happens as a result
132 * In contrast, the synchronization of accesses to the page's
134 * the machine-independent layer, the page busy must be held to
144 * only way to ensure a page cannot become dirty. I/O generally
145 * removes the page from pmap to ensure exclusive access and atomic
148 * The ref_count field tracks references to the page. References that
149 * prevent the page from being reclaimable are called wirings and are
154 * pmap_extract_and_hold(). When a page belongs to an object, it may be
155 * wired only when the object is locked, or the page is busy, or by
157 * page is not busy (or is exclusively busied by the current thread), and
158 * the page is unmapped, its wire count will not increase. The ref_count
160 * is known that no other references to the page exist, such as in the page
161 * allocator. A page may be present in the page queues, or even actively
162 * scanned by the page daemon, without an explicitly counted reference.
163 * The page daemon must therefore handle the possibility of a concurrent
164 * free of the page.
166 * The queue state of a page consists of the queue and act_count fields of
168 * by PGA_QUEUE_STATE_MASK. The queue field contains the page's page queue
169 * index, or PQ_NONE if it does not belong to a page queue. To modify the
170 * queue field, the page queue lock corresponding to the old value must be
173 * this rule: the page daemon may transition the queue field from
174 * PQ_INACTIVE to PQ_NONE immediately prior to freeing the page during an
175 * inactive queue scan. At that point the page is already dequeued and no
177 * flag, when set, indicates that the page structure is physically inserted
178 * into the queue corresponding to the page's queue index, and may only be
179 * set or cleared with the corresponding page queue lock held.
181 * To avoid contention on page queue locks, page queue operations (enqueue,
185 * queue is full, an attempt to insert a new entry will lock the page
189 * indefinitely. In particular, a page may be freed with pending batch
190 * queue entries. The page queue operation flags must be set using atomic
219 TAILQ_ENTRY(vm_page) q; /* page queue or free list (Q) */
234 vm_paddr_t phys_addr; /* physical address of page (C) */
236 u_int ref_count; /* page references (A) */
241 uint8_t flags; /* page PG_* flags (P) */
242 uint8_t oflags; /* page VPO_* flags (O) */
245 /* NOTE that these must support one bit per DEV_BSIZE in a page */
254 * ref_count is normally used to count wirings that prevent the page from being
257 * the page is unallocated.
263 * attempting to tear down all mappings of a given page. The page busy lock and
272 * Page flags stored in oflags:
274 * Access to these page flags is synchronized by the lock on the object
275 * containing the page (O).
278 * indicates that the page is not under PV management but
279 * otherwise should be treated as a normal page. Pages not
287 #define VPO_UNMANAGED 0x04 /* no PV management for page */
288 #define VPO_SWAPINPROG 0x08 /* swap I/O in progress on page */
291 * Busy page implementation details.
344 * PGA_REFERENCED may be cleared only if the page is locked. It is set by
349 * When it does so, the object must be locked, or the page must be
353 * PGA_EXECUTABLE may be set by pmap routines, and indicates that a page has
356 * PGA_NOSYNC must be set and cleared with the page busy lock held.
358 * PGA_ENQUEUED is set and cleared when a page is inserted into or removed
359 * from a page queue, respectively. It determines whether the plinks.q field
360 * of the page is valid. To set or clear this flag, the page's "queue" field must
361 * be a valid queue index, and the corresponding page queue lock must be held.
363 * PGA_DEQUEUE is set when the page is scheduled to be dequeued from a page
364 * queue, and cleared when the dequeue request is processed. A page may
366 * is requested after the page is scheduled to be enqueued but before it is
367 * actually inserted into the page queue.
369 * PGA_REQUEUE is set when the page is scheduled to be enqueued or requeued
370 * in its page queue.
377 * and the corresponding page queue lock must be held when clearing any of the
381 * when the context that dirties the page does not have the object write lock
384 #define PGA_WRITEABLE 0x0001 /* page may be mapped writeable */
385 #define PGA_REFERENCED 0x0002 /* page has been referenced */
386 #define PGA_EXECUTABLE 0x0004 /* page may be mapped executable */
387 #define PGA_ENQUEUED 0x0008 /* page is enqueued in a page queue */
388 #define PGA_DEQUEUE 0x0010 /* page is due to be dequeued */
389 #define PGA_REQUEUE 0x0020 /* page is due to be requeued */
390 #define PGA_REQUEUE_HEAD 0x0040 /* page requeue should bypass LRU */
392 #define PGA_SWAP_FREE 0x0100 /* page with swap space was dirtied */
393 #define PGA_SWAP_SPACE 0x0200 /* page has allocated swap space */
399 * Page flags. Updates to these flags are not synchronized, and thus they must
400 * be set during page allocation or free to avoid races.
402 * The PG_PCPU_CACHE flag is set at allocation time if the page was
404 * page is allocated from the physical memory allocator.
407 #define PG_FICTITIOUS 0x02 /* physical page doesn't exist */
408 #define PG_ZERO 0x04 /* page is zeroed */
409 #define PG_MARKER 0x08 /* special queue marker page */
410 #define PG_NODUMP 0x10 /* don't include this page in a dump */
411 #define PG_NOFREE 0x20 /* page should never be freed. */
428 * Each pageable resident page falls into one of five lists:
451 extern vm_page_t vm_page_array; /* First resident page in table */
453 extern long first_page; /* first physical page number */
459 * page to which the given physical address belongs. The correct vm_page_t
460 * object is returned for addresses that are not page-aligned.
473 * the caller to test the page's flags for PG_ZERO.
492 #define VM_ALLOC_WIRED 0x0020 /* (acgnp) Allocate a wired page */
493 #define VM_ALLOC_ZERO 0x0040 /* (acgnp) Allocate a zeroed page */
495 #define VM_ALLOC_NOFREE 0x0100 /* (agnp) Page will never be freed */
496 #define VM_ALLOC_NOBUSY 0x0200 /* (acgp) Do not excl busy the page */
497 #define VM_ALLOC_NOCREAT 0x0400 /* (gp) Do not allocate a page */
501 #define VM_ALLOC_SBUSY 0x4000 /* (acgp) Shared busy the page */
542 * PS_ALL_DIRTY is true only if the entire (super)page is dirty.
543 * However, it can be spuriously false when the (super)page has become
679 ("vm_page_assert_busied: page %p not busy @ %s:%d", \
684 ("vm_page_assert_sbusied: page %p not shared busy @ %s:%d", \
690 ("vm_page_assert_unbusied: page %p busy_lock %#x owned" \
696 ("vm_page_assert_xbusied: page %p not exclusive busy @ %s:%d", \
703 ("vm_page_assert_xbusied: page %p busy_lock %#x not owned" \
717 /* Note: page m's lock must not be owned by the caller. */
736 * Claim ownership of a page's xbusy state. In non-INVARIANTS kernels this
762 * Load a snapshot of a page's 32-bit atomic state.
774 * Atomically compare and set a page's atomic state.
781 ("%s: invalid head requeue request for page %p", __func__, m)); in vm_page_astate_fcmpset()
783 ("%s: setting PGA_ENQUEUED with PQ_NONE in page %p", __func__, m)); in vm_page_astate_fcmpset()
791 * Clear the given bits in the specified page.
809 * Set the given bits in the specified page.
831 * Set all bits in the page's dirty field.
833 * The object containing the specified page must be locked if the
853 * Set page to not be dirty. Note: does not clear pmap modify bits
921 * Release a reference to a page and return the old reference count.
930 * page structure are visible before it is freed. in vm_page_drop()
935 ("vm_page_drop: page %p has an invalid refcount value", m)); in vm_page_drop()
942 * Perform a racy check to determine whether a reference prevents the page
943 * from being reclaimable. If the page's object is locked, and the page is