History log of /freebsd/sys/vm/phys_pager.c (Results 76 – 100 of 111)
Revision Date Author Comments
# 54019b0a 05-Dec-2000 Alfred Perlstein <alfred@FreeBSD.org>

Adjust the allocation size to properly deal with non-PAGE_SIZE
allocations; specifically, allocations < PAGE_SIZE were not handled
correctly by the existing code.
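
The entry does not include the diff itself; as a hedged sketch of the kind of rounding such a fix implies (round_page(), OFF_TO_IDX() and vm_object_allocate() are the standard primitives, but whether the actual change looked like this is an assumption):

    /* Sketch only, not the committed fix: round a sub-page request up
     * to a whole page before sizing the backing object, so that
     * allocations < PAGE_SIZE still cover at least one page. */
    vm_pindex_t npages = OFF_TO_IDX(round_page(size));
    object = vm_object_allocate(OBJT_PHYS, npages);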


Revision tags: release/4.2.0, release/4.1.1_cvs
# bb663856 29-Jul-2000 Peter Wemm <peter@FreeBSD.org>

Minor cleanups:
- remove unused variables (fix warnings)
- use a more consistent ansi style rather than a mixture
- remove dead #if 0 code and declarations


Revision tags: release/4.1.0, release/3.5.0_cvs
# 8b03c8ed 30-May-2000 Matthew Dillon <dillon@FreeBSD.org>

This is a cleanup patch to Peter's new OBJT_PHYS VM object type
and sysv shared memory support for it. It implements a new
PG_UNMANAGED flag that has slightly different characteristics
from PG_FICTITIOUS.

A new sysctl, kern.ipc.shm_use_phys has been added to enable the
use of physically-backed sysv shared memory rather than swap-backed.
Physically backed shm segments are not tracked with PV entries,
allowing programs which use a large shm segment as a rendezvous
point to operate without eating an insane amount of KVM in the
PV entry management. Read: Oracle.

Peter's OBJT_PHYS object will also allow us to eventually implement
page-table sharing and/or 4MB physical page support for such segments.
We're half way there.
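
As a usage note (not part of the commit): the sysctl can be toggled at runtime, e.g. "sysctl kern.ipc.shm_use_phys=1", after which newly created SysV segments are physically backed. A minimal userland illustration that creates and touches such a segment with the standard shmget(2)/shmat(2) calls (the segment size here is an arbitrary example):

    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <err.h>
    #include <stdio.h>

    int
    main(void)
    {
        /* 64 MB private segment; with kern.ipc.shm_use_phys=1 it is
         * backed by an OBJT_PHYS object instead of swap. */
        int id = shmget(IPC_PRIVATE, 64UL * 1024 * 1024, IPC_CREAT | 0600);
        if (id == -1)
            err(1, "shmget");
        void *p = shmat(id, NULL, 0);
        if (p == (void *)-1)
            err(1, "shmat");
        ((char *)p)[0] = 1;     /* fault in the first page */
        printf("attached shm id %d at %p\n", id, p);
        return (0);
    }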



# 24964514 21-May-2000 Peter Wemm <peter@FreeBSD.org>

Checkpoint of a new physical memory backed object type, that does not
have pv_entries. This is intended for very special circumstances,
eg: a certain database that has a 1GB shm segment mapped into 300
processes. That would consume 2GB of kvm just to hold the pv_entries
alone. This would not be used on systems unless the physical ram was
available, as it's not pageable.

This is a work-in-progress, but is a useful and functional checkpoint.
Matt has got some more fixes for it that will be committed soon.

Reviewed by: dillon
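
For a sense of where the 2GB figure comes from (the per-entry size is an assumption, roughly what a pv_entry cost on i386 at the time): a 1GB segment is 262,144 pages of 4KB; mapped into 300 processes that is 262,144 * 300 = 78,643,200 pv_entries, and at roughly 24-28 bytes apiece that works out to about 1.9-2.2GB of kernel memory.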



# 09c817ba 03-Jul-2009 Oleksandr Tymoshenko <gonzo@FreeBSD.org>

- MFC


# 3364c323 23-Jun-2009 Konstantin Belousov <kib@FreeBSD.org>

Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.

The accounting information (charge) is associated with either map entry,
or vm object backing the entry, assuming the object is the first one
in the shadow chain and entry does not require COW. Charge is moved
from entry to object on allocation of the object, e.g. during the mmap,
assuming the object is allocated, or on the first page fault on the
entry. It moves back to the entry on forks due to COW setup.

The per-entry granularity of accounting makes the charge process fair
for processes that change uid during lifetime, and decrements charge
for proper uid when region is unmapped.

The interface of vm_pager_allocate(9) is extended by adding struct ucred *,
that is used to charge the appropriate uid when allocation is performed by the
kernel, e.g. md(4).

Several syscalls, among them fork(2), may now return ENOMEM when
global or per-uid limits are enforced.

In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
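
A hedged sketch of the extended interface from a caller's point of view; the wrapper function and its name are illustrative only, but the trailing struct ucred * is the new parameter described above:

    #include <sys/param.h>
    #include <sys/ucred.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_pager.h>

    /* Illustrative helper: allocate a swap-backed object of 'size'
     * bytes and charge the reservation to 'cred', roughly the way an
     * in-kernel consumer such as md(4) would. */
    static vm_object_t
    alloc_charged_object(vm_ooffset_t size, struct ucred *cred)
    {
        return (vm_pager_allocate(OBJT_SWAP, NULL, size,
            VM_PROT_DEFAULT, 0, cred));
    }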



# 53f55a43 13-Jun-2009 Alan Cox <alc@FreeBSD.org>

Eliminate an unnecessary clearing of a page's dirty bits in
phys_pager_getpages().


Revision tags: release/7.2.0_cvs, release/7.2.0, release/7.1.0_cvs, release/7.1.0, release/6.4.0_cvs, release/6.4.0, release/7.0.0_cvs, release/7.0.0, release/6.3.0_cvs, release/6.3.0
# 248a0568 30-Oct-2007 Remko Lodder <remko@FreeBSD.org>

Correct a copy-and-paste'o in phys_pager.c; we are talking about phys here
and not about devices.

PR: 93755
Approved by: imp (mentor, implicit when re-assigning the ticket to me).


# efe7553e 18-Aug-2007 Konstantin Belousov <kib@FreeBSD.org>

Fix the phys_pager in a way similar to rev. 1.83 of
sys/vm/device_pager.c:

Protect the creation of the phys pager with non-NULL handle with the
phys_pager_mtx. Lookup of phys pager in the pagers list by handle is now
synchronized with its removal from the list, and phys_pager_mtx is put
before the vm object lock in lock order. Dispose of the phys_pager_alloc_lock
and tsleep calls, together with acquiring Giant, since phys_pager_mtx
now covers the same block.

Reviewed by: alc
Approved by: re (kensmith)
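
Roughly the shape of what this describes, as a fragment-level sketch inside phys_pager_alloc() (not the committed diff; 'handle' and 'pindex' come from the enclosing function, and the size update on a lost race is elided):

    /* Lookup and creation of a handle-backed phys pager are both
     * serialized by phys_pager_mtx.  The allocation may sleep, so the
     * mutex is dropped around it and the lookup is redone. */
    vm_object_t object, object1 = NULL;

    mtx_lock(&phys_pager_mtx);
    object = vm_pager_object_lookup(&phys_pager_object_list, handle);
    if (object == NULL) {
        mtx_unlock(&phys_pager_mtx);
        object1 = vm_object_allocate(OBJT_PHYS, pindex);
        mtx_lock(&phys_pager_mtx);
        object = vm_pager_object_lookup(&phys_pager_object_list, handle);
        if (object == NULL) {
            /* We won the race: publish our new object. */
            object = object1;
            object1 = NULL;
            object->handle = handle;
            TAILQ_INSERT_TAIL(&phys_pager_object_list, object,
                pager_object_list);
        }
    }
    mtx_unlock(&phys_pager_mtx);
    if (object1 != NULL)
        vm_object_deallocate(object1);  /* lost the race; drop ours */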



# b5e8f167 05-Aug-2007 Alan Cox <alc@FreeBSD.org>

Consider a scenario in which one processor, call it Pt, is performing
vm_object_terminate() on a device-backed object at the same time that
another processor, call it Pa, is performing dev_pager_alloc() on the
same device. The problem is that vm_pager_object_lookup() should not be
allowed to return a doomed object, i.e., an object with OBJ_DEAD set,
but it does. In detail, the unfortunate sequence of events is: Pt in
vm_object_terminate() holds the doomed object's lock and sets OBJ_DEAD
on the object. Pa in dev_pager_alloc() holds dev_pager_sx and calls
vm_pager_object_lookup(), which returns the doomed object. Next, Pa
calls vm_object_reference(), which requires the doomed object's lock, so
Pa waits for Pt to release the doomed object's lock. Pt proceeds to the
point in vm_object_terminate() where it releases the doomed object's
lock. Pa is now able to complete vm_object_reference() because it can
now complete the acquisition of the doomed object's lock. So, now the
doomed object has a reference count of one! Pa releases dev_pager_sx
and returns the doomed object from dev_pager_alloc(). Pt now acquires
dev_pager_mtx, removes the doomed object from dev_pager_object_list,
releases dev_pager_mtx, and finally calls uma_zfree with the doomed
object. However, the doomed object is still in use by Pa.

Repeating my key point, vm_pager_object_lookup() must not return a
doomed object. Moreover, the test for the object's state, i.e.,
doomed or not, and the increment of the object's reference count
should be carried out atomically.

Reviewed by: kib
Approved by: re (kensmith)
MFC after: 3 weeks
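
A hedged sketch (not the committed diff) of the kind of lookup the last paragraph calls for, where the OBJ_DEAD test and the reference bump happen atomically under the object's lock (VM_OBJECT_LOCK() and vm_object_reference_locked() being that era's primitives):

    #include <sys/param.h>
    #include <sys/queue.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_pager.h>

    static vm_object_t
    pager_object_lookup_sketch(struct pagerlst *pg_list, void *handle)
    {
        vm_object_t object;

        TAILQ_FOREACH(object, pg_list, pager_object_list) {
            if (object->handle == handle) {
                VM_OBJECT_LOCK(object);
                if ((object->flags & OBJ_DEAD) == 0) {
                    /* Alive: take the reference before
                     * dropping the lock. */
                    vm_object_reference_locked(object);
                    VM_OBJECT_UNLOCK(object);
                    break;
                }
                /* Doomed: never return it. */
                VM_OBJECT_UNLOCK(object);
            }
        }
        return (object);
    }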



# a52da38f 10-Apr-2007 Giorgos Keramidas <keramida@FreeBSD.org>

Minor typo fix, noticed while I was going through *_pager.c files.


# 9f5c801b 25-Feb-2007 Alan Cox <alc@FreeBSD.org>

Change the way that unmanaged pages are created. Specifically,
immediately flag any page that is allocated to an OBJT_PHYS object as
unmanaged in vm_page_alloc() rather than waiting for a later call to
vm_page_unmanage(). This allows for the elimination of some uses of
the page queues lock.

Change the type of the kernel and kmem objects from OBJT_DEFAULT to
OBJT_PHYS. This allows us to take advantage of the above change to
simplify the allocation of unmanaged pages in kmem_alloc() and
kmem_malloc().

Remove vm_page_unmanage(). It is no longer used.
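
A minimal sketch of the first change, assuming it reduces to folding the unmanaged flag into the page's initial flag word while vm_page_alloc() sets the page up (illustrative, not the committed diff):

    /* Pages destined for an OBJT_PHYS object are unmanaged from the
     * start; 'flags' is the flag word vm_page_alloc() is composing. */
    if (object != NULL && object->type == OBJT_PHYS)
        flags |= PG_UNMANAGED;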



Revision tags: release/6.2.0_cvs, release/6.2.0
# 9af80719 22-Oct-2006 Alan Cox <alc@FreeBSD.org>

Replace PG_BUSY with VPO_BUSY. In other words, changes to the page's
busy flag, i.e., VPO_BUSY, are now synchronized by the per-vm object
lock instead of the global page queues lock.


Revision tags: release/5.5.0_cvs, release/5.5.0, release/6.1.0_cvs, release/6.1.0, release/6.0.0_cvs, release/6.0.0, release/5.4.0_cvs, release/5.4.0, release/4.11.0_cvs, release/4.11.0
# 60727d8b 07-Jan-2005 Warner Losh <imp@FreeBSD.org>

/* -> /*- for license, minor formatting changes


Revision tags: release/5.3.0_cvs, release/5.3.0, release/4.10.0_cvs, release/4.10.0
# 8a3ef857 25-Apr-2004 Alan Cox <alc@FreeBSD.org>

Zero the physical page only if it is invalid and not prezeroed.
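
As a sketch of the condition this describes, for a page m in phys_pager_getpages() (PG_ZERO marks a page already zeroed by the idle-time prezeroing):

    /* Zero only when the page holds no valid data and was not
     * already prezeroed. */
    if (m->valid == 0 && (m->flags & PG_ZERO) == 0)
        pmap_zero_page(m);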


# e265f054 25-Apr-2004 Alan Cox <alc@FreeBSD.org>

Add a VM_OBJECT_LOCK_ASSERT() call. Remove splvm() and splx() calls. Move
a comment.


Revision tags: release/5.2.1_cvs, release/5.2.1, release/5.2.0_cvs, release/5.2.0
# 2f7af3db 04-Jan-2004 Alan Cox <alc@FreeBSD.org>

Simplify the various pager allocation routines by computing the desired
object size once and assigning that value to a local variable.


Revision tags: release/4.9.0_cvs, release/4.9.0
# 4e658600 05-Aug-2003 Poul-Henning Kamp <phk@FreeBSD.org>

Use sparse struct initializations for struct pagerops.

This makes grepping for which pagers implement which methods easier.
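
For illustration, a sparse designated initializer for this file's pagerops, naming only the methods the pager implements (the exact method set shown is an assumption based on the functions this log mentions plus the usual init/putpages/haspage entry points):

    struct pagerops physpagerops = {
        .pgo_init =     phys_pager_init,
        .pgo_alloc =    phys_pager_alloc,
        .pgo_dealloc =  phys_pager_dealloc,
        .pgo_getpages = phys_pager_getpages,
        .pgo_putpages = phys_pager_putpages,
        .pgo_haspage =  phys_pager_haspage,
    };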


# 874651b1 12-Jun-2003 David E. O'Brien <obrien@FreeBSD.org>

Use __FBSDID().


Revision tags: release/5.1.0_cvs, release/5.1.0, release/4.8.0_cvs, release/4.8.0, release/5.0.0_cvs, release/5.0.0
# 969da54c 27-Dec-2002 Alan Cox <alc@FreeBSD.org>

Increase the scope of the page queues lock in phys_pager_getpages().


# d8e7c54e 17-Dec-2002 Alan Cox <alc@FreeBSD.org>

Hold the page queues lock when performing vm_page_flag_set().


Revision tags: release/4.7.0_cvs
# fff6062a 25-Aug-2002 Alan Cox <alc@FreeBSD.org>

o Retire vm_page_zero_fill() and vm_page_zero_fill_area(). Ever since
pmap_zero_page() and pmap_zero_page_area() were modified to accept
a struct vm_page * instead of a physical address, vm_page_zero_fill()
and vm_page_zero_fill_area() have served no purpose.



Revision tags: release/4.6.2_cvs, release/4.6.2, release/4.6.1
# eed6f3fd 14-Jul-2002 Alan Cox <alc@FreeBSD.org>

o Lock page queue accesses by vm_page_unmanage().
o Assert that the page queues lock is held in vm_page_unmanage().


Revision tags: release/4.6.0_cvs
# 2a1618cd 22-Jun-2002 Alan Cox <alc@FreeBSD.org>

o Remove GIANT_REQUIRED from phys_pager_alloc(). If handle isn't NULL,
acquire and release Giant. If handle is NULL, Giant isn't needed.
o Annotate phys_pager_alloc() and phys_pager_dealloc() as MPSAFE.
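
A fragment-level sketch of the Giant handling described (illustrative; the real function does the lookup/creation work between the lock and unlock):

    /* Only the named-handle path needs Giant; anonymous allocations
     * skip it entirely. */
    if (handle != NULL) {
        mtx_lock(&Giant);
        /* ... look up or create the pager object ... */
        mtx_unlock(&Giant);
    }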



# 6008862b 04-Apr-2002 John Baldwin <jhb@FreeBSD.org>

Change callers of mtx_init() to pass in an appropriate lock type name. In
most cases NULL is passed, but in some cases such as network driver locks
(which use the MTX_NETWORK_LOCK macro) and UMA zone locks, a name is used.

Tested on: i386, alpha, sparc64
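
For example, the corresponding call for this file's pager mutex would pass a human-readable name and a NULL type, like most callers (the exact name string here is illustrative):

    mtx_init(&phys_pager_mtx, "phys_pager list", NULL, MTX_DEF);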


