History log of /freebsd/sys/vm/vm_kern.c (Results 201 – 225 of 495)
Revision    Date    Author    Comments
# 945f418a 06-May-2010 Kirk McKusick <mckusick@FreeBSD.org>

Final update to current version of head in preparation for reintegration.


# 746c2dde 03-May-2010 Alan Cox <alc@FreeBSD.org>

The pages allocated by kmem_alloc_attr() and kmem_malloc() are unmanaged.
Consequently, neither the page lock nor the page queues lock is needed to
unwire and free them.


# 2965a453 30-Apr-2010 Kip Macy <kmacy@FreeBSD.org>

On Alan's advice, rather than do a wholesale conversion on a single
architecture from the page queue lock to a hashed array of page locks
(based on a patch by Jeff Roberson), I've implemented page lock
support in the MI code and have only moved vm_page's hold_count
out from under the page queue mutex to the page lock. This changes
pmap_extract_and_hold() on all pmaps.

Supported by: Bitgravity Inc.

Discussed with: alc, jeffr, and kib



# a4bf5fb9 28-Apr-2010 Kirk McKusick <mckusick@FreeBSD.org>

Update to current version of head.


# ca596a25 19-Apr-2010 Juli Mallett <jmallett@FreeBSD.org>

o) Add a VM find-space option, VMFS_TLB_ALIGNED_SPACE, which searches the
address space for an address as aligned by the new pmap_align_tlb()
function, which is for constraints imposed by the TLB. [1]
o) Add a kmem_alloc_nofault_space() function, which acts like
kmem_alloc_nofault() but allows the caller to specify which find-space
option to use. [1]
o) Use kmem_alloc_nofault_space() with VMFS_TLB_ALIGNED_SPACE to allocate the
kernel stack address on MIPS. [1]
o) Make pmap_align_tlb() on MIPS align addresses so that they do not start on
an odd boundary within the TLB; that way they are suitable for insertion as
wired entries and do not have to share a TLB entry with another mapping,
assuming they are appropriately sized.
o) Eliminate md_realstack now that the kstack will be appropriately-aligned on
MIPS.
o) Increase the number of guard pages to 2 so that we retain the proper
alignment of the kstack address.

Reviewed by: [1] alc
X-MFC-after: Making sure alc has not come up with a better interface.
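
[Editorial sketch] A minimal illustration of how a caller might use the
interface described in this entry. The exact parameter list of
kmem_alloc_nofault_space() (map, size, find-space option) and the helper
name alloc_tlb_aligned_kstack() are assumptions drawn from the log text,
not the committed MIPS code.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>
    #include <vm/vm_kern.h>
    #include <vm/vm_map.h>

    /* Hypothetical helper; guard-page handling omitted. */
    static vm_offset_t
    alloc_tlb_aligned_kstack(vm_size_t npages)
    {
            vm_offset_t ks;

            /*
             * Ask for kernel VA whose alignment satisfies pmap_align_tlb(),
             * so the stack can sit in a single wired TLB entry.
             */
            ks = kmem_alloc_nofault_space(kernel_map, npages * PAGE_SIZE,
                VMFS_TLB_ALIGNED_SPACE);
            return (ks);            /* 0 on address-space exhaustion */
    }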



Revision tags: release/7.3.0_cvs, release/7.3.0, release/8.0.0_cvs, release/8.0.0
# 10b3b545 17-Sep-2009 Dag-Erling Smørgrav <des@FreeBSD.org>

Merge from head


# 7d4b968b 17-Sep-2009 Dag-Erling Smørgrav <des@FreeBSD.org>

Merge from head up to r188941 (last revision before the USB stack switch)


# 09c817ba 03-Jul-2009 Oleksandr Tymoshenko <gonzo@FreeBSD.org>

- MFC


# 3364c323 23-Jun-2009 Konstantin Belousov <kib@FreeBSD.org>

Implement global and per-uid accounting of the anonymous memory. Add
rlimit RLIMIT_SWAP that limits the amount of swap that may be reserved
for the uid.

The accounting information (charge) is associated with either the map entry
or the vm object backing the entry, assuming the object is the first one
in the shadow chain and the entry does not require COW. The charge is moved
from the entry to the object when the object is allocated (e.g. during mmap,
if the object is allocated there), or on the first page fault on the
entry. It moves back to the entry on fork due to COW setup.

The per-entry granularity of accounting makes the charge process fair
for processes that change uid during their lifetime, and releases the charge
against the proper uid when a region is unmapped.

The interface of vm_pager_allocate(9) is extended with a struct ucred *
argument, which is used to charge the appropriate uid when the allocation
is performed by the kernel, e.g. md(4).

Several syscalls, fork(2) among them, may now return ENOMEM when
global or per-uid limits are enforced.

In collaboration with: pho
Reviewed by: alc
Approved by: re (kensmith)
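
[Editorial sketch] A small userland illustration of the new resource limit.
getrlimit()/setrlimit() are the standard interfaces; the 64 MB value is
purely illustrative.

    #include <sys/types.h>
    #include <sys/resource.h>
    #include <err.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_SWAP, &rl) != 0)
                    err(1, "getrlimit(RLIMIT_SWAP)");
            printf("swap reservation limit: cur=%ju max=%ju\n",
                (uintmax_t)rl.rlim_cur, (uintmax_t)rl.rlim_max);

            /* Tighten the soft limit to 64 MB of reserved swap. */
            rl.rlim_cur = 64UL * 1024 * 1024;
            if (setrlimit(RLIMIT_SWAP, &rl) != 0)
                    err(1, "setrlimit(RLIMIT_SWAP)");
            return (0);
    }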



Revision tags: release/7.2.0_cvs, release/7.2.0
# 1829d5da 12-Mar-2009 Warner Losh <imp@FreeBSD.org>

Update the projects tree to a newer FreeBSD current.


# 655c3490 24-Feb-2009 Konstantin Belousov <kib@FreeBSD.org>

Revert the addition of the freelist argument to the vm_map_delete()
function, done in r188334. Instead, collect the entries to be freed in
the deferred_freelist member of the map, and automatically purge the
deferred freelist when the map is unlocked.

Tested by: pho
Reviewed by: alc



# 9309e63c 24-Feb-2009 Robert Watson <rwatson@FreeBSD.org>

Put debug.vm_lowmem sysctl under DIAGNOSTIC.

Submitted by: sam
MFC after: 3 days


# 86f08737 24-Feb-2009 Robert Watson <rwatson@FreeBSD.org>

Add a debugging sysctl, debug.vm_lowmem, that when assigned a value of
1 will trigger a pass through the VM's low-memory handlers, such as
protocol and UMA drain routines. This makes it easier to exercise
these otherwise rarely-invoked code paths.

MFC after: 3 days
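
[Editorial sketch] Driving the new knob from userland via sysctlbyname(3),
equivalent to running "sysctl debug.vm_lowmem=1" from the shell. Per the
preceding entry, the sysctl was subsequently placed under DIAGNOSTIC, so
it is only present on DIAGNOSTIC kernels.

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <err.h>

    int
    main(void)
    {
            int one = 1;

            /* Trigger one pass through the VM's low-memory handlers. */
            if (sysctlbyname("debug.vm_lowmem", NULL, NULL, &one,
                sizeof(one)) != 0)
                    err(1, "sysctlbyname(debug.vm_lowmem)");
            return (0);
    }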



# 897d81a0 08-Feb-2009 Konstantin Belousov <kib@FreeBSD.org>

Do not call vm_object_deallocate() from vm_map_delete(), because we
hold the map lock there and might need the vnode lock for OBJT_VNODE
objects. Postpone object deallocation until the caller of vm_map_delete()
drops the map lock. Link the map entries to be freed into a freelist,
which is released by the new helper function vm_map_entry_free_freelist().

Reviewed by: tegge, alc
Tested by: pho
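
[Editorial sketch] The general pattern adopted here, shown with hypothetical
types (struct item, release_item()) rather than the real vm_map code: unlink
under the lock, release only after dropping it, so nothing that may need
another lock (such as a vnode lock) runs under the map lock.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/queue.h>

    struct item {                                   /* hypothetical */
            SLIST_ENTRY(item) link;
    };
    SLIST_HEAD(item_list, item);

    static void release_item(struct item *);        /* hypothetical; may lock a vnode */

    static void
    remove_all_deferred(struct mtx *lock, struct item_list *head)
    {
            struct item_list freelist = SLIST_HEAD_INITIALIZER(freelist);
            struct item *it;

            mtx_lock(lock);
            /* Steal the whole list; take no other locks while held. */
            freelist = *head;
            SLIST_INIT(head);
            mtx_unlock(lock);

            /* Now it is safe to do work that may acquire other locks. */
            while ((it = SLIST_FIRST(&freelist)) != NULL) {
                    SLIST_REMOVE_HEAD(&freelist, link);
                    release_item(it);
            }
    }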



Revision tags: release/7.1.0_cvs, release/7.1.0, release/6.4.0_cvs, release/6.4.0
# fb272dc8 18-Jul-2008 Alan Cox <alc@FreeBSD.org>

Eliminate stale comments from kmem_malloc().


# 5cfa90e9 22-Jun-2008 Alan Cox <alc@FreeBSD.org>

Make preparations for increasing the size of the kernel virtual address space
on the amd64 architecture. The amd64 architecture requires kernel code and
global variables to reside in the highest 2GB of the 64-bit virtual address
space. Thus, the memory allocated during bootstrap, before the call to
kmem_init(), starts at KERNBASE, which is not necessarily the same as
VM_MIN_KERNEL_ADDRESS on amd64.



# 3202ed75 10-May-2008 Alan Cox <alc@FreeBSD.org>

Introduce a new parameter "superpage_align" to kmem_suballoc() that is
used to request superpage alignment for the submap.

Request superpage alignment for the kmem_map.

Pass VMFS_ANY_SPACE instead of TRUE to vm_map_find(). (They are currently
equivalent but VMFS_ANY_SPACE is the new preferred spelling.)

Remove a stale comment from kmem_malloc().
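
[Editorial sketch] What a caller looks like after this change. The exact
parameter order of kmem_suballoc() shown here is an assumption drawn from
this entry, and example_submap_init() is a hypothetical name.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>
    #include <vm/vm_kern.h>
    #include <vm/vm_map.h>

    static vm_map_t example_submap;                 /* hypothetical submap */
    static vm_offset_t example_minaddr, example_maxaddr;

    static void
    example_submap_init(vm_size_t size)
    {
            /*
             * The final argument is the new "superpage_align" flag: TRUE
             * requests superpage-aligned space (as the kmem_map now does),
             * FALSE keeps the old VMFS_ANY_SPACE placement.
             */
            example_submap = kmem_suballoc(kernel_map, &example_minaddr,
                &example_maxaddr, size, TRUE);
    }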



# 2bc24aa9 28-Apr-2008 Alan Cox <alc@FreeBSD.org>

Eliminate pointless casts from kmem_suballoc().


# 24dedba9 30-Mar-2008 Alan Cox <alc@FreeBSD.org>

Eliminate an unnecessary printf() from kmem_suballoc(). The subsequent
panic() can be extended to convey the same information.


Revision tags: release/7.0.0_cvs, release/7.0.0, release/6.3.0_cvs, release/6.3.0
# 79c2840d 10-Jan-2008 Pawel Jakub Dawidek <pjd@FreeBSD.org>

When one tries to allocate memory with the M_WAITOK flag and we are short
of address space in the kmem map, invoke the vm_lowmem event in a loop and
wait a bit for subsystems to reclaim some memory, which in turn will
reclaim address space as well.

Note, this is a work-around.

Reviewed by: alc
Approved by: alc
MFC after: 3 days
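
[Editorial sketch] The work-around loop described here, in outline (not the
committed kmem_malloc() code). The helper name kva_wait_for_reclaim() and
the sleep interval are illustrative.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <sys/eventhandler.h>
    #include <sys/kernel.h>
    #include <sys/malloc.h>

    static int
    kva_wait_for_reclaim(int flags)
    {
            if ((flags & M_WAITOK) == 0)
                    return (ENOMEM);        /* M_NOWAIT callers fail at once */

            /*
             * Fire the vm_lowmem handlers so protocols, UMA, etc. drain
             * their caches, which frees kmem map address space as a side
             * effect.
             */
            EVENTHANDLER_INVOKE(vm_lowmem, 0);

            /* Give the subsystems a moment before the caller retries. */
            tsleep(&kva_wait_for_reclaim, 0, "kvawait", hz / 4);
            return (EAGAIN);                /* caller should retry */
    }

A kmem_malloc()-style caller would loop on EAGAIN until address space turns
up or it decides to give up.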



# eb2a0517 03-Jan-2008 Alan Cox <alc@FreeBSD.org>

Add an access type parameter to pmap_enter(). It will be used to implement
superpage promotion.

Correct a style error in kmem_malloc(): pmap_enter()'s last parameter is
a Boolean.
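
[Editorial sketch] The shape of a pmap_enter() call after this change. The
position of the new access argument is an assumption based on this entry
(only the last argument being a Boolean is confirmed by it), and
map_one_page() is a hypothetical caller.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_page.h>

    static void
    map_one_page(pmap_t pmap, vm_offset_t va, vm_page_t m, vm_prot_t fault_type)
    {
            /*
             * "fault_type" is the new access-type argument: it records which
             * access triggered the mapping, information the pmap can later
             * use for superpage promotion.  The final Boolean means "not
             * wired".
             */
            pmap_enter(pmap, va, fault_type, m,
                VM_PROT_READ | VM_PROT_WRITE, FALSE);
    }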


# 8ce2d00a 07-Nov-2007 Pawel Jakub Dawidek <pjd@FreeBSD.org>

Change unused 'user_wait' argument to 'timo' argument, which will be
used to specify timeout for msleep(9).

Discussed with: alc
Reviewed by: alc


# 0f2c2ce0 05-Apr-2007 Pawel Jakub Dawidek <pjd@FreeBSD.org>

When KVA is exhausted, try the vm_lowmem event one last time before
panicking. This helps a lot with ZFS stability.


# 9f5c801b 25-Feb-2007 Alan Cox <alc@FreeBSD.org>

Change the way that unmanaged pages are created. Specifically,
immediately flag any page that is allocated to an OBJT_PHYS object as
unmanaged in vm_page_alloc() rather than waiting for a later call to
vm_page_unmanage(). This allows for the elimination of some uses of
the page queues lock.

Change the type of the kernel and kmem objects from OBJT_DEFAULT to
OBJT_PHYS. This allows us to take advantage of the above change to
simplify the allocation of unmanaged pages in kmem_alloc() and
kmem_malloc().

Remove vm_page_unmanage(). It is no longer used.



Revision tags: release/6.2.0_cvs, release/6.2.0
# e6eaadba 07-Jan-2007 Alan Cox <alc@FreeBSD.org>

Declare the map entry created by kmem_init() for the range from
VM_MIN_KERNEL_ADDRESS to the end of the kernel's bootstrap data as
MAP_NOFAULT.

