#
984c6473 |
| 23-Jul-2010 |
Ivan Voras <ivoras@FreeBSD.org> |
Make lorunningspace catch up with hirunningspace. While there, add comment about the magic numbers.
Prodded by: alc
|
#
b089a177 |
| 20-Jul-2010 |
Ivan Voras <ivoras@FreeBSD.org> |
Fix expression style.
Prodded by: jhb
|
#
1de98e06 |
| 18-Jul-2010 |
Ivan Voras <ivoras@FreeBSD.org> |
In keeping with the Age-of-the-fruitbat theme, scale up hirunningspace on machines which can clearly afford the memory.
This is a somewhat conservative version of the patch - more fine tuning may be necessary.
Idea from: Thread on hackers@
Discussed with: alc
|
Revision tags: release/8.1.0_cvs, release/8.1.0 |
|
#
28823883 |
| 11-Jul-2010 |
Alan Cox <alc@FreeBSD.org> |
Change the implementation of vm_hold_free_pages() so that it performs at most one call to pmap_qremove(), and thus one TLB shootdown, instead of one call and TLB shootdown per page.
Simplify the interface to vm_hold_free_pages().
MFC after: 3 weeks
|
#
b99348e5 |
| 09-Jul-2010 |
Alan Cox <alc@FreeBSD.org> |
Add support for the VM_ALLOC_COUNT() hint to vm_page_alloc(). Consequently, the maintenance of vm_pageout_deficit can be localized to just two places: vm_page_alloc() and vm_pageout_scan().
This change also corrects an off-by-one error in the maintenance of vm_pageout_deficit. Historically, the buffer cache functions, allocbuf() and vm_hold_load_pages(), have not taken into account that vm_page_alloc() already increments vm_pageout_deficit by one.
Reviewed by: kib
|
#
d6c18050 |
| 07-Jul-2010 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Merge svn+ssh://svn.freebsd.org/base/head@209749
|
#
5f195aa3 |
| 05-Jul-2010 |
Konstantin Belousov <kib@FreeBSD.org> |
Add the ability for the allocflag argument of the vm_page_grab() to specify the increment of vm_pageout_deficit when sleeping due to page shortage. Then, in allocbuf(), the code to allocate pages when extending vmio buffer can be replaced by a call to vm_page_grab().
Suggested and reviewed by: alc
MFC after: 2 weeks
|
#
f4b9ace4 |
| 30-Jun-2010 |
Alan Cox <alc@FreeBSD.org> |
Improve bufdone_finish()'s handling of the bogus page. Specifically, if one or more mappings to the bogus page must be replaced, call pmap_qenter() just once. Previously, pmap_qenter() was called for each mapping to the bogus page.
MFC after: 3 weeks
|
#
95bf6530 |
| 12-Jun-2010 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Merge svn+ssh://svn.freebsd.org/base/head@209086
|
#
d19511c3 |
| 11-Jun-2010 |
Matthew D Fleming <mdf@FreeBSD.org> |
Add INVARIANTS checking that numfreebufs values are sane. Also add a per-buf flag to catch if a buf is double-counted in the free count. This code was useful to debug an instance where a local patch at Isilon was incorrectly managing numfreebufs for a new buf state.
Reviewed by: jeff
Approved by: zml (mentor)
|
#
8d4a7be8 |
| 08-Jun-2010 |
Konstantin Belousov <kib@FreeBSD.org> |
Reorganize the code in bdwrite() which handles the move of dirtiness from the buffer pages to the buffer. Combine the code to set the buffer dirty range (previously in vfs_setdirty()) and to clean the pages (vfs_clean_pages()) into the new function vfs_clean_pages_dirty_buf(). Now the vm object lock is acquired only once.
Drain the VPO_BUSY bit of the buffer pages before setting valid and clean bits in vfs_clean_pages_dirty_buf() with new helper vfs_drain_busy_pages(). pmap_clear_modify() asserts that page is not busy.
In vfs_busy_pages(), move the wait for draining of VPO_BUSY before the dirtiness handling, to follow the structure of vfs_clean_pages_dirty_buf().
Reported and tested by: pho
Suggested and reviewed by: alc
MFC after: 2 weeks
|
#
970c23b2 |
| 06-Jun-2010 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Merge svn+ssh://svn.freebsd.org/base/head@208879
|
#
c8fa8709 |
| 02-Jun-2010 |
Alan Cox <alc@FreeBSD.org> |
Minimize the use of the page queues lock for synchronizing access to the page's dirty field. With the exception of one case, access to this field is now synchronized by the object lock.
|
#
7708106a |
| 26-May-2010 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Merge svn+ssh://svn.freebsd.org/base/head@208557
|
#
e98d019d |
| 25-May-2010 |
Alan Cox <alc@FreeBSD.org> |
Eliminate the acquisition and release of the page queues lock from vfs_busy_pages(). It is no longer needed.
Submitted by: kib
|
#
567e51e1 |
| 24-May-2010 |
Alan Cox <alc@FreeBSD.org> |
Roughly half of a typical pmap_mincore() implementation is machine-independent code. Move this code into mincore(), and eliminate the page queues lock from pmap_mincore().
Push down the page queues lock into pmap_clear_modify(), pmap_clear_reference(), and pmap_is_modified(). Assert that these functions are never passed an unmanaged page.
Eliminate an inaccurate comment from powerpc/powerpc/mmu_if.m: Contrary to what the comment says, pmap_mincore() is not simply an optimization. Without a complete pmap_mincore() implementation, mincore() cannot return either MINCORE_MODIFIED or MINCORE_REFERENCED because only the pmap can provide this information.
Eliminate the page queues lock from vfs_setdirty_locked_object(), vm_pageout_clean(), vm_object_page_collect_flush(), and vm_object_page_clean(). Generally speaking, these are all accesses to the page's dirty field, which are synchronized by the containing vm object's lock.
Reduce the scope of the page queues lock in vm_object_madvise() and vm_page_dontneed().
Reviewed by: kib (an earlier version)
|
#
aa12e8b7 |
| 18-May-2010 |
Alan Cox <alc@FreeBSD.org> |
The page queues lock is no longer required by vm_page_set_invalid(), so eliminate it.
Assert that the object containing the page is locked in vm_page_test_dirty(). Perform some style clean up while I'm here.
Reviewed by: kib
|
#
3c4a2440 |
| 08-May-2010 |
Alan Cox <alc@FreeBSD.org> |
Push down the page queues lock into vm_page_cache(), vm_page_try_to_cache(), and vm_page_try_to_free(). Consequently, push down the page queues lock into pmap_enter_quick(), pmap_page_wired_mappings(), pmap_remove_all(), and pmap_remove_write().
Push down the page queues lock into Xen's pmap_page_is_mapped(). (I overlooked the Xen pmap in r207702.)
Switch to a per-processor counter for the total number of pages cached.
|
#
9307d8bd |
| 08-May-2010 |
Marcel Moolenaar <marcel@FreeBSD.org> |
Merge svn+ssh://svn.freebsd.org/base/head@207793
|
#
945f418a |
| 06-May-2010 |
Kirk McKusick <mckusick@FreeBSD.org> |
Final update to current version of head in preparation for reintegration.
|
#
e3ef0d2f |
| 05-May-2010 |
Alan Cox <alc@FreeBSD.org> |
Push down the acquisition of the page queues lock into vm_page_unwire().
Update the comment describing which lock should be held on entry to vm_page_wire().
Reviewed by: kib
|
#
a7283d32 |
| 04-May-2010 |
Alan Cox <alc@FreeBSD.org> |
Add page locking to the vm_page_cow* functions.
Push down the acquisition and release of the page queues lock into vm_page_wire().
Reviewed by: kib
|
#
c5a64851 |
| 03-May-2010 |
Alan Cox <alc@FreeBSD.org> |
Acquire the page lock around vm_page_unwire() and vm_page_wire().
Reviewed by: kib
|
#
139a0de7 |
| 02-May-2010 |
Alan Cox <alc@FreeBSD.org> |
Properly synchronize access to the page's hold_count in vfs_vmio_release().
Reviewed by: kib
|
#
b88b6c9d |
| 02-May-2010 |
Alan Cox <alc@FreeBSD.org> |
It makes no sense for vm_page_sleep_if_busy()'s helper, vm_page_sleep(), to unconditionally set PG_REFERENCED on a page before sleeping. In many cases, it's perfectly ok for the page to disappear, i.e., be reclaimed by the page daemon, before the caller to vm_page_sleep() is reawakened. Instead, we now explicitly set PG_REFERENCED in those cases where having the page persist until the caller is awakened is clearly desirable. Note, however, that setting PG_REFERENCED on the page is still only a hint, and not a guarantee that the page should persist.
|