#
47823319 |
| 11-Sep-2013 |
Peter Grehan <grehan@FreeBSD.org> |
IFC @ r255459
|
#
3aaea6ef |
| 08-Sep-2013 |
Konstantin Belousov <kib@FreeBSD.org> |
Drain the xbusy state at the two places which potentially do pmap_remove_all(). Not doing the drain allows pmap_enter() to proceed in parallel, making the pmap_remove_all() effects void.
The race results in an invalidated page mapped wired by usermode.
Reported and tested by: pho Reviewed by: alc Sponsored by: The FreeBSD Foundation Approved by: re (glebius)
|
#
0fbf163e |
| 06-Sep-2013 |
Mark Murray <markm@FreeBSD.org> |
MFC
|
#
d1d01586 |
| 05-Sep-2013 |
Simon J. Gerraty <sjg@FreeBSD.org> |
Merge from head
|
#
a677b314 |
| 05-Sep-2013 |
Konstantin Belousov <kib@FreeBSD.org> |
The vm_pageout_flush() function sbusies the pages in the passed page run. After that, the pager put method is called, usually translated to VOP_WRITE(). For filesystems which use the buffer cache, bufwrite() sbusies the buffer pages again, waiting for the xbusy state to drain. The latter is done in vfs_drain_busy_pages(), which is called with the buffer pages already sbusied (by vm_pageout_flush()).
Since vfs_drain_busy_pages() can only wait for one page at a time, and the object lock is dropped during the wait, previous pages in the buffer must be protected from other threads busying them. Until now, this was done by xbusying the pages, which is incompatible with the sbusy state in the new busy implementation. Switch to sbusy.
Reported and tested by: pho Sponsored by: The FreeBSD Foundation
|
#
46ed9e49 |
| 04-Sep-2013 |
Peter Grehan <grehan@FreeBSD.org> |
IFC @ r255209
|
#
e0653845 |
| 23-Aug-2013 |
Mark Murray <markm@FreeBSD.org> |
MFC
|
#
4f8cf6e5 |
| 22-Aug-2013 |
Konstantin Belousov <kib@FreeBSD.org> |
Both cluster_rbuild() and cluster_wbuild() sometimes set the pages shared busy without first draining the hard busy state. Previously this went unnoticed since VPO_BUSY and the m->busy field were distinct, and vm_page_io_start() did not verify that the passed page had the VPO_BUSY flag cleared, but such a page state is wrong. The new implementation is stricter and caught this case.
Drain the busy state as needed, before calling vm_page_sbusy().
Tested by: pho, jkim Sponsored by: The FreeBSD Foundation
|
#
5944de8e |
| 22-Aug-2013 |
Konstantin Belousov <kib@FreeBSD.org> |
Remove the deprecated VM_ALLOC_RETRY flag from vm_page_grab(9). The flag has been mandatory since r209792, where vm_page_grab(9) was changed to support only the allocation-retry semantic.
Suggested and reviewed by: alc Sponsored by: The FreeBSD Foundation
|
#
c7aebda8 |
| 09-Aug-2013 |
Attilio Rao <attilio@FreeBSD.org> |
The soft and hard busy mechanisms rely on the vm object lock to work. Unify the two concepts into a real, minimal sxlock where shared acquisition represents soft busy and exclusive acquisition represents hard busy. The old VPO_WANTED mechanism becomes the hard path for this new lock, and it becomes per-page rather than per-object. The vm_object lock becomes an interlock for this functionality: it can be held in either read or write mode. However, if the vm_object lock is held in read mode while acquiring or releasing the busy state, the thread owner cannot make any assumptions about the busy state unless it is also busying the page.
Also:
- Add a new flag to directly shared-busy pages while vm_page_alloc and vm_page_grab are being executed. This will be very helpful once these functions happen under a read object lock.
- Move the swapping sleep into its own per-object flag.
The KPI is heavily changed; this is why the version is bumped. It is very likely that some VM port users will need to change their own code.
Sponsored by: EMC / Isilon storage division Discussed with: alc Reviewed by: jeff, kib Tested by: gavin, bapt (older version) Tested by: pho, scottl
|
#
5df87b21 |
| 07-Aug-2013 |
Jeff Roberson <jeff@FreeBSD.org> |
Replace kernel virtual address space allocation with vmem. This provides transparent layering and better fragmentation.
- Normalize functions that allocate memory to use kmem_*
- Those that allocate address space are named kva_*
- Those that operate on maps are named kmap_*
- Implement recursive allocation handling for kmem_arena in vmem.
Reviewed by: alc Tested by: pho Sponsored by: EMC / Isilon Storage Division
|
#
40f65a4d |
| 07-Aug-2013 |
Peter Grehan <grehan@FreeBSD.org> |
IFC @ r254014
|
#
92e0a672 |
| 19-Jul-2013 |
Peter Grehan <grehan@FreeBSD.org> |
IFC @ r253461
|
#
552311f4 |
| 17-Jul-2013 |
Xin LI <delphij@FreeBSD.org> |
IFC @ r253398
|
#
982d7712 |
| 13-Jul-2013 |
Konstantin Belousov <kib@FreeBSD.org> |
Assert that runningbufspace does not underflow.
Sponsored by: The FreeBSD Foundation
|
#
da4ca6c8 |
| 13-Jul-2013 |
Konstantin Belousov <kib@FreeBSD.org> |
There is no need to count waiters for the runningbufspace.
Sponsored by: The FreeBSD Foundation
|
#
92e53673 |
| 11-Jul-2013 |
Konstantin Belousov <kib@FreeBSD.org> |
Do not invalidate the pages of a B_NOCACHE buffer, or of a buffer after an I/O error, if any user wired mappings exist. Doing the invalidation destroys the user wiring.
The change is a temporary measure to close the bug; the proper fix is to always delegate the invalidation of the page to the upper layers.
Reported and tested by: pho Reviewed by: alc Sponsored by: The FreeBSD Foundation MFC after: 2 weeks
|
#
d7b5c50b |
| 07-Jul-2013 |
Alfred Perlstein <alfred@FreeBSD.org> |
Make kassert_printf use __printflike.
Fix associated errors/warnings while I'm here.
Requested by: avg
|
#
ceae90c2 |
| 05-Jul-2013 |
Peter Grehan <grehan@FreeBSD.org> |
IFC @ r252763
|
#
5f518366 |
| 28-Jun-2013 |
Jeff Roberson <jeff@FreeBSD.org> |
- Add a general purpose resource allocator, vmem, from NetBSD. It was originally inspired by the Solaris vmem detailed in the proceedings of USENIX 2001. The NetBSD version was heavily refactored for bugs and simplicity.
- Use this resource allocator to allocate the buffer and transient maps. Buffer cache defragmentation is reduced by 25% when used by filesystems with mixed block sizes. Ultimately this may permit dynamic buffer cache sizing on low-KVA machines.
Discussed with: alc, kib, attilio Tested by: pho Sponsored by: EMC / Isilon Storage Division
|
#
cfe30d02 |
| 19-Jun-2013 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Merge fresh head.
|
#
ba39d89b |
| 06-Jun-2013 |
Jeff Roberson <jeff@FreeBSD.org> |
- Consolidate duplicate code into support functions.
- Split the bqlock into bqclean and bqdirty locks.
- Only acquire the wakeup synchronization locks when we cross a threshold requiring them.
- Restructure the way flushbufqueues() targets work so they are more SMP-friendly and sane.
Reviewed by: kib Discussed with: mckusick, attilio Sponsored by: EMC / Isilon Storage Division
M vfs_bio.c
|
#
92fab43f |
| 03-Jun-2013 |
Konstantin Belousov <kib@FreeBSD.org> |
When auto-sizing the buffer cache, limit the amount of physical memory used in the size estimation to 32GB. This provides around 100K buffer headers and the corresponding KVA for the buffer map at the peak. Sizing the cache larger is not useful, and also results in wasting and exhausting KVA on large machines.
Reported and tested by: bdrewery Sponsored by: The FreeBSD Foundation
|
Revision tags: release/8.4.0 |
|
#
39a4cd0c |
| 02-Jun-2013 |
Alan Cox <alc@FreeBSD.org> |
Reduce the scope of the VM object locking in brelse(). In my tests, this change reduced the total number of VM object lock acquisitions by brelse() by 74%.
Sponsored by: EMC / Isilon Storage Division
|
#
22a72260 |
| 31-May-2013 |
Jeff Roberson <jeff@FreeBSD.org> |
- Convert the bufobj lock to rwlock.
- Use a shared bufobj lock in getblk() and inmem().
- Convert softdep's lk to rwlock to match the bufobj lock.
- Move INFREECNT to b_flags and protect it with the buf lock.
- Remove unnecessary locking around bremfree() and BKGRDINPROG.
Sponsored by: EMC / Isilon Storage Division Discussed with: mckusick, kib, mdf
|