# 8f0ea33f | 13-Jan-2015 | Glen Barber <gjb@FreeBSD.org>
Reintegrate head revisions r273096-r277147
Sponsored by: The FreeBSD Foundation

# 18cc2ff0 | 12-Jan-2015 | Konstantin Belousov <kib@FreeBSD.org>
Revert r263475: TDP_DEVMEMIO is no longer needed, since amd64 /dev/kmem does not access kernel mappings directly.
Reviewed by: alc
Sponsored by: The FreeBSD Foundation
MFC after: 1 week

# 9268022b | 19-Nov-2014 | Simon J. Gerraty <sjg@FreeBSD.org>
Merge from head@274682

Revision tags: release/10.1.0

# 2a382033 | 14-Oct-2014 | Glen Barber <gjb@FreeBSD.org>
Reintegrate head@r273095
Sponsored by: The FreeBSD Foundation

# f1a52b69 | 14-Oct-2014 | Neel Natu <neel@FreeBSD.org>
IFC @r273066

# c81ab40b | 11-Oct-2014 | Alexander V. Chernikov <melifaro@FreeBSD.org>
Merge HEAD@r272944.

# a36f5532 | 10-Oct-2014 | Konstantin Belousov <kib@FreeBSD.org>
Make MAP_NOSYNC handling in the vm_fault() read-locked object path compatible with the write-locked path. Test for MAP_ENTRY_NOSYNC and set VPO_NOSYNC for pages with a dirty mask of zero (this does not exclude the possibility that the page is dirty, e.g. due to a read fault on a writeable mapping and a consequent write; the same issue exists in the slow path).
Use the helper vm_fault_dirty() to unify fast and slow path handling of VPO_NOSYNC and setting the dirty mask.
Reviewed by: alc
Sponsored by: The FreeBSD Foundation

# 4e27d36d | 17-Sep-2014 | Neel Natu <neel@FreeBSD.org>
IFC @r271694

# 246e7a2b | 02-Sep-2014 | Neel Natu <neel@FreeBSD.org>
IFC @r269962
Submitted by: Anish Gupta (akgupt3@gmail.com)

# 832fd780 | 23-Aug-2014 | Alexander V. Chernikov <melifaro@FreeBSD.org>
Sync to HEAD@r270409.

# b9ce8cc2 | 23-Aug-2014 | Alan Cox <alc@FreeBSD.org>
Relax one of the conditions for mapping a page on the fast path.
Reviewed by: kib
X-MFC with: r270011
Sponsored by: EMC / Isilon Storage Division

# ee7b0571 | 19-Aug-2014 | Simon J. Gerraty <sjg@FreeBSD.org>
Merge head from 7/28

# afe55ca3 | 15-Aug-2014 | Konstantin Belousov <kib@FreeBSD.org>
Implement a 'fast path' for the vm page fault handler. Or, it could be called a scalable path. When several preconditions hold, the vm object lock for the object containing the faulted page is taken in read mode, instead of write, which allows parallel fault processing in the region.
Namely, the fast path is taken when the faulted page already exists and does not need copy on write, is already fully valid, and is not busy. For technical reasons, the fast path is avoided when the fault is the first write on the vnode object, or when the fault is for wiring or a debugger read or write.
On the fast path, pmap_enter(9) is passed the PMAP_ENTER_NOSLEEP flag, since the object lock is kept. The pmap might fail to create the entry, in which case the fallback to the slow path is performed.
Reviewed by: alc
Tested by: pho (previous version)
Hardware provided and hosted by: The FreeBSD Foundation and Sentex Data Communications
Sponsored by: The FreeBSD Foundation
MFC after: 2 weeks

# 9f746b66 | 14-Aug-2014 | Alan Cox <alc@FreeBSD.org>
Avoid pointless (but harmless) actions on unmanaged pages.
Reviewed by: kib
Sponsored by: EMC / Isilon Storage Division

# 1b833d53 | 13-Aug-2014 | Alexander V. Chernikov <melifaro@FreeBSD.org>
Sync to HEAD@r269943.

# 39ffa8c1 | 08-Aug-2014 | Konstantin Belousov <kib@FreeBSD.org>
Change the pmap_enter(9) interface to take a flags parameter and the superpage mapping size (currently unused). The flags include the fault access bits, the wired flag as PMAP_ENTER_WIRED, and a new flag PMAP_ENTER_NOSLEEP to indicate that the pmap should not sleep.
For powerpc aim, both 32 and 64 bit, fix the implementation to ensure that the requested mapping is created when PMAP_ENTER_NOSLEEP is not specified; in particular, wait for the available memory required to proceed.
In collaboration with: alc
Tested by: nwhitehorn (ppc aim32 and booke)
Sponsored by: The FreeBSD Foundation and EMC / Isilon Storage Division
MFC after: 2 weeks

# 66cd575b | 02-Aug-2014 | Alan Cox <alc@FreeBSD.org>
Handle wiring failures in vm_map_wire() with the new functions pmap_unwire() and vm_object_unwire().
Retire vm_fault_{un,}wire(), since they are no longer used.
(See r268327 and r269134 for the motivation behind this change.)
Reviewed by: kib
Sponsored by: EMC / Isilon Storage Division

# 03462509 | 26-Jul-2014 | Alan Cox <alc@FreeBSD.org>
When unwiring a region of an address space, do not assume that the underlying physical pages are mapped by the pmap. If, for example, the application has performed an mprotect(..., PROT_NONE) on any part of the wired region, then those pages will no longer be mapped by the pmap. So, using the pmap to look up the wired pages in order to unwire them doesn't always work, and when it doesn't work wired pages are leaked.
To avoid the leak, introduce and use a new function vm_object_unwire() that locates the wired pages by traversing the object and its backing objects.
At the same time, switch from using pmap_change_wiring() to the recently introduced function pmap_unwire() for unwiring the region's mappings. pmap_unwire() is faster, because it operates on a range of virtual addresses rather than a single virtual page at a time. Moreover, by operating on a range, it is superpage friendly. It doesn't waste time performing unnecessary demotions.
Reported by: markj
Reviewed by: kib
Tested by: pho, jmg (arm)
Sponsored by: EMC / Isilon Storage Division

Revision tags: release/9.3.0

# 3ae10f74 | 16-Jun-2014 | Attilio Rao <attilio@FreeBSD.org>
- Modify vm_page_unwire() and vm_page_enqueue() to directly accept the queue into which pages that are going to be unwired should be enqueued.
- Add stronger checks to the enqueue/dequeue operations for the pagequeues when adding and removing pages to them.
Of course, for unmanaged pages the queue parameter of vm_page_unwire() will be ignored, just as the active parameter is today. This makes adding new pagequeues quicker.
This change effectively modifies the KPI. __FreeBSD_version will, however, be bumped only when the full cache of free pages is evicted.
Sponsored by: EMC / Isilon Storage Division
Reviewed by: alc
Tested by: pho

# 6cec9cad | 03-Jun-2014 | Peter Grehan <grehan@FreeBSD.org>
MFC @ r266724
An SVM update will follow this.

# 414fdaf0 | 21-May-2014 | Alan Somers <asomers@FreeBSD.org>
IFC @266473

# 2602a2ea | 21-May-2014 | Konstantin Belousov <kib@FreeBSD.org>
Remove a redundant loop. The inner goto restarts the whole page handling in a situation identical to the loop condition.
Sponsored by: The FreeBSD Foundation
MFC after: 3 days

# c8f780e3 | 11-May-2014 | Konstantin Belousov <kib@FreeBSD.org>
Fix locking. The dst_object must remain locked on the retry of the loop iteration.
Reported and tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 6 days

# 0973283d | 10-May-2014 | Konstantin Belousov <kib@FreeBSD.org>
For the upgrade case in vm_fault_copy_entry(), when the entry does not need COW and is writeable (i.e. becoming writeable due to the mprotect(2) operation), do not create a new backing object for the entry. The caller of the function is vm_map_protect(); the call is made to ensure that a wired entry has all pages resident and wired in the top-level object and to enable the write. We might need to copy a read-only page from some backing object into the top object or remap the page with the write allowed.
This fixes the mishandling of swap accounting when a read-only wired mapping is upgraded to write-enabled after fork. The previous code path did not account for the new object, but its creation is redundant anyway, and the change provides an optimization for this uncommon situation.
Reported by: markj
Suggested and reviewed by: alc (previous version)
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week

# 3b8f0845 | 28-Apr-2014 | Simon J. Gerraty <sjg@FreeBSD.org>
Merge head