Searched hist:"13458803f4111b552c573f20768353769ee401cd" (Results 1 – 3 of 3) sorted by relevance
/freebsd/sys/vm/
vm_map.h | diff 13458803f4111b552c573f20768353769ee401cd Thu May 10 17:16:42 CEST 2012 Alan Cox <alc@FreeBSD.org> Give vm_fault()'s sequential access optimization a makeover.
There are two aspects to the sequential access optimization: (1) read ahead of pages that are expected to be accessed in the near future and (2) unmap and cache behind of pages that are not expected to be accessed again. This revision changes both aspects.
The read ahead optimization is now more effective. It starts with the same initial read window as before, but arithmetically grows the window on sequential page faults. This can yield increased read bandwidth. For example, on one of my machines, a program using mmap() to read a file that is several times larger than the machine's physical memory takes about 17% less time to complete.
The unmap and cache behind optimization is now more selectively applied. The read ahead window must grow to its maximum size before unmap and cache behind is performed. This significantly reduces the number of times that pages are unmapped and cached only to be reactivated a short time later.
The unmap and cache behind optimization now clears each page's referenced flag. Previously, in the case of dirty pages, if the containing file was still mapped at the time that the page daemon examined the dirty pages, they would be reactivated.
From a stylistic standpoint, this revision also cleanly separates the implementation of the read ahead and unmap/cache behind optimizations.
Glanced at: kib
MFC after: 2 weeks
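A minimal sketch of the read-ahead policy the log entry describes, under stated assumptions: the struct, function, and constant names (fault_hist, read_ahead_window, READ_AHEAD_INIT, READ_AHEAD_GROW, READ_AHEAD_MAX) and the numeric values are hypothetical, not the commit's actual code, which keeps this state per map entry in vm_fault.c. It shows how growing the window by a fixed increment on each sequential fault produces the arithmetic growth mentioned above.

    #include <stddef.h>

    #define READ_AHEAD_INIT  4      /* initial window, in pages (hypothetical value) */
    #define READ_AHEAD_GROW  8      /* increment per sequential fault (hypothetical) */
    #define READ_AHEAD_MAX   120    /* window cap, in pages (hypothetical value) */

    struct fault_hist {
            size_t  next_read;      /* page index a sequential reader faults on next */
            int     read_ahead;     /* current read-ahead window, in pages */
    };

    /*
     * Return the number of pages to read ahead for a fault on page
     * 'pindex'.  A sequential fault grows the window by a constant
     * increment (arithmetic growth, capped at the maximum); any other
     * fault resets the window to its initial size.
     */
    static int
    read_ahead_window(struct fault_hist *fh, size_t pindex)
    {

            if (pindex == fh->next_read) {
                    fh->read_ahead += READ_AHEAD_GROW;
                    if (fh->read_ahead > READ_AHEAD_MAX)
                            fh->read_ahead = READ_AHEAD_MAX;
            } else
                    fh->read_ahead = READ_AHEAD_INIT;
            /* The next sequential fault lands just past the pages brought in. */
            fh->next_read = pindex + fh->read_ahead + 1;
            return (fh->read_ahead);
    }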
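In the same spirit, a hedged sketch of the unmap/cache-behind gating: it runs only once the window has saturated, i.e. for a proven sequential reader, and it clears each page's referenced flag so the page daemon will not reactivate a dirty page merely because its file is still mapped. This continues the sketch above (reusing struct fault_hist and READ_AHEAD_MAX); struct page and the helpers are stand-ins for the real pmap and page-queue machinery, not the commit's implementation.

    struct page {
            int     referenced;     /* referenced flag */
            int     dirty;          /* modified flag */
    };

    static void
    unmap_page(struct page *p)
    {

            /* Stand-in for removing every mapping of the page. */
            (void)p;
    }

    static void
    move_to_cache(struct page *p)
    {

            /* Stand-in for moving the page to the cache/inactive queue. */
            (void)p;
    }

    /*
     * Unmap and cache the 'n' resident pages behind the faulting
     * address, but only after the read-ahead window has grown to its
     * maximum; this avoids unmapping and caching pages that would be
     * reactivated a short time later.
     */
    static void
    cache_behind(struct fault_hist *fh, struct page **behind, int n)
    {
            int i;

            if (fh->read_ahead < READ_AHEAD_MAX)
                    return;
            for (i = 0; i < n; i++) {
                    unmap_page(behind[i]);
                    /*
                     * Clear the referenced flag so the page daemon does
                     * not reactivate a dirty page just because its file
                     * was still mapped when the flag was last set.
                     */
                    behind[i]->referenced = 0;
                    move_to_cache(behind[i]);
            }
    }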
vm_fault.c | diff 13458803f4111b552c573f20768353769ee401cd Thu May 10 17:16:42 CEST 2012 Alan Cox <alc@FreeBSD.org> Give vm_fault()'s sequential access optimization a makeover. (commit message identical to the vm_map.h entry above)
vm_map.c | diff 13458803f4111b552c573f20768353769ee401cd Thu May 10 17:16:42 CEST 2012 Alan Cox <alc@FreeBSD.org> Give vm_fault()'s sequential access optimization a makeover. (commit message identical to the vm_map.h entry above)