# c6879c6c | 23-Oct-2018 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r339015 through r339669.

# 01d4e214 | 05-Oct-2018 | Glen Barber <gjb@FreeBSD.org>
MFH r338661 through r339200.
Sponsored by: The FreeBSD Foundation

# e8bb589d | 05-Oct-2018 | Matt Macy <mmacy@FreeBSD.org>
eliminate locking surrounding ui_vmsize and swap reserve by using atomics
Change swap_reserve and swap_total to be in units of pages so that swap reservations can be done using only atomics instead of using a single global mutex for swap_reserve and a single mutex for all processes running under the same uid for uid accounting.
Results in an mmap speedup and a 70% increase in brk calls per second.
Reviewed by: alc@, markj@, kib@
Approved by: re (delphij@)
Differential Revision: https://reviews.freebsd.org/D16273
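
As a rough illustration of the scheme this commit describes, here is a simplified sketch of reserving swap in units of pages with atomic(9) operations instead of a global mutex; the variable names, the overcommit check, and the rollback are placeholders, not the actual swap_reserve() code.

    #include <sys/types.h>
    #include <machine/atomic.h>

    static u_long swap_total_pages;     /* pages of configured swap */
    static u_long swap_reserved_pages;  /* pages already promised */

    static int
    swap_reserve_pages_sketch(u_long incr)
    {
            u_long prev;

            /* Optimistically reserve, then check the limit. */
            prev = atomic_fetchadd_long(&swap_reserved_pages, incr);
            if (prev + incr > swap_total_pages) {
                    /* Over the limit: roll the reservation back. */
                    atomic_subtract_long(&swap_reserved_pages, incr);
                    return (0);
            }
            return (1);
    }

The committed change additionally charges the reserved pages to the per-uid accounting with atomics, which is what removes the per-uid mutex mentioned above.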

# ce44d808 | 27-Sep-2018 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r338731 through r338987.

# f5fbe90d | 24-Sep-2018 | Alan Cox <alc@FreeBSD.org>
Passing UMA_ZONE_NOFREE to uma_zcreate() for swpctrie_zone and swblk_zone is redundant, because uma_zone_reserve_kva() is performed on both zones and it sets this same flag on the zone. (Moreover, the implementation of the swap pager does not itself require these zones to be UMA_ZONE_NOFREE.)
Reviewed by: kib, markj
Approved by: re (gjb)
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D17296
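
For context, a sketch of the UMA calls involved; the item count and remaining zone flags are placeholders, not the literal swap_pager_swap_init() code. Since uma_zone_reserve_kva() marks the zone UMA_ZONE_NOFREE itself, the flag no longer needs to be passed to uma_zcreate():

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <vm/uma.h>

    static uma_zone_t swblk_zone;

    static void
    swblk_zone_init_sketch(int n)
    {
            /* Other zone flags elided; UMA_ZONE_NOFREE is not needed here. */
            swblk_zone = uma_zcreate("swblk", sizeof(struct swblk), NULL,
                NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
            /* uma_zone_reserve_kva() sets UMA_ZONE_NOFREE on the zone. */
            if (uma_zone_reserve_kva(swblk_zone, n) == 0)
                    printf("swblk zone: failed to reserve KVA\n");
    }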

# 14b841d4 | 11-Aug-2018 | Kyle Evans <kevans@FreeBSD.org>
MFH @ r337607, in preparation for boarding

# f9c0a512 | 10-Aug-2018 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r337286 through r337585.

# 78f1deef | 08-Aug-2018 | Alan Cox <alc@FreeBSD.org>
Defer and aggregate swap_pager_meta_build frees.
Before swp_pager_meta_build replaces an old swapblk with a new one, it frees the old one. To allow such freeing of blocks to be aggregated, have swp_pager_meta_build return the old swap block, and make the caller responsible for freeing it.
Define a pair of short static functions, swp_pager_init_freerange and swp_pager_update_freerange, to do the initialization and updating of blk addresses and counters used in aggregating blocks to be freed.
Submitted by: Doug Moore <dougm@rice.edu>
Reviewed by: kib, markj (an earlier version)
Tested by: pho
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D13707
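
A rough sketch of what such a pair of helpers can look like (the committed swp_pager_init_freerange()/swp_pager_update_freerange() in sys/vm/swap_pager.c may differ in detail): block addresses are accumulated into runs of consecutive blocks, and each run is handed to the existing freeing routine once.

    /*
     * Relies on swap_pager.c internals: daddr_t, SWAPBLK_NONE and the
     * existing swp_pager_freeswapspace() routine.
     */
    static void
    swp_pager_init_freerange(daddr_t *start, daddr_t *num)
    {
            *start = SWAPBLK_NONE;
            *num = 0;
    }

    static void
    swp_pager_update_freerange(daddr_t *start, daddr_t *num, daddr_t addr)
    {
            if (*start + *num == addr) {
                    /* addr extends the current run of consecutive blocks. */
                    (*num)++;
            } else {
                    /* Flush the previous run, then start a new one at addr. */
                    if (*num != 0)
                            swp_pager_freeswapspace(*start, *num);
                    *start = addr;
                    *num = 1;
            }
    }

A final flush of the last pending run after the loop is still needed; the sketch leaves that to the caller.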

Revision tags: release/11.2.0

# b7b8a096 | 14-Jun-2018 | Konstantin Belousov <kib@FreeBSD.org>
Handle the race between fork/vm_object_split() and faults.
If a fault started before vmspace_fork() locked the map, and then during the fork vm_map_copy_entry()->vm_object_split() is executed, it is possible that the fault instantiates the page into the original object when the page was already copied into the new object (see vm_object_split() for the orig/new objects terminology). This can happen if split found a busy page (e.g. from the fault) and slept dropping the object's lock, which allows the swap pager to instantiate read-behind pages for the fault. Then the restart of the scan can see a page in the scanned range, where it was already copied to the upper object.
Fix it by instantiating the read-ahead pages before the swap_pager_getpages() method drops the lock to allocate a pbuf. The object scan would then see the whole range prefilled with busy pages and not proceed into the range.
Note that vm_fault rechecks the map generation count after the object unlock, so that it restarts the handling if raced with split, and re-lookups the right page from the upper object.
In collaboration with: alc
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week

# 6469bdcd | 06-Apr-2018 | Brooks Davis <brooks@FreeBSD.org>
Move most of the contents of opt_compat.h to opt_global.h.
opt_compat.h is mentioned in nearly 180 files. In-progress network driver compatibility improvements may add over 100 more, so this is closer to "just about everywhere" than "only some files" per the guidance in sys/conf/options.
Keep COMPAT_LINUX32 in opt_compat.h as it is confined to a subset of sys/compat/linux/*.c. A fake _COMPAT_LINUX option ensures opt_compat.h is created on all architectures.
Move COMPAT_LINUXKPI to opt_dontuse.h as it is only used to control the set of compiled files.
Reviewed by: kib, cem, jhb, jtl
Sponsored by: DARPA, AFRL
Differential Revision: https://reviews.freebsd.org/D14941

# 3f060b60 | 16-Feb-2018 | Mark Johnston <markj@FreeBSD.org>
Use the conventional name for an array of pages.
No functional change intended.
Discussed with: kib
MFC after: 3 days

# e958ad4c | 12-Feb-2018 | Jeff Roberson <jeff@FreeBSD.org>
Make v_wire_count a per-cpu counter(9) counter. This eliminates a significant source of cache line contention from vm_page_alloc(). Use accessors and vm_page_unwire_noq() so that the mechanism can be easily changed in the future.
Reviewed by: markj
Discussed with: kib, glebius
Tested by: pho (earlier version)
Sponsored by: Netflix, Dell/EMC Isilon
Differential Revision: https://reviews.freebsd.org/D14273
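
For reference, the counter(9) pattern that v_wire_count is moved to looks roughly like the following; the variable and call sites are invented for illustration, not lifted from vm_page.c.

    #include <sys/types.h>
    #include <sys/systm.h>
    #include <sys/counter.h>
    #include <sys/malloc.h>

    static counter_u64_t wired_pages;

    static void
    wired_counter_init(void)
    {
            wired_pages = counter_u64_alloc(M_WAITOK);
    }

    static void
    wire_one_page(void)
    {
            /* Hot path: a per-CPU increment, no shared cache line written. */
            counter_u64_add(wired_pages, 1);
    }

    static uint64_t
    wired_pages_snapshot(void)
    {
            /* Reading sums the per-CPU parts; this is the expensive side. */
            return (counter_u64_fetch(wired_pages));
    }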

# e2068d0b | 06-Feb-2018 | Jeff Roberson <jeff@FreeBSD.org>
Use per-domain locks for vm page queue free. Move paging control from global to per-domain state. Protect reservations with the free lock from the domain that they belong to. Refactor to make vm domains more of a first class object.
Reviewed by: markj, kib, gallatin
Tested by: pho
Sponsored by: Netflix, Dell/EMC Isilon
Differential Revision: https://reviews.freebsd.org/D14000

# 4b49587c | 06-Jan-2018 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r327341 through r327623.

# 4abca9bb | 31-Dec-2017 | Alan Cox <alc@FreeBSD.org>
Previously, swap_pager_copy() freed swap blocks one at a time, via swp_pager_meta_ctl(), with no opportunity to recognize freeing of consecutive blocks and free fewer block ranges. To open that opportunity, this change removes the SWM_FREE option from swp_pager_meta_ctl(), and compels the caller to do the freeing when a valid block address is returned. In swap_pager_copy(), these frees are aggregated, so that a sequence of them can be done at one time.
The only other caller to swp_pager_meta_ctl() that passed SWM_FREE, swp_pager_unswapped(), is also modified to handle its single free explicitly.
Submitted by: Doug Moore <dougm@rice.edu>
Reviewed by: kib (an earlier version)
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D13290

# 230869e0 | 28-Nov-2017 | Alan Cox <alc@FreeBSD.org>
When the swap pager allocates space on disk, it requests contiguous blocks in a single call to blist_alloc(). However, when it frees that space, it previously called blist_free() on each block, one at a time. With this change, the swap pager identifies ranges of contiguous blocks to be freed, and calls blist_free() once per range. In one extreme case, described in the review, the time to perform an munmap(2) was reduced by 55%.
Submitted by: Doug Moore <dougm@rice.edu>
Reviewed by: kib
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D12397
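
The batching idea, shown with an invented helper rather than the committed swap pager code: walk a sorted list of swap block numbers and issue one blist_free() per run of consecutive blocks.

    #include <sys/types.h>
    #include <sys/blist.h>

    static void
    free_blocks_by_range(blist_t bl, const daddr_t *blk, int n)
    {
            daddr_t start;
            int i, run;

            for (i = 0; i < n; i += run) {
                    start = blk[i];
                    /* Count how many following blocks are consecutive. */
                    for (run = 1; i + run < n && blk[i + run] == start + run;
                        run++)
                            continue;
                    blist_free(bl, start, run);     /* one call per range */
            }
    }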

# 937d37fc | 19-Nov-2017 | Hans Petter Selasky <hselasky@FreeBSD.org>
Merge ^/head r325842 through r325998.

# df57947f | 18-Nov-2017 | Pedro F. Giffuni <pfg@FreeBSD.org>
spdx: initial adoption of licensing ID tags.
The Software Package Data Exchange (SPDX) group provides a specification to make it easier for automated tools to detect and summarize well-known open-source licenses. We are gradually adopting the specification, noting that the tags are considered only advisory and do not, in any way, supersede or replace the license texts.
Special thanks to Wind River for providing access to "The Duke of Highlander" tool: an older (2014) run over the FreeBSD tree was useful as a starting point.
Initially, only tag files that use BSD 4-Clause "Original" license.
RelNotes: yes
Differential Revision: https://reviews.freebsd.org/D13133
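
An example of the tag form being adopted, placed at the top of a source file's license block (here for the BSD 4-Clause "Original" license, the only group tagged in this first pass):

    /*-
     * SPDX-License-Identifier: BSD-4-Clause
     *
     * Copyright (c) ...
     * All rights reserved.
     * ...existing license text, unchanged...
     */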

# f8190300 | 10-Nov-2017 | Hans Petter Selasky <hselasky@FreeBSD.org>
Merge ^/head r325505 through r325662.

# 8d6fbbb8 | 08-Nov-2017 | Jeff Roberson <jeff@FreeBSD.org>
Replace many instances of VM_WAIT with blocking page allocation flags similar to the kernel memory allocator.
This simplifies NUMA allocation because the domain will be known at wait time and races between failure and sleeping are eliminated. This also reduces boilerplate code and simplifies callers.
A wait primitive is supplied for uma zones for similar reasons. This eliminates some non-specific VM_WAIT calls in favor of more explicit sleeps that may be satisfied without new pages.
Reviewed by: alc, kib, markj
Tested by: pho
Sponsored by: Netflix, Dell/EMC Isilon
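
As a before/after sketch of the pattern being replaced (the object, pindex, and surrounding code are placeholders, not a specific call site from the tree):

    vm_page_t m;

    /* Before: retry loop around the global "wait for free pages" primitive. */
    while ((m = vm_page_alloc(object, pindex, VM_ALLOC_NORMAL)) == NULL)
            VM_WAIT;

    /* After: the allocator sleeps itself, with the NUMA domain known. */
    m = vm_page_alloc(object, pindex, VM_ALLOC_NORMAL | VM_ALLOC_WAITOK);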

# c2c014f2 | 07-Nov-2017 | Hans Petter Selasky <hselasky@FreeBSD.org>
Merge ^/head r323559 through r325504.

# dd467c8a | 23-Oct-2017 | Enji Cooper <ngie@FreeBSD.org>
MFhead@r324884

# be7d4ac5 | 22-Oct-2017 | Edward Tomasz Napierala <trasz@FreeBSD.org>
Add OID for the vm.overcommit sysctl. This makes it possible to remove one call to sysctl(2) from jemalloc startup code. (That also requires changes to jemalloc, but I plan to push those to upstream first.)
Reviewed by: kib
MFC after: 2 weeks
Sponsored by: DARPA, AFRL
Differential Revision: https://reviews.freebsd.org/D12745
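
A userland sketch of what a fixed OID enables, assuming the macro added here is VM_OVERCOMMIT in <vm/vm_param.h>: the value can be read through a static MIB, with no name-to-OID translation (an extra sysctl(2) round trip) at startup. Error handling is omitted for brevity.

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <vm/vm_param.h>
    #include <stdio.h>

    int
    main(void)
    {
            int mib[2] = { CTL_VM, VM_OVERCOMMIT };
            int overcommit;
            size_t len = sizeof(overcommit);

            /* No sysctlbyname("vm.overcommit", ...) lookup is required. */
            if (sysctl(mib, 2, &overcommit, &len, NULL, 0) == 0)
                    printf("vm.overcommit = %d\n", overcommit);
            return (0);
    }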

# 0a8f81bc | 22-Oct-2017 | Enji Cooper <ngie@FreeBSD.org>
MFhead@r324837
While here, diff-reduce some of the changes in sys/boot by moving MK_COVERAGE=no to sys/boot/Makefile.inc.

# 1fffcd75 | 18-Oct-2017 | Konstantin Belousov <kib@FreeBSD.org>
Do not report a reduction of the swap zone if it was not reduced.
After r324600 we see the actual reservation.
Reported by: jkim
Sponsored by: The FreeBSD Foundation
MFC after: 1 week