# a0850dd0 | 01-May-2021 | Konstantin Belousov <kib@FreeBSD.org>

swappagerops: slightly more style-compliant formatting

Remove excess spaces from comments.

Reviewed by: markj
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D30070
# ecfbddf0 | 15-Apr-2021 | Konstantin Belousov <kib@FreeBSD.org>

sysctl vm.objects: report backing object and swap use

For anonymous objects, provide a handle kvo_me naming the object, and report the handle of the backing object. This allows userspace to deconstruct the shadow chain. Right now the handle is the address of the object in KVA, but this is not guaranteed.

For the same anonymous objects, report the swap space used for actually swapped-out pages in the kvo_swapped field. I do not believe it is useful to report a full 64-bit counter there, so only a uint32_t value is returned, clamped to the max.

For kinfo_vmentry, report the handle of the anonymous object backing the entry, so that the shadow chain for the specific mapping can be deconstructed.

Reviewed by: markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D29771
Revision tags: release/13.0.0
# cd853791 | 28-Nov-2020 | Konstantin Belousov <kib@FreeBSD.org>

Make MAXPHYS tunable. Bump MAXPHYS to 1M.

Replace MAXPHYS by the runtime variable maxphys. It is initialized from MAXPHYS by default, but can also be adjusted with the tunable kern.maxphys.

Make the b_pages[] array in struct buf flexible. Size b_pages[] for buffer cache buffers exactly to atop(maxbcachebuf) (currently it is sized to atop(MAXPHYS)), and size b_pages[] for pbufs to atop(maxphys) + 1. The +1 for pbufs allows several pbuf consumers, among them vmapbuf(), to use unaligned buffers still sized to maxphys, esp. when such buffers come from userspace (*). Overall, we save a significant amount of otherwise wasted memory in b_pages[] for buffer cache buffers, while bumping MAXPHYS to the desired high value.

Eliminate all direct uses of the MAXPHYS constant in kernel and driver sources, except the place which initializes maxphys. Some random (and arguably weird) uses of MAXPHYS, e.g. in the linuxolator, are converted straight. Some drivers, which use MAXPHYS to size embedded structures, get a private MAXPHYS-like constant; their conversion is out of scope for this work.

Changes to cam/, dev/ahci, dev/ata, dev/mpr, dev/mpt, dev/mvs, and dev/siis were either submitted by, or based on changes by, mav.

Suggested by: mav (*)
Reviewed by: imp, mav, imp, mckusick, scottl (intermediate versions)
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D27225
Revision tags: release/12.2.0
# c3aa3bf9 | 01-Sep-2020 | Mateusz Guzik <mjg@FreeBSD.org>

vm: clean up empty lines in .c and .h files
# e2515283 | 27-Aug-2020 | Glen Barber <gjb@FreeBSD.org>

MFH

Sponsored by: Rubicon Communications, LLC (netgate.com)
# 7ad2a82d | 19-Aug-2020 | Mateusz Guzik <mjg@FreeBSD.org>

vfs: drop the error parameter from vn_isdisk, introduce vn_isdisk_error

Most consumers pass NULL.
# c7aa572c | 31-Jul-2020 | Glen Barber <gjb@FreeBSD.org>

MFH

Sponsored by: Rubicon Communications, LLC (netgate.com)
# 00fd73d2 | 25-Jul-2020 | Doug Moore <dougm@FreeBSD.org>

Fix an overflow bug in the blist allocator that needlessly capped the maximum swap size by dividing a value, which was always a multiple of 64, by 64. Remove the code that reduced the maximum swap size down to that cap.

Eliminate the distinction between BLIST_BMAP_RADIX and BLIST_META_RADIX. Call them both BLIST_RADIX.

Make improvements to the blist self-test code to silence compiler warnings and to test larger blists.

Reported by: jmallett
Reviewed by: alc
Discussed with: kib
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D25736
# ee744122 | 24-Jul-2020 | Mateusz Guzik <mjg@FreeBSD.org>

vm: fix swap reservation leak and clean up surrounding code

The code did not subtract from the global counter if the per-uid reservation failed.

Cleanup highlights:
- load overcommit once
- move per-uid manipulation to dedicated routines
- don't fetch the wire count if the requested size is below the limit
- convert the return type from int to bool
- ifdef the routines with _KERNEL to keep vm.h compilable by userspace

Reviewed by: kib (previous version)
Differential Revision: https://reviews.freebsd.org/D25787
# 126a2470 | 23-Jul-2020 | Mateusz Guzik <mjg@FreeBSD.org>

vm: annotate swap_reserved with __exclusive_cache_line

The counter is updated all the time, and variables read afterwards share its cacheline. Note this still fundamentally does not scale and needs to be replaced; in the meantime it gets a band-aid.

brk1_processes -t 52 ops/s:
before: 8598298
after:  9098080
Revision tags: release/11.4.0
# 7ce3a312 | 09-Jun-2020 | Mateusz Guzik <mjg@FreeBSD.org>

vm: rework swap_pager_status to execute in constant time

The lock-protected iteration is trivially avoidable.

This removes a serialisation point from Linux binaries (which end up calling here from the sysinfo syscall).
# 2ac6b71f | 07-Mar-2020 | Dimitry Andric <dim@FreeBSD.org>

Merge ^/head r358712 through r358730.
# d869a17e | 06-Mar-2020 | Mark Johnston <markj@FreeBSD.org>

Use COUNTER_U64_DEFINE_EARLY() in places where it simplifies things.

Reviewed by: kib
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D23978
# 75dfc66c | 27-Feb-2020 | Dimitry Andric <dim@FreeBSD.org>

Merge ^/head r358269 through r358399.
# 7029da5c | 26-Feb-2020 | Pawel Biernacki <kaktus@FreeBSD.org>

Mark more nodes as CTLFLAG_MPSAFE or CTLFLAG_NEEDGIANT (17 of many)

r357614 added CTLFLAG_NEEDGIANT to make it easier to find nodes that are still not MPSAFE (or already are but aren't properly marked). Use it in preparation for a general review of all nodes.

This is a non-functional change that adds annotations to SYSCTL_NODE and SYSCTL_PROC nodes using one of the soon-to-be-required flags.

Mark all obvious cases as MPSAFE. All entries that haven't been marked as MPSAFE before are by default marked as NEEDGIANT.

Approved by: kib (mentor, blanket)
Commented by: kib, gallatin, melifaro
Differential Revision: https://reviews.freebsd.org/D23718
# 1f374d0c | 24-Feb-2020 | Dimitry Andric <dim@FreeBSD.org>

Merge ^/head r358263 through r358268.
# 36b01270 | 24-Feb-2020 | Doug Moore <dougm@FreeBSD.org>

The last argument to swp_pager_getswapspace is always 1. Remove that argument.

Reviewed by: markj
Differential Revision: https://reviews.freebsd.org/D23810
# 5d25f943 | 23-Feb-2020 | Dimitry Andric <dim@FreeBSD.org>

Merge ^/head r358239 through r358262.
# 7ca55392 | 23-Feb-2020 | Mark Johnston <markj@FreeBSD.org>

Allow swap_pager_putpages() to allocate one block at a time.

The minimum allocation size of 4 blocks is an old policy that came with the "new" swap pager in r42957. Since then the blist allocator has gotten better at reducing fragmentation; for example, with r349777 it can return a range that spans multiple leaves. When swap space is close to being exhausted, the minimum of 4 blocks most likely exacerbates memory pressure, so reduce it to 1.

Reported by: alc
Tested by: pho
Reviewed by: alc, dougm, kib
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D23763
# 43c7dd6b | 19-Feb-2020 | Dimitry Andric <dim@FreeBSD.org>

Merge ^/head r358075 through r358130.
# 6c5f36ff | 19-Feb-2020 | Jeff Roberson <jeff@FreeBSD.org>

Eliminate some unnecessary uses of UMA_ZONE_VM. Only zones involved in virtual address or physical page allocation need to be marked with this flag.

Reviewed by: markj
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D23712
# 3c4ad300 | 17-Feb-2020 | Dimitry Andric <dim@FreeBSD.org>

Merge ^/head r358000 through r358048.
# 34e2051f | 17-Feb-2020 | Mark Johnston <markj@FreeBSD.org>

Remove swblk_t.

It was used only to store the bounds of each swap device. However, since swblk_t is a signed 32-bit int and daddr_t is a signed 64-bit int, swp_pager_isondev() may return an invalid result if swap devices are repeatedly added and removed and sw_end for a device ends up becoming a negative number.

Note that the removed comment about maximum swap size still applies.

Reviewed by: jeff, kib
Tested by: pho
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D23666
# 725b4ff0 | 17-Feb-2020 | Mark Johnston <markj@FreeBSD.org>

Fix a swap block allocation race.

putpages' allocation of swap blocks is done under the global sw_dev lock. Previously it would drop that lock before inserting the allocated blocks into the object's trie, creating a window in which swap blocks are allocated but are not visible to swapoff. This can cause swp_pager_strategy() to fail and panic the system.

Fix the problem bluntly, by allocating swap blocks under the object lock.

Reviewed by: jeff, kib
Tested by: pho
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D23665
# c90d075b | 17-Feb-2020 | Mark Johnston <markj@FreeBSD.org>

Fix object locking races in swapoff(2).

swap_pager_swapoff_object()'s goal is to allocate pages for all valid swap blocks belonging to the object for which there is no resident page. If the page corresponding to a block is already resident and valid, the block can simply be discarded.

The existing implementation tries to minimize the number of I/Os used. For each cluster of swap blocks, it finds maximal runs of valid swap blocks not resident in memory, and of valid resident pages. During this processing, the object lock may be dropped in several places: when calling getpages, or when blocking on a busy page in vm_page_grab_pages(). While the lock is dropped, another thread may free swap blocks, causing getpages to page in stale data.

Fix the problem following a suggestion from Jeff: use getpages' readahead capability to perform clustering rather than doing it ourselves. This simplifies the code a bit without reintroducing the old behaviour of performing one I/O per page.

Reviewed by: jeff
Reported by: dhw, gallatin
Tested by: pho
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D23664