#
6aa98f78 |
| 29-Jul-2024 |
Mark Johnston <markj@FreeBSD.org> |
conf: Remove kernel stack swapping support, part 12

Remove the NO_SWAPPING option. There is still some code in vm_swapout.c, but it relates to RACCT handling. Remove the option and make compilation of vm_swapout.c conditional on RACCT.

Tested by: pho
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D46130
|
#
ec84a986 |
| 29-Jul-2024 |
Mark Johnston <markj@FreeBSD.org> |
vm: Remove kernel stack swapping support, part 11

- Remove sysctls that control stack swapping, update documentation.
- Remove vm_swapout_dummy.c, which serves no purpose now.

Tested by: pho
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D46129
|
#
47288801 |
| 29-Jul-2024 |
Mark Johnston <markj@FreeBSD.org> |
proc: Remove kernel stack swapping support, part 6

- Remove most checks of the P_INMEM flag.
- Some uses remain since a few userspace tools, e.g., ps(1) and top(1), expect the flag to be set. These can be cleaned up, but the code has most likely been copy-pasted elsewhere and will linger for a long time.

Tested by: pho
Reviewed by: alc, imp, kib
Differential Revision: https://reviews.freebsd.org/D46117
|
#
8370e9df |
| 29-Jul-2024 |
Mark Johnston <markj@FreeBSD.org> |
vm: Remove kernel stack swapping support, part 3

- Modify PHOLD() to no longer fault in the process.
- Remove _PHOLD_LITE(), which is now the same as _PHOLD(), fix up consumers.
- Remove faultin() and its callees.

Tested by: pho
Reviewed by: imp, kib
Differential Revision: https://reviews.freebsd.org/D46114
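For context, a minimal before/after sketch of the hold semantics. The real _PHOLD() is a macro in sys/sys/proc.h with additional KASSERTs; the function names here are purely illustrative.

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/proc.h>

    /* Before: holding a process could fault its kernel stacks back in. */
    static void
    phold_before(struct proc *p)
    {
            PROC_LOCK_ASSERT(p, MA_OWNED);
            p->p_lock++;
            if ((p->p_flag & P_INMEM) == 0)
                    faultin(p);             /* could sleep on swap I/O */
    }

    /* After: stacks are never swapped out, so a hold is just a count. */
    static void
    phold_after(struct proc *p)
    {
            PROC_LOCK_ASSERT(p, MA_OWNED);
            p->p_lock++;
    }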
|
#
0dd77895 |
| 29-Jul-2024 |
Mark Johnston <markj@FreeBSD.org> |
vm: Remove kernel stack swapping support, part 2

After mi_startup() finishes, thread0 becomes the "swapper", whose responsibility is to swap threads back in on demand. Now that threads can't be swapped out, there is no use for this thread. Just sleep forever once sysinits are finished; thread_exit() doesn't work because thread0 is allocated statically. The thread could be repurposed if that would be useful.

Tested by: pho
Reviewed by: alc, imp, kib
Differential Revision: https://reviews.freebsd.org/D46113
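A minimal sketch of the idle loop described above; the function body is hypothetical (the committed code lives in sys/vm/vm_glue.c), but tsleep() and proc0 are the standard kernel facilities.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/proc.h>

    static void
    swapper(void)
    {
            /* Sysinits are done; thread0 has nothing left to do, so park. */
            for (;;)
                    tsleep(&proc0, PVM, "-", 0);
    }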
|
#
13a1129d |
| 29-Jul-2024 |
Mark Johnston <markj@FreeBSD.org> |
vm: Remove kernel stack swapping support, part 1

- Disconnect the swapout daemon from the page daemon.
- Remove swapout() and swapout_procs().

Tested by: pho
Reviewed by: alc, imp, kib
Differential Revision: https://reviews.freebsd.org/D46112
|
Revision tags: release/14.1.0 |
|
#
7a79d066 |
| 09-Apr-2024 |
Bojan Novković <bnovkov@FreeBSD.org> |
vm: improve kstack_object pindex calculation to avoid pindex holes

This commit replaces the linear transformation of kernel virtual addresses to kstack_object pindex values with a non-linear scheme that circumvents physical memory fragmentation caused by kernel stack guard pages. The new mapping scheme is used to effectively "skip" guard pages and assign pindices for non-guard pages in a contiguous fashion.

The new allocation scheme requires that all default-sized kstack KVAs come from a separate, specially aligned region of the KVA space. For this to work, this commit introduces a dedicated per-domain kstack KVA arena used to allocate kernel stacks of default size. The behaviour on 32-bit platforms remains unchanged due to a significantly smaller KVA space.

Aside from fulfilling the requirements imposed by the new scheme, a separate kstack KVA arena facilitates superpage promotion in the rest of the kernel and causes most kstacks to have guard pages at both ends.

Reviewed by: alc, kib, markj
Tested by: markj
Approved by: markj (mentor)
Differential Revision: https://reviews.freebsd.org/D38852
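To make the mapping concrete, a hedged sketch of the idea (the helper name and the arena_start parameter are hypothetical; the committed vm_glue.c code differs in detail): each slot in the aligned kstack arena spans the stack pages plus its guard pages, but only the stack pages receive pindices, so kstack_object sees no holes.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_param.h>

    static vm_pindex_t
    kstack_pindex_sketch(vm_offset_t arena_start, vm_offset_t ks, int kpages)
    {
            vm_pindex_t slot;

            /* Which slot of the per-domain arena holds this stack? */
            slot = atop(ks - arena_start) / (kpages + KSTACK_GUARD_PAGES);
            /* Guard pages get no pindex, so stack pages stay contiguous. */
            return (slot * kpages);
    }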
|
Revision tags: release/13.3.0 |
|
#
29363fb4 |
| 23-Nov-2023 |
Warner Losh <imp@FreeBSD.org> |
sys: Remove ancient SCCS tags.

Remove ancient SCCS tags from the tree via automated scripting, with two minor fixups to keep things compiling. All the common forms in the tree were removed with a perl script.

Sponsored by: Netflix
|
Revision tags: release/14.0.0 |
|
#
685dc743 |
| 16-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
sys: Remove $FreeBSD$: one-line .c pattern
Remove /^[\s*]*__FBSDID\("\$FreeBSD\$"\);?\s*\n/
|
Revision tags: release/13.2.0, release/12.4.0, release/13.1.0 |
|
#
b8ebd99a |
| 14-Apr-2022 |
John Baldwin <jhb@FreeBSD.org> |
vm: Use __diagused for variables only used in KASSERT().
|
Revision tags: release/12.3.0 |
|
#
0f559a9f |
| 17-Oct-2021 |
Edward Tomasz Napierala <trasz@FreeBSD.org> |
Make vmdaemon timeout configurable

Make vmdaemon timeout configurable, so that one can adjust how often it runs.

Here's a trick: set this to 1, then run 'limits -m 0 sh', then run whatever you want with 'ktrace -it XXX', and observe how the working set changes over time.

Reviewed By: kib
Sponsored By: EPSRC
Differential Revision: https://reviews.freebsd.org/D22038
|
Revision tags: release/13.0.0, release/12.2.0, release/11.4.0 |
|
#
f13fa9df |
| 26-Apr-2020 |
Mark Johnston <markj@FreeBSD.org> |
Use a single VM object for kernel stacks.

Previously we allocated a separate VM object for each kernel stack. However, fully constructed kernel stacks are cached by UMA, so there is no harm in using a single global object for all stacks. This reduces memory consumption and makes it easier to define a memory allocation policy for kernel stack pages, with the aim of reducing physical memory fragmentation.

Add a global kstack_object, and use the stack KVA address to index into the object like we do with kernel_object.

Reviewed by: kib
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D24473
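The indexing amounts to the same linear transformation used for kernel_object; a hedged sketch with a hypothetical helper name:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_param.h>

    static vm_pindex_t
    kstack_pindex_linear(vm_offset_t ks)
    {
            /* The stack's page offset within the kernel map names its pages. */
            return (atop(ks - VM_MIN_KERNEL_ADDRESS));
    }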
|
#
91019ea7 |
| 29-Feb-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r358400 through r358465.
|
#
c99d0c58 |
| 28-Feb-2020 |
Mark Johnston <markj@FreeBSD.org> |
Add a blocking counter KPI.

refcount(9) was recently extended to support waiting on a refcount to drop to zero, as this was needed for a lockless VM object paging-in-progress counter. However, this adds overhead to all uses of refcount(9) and doesn't really match traditional refcounting semantics: once a counter has dropped to zero, the protected object may be freed at any point and it is not safe to dereference the counter.

This change removes that extension and instead adds a new set of KPIs, blockcount_*, for use by VM object PIP and busy.

Reviewed by: jeff, kib, mjg
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D23723
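A hedged sketch of the blockcount_* pattern, modeled on how paging-in-progress counts are tracked; the blockcount_* calls follow sys/sys/blockcount.h, but the pager_state structure and helper names are hypothetical.

    #include <sys/param.h>
    #include <sys/blockcount.h>
    #include <sys/priority.h>

    struct pager_state {
            blockcount_t    ps_pip;         /* outstanding paging operations */
    };

    static void
    pager_io_start(struct pager_state *ps)
    {
            blockcount_acquire(&ps->ps_pip, 1);     /* one more I/O in flight */
    }

    static void
    pager_io_done(struct pager_state *ps)
    {
            blockcount_release(&ps->ps_pip, 1);     /* wakes waiters at zero */
    }

    static void
    pager_drain(struct pager_state *ps)
    {
            /* Sleep until every in-flight operation has completed. */
            blockcount_wait(&ps->ps_pip, NULL, "pgdrain", PVM);
    }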
|
#
43c7dd6b |
| 19-Feb-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r358075 through r358130.
|
#
e9ceb9dd |
| 19-Feb-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Don't release xbusy on kmem pages.

After lockless page lookup we will not be able to guarantee that they can be reacquired without blocking.

Reviewed by: kib
Discussed with: markj
Differential Revision: https://reviews.freebsd.org/D23506
|
#
53d2936c |
| 20-Jan-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r356848 through r356919.
|
#
d6e13f3b |
| 20-Jan-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Don't hold the object lock while calling getpages.

The vnode pager does not want the object lock held. Moving this out allows further object lock scope reduction in callers. While here add some missing paging in progress calls and an assert. The object handle is now protected explicitly with pip.

Reviewed by: kib, markj
Differential Revision: https://reviews.freebsd.org/D23033
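A hedged sketch of the protection pattern described above: the paging-in-progress count pins the object while its lock is dropped around the pager call. The caller is hypothetical and the placeholder comment stands in for the actual getpages request.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>

    static void
    object_io_sketch(vm_object_t object)
    {
            VM_OBJECT_ASSERT_WLOCKED(object);
            vm_object_pip_add(object, 1);   /* pin the object's identity */
            VM_OBJECT_WUNLOCK(object);      /* vnode pager wants it dropped */

            /* ... issue the getpages request here, without the object lock ... */

            VM_OBJECT_WLOCK(object);
            vm_object_pip_wakeup(object);   /* drop the PIP reference */
    }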
|
#
9f5632e6 |
| 28-Dec-2019 |
Mark Johnston <markj@FreeBSD.org> |
Remove page locking for queue operations.

With the previous reviews, the page lock is no longer required in order to perform queue operations on a page. It is also no longer needed in the page queue scans. This change effectively eliminates remaining uses of the page lock and also the false sharing caused by multiple pages sharing a page lock.

Reviewed by: jeff
Tested by: pho
Sponsored by: Netflix, Intel
Differential Revision: https://reviews.freebsd.org/D22885
|
#
3c01c56b |
| 28-Dec-2019 |
Mark Johnston <markj@FreeBSD.org> |
Don't update per-page activation counts in the swapout code.

This avoids duplicating the work of the page daemon's active queue scan. Moreover, this duplication was inconsistent:
- PGA_REFERENCED is not counted in act_count unless pmap_ts_referenced() returned 0, but the page daemon always counts PGA_REFERENCED towards the activation count.
- The swapout daemon always activates a referenced page, but the page daemon only does so when the containing object is mapped at least once.

The main purpose of swapout_deactivate_pages() is to shrink the number of pages mapped into a given pmap. To do this without unmapping active pages, use the non-destructive pmap_is_referenced() instead of the destructive pmap_ts_referenced() and deactivate pages accordingly. This simplifies some future changes to the locking protocol for page queue state.

Reviewed by: kib
Discussed with: jeff
Tested by: pho
Sponsored by: Netflix, Intel
Differential Revision: https://reviews.freebsd.org/D22674
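For illustration, a minimal sketch of the per-page decision (hypothetical helper; the swapout code it approximates was later removed entirely, per the 2024 commits above):

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/pmap.h>
    #include <vm/vm_page.h>

    static void
    swapout_page_sketch(vm_page_t m)
    {
            /*
             * pmap_is_referenced() only reads the reference bits, leaving
             * them for the page daemon's pmap_ts_referenced() to harvest.
             */
            if (!pmap_is_referenced(m))
                    vm_page_deactivate(m);
    }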
|
#
61a74c5c |
| 15-Dec-2019 |
Jeff Roberson <jeff@FreeBSD.org> |
schedlock 1/4

Eliminate recursion from most thread_lock consumers. Return from sched_add() without the thread_lock held. This eliminates unnecessary atomics and lock word loads as well as reducing the hold time for scheduler locks. This will eventually allow for lockless remote adds.

Discussed with: kib
Reviewed by: jhb
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D22626
|
#
9be9ea42 |
| 10-Dec-2019 |
Mark Johnston <markj@FreeBSD.org> |
Add a helper function to the swapout daemon's deactivation code.

vm_swapout_object_deactivate_pages() is renamed to vm_swapout_object_deactivate(), and the loop body is moved into the new vm_swapout_object_deactivate_page(). This makes the code a bit easier to follow and is in preparation for some functional changes.

Reviewed by: jeff, kib
Sponsored by: Netflix, Intel
Differential Revision: https://reviews.freebsd.org/D22651
|
#
5cff1f4d |
| 10-Dec-2019 |
Mark Johnston <markj@FreeBSD.org> |
Introduce vm_page_astate.

This is a 32-bit structure embedded in each vm_page, consisting mostly of page queue state. The use of a structure makes it easy to store a snapshot of a page's queue state in a stack variable and use cmpset loops to update that state without requiring the page lock.

This change merely adds the structure and updates references to atomic state fields. No functional change intended.

Reviewed by: alc, jeff, kib
Sponsored by: Netflix, Intel
Differential Revision: https://reviews.freebsd.org/D22650
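A hedged sketch of the snapshot-and-cmpset pattern the structure enables. The vm_page_astate_load()/vm_page_astate_fcmpset() helpers and the queue field follow FreeBSD's vm_page.h, but treat the helper below as an illustration, not the page daemon's actual code.

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static void
    page_requeue_sketch(vm_page_t m, uint8_t queue)
    {
            vm_page_astate_t old, new;

            old = vm_page_astate_load(m);   /* snapshot into a stack variable */
            do {
                    new = old;
                    new.queue = queue;
                    /* fcmpset refreshes "old" on failure, so simply retry. */
            } while (!vm_page_astate_fcmpset(m, &old, new));
    }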
|
Revision tags: release/12.1.0 |
|
#
0012f373 |
| 15-Oct-2019 |
Jeff Roberson <jeff@FreeBSD.org> |
(4/6) Protect page valid with the busy lock.

Atomics are used for page busy and valid state when the shared busy is held. The details of the locking protocol and valid and dirty synchronization are in the updated vm_page.h comments.

Reviewed by: kib, markj
Tested by: pho
Sponsored by: Netflix, Intel
Differential Revision: https://reviews.freebsd.org/D21594
|
#
63e97555 |
| 15-Oct-2019 |
Jeff Roberson <jeff@FreeBSD.org> |
(1/6) Replace busy checks with acquires where it is trivial to do so.

This is the first in a series of patches that promotes the page busy field to a first class lock that no longer requires the object lock for consistency.

Reviewed by: kib, markj
Tested by: pho
Sponsored by: Netflix, Intel
Differential Revision: https://reviews.freebsd.org/D21548
|