# 6d42d5db | 19-Oct-2024 | Doug Moore <dougm@FreeBSD.org>
vm_glue: use vm_page_alloc_domain_after
Drop the function vm_page_alloc_domain, used only in vm_thread_stack_back, and replace it with vm_page_alloc_domain_after there, with the extra mpred argument either computed on the first iteration or retrieved from previous iterations. Define a function vm_page_mpred() for computing that first mpred argument.
Reviewed by: bnovkov
Differential Revision: https://reviews.freebsd.org/D47054
Revision tags: release/13.4.0
# 01518f5e | 29-Jul-2024 | Mark Johnston <markj@FreeBSD.org>
sleepqueue: Remove kernel stack swapping support, part 10
- Remove kick_proc0().
- Make the return type of sleepq_broadcast(), sleepq_signal(), etc., void.
- Fix up callers.
Tested by: pho
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D46128
# 235750ee | 29-Jul-2024 | Mark Johnston <markj@FreeBSD.org>
vm: Remove kernel stack swapping support, part 8
- The kernel stack objects do not need to be pageable, so use OBJT_PHYS objects instead. The main difference is that mappings do not require PV entries.
- Make some externally visible functions, relating to kernel thread stack internals, private to vm_glue.c, as their external consumers are now gone.
Tested by: pho
Reviewed by: alc, kib
Differential Revision: https://reviews.freebsd.org/D46119
# 0dd77895 | 29-Jul-2024 | Mark Johnston <markj@FreeBSD.org>
vm: Remove kernel stack swapping support, part 2
After mi_startup() finishes, thread0 becomes the "swapper", whose responsibility is to swap threads back in on demand. Now that threads can't be swapped out, there is no use for this thread. Just sleep forever once sysinits are finished; thread_exit() doesn't work because thread0 is allocated statically. The thread could be repurposed if that would be useful.
Tested by: pho
Reviewed by: alc, imp, kib
Differential Revision: https://reviews.freebsd.org/D46113
# 3e00c11a | 12-Jul-2024 | Alan Cox <alc@FreeBSD.org>
arm64: Support the L3 ATTR_CONTIGUOUS page size in pagesizes[]
Update pagesizes[] to include the L3 ATTR_CONTIGUOUS (L3C) page size, which is 64KB when the base page size is 4KB and 2MB when the base page size is 16KB.
Add support for L3C pages to shm_create_largepage().
Add support for creating L3C page mappings to pmap_enter(psind=1).
Add support for reporting L3C page mappings to mincore(2) and procstat(8).
Update vm_fault_soft_fast() and vm_fault_populate() to handle multiple superpage sizes.
Declare arm64 as supporting two superpage reservation sizes, and simulate two superpage reservation sizes, updating the vm_page's psind field to reflect the correct page size from pagesizes[]. (The next patch in this series will replace this simulation. This patch is already big enough.)
Co-authored-by: Eliot Solomon <ehs3@rice.edu>
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D45766
Revision tags: release/14.1.0
# 9e016408 | 10-May-2024 | John Baldwin <jhb@FreeBSD.org>
vm: Change the return types of kernacc and useracc to bool
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D45156
# 661a83f9 | 29-Apr-2024 | Mark Johnston <markj@FreeBSD.org>
vm: Fix error handling in vm_thread_stack_back()
vm_object_page_remove() wants to busy the page, but that won't work here. (Kernel stack pages are always busy.)
Make the error handling path look more like vm_thread_stack_dispose().
Reported by: pho
Reviewed by: kib, bnovkov
Fixes: 7a79d0669761 ("vm: improve kstack_object pindex calculation to avoid pindex holes")
Differential Revision: https://reviews.freebsd.org/D45019
# 800da341 | 22-Apr-2024 | Mark Johnston <markj@FreeBSD.org>
thread: Simplify sanitizer integration with thread creation
fork() may allocate a new thread in one of two ways: from UMA, or cached in a freed proc that was just allocated from UMA. In either case, KASAN and KMSAN need to initialize some state; in particular they need to initialize the shadow mapping of the new thread's stack.
This is done differently between KASAN and KMSAN, which is confusing. This patch improves things a bit:
- Add a new thread_recycle() function, which moves all kernel stack handling out of kern_fork.c, since it doesn't really belong there.
- Then, thread_alloc_stack() has only one local caller, so just inline it.
- Avoid redundant shadow stack initialization: thread_alloc() initializes the KMSAN shadow stack (via kmsan_thread_alloc()) even though vm_thread_new() already did that.
- Add kasan_thread_alloc(), for consistency with kmsan_thread_alloc().
No functional change intended.
Reviewed by: khng
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D44891
# 7a79d066 | 09-Apr-2024 | Bojan Novković <bnovkov@FreeBSD.org>
vm: improve kstack_object pindex calculation to avoid pindex holes
This commit replaces the linear transformation of kernel virtual addresses to kstack_object pindex values with a non-linear scheme that circumvents physical memory fragmentation caused by kernel stack guard pages. The new mapping scheme is used to effectively "skip" guard pages and assign pindices for non-guard pages in a contiguous fashion.
The new allocation scheme requires that all default-sized kstack KVAs come from a separate, specially aligned region of the KVA space. For this to work, this commit introduces a dedicated per-domain kstack KVA arena used to allocate kernel stacks of the default size. The behaviour on 32-bit platforms remains unchanged due to a significantly smaller KVA space.
Aside from fulfilling the requirements imposed by the new scheme, a separate kstack KVA arena facilitates superpage promotion in the rest of the kernel and causes most kstacks to have guard pages at both ends.
Reviewed by: alc, kib, markj
Tested by: markj
Approved by: markj (mentor)
Differential Revision: https://reviews.freebsd.org/D38852
Revision tags: release/13.3.0
# 29363fb4 | 23-Nov-2023 | Warner Losh <imp@FreeBSD.org>
sys: Remove ancient SCCS tags.
Remove ancient SCCS tags from the tree via automated scripting, with two minor fixups to keep things compiling. All the common forms in the tree were removed with a Perl script.
Sponsored by: Netflix
Revision tags: release/14.0.0
# 685dc743 | 16-Aug-2023 | Warner Losh <imp@FreeBSD.org>
sys: Remove $FreeBSD$: one-line .c pattern
Remove /^[\s*]*__FBSDID\("\$FreeBSD\$"\);?\s*\n/
Revision tags: release/13.2.0
# bbb6228e | 13-Feb-2023 | Mateusz Guzik <mjg@FreeBSD.org>
vm: ansify
Sponsored by: Rubicon Communications, LLC ("Netgate")
Revision tags: release/12.4.0, release/13.1.0
# f54882a8 | 06-Jan-2022 | Konstantin Belousov <kib@FreeBSD.org>
Remove special kstack allocation code for mips.
The arch required two-page alignment due to a single TLB entry caching two consecutive mappings.
Reviewed by: imp
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D33763
Revision tags: release/12.3.0
# c28e39c3 | 03-Nov-2021 | Gordon Bergling <gbe@FreeBSD.org>
Fix a common typo in sysctl descriptions
- s/maxiumum/maximum/
MFC after: 3 days
# 10094910 | 10-Aug-2021 | Mark Johnston <markj@FreeBSD.org>
uma: Add KMSAN hooks
For now, just hook the allocation path: upon allocation, items are marked as initialized (absent M_ZERO). Some zones are exempted from this when it would otherwise raise false positives.
Use kmsan_orig() to update the origin map for UMA and malloc(9) allocations. This allows KMSAN to print the return address when an uninitialized UMA item is implicated in a report. For example: panic: MSan: Uninitialized UMA memory from m_getm2+0x7fe
Sponsored by: The FreeBSD Foundation
# 9246b309 | 13-May-2021 | Mark Johnston <markj@FreeBSD.org>
fork: Suspend other threads if both RFPROC and RFMEM are not set
Otherwise, a multithreaded parent process may trigger races in vm_forkproc() if one thread calls rfork() with RFMEM set and another calls rfork() without RFMEM.
Also simplify vm_forkproc() a bit, vmspace_unshare() already checks to see if the address space is shared.
Reported by: syzbot+0aa7c2bec74c4066c36f@syzkaller.appspotmail.com
Reported by: syzbot+ea84cb06937afeae609d@syzkaller.appspotmail.com
Reviewed by: kib
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D30220
# 244f3ec6 | 13-Apr-2021 | Mark Johnston <markj@FreeBSD.org>
kstack: Add KASAN state transitions
We allocate kernel stacks using a UMA cache zone. Cache zones have KASAN disabled by default, but in this case it makes sense to enable it.
Reviewed by: andrew
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D29457
Revision tags: release/13.0.0
# f7db0c95 | 04-Nov-2020 | Mark Johnston <markj@FreeBSD.org>
vmspace: Convert to refcount(9)
This is mostly mechanical except for vmspace_exit(). There, use the new refcount_release_if_last() to avoid switching to vmspace0 unless other processes are sharing the vmspace. In that case, upon switching to vmspace0 we can unconditionally release the reference.
Remove the volatile qualifier from vm_refcnt now that accesses are protected using refcount(9) KPIs.
Reviewed by: alc, kib, mmel
MFC after: 1 month
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D27057
Revision tags: release/12.2.0
# 89d2fb14 | 09-Sep-2020 | Konstantin Belousov <kib@FreeBSD.org>
Add interruptible variant of vm_wait(9), vm_wait_intr(9).
Also add msleep flags argument to vm_wait_doms(9).
Reviewed by: markj
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D24652
Revision tags: release/11.4.0
# f13fa9df | 26-Apr-2020 | Mark Johnston <markj@FreeBSD.org>
Use a single VM object for kernel stacks.
Previously we allocated a separate VM object for each kernel stack. However, fully constructed kernel stacks are cached by UMA, so there is no harm in using a single global object for all stacks. This reduces memory consumption and makes it easier to define a memory allocation policy for kernel stack pages, with the aim of reducing physical memory fragmentation.
Add a global kstack_object, and use the stack KVA address to index into the object like we do with kernel_object.
Reviewed by: kib
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D24473
# 91019ea7 | 29-Feb-2020 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r358400 through r358465.
# 7aaf252c | 28-Feb-2020 | Jeff Roberson <jeff@FreeBSD.org>
Convert a few trivial consumers to the new unlocked grab API.
Reviewed by: kib, markj
Differential Revision: https://reviews.freebsd.org/D23847
# 43c7dd6b | 19-Feb-2020 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r358075 through r358130.
# e9ceb9dd | 19-Feb-2020 | Jeff Roberson <jeff@FreeBSD.org>
Don't release xbusy on kmem pages. After lockless page lookup we will not be able to guarantee that they can be reacquired without blocking.
Reviewed by: kib
Discussed with: markj
Differential Revision: https://reviews.freebsd.org/D23506
# 051669e8 | 25-Jan-2020 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r356931 through r357118.