History log of /freebsd/sys/vm/vm_map.c (Results 26 – 50 of 1293)
Revision Date Author Comments
# 3b44ee50 10-Aug-2023 Konstantin Belousov <kib@FreeBSD.org>

vm_map_insert(): update herald comment

Only a part of the object may be mapped.

Noted by: alc
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D41099


# d0e4e53e 09-May-2023 Mark Johnston <markj@FreeBSD.org>

vm_map: Add a macro to fetch a map entry's split boundary index

The resulting code is a bit more concise. No functional change
intended.

Reviewed by: alc, dougm, kib
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D41249


# 50d663b1 25-Jul-2023 Alan Cox <alc@FreeBSD.org>

vm: Fix vm_map_find_min()

Fix the handling of address hints that are less than min_addr by
vm_map_find_min().

Reported by: dchagin
Reviewed by: kib
Fixes: d8e6f4946cec0 "vm: Fix anonymous memory clustering under ASLR"
Differential Revision: https://reviews.freebsd.org/D41159


# db6c7c7f 20-Jul-2023 Konstantin Belousov <kib@FreeBSD.org>

vmspace_fork(): do not override offset for the guard entries

The offset field contains protection for the stack guards.

Reported by: cy
Fixes: 21e45c30c35c9aa732073f725924caf581c93460
MFC after: 1 week


# 21e45c30 19-Jul-2023 Konstantin Belousov <kib@FreeBSD.org>

mmap(MAP_STACK): on stack grow, use original protection

If mprotect(2) changed the protection at the bottom of the currently
grown stack region, the changed protection would be used for the stack
growth on the next fault. This is arguably unexpected.

Store the original protection for the entry at mmap(2) time in the
offset member of the gap vm_map_entry, and use it for protection of the
grown stack region.

PR: 272585
Reported by: John F. Carr <jfc@mit.edu>
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D41089
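
A minimal userspace sketch of the behavior this change establishes,
assuming a FreeBSD system with MAP_STACK support; the sizes and messages
are illustrative only, and whether the pre-change behavior reproduces
depends on the kernel's growth increment (kern.sgrowsiz). The program
maps a grow-down region, grows it, marks the low end of the grown part
read-only with mprotect(2), then grows it again; the newly grown pages
keep the read/write protection given to mmap(2).

#include <sys/mman.h>

#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
	size_t len = 1024 * pgsz;	/* 4 MB with 4 KB pages */
	char *base, *top, *p;

	/*
	 * Grow-down stack region; only pages near the top are mapped at
	 * first, the rest is a gap that is consumed as the stack grows.
	 */
	base = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON | MAP_STACK,
	    -1, 0);
	if (base == MAP_FAILED)
		err(1, "mmap");
	top = base + len;

	p = top - 256 * pgsz;		/* fault well below the mapped top... */
	*p = 1;				/* ...so the entry grows downward */

	/* Make the low end of the grown part read-only. */
	if (mprotect(p, 16 * pgsz, PROT_READ) == -1)
		err(1, "mprotect");

	/*
	 * Grow again.  The pages added by this growth carry the protection
	 * given to mmap(2), i.e. read/write, not the read-only protection
	 * set above, so the write succeeds.
	 */
	p = top - 512 * pgsz;
	*p = 1;
	printf("grown pages kept the mmap(2)-time protection\n");
	return (0);
}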


# d8e6f494 23-Jun-2023 Alan Cox <alc@FreeBSD.org>

vm: Fix anonymous memory clustering under ASLR

By default, our ASLR implementation is supposed to cluster anonymous
memory allocations, unless the application's mmap(..., MAP_ANON, ...)
call included a non-zero address hint. Unfortunately, clustering
never occurred because kern_mmap() always replaced the given address
hint when it was zero. So, the ASLR implementation always believed
that a non-zero hint had been provided and randomized the mapping's
location in the address space. To fix this problem, I'm pushing down
the point at which we convert a hint of zero to the minimum allocatable
address from kern_mmap() to vm_map_find_min().

Reviewed by: kib
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D40743
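
A small observation aid for the behavior described above, assuming a
FreeBSD system with ASLR enabled (e.g. kern.elf64.aslr.enable=1); it is
a sketch, not a test, since placements are randomized.

#include <sys/mman.h>

#include <err.h>
#include <stdio.h>

int
main(void)
{
	size_t len = 1024 * 1024;
	void *a, *b;

	/* A zero hint lets the kernel choose (and now cluster) the location. */
	a = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON, -1, 0);
	b = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON, -1, 0);
	if (a == MAP_FAILED || b == MAP_FAILED)
		err(1, "mmap");

	/*
	 * With ASLR enabled and this fix in place the second mapping is
	 * typically placed near the first instead of being independently
	 * randomized across the address space.
	 */
	printf("a = %p\nb = %p\n", a, b);
	return (0);
}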


# 1e0e335b 13-Apr-2023 Konstantin Belousov <kib@FreeBSD.org>

amd64: fix PKRU and swapout interaction

When vm_map_remove() is called from vm_swapout_map_deactivate_pages()
due to swapout, PKRU attributes for the removed range must be kept
intact. Provide a variant of pmap_remove(), pmap_map_delete(), to
allow pmap to distinguish between real removes of the UVA mappings
and any other internal removes, e.g. swapout.

For non-amd64, pmap_map_delete() is stubbed by define to pmap_remove().

Reported by: andrew
Reviewed by: markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D39556


Revision tags: release/13.2.0, release/12.4.0
# 361971fb 02-Jun-2022 Kornel Dulęba <kd@FreeBSD.org>

Rework how shared page related data is stored

Store the shared page address in struct vmspace.
Also, instead of storing absolute addresses of the various shared page
segments, save their offsets with respect to the shared page address.
This will be more useful when the shared page address is randomized.

Approved by: mw(mentor)
Sponsored by: Stormshield
Obtained from: Semihalf
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D35393


# 0cb2610e 16-Jul-2022 Mark Johnston <markj@FreeBSD.org>

vm: Remove handling for OBJT_DEFAULT objects

Now that OBJT_DEFAULT objects can't be instantiated, we can simplify
checks of the form object->type == OBJT_DEFAULT || (object->flags &
OBJ_SWAP) != 0. No functional change intended.

Reviewed by: alc, kib
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D35788


# 70b29961 12-Jul-2022 Mark Johnston <markj@FreeBSD.org>

vm_map: Simplify a call to vm_object_allocate_anon()

vm_object_allocate_anon() automatically sets "charge" to 0 if no cred
reference is provided, so the caller doesn't need any conditional logic.

No functional change intended.

Reviewed by: alc, kib
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D35781


# e123264e 20-Jun-2022 Mark Johnston <markj@FreeBSD.org>

vm: Fix racy checks for swap objects

Commit 4b8365d752ef introduced the ability to dynamically register
VM object types, for use by tmpfs, which creates swap-backed objects.
As a part of this, checks for such objects changed from

object->type == OBJT_DEFAULT || object->type == OBJT_SWAP

to

object->type == OBJT_DEFAULT || (object->flags & OBJ_SWAP) != 0

In particular, objects of type OBJT_DEFAULT do not have OBJ_SWAP set;
the swap pager sets this flag when converting from OBJT_DEFAULT to
OBJT_SWAP.

A few of these checks are done without the object lock held. It turns
out that this can result in false negatives since the swap pager
converts objects like so:

object->type = OBJT_SWAP;
object->flags |= OBJ_SWAP;

Fix the problem by adding explicit tests for OBJT_SWAP objects in
unlocked checks.

PR: 258932
Fixes: 4b8365d752ef ("Add OBJT_SWAP_TMPFS pager")
Reported by: bdrewery
Reviewed by: kib
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D35470


Revision tags: release/13.1.0
# b8ebd99a 14-Apr-2022 John Baldwin <jhb@FreeBSD.org>

vm: Use __diagused for variables only used in KASSERT().


# becaf643 14-Feb-2022 John Baldwin <jhb@FreeBSD.org>

Use vmspace->vm_stacktop in place of sv_usrstack in more places.

Reviewed by: markj
Obtained from: CheriBSD
Differential Revision: https://reviews.freebsd.org/D34174


# 46d35d41 18-Jan-2022 Mark Johnston <markj@FreeBSD.org>

fork: Copy the vm_stacktop field into the new vmspace

Fixes: 1811c1e957ee ("exec: Reimplement stack address randomization")
Reported by: pho
Reported by: syzbot+0446312a51bc13ead834@syzkaller.appspotmail.com
Sponsored by: The FreeBSD Foundation


# 1811c1e9 17-Jan-2022 Mark Johnston <markj@FreeBSD.org>

exec: Reimplement stack address randomization

The approach taken by the stack gap implementation was to insert a
random gap between the top of the fixed stack mapping and the true top
of the main process stack. This approach was chosen so as to avoid
randomizing the previously fixed address of certain process metadata
stored at the top of the stack, but had some shortcomings. In
particular, mlockall(2) calls would wire the gap, bloating the process'
memory usage, and RLIMIT_STACK included the size of the gap so small
(< several MB) limits could not be used.

There is little value in storing each process' ps_strings at a fixed
location, as only very old programs hard-code this address; consumers
were converted decades ago to use a sysctl-based interface for this
purpose. Thus, this change re-implements stack address randomization by
simply breaking the convention of storing ps_strings at a fixed
location, and randomizing the location of the entire stack mapping.
This implementation is simpler and avoids the problems mentioned above,
while being unlikely to break compatibility anywhere the default ASLR
settings are used.

The kern.elfN.aslr.stack_gap sysctl is renamed to kern.elfN.aslr.stack,
and is re-enabled by default.

PR: 260303
Reviewed by: kib
Discussed with: emaste, mw
MFC after: 1 month
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D33704
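
A short sketch of the sysctl-based interface mentioned above, assuming
the native ABI and the long-standing kern.ps_strings MIB; with the
randomized stack mapping enabled the reported address should differ
between runs.

#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>

int
main(void)
{
	unsigned long psstrings;
	size_t len = sizeof(psstrings);

	/* The sysctl interface that consumers were converted to long ago. */
	if (sysctlbyname("kern.ps_strings", &psstrings, &len, NULL, 0) == -1)
		err(1, "sysctlbyname(kern.ps_strings)");

	/*
	 * With kern.elf64.aslr.stack=1 this value, along with the whole
	 * stack mapping, changes from run to run instead of being fixed.
	 */
	printf("ps_strings at %#lx\n", psstrings);
	return (0);
}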


# c606ab59 31-Dec-2021 Doug Moore <dougm@FreeBSD.org>

vm_extern: use standard address checkers everywhere

Define simple functions for alignment and boundary checks and use them
everywhere instead of having slightly different implementations
scattered about. Define them in vm_extern.h and use them where
possible where vm_extern.h is included.

Reviewed by: kib, markj
Differential Revision: https://reviews.freebsd.org/D33685
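
A userspace approximation of the checkers this change centralizes; the
names and expressions below are illustrative rather than the exact
helpers committed to vm_extern.h, and both assume power-of-two alignment
and boundary values as is conventional in the VM code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Is addr aligned to "alignment" (a power of two; 0 means "don't care")? */
static bool
addr_align_ok(uint64_t addr, uint64_t alignment)
{
	return (alignment == 0 || (addr & (alignment - 1)) == 0);
}

/* Does [addr, addr + size) stay within one "boundary"-sized block? */
static bool
addr_bound_ok(uint64_t addr, uint64_t size, uint64_t boundary)
{
	return (boundary == 0 ||
	    (addr & ~(boundary - 1)) == ((addr + size - 1) & ~(boundary - 1)));
}

int
main(void)
{
	printf("%d\n", addr_align_ok(0x10000, 0x1000));		/* 1 */
	printf("%d\n", addr_bound_ok(0xfff0, 0x20, 0x10000));	/* 0: crosses 64K */
	return (0);
}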


Revision tags: release/12.3.0
# 889b56c8 13-Oct-2021 Dawid Gorecki <dgr@semihalf.com>

setrlimit: Take stack gap into account.

Calling setrlimit with the stack gap enabled and a low stack resource
limit often caused the program to abort immediately after exiting the
syscall. This happened because the resource limit was calculated
assuming that the stack started at sv_usrstack, while with the stack gap
enabled the stack is moved down by a random number of bytes.

Save information about the stack size in struct vmspace and adjust the
rlim_cur value accordingly. If rlim_cur plus the stack gap exceeds
rlim_max, the value is truncated to rlim_max.

PR: 253208
Reviewed by: kib
Obtained from: Semihalf
Sponsored by: Stormshield
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D31516
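
A hedged sketch of the clamping rule described above; the helper name
and the stack_gap parameter are hypothetical stand-ins for the
stack-size information this change saves in struct vmspace.

#include <sys/types.h>
#include <sys/resource.h>

/*
 * Illustrative only: grow the soft stack limit by the randomized gap so
 * the usable stack matches what the caller asked for, but never exceed
 * the hard limit.  "stack_gap" stands in for the value saved in struct
 * vmspace.
 */
static rlim_t
stack_rlim_with_gap(rlim_t rlim_cur, rlim_t rlim_max, rlim_t stack_gap)
{
	rlim_t adj;

	adj = rlim_cur + stack_gap;
	if (adj > rlim_max)
		adj = rlim_max;		/* truncate, as described above */
	return (adj);
}

int
main(void)
{
	/* soft 1 MB, hard 2 MB, 1.5 MB gap: clamped to the 2 MB hard limit */
	rlim_t r = stack_rlim_with_gap(1 << 20, 2 << 20, 3 << 19);

	return (r == (rlim_t)(2 << 20) ? 0 : 1);
}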


# 9246b309 13-May-2021 Mark Johnston <markj@FreeBSD.org>

fork: Suspend other threads if both RFPROC and RFMEM are not set

Otherwise, a multithreaded parent process may trigger races in
vm_forkproc() if one thread calls rfork() with RFMEM set and another
calls rfork() without RFMEM.

Also simplify vm_forkproc() a bit; vmspace_unshare() already checks
whether the address space is shared.

Reported by: syzbot+0aa7c2bec74c4066c36f@syzkaller.appspotmail.com
Reported by: syzbot+ea84cb06937afeae609d@syzkaller.appspotmail.com
Reviewed by: kib
MFC after: 1 week
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D30220


# 4b8365d7 01-May-2021 Konstantin Belousov <kib@FreeBSD.org>

Add OBJT_SWAP_TMPFS pager

This is the OBJT_SWAP pager, specialized for tmpfs. Right now, both the
swap pager and generic VM code have to explicitly handle, in special
ways, swap objects that are tmpfs vnode v_object. Replace (almost) all
such places with proper methods.

Since VM still needs a notion of the 'swap object', regardless of its
use, add yet another type-classification flag OBJ_SWAP. Set it in
vm_object_allocate() where other type-class flags are set.

This change almost completely eliminates the knowledge of tmpfs from VM,
and opens a way to make OBJT_SWAP_TMPFS loadable from tmpfs.ko.

Reviewed by: markj
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D30070


# 192112b7 01-May-2021 Konstantin Belousov <kib@FreeBSD.org>

Add pgo_getvp method

This eliminates the staircase of conditions in vm_map_entry_set_vnode_text().

Reviewed by: markj
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D30070


Revision tags: release/13.0.0
# 420d4be3 13-Jan-2021 Konstantin Belousov <kib@FreeBSD.org>

vm_map_protect(): remove not needed recalculations of new_prot, new_maxprot

Requested by: alc
Sponsored by: The FreeBSD Foundation


# 0659df6f 12-Jan-2021 Konstantin Belousov <kib@FreeBSD.org>

vm_map_protect: allow to set prot and max_prot in one go.

This prevents a situation where another thread modifies map entry
permissions between setting max_prot, relocking, and setting prot,
confusing the operation's outcome. For example, an error could be
returned that is not possible when the operation is performed
atomically.

Also enable setting rwx for max_prot even if the map does not allow
setting effective rwx protection.

Reviewed by: brooks, markj (previous version)
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D28117
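
The prot/max_prot pair that vm_map_protect() now updates in one go is
visible from userspace through the PROT_MAX() protection bits; a brief
sketch, assuming a FreeBSD release where mmap(2) and mprotect(2) accept
PROT_MAX().

#include <sys/mman.h>

#include <err.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	size_t len = (size_t)sysconf(_SC_PAGESIZE);
	void *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON, -1, 0);
	if (p == MAP_FAILED)
		err(1, "mmap");

	/* Lower the effective and the maximum protection in a single call. */
	if (mprotect(p, len, PROT_MAX(PROT_READ) | PROT_READ) == -1)
		err(1, "mprotect(PROT_MAX(PROT_READ) | PROT_READ)");

	/* Raising the protection past the new maximum is refused. */
	if (mprotect(p, len, PROT_READ | PROT_WRITE) == 0)
		printf("unexpected: write access was re-enabled\n");
	else
		printf("write access stays revoked, as requested\n");
	return (0);
}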


# 9402bb44 12-Jan-2021 Konstantin Belousov <kib@FreeBSD.org>

vmspace_fork: preserve wx settings in the child vm map after fork

Noted by: markj
Sponsored by: The FreeBSD Foundation


# 2e1c94aa 08-Jan-2021 Konstantin Belousov <kib@FreeBSD.org>

Implement enforcing write XOR execute mapping policy.

vm_map_insert() and vm_map_protect() check that PROT_WRITE | PROT_EXEC
are never specified together if the vm_map has the MAP_WX flag set. A
FreeBSD feature-control flag allows a specific binary to request a WX
exemption, and per-ABI boolean sysctls kern.elf{32,64}.allow_wx enable
or disable the enforcement globally.

Reviewed by: emaste, jhb
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D28050
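
A minimal userspace check of the policy, assuming enforcement is enabled
for the binary's ABI (for example kern.elf64.allow_wx=0 and no WX
exemption in the binary); the exact errno is deliberately not asserted.

#include <sys/mman.h>

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	size_t len = (size_t)sysconf(_SC_PAGESIZE);
	void *p;

	/* Simultaneous PROT_WRITE | PROT_EXEC is what MAP_WX forbids. */
	p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_ANON,
	    -1, 0);
	if (p == MAP_FAILED)
		printf("W^X enforced: mmap(RWX) failed: %s\n", strerror(errno));
	else
		printf("RWX mapping allowed (policy disabled or binary exempted)\n");
	return (0);
}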


# 20f02659 11-Nov-2020 Mark Johnston <markj@FreeBSD.org>

vm_map: Handle kernel map entry allocator recursion

On platforms without a direct map[*], vm_map_insert() may in rare
situations need to allocate a kernel map entry in order to allocate
kernel map entries. This poses a problem similar to the one solved for
vmem boundary tags by vmem_bt_alloc(). In fact the kernel map case is a
bit more complicated since we must allocate entries with the kernel map
locked, whereas vmem can recurse into itself because boundary tags are
allocated up-front.

The solution is to add a custom slab allocator for kmapentzone which
allocates KVA directly from kernel_map, bypassing the kmem_* layer.
This avoids mutual recursion with the vmem btag allocator. Then, when
vm_map_insert() allocates a new kernel map entry, it avoids triggering
allocation of a new slab with M_NOVM until after the insertion is
complete. Instead, vm_map_insert() allocates from the reserve and sets
a flag in kernel_map to trigger re-population of the reserve just before
the map is unlocked. This places an implicit upper bound on the number
of kernel map entries that may be allocated before the kernel map lock
is released, but in general a bound of 1 suffices.

[*] This also comes up on amd64 with UMA_MD_SMALL_ALLOC undefined, a
configuration required by some kernel sanitizers.

Discussed with: kib, rlibby
Reported by: andrew
Tested by: pho (i386 and amd64 with !UMA_MD_SMALL_ALLOC)
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D26851

