History log of /freebsd/sys/vm/vm_map.h (Results 1 – 25 of 427)
Revision | Date | Author | Comments
# fae0cc5f 08-Dec-2024 Konstantin Belousov <kib@FreeBSD.org>

vm/vm_map.h: drop vm_flags_t

Suggested by: alc
Reviewed by: alc
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D47934


# d939fd2d 07-Dec-2024 Konstantin Belousov <kib@FreeBSD.org>

vm_map: convert several bool members into flags

Extend flags to u_int.
Move system_map and needs_wakeup bools into flags.

Reviewed by: alc
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D47934
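
As a minimal hedged sketch of what this conversion looks like (the flag names and bit values below are illustrative, not necessarily the ones the commit chose):

    #include <sys/types.h>

    /* Before: one discrete member per boolean property. */
    struct vm_map_before {
        bool system_map;        /* system vs. user map */
        bool needs_wakeup;      /* a waiter is sleeping on the map */
    };

    /* After: a single widened flags word; each bool becomes one bit. */
    struct vm_map_after {
        u_int flags;
    };
    #define MAP_SYSTEM_MAP    0x00000001U  /* was: bool system_map */
    #define MAP_NEEDS_WAKEUP  0x00000002U  /* was: bool needs_wakeup */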


# b4431e95 08-Dec-2024 Konstantin Belousov <kib@FreeBSD.org>

vm/vm_map.h: extend number of digits in vm_map flags definitions

Reviewed by: alc
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D47934
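
A hedged illustration of what "extend number of digits" means in practice (the flag name and value are invented for the example): widening the hex constants to the full width of the u_int flags field leaves room for new bits.

    /* Before: two hex digits (illustrative). */
    #define MAP_EXAMPLE_OLD 0x04
    /* After: eight hex digits, matching the u_int flags field. */
    #define MAP_EXAMPLE_NEW 0x00000004U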


# c5b19cef 07-Dec-2024 Konstantin Belousov <kib@FreeBSD.org>

vm_map: wrap map->system_map checks into wrapper

Reviewed by: alc
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D47934
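
A minimal self-contained sketch of such a wrapper, assuming the flags encoding from the adjacent commits; the actual function name in vm_map.h may differ:

    #include <sys/types.h>

    #define MAP_SYSTEM_MAP 0x00000001U  /* illustrative bit */

    struct vm_map_sketch {
        u_int flags;
    };

    /* Wrapper: callers test this instead of poking the flag directly. */
    static inline bool
    vm_map_is_system(const struct vm_map_sketch *map)
    {
        return ((map->flags & MAP_SYSTEM_MAP) != 0);
    }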


# 6ed68e6f 05-Dec-2024 Konstantin Belousov <kib@FreeBSD.org>

vm_map: overlap system map mutex and user map sx

This saves 616-584 = 32 bytes per struct vmspace on amd64, which allows
packing 7 vmspaces per page vs. 6 for the non-overlapping layout.

I used the anonymous union member feature to avoid too much churn.

Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D47934
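
A hedged sketch of the anonymous-union overlap (member names are illustrative; the real layout lives in struct vm_map):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <sys/sx.h>

    struct vm_map_lock_sketch {
        union {                         /* anonymous union: shared storage */
            struct sx  lock;            /* user maps sleep on an sx */
            struct mtx system_mtx;      /* system maps use a mutex */
        };
    };

Since a map is either a system map or a user map for its entire lifetime, only one of the two lock types is ever initialized, so overlapping them is safe and recovers the size of the larger member.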


# d302c053 05-Dec-2024 Konstantin Belousov <kib@FreeBSD.org>

vm: rename MAP_STACK_GROWS_DOWN to MAP_STACK_AREA

Reviewed by: alc, dougm, markj
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D47892


# 03046754 04-Dec-2024 Konstantin Belousov <kib@FreeBSD.org>

vm_map: remove _GN suffix from MAP_ENTRY_STACK_GAP and MAP_CREATE_STACK_GAP symbols

Reviewed by: alc, dougm, markj
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D47892


# 17e624ca 04-Dec-2024 Konstantin Belousov <kib@FreeBSD.org>

sys/vm: remove support for growing-up stacks

Reviewed by: alc, dougm, markj
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D47892


Revision tags: release/14.2.0
# 0ecbb28c 15-Sep-2024 Konstantin Belousov <kib@FreeBSD.org>

vm_map: add vm_map_find_locked(9)

Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D46678
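
The manpage reference suggests a KPI mirroring vm_map_find() but expecting the caller to already hold the map lock; a hedged sketch of a caller, where the exact parameter list is an assumption based on vm_map_find():

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_map.h>

    /* Hypothetical helper: find space while keeping the map locked
     * across the whole operation. */
    static int
    find_space_sketch(vm_map_t map, vm_size_t length, vm_offset_t *addr)
    {
        int error;

        vm_map_lock(map);
        error = vm_map_find_locked(map, NULL, 0, addr, length,
            vm_map_max(map), VMFS_OPTIMAL_SPACE,
            VM_PROT_RW, VM_PROT_RW, 0);
        vm_map_unlock(map);
        return (error);
    }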


Revision tags: release/13.4.0, release/14.1.0, release/13.3.0
# 29363fb4 23-Nov-2023 Warner Losh <imp@FreeBSD.org>

sys: Remove ancient SCCS tags.

Remove ancient SCCS tags from the tree via automated scripting, with two
minor fixups to keep things compiling. All the common forms in the tree
were removed with a perl script.

Sponsored by: Netflix


Revision tags: release/14.0.0
# 95ee2897 16-Aug-2023 Warner Losh <imp@FreeBSD.org>

sys: Remove $FreeBSD$: two-line .h pattern

Remove /^\s*\*\n \*\s+\$FreeBSD\$$\n/


# 90049eab 28-Jul-2023 Konstantin Belousov <kib@FreeBSD.org>

vm_map_protect(): add VM_MAP_PROTECT_GROWSDOWN flag

which requests propagating the lowest stack segment's protection to the
grow gap. This seems to be required for Linux emulation.

Reported by: dchagin
Reviewed by: alc, markj
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D41099
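
A hedged usage sketch; the six-argument vm_map_protect() form comes from the D28117 change further down this log, and the specific protection values are illustrative:

    /* Hypothetical helper: change stack protection and let the grow
     * gap inherit the bottom segment's protection. */
    static int
    stack_protect_sketch(vm_map_t map, vm_offset_t start, vm_offset_t end)
    {
        return (vm_map_protect(map, start, end,
            VM_PROT_READ | VM_PROT_WRITE,   /* new prot */
            VM_PROT_ALL,                    /* new max_prot */
            VM_MAP_PROTECT_SET_PROT | VM_MAP_PROTECT_SET_MAXPROT |
            VM_MAP_PROTECT_GROWSDOWN));
    }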


# ba41b0de 19-Jul-2023 Konstantin Belousov <kib@FreeBSD.org>

Add vm_map_insert1(9)

The function returns the newly created entry.
Use vm_map_insert1() in stack grow code to avoid gap entry re-lookup.

The comment update for vm_map_try_merge_entries() was suggested by dougm.

Suggested by: alc
Reviewed by: alc, markj
Tested by: pho
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D41099
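
A hedged sketch of the new KPI's shape: per the message it behaves like vm_map_insert() but also returns the created entry, so the exact signature below is an assumption (fragment; variables as in typical vm_map_insert() callers):

    vm_map_entry_t new_entry;
    int error;

    /* Insert and get the entry back in one step, no re-lookup. */
    error = vm_map_insert1(map, object, offset, start, end,
        prot, max_prot, cow, &new_entry);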


# 9da33e8d 10-Aug-2023 Konstantin Belousov <kib@FreeBSD.org>

Update comment describing struct vm_map

There is no list connecting all entries any more, and correspondingly no
order on the list entries.

Reviewed by: dougm
Sponsored by: The FreeBSD Foundation
MFC after: 3 days
Differential revision: https://reviews.freebsd.org/D41405


# d0e4e53e 09-May-2023 Mark Johnston <markj@FreeBSD.org>

vm_map: Add a macro to fetch a map entry's split boundary index

The resulting code is a bit more concise. No functional change
intended.

Reviewed by: alc, dougm, kib
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D41249
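
A hedged reconstruction of what such a macro looks like; the shift and mask values are illustrative, the pattern (extracting a bitfield from entry->eflags) is the point:

    #define MAP_ENTRY_SPLIT_BOUNDARY_SHIFT 19   /* illustrative */
    #define MAP_ENTRY_SPLIT_BOUNDARY_MASK \
        (0x3 << MAP_ENTRY_SPLIT_BOUNDARY_SHIFT)
    #define MAP_ENTRY_SPLIT_BOUNDARY_INDEX(entry) \
        (((entry)->eflags & MAP_ENTRY_SPLIT_BOUNDARY_MASK) >> \
            MAP_ENTRY_SPLIT_BOUNDARY_SHIFT)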


# 21e45c30 19-Jul-2023 Konstantin Belousov <kib@FreeBSD.org>

mmap(MAP_STACK): on stack grow, use original protection

If mprotect(2) changed protection in the bottom of the currently grown
stack region, the changed protection would then be used for the stack
grow on the next fault. This is arguably unexpected.

Store the original protection for the entry at mmap(2) time in the
offset member of the gap vm_map_entry, and use it for protection of the
grown stack region.

PR: 272585
Reported by: John F. Carr <jfc@mit.edu>
Reviewed by: alc, markj
Sponsored by: The FreeBSD Foundation
MFC after: 1 week
Differential revision: https://reviews.freebsd.org/D41089
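
A hedged sketch of the trick: a gap entry maps no object, so its offset member is free to carry the original protection (the surrounding code is invented for illustration):

    /* At mmap(MAP_STACK) time: stash the protection in the gap entry. */
    gap_entry->offset = (vm_ooffset_t)prot;

    /* At stack-grow time: use the stashed value rather than the
     * current, possibly mprotect(2)-modified, bottom segment's. */
    grow_prot = (vm_prot_t)gap_entry->offset;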


# d8e6f494 23-Jun-2023 Alan Cox <alc@FreeBSD.org>

vm: Fix anonymous memory clustering under ASLR

By default, our ASLR implementation is supposed to cluster anonymous
memory allocations, unless the application's mmap(..., MAP_ANON, ...)
call included a non-zero address hint. Unfortunately, clustering
never occurred because kern_mmap() always replaced the given address
hint when it was zero. So, the ASLR implementation always believed
that a non-zero hint had been provided and randomized the mapping's
location in the address space. To fix this problem, I'm pushing down
the point at which we convert a hint of zero to the minimum allocatable
address from kern_mmap() to vm_map_find_min().

Reviewed by: kib
MFC after: 2 weeks
Differential Revision: https://reviews.freebsd.org/D40743
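
A hedged sketch of the fix's shape: a zero hint must survive down to where ASLR decides whether to cluster, so the conversion happens late (the placement shown is illustrative):

    /* In vm_map_find_min(), no longer in kern_mmap(): */
    if (*addr == 0)
        *addr = vm_map_min(map);    /* zero meant "no hint given" */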


Revision tags: release/13.2.0, release/12.4.0
# 361971fb 02-Jun-2022 Kornel Dulęba <kd@FreeBSD.org>

Rework how shared page related data is stored

Store the shared page address in struct vmspace.
Also, instead of storing absolute addresses of the various shared page
segments, save their offsets with respect to the shared page address.
This will be more useful when the shared page address is randomized.

Approved by: mw (mentor)
Sponsored by: Stormshield
Obtained from: Semihalf
Reviewed by: kib
Differential Revision: https://reviews.freebsd.org/D35393
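
A hedged sketch of the data-layout change; the field names are invented, the real ones live in struct vmspace and the ABI's sysentvec:

    struct vmspace_sketch {
        vm_offset_t vm_shp_base;    /* shared page address (absolute) */
    };

    /* Per-segment locations become offsets from vm_shp_base instead of
     * absolute addresses, so randomizing the base moves everything. */
    vm_offset_t sigcode_offset;     /* e.g. signal trampoline segment */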


Revision tags: release/13.1.0
# 1811c1e9 17-Jan-2022 Mark Johnston <markj@FreeBSD.org>

exec: Reimplement stack address randomization

The approach taken by the stack gap implementation was to insert a
random gap between the top of the fixed stack mapping and the true top
of the main process stack. This approach was chosen so as to avoid
randomizing the previously fixed address of certain process metadata
stored at the top of the stack, but had some shortcomings. In
particular, mlockall(2) calls would wire the gap, bloating the process'
memory usage, and RLIMIT_STACK included the size of the gap so small
(< several MB) limits could not be used.

There is little value in storing each process' ps_strings at a fixed
location, as only very old programs hard-code this address; consumers
were converted decades ago to use a sysctl-based interface for this
purpose. Thus, this change re-implements stack address randomization by
simply breaking the convention of storing ps_strings at a fixed
location, and randomizing the location of the entire stack mapping.
This implementation is simpler and avoids the problems mentioned above,
while being unlikely to break compatibility anywhere the default ASLR
settings are used.

The kern.elfN.aslr.stack_gap sysctl is renamed to kern.elfN.aslr.stack,
and is re-enabled by default.

PR: 260303
Reviewed by: kib
Discussed with: emaste, mw
MFC after: 1 month
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D33704


Revision tags: release/12.3.0
# 889b56c8 13-Oct-2021 Dawid Gorecki <dgr@semihalf.com>

setrlimit: Take stack gap into account.

Calling setrlimit with the stack gap enabled and with low values of the
stack resource limit often caused the program to abort immediately after
exiting the syscall. This happened because the resource limit was
calculated assuming that the stack started at sv_usrstack, while with
the stack gap enabled the stack is moved by a random number of bytes.

Save information about the stack size in struct vmspace and adjust the
rlim_cur value. If rlim_cur plus the stack gap exceeds rlim_max, the
value is truncated to rlim_max.

PR: 253208
Reviewed by: kib
Obtained from: Semihalf
Sponsored by: Stormshield
MFC after: 1 month
Differential Revision: https://reviews.freebsd.org/D31516
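
A hedged sketch of the adjustment described above (field and variable names invented):

    /* Account for the random gap when applying the stack limit. */
    rlim_t adjusted = lim.rlim_cur + stack_gap_size;
    lim.rlim_cur = (adjusted > lim.rlim_max) ? lim.rlim_max : adjusted;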


Revision tags: release/13.0.0
# 0659df6f 12-Jan-2021 Konstantin Belousov <kib@FreeBSD.org>

vm_map_protect: allow setting prot and max_prot in one go.

This prevents a situation where another thread modifies map entry
permissions between setting max_prot, then relocking, then setting prot,
confusing the operation's outcome. E.g., you could get an error that is
not possible if the operation is performed atomically.

Also enable setting rwx for max_prot even if the map does not allow
setting effective rwx protection.

Reviewed by: brooks, markj (previous version)
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D28117
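
A hedged sketch of the combined call: both protections change under a single map-lock hold, so no other thread can interleave between the two updates (values illustrative):

    error = vm_map_protect(map, start, end,
        VM_PROT_READ,                   /* new prot */
        VM_PROT_READ | VM_PROT_WRITE,   /* new max_prot */
        VM_MAP_PROTECT_SET_PROT | VM_MAP_PROTECT_SET_MAXPROT);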


# 2e1c94aa 08-Jan-2021 Konstantin Belousov <kib@FreeBSD.org>

Implement enforcing write XOR execute mapping policy.

It is checked in vm_map_insert() and vm_map_protect() that PROT_WRITE |
PROT_EXEC are never specified together if the vm_map has the MAP_WX flag
set. A FreeBSD feature-control flag allows a specific binary to request
a WX exemption, and there are per-ABI boolean sysctls
kern.elf{32,64}.allow_wx to enable/disable the policy globally.

Reviewed by: emaste, jhb
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D28050
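
A hedged sketch of the check as described; the commit text calls the map flag MAP_WX, and the names here should be treated as illustrative:

    /* In vm_map_insert()/vm_map_protect(): reject W+X together. */
    if ((map->flags & MAP_WX) != 0 &&
        (prot & (VM_PROT_WRITE | VM_PROT_EXECUTE)) ==
        (VM_PROT_WRITE | VM_PROT_EXECUTE))
        return (KERN_PROTECTION_FAILURE);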


# cd853791 28-Nov-2020 Konstantin Belousov <kib@FreeBSD.org>

Make MAXPHYS tunable. Bump MAXPHYS to 1M.

Replace MAXPHYS with the runtime variable maxphys. It is initialized from
MAXPHYS by default, but can also be adjusted with the tunable kern.maxphys.

Make the b_pages[] array in struct buf flexible. Size b_pages[] for buffer
cache buffers exactly to atop(maxbcachebuf) (currently it is sized to
atop(MAXPHYS)), and size b_pages[] for pbufs to atop(maxphys) + 1.
The +1 for pbufs allows several pbuf consumers, among them vmapbuf(),
to use unaligned buffers still sized to maxphys, esp. when such
buffers come from userspace (*). Overall, we save a significant amount
of otherwise wasted memory in b_pages[] for buffer cache buffers,
while bumping MAXPHYS to the desired high value.

Eliminate all direct uses of the MAXPHYS constant in kernel and driver
sources, except the place which initializes maxphys. Some random (and
arguably weird) uses of MAXPHYS, e.g. in the linuxolator, are converted
straight. Some drivers, which use MAXPHYS to size embedded structures,
get a private MAXPHYS-like constant; their conversion is out of scope
for this work.

Changes to cam/, dev/ahci, dev/ata, dev/mpr, dev/mpt, dev/mvs, and
dev/siis were either submitted by, or based on changes by, mav.

Suggested by: mav (*)
Reviewed by: imp, mav, imp, mckusick, scottl (intermediate versions)
Tested by: pho
Sponsored by: The FreeBSD Foundation
Differential revision: https://reviews.freebsd.org/D27225
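
A hedged sketch of the constant-to-variable conversion in a consumer (the consumer code is invented for illustration):

    extern u_long maxphys;  /* runtime value, tunable via kern.maxphys */

    /* Before: size_t io = MIN(request, MAXPHYS);  compile-time cap. */
    size_t io = MIN(request, maxphys);  /* after: runtime cap */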


# 20f02659 11-Nov-2020 Mark Johnston <markj@FreeBSD.org>

vm_map: Handle kernel map entry allocator recursion

On platforms without a direct map[*], vm_map_insert() may in rare
situations need to allocate a kernel map entry in order to allocate
kernel map entries. This poses a problem similar to the one solved for
vmem boundary tags by vmem_bt_alloc(). In fact the kernel map case is a
bit more complicated since we must allocate entries with the kernel map
locked, whereas vmem can recurse into itself because boundary tags are
allocated up-front.

The solution is to add a custom slab allocator for kmapentzone which
allocates KVA directly from kernel_map, bypassing the kmem_* layer.
This avoids mutual recursion with the vmem btag allocator. Then, when
vm_map_insert() allocates a new kernel map entry, it avoids triggering
allocation of a new slab with M_NOVM until after the insertion is
complete. Instead, vm_map_insert() allocates from the reserve and sets
a flag in kernel_map to trigger re-population of the reserve just before
the map is unlocked. This places an implicit upper bound on the number
of kernel map entries that may be allocated before the kernel map lock
is released, but in general a bound of 1 suffices.

[*] This also comes up on amd64 with UMA_MD_SMALL_ALLOC undefined, a
configuration required by some kernel sanitizers.

Discussed with: kib, rlibby
Reported by: andrew
Tested by: pho (i386 and amd64 with !UMA_MD_SMALL_ALLOC)
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D26851
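
A hedged sketch of the locked-allocation pattern described above; M_NOWAIT, M_NOVM, and M_USE_RESERVE are real allocator flags, while the MAP_REPLENISH flag name and the surrounding code are illustrative:

    /* With kernel_map locked: draw from the pre-filled reserve and do
     * not trigger a new slab allocation (which would need this lock). */
    entry = uma_zalloc(kmapentzone, M_NOWAIT | M_NOVM | M_USE_RESERVE);

    /* Arrange for the reserve to be topped up just before unlock. */
    map->flags |= MAP_REPLENISH;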


# f7db0c95 04-Nov-2020 Mark Johnston <markj@FreeBSD.org>

vmspace: Convert to refcount(9)

This is mostly mechanical except for vmspace_exit(). There, use the new
refcount_release_if_last() to avoid switching to vmspace0 unless other
processes are sharing the vmspace. In that case, upon switching to
vmspace0 we can unconditionally release the reference.

Remove the volatile qualifier from vm_refcnt now that accesses are
protected using refcount(9) KPIs.

Reviewed by: alc, kib, mmel
MFC after: 1 month
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D27057
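
A hedged sketch of the vmspace_exit() logic as described; refcount_release_if_last() is the real refcount(9) KPI, while the surrounding control flow and the teardown call are illustrative:

    if (refcount_release_if_last(&vm->vm_refcnt)) {
        /* Ours was the only reference: tear down directly, no
         * detour through vmspace0. */
        vmspace_dofree(vm);             /* illustrative teardown */
    } else {
        /* Shared with other processes: switch to vmspace0 first;
         * then this release can be unconditional. */
        /* ... activate vmspace0 for curthread ... */
        refcount_release(&vm->vm_refcnt);
    }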

