Revision tags: release/14.0.0 |
|
#
95ee2897 |
| 16-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
sys: Remove $FreeBSD$: two-line .h pattern
Remove /^\s*\*\n \*\s+\$FreeBSD\$$\n/
|
#
4d846d26 |
| 10-May-2023 |
Warner Losh <imp@FreeBSD.org> |
spdx: The BSD-2-Clause-FreeBSD identifier is obsolete, drop -FreeBSD
The SPDX folks have obsoleted the BSD-2-Clause-FreeBSD identifier. Catch up to that fact and revert to their recommended match of BSD-2-Clause.
Discussed with: pfg
MFC After: 3 days
Sponsored by: Netflix
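By way of illustration, the affected license headers now read as below; the copyright line is a placeholder rather than text from any particular file:

/*-
 * SPDX-License-Identifier: BSD-2-Clause
 *
 * Copyright (c) <year> <copyright holder>
 * ...
 */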
|
Revision tags: release/13.2.0, release/12.4.0, release/13.1.0, release/12.3.0 |
|
#
2760658b |
| 03-May-2021 |
Alexander Motin <mav@FreeBSD.org> |
Improve UMA cache reclamation.
When estimating working set size, measure only allocation batches, not free batches. Allocation and free patterns can be very different. For example, ZFS on a vm_lowmem event can free a few gigabytes of memory to UMA in one call, but that does not mean it will request the same amount back as quickly; in fact it won't.
Update the working set size on every reclamation call, shrinking caches faster under pressure. The lack of this caused repeated vm_lowmem events that squeezed more and more memory out of real consumers only to leave it stuck in UMA caches. I saw ZFS drop its ARC size in half before the previous algorithm, after a periodic WSS update, decided to reclaim the UMA caches.
Introduce voluntary reclamation of UMA caches not used for a long time. For each zdom, track a long-term minimal cache size watermark and free some unused items every UMA_TIMEOUT after the first 15 minutes without cache misses. The freed memory can be put to better use by other consumers; for example, ZFS won't grow its ARC unless it sees free memory, since it does not know the memory is not really used. And even if the memory is not really needed, periodically freeing it during inactivity should reduce fragmentation.
Reviewed by: markj, jeff (previous version)
MFC after: 2 weeks
Sponsored by: iXsystems, Inc.
Differential Revision: https://reviews.freebsd.org/D29790
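A minimal userland sketch of the bookkeeping described above, not the kernel code itself; the names (struct wss_zdom, wss_record_alloc() and friends) are hypothetical and the real per-domain accounting in uma_core.c differs in detail:

#include <stdint.h>

struct wss_zdom {
        uint64_t nitems;        /* items currently sitting in the cache */
        uint64_t allocs;        /* items taken from the cache this period */
        uint64_t wss;           /* working-set-size estimate */
        uint64_t limin;         /* long-term minimum cache size watermark */
};

static void
wss_record_alloc(struct wss_zdom *zd, uint64_t n)
{
        /* Only allocation batches feed the estimate. */
        zd->allocs += n;
        zd->nitems -= n;
        if (zd->nitems < zd->limin)
                zd->limin = zd->nitems;
}

static void
wss_record_free(struct wss_zdom *zd, uint64_t n)
{
        /* Free batches refill the cache but say nothing about future demand. */
        zd->nitems += n;
}

static uint64_t
wss_reclaim_target(struct wss_zdom *zd)
{
        /*
         * Run on every reclamation pass: the working set is what was
         * actually allocated since the last pass; anything the cache
         * holds beyond that is a candidate to be freed under pressure.
         */
        zd->wss = zd->allocs;
        zd->allocs = 0;
        return (zd->nitems > zd->wss ? zd->nitems - zd->wss : 0);
}

static uint64_t
wss_idle_items(struct wss_zdom *zd)
{
        /*
         * After a long quiet period without cache misses, items below the
         * long-term minimum were never dipped into and can be released
         * voluntarily; reset the watermark for the next period.
         */
        uint64_t idle = zd->limin;

        zd->limin = zd->nitems;
        return (idle);
}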
|
#
aabe13f1 |
| 14-Apr-2021 |
Mark Johnston <markj@FreeBSD.org> |
uma: Introduce per-domain reclamation functions
Make it possible to reclaim items from a specific NUMA domain.
- Add uma_zone_reclaim_domain() and uma_reclaim_domain().
- Permit parallel reclamations. Use a counter instead of a flag to synchronize with zone_dtor().
- Use the zone lock to protect cache_shrink() now that parallel reclaims can happen.
- Add a sysctl that can be used to trigger reclamation from a specific domain.
Currently the new KPIs are unused, so there should be no functional change.
Reviewed by: mav
MFC after: 2 weeks
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D29685
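A hedged sketch of how a caller might eventually use the new KPIs; the function names follow the commit text and the UMA_RECLAIM_* request values follow the existing uma_reclaim() interface, but check sys/vm/uma.h on your branch before relying on the exact signatures:

#include <vm/uma.h>

static void
trim_domain(uma_zone_t zone, int domain)
{
        /* Trim one zone's excess cached items for a single NUMA domain. */
        uma_zone_reclaim_domain(zone, UMA_RECLAIM_TRIM, domain);

        /* Or, under memory pressure, drain every zone's cache for it. */
        uma_reclaim_domain(UMA_RECLAIM_DRAIN, domain);
}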
|
Revision tags: release/13.0.0 |
|
#
d2f1c44b |
| 27-Dec-2020 |
Mark Johnston <markj@FreeBSD.org> |
uma: Remove the MINBUCKET flag from the flag name list
This should have been done in r368399 / commit f8b6c51538fab88a7a62a399fb0948806b06133c.
Reported by: rlibby
Sponsored by: The FreeBSD Foundation
|
Revision tags: release/12.2.0 |
|
#
c3aa3bf9 |
| 01-Sep-2020 |
Mateusz Guzik <mjg@FreeBSD.org> |
vm: clean up empty lines in .c and .h files
|
#
a2e19465 |
| 28-Aug-2020 |
Eric van Gyzen <vangyzen@FreeBSD.org> |
memstat_kvm_uma: fix reading of uma_zone_domain structures
Coverity flagged the scaling by sizeof(uzd). That is the type of the pointer, so the scaling was already done by pointer arithmetic. However, this was also passing a stack frame pointer to kvm_read, so it was doubly wrong.
Move ZDOM_GET into the !_KERNEL section and use it in libmemstat.
Reported by: Coverity
Reviewed by: markj
MFC after: 2 weeks
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D26213
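An illustration of the Coverity finding rather than the libmemstat code itself; struct zdom_shadow is a made-up stand-in for uma_zone_domain, but kvm_read(3) is the real interface:

#include <sys/types.h>
#include <kvm.h>
#include <stdint.h>

struct zdom_shadow { uint64_t nitems, imax, imin; };    /* stand-in layout */

static int
read_zdom(kvm_t *kd, struct zdom_shadow *kaddr, int i, struct zdom_shadow *out)
{
        /*
         * Wrong: (kaddr + i * sizeof(*kaddr)) double-scales, because
         * pointer arithmetic on a typed pointer already multiplies the
         * index by the element size; and the source must be a kernel
         * address, not the address of a local copy on our own stack.
         */
        return (kvm_read(kd, (unsigned long)(uintptr_t)(kaddr + i), out,
            sizeof(*out)) == (ssize_t)sizeof(*out));
}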
|
#
c8b0a88b |
| 20-Jun-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Clarify some language. Favor primary where both master and primary were used in conjunction with secondary.
|
Revision tags: release/11.4.0 |
|
#
16b90565 |
| 10-Mar-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r358731 through r358831.
|
#
54007ce8 |
| 07-Mar-2020 |
Mark Johnston <markj@FreeBSD.org> |
Clean up uma_int.h a bit.
This makes it easier to write libkvm programs that access UMA data structures.
- Remove a couple of unused slab functions and make others local to uma_core.c. Similarly move SLAB_BITSETS, which affects the layout of slab structures, to uma_core.c.
- Stop defining the slab structures under _KERNEL. There's no real reason they can't be visible to userspace like the rest of UMA's structures are.
- Group KEG_ASSERT_COLD with other keg macros.
- Convert an assertion about MAXMEMDOM to use _Static_assert.
No functional change intended.
Discussed with: jeff
Reviewed by: rlibby
Sponsored by: The FreeBSD Foundation
Differential Revision: https://reviews.freebsd.org/D23980
|
#
43c7dd6b |
| 19-Feb-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r358075 through r358130.
|
#
c6fd3e23 |
| 19-Feb-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Use per-domain locks for the bucket cache.
This gives much better concurrency when there are a large number of cores per-domain and multiple domains. Avoid taking the lock entirely if it will not be productive. ROUNDROBIN domains will have mixed memory in each domain and will load balance to all domains.
While here refactor the zone/domain separation and bucket limits to simplify callers.
Reviewed by: markj
Differential Revision: https://reviews.freebsd.org/D23673
|
#
44e86fbd |
| 13-Feb-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r357662 through r357854.
|
#
4ab3aee8 |
| 11-Feb-2020 |
Mark Johnston <markj@FreeBSD.org> |
Reduce lock hold time in keg_drain().
Maintain a count of free slabs in the per-domain keg structure and use that to clear the free slab list in constant time for most cases. This helps minimize lock contention induced by reclamation, in preparation for proactive trimming of excesses of free memory.
Reviewed by: jeff, rlibby
Tested by: pho
Differential Revision: https://reviews.freebsd.org/D23532
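A hedged sketch of the constant-time technique using sys/queue.h directly rather than UMA's actual keg structures; struct slab, struct keg_domain and the field names here are invented: keep a running count, detach the whole list in O(1) while the lock is held, then free the slabs after dropping it.

#include <sys/queue.h>

struct slab {
        LIST_ENTRY(slab) link;
};
LIST_HEAD(slablist, slab);

struct keg_domain {
        struct slablist free_slabs;
        unsigned        nfree;          /* maintained on insert/remove */
};

static unsigned
keg_domain_detach_free(struct keg_domain *dom, struct slablist *out)
{
        unsigned n = dom->nfree;

        /* O(1): swap list heads instead of walking and unlinking slabs. */
        LIST_INIT(out);
        LIST_SWAP(out, &dom->free_slabs, slab, link);
        dom->nfree = 0;
        return (n);                     /* caller frees *out unlocked */
}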
|
#
bc02c18c |
| 07-Feb-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r357408 through r357661.
|
#
bae55c4a |
| 06-Feb-2020 |
Ryan Libby <rlibby@FreeBSD.org> |
uma: remove UMA_ZFLAG_CACHEONLY flag
UMA_ZFLAG_CACHEONLY was essentially the same thing as UMA_ZONE_VM, but with a more confusing name. Remove the flag, make UMA_ZONE_VM an inherit flag, and replace all references.
Reviewed by: markj
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D23516
|
#
ec0d8280 |
| 04-Feb-2020 |
Ryan Libby <rlibby@FreeBSD.org> |
uma: add UMA_ZONE_CONTIG, and a default contig_alloc
For now, copy the mbuf allocator.
Reviewed by: jeff, markj (previous version)
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D23237
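A hedged example of the new flag from a consumer's point of view; the zone name and item size are invented, while uma_zcreate()'s argument list is the long-standing one from uma(9):

#include <vm/uma.h>

static uma_zone_t jumbo_zone;

static void
jumbo_zone_init(void)
{
        /* Multi-page items that must be physically contiguous (e.g. for DMA). */
        jumbo_zone = uma_zcreate("jumbo_bufs", 16 * 1024,
            NULL, NULL, NULL, NULL, UMA_ALIGN_CACHE, UMA_ZONE_CONTIG);
}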
|
#
dc3915c8 |
| 04-Feb-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Use STAILQ instead of TAILQ for bucket lists. We only need FIFO behavior and this is more space efficient.
Stop queueing recently used buckets to the head of the list. If the bucket goes to a different processor the cache coherency will be more expensive. We already try to encourage cache-hot behavior in the per-cpu layer.
Reviewed by: rlibby
Differential Revision: https://reviews.freebsd.org/D23493
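A self-contained illustration of the STAILQ FIFO pattern from sys/queue.h (the struct names are placeholders, not UMA's): an STAILQ_ENTRY is a single pointer versus two for a TAILQ_ENTRY, and FIFO order only needs tail insertion and head removal.

#include <sys/queue.h>
#include <stddef.h>

struct bucket {
        STAILQ_ENTRY(bucket) link;      /* one pointer per element */
        int nitems;
};
STAILQ_HEAD(bucketlist, bucket);

static void
bucket_push(struct bucketlist *bl, struct bucket *b)
{
        STAILQ_INSERT_TAIL(bl, b, link);        /* producers append */
}

static struct bucket *
bucket_pop(struct bucketlist *bl)
{
        struct bucket *b = STAILQ_FIRST(bl);    /* consumers take the oldest */

        if (b != NULL)
                STAILQ_REMOVE_HEAD(bl, link);
        return (b);
}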
|
#
59abbffa |
| 31-Jan-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r357270 through r357349.
|
#
d4665eaa |
| 31-Jan-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Implement a safe memory reclamation feature that is tightly coupled with UMA.
This is in the same family of algorithms as Epoch/QSBR/RCU/PARSEC but is a unique algorithm. It has 3x the performance of epoch in a write-heavy workload with less than half of the read-side cost. The memory overhead is significantly lessened by limiting the free-to-use latency. A synthetic test uses 1/20th of the memory vs. Epoch. There is significant further discussion in the comments and code review.
This code should be considered experimental. I will write a man page after it has settled. After further validation the VM will begin using this feature to permit lockless page lookups.
Both markj and cperciva tested on arm64 at large core counts to verify fences on weaker ordering architectures. I will commit a stress testing tool in a follow-up.
Reviewed by: mmacy, markj, rlibby, hselasky
Discussed with: sbahara
Differential Revision: https://reviews.freebsd.org/D22586
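A heavily hedged sketch of the read-side pattern such a scheme enables; the KPI names used here (UMA_ZONE_SMR, uma_zone_get_smr(), smr_enter(), smr_leave(), uma_zfree_smr()) should be verified against sys/smr.h and uma(9) for the branch in question, and the lookup structure is invented:

#include <sys/types.h>
#include <sys/smr.h>
#include <vm/uma.h>

static uma_zone_t obj_zone;     /* created elsewhere with the UMA_ZONE_SMR flag */

struct obj {
        int key;
};

static int
obj_lookup(struct obj **slot, int key)
{
        smr_t smr = uma_zone_get_smr(obj_zone);
        struct obj *o;
        int found;

        smr_enter(smr);         /* lockless read-side section begins */
        o = *slot;              /* a real lookup would use atomic loads */
        found = (o != NULL && o->key == key);
        smr_leave(smr);         /* o cannot be reused while we were inside */
        return (found);
}

static void
obj_retire(struct obj *o)
{
        /* Reuse of the item is deferred until concurrent readers drain. */
        uma_zfree_smr(obj_zone, o);
}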
|
#
9b8db4d0 |
| 14-Jan-2020 |
Ryan Libby <rlibby@FreeBSD.org> |
uma: split slabzone into two sizes
By allowing more items per slab, we can improve memory efficiency for small allocs. If we were just to increase the bitmap size of the slabzone, we would then waste slabzone memory. So, split slabzone into two zones, one especially for 8-byte allocs (512 per slab). The practical effect should be reduced memory usage for counter(9).
Reviewed by: jeff, markj
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D23149
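Back-of-the-envelope arithmetic behind the quoted numbers, assuming a 4 KB slab and ignoring header placement details:

#include <stdio.h>

int
main(void)
{
        const unsigned slab = 4096, item = 8;
        unsigned items = slab / item;           /* 512 eight-byte items */
        unsigned bitset = items / 8;            /* 64-byte free bitmap */

        printf("%u items per slab, %u-byte bitset\n", items, bitset);
        return (0);
}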
|
#
4a8b575c |
| 09-Jan-2020 |
Ryan Libby <rlibby@FreeBSD.org> |
uma: unify layout paths and improve efficiency
Unify the keg layout selection paths (keg_small_init, keg_large_init, keg_cachespread_init), and slightly improve memory efficiency by:
- using the padding of the final item to store the slab header,
- not going OFFPAGE if we have a choice unless it improves efficiency.
Reviewed by: jeff, markj
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D23048
|
#
54c5ae80 |
| 09-Jan-2020 |
Ryan Libby <rlibby@FreeBSD.org> |
uma: reorganize flags
- Garbage collect UMA_ZONE_PAGEABLE & UMA_ZONE_STATIC.
- Move flag VTOSLAB from public to private.
- Introduce public NOTPAGE flag and make HASH private.
- Introduce public NOTOUCH flag and make OFFPAGE private.
- Update man page.
The net effect of this should be to make the contract with clients more clear. Clients should choose constraints, UMA will figure out how to implement them. This also breaks the confusing double meaning of OFFPAGE.
Reviewed by: jeff, markj
Sponsored by: Dell EMC Isilon
Differential Revision: https://reviews.freebsd.org/D23016
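A hedged example of the constraint-style contract described above; the zone name and size are invented, and UMA_ZONE_NOTOUCH is the public flag this commit introduces for items UMA itself must never write to, leaving OFFPAGE as an internal implementation detail:

#include <vm/uma.h>

static uma_zone_t fw_zone;

static void
fw_zone_init(void)
{
        /*
         * The items hold externally owned data, so UMA must not scribble
         * free-list pointers into them; how that constraint is satisfied
         * (e.g. keeping slab headers off-page) is now UMA's problem.
         */
        fw_zone = uma_zcreate("fw_bufs", 2048, NULL, NULL, NULL, NULL,
            UMA_ALIGN_CACHE, UMA_ZONE_NOTOUCH);
}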
|
#
79c9f942 |
| 06-Jan-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Fix uma boot pages calculations on NUMA machines that also don't have MD_UMA_SMALL_ALLOC. This is unusual but not impossible. Fix the alignment of zones while here. This was already correct in practice because uz_cpu strongly aligned the zone structure, but the specified alignment did not match reality and involved redundant defines.
Reviewed by: markj, rlibby
Differential Revision: https://reviews.freebsd.org/D23046
|
#
31c251a0 |
| 04-Jan-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Fix an assertion introduced in r356348. On architectures without UMA_MD_SMALL_ALLOC, vmem has a more complicated startup sequence that violated the new assert. Resolve this by rewriting the COLD asserts to look at the per-cpu allocation counts for evidence of API activity.
Discussed with: rlibby
Reviewed by: markj
Reported by: lwhsu
|