#
53d2936c |
| 20-Jan-2020 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r356848 through r356919.
|
#
d6e13f3b |
| 20-Jan-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Don't hold the object lock while calling getpages.
The vnode pager does not want the object lock held. Moving this out allows further object lock scope reduction in callers. While here add some missing paging in progress calls and an assert. The object handle is now protected explicitly with pip.
Reviewed by: kib, markj Differential Revision: https://reviews.freebsd.org/D23033
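A minimal sketch of the pattern this describes (not the committed code), assuming the pages in ma[] are already busied as the pager interface requires:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>
    #include <vm/vm_pager.h>

    /*
     * Hedged sketch: take a paging-in-progress reference, drop the object
     * lock, and only then call the pager, since the vnode pager does not
     * want the object lock held.  Error handling is omitted.
     */
    static int
    getpages_unlocked_sketch(vm_object_t object, vm_page_t *ma, int count)
    {
            int rv;

            VM_OBJECT_WLOCK(object);
            vm_object_pip_add(object, 1);   /* keeps the object/handle stable */
            VM_OBJECT_WUNLOCK(object);

            rv = vm_pager_get_pages(object, ma, count, NULL, NULL);

            vm_object_pip_wakeup(object);   /* drop the pip reference */
            return (rv);
    }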
|
#
98087a06 |
| 19-Jan-2020 |
Jeff Roberson <jeff@FreeBSD.org> |
Make collapse synchronization more explicit and allow it to complete during paging.
Shadow objects are marked with a COLLAPSING flag while they are collapsing with their backing object. This gives us an explicit test rather than overloading paging-in-progress. While split is on-going we mark an object with SPLIT. These two operations will modify the swap tree so they must be serialized and swap_pager_getpages() can now directly detect these conditions and page more conservatively.
Callers to vm_object_collapse() now will reliably wait for a collapse to finish so that the backing chain is as short as possible before other decisions are made that may inflate the object chain. For example, split, coalesce, etc. It is now safe to run fault concurrently with collapse. It is safe to increase or decrease paging in progress with no lock so long as there is another valid ref on increase.
This change makes collapse more reliable as a secondary benefit. The primary benefit is making it safe to drop the object lock much earlier in fault or never acquire it at all.
This was tested with a new shadow chain test script that uncovered long standing bugs and will be integrated with stress2.
Reviewed by: kib, markj Differential Revision: https://reviews.freebsd.org/D22908
|
#
b249ce48 |
| 03-Jan-2020 |
Mateusz Guzik <mjg@FreeBSD.org> |
vfs: drop the mostly unused flags argument from VOP_UNLOCK
Filesystems which want to use it in limited capacity can employ the VOP_UNLOCK_FLAGS macro.
Reviewed by: kib (previous version) Differential Revision: https://reviews.freebsd.org/D21427
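What this means for callers, as a hedged before/after sketch (it assumes the vnode lock is held on entry; "flags" stands in for whatever lockmgr flags a filesystem might still need):

    #include <sys/param.h>
    #include <sys/vnode.h>

    /* After the change, the common case takes no flags argument. */
    static void
    unlock_sketch(struct vnode *vp)
    {
            /* Before: VOP_UNLOCK(vp, 0); */
            VOP_UNLOCK(vp);
    }

    /* The rare filesystem that still needs flags uses the new macro. */
    static void
    unlock_flags_sketch(struct vnode *vp, int flags)
    {
            VOP_UNLOCK_FLAGS(vp, flags);
    }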
|
#
9f5632e6 |
| 28-Dec-2019 |
Mark Johnston <markj@FreeBSD.org> |
Remove page locking for queue operations.
With the previous reviews, the page lock is no longer required in order to perform queue operations on a page. It is also no longer needed in the page queue scans. This change effectively eliminates remaining uses of the page lock and also the false sharing caused by multiple pages sharing a page lock.
Reviewed by: jeff Tested by: pho Sponsored by: Netflix, Intel Differential Revision: https://reviews.freebsd.org/D22885
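A hedged before/after sketch of what this removes from callers, using vm_page_deactivate() as the example:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    static void
    deactivate_sketch(vm_page_t m)
    {
            /*
             * Before: the page lock was required around queue operations.
             *
             *      vm_page_lock(m);
             *      vm_page_deactivate(m);
             *      vm_page_unlock(m);
             */

            /* After: the atomically maintained queue state suffices. */
            vm_page_deactivate(m);
    }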
|
#
a8081778 |
| 15-Dec-2019 |
Jeff Roberson <jeff@FreeBSD.org> |
Add a deferred free mechanism for freeing swap space that does not require an exclusive object lock.
Previously swap space was freed on a best effort basis when a page that had valid swap was dirtied, thus invalidating the swap copy. This may be done inconsistently and requires the object lock which is not always convenient.
Instead, track when swap space is present. The first dirty is responsible for deleting space or setting PGA_SWAP_FREE which will trigger background scans to free the swap space.
Simplify the locking in vm_fault_dirty() now that we can reliably identify the first dirty.
Discussed with: alc, kib, markj Differential Revision: https://reviews.freebsd.org/D22654
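A hedged sketch of the first-dirty policy described above; the helper name and the object_locked parameter are illustrative, not the committed code:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>
    #include <vm/swap_pager.h>

    static void
    first_dirty_sketch(vm_object_t object, vm_page_t m, bool object_locked)
    {
            /*
             * With the object lock the swap copy can be invalidated right
             * away; without it, PGA_SWAP_FREE defers the free to the
             * background page scans.
             */
            if (object_locked)
                    swap_pager_freespace(object, m->pindex, 1);
            else
                    vm_page_aflag_set(m, PGA_SWAP_FREE);
    }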
|
#
5cff1f4d |
| 10-Dec-2019 |
Mark Johnston <markj@FreeBSD.org> |
Introduce vm_page_astate.
This is a 32-bit structure embedded in each vm_page, consisting mostly of page queue state. The use of a structure makes it easy to store a snapshot of a page's queue state in a stack variable and use cmpset loops to update that state without requiring the page lock.
This change merely adds the structure and updates references to atomic state fields. No functional change intended.
Reviewed by: alc, jeff, kib Sponsored by: Netflix, Intel Differential Revision: https://reviews.freebsd.org/D22650
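The update pattern this enables looks roughly as follows; treat the accessor names and the act_count field as illustrative of the vm_page.h helpers rather than exact signatures:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /* Hedged sketch: snapshot, modify the copy, compare-and-set, retry. */
    static void
    clear_act_count_sketch(vm_page_t m)
    {
            vm_page_astate_t old, new;

            old = vm_page_astate_load(m);
            do {
                    new = old;
                    new.act_count = 0;      /* example modification */
            } while (!vm_page_astate_fcmpset(m, &old, new));
    }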
|
#
abd80ddb |
| 08-Dec-2019 |
Mateusz Guzik <mjg@FreeBSD.org> |
vfs: introduce v_irflag and make v_type smaller
The current vnode layout is not SMP-friendly: frequently read data avoidably shares cachelines with very frequently modified fields. In particular, v_iflag, inspected for VI_DOOMED, can be found in the same cacheline as v_usecount. Instead, make the flag available in the same cacheline as v_op, v_data and v_type, which all get read all the time.
v_type is avoidably 4 bytes while the necessary data will easily fit in 1. Shrinking it frees up 3 bytes, 2 of which get used here to introduce a new flag field with a new value: VIRF_DOOMED.
Reviewed by: kib, jeff Differential Revision: https://reviews.freebsd.org/D22715
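The caller-visible effect, as a hedged sketch (the returned errno is illustrative):

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/vnode.h>

    static int
    doomed_check_sketch(struct vnode *vp)
    {
            /* Previously: (vp->v_iflag & VI_DOOMED) != 0, next to v_usecount. */
            if ((vp->v_irflag & VIRF_DOOMED) != 0)
                    return (ENOENT);
            return (0);
    }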
|
#
67388836 |
| 01-Dec-2019 |
Konstantin Belousov <kib@FreeBSD.org> |
Store the bottom of the shadow chain in OBJ_ANON object->handle member.
The handle value is stable for all shadow objects in the inheritance chain. This makes it possible to avoid descending the shadow chain to reach its bottom in vm_map_entry_set_vnode_text(), and eliminates the corresponding object relocking, which appeared to be contended.
Change vm_object_allocate_anon() and vm_object_shadow() to handle more of the cred/charge initialization for the new shadow object, in addition to setting up the handle.
Reported by: jeff Reviewed by: alc (previous version), jeff (previous version) Tested by: pho Sponsored by: The FreeBSD Foundation Differential Revision: https://reviews.freebsd.org/D22541
|
#
63967687 |
| 20-Nov-2019 |
Jeff Roberson <jeff@FreeBSD.org> |
Simplify anonymous memory handling with an OBJ_ANON flag. This eliminates redundant, complicated checks and additional locking required only for anonymous memory. Introduce vm_object_allocate_anon() to create these objects. DEFAULT and SWAP objects now have the correct settings for non-anonymous consumers, so individual consumers need not modify the default flags to create super-pages and avoid ONEMAPPING/NOSPLIT.
Reviewed by: alc, dougm, kib, markj Tested by: pho Differential Revision: https://reviews.freebsd.org/D22119
|
#
8ecbf14b |
| 19-Nov-2019 |
Doug Moore <dougm@FreeBSD.org> |
Drop the extra argument from swp_pager_meta_ctl and have it do lookup only. Rename it swp_pager_meta_lookup. Stop checking for obj->type == swap there and assert it instead. Make the caller responsible for the obj->type check.
Move the meta_ctl 'pop' functionality to swap_pager_unswapped, the only place that uses it, and assume obj->type == swap there too.
Assisted by: ota_j.email.ne.jp Reviewed by: kib Tested by: pho Differential Revision: https://reviews.freebsd.org/D22437
|
#
abdab7b6 |
| 17-Nov-2019 |
Doug Moore <dougm@FreeBSD.org> |
Add a helper function for testing a swap block and freeing it if empty.
Submitted by: ota_j.email.ne.jp Approved by: alc, kib, dougm Differential Revision: https://reviews.freebsd.org/D22402
|
#
467057fc |
| 11-Nov-2019 |
Doug Moore <dougm@FreeBSD.org> |
swap_pager_meta_free() frees allocated blocks in a way that exploits the sparsity of allocated blocks in a range, without issuing an "are you there?" query for every block in the range. swap_pager_copy() is not so smart. Modify the implementation of swap_pager_meta_free() slightly so that swap_pager_copy() can use that smarter implementation too.
Based on an observation of: Yoshihiro Ota (ota_j.email.ne.jp) Reviewed by: kib,alc Tested by: pho Differential Revision: https://reviews.freebsd.org/D22280
|
Revision tags: release/12.1.0 |
|
#
303fa05a |
| 17-Oct-2019 |
Konstantin Belousov <kib@FreeBSD.org> |
swapon_check_swzone(): use already calculated static variables.
Submitted by: ota@j.email.ne.jp MFC after: 1 week Differential revision: https://reviews.freebsd.org/D22065
|
#
0012f373 |
| 15-Oct-2019 |
Jeff Roberson <jeff@FreeBSD.org> |
(4/6) Protect page valid with the busy lock.
Atomics are used for page busy and valid state when the shared busy is held. The details of the locking protocol and valid and dirty synchronization are in the updated vm_page.h comments.
Reviewed by: kib, markj Tested by: pho Sponsored by: Netflix, Intel Differential Revision: https://reviews.freebsd.org/D21594
|
#
8b3bc70a |
| 08-Oct-2019 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r352764 through r353315.
|
#
2288078c |
| 08-Oct-2019 |
Doug Moore <dougm@FreeBSD.org> |
Define a macro, VM_MAP_ENTRY_FOREACH, for enumerating the entries in a vm_map. If the implementation ever changes from using a chain of next pointers, changing the macro definition will be necessary, but changing all the files that iterate over vm_map entries will not.
Drop a counter in vm_object.c that would have an effect only if the vm_map entry count was wrong.
Discussed with: alc Reviewed by: markj Tested by: pho (earlier version) Differential Revision: https://reviews.freebsd.org/D21882
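Usage then looks like the following hedged sketch; the macro hides the fact that entries are currently chained through next pointers:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/mutex.h>
    #include <vm/vm.h>
    #include <vm/vm_map.h>

    /* Sum the virtual address span covered by a map's entries. */
    static vm_size_t
    map_span_sketch(vm_map_t map)
    {
            vm_map_entry_t entry;
            vm_size_t total;

            total = 0;
            vm_map_lock_read(map);
            VM_MAP_ENTRY_FOREACH(entry, map) {
                    total += entry->end - entry->start;
            }
            vm_map_unlock_read(map);
            return (total);
    }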
|
#
e8bcf696 |
| 16-Sep-2019 |
Mark Johnston <markj@FreeBSD.org> |
Revert r352406, which contained changes I didn't intend to commit.
|
#
41fd4b94 |
| 16-Sep-2019 |
Mark Johnston <markj@FreeBSD.org> |
Fix a couple of nits in r352110.
- Remove a dead variable from the amd64 pmap_extract_and_hold(). - Fix grammar in the vm_page_wire man page.
Reported by: alc Reviewed by: alc, kib Sponsored by: Netflix Differential Revision: https://reviews.freebsd.org/D21639
|
#
f993ed2f |
| 09-Sep-2019 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r351732 through r352104.
|
#
fe7bcbaf |
| 03-Sep-2019 |
Kyle Evans <kevans@FreeBSD.org> |
vm pager: writemapping accounting for OBJT_SWAP
Currently writemapping accounting is done only for the vnode_pager, which does some accounting on the underlying vnode.
Extend this to make such accounting possible for any of the pager types. New pageops are added to update/release the writecount; they must be implemented by any pager wishing to do this accounting, and we implement these methods now for both vnode_pager (unchanged) and swap_pager.
The primary motivation for this is to allow other systems with OBJT_SWAP objects to check if their objects have any write mappings and reject operations with EBUSY if so. posixshm will be the first to do so in order to reject adding write seals to the shmfd if any writable mappings exist.
Reviewed by: kib, markj Differential Revision: https://reviews.freebsd.org/D21456
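The kind of check this enables, as a loose sketch; the writemappings field name, its type, and the locking shown are recalled from the swap pager part of the object union and should be treated as assumptions:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>

    /* Refuse to add a write seal while writable mappings exist. */
    static int
    deny_write_seal_sketch(vm_object_t obj)
    {
            vm_ooffset_t writemappings;

            VM_OBJECT_RLOCK(obj);
            writemappings = obj->un_pager.swp.writemappings;
            VM_OBJECT_RUNLOCK(obj);
            return (writemappings > 0 ? EBUSY : 0);
    }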
|
#
cf27e0d1 |
| 20-Aug-2019 |
Jeff Roberson <jeff@FreeBSD.org> |
Use an atomic reference count for paging in progress so that callers do not require the object lock.
Reviewed by: markj Tested by: pho (as part of a larger branch) Sponsored by: Netflix Differential Revision: https://reviews.freebsd.org/D21311
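A hedged sketch of the idea, not the committed implementation: the increment side of paging-in-progress becomes a plain atomic reference count, so it needs no object lock; the decrement side still has to wake up any waiters, which vm_object_pip_wakeup() handles.

    #include <sys/param.h>
    #include <sys/refcount.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>

    static void
    pip_acquire_sketch(vm_object_t object)
    {
            /* Conceptually what vm_object_pip_add(object, 1) now boils down to. */
            refcount_acquire(&object->paging_in_progress);
    }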
|
#
504f5e29 |
| 15-Aug-2019 |
Doug Moore <dougm@FreeBSD.org> |
swap_pager.c reserves 2 blocks for a bsd label. Change that 2 to the expression howmany(BBSIZE, PAGE_SIZE), where BBSIZE is the size of the boot block area. That can be less than 2 if PAGE_SIZE is big.
swapon(8) has an option to trim (delete) all the blocks of a device at startup. However, if the first of those blocks is a bsd label, then trimming those blocks is destructive. Change swapon to leave the first BBSIZE bytes untrimmed.
Update the manual pages to reflect the changes to swapon and how it may be used, especially in association with savecore.
Reviewed by: alc Approved by: markj (mentor) MFC after: 3 days Differential Revision: https://reviews.freebsd.org/D21191
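The arithmetic is easy to check standalone; this sketch assumes BBSIZE is 8192, its value in <sys/disklabel.h>:

    #include <stdio.h>

    #define BBSIZE  8192                              /* boot block area */
    #define howmany(x, y)   (((x) + ((y) - 1)) / (y)) /* as in <sys/param.h> */

    int
    main(void)
    {
            int sizes[] = { 4096, 8192, 16384 };

            /* Prints 2 blocks for 4 KB pages, 1 block for larger pages. */
            for (int i = 0; i < 3; i++)
                    printf("PAGE_SIZE=%d -> %d reserved block(s)\n",
                        sizes[i], howmany(BBSIZE, sizes[i]));
            return (0);
    }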
|
#
58df81b3 |
| 30-Jul-2019 |
Alan Somers <asomers@FreeBSD.org> |
MFHead @350426
Sponsored by: The FreeBSD Foundation
|
#
23612f0d |
| 28-Jul-2019 |
Doug Moore <dougm@FreeBSD.org> |
In swap_pager_putpages, move the initialization of a free-blocks counter, and the final freeing of freed swap blocks, outside the region where an object lock is held. Correct some style(9) and spelling errors. Change a panic() to a KASSERT(). Change a boolean_t to a bool.
Suggested by: alc Reviewed by: alc Approved by: kib, markj (mentors) Differential Revision: https://reviews.freebsd.org/D21093
|