Revision tags: release/14.0.0

# c9130a46 | 18-Sep-2023 | Mateusz Guzik <mjg@FreeBSD.org>
drm2: whack set-but-not-used warns

# 685dc743 | 16-Aug-2023 | Warner Losh <imp@FreeBSD.org>
sys: Remove $FreeBSD$: one-line .c pattern
Remove /^[\s*]*__FBSDID\("\$FreeBSD\$"\);?\s*\n/
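For reference, a minimal sketch of the kind of line this pattern matched; the preamble shown is illustrative, not a specific file from the tree:

    /*
     * Typical pre-14 .c file preamble; the __FBSDID line is what the
     * regex above matched and removed.
     */
    #include <sys/cdefs.h>
    __FBSDID("$FreeBSD$");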

Revision tags: release/13.2.0, release/12.4.0, release/13.1.0, release/12.3.0, release/13.0.0, release/12.2.0, release/11.4.0

# 91019ea7 | 29-Feb-2020 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r358400 through r358465.

# 7aaf252c | 28-Feb-2020 | Jeff Roberson <jeff@FreeBSD.org>
Convert a few trivial consumers to the new unlocked grab API.
Reviewed by: kib, markj
Differential Revision: https://reviews.freebsd.org/D23847
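
A minimal sketch of what such a conversion can look like for one consumer, assuming the unlocked grab entry point vm_page_grab_unlocked() from the referenced work; the helper below is hypothetical, not code from this driver:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>

    /*
     * Hypothetical helper: return a wired, zeroed page backing "pindex"
     * without taking the object lock around the grab.
     */
    static vm_page_t
    grab_backing_page(vm_object_t obj, vm_pindex_t pindex)
    {
            /*
             * Previously: VM_OBJECT_WLOCK(obj); vm_page_grab(obj, pindex,
             * flags); VM_OBJECT_WUNLOCK(obj);
             */
            return (vm_page_grab_unlocked(obj, pindex,
                VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_ZERO));
    }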

Revision tags: release/12.1.0

# 0012f373 | 15-Oct-2019 | Jeff Roberson <jeff@FreeBSD.org>
(4/6) Protect page valid with the busy lock.
Atomics are used for page busy and valid state when the shared busy is held. The details of the locking protocol and valid and dirty synchronization are in the updated vm_page.h comments.
Reviewed by: kib, markj
Tested by: pho
Sponsored by: Netflix, Intel
Differential Revision: https://reviews.freebsd.org/D21594
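
A rough sketch of the resulting rule as it appears to a consumer, assuming the vm_page_all_valid()/vm_page_valid() helpers introduced alongside this series; both functions below are hypothetical examples, not code from this file:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /*
     * Hypothetical reader: with at least the shared busy held, the valid
     * bits are stable, so no page or object lock is needed to test them.
     */
    static bool
    page_needs_io(vm_page_t m)
    {
            vm_page_assert_sbusied(m);
            return (!vm_page_all_valid(m));
    }

    /* Hypothetical writer: mark the page valid under the exclusive busy. */
    static void
    page_mark_valid(vm_page_t m)
    {
            vm_page_assert_xbusied(m);
            vm_page_valid(m);       /* replaces direct writes to m->valid */
    }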

# 61c1328e | 13-Sep-2019 | Dimitry Andric <dim@FreeBSD.org>
Merge ^/head r352105 through r352307.

# c7575748 | 10-Sep-2019 | Jeff Roberson <jeff@FreeBSD.org>
Replace redundant code with a few new vm_page_grab facilities:
- VM_ALLOC_NOCREAT will grab without creating a page.
- vm_page_grab_valid() will grab and page in if necessary.
- vm_page_busy_acquire() automates some busy acquire loops.
Discussed with: alc, kib, markj
Tested by: pho (part of larger branch)
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D21546
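
A minimal sketch of how the new facilities collapse a hand-rolled lookup-and-pagein loop, assuming vm_page_grab_valid() reports the usual VM_PAGER_* status; the wrapper below is hypothetical:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>
    #include <vm/vm_pager.h>

    /*
     * Hypothetical helper: return a busied, fully valid, wired page at
     * "pindex", paging it in if necessary; NULL on failure.  Adding
     * VM_ALLOC_NOCREAT to the flags would fail instead of allocating a
     * missing page.
     */
    static vm_page_t
    grab_valid_page(vm_object_t obj, vm_pindex_t pindex)
    {
            vm_page_t m;
            int rv;

            VM_OBJECT_WLOCK(obj);
            rv = vm_page_grab_valid(&m, obj, pindex,
                VM_ALLOC_NORMAL | VM_ALLOC_WIRED);
            VM_OBJECT_WUNLOCK(obj);
            return (rv == VM_PAGER_OK ? m : NULL);
    }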

# fee2a2fa | 09-Sep-2019 | Mark Johnston <markj@FreeBSD.org>
Change synchronization rules for vm_page reference counting.
There are several mechanisms by which a vm_page reference is held, preventing the page from being freed back to the page allocator. In particular, holding the page's object lock is sufficient to prevent the page from being freed; holding the busy lock or a wiring is sufficient as well. These references are protected by the page lock, which must therefore be acquired for many per-page operations. This results in false sharing since the page locks are external to the vm_page structures themselves and each lock protects multiple structures.
Transition to using an atomically updated per-page reference counter. The object's reference is counted using a flag bit in the counter. A second flag bit is used to atomically block new references via pmap_extract_and_hold() while removing managed mappings of a page. Thus, the reference count of a page is guaranteed not to increase if the page is unbusied, unmapped, and the object's write lock is held. As a consequence of this, the page lock no longer protects a page's identity; operations which move pages between objects are now synchronized solely by the objects' locks.
The vm_page_wire() and vm_page_unwire() KPIs are changed. The former requires that either the object lock or the busy lock is held. The latter no longer has a return value and may free the page if it releases the last reference to that page. vm_page_unwire_noq() behaves the same as before; the caller is responsible for checking its return value and freeing or enqueuing the page as appropriate. vm_page_wire_mapped() is introduced for use in pmap_extract_and_hold(). It fails if the page is concurrently being unmapped, typically triggering a fallback to the fault handler. vm_page_wire() no longer requires the page lock and vm_page_unwire() now internally acquires the page lock when releasing the last wiring of a page (since the page lock still protects a page's queue state). In particular, synchronization details are no longer leaked into the caller.
The change excises the page lock from several frequently executed code paths. In particular, vm_object_terminate() no longer bounces between page locks as it releases an object's pages, and direct I/O and sendfile(SF_NOCACHE) completions no longer require the page lock. In these latter cases we now get linear scalability in the common scenario where different threads are operating on different files.
__FreeBSD_version is bumped. The DRM ports have been updated to accommodate the KPI changes.
Reviewed by: jeff (earlier version)
Tested by: gallatin (earlier version), pho
Sponsored by: Netflix
Differential Revision: https://reviews.freebsd.org/D20486
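
A condensed sketch of the wiring KPI after this change, as a driver-side caller might use it; both helpers below are hypothetical:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>

    /* Hypothetical: pin a page while holding its object lock (or its busy lock). */
    static void
    pin_page(vm_page_t m)
    {
            VM_OBJECT_ASSERT_LOCKED(m->object);
            vm_page_wire(m);                /* page lock no longer required */
    }

    /*
     * Hypothetical: drop the pin.  vm_page_unwire() has no return value and
     * may free the page when the last reference goes away, so "m" must not
     * be touched afterwards.
     */
    static void
    unpin_page(vm_page_t m)
    {
            vm_page_unwire(m, PQ_INACTIVE);
    }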

Revision tags: release/11.3.0, release/12.0.0

# 592ffb21 | 24-Aug-2018 | Warner Losh <imp@FreeBSD.org>
Revert drm2 removal.
Revert r338177, r338176, r338175, r338174, r338172
After long consultations with re@, core members and mmacy, revert these changes. Followup changes will be made to mark them as deprecated and print a message about where to find the up-to-date driver. Followup commits will be made to make this clear in the installer. Followup commits will reduce POLA in ways we're still exploring.
It's anticipated that after the freeze, this will be removed in 13-current (with the residual of the drm2 code copied to sys/arm/dev/drm2 for the TEGRA port's use w/o the intel or radeon drivers).
Due to the impending freeze, there was no formal core vote for this. I've been talking to different core members all day, as well as Matt Macy and Glen Barber. Nobody is completely happy; all are grudgingly going along with this. Work is in progress to mitigate the negative effects as much as possible.
Requested by: re@ (gjb, rgrimes)

Revision tags: release/11.2.0, release/10.4.0, release/11.1.0, release/11.0.1, release/11.0.0, release/10.3.0

# b626f5a7 | 04-Jan-2016 | Glen Barber <gjb@FreeBSD.org>
MFH r289384-r293170
Sponsored by: The FreeBSD Foundation

# 9a7cd2e6 | 22-Dec-2015 | Bjoern A. Zeeb <bz@FreeBSD.org>
MFH @r292599
This includes the pluggable TCP framework and other changes to the netstack to track for VNET stability.
Security: The FreeBSD Foundation

# b0cd2017 | 16-Dec-2015 | Gleb Smirnoff <glebius@FreeBSD.org>
A change to KPI of vm_pager_get_pages() and underlying VOP_GETPAGES().
o With the new KPI consumers can request contiguous ranges of pages, and unlike before, all pages will be kept busied on return, as was previously done only for the 'reqpage'. Now the reqpage goes away. With the new interface it is easier to implement code protected from race conditions.
Such arrayed requests for now should be preceded by a call to vm_pager_haspage() to make sure that the request is possible. This could be improved later, making vm_pager_haspage() obsolete.
Strengthening the promises on the busyness of the array of pages allows us to remove such hacks as swp_pager_free_nrpage() and vm_pager_free_nonreq().
o The new KPI accepts two integer pointers that may optionally point at values for read ahead and read behind, which a pager may do if it can. These pages are completely owned by the pager and are not controlled by the caller.
This shifts the UFS-specific readahead logic from vm_fault.c, which should be file system agnostic, into vnode_pager.c. It also removes one VOP_BMAP() request per hard fault.
Discussed with: kib, alc, jeff, scottl
Sponsored by: Nginx, Inc.
Sponsored by: Netflix
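
A sketch of a consumer of the new interface, assuming the post-change signature vm_pager_get_pages(object, m, count, &rbehind, &rahead) and a preceding has-page check (spelled vm_pager_haspage() in the message above); the helper below is hypothetical:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>
    #include <vm/vm_pager.h>

    /*
     * Hypothetical helper: page in "count" contiguous pages starting at m[0].
     * On VM_PAGER_OK every page in m[] is returned busied; any read-behind or
     * read-ahead pages belong to the pager and are not seen here.
     */
    static int
    read_page_run(vm_object_t obj, vm_page_t *m, int count)
    {
            int rahead, rbehind, rv;

            VM_OBJECT_WLOCK(obj);
            if (!vm_pager_has_page(obj, m[0]->pindex, &rbehind, &rahead)) {
                    VM_OBJECT_WUNLOCK(obj);
                    return (VM_PAGER_FAIL);
            }
            rv = vm_pager_get_pages(obj, m, count, &rbehind, &rahead);
            VM_OBJECT_WUNLOCK(obj);
            return (rv);
    }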

Revision tags: release/10.2.0, release/10.1.0, release/9.3.0, release/10.0.0

# 0bfd163f | 18-Oct-2013 | Gleb Smirnoff <glebius@FreeBSD.org>
Merge head r233826 through r256722.

# 1ccca3b5 | 10-Oct-2013 | Alan Somers <asomers@FreeBSD.org>
IFC @256277
Approved by: ken (mentor)

Revision tags: release/9.2.0

# ef90af83 | 20-Sep-2013 | Peter Grehan <grehan@FreeBSD.org>
IFC @ r255692
Comment out IA32_MISC_ENABLE MSR access - this doesn't exist on AMD. Need to sort out how arch-specific MSRs will be handled.

# d1d01586 | 05-Sep-2013 | Simon J. Gerraty <sjg@FreeBSD.org>
Merge from head

# 46ed9e49 | 04-Sep-2013 | Peter Grehan <grehan@FreeBSD.org>
IFC @ r255209

# 12278dbb | 26-Aug-2013 | Mark Murray <markm@FreeBSD.org>
MFC

# 39db4184 | 25-Aug-2013 | Jean-Sébastien Pédron <dumbbell@FreeBSD.org>
ttm: "to_page->valid = VM_PAGE_BITS_ALL" before vm_page_dirty(to_page)
Approved by: kib@
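
In context, the copy loop sets the valid bits before dirtying, roughly along these lines; this is a simplified sketch, and the surrounding pmap_copy_page() call is an assumption about the context rather than the exact driver code:

    /* Simplified sketch of the ttm swap copy path after this change. */
    pmap_copy_page(from_page, to_page);
    to_page->valid = VM_PAGE_BITS_ALL;  /* contents are fully valid now */
    vm_page_dirty(to_page);             /* dirtying expects a fully valid page */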

# e0653845 | 23-Aug-2013 | Mark Murray <markm@FreeBSD.org>
MFC

# 5944de8e | 22-Aug-2013 | Konstantin Belousov <kib@FreeBSD.org>
Remove the deprecated VM_ALLOC_RETRY flag for vm_page_grab(9). The flag was mandatory since r209792, where vm_page_grab(9) was changed to only support the alloc retry semantic.
Suggested and reviewed by: alc
Sponsored by: The FreeBSD Foundation
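
A minimal before-and-after sketch of a grab call affected by the removal; the object, index, and other flags are placeholders:

    /* Before: the flag had to be passed even though it was mandatory. */
    m = vm_page_grab(obj, pindex, VM_ALLOC_NORMAL | VM_ALLOC_RETRY);

    /* After: the retry semantic is implied and the flag is gone. */
    m = vm_page_grab(obj, pindex, VM_ALLOC_NORMAL);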

# c7aebda8 | 09-Aug-2013 | Attilio Rao <attilio@FreeBSD.org>
The soft and hard busy mechanisms rely on the vm object lock to work. Unify the two concepts into a real, minimal sxlock where the shared acquisition represents the soft busy and the exclusive acquisition represents the hard busy. The old VPO_WANTED mechanism becomes the hard path for this new lock and it becomes per-page rather than per-object. The vm_object lock becomes an interlock for this functionality: it can be held in either read or write mode. However, if the vm_object lock is held in read mode while acquiring or releasing the busy state, the thread owner cannot make any assumption on the busy state unless it is also busying it.
Also:
- Add a new flag to directly shared-busy pages while vm_page_alloc and vm_page_grab are being executed. This will be very helpful once these functions happen under a read object lock.
- Move the swapping sleep into its own per-object flag.
The KPI is heavily changed, which is why the version is bumped. It is very likely that some VM ports users will need to change their own code.
Sponsored by: EMC / Isilon storage division
Discussed with: alc
Reviewed by: jeff, kib
Tested by: gavin, bapt (older version)
Tested by: pho, scottl
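
A rough sketch of how consumers interact with the per-page busy lock after this change, assuming the vm_page_trysbusy()/vm_page_sunbusy()/vm_page_xunbusy() family that accompanies it; the helpers below are hypothetical:

    #include <sys/param.h>
    #include <vm/vm.h>
    #include <vm/vm_page.h>

    /* Hypothetical reader: the shared acquisition is the old soft busy. */
    static bool
    page_soft_busy(vm_page_t m)
    {
            return (vm_page_trysbusy(m) != 0);
    }

    static void
    page_soft_unbusy(vm_page_t m)
    {
            vm_page_sunbusy(m);
    }

    /*
     * The exclusive acquisition is the old hard busy; pages returned by
     * vm_page_grab()/vm_page_alloc() already hold it, and releasing it
     * is a single call.
     */
    static void
    page_hard_unbusy(vm_page_t m)
    {
            vm_page_xunbusy(m);
    }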

# 40f65a4d | 07-Aug-2013 | Peter Grehan <grehan@FreeBSD.org>
IFC @ r254014

# 552311f4 | 17-Jul-2013 | Xin LI <delphij@FreeBSD.org>
IFC @253398

# cfe30d02 | 19-Jun-2013 | Gleb Smirnoff <glebius@FreeBSD.org>
Merge fresh head.