#
45d72c7d |
| 20-Mar-2019 |
Konstantin Belousov <kib@FreeBSD.org> |
vm_fault_copy_entry: accept invalid source pages.
Either msync(MS_INVALIDATE) or the object unlock during vnode truncation can expose invalid pages backing wired entries. Accept them, but do not install them into the destination pmap. We must create copied pages in the copy case, because e.g. vm_object_unwire() expects that the entry is fully backed.
Reported by: syzkaller, via emaste Reported by: syzbot+514d40ce757a3f8b15bc@syzkaller.appspotmail.com Reviewed by: markj Tested by: pho Sponsored by: The FreeBSD Foundation MFC after: 1 week Differential revision: https://reviews.freebsd.org/D19615
|
#
18b18078 |
| 25-Feb-2019 |
Enji Cooper <ngie@FreeBSD.org> |
MFhead@r344527
|
#
a8fe8db4 |
| 25-Feb-2019 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r344178 through r344512.
|
#
e7a9df16 |
| 20-Feb-2019 |
Konstantin Belousov <kib@FreeBSD.org> |
Add kernel support for Intel userspace protection keys feature on Skylake Xeons.
See SDM rev. 68 Vol 3 4.6.2 Protection Keys and the description of the RDPKRU and WRPKRU instructions.
Reviewed by: markj Tested by: pho Sponsored by: The FreeBSD Foundation MFC after: 2 weeks Differential revision: https://reviews.freebsd.org/D18893
|
#
30e009fc |
| 19-Feb-2019 |
Enji Cooper <ngie@FreeBSD.org> |
MFhead@r344270
|
#
c981cbbd |
| 15-Feb-2019 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r343956 through r344177.
|
#
f6893f09 |
| 13-Feb-2019 |
Mark Johnston <markj@FreeBSD.org> |
Implement transparent 2MB superpage promotion for RISC-V.
This includes support for pmap_enter(..., psind=1) as described in the commit log message for r321378.
The changes are largely modelled after amd64. arm64 has more stringent requirements around superpage creation to avoid the possibility of TLB conflict aborts, and these requirements do not apply to RISC-V, which like amd64 permits simultaneous caching of 4KB and 2MB translations for a given page. RISC-V's PTE format includes only two software bits, and as these are already consumed we do not have an analogue for amd64's PG_PROMOTED. Instead, pmap_remove_l2() always invalidates the entire 2MB address range.
pmap_ts_referenced() is modified to clear PTE_A, now that we support both hardware- and software-managed reference and dirty bits. Also fix pmap_fault_fixup() so that it does not set PTE_A or PTE_D on kernel mappings.
Reviewed by: kib (earlier version) Discussed with: jhb Sponsored by: The FreeBSD Foundation Differential Revision: https://reviews.freebsd.org/D18863 Differential Revision: https://reviews.freebsd.org/D18864 Differential Revision: https://reviews.freebsd.org/D18865 Differential Revision: https://reviews.freebsd.org/D18866 Differential Revision: https://reviews.freebsd.org/D18867 Differential Revision: https://reviews.freebsd.org/D18868
|
Revision tags: release/12.0.0 |
|
#
2a22df74 |
| 04-Nov-2018 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r339813 through r340125.
|
#
9f1abe3d |
| 27-Oct-2018 |
Alan Cox <alc@FreeBSD.org> |
Eliminate typically pointless calls to vm_fault_prefault() on soft, copy-on-write faults. On a page fault, when we call vm_fault_prefault(), it probes the pmap and the shadow chain of vm objects to see if there are opportunities to create read and/or execute-only mappings to neighboring pages. For example, in the case of hard faults, such effort typically pays off, that is, mappings are created that eliminate future soft page faults. However, in the case of soft, copy-on-write faults, the effort very rarely pays off. (See the review for some specific data.)
Reviewed by: kib, markj MFC after: 3 weeks Differential Revision: https://reviews.freebsd.org/D17367
|
#
01d4e214 |
| 05-Oct-2018 |
Glen Barber <gjb@FreeBSD.org> |
MFH r338661 through r339200.
Sponsored by: The FreeBSD Foundation
|
#
ab530bf0 |
| 29-Sep-2018 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r338988 through r339014.
|
#
c62637d6 |
| 28-Sep-2018 |
Konstantin Belousov <kib@FreeBSD.org> |
Correct vm_fault_copy_entry() handling of backing file truncation after the file mapping was wired.
If a wired map entry is backed by a vnode and the file is truncated, the corresponding pages are invalidated. vm_fault_copy_entry() should be aware of this and allow for invalid pages past the end of file. Also, such pages should not be mapped into userspace. If userspace accesses the truncated part of the mapping later, it gets a signal; there is no way the kernel can prevent the page fault.
Reported by: andrew using syzkaller Reviewed by: alc Sponsored by: The FreeBSD Foundation Approved by: re (gjb) MFC after: 1 week Differential revision: https://reviews.freebsd.org/D17323
|
#
9f25ab83 |
| 28-Sep-2018 |
Konstantin Belousov <kib@FreeBSD.org> |
In vm_fault_copy_entry(), we should not assert that entry is charged if the dst_object is not of swap type.
It can only happen when the entry does not require copy; otherwise vm_map_protect() already adds the charge. So the assert was right for the case where the swap object was allocated in vm_fault_copy_entry(), but not when it was just copied from src_entry and its type is not swap.
Reported by: andrew using syzkaller Reviewed by: alc Sponsored by: The FreeBSD Foundation Approved by: re (gjb) MFC after: 1 week Differential revision: https://reviews.freebsd.org/D17323
|
#
a60d3db1 |
| 28-Sep-2018 |
Konstantin Belousov <kib@FreeBSD.org> |
In vm_fault_copy_entry(), collect the code to initialize a newly allocated dst_object in a single place.
Suggested and reviewed by: alc Sponsored by: The FreeBSD Foundation Approved by: re (gjb) MFC after: 1 week Differential revision: https://reviews.freebsd.org/D17323
|
#
3af64f03 |
| 11-Sep-2018 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r338392 through r338594.
|
#
23984ce5 |
| 06-Sep-2018 |
Mark Johnston <markj@FreeBSD.org> |
Avoid resource deadlocks when one domain has exhausted its memory. Attempt other allowed domains if the requested domain is below the minimum paging threshold. Block in fork only if all domains available to the forking thread are below the severe threshold rather than any.
Submitted by: jeff Reported by: mjg Reviewed by: alc, kib, markj Approved by: re (rgrimes) Differential Revision: https://reviews.freebsd.org/D16191
|
#
21f01f45 |
| 06-Sep-2018 |
Mark Johnston <markj@FreeBSD.org> |
Remove vm_page_remque().
Testing m->queue != PQ_NONE is not sufficient; see the commit log message for r338276. As of r332974 vm_page_dequeue() handles already-dequeued pages, so just replace vm_page_remque() calls with vm_page_dequeue() calls.
Reviewed by: kib Tested by: pho Approved by: re (marius) Sponsored by: The FreeBSD Foundation Differential Revision: https://reviews.freebsd.org/D17025
|
#
14b841d4 |
| 11-Aug-2018 |
Kyle Evans <kevans@FreeBSD.org> |
MFH @ r337607, in preparation for boarding
|
#
f9c0a512 |
| 10-Aug-2018 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r337286 through r337585.
|
#
2bf8cb38 |
| 08-Aug-2018 |
Alan Cox <alc@FreeBSD.org> |
Add support for pmap_enter(..., psind=1) to the armv6 pmap. In other words, add support for explicitly requesting that pmap_enter() create a 1 MB page mapping. (Essentially, this feature allows the machine-independent layer to create superpage mappings preemptively, and not wait for automatic promotion to occur.)
Export pmap_ps_enabled() to the machine-independent layer.
Add a flag to pmap_pv_insert_pte1() that specifies whether it should fail or reclaim a PV entry when one is not available.
Refactor pmap_enter_pte1() into two functions, one by the same name, that is a general-purpose function for creating pte1 mappings, and another, pmap_enter_1mpage(), that is used to prefault 1 MB read- and/or execute-only mappings for execve(2), mmap(2), and shmat(2).
In addition, as an optimization to pmap_enter(..., psind=0), eliminate the use of pte2_is_managed() from pmap_enter(). Unlike the x86 pmap implementations, armv6 does not have a managed bit defined within the PTE. So, pte2_is_managed() is actually a call to PHYS_TO_VM_PAGE(), which is O(n) in the number of vm_phys_segs[]. All but one call to PHYS_TO_VM_PAGE() in pmap_enter() can be avoided.
Reviewed by: kib, markj, mmel Tested by: mmel MFC after: 6 weeks Differential Revision: https://reviews.freebsd.org/D16555
|
#
398a929f |
| 20-Jul-2018 |
Mark Johnston <markj@FreeBSD.org> |
Add support for pmap_enter(psind = 1) to the arm64 pmap.
See the commit log messages for r321378 and r336288 for descriptions of this functionality.
Reviewed by: alc Differential Revision: https://reviews.freebsd.org/D16303
|
#
103cc0f6 |
| 19-Jul-2018 |
Alan Cox <alc@FreeBSD.org> |
Revert r329254. The underlying cause for the copy-on-write problem in multithreaded programs that was addressed by r329254 was in the implementation of pmap_enter() on some architectures, notably, amd64. kib@, markj@ and I have audited all of the pmap_enter() implementations, and fixed the broken ones, specifically, amd64 (r335784, r335971), i386 (r336092), mips (r336248), and riscv (r336294).
To be clear, the reason to address the problem within pmap_enter() and revert r329254 is not just a matter of principle. An effect of r329254 was that a copy-on-write fault actually entailed two page faults, not one, even for single-threaded programs. Now, in the expected case for either single- or multithreaded programs, we are back to a single page fault to complete a copy-on-write operation. (In extremely rare circumstances, a multithreaded program could suffer two page faults.)
Reviewed by: kib, markj Tested by: truckman MFC after: 3 weeks Differential Revision: https://reviews.freebsd.org/D16301
|
#
8c087371 |
| 14-Jul-2018 |
Alan Cox <alc@FreeBSD.org> |
Add support for pmap_enter(..., psind=1) to the i386 pmap. In other words, add support for explicitly requesting that pmap_enter() create a 2 or 4 MB page mapping. (Essentially, this feature allows the machine-independent layer to create superpage mappings preemptively, and not wait for automatic promotion to occur.)
Export pmap_ps_enabled() to the machine-independent layer.
Add a flag to pmap_pv_insert_pde() that specifies whether it should fail or reclaim a PV entry when one is not available.
Refactor pmap_enter_pde() into two functions, one by the same name, that is a general-purpose function for creating PDE PG_PS mappings, and another, pmap_enter_4mpage(), that is used to prefault 2 or 4 MB read- and/or execute-only mappings for execve(2), mmap(2), and shmat(2).
Reviewed by: kib Tested by: pho Differential Revision: https://reviews.freebsd.org/D16246
|
Revision tags: release/11.2.0 |
|
#
6939b4d3 |
| 30-May-2018 |
Mark Johnston <markj@FreeBSD.org> |
Typo.
PR: 228533 Submitted by: Jakub Piecuch <j.piecuch96@gmail.com> MFC after: 1 week
|
#
6e1e759c |
| 28-May-2018 |
Alan Cox <alc@FreeBSD.org> |
Addendum to r334233. In vm_fault_populate(), since the page lock is held, we must use vm_page_xunbusy_maybelocked() rather than vm_page_xunbusy() to unbusy the page.
Reviewed by: kib X-MFC with: r334233
|