
1 .. SPDX-License-Identifier: GPL-2.0
13 - correctness:
18 - security:
21 - performance:
23 - scaling:
25 - hardware:
27 - integration:
31 - dirty tracking:
33 and framebuffer-based displays
34 - footprint:
37 - reliability:
56 tdp two dimensional paging (vendor neutral term for NPT and EPT)
62 The mmu supports first-generation mmu hardware, which allows an atomic switch
64 two-dimensional paging (AMD's NPT and Intel's EPT). The emulated hardware
76 - when guest paging is disabled, we translate guest physical addresses to
77 host physical addresses (gpa->hpa)
78 - when guest paging is enabled, we translate guest virtual addresses, to
79 guest physical addresses, to host physical addresses (gva->gpa->hpa)
80 - when the guest launches a guest of its own, we translate nested guest
82 addresses, to host physical addresses (ngva->ngpa->gpa->hpa)
94 addresses (gpa->hva); note that two gpas may alias to the same hva, but not
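The gpa->hva step described above is driven by the memslot array. A minimal C sketch of that lookup, using hypothetical `struct memslot` and `gpa_to_hva` names (not the kernel's actual `struct kvm_memory_slot` API) and assuming 4 KiB pages:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified memslot: maps a contiguous range of guest
 * frame numbers (gfns) to a host virtual address range. */
struct memslot {
    uint64_t base_gfn;       /* first guest frame number in the slot */
    uint64_t npages;         /* number of 4 KiB pages */
    uint64_t userspace_addr; /* hva backing the first page */
};

/* Resolve a gpa to an hva by scanning the slot list; returns 0 on a
 * miss.  Two slots may map distinct gpas to the same hva (aliasing),
 * but a single gpa resolves through at most one slot. */
static uint64_t gpa_to_hva(const struct memslot *slots, size_t n, uint64_t gpa)
{
    uint64_t gfn = gpa >> 12;

    for (size_t i = 0; i < n; i++) {
        if (gfn >= slots[i].base_gfn &&
            gfn < slots[i].base_gfn + slots[i].npages)
            return slots[i].userspace_addr +
                   ((gfn - slots[i].base_gfn) << 12) + (gpa & 0xfff);
    }
    return 0;
}
```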
108 - writes to control registers (especially cr3)
109 - invlpg/invlpga instruction execution
110 - access to missing or protected translations
114 - changes in the gpa->hpa translation (either through gpa->hva changes or
115 through hva->hpa changes)
116 - memory pressure (the shrinker)
128 A leaf spte corresponds to either one or two translations encoded into
133 The following table shows translations encoded by leaf ptes, with higher-level
136 Non-nested guests::
138 nonpaging: gpa->hpa
139 paging: gva->gpa->hpa
140 paging, tdp: (gva->)gpa->hpa
144 non-tdp: ngva->gpa->hpa (*)
145 tdp: (ngva->)ngpa->gpa->hpa
147 (*) the guest hypervisor will encode the ngva->gpa translation into its page
157 host pages, and gpa->hpa translations when NPT or EPT is active.
164 When role.has_4_byte_gpte=1, the guest uses 32-bit gptes while the host uses 64-bit
167 For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
168 first or second 512-gpte block in the guest page table. For second-level
169 page tables, each 32-bit gpte is converted to two 64-bit sptes
170 (since each first-level guest page is shadowed by two first-level
182 if direct map or 64-bit gptes are in use, '1' if 32-bit gptes are in use.
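The quadrant arithmetic described above can be sketched in a few lines of C: with 32-bit gptes a guest page table holds 1024 entries, while a page of 64-bit sptes holds only 512, so each guest page is shadowed by two shadow pages and role.quadrant selects which half a shadow page covers. The function names here are illustrative, not kernel symbols:

```c
#include <assert.h>

/* role.quadrant for a first-level shadow page: which 512-gpte block of
 * the 1024-entry guest page table this shadow page covers. */
static unsigned quadrant_of(unsigned gpte_index)
{
    return gpte_index >> 9;   /* 0 for entries 0..511, 1 for 512..1023 */
}

/* Index of the corresponding spte within the chosen shadow page. */
static unsigned spte_index_of(unsigned gpte_index)
{
    return gpte_index & 511;
}
```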
209 points to one. This is set if NPT uses 5-level page tables (host
210 CR4.LA57=1) and is shadowing L1's 4-level NPT (L1 CR4.LA57=0).
213 VM without blocking vCPUs too long. Specifically, KVM updates the per-VM
216 used. Therefore, vCPUs must load a new, valid root before re-entering the
218 use this field as non-root TDP MMU pages are reachable only from their
225 A pageful of 64-bit sptes containing the translations for this page.
227 The page pointed to by spt will have its page->private pointing back
229 sptes in spt point either at guest pages, or at lower-level shadow pages.
230 Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
231 at __pa(sp2->spt). sp2 will point back at sp1 through parent_pte.
247 non-zero. See role.invalid. tdp_mmu_root_count is similar but exclusively
272 Only present on 32-bit hosts, where a 64-bit spte cannot be written
274 to detect in-progress updates and retry them until the writer has
278 emulations if the page needs to be write-protected (see "Synchronized
281 possible for non-leafs. This field counts the number of emulations
299 The guest uses two events to synchronize its tlb and page tables: tlb flushes
322 - guest page fault (or npt page fault, or ept violation)
326 - a true guest fault (the guest translation won't allow the access) (*)
327 - access to a missing translation
328 - access to a protected translation
329 - when logging dirty pages, memory is write protected
330 - synchronized shadow pages are write protected (*)
331 - access to untranslatable memory (mmio)
337 - if the RSV bit of the error code is set, the page fault is caused by guest
340 - walk shadow page table
341 - check for valid generation number in the spte (see "Fast invalidation of
343 - cache the information to vcpu->arch.mmio_gva, vcpu->arch.mmio_access and
344 vcpu->arch.mmio_gfn, and call the emulator
346 - If both P bit and R/W bit of error code are set, this could possibly
350 - if needed, walk the guest page tables to determine the guest translation
351 (gva->gpa or ngpa->gpa)
353 - if permissions are insufficient, reflect the fault back to the guest
355 - determine the host page
357 - if this is an mmio request, there is no host page; cache the info to
358 vcpu->arch.mmio_gva, vcpu->arch.mmio_access and vcpu->arch.mmio_gfn
360 - walk the shadow page table to find the spte for the translation,
363 - If this is an mmio request, cache the mmio info to the spte and set some
366 - try to unsynchronize the page
368 - if successful, we can let the guest continue and modify the gpte
370 - emulate the instruction
372 - if failed, unshadow the page and let the guest continue
374 - update any translations that were modified by the instruction
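The dispatch at the top of the fault walkthrough can be sketched in plain C. The `PFERR_*` bit positions follow the x86 page-fault error code (P is bit 0, R/W bit 1, RSV bit 3), but `classify_fault` and the path names are illustrative only, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define PFERR_PRESENT (1u << 0) /* P: fault on a present translation */
#define PFERR_WRITE   (1u << 1) /* R/W: fault was a write */
#define PFERR_RSVD    (1u << 3) /* RSV: reserved bits set in a pte */

enum fault_path { PATH_FAST_MMIO, PATH_MAYBE_WRITE_PROTECTED, PATH_SLOW };

/* RSV set means the fault hit an mmio spte, so take the fast mmio path;
 * P and W both set may be a write to a write-protected (synchronized)
 * shadow page; everything else goes down the slow path. */
static enum fault_path classify_fault(uint32_t error_code)
{
    if (error_code & PFERR_RSVD)
        return PATH_FAST_MMIO;
    if ((error_code & (PFERR_PRESENT | PFERR_WRITE)) ==
        (PFERR_PRESENT | PFERR_WRITE))
        return PATH_MAYBE_WRITE_PROTECTED;
    return PATH_SLOW;
}
```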
378 - walk the shadow page hierarchy and drop affected translations
379 - try to reinstantiate the indicated translation in the hope that the
384 - mov to cr3
386 - look up new shadow roots
387 - synchronize newly reachable shadow pages
389 - mov to cr0/cr4/efer
391 - set up mmu context for new paging mode
392 - look up new shadow roots
393 - synchronize newly reachable shadow pages
397 - mmu notifier called with updated hva
398 - look up affected sptes through reverse map
399 - drop (or update) translations
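The notifier steps above can be sketched with a toy reverse map: for each gfn, a list of the sptes that translate it, so an event on the backing hva can find and zap every affected translation. This is purely illustrative; KVM's rmap is a per-memslot structure with a packed encoding:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NGFNS  8   /* toy guest size, in pages */
#define NSPTES 16  /* toy spte pool */

static uint64_t sptes[NSPTES];         /* the shadow page table entries */
static int rmap[NGFNS][NSPTES];        /* per-gfn list of spte indices */
static size_t rmap_len[NGFNS];

static void rmap_add(uint64_t gfn, int spte_idx)
{
    rmap[gfn][rmap_len[gfn]++] = spte_idx;
}

/* mmu-notifier path: the backing hva of this gfn changed, so walk the
 * reverse map and drop every spte translating it; returns the count. */
static size_t drop_translations(uint64_t gfn)
{
    size_t dropped = rmap_len[gfn];

    for (size_t i = 0; i < dropped; i++)
        sptes[rmap[gfn][i]] = 0;   /* zap: clear the translation */
    rmap_len[gfn] = 0;
    return dropped;
}
```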
410 We handle this by mapping the permissions to two possible sptes, depending
413 - kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
415 - read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
420 In the first case there are two additional complications:
422 - if CR4.SMEP is enabled: since we've turned the page into a kernel page,
427 - if CR4.SMAP is disabled: since the page has been changed to a kernel
436 with one value of cr0.wp cannot be used when cr0.wp has a different value -
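The two spte shapes for the cr0.wp=0 case can be written out directly. A minimal sketch, assuming an illustrative `struct spte_perms` rather than the real spte bit layout:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative permission bits of an spte (not the real layout). */
struct spte_perms { bool user; bool writable; };

/* With guest cr0.wp=0, supervisor writes must succeed even on pages the
 * gpte marks read-only, while user accesses must still honor gpte.w.
 * One spte cannot express both, so the mmu picks a shape per fault: */
static struct spte_perms spte_for_wp0_fault(bool kernel_write_fault)
{
    if (kernel_write_fault)
        /* spte.u=0, spte.w=1: full kernel access, no user access */
        return (struct spte_perms){ .user = false, .writable = true };
    /* read fault: spte.u=1, spte.w=0: full read access, no kernel writes */
    return (struct spte_perms){ .user = true, .writable = false };
}
```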
447 two separate 2M pages, on both guest and host, since the mmu always uses PAE
452 - the spte must point to a large host page
453 - the guest pte must be a large pte of at least equivalent size (if tdp is
455 - if the spte will be writeable, the large page frame may not overlap any
456 write-protected pages
457 - the guest page must be wholly contained by a single memory slot
459 To check the last two conditions, the mmu maintains a ->disallow_lpage set of
463 artificially inflated ->disallow_lpages so they can never be instantiated.
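The ->disallow_lpage bookkeeping can be sketched as a counter per potential large-page frame: write-protecting any small page inside a frame makes the count non-zero, which forbids mapping that frame with a large spte. Names and sizes here are illustrative, assuming 2 MiB frames of 512 small pages:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define LPAGE_SHIFT 9   /* 512 x 4 KiB small pages per 2 MiB frame */
#define NFRAMES     16  /* toy number of large-page frames */

static unsigned disallow_lpage[NFRAMES];

/* Write-protecting any small page poisons its containing large frame. */
static void account_write_protect(uint64_t gfn)
{
    disallow_lpage[gfn >> LPAGE_SHIFT]++;
}

static void unaccount_write_protect(uint64_t gfn)
{
    disallow_lpage[gfn >> LPAGE_SHIFT]--;
}

/* A large spte may only be instantiated while the count is zero; frames
 * straddling a memslot boundary would simply be given an artificially
 * inflated count so this check never passes. */
static bool can_map_lpage(uint64_t gfn)
{
    return disallow_lpage[gfn >> LPAGE_SHIFT] == 0;
}
```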
476 kvm_memslots(kvm)->generation, and increased whenever guest memory info
484 Since only 18 bits are used to store the generation number in an mmio spte, all
490 out-of-date information, but with an up-to-date generation number.
493 returns; thus, bit 63 of kvm_memslots(kvm)->generation is set to 1 only during a
496 this without losing a bit in the MMIO spte. The "update in-progress" bit of the
499 spte while an update is in progress, the next access to the spte will always be
501 miss due to the in-progress flag diverging, while an access after the update
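The generation check can be sketched by folding the low generation bits plus the update-in-progress flag into the value stored in an mmio spte. The bit positions below are illustrative, not KVM's actual packing; they only demonstrate why a spte cached mid-update can never match the final generation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define GEN_LOW_MASK    0x1ffffull    /* 17 low generation bits */
#define GEN_IN_PROGRESS (1ull << 63)  /* "memslot update in progress" */

/* Pack the bits that fit into an mmio spte: 17 low generation bits plus
 * the in-progress flag as the 18th stored bit. */
static uint64_t mmio_spte_gen(uint64_t memslots_gen)
{
    uint64_t flag = (memslots_gen & GEN_IN_PROGRESS) ? 1 : 0;

    return (memslots_gen & GEN_LOW_MASK) | (flag << 17);
}

/* A cached mmio spte is stale when its stored generation no longer
 * matches the current one.  Because the in-progress flag is folded in,
 * a spte created during an update always mismatches afterwards. */
static bool mmio_spte_is_stale(uint64_t spte_gen, uint64_t cur_gen)
{
    return spte_gen != mmio_spte_gen(cur_gen);
}
```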
508 - NPT presentation from KVM Forum 2008
509 https://www.linux-kvm.org/images/c/c8/KvmForum2008%24kdf2008_21.pdf