.. SPDX-License-Identifier: GPL-2.0

=================
KVM Lock Overview
=================

1. Acquisition Orders
---------------------

The acquisition orders for mutexes are as follows:

- kvm->lock is taken outside vcpu->mutex

- kvm->lock is taken outside kvm->slots_lock and kvm->irq_lock

- kvm->slots_lock is taken outside kvm->irq_lock, though acquiring
  them together is quite rare.

On x86:

- vcpu->mutex is taken outside kvm->arch.hyperv.hv_lock

- kvm->arch.mmu_lock is an rwlock. kvm->arch.tdp_mmu_pages_lock is
  taken inside kvm->arch.mmu_lock, and cannot be taken without already
  holding kvm->arch.mmu_lock (typically with ``read_lock``, otherwise
  there's no need to take kvm->arch.tdp_mmu_pages_lock at all).

Everything else is a leaf: no other lock is taken inside the critical
sections.

2. Exception
------------

Fast page fault:

Fast page fault is the fast path which fixes the guest page fault out of
the mmu-lock on x86. Currently, the page fault can be fast in one of the
following two cases:

1. Access Tracking: The SPTE is not present, but it is marked for access
   tracking. That means we need to restore the saved R/X bits. This is
   described in more detail below.

2. Write-Protection: The SPTE is present and the fault is caused by
   write-protection. That means we just need to change the W bit of the
   spte.

What we use to avoid all these races are the Host-writable bit and the
MMU-writable bit on the spte:

- Host-writable means the gfn is writable in the host kernel page tables and
  in its KVM memslot.
- MMU-writable means the gfn is writable in the guest's mmu and it is not
  write-protected by shadow page write-protection.

On the fast page fault path, we will use cmpxchg to atomically set the spte W
bit if spte.HOST_WRITEABLE = 1 and spte.MMU_WRITEABLE = 1, to restore the
saved R/X bits for an access-tracked spte, or both. This is safe because any
concurrent change to these bits will be detected by the cmpxchg.

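To make this concrete, here is a minimal sketch of the lockless update,
assuming a simplified, hypothetical helper (the real logic lives in
fast_page_fault() and its helpers in KVM's x86 MMU code; this is not the
exact upstream implementation)::

	/*
	 * Illustrative only: new_spte is old_spte with the W bit set
	 * and/or the saved R/X bits restored.
	 */
	static bool fast_fix_spte(u64 *sptep, u64 old_spte, u64 new_spte)
	{
		/*
		 * The whole 64-bit spte is compared, so any concurrent
		 * change (the spte being zapped, repointed at another
		 * pfn, write-protected again, ...) makes the cmpxchg
		 * fail, and the fault falls back to the slow path that
		 * runs under mmu-lock.
		 */
		return cmpxchg64(sptep, old_spte, new_spte) == old_spte;
	}
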
But we need to carefully check these cases:

1) The mapping from gfn to pfn

The mapping from gfn to pfn may change, since we can only ensure that the
pfn is not changed during the cmpxchg. This is an ABA problem; for example,
the following case can happen:

+--------------------------------------------------------------------------+
| At the beginning::                                                       |
|                                                                          |
|     gpte = gfn1                                                          |
|     gfn1 is mapped to pfn1 on host                                       |
|     spte is the shadow page table entry corresponding with gpte and      |
|     spte = pfn1                                                          |
+--------------------------------------------------------------------------+
| On fast page fault path:                                                 |
+-------------------------------------+------------------------------------+
| CPU 0:                              | CPU 1:                             |
+-------------------------------------+------------------------------------+
| ::                                  |                                    |
|                                     |                                    |
|   old_spte = *spte;                 |                                    |
+-------------------------------------+------------------------------------+
|                                     | pfn1 is swapped out::              |
|                                     |                                    |
|                                     |     spte = 0;                      |
|                                     |                                    |
|                                     | pfn1 is re-allocated for gfn2.     |
|                                     |                                    |
|                                     | gpte is changed to point to        |
|                                     | gfn2 by the guest::                |
|                                     |                                    |
|                                     |     spte = pfn1;                   |
+-------------------------------------+------------------------------------+
| ::                                                                       |
|                                                                          |
|   if (cmpxchg(spte, old_spte, old_spte+W))                               |
|       mark_page_dirty(vcpu->kvm, gfn1)                                   |
|            OOPS!!!                                                       |
+--------------------------------------------------------------------------+

We dirty-log for gfn1, which means that gfn2 is lost in the dirty bitmap.

For a direct sp, we can easily avoid this, since the spte of a direct sp is
fixed to its gfn. For an indirect sp, we have disabled fast page fault for
simplicity.

A solution for indirect sp could be to pin the gfn, for example via
kvm_vcpu_gfn_to_pfn_atomic, before the cmpxchg. After the pinning:

- We have held the refcount of the pfn; that means the pfn cannot be freed
  and reused for another gfn.
- The pfn is writable and therefore it cannot be shared between different
  gfns by KSM.

Then, we can ensure the dirty bitmap is correctly set for a gfn.

2) Dirty bit tracking

In the original code, the spte could be fast-updated (non-atomically) if the
spte was read-only and the Accessed bit had already been set, since neither
the Accessed bit nor the Dirty bit can be lost in that case.

But this no longer holds with fast page fault, since the spte can be marked
writable between reading and updating it, as in the following case:

+--------------------------------------------------------------------------+
| At the beginning::                                                       |
|                                                                          |
|     spte.W = 0                                                           |
|     spte.Accessed = 1                                                    |
+-------------------------------------+------------------------------------+
| CPU 0:                              | CPU 1:                             |
+-------------------------------------+------------------------------------+
| In mmu_spte_clear_track_bits()::    |                                    |
|                                     |                                    |
|   old_spte = *spte;                 |                                    |
|                                     |                                    |
|   /* 'if' condition is satisfied. */|                                    |
|   if (old_spte.Accessed == 1 &&     |                                    |
|       old_spte.W == 0)              |                                    |
|       spte = 0ull;                  |                                    |
+-------------------------------------+------------------------------------+
|                                     | on fast page fault path::          |
|                                     |                                    |
|                                     |     spte.W = 1                     |
|                                     |                                    |
|                                     | memory write on the spte::         |
|                                     |                                    |
|                                     |     spte.Dirty = 1                 |
+-------------------------------------+------------------------------------+
| ::                                  |                                    |
|                                     |                                    |
|   else                              |                                    |
|     old_spte = xchg(spte, 0ull);    |                                    |
|   if (old_spte.Accessed == 1)       |                                    |
|     kvm_set_pfn_accessed(spte.pfn); |                                    |
|   if (old_spte.Dirty == 1)          |                                    |
|     kvm_set_pfn_dirty(spte.pfn);    |                                    |
|       OOPS!!!                       |                                    |
+-------------------------------------+------------------------------------+

The Dirty bit is lost in this case.

In order to avoid this kind of issue, we always treat the spte as "volatile"
if it can be updated out of mmu-lock (see spte_has_volatile_bits()); that
means the spte is always updated atomically in this case.

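As a sketch of what "always updated atomically" means in practice, the
clearing side can look like the following (modelled on
mmu_spte_clear_track_bits(); simplified, with a hypothetical helper name,
not the exact upstream code)::

	static u64 clear_spte(u64 *sptep)
	{
		u64 old_spte = *sptep;

		if (!spte_has_volatile_bits(old_spte))
			/* No lockless writer can race with us here. */
			WRITE_ONCE(*sptep, 0ull);
		else
			/*
			 * The fast page fault path may set W, and hardware
			 * may set Dirty, at any time: fetch the final value
			 * with an atomic exchange instead of trusting the
			 * earlier plain read.
			 */
			old_spte = xchg(sptep, 0ull);

		/* Accessed/Dirty are propagated by the caller. */
		return old_spte;
	}
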
3) Flushing TLBs due to spte updates

If the spte is updated from writable to read-only, we should flush all TLBs,
otherwise rmap_write_protect will find a read-only spte, even though the
writable spte might still be cached in a CPU's TLB.

As mentioned before, the spte can be updated to writable out of mmu-lock on
the fast page fault path. In order to easily audit the path, we check whether
TLBs need to be flushed for this reason in mmu_spte_update(), since this is
the common function for updating the spte (present -> present).

Since the spte is "volatile" if it can be updated out of mmu-lock, we always
atomically update the spte, and the race caused by fast page fault can be
avoided. See the comments in spte_has_volatile_bits() and mmu_spte_update().

Lockless Access Tracking:

This is used for Intel CPUs that use EPT but do not support the EPT A/D
bits. In this case, PTEs are tagged as A/D disabled (using ignored bits), and
when the KVM MMU notifier is called to track accesses to a page (via
kvm_mmu_notifier_clear_flush_young), it marks the PTE not-present in hardware
by clearing the RWX bits in the PTE and storing the original R & X bits in
more unused/ignored bits. When the VM tries to access the page later on, a
fault is generated and the fast page fault mechanism described above is used
to atomically restore the PTE to a Present state. The W bit is not saved when
the PTE is marked for access tracking, and during restoration to the Present
state, the W bit is set depending on whether or not it was a write access. If
it wasn't, then the W bit will remain clear until a write access happens, at
which time it will be set using the Dirty tracking mechanism described above.

3. Reference
------------

:Name: kvm_lock
:Type: mutex
:Arch: any
:Protects: - vm_list

:Name: kvm_count_lock
:Type: raw_spinlock_t
:Arch: any
:Protects: - hardware virtualization enable/disable
:Comment: 'raw' because hardware enabling/disabling must be atomic with
          respect to migration.

:Name: kvm_arch::tsc_write_lock
:Type: raw_spinlock_t
:Arch: x86
:Protects: - kvm_arch::{last_tsc_write,last_tsc_nsec,last_tsc_offset}
           - tsc offset in vmcb
:Comment: 'raw' because updating the tsc offsets must not be preempted.

:Name: kvm->mmu_lock
:Type: spinlock_t or rwlock_t
:Arch: any
:Protects: - shadow page/shadow tlb entry
:Comment: it is a spinlock since it is used in mmu notifier.

:Name: kvm->srcu
:Type: srcu lock
:Arch: any
:Protects: - kvm->memslots
           - kvm->buses
:Comment: The srcu read lock must be held while accessing memslots (e.g.
          when using gfn_to_* functions) and while accessing in-kernel
          MMIO/PIO address->device structure mapping (kvm->buses).
          The srcu index can be stored in kvm_vcpu->srcu_idx per vcpu
          if it is needed by multiple functions.

:Name: blocked_vcpu_on_cpu_lock
:Type: spinlock_t
:Arch: x86
:Protects: blocked_vcpu_on_cpu
:Comment: This is a per-CPU lock and it is used for VT-d posted-interrupts.
          When VT-d posted-interrupts are supported and the VM has assigned
          devices, we put the blocked vCPU on the blocked_vcpu_on_cpu list,
          protected by blocked_vcpu_on_cpu_lock. When VT-d hardware issues
          a wakeup notification event (because an external interrupt from
          an assigned device has arrived), we find the vCPU on the list and
          wake it up.

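As an illustration of the kvm->srcu entry above, the read side follows the
usual SRCU pattern. A minimal sketch (the surrounding context, and gfn, are
assumed)::

	struct kvm_memory_slot *slot;
	int idx;

	idx = srcu_read_lock(&kvm->srcu);
	/*
	 * The memslot array is published RCU-style and is only freed
	 * after an SRCU grace period, so the pointer obtained here
	 * stays valid until srcu_read_unlock().
	 */
	slot = gfn_to_memslot(kvm, gfn);
	/* ... use the memslot, e.g. to translate gfn to an hva ... */
	srcu_read_unlock(&kvm->srcu, idx);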