
Hyper-V can create and run Linux guests that are Confidential Computing

the confidentiality and integrity of data in the VM's memory, even in the
face of a hypervisor/VMM that has been compromised and may behave maliciously.
CoCo VMs on Hyper-V share the generic CoCo VM threat model and security

A Linux CoCo VM on Hyper-V requires the cooperation and interaction of the

SEV, or SEV-ES encryption, and such encryption is not sufficient for a CoCo

created and cannot be changed during the life of the VM.

enlightened to understand and manage all aspects of running as a CoCo VM.
* Paravisor mode. In this mode, a paravisor layer between the guest and the

Conceptually, fully-enlightened mode and paravisor mode may be treated as

aspects of running as a CoCo VM are handled by the paravisor, and a normal

does not go this far, and is somewhere in the middle of the spectrum. Some

paravisor, and there is no standardized mechanism for a guest OS to query the

than is currently envisioned for Coconut, and so is further toward the "no
and must be trusted by the guest OS. By implication, the hypervisor/VMM must

VMPL 0 and has full control of the guest context. In paravisor mode, the
guest OS runs in VMPL 2 and the paravisor runs in VMPL 0. The paravisor

L1 VM, and the guest OS runs in a nested L2 VM.

MSR indicates if the underlying processor uses AMD SEV-SNP or Intel TDX, and

kernel image that can boot and run properly on either architecture, and in
paravisor runs first and sets up the guest physical memory as encrypted. The

CoCo VM to route #VC and #VE exceptions to VMPL 0 and the L1 VM,
respectively, and not the guest Linux. Consequently, these exception handlers
do not run in the guest Linux and are not a required enlightenment for a

* CPUID flags. Both AMD SEV-SNP and Intel TDX provide a CPUID flag in the

the paravisor filters out these flags and the guest Linux does not see them.
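The CPUID filtering described here can be modeled in a few lines of userspace C. This is an illustrative sketch, not the paravisor's actual code: the function name is invented, and the bit positions are taken (as an assumption) from AMD's CPUID leaf 0x8000001F EAX layout, where bit 1 reports SEV, bit 3 SEV-ES, and bit 4 SEV-SNP.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of paravisor CPUID filtering: in paravisor mode the
 * guest OS is not supposed to see the hardware isolation feature bits.
 * Bit positions follow AMD CPUID leaf 0x8000001F EAX (an assumption for
 * this sketch; TDX reports its flag elsewhere). */
#define SEV_BIT     (1u << 1)
#define SEV_ES_BIT  (1u << 3)
#define SEV_SNP_BIT (1u << 4)

/* What a paravisor conceptually does before reflecting CPUID results to
 * the guest: mask out the bits that would make the guest think it must
 * handle the isolation hardware itself. Invented name. */
static uint32_t paravisor_filter_cpuid_eax(uint32_t hw_eax)
{
    return hw_eax & ~(SEV_BIT | SEV_ES_BIT | SEV_SNP_BIT);
}
```

Because the flags are filtered, the guest's generic SEV/TDX code paths stay disabled and the paravisor alone reacts to the hardware.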
abstracting the differences between SEV-SNP and TDX. But the

emulation of devices such as the IO-APIC and TPM. Because the emulation

memory between encrypted and decrypted requires coordinating with the

__set_memory_enc_pgtable(). In fully-enlightened mode, the normal SEV-SNP and

that the paravisor can coordinate the transitions and inform the hypervisor

interrupt injection into the guest OS, and ensures that the guest OS only

hypervisor. But the paravisor is idiosyncratic in this regard, and a few

hypervisor. These hypercall sites test for a paravisor being present, and use
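As a conceptual illustration of such call sites, the sketch below (all names invented, not the kernel's) selects a hypercall route at runtime depending on whether a paravisor is present:

```c
#include <assert.h>
#include <stdbool.h>

/* Conceptual sketch of a hypercall site that must reach the paravisor
 * rather than the real hypervisor when one is present. The enum and
 * function names are illustrative only. */
enum hcall_route { ROUTE_HYPERVISOR, ROUTE_PARAVISOR };

static enum hcall_route pick_hypercall_route(bool paravisor_present)
{
    /* Most hypercalls go straight to the hypervisor; the few idiosyncratic
     * sites branch to the paravisor instead when it is running. */
    return paravisor_present ? ROUTE_PARAVISOR : ROUTE_HYPERVISOR;
}
```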
CoCo VMs, Hyper-V has VMBus and VMBus devices that communicate using memory
shared between the Linux guest and the host. This shared memory must be

includes a compromised and potentially malicious host, the guest must guard

These Hyper-V and VMBus memory pages are marked as decrypted:

* Per-cpu hypercall input and output pages (unless running with a paravisor)

VMBus ring buffer, the length of the message is validated, and the message is
copied into a temporary (encrypted) buffer for further validation and
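The copy-then-validate pattern for messages arriving in host-shared memory can be sketched in self-contained C. This is a model, not the actual VMBus code; MAX_MSG_LEN and the struct layout are assumptions for illustration. The point is that the shared (decrypted) copy can be rewritten by the untrusted host at any moment, so parsing must only ever happen on the private (encrypted) snapshot.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_MSG_LEN 64  /* illustrative cap on a single message */

/* Private (conceptually encrypted) snapshot of a shared-memory message. */
struct msg_copy {
    uint32_t len;
    uint8_t  data[MAX_MSG_LEN];
};

/* Snapshot a message out of host-shared memory. Returns 0 on success,
 * -1 if the advertised length is bogus. The shared buffer is volatile
 * because the host can change it concurrently. */
static int snapshot_message(const volatile uint8_t *shared, uint32_t shared_len,
                            struct msg_copy *priv)
{
    if (shared_len == 0 || shared_len > MAX_MSG_LEN)
        return -1;                 /* reject before touching the data */
    priv->len = shared_len;        /* latch the length privately */
    for (uint32_t i = 0; i < shared_len; i++)
        priv->data[i] = shared[i]; /* one read per shared byte */
    return 0;
}
```

All further validation and parsing then operates on `priv` only, closing the time-of-check-to-time-of-use window against a malicious host.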
CoCo VM have not been hardened, and they are not allowed to load in a CoCo

storvsc for disk I/O and netvsc for network I/O. storvsc uses the normal
Linux kernel DMA APIs, and so bounce buffering through decrypted swiotlb

mode goes through send and receive buffer space that is explicitly allocated
by the netvsc driver, and is used for most smaller packets. These send and

equivalent of bounce buffering between encrypted and decrypted memory is

DMA APIs, and is bounce buffered through swiotlb memory implicitly like in
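The swiotlb-style bounce buffering mentioned here amounts to two copies through a shared staging area. A minimal sketch, with invented names, assuming a single static buffer standing in for the decrypted swiotlb region (the real swiotlb manages many slots and is driven implicitly by the DMA API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BOUNCE_SZ 256

/* Stand-in for decrypted swiotlb memory that the device may access. */
static uint8_t bounce[BOUNCE_SZ];

/* "DMA to device": stage private (encrypted) data into the bounce buffer,
 * since the device cannot read encrypted guest memory. */
static size_t bounce_map_to_device(const uint8_t *priv, size_t len)
{
    if (len > BOUNCE_SZ)
        len = BOUNCE_SZ;
    memcpy(bounce, priv, len);
    return len;
}

/* "DMA from device": copy what the device wrote in the bounce buffer
 * back into private (encrypted) memory. */
static size_t bounce_unmap_from_device(uint8_t *priv, size_t len)
{
    if (len > BOUNCE_SZ)
        len = BOUNCE_SZ;
    memcpy(priv, bounce, len);
    return len;
}
```

The extra copy in each direction is the price of keeping only a small, dedicated region of memory decrypted rather than the I/O buffers themselves.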
space, and the access traps to Hyper-V for emulation. But in CoCo VMs, memory

_hv_pcifront_read_config() and _hv_pcifront_write_config() and the
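Conceptually, hypercall-mediated config space access replaces a trapped MMIO window with an explicit request to the host. The userspace model below only echoes, and does not reproduce, _hv_pcifront_read_config() and _hv_pcifront_write_config(); the array stands in for the host's emulated config space.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CFG_SPACE_SZ 256
static uint8_t host_cfg[CFG_SPACE_SZ];  /* models host-emulated config space */

/* Stand-in for a hypercall-based config read: the "host" copies the bytes
 * out on request; the guest never maps the config window directly. */
static int model_cfg_read(unsigned int off, void *val, unsigned int size)
{
    if (size == 0 || size > 4 || off >= CFG_SPACE_SZ ||
        size > CFG_SPACE_SZ - off)
        return -1;
    memcpy(val, &host_cfg[off], size);
    return 0;
}

/* Stand-in for a hypercall-based config write. */
static int model_cfg_write(unsigned int off, const void *val, unsigned int size)
{
    if (size == 0 || size > 4 || off >= CFG_SPACE_SZ ||
        size > CFG_SPACE_SZ - off)
        return -1;
    memcpy(&host_cfg[off], val, size);
    return 0;
}
```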
the untrusted host partition and the untrusted hypervisor. Instead, the guest

sensitive data. The hardware (SNP or TDX) encrypts the guest memory and the

processor to ensure trusted and confidential computing.

and the paravisor, ensuring that sensitive data is protected from
hypervisor-level access through memory encryption and register state isolation.

such VSPs does not need to be decrypted and thereby exposed to the

The data is transferred directly between the VM and a vPCI device (a.k.a.

and that supports encrypted memory. In such a case, neither the host partition

sensitive data, and the paravisor abstracts the details of communicating

provides bounce-buffering, and although the data is not encrypted, the backing
and the Confidential VMBus connection::

* https://openvmm.dev/, and

and of each VMBus channel that is created. When a Confidential VMBus

path that is used for VMBus device creation and deletion, and it provides a

indicates if the device uses encrypted ring buffers, and if the device uses
ring buffer only, or both encrypted ring buffer and external data. If a channel

and the VTL0 guest. However, other memory regions are often used for e.g. DMA,
so they need to be accessible by the underlying hardware, and must be

and DMA transfers, the guest must interact with two SynICs -- the one provided
by the paravisor and the one provided by the Hyper-V host when Confidential

but the guest must check for messages and for channel interrupts on both SynICs.
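The "check both SynICs" rule can be modeled by merging two pending-channel sets before servicing them. This sketch assumes pending channels are tracked one bit per channel in a 64-bit word, which is an illustration only (the real SynIC event-flags layout differs):

```c
#include <assert.h>
#include <stdint.h>

/* Merge the pending-channel sets of the paravisor SynIC and the host
 * SynIC: an interrupt may arrive on either one, so neither set alone is
 * sufficient. Representation is an assumption for this sketch. */
static uint64_t pending_channels(uint64_t paravisor_synic_pending,
                                 uint64_t host_synic_pending)
{
    return paravisor_synic_pending | host_synic_pending;
}

/* Service helper: return the lowest pending channel bit and clear it,
 * or -1 when nothing is pending. */
static int next_pending_channel(uint64_t *pending)
{
    if (*pending == 0)
        return -1;
    int bit = __builtin_ctzll(*pending);  /* GCC/Clang builtin */
    *pending &= *pending - 1;             /* clear the serviced bit */
    return bit;
}
```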
intercepted by the paravisor (this includes various MSRs such as the SIMP and
SIEFP, as well as hypercalls like HvPostMessage and HvSignalEvent). If the

kind: with confidential VMBus, messages use the paravisor SynIC, and if the

(non-confidential, using the VMBus relay) and use the hypervisor SynIC, and
some on the paravisor and use its SynIC. The RelIDs are coordinated by the
OpenHCL VMBus server and are guaranteed to be unique regardless of whether

When transitioning memory between encrypted and decrypted, the caller of

the memory isn't in use and isn't referenced while the transition is in
progress. The transition has multiple steps, and includes interaction with
the guest kernel, and in such a case, the load_unaligned_zeropad() fixup code

normal page fault is generated instead of #VC or #VE, and the page-fault-

again. See hv_vtom_clear_present() and hv_vtom_set_host_visibility().
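The ordering constraint behind hv_vtom_clear_present() and hv_vtom_set_host_visibility(), namely that a page must be made not-present before its host visibility changes so that a stray access (such as one from load_unaligned_zeropad()) takes an ordinary page fault rather than a #VC or #VE, can be modeled as a tiny state machine. The function bodies below are illustrative, not the kernel's:

```c
#include <assert.h>

/* Model of the two-phase visibility change: the page must pass through a
 * not-present state between its encrypted and decrypted states, so that
 * any concurrent stray access faults in a recoverable way. */
enum page_state { PRESENT_ENCRYPTED, NOT_PRESENT, PRESENT_DECRYPTED };

struct model_page { enum page_state state; };

/* Phase 1: make the page not-present (models hv_vtom_clear_present()). */
static int model_clear_present(struct model_page *p)
{
    if (p->state != PRESENT_ENCRYPTED)
        return -1;
    p->state = NOT_PRESENT;
    return 0;
}

/* Phase 2: flip visibility and restore the mapping (models
 * hv_vtom_set_host_visibility()). Refuses to run unless phase 1 ran. */
static int model_set_host_visibility(struct model_page *p)
{
    if (p->state != NOT_PRESENT)
        return -1;
    p->state = PRESENT_DECRYPTED;
    return 0;
}
```

Encoding the intermediate state explicitly makes the invariant checkable: visibility can never change while the page is still mapped present.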