.. SPDX-License-Identifier: GPL-2.0

PCI pass-thru devices
=====================
In a Hyper-V guest VM, PCI pass-thru devices (also called
virtual PCI devices, or vPCI devices) are physical PCI devices
that are mapped directly into the VM's physical address space.
Guest device drivers can interact directly with the hardware
without intermediation by the host hypervisor. This approach
provides higher bandwidth access to the device with lower
latency, compared with devices that are virtualized by the
hypervisor. The device should appear to the guest just as it
would when running on bare metal, so no changes are required
to the Linux device drivers for the device.

Hyper-V terminology for vPCI devices is "Discrete Device
Assignment" (DDA). Public documentation for Hyper-V DDA is
available here: `DDA`_

.. _DDA: https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devi…

DDA is typically used for storage controllers, such as NVMe
devices, and for GPUs. A similar mechanism for NICs is called
SR-IOV and produces the same benefits by allowing a guest
device driver to interact directly with the hardware. See
Hyper-V public documentation here: `SR-IOV`_

.. _SR-IOV: https://learn.microsoft.com/en-us/windows-hardware/drivers/network/overview-of-single-r…

This discussion of vPCI devices includes DDA and SR-IOV
devices.

Device Presentation
-------------------
Hyper-V provides full PCI functionality for a vPCI device when
it is operating, so the Linux device driver for the device can
be used unchanged, provided it uses the correct Linux kernel
APIs for accessing PCI config space and for other integration
with Linux. But the initial detection of the PCI device and
its integration with the Linux PCI subsystem must use Hyper-V
specific mechanisms. Consequently, vPCI devices on Hyper-V
have a dual identity. They are initially presented to Linux
guests as VMBus devices via the standard VMBus "offer"
mechanism, so they have a VMBus identity and appear under
/sys/bus/vmbus/devices. The VMBus vPCI driver in Linux at
drivers/pci/controller/pci-hyperv.c handles a newly introduced
vPCI device by fabricating a PCI bus topology and creating all
the normal PCI device data structures in Linux that would
exist if the PCI device were discovered via ACPI on a bare-
metal system. Once those data structures are created, the
device also has a normal PCI identity in Linux, and the normal
Linux device driver for the device can function as if it
were running in Linux on bare-metal. Because vPCI devices are
presented dynamically through the VMBus offer mechanism, they
do not appear in the Linux guest's ACPI tables, and they may
be added to or removed from the VM at any time during the life
of the VM, not just at initial boot.

With this approach, the vPCI device is a VMBus device and a
PCI device at the same time. In response to the VMBus offer
message, the hv_pci_probe() function runs and establishes a
VMBus connection to the vPCI VSP on the Hyper-V host. That
connection has a single VMBus channel. The channel is used to
exchange messages with the vPCI VSP for the purpose of setting
up and configuring the vPCI device in Linux. Once the device
is fully configured in Linux as a PCI device, the VMBus
channel is used only if Linux changes the vCPU to be
interrupted in the guest, or if the vPCI device is removed
from the VM while the VM is running. The ongoing operation of
the device happens directly between the Linux device driver
for the device and the hardware, with VMBus and the VMBus
channel playing no role.
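
This dual identity is visible from user space: the same pass-thru
device appears under /sys/bus/vmbus/devices (by its VMBus instance
GUID) and, once setup is complete, under /sys/bus/pci/devices (by
its fabricated domain/bus/device/function). The sketch below is not
part of the driver; it simply walks the standard VMBus sysfs layout
and prints each offer's class GUID, which is one way to pick out a
vPCI offer. The class_id attribute is part of the documented VMBus
sysfs ABI; everything else here is illustrative::

  /* Illustrative only: list VMBus offers and their class GUIDs so a
   * vPCI (PCI pass-thru) offer can be identified by inspection.
   */
  #include <dirent.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          const char *base = "/sys/bus/vmbus/devices";
          struct dirent *de;
          DIR *dir = opendir(base);

          if (!dir) {
                  perror(base);
                  return 1;
          }
          while ((de = readdir(dir)) != NULL) {
                  char path[512], class_id[64] = "";
                  FILE *f;

                  if (de->d_name[0] == '.')
                          continue;
                  snprintf(path, sizeof(path), "%s/%s/class_id",
                           base, de->d_name);
                  f = fopen(path, "r");
                  if (!f)
                          continue;
                  if (fgets(class_id, sizeof(class_id), f))
                          class_id[strcspn(class_id, "\n")] = '\0';
                  fclose(f);
                  printf("%s  class_id=%s\n", de->d_name, class_id);
          }
          closedir(dir);
          return 0;
  }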

PCI Device Setup
----------------
PCI device setup follows a sequence that Hyper-V originally
created for Windows guests, and that can be ill-suited for
Linux guests due to differences in the overall structure of
the Linux PCI subsystem compared with Windows. Nonetheless,
with a bit of hackery in the Hyper-V virtual PCI driver for
Linux, the virtual PCI device is set up in Linux so that
generic Linux PCI subsystem code and the Linux driver for the
device "just work".

Each vPCI device is set up in Linux to be in its own PCI
domain with a host bridge. The PCI domainID is derived from
bytes 4 and 5 of the instance GUID assigned to the VMBus vPCI
device. The Hyper-V host does not guarantee that these bytes
are unique, so hv_pci_probe() has an algorithm to resolve
collisions. The collision resolution is intended to be stable
across reboots of the same VM so that the PCI domainIDs don't
change, as the domainID appears in the user space
configuration of some devices.
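
As a rough illustration of the domainID derivation described above,
a 16-bit candidate can be formed from two bytes of the instance
GUID. The byte order and helper name below are assumptions made for
illustration only; the real derivation and collision resolution live
in hv_pci_probe()::

  /* Hypothetical sketch: form a candidate PCI domainID from bytes 4
   * and 5 of a VMBus instance GUID.  The real collision-resolution
   * algorithm in hv_pci_probe() is not reproduced here.
   */
  #include <stdint.h>
  #include <stdio.h>

  static uint16_t candidate_domain(const uint8_t guid[16])
  {
          /* Assumed byte order, for illustration only. */
          return (uint16_t)guid[5] << 8 | guid[4];
  }

  int main(void)
  {
          const uint8_t example_guid[16] = {
                  0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
                  0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88,
          };

          printf("candidate PCI domain: 0x%04x\n",
                 candidate_domain(example_guid));
          return 0;
  }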

hv_pci_probe() allocates a guest MMIO range to be used as PCI
config space for the device. This MMIO range is communicated
to the Hyper-V host over the VMBus channel as part of telling
the host that the device is ready to enter d0. See
hv_pci_enter_d0(). When the guest subsequently accesses this
MMIO range, the Hyper-V host intercepts the accesses and maps
them to the physical device PCI config space.

hv_pci_probe() also gets BAR information for the device from
the Hyper-V host, and uses this information to allocate MMIO
space for the BARs. That MMIO space is then set up to be
associated with the host bridge so that it works when generic
PCI subsystem code in Linux processes the BARs.

Finally, hv_pci_probe() creates the root PCI bus. At this
point the Hyper-V virtual PCI driver hackery is done, and the
normal Linux PCI machinery for scanning the root bus works to
detect the device, to perform driver matching, and to
initialize the driver and device.
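
Once the root bus is created and scanned, an ordinary PCI driver
binds to the vPCI device exactly as it would on bare metal. The
skeleton below is a generic PCI driver outline with made-up vendor
and device IDs; it contains nothing Hyper-V specific and is shown
only to emphasize that point::

  /* Generic PCI driver skeleton; nothing Hyper-V specific.  The ID
   * table values are made-up examples.
   */
  #include <linux/module.h>
  #include <linux/pci.h>

  static const struct pci_device_id example_ids[] = {
          { PCI_DEVICE(0x1234, 0x5678) },   /* example IDs only */
          { }
  };
  MODULE_DEVICE_TABLE(pci, example_ids);

  static int example_probe(struct pci_dev *pdev,
                           const struct pci_device_id *id)
  {
          return pcim_enable_device(pdev);  /* managed enable */
  }

  static struct pci_driver example_driver = {
          .name     = "example-vpci-consumer",
          .id_table = example_ids,
          .probe    = example_probe,
  };
  module_pci_driver(example_driver);

  MODULE_LICENSE("GPL");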

PCI Device Removal
------------------
A Hyper-V host may initiate removal of a vPCI device from a
guest VM at any time during the life of the VM. The removal
is instigated by an admin action taken on the Hyper-V host and
is not under the control of the guest OS.

A guest VM is notified of the removal by an unsolicited
"Eject" message sent from the host to the guest over the VMBus
channel associated with the vPCI device. Upon receipt of such
a message, the Hyper-V virtual PCI driver in Linux
asynchronously invokes Linux kernel PCI subsystem calls to
shut down and remove the device. When those calls are
complete, an "Ejection Complete" message is sent back to
Hyper-V over the VMBus channel indicating that the device has
been removed. At this point, Hyper-V sends a VMBus rescind
message, which the VMBus driver in Linux processes by removing
the VMBus identity for the device. The rescind message also
indicates to the guest that Hyper-V has stopped providing
support for the vPCI device in the guest.

After sending the Eject message, Hyper-V allows the guest VM
60 seconds to cleanly shut down the device and respond with
Ejection Complete before sending the VMBus rescind message.
If the Eject steps don't complete within the allowed 60
seconds, the Hyper-V host forcibly performs the rescind steps,
which will likely result in cascading errors in the guest
because the device is no longer present from the guest's
standpoint.

Because ejection is asynchronous and can happen at any point
during the guest VM lifecycle, proper synchronization in the
Hyper-V virtual PCI driver is very tricky. Ejection has been
observed even before a newly offered vPCI device has been
fully setup. The Hyper-V virtual PCI driver has been updated
several times over the years to fix race conditions when
ejections happen at inopportune times. Care must be taken when
modifying this code to prevent re-introducing such problems.
See comments in the code.
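
The host side of the eject handshake is essentially "wait for an
acknowledgment, but only for so long". The userspace sketch below
mimics that 60-second grace period with a condition variable and a
timed wait; it is not driver code, and the thread and flag names are
invented::

  /* Illustrative only: wait up to 60 seconds for an "Ejection
   * Complete" style acknowledgment, then give up and proceed, as the
   * host does.  This is not Hyper-V driver code.
   */
  #include <errno.h>
  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
  static bool eject_complete;

  static void *guest_side(void *arg)
  {
          (void)arg;
          sleep(2);               /* pretend PCI remove work takes 2s */
          pthread_mutex_lock(&lock);
          eject_complete = true;
          pthread_cond_signal(&cond);
          pthread_mutex_unlock(&lock);
          return NULL;
  }

  int main(void)
  {
          struct timespec deadline;
          pthread_t t;
          int err = 0;

          clock_gettime(CLOCK_REALTIME, &deadline);
          deadline.tv_sec += 60;  /* the 60-second grace period */

          pthread_create(&t, NULL, guest_side, NULL);

          pthread_mutex_lock(&lock);
          while (!eject_complete && err != ETIMEDOUT)
                  err = pthread_cond_timedwait(&cond, &lock, &deadline);
          pthread_mutex_unlock(&lock);

          puts(eject_complete ?
               "Ejection Complete received; send rescind" :
               "timed out; forcibly rescind the device");

          pthread_join(t, NULL);
          return 0;
  }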

Interrupt Assignment
--------------------
The Hyper-V virtual PCI driver supports vPCI devices using
MSI, multi-MSI, or MSI-X. Assigning the guest vCPU that will
receive the interrupt for a particular MSI or MSI-X message is
complex because of the way the Linux setup of IRQs maps onto
the Hyper-V interfaces. For the single-MSI and MSI-X cases,
Linux calls hv_compose_msi_msg() twice, with the first call
containing a dummy vCPU and the second call containing the
real vCPU. Furthermore, hv_irq_unmask() is finally called
(on x86) or the GICD registers are set (on arm64) to specify
the real vCPU again. Each of these calls interacts
with Hyper-V, which must decide which physical CPU should
receive the interrupt before it is forwarded to the guest VM.
Unfortunately, the Hyper-V decision-making process is a bit
limited, and can result in concentrating the physical
interrupts on a single CPU, causing a performance bottleneck.
See the extensive comment above the function
hv_compose_msi_req_get_cpu() for details on how this is
resolved.

The Hyper-V virtual PCI driver implements the
irq_chip.irq_compose_msi_msg function as hv_compose_msi_msg().
Unfortunately, on Hyper-V the implementation requires sending
a VMBus message to the Hyper-V host and awaiting an interrupt
indicating receipt of a reply message. Since
irq_chip.irq_compose_msi_msg can be called with IRQ locks
held, it doesn't work to do the normal sleep until awakened by
the interrupt. Instead hv_compose_msi_msg() must send the
VMBus message, and then poll for the completion message. As
further complexity, the vPCI device could be ejected or
rescinded while the polling is in progress, so this scenario
must be detected as well. See comments in the code regarding
this very tricky area.
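
The resulting pattern is "send the request, then spin on a
completion flag, while also watching for the device to disappear".
The sketch below models that pattern with C11 atomics in ordinary
userspace code; the flag and function names are invented, and the
real state handling in pci-hyperv.c is considerably more involved::

  /* Illustrative only: send a request, then poll for completion
   * instead of sleeping, while also checking whether the device has
   * been rescinded.
   */
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  static atomic_bool reply_received;
  static atomic_bool device_rescinded;

  static void send_vmbus_request(void)
  {
          /* Stand-in for sending the compose-MSI message; pretend
           * the reply arrives immediately. */
          atomic_store(&reply_received, true);
  }

  static bool wait_for_reply_polling(void)
  {
          send_vmbus_request();

          for (;;) {
                  if (atomic_load(&device_rescinded))
                          return false;   /* device went away */
                  if (atomic_load(&reply_received))
                          return true;    /* host answered */
                  /* The real driver also drains pending host
                   * messages here so the reply can actually arrive. */
          }
  }

  int main(void)
  {
          printf("reply %s\n",
                 wait_for_reply_polling() ? "received" : "lost to rescind");
          return 0;
  }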

Most of the code in the Hyper-V virtual PCI driver (pci-
hyperv.c) applies to Hyper-V and Linux guests running on x86
and on arm64 architectures. But there are differences in how
interrupt assignments are managed. On x86, the Hyper-V
virtual PCI driver in the guest must make a hypercall to tell
Hyper-V which guest vCPU should be interrupted by each
MSI/MSI-X interrupt, and the x86 interrupt vector number that
the x86_vector IRQ domain has picked for the interrupt. This
hypercall is made by hv_arch_irq_unmask(). On arm64, the
Hyper-V virtual PCI driver manages the allocation of an SPI
for each MSI/MSI-X interrupt. The Hyper-V virtual PCI driver
stores the allocated SPI in the architectural GICD registers,
which Hyper-V emulates, so no hypercall is necessary as with
x86. Hyper-V does not support using LPIs for vPCI devices in
arm64 guests, as it does not emulate a GICv3 ITS.
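
Schematically, the architectural split might be pictured as below.
The function name is invented and the bodies are empty placeholders;
this is only meant to show where the x86 and arm64 paths diverge::

  /* Schematic only: the x86 path must tell the hypervisor explicitly
   * which vCPU and vector to use (a hypercall in the real driver,
   * made from hv_arch_irq_unmask()), while the arm64 path writes the
   * allocated SPI into the emulated GICD registers.
   */
  #if defined(__x86_64__) || defined(__i386__)
  static void example_retarget(unsigned int vcpu, unsigned int vector)
  {
          /* hypercall conveying both the vCPU and the vector */
  }
  #elif defined(__aarch64__)
  static void example_retarget(unsigned int spi, unsigned int unused)
  {
          /* write the SPI to GICD registers; no hypercall needed */
  }
  #endif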

The Hyper-V virtual PCI driver in Linux supports vPCI devices
whose drivers create managed or unmanaged Linux IRQs. If the
smp_affinity for an unmanaged IRQ is updated via the /proc/irq
interface, the Hyper-V virtual PCI driver is called to tell
the Hyper-V host to change the interrupt targeting and
everything works properly. However, on x86 if the x86_vector
IRQ domain needs to reassign an interrupt vector due to
running out of vectors on a CPU, there's no path to inform the
Hyper-V host of the change, and things break. Fortunately,
guest VMs operate in a constrained device environment where
using all the vectors on a CPU doesn't happen. Since such a
problem is only a theoretical concern rather than a practical
one, it has been left unaddressed.
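
For an unmanaged IRQ, the retargeting path described above is
exercised simply by writing a new CPU mask to
/proc/irq/<irq>/smp_affinity. In the sketch below the IRQ number and
mask are arbitrary examples::

  /* Illustrative only: move an unmanaged IRQ to CPU 2 by writing a
   * CPU mask to /proc/irq/<n>/smp_affinity.  On a Hyper-V guest with
   * a vPCI device this ends up asking the host to retarget the
   * interrupt.  IRQ 24 is an arbitrary example.
   */
  #include <stdio.h>

  int main(void)
  {
          const char *path = "/proc/irq/24/smp_affinity";
          FILE *f = fopen(path, "w");

          if (!f) {
                  perror(path);
                  return 1;
          }
          fputs("4\n", f);        /* bitmask 0x4 == CPU 2 */
          return fclose(f) ? 1 : 0;
  }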

DMA
---
By default, Hyper-V pins all guest VM memory in the host
when the VM is created, and programs the physical IOMMU to
allow the VM to have DMA access to all its memory. Hence
it is safe to assign PCI devices to the VM, and to allow the
guest operating system to program the DMA transfers. The
physical IOMMU prevents a malicious guest from initiating
DMA to memory belonging to the host or to other VMs on the
host. From the Linux guest standpoint, such DMA transfers
are in "direct" mode since Hyper-V does not provide a virtual
IOMMU in the guest.

Hyper-V assumes that physical PCI devices always perform
cache-coherent DMA. When running on x86, this behavior is
required by the architecture. When running on arm64, the
architecture allows for both cache-coherent and
non-cache-coherent devices, with the behavior of each device
specified in the ACPI DSDT. But a vPCI device assigned to a
guest VM does not appear in the guest's DSDT, so the
Hyper-V VMBus driver propagates cache-coherency information
from the VMBus node in the ACPI DSDT to all VMBus devices,
including vPCI devices with their dual identity.
Current Hyper-V versions always indicate that the VMBus is
cache coherent, so vPCI devices on arm64 always get marked as
cache coherent, and the CPU does not perform any sync
operations as part of dma_map/unmap_*() calls.
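
From a device driver's point of view none of this is visible: the
driver uses the ordinary DMA API, the mapping resolves to direct
mode, and on arm64 no cache maintenance is performed because the
device is marked coherent. The fragment below shows only that
ordinary usage; the function is a hypothetical example, not Hyper-V
specific code::

  /* Ordinary DMA API usage in a PCI driver; nothing Hyper-V specific
   * is needed.  'buf' must be DMA-able memory, e.g. from kmalloc().
   */
  #include <linux/dma-mapping.h>
  #include <linux/pci.h>

  static int example_send_buffer(struct pci_dev *pdev, void *buf,
                                 size_t len)
  {
          dma_addr_t dma;

          dma = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
          if (dma_mapping_error(&pdev->dev, dma))
                  return -ENOMEM;

          /* ... program the device to DMA from 'dma' and wait ... */

          dma_unmap_single(&pdev->dev, dma, len, DMA_TO_DEVICE);
          return 0;
  }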

vPCI protocol versions
----------------------
As previously described, during vPCI device setup and teardown,
messages are passed over a VMBus channel between the Hyper-V
host and the Hyper-V vPCI driver in the Linux guest. Some
messages have been revised in newer versions of Hyper-V, so
the guest and host must agree on the vPCI protocol version to
be used. The version is negotiated when communication over
the VMBus channel is first established. See
hv_pci_protocol_negotiation(). Newer versions of the protocol
extend support to VMs with more than 64 vCPUs, and provide
additional information about the vPCI device, such as the
guest virtual NUMA node to which it is most closely affined in
the underlying hardware.
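
The negotiation follows the familiar pattern of proposing versions
newest-first and accepting the first one the other side supports.
The sketch below shows only the pattern; the version numbers and the
host_supports() helper are made up, and the real logic is in
hv_pci_protocol_negotiation()::

  /* Illustrative only: negotiate the newest mutually supported
   * protocol version by proposing versions newest-first.
   */
  #include <stdbool.h>
  #include <stdio.h>

  static const unsigned int guest_versions[] = {
          0x10004, 0x10003, 0x10002, 0x10001,
  };

  static bool host_supports(unsigned int version)
  {
          return version <= 0x10002;      /* pretend host is older */
  }

  int main(void)
  {
          size_t i, n = sizeof(guest_versions) / sizeof(guest_versions[0]);

          for (i = 0; i < n; i++) {
                  if (host_supports(guest_versions[i])) {
                          printf("negotiated version 0x%x\n",
                                 guest_versions[i]);
                          return 0;
                  }
          }
          fprintf(stderr, "no common protocol version\n");
          return 1;
  }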

Guest NUMA node affinity
------------------------
When the negotiated protocol version provides it, the guest NUMA
node affinity of the vPCI device is stored as part of the Linux
device information for subsequent use by the Linux driver. See
hv_pci_assign_numa_node().

PCI config space access in a CoCo VM
------------------------------------
Linux PCI device drivers access PCI config space using a
standard set of functions provided by the Linux PCI subsystem.
In Hyper-V guests these standard functions map to functions
hv_pcifront_read_config() and hv_pcifront_write_config()
in the Hyper-V virtual PCI driver. In normal VMs,
these hv_pcifront_*() functions directly access the PCI config
space, and the accesses trap to Hyper-V to be handled.
But in CoCo VMs, memory encryption prevents Hyper-V
from reading the guest instruction stream to emulate the
access, so the hv_pcifront_*() functions must invoke
hypercalls with explicit arguments describing the access to be
made.
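
Drivers themselves do not change: they keep using the generic
config space accessors, and whether the access is completed via
trapped MMIO or an explicit hypercall is hidden inside the Hyper-V
virtual PCI driver. For example, an ordinary read of the vendor and
device IDs (the function here is a hypothetical example)::

  /* Ordinary config space access from a PCI driver.  On Hyper-V this
   * is routed through hv_pcifront_read_config(); in a CoCo VM it
   * becomes an explicit hypercall, but the driver code is unchanged.
   */
  #include <linux/pci.h>

  static void example_read_ids(struct pci_dev *pdev)
  {
          u16 vendor, device;

          pci_read_config_word(pdev, PCI_VENDOR_ID, &vendor);
          pci_read_config_word(pdev, PCI_DEVICE_ID, &device);
          dev_info(&pdev->dev, "vendor 0x%04x device 0x%04x\n",
                   vendor, device);
  }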

Config Block back-channel
-------------------------
The Hyper-V host and Hyper-V virtual PCI driver in Linux
together implement a non-standard back-channel communication
path between the host and guest. The back-channel path uses
messages sent over the VMBus channel associated with the vPCI
device. The functions hyperv_read_cfg_blk() and
hyperv_write_cfg_blk() are the primary interfaces provided to
other parts of the Linux kernel. As of this writing, these
interfaces are used only by the Mellanox mlx5 driver to pass
diagnostic data to a Hyper-V host running in the Azure public
cloud. The functions are implemented in a separate module
(pci-hyperv-intf.c, under CONFIG_PCI_HYPERV_INTERFACE) that
effectively stubs them out when running in non-Hyper-V
environments.
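
A kernel-side caller would use the two interface functions roughly
as sketched below. The prototypes should be checked against
include/linux/hyperv.h; the block ID and buffer size are arbitrary
examples, and the function shown is hypothetical::

  /* Hedged sketch of using the config block back-channel from
   * another kernel module.  Verify the prototypes in
   * include/linux/hyperv.h before relying on this.
   */
  #include <linux/hyperv.h>
  #include <linux/pci.h>

  static int example_read_diag_block(struct pci_dev *pdev)
  {
          u8 buf[128];
          unsigned int bytes_returned = 0;
          int ret;

          ret = hyperv_read_cfg_blk(pdev, buf, sizeof(buf),
                                    1 /* example block ID */,
                                    &bytes_returned);
          if (ret)
                  return ret;

          dev_info(&pdev->dev, "config block returned %u bytes\n",
                   bytes_returned);
          return 0;
  }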