945d880d | 20-Dec-2023 | Andrew Jones <ajones@ventanamicro.com>
RISC-V: KVM: selftests: Add guest_sbi_probe_extension
Add guest_sbi_probe_extension(), allowing guest code to probe for SBI extensions. As guest_sbi_probe_extension() needs SBI_ERR_NOT_SUPPORTED, take the opportunity to bring in all SBI error codes. We don't bring in all current extension IDs or base extension function IDs though, even though we need one of each, because we'd prefer to bring those in as necessary.
Reviewed-by: Anup Patel <anup@brainfault.org>
Reviewed-by: Atish Patra <atishp@rivosinc.com>
Signed-off-by: Andrew Jones <ajones@ventanamicro.com>
Signed-off-by: Anup Patel <anup@brainfault.org>
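[Editor's note] As a rough illustration of the probe described above (a sketch, not the verbatim selftests code), probing an extension boils down to a single SBI base-extension ecall; sbi_ecall(), struct sbiret, SBI_EXT_BASE, SBI_EXT_BASE_PROBE_EXT, and GUEST_ASSERT() are assumed to come from the selftests' RISC-V and ucall headers:

	/*
	 * Hedged sketch: ask the SBI base extension whether "extid" is
	 * implemented.  Any error other than SBI_ERR_NOT_SUPPORTED means
	 * the SBI implementation itself is broken.
	 */
	static bool guest_sbi_probe_extension(int extid, long *out_val)
	{
		struct sbiret ret;

		ret = sbi_ecall(SBI_EXT_BASE, SBI_EXT_BASE_PROBE_EXT, extid,
				0, 0, 0, 0, 0);

		GUEST_ASSERT(!ret.error || ret.error == SBI_ERR_NOT_SUPPORTED);

		if (ret.error == SBI_ERR_NOT_SUPPORTED)
			return false;

		if (out_val)
			*out_val = ret.value;
		return true;
	}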
426729b2 | 06-Oct-2022 | Peter Gonda <pgonda@google.com>
KVM: selftests: Add ucall pool based implementation
To play nice with guests whose stack memory is encrypted, e.g. AMD SEV, introduce a new "ucall pool" implementation that passes the ucall struct via dedicated memory (which can be mapped shared, a.k.a. as plain text).
Because not all architectures have access to the vCPU index in the guest, use a bitmap with atomic accesses to track which entries in the pool are free/used. A list+lock could also work in theory, but synchronizing the individual pointers to the guest would be a mess.
Note, there's no need to rewalk the bitmap to ensure success. If all vCPUs are simply allocating, success is guaranteed because there are enough entries for all vCPUs. If one or more vCPUs are freeing and then reallocating, success is guaranteed because vCPUs _always_ walk the bitmap from 0=>N; if a vCPU frees an entry and then wins a race to re-allocate, then either it will consume the entry it just freed (its bit is the first free bit), or the losing vCPU is guaranteed to see the freed bit (the winner consumes an earlier bit, which the loser hasn't yet visited).
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Peter Gonda <pgonda@google.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221006003409.649993-8-seanjc@google.com
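[Editor's note] A minimal sketch of the pool-allocation scheme described above, assuming a pool sized for KVM_MAX_VCPUS and the usual atomic bitops helpers (DECLARE_BITMAP(), test_and_set_bit(), clear_bit()); the identifiers ucall_header, ucall_alloc(), and ucall_free() are illustrative and not necessarily the exact selftests names:

	/*
	 * Illustrative sketch: a shared header with one in-use bit and one
	 * struct ucall slot per possible vCPU.  Guests claim a free slot
	 * with an atomic test-and-set, walking the bitmap from bit 0
	 * upward, and release it by clearing the bit.
	 */
	struct ucall_header {
		DECLARE_BITMAP(in_use, KVM_MAX_VCPUS);
		struct ucall ucalls[KVM_MAX_VCPUS];
	};

	static struct ucall_header *ucall_pool;	/* mapped shared with the host */

	static struct ucall *ucall_alloc(void)
	{
		int i;

		for (i = 0; i < KVM_MAX_VCPUS; i++) {
			/* Atomically claim the first free slot (walk 0 => N). */
			if (!test_and_set_bit(i, ucall_pool->in_use))
				return &ucall_pool->ucalls[i];
		}
		GUEST_ASSERT(0);	/* enough slots exist for every vCPU */
		return NULL;
	}

	static void ucall_free(struct ucall *uc)
	{
		/* Pointer arithmetic recovers the slot's index in the pool. */
		clear_bit(uc - ucall_pool->ucalls, ucall_pool->in_use);
	}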
28a65567 | 06-Oct-2022 | Sean Christopherson <seanjc@google.com>
KVM: selftests: Drop now-unnecessary ucall_uninit()
Drop ucall_uninit() and ucall_arch_uninit() now that ARM doesn't modify the host's copy of ucall_exit_mmio_addr, i.e. now that there's no need to reset the pointer before potentially creating a new VM. The few calls to ucall_uninit() are all immediately followed by kvm_vm_free(), and that is likely always going to hold true, i.e. it's extremely unlikely a test will want to effectively disable ucall in the middle of a test.
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Tested-by: Peter Gonda <pgonda@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221006003409.649993-7-seanjc@google.com
dc88244b | 06-Oct-2022 | Sean Christopherson <seanjc@google.com>
KVM: selftests: Automatically do init_ucall() for non-barebones VMs
Do init_ucall() automatically during VM creation to kill two (three?) birds with one stone.
First, initializing ucall immediately after VM creation allows forcing aarch64's MMIO ucall address to immediately follow memslot0. This is still somewhat fragile as tests could clobber the MMIO address with a new memslot, but it's safe-ish since tests have to be conservative when accounting for memslot0. And this can be hardened in the future by creating a read-only memslot for the MMIO page (KVM ARM exits with MMIO if the guest writes to a read-only memslot). Add a TODO to document that selftests can and should use a memslot for the ucall MMIO (doing so requires yet more rework because tests assume they can use all memslots except memslot0).
Second, initializing ucall for all VMs prepares for making ucall initialization meaningful on all architectures. aarch64 is currently the only arch that needs to do any setup, but that will change in the future by switching to a pool-based implementation (instead of the current stack-based approach).
Lastly, defining the ucall MMIO address from common code will simplify switching all architectures (except s390) to a common MMIO-based ucall implementation (if there's ever sufficient motivation to do so).
Cc: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Tested-by: Peter Gonda <pgonda@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221006003409.649993-4-seanjc@google.com
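[Editor's note] Sketched below is roughly how the automatic initialization might look in common VM-creation code; memslot2region(), the region layout fields, and the ucall_init() signature taking a GPA are assumptions about the selftests' kvm_util internals, not a verbatim excerpt:

	/*
	 * Hedged sketch: after memslot0 is set up during common VM creation,
	 * hand ucall an MMIO address that immediately follows memslot0, so
	 * every non-barebones VM gets ucall support without per-test calls
	 * to ucall_init().
	 */
	static void vm_init_ucall(struct kvm_vm *vm)
	{
		struct userspace_mem_region *slot0 = memslot2region(vm, 0);

		/* First GPA past memslot0; assumed to be unused by the test. */
		ucall_init(vm, slot0->region.guest_phys_addr +
			       slot0->region.memory_size);
	}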
ef38871e | 06-Oct-2022 | Sean Christopherson <seanjc@google.com>
KVM: selftests: Consolidate boilerplate code in get_ucall()
Consolidate the actual copying of a ucall struct from guest=>host into the common get_ucall(). Return a host virtual address instead of a guest virtual address even though the addr_gva2hva() part could be moved to get_ucall() too. Conceptually, get_ucall() is invoked from the host and should return a host virtual address (and returning NULL for "nothing to see here" is far superior to returning 0).
Use pointer shenanigans instead of an unnecessary bounce buffer when the caller of get_ucall() provides a valid pointer.
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Tested-by: Peter Gonda <pgonda@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/r/20221006003409.649993-3-seanjc@google.com
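[Editor's note] The consolidated flow can be sketched roughly as follows; the contract that ucall_arch_get_ucall() returns a host virtual address (or NULL when the exit was not a ucall) is taken from the message above, while the surrounding helpers are assumed from the selftests library:

	/*
	 * Hedged sketch of the consolidated get_ucall(): the arch hook hands
	 * back a host virtual address (or NULL), and common code does the
	 * guest=>host copy exactly once.  When the caller supplies a struct,
	 * the copy lands directly in it; the stack-local stand-in is used
	 * only when the caller passes NULL, avoiding a separate bounce
	 * buffer.
	 */
	uint64_t get_ucall(struct kvm_vcpu *vcpu, struct ucall *uc)
	{
		struct ucall ucall;
		void *addr;

		if (!uc)
			uc = &ucall;

		addr = ucall_arch_get_ucall(vcpu);
		if (addr) {
			memcpy(uc, addr, sizeof(*uc));
			vcpu_run_complete_io(vcpu);
		} else {
			memset(uc, 0, sizeof(*uc));
		}

		return uc->cmd;
	}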
1446e331 | 17-Oct-2022 | Ricardo Koller <ricarkol@google.com>
KVM: selftests: Use the right memslot for code, page-tables, and data allocations
Now that kvm_vm allows specifying different memslots for code, page tables, and data, use the appropriate memslot when making allocations in common/library code. Change them accordingly:
- code (allocated by lib/elf) uses the CODE memslot
- stacks, exception tables, and other core data pages (like the TSS in x86) use the DATA memslot
- page tables and the PGD use the PT memslot
- test data (anything allocated with vm_vaddr_alloc()) uses the TEST_DATA memslot
No functional change intended. All allocators keep using memslot #0.
Cc: Sean Christopherson <seanjc@google.com>
Cc: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Andrew Jones <andrew.jones@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221017195834.2295901-10-ricarkol@google.com
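[Editor's note] For illustration, the split might look like the enum and allocator wrapper below; the exact identifiers (the MEM_REGION_* names and __vm_vaddr_alloc()) are assumptions for the sketch rather than quotations from the patch:

	/*
	 * Illustrative sketch of per-purpose memslots.  Each region type maps
	 * to a memslot chosen at VM-creation time; with the default layout
	 * all of them still resolve to memslot #0, so behavior is unchanged.
	 */
	enum kvm_mem_region_type {
		MEM_REGION_CODE,	/* lib/elf-loaded test code */
		MEM_REGION_DATA,	/* stacks, exception tables, TSS, ... */
		MEM_REGION_PT,		/* page tables and the PGD */
		MEM_REGION_TEST_DATA,	/* vm_vaddr_alloc() allocations */
		NR_MEM_REGIONS,
	};

	/* Test-data allocations default to the TEST_DATA memslot. */
	vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
	{
		return __vm_vaddr_alloc(vm, sz, vaddr_min, MEM_REGION_TEST_DATA);
	}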