a7549636 | 26-Jun-2025 |
Gerd Hoffmann <kraxel@redhat.com> |
x86/sev: Let sev_es_efi_map_ghcbs() map the CA pages too
OVMF EFI firmware needs access to the CA page to do SVSM protocol calls. For example, when the SVSM implements an EFI variable store, such calls will be necessary.
So add the CA page mapping to sev_es_efi_map_ghcbs() and rename the function to reflect the additional job it is doing now.
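A minimal sketch of the extended helper follows. The name sev_es_efi_map_ghcbs_cas() matches the rename described above; the per-CPU variable names (runtime_data, svsm_caa_pa) and the exact page flags are assumptions for illustration, not necessarily the patch as merged:

  int __init sev_es_efi_map_ghcbs_cas(pgd_t *pgd)
  {
  	unsigned long address, pflags = _PAGE_NX | _PAGE_RW;
  	u64 pfn;
  	int cpu;

  	for_each_possible_cpu(cpu) {
  		/* the per-vCPU GHCB page, as before */
  		address = __pa(&per_cpu(runtime_data, cpu)->ghcb_page);
  		pfn = address >> PAGE_SHIFT;
  		if (kernel_map_pages_in_pgd(pgd, pfn, address, 1, pflags))
  			return 1;

  		/* new: the per-vCPU SVSM Calling Area (CA) page, if any */
  		address = per_cpu(svsm_caa_pa, cpu);
  		if (!address)
  			continue;
  		pfn = address >> PAGE_SHIFT;
  		if (kernel_map_pages_in_pgd(pgd, pfn, address, 1, pflags))
  			return 1;
  	}

  	return 0;
  }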
[ bp: Massage. ]
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/20250626114014.373748-4-kraxel@redhat.com
|
7b22e043 | 26-Jun-2025 |
Gerd Hoffmann <kraxel@redhat.com> |
x86/sev/vc: Fix EFI runtime instruction emulation
In case efi_mm is active, use the userspace instruction decoder, which supports fetching instructions from active_mm. This is needed to make instruction emulation work for EFI runtime code, so that it can use CPUID and RDMSR.
EFI runtime code uses the CPUID instruction to gather information about the environment it is running in, such as SEV being enabled or not, and choose (if needed) the SEV code path for ioport access.
EFI runtime code uses the RDMSR instruction to get the location of the CAA page (see SVSM spec, section 4.2 - "Post Boot").
The big picture behind this is that the kernel needs to be able to properly handle #VC exceptions that come from EFI runtime services. Since EFI runtime services have a special page table mapping for the EFI virtual address space, the efi_mm context must be used when decoding instructions during #VC handling.
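A hedged sketch of the resulting decode-path selection; __vc_decode_user_insn() and __vc_decode_kern_insn() approximate the kernel's helper names and are assumptions here:

  static enum es_result vc_decode_insn(struct es_em_ctxt *ctxt)
  {
  	/*
  	 * EFI runtime services run on the special efi_mm page tables,
  	 * so instruction bytes must be fetched via the decoder that
  	 * honors active_mm, just like for userspace faults.
  	 */
  	if (user_mode(ctxt->regs) || current->active_mm == &efi_mm)
  		return __vc_decode_user_insn(ctxt);

  	return __vc_decode_kern_insn(ctxt);
  }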
[ bp: Massage. ]
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Link: https://lore.kernel.org/20250626114014.373748-2-kraxel@redhat.com
|
040ed574 | 11-Jun-2025 |
Alexey Kardashevskiy <aik@amd.com> |
x86/sev: Drop unnecessary parameter in snp_issue_guest_request()
Commit
3e385c0d6ce8 ("virt: sev-guest: Move SNP Guest Request data pages handling under snp_cmd_mutex")
moved @input from snp_msg_desc to snp_guest_req, which is passed to snp_issue_guest_request().
Drop the extra parameter.
No functional change intended.
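As a sketch, the signature change looks like this (parameter types inferred from the commit description, not copied from the patch):

  /* before: @input passed separately */
  static int snp_issue_guest_request(struct snp_guest_req *req,
  				     struct snp_req_data *input,
  				     struct snp_guest_request_ioctl *rio);

  /* after: @input is reachable through @req, so it is dropped */
  static int snp_issue_guest_request(struct snp_guest_req *req,
  				     struct snp_guest_request_ioctl *rio);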
Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Dionna Glaze <dionnaglaze@google.com>
Link: https://lore.kernel.org/20250611040842.2667262-5-aik@amd.com
|
7ffeb2fc | 11-Jun-2025 |
Alexey Kardashevskiy <aik@amd.com> |
x86/sev: Document requirement for linear mapping of guest request buffers
The Guest Request now supports three types of messages; the largest is the extended variant of MSG_REPORT_REQ: sizeof(snp_ext_report_req) == 112. These used to be allocated on the stack and were then moved to the SNP guest platform device (snp_guest_dev) for the reason explained in
db10cb9b5746 ("virt: sevguest: Fix passing a stack buffer as a scatterlist target"):
aesgcm_encrypt() and aesgcm_decrypt() are used for guest messages and might potentially use a crypto accelerator which requires DMA buffers to be in the linear mapping.
Add a comment, and warn and return an error when the buffers are not in the linear mapping.
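A minimal sketch of such a check, assuming the request and response buffers live in struct snp_guest_req as req_buf/resp_buf:

  /*
   * Buffers fed to aesgcm_encrypt()/aesgcm_decrypt() may end up in a
   * scatterlist for a crypto accelerator, so they must be in the
   * kernel linear mapping (field names assumed for illustration).
   */
  if (WARN_ON_ONCE(!virt_addr_valid(req->req_buf) ||
  		 !virt_addr_valid(req->resp_buf)))
  	return -EINVAL;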
Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Dionna Glaze <dionnaglaze@google.com>
Link: https://lore.kernel.org/20250611040842.2667262-4-aik@amd.com
|
d100016e | 11-Jun-2025 |
Alexey Kardashevskiy <aik@amd.com> |
x86/sev: Allocate request in TSC_INFO_REQ on stack
Allocate an 88-byte request structure on the stack and skip the needless kzalloc()/kfree().
While at it, correct the indentation.
No functional change intended.
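Illustrative diff (assuming struct snp_tsc_info_req is the 88-byte request in question):

  -	tsc_req = kzalloc(sizeof(*tsc_req), GFP_KERNEL);
  -	if (!tsc_req)
  -		return -ENOMEM;
  +	struct snp_tsc_info_req tsc_req = {};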
Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Dionna Glaze <dionnaglaze@google.com>
Link: https://lore.kernel.org/20250611040842.2667262-3-aik@amd.com
|
82b7f88f | 06-May-2025 |
Ashish Kalra <ashish.kalra@amd.com> |
x86/sev: Make sure pages are not skipped during kdump
When shared pages are being converted to private during kdump, additional checks are performed. They include handling the case of a GHCB page being contained within a huge page.
Currently, this check incorrectly causes a page just below the GHCB page to be skipped and left shared instead of being transitioned back to private during kdump preparation.
This skipped page causes a 0x404 #VC exception when it is accessed later while dumping guest memory for vmcore generation.
Correct the range to be checked for a GHCB contained in a huge page. Also, ensure that the skipped huge page containing the GHCB page is transitioned back to private by applying the correct address mask later, when changing GHCBs to private at the end of kdump preparation.
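A hedged sketch of the corrected range logic; page_level_size() and page_level_mask() are existing x86 helpers, while the surrounding variable names are assumptions:

  unsigned long vaddr_end = vaddr + page_level_size(level);

  /* Does this (possibly huge) mapping cover the GHCB page? */
  if (ghcb_vaddr >= vaddr && ghcb_vaddr < vaddr_end) {
  	/*
  	 * Skip the conversion for now, but record the address of the
  	 * whole mapping so that the entire huge page is flipped back
  	 * to private at the end of kdump preparation.
  	 */
  	skipped_vaddr = vaddr & page_level_mask(level);
  	return;
  }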
[ bp: Massage commit message. ]
Fixes: 3074152e56c9 ("x86/sev: Convert shared memory back to private on kexec")
Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Tested-by: Srikanth Aithal <sraithal@amd.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/20250506183529.289549-1-Ashish.Kalra@amd.com
|
3204877d | 27-Apr-2025 |
Xin Li (Intel) <xin@zytor.com> |
x86/msr: Convert __rdmsr() uses to native_rdmsrq() uses
__rdmsr() is the lowest-level MSR read API, with native_rdmsr() and native_rdmsrq() serving as higher-level wrappers around it:
  #define native_rdmsr(msr, val1, val2)			\
  do {							\
  	u64 __val = __rdmsr((msr));			\
  	(void)((val1) = (u32)__val);			\
  	(void)((val2) = (u32)(__val >> 32));		\
  } while (0)

  static __always_inline u64 native_rdmsrq(u32 msr)
  {
  	return __rdmsr(msr);
  }
However, __rdmsr() continues to be utilized in various locations.
MSR APIs are designed for different scenarios, such as native or pvops, with or without trace, and safe or non-safe. Unfortunately, the current MSR API names do not adequately reflect these factors, making it challenging to select the most appropriate API for various situations.
To pave the way for improving MSR API names, convert __rdmsr() uses to native_rdmsrq() to ensure consistent usage. Later, these APIs can be renamed to better reflect their implications, such as native or pvops, with or without trace, and safe or non-safe.
No functional change intended.
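An illustrative conversion; MSR_AMD64_SEV_ES_GHCB is a real MSR constant, though whether this exact site is touched by the patch is an assumption:

  -	ghcb_pa = __rdmsr(MSR_AMD64_SEV_ES_GHCB);
  +	ghcb_pa = native_rdmsrq(MSR_AMD64_SEV_ES_GHCB);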
Signed-off-by: Xin Li (Intel) <xin@zytor.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Uros Bizjak <ubizjak@gmail.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Link: https://lore.kernel.org/r/20250427092027.1598740-10-xin@zytor.com
|
18ea89ea | 23-Apr-2025 |
Tom Lendacky <thomas.lendacky@amd.com> |
x86/sev: Share the sev_secrets_pa value again
The following commit breaks SNP guests:
234cf67fc3bd ("x86/sev: Split off startup code from core code")
The SNP guest boots, but no longer has access to the VMPCK keys needed to communicate with the ASP, which is used, for example, to obtain an attestation report.
The secrets_pa value is defined as static in both startup.c and core.c. It is set by a function in startup.c, so when it is used in core.c its value will be 0.
Share it again and add the sev_ prefix to put it into the global SEV symbols namespace.
[ mingo: Renamed to sev_secrets_pa ]
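A sketch of the sharing, with the exact file locations as assumptions:

  /* startup code (sketch); was: static u64 secrets_pa; */
  u64 sev_secrets_pa __ro_after_init;

  /* shared SEV header (sketch) */
  extern u64 sev_secrets_pa;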
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Link: https://lore.kernel.org/r/cf878810-81ed-3017-52c6-ce6aa41b5f01@amd.com
|
a3cbbb47 | 18-Apr-2025 |
Ard Biesheuvel <ardb@kernel.org> |
x86/boot: Move SEV startup code into startup/
Move the SEV startup code into arch/x86/boot/startup/, where it will reside along with other code that executes extremely early, and therefore needs to be built in a special manner.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/20250418141253.2601348-12-ardb+git@google.com
|
234cf67f | 18-Apr-2025 |
Ard Biesheuvel <ardb@kernel.org> |
x86/sev: Split off startup code from core code
Disentangle the SEV core code and the SEV code that is called during early boot. The latter piece will be moved into startup/ in a subsequent patch.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Woodhouse <dwmw@amazon.co.uk>
Cc: Dionna Amalie Glaze <dionnaglaze@google.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kevin Loughlin <kevinloughlin@google.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/20250418141253.2601348-11-ardb+git@google.com
|