Searched hist:"7bab16a6075b7b94999666355ab532c3dabb94f9" (Results 1 – 1 of 1) sorted by relevance
/linux/arch/arm64/kvm/hyp/nvhe/hyp.lds.S

diff 7bab16a6075b7b94999666355ab532c3dabb94f9 Fri Nov 13 16:04:06 CET 2020 Jamie Iles <jamie@nuviainc.com>

KVM: arm64: Correctly align nVHE percpu data
The nVHE percpu data is partially linked but the nVHE linker script did not align the percpu section. The PERCPU_INPUT macro would then align the data to a page boundary:
  #define PERCPU_INPUT(cacheline)			\
	__per_cpu_start = .;				\
	*(.data..percpu..first)				\
	. = ALIGN(PAGE_SIZE);				\
	*(.data..percpu..page_aligned)			\
	. = ALIGN(cacheline);				\
	*(.data..percpu..read_mostly)			\
	. = ALIGN(cacheline);				\
	*(.data..percpu)				\
	*(.data..percpu..shared_aligned)		\
	PERCPU_DECRYPTED_SECTION			\
	__per_cpu_end = .;
but then when the final vmlinux linking happens the hypervisor percpu data is included after page alignment and so the offsets potentially don't match. On my build I saw that the .hyp.data..percpu section was at address 0x20 and then the percpu data would begin at 0x1000 (because of the page alignment in PERCPU_INPUT), but when linked into vmlinux, everything would be shifted down by 0x20 bytes.
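Spelling out the arithmetic behind those build-specific numbers (an illustration only; the exact figures will differ from build to build):

	in the partially linked hyp object:
		.hyp.data..percpu starts at 0x20
		. = ALIGN(PAGE_SIZE) inside PERCPU_INPUT pads the cursor to 0x1000,
		so the percpu data sits 0x1000 - 0x20 = 0xfe0 bytes into the section

	in the final vmlinux link:
		the section contents are placed after page alignment, so the data
		begins 0xfe0 bytes past the page boundary rather than on it,
		i.e. every per-CPU offset is off by 0x20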
This manifests as one of the CPUs getting lost when running kvm-unit-tests or starting any VM, followed by a soft lockup, on a Cortex-A72 device.
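The remedy is to page-align the percpu output section in the nVHE linker script itself, so the partial link and the final vmlinux link see the same internal layout. A minimal sketch of that kind of change, assuming the SECTIONS layout hyp.lds.S had at the time (the exact directives and comments in the real patch may differ):

	SECTIONS {
		HYP_SECTION(.text)

		/*
		 * Align the output section itself to a page boundary so the
		 * ALIGN(PAGE_SIZE) inside PERCPU_INPUT yields the same offsets
		 * here as after the final vmlinux link.
		 */
		. = ALIGN(PAGE_SIZE);
		HYP_SECTION_NAME(.data..percpu) : {
			PERCPU_INPUT(L1_CACHE_BYTES)
		}
	}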
Fixes: 30c953911c43 ("kvm: arm64: Set up hyp percpu data for nVHE")
Signed-off-by: Jamie Iles <jamie@nuviainc.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: David Brazdil <dbrazdil@google.com>
Cc: David Brazdil <dbrazdil@google.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20201113150406.14314-1-jamie@nuviainc.com