Revision tags: v5.16-rc8, v5.16-rc7, v5.16-rc6 |
|
#
17580470 |
| 17-Dec-2021 |
Thomas Zimmermann <tzimmermann@suse.de> |
Merge drm/drm-next into drm-misc-next-fixes
Backmerging to bring drm-misc-next-fixes up to the latest state for the current release cycle.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
|
Revision tags: v5.16-rc5, v5.16-rc4, v5.16-rc3, v5.16-rc2, v5.16-rc1 |
|
#
1c3b9091 |
| 10-Nov-2021 |
Peter Zijlstra <peterz@infradead.org> |
x86/fpu: Remove .fixup usage
Employ EX_TYPE_EFAULT_REG to store '-EFAULT' into the %[err] register on exception. All the callers only ever test for 0, so the change from -1 to -EFAULT is immaterial.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20211110101325.604494664@infradead.org
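As an illustration of the pattern described above, here is a minimal sketch of an XSTATE_OP-style asm wrapper using the exception-table register mechanism instead of a .fixup jump target; the exact macro layout is an assumption for this example, not a quote of the patch.

    /*
     * Hedged sketch: on a fault in the instruction at label 1, an extable
     * entry of type EX_TYPE_EFAULT_REG loads -EFAULT into the register
     * bound to %[err]; the success path zeroes it, so callers can keep
     * testing for 0.
     */
    #define XSTATE_OP(op, st, lmask, hmask, err)                               \
            asm volatile("1:" op "\n\t"                                        \
                         "xor %[err], %[err]\n"                                \
                         "2:\n\t"                                              \
                         _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_EFAULT_REG, %[err]) \
                         : [err] "=a" (err)                                    \
                         : "D" (st), "m" (*st), "a" (lmask), "d" (hmask)       \
                         : "memory")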
|
#
86329873 |
| 09-Dec-2021 |
Daniel Lezcano <daniel.lezcano@linaro.org> |
Merge branch 'reset/of-get-optional-exclusive' of git://git.pengutronix.de/pza/linux into timers/drivers/next
"Add optional variant of of_reset_control_get_exclusive(). If the requested reset is not
Merge branch 'reset/of-get-optional-exclusive' of git://git.pengutronix.de/pza/linux into timers/drivers/next
"Add optional variant of of_reset_control_get_exclusive(). If the requested reset is not specified in the device tree, this function returns NULL instead of an error."
This dependency is needed for the Generic Timer Module (a.k.a OSTM) support for RZ/G2L.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
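A brief usage sketch of the optional variant (the ostm_probe_reset() wrapper and the bare device_node argument are illustrative assumptions, not code from the OSTM series):

    /* Hedged sketch: the optional variant returns NULL rather than an error
     * when the device tree node has no matching "resets" entry, and the
     * reset core treats a NULL reset_control as a no-op. */
    static int ostm_probe_reset(struct device_node *np)
    {
            struct reset_control *rstc;

            rstc = of_reset_control_get_optional_exclusive(np, NULL);
            if (IS_ERR(rstc))
                    return PTR_ERR(rstc);

            return reset_control_deassert(rstc);    /* no-op when rstc is NULL */
    }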
|
#
448cc2fb |
| 22-Nov-2021 |
Jani Nikula <jani.nikula@intel.com> |
Merge drm/drm-next into drm-intel-next
Sync up with drm-next to get v5.16-rc2.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
#
8626afb1 |
| 22-Nov-2021 |
Tvrtko Ursulin <tvrtko.ursulin@intel.com> |
Merge drm/drm-next into drm-intel-gt-next
Thomas needs the dma_resv_for_each_fence API for i915/ttm async migration work.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
|
#
50fc2494 |
| 18-Nov-2021 |
Jakub Kicinski <kuba@kernel.org> |
Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
#
a713ca23 |
| 18-Nov-2021 |
Thomas Zimmermann <tzimmermann@suse.de> |
Merge drm/drm-next into drm-misc-next
Backmerging from drm/drm-next for v5.16-rc1.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
|
#
467dd91e |
| 16-Nov-2021 |
Maxime Ripard <maxime@cerno.tech> |
Merge drm/drm-fixes into drm-misc-fixes
We need -rc1 to address a breakage in drm/scheduler affecting panfrost.
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
|
#
1654e95e |
| 14-Nov-2021 |
Linus Torvalds <torvalds@linux-foundation.org> |
Merge tag 'x86_urgent_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Borislav Petkov:
- Add the model number of a new, Raptor Lake CPU, to intel-family.h
- Do not log spurious corrected MCEs on SKL too, due to an erratum
- Clarify the path of paravirt ops patches upstream
- Add an optimization to avoid writing out AMX components to sigframes when the former are in their init state
* tag 'x86_urgent_for_v5.16_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/cpu: Add Raptor Lake to Intel family
  x86/mce: Add errata workaround for Skylake SKX37
  MAINTAINERS: Add some information to PARAVIRT_OPS entry
  x86/fpu: Optimize out sigframe xfeatures when in init state
|
#
7f9f8792 |
| 06-Nov-2021 |
Arnaldo Carvalho de Melo <acme@redhat.com> |
Merge remote-tracking branch 'torvalds/master' into perf/core
To pick up some tools/perf/ patches that went via tip/perf/core, such as:
tools/perf: Add mem_hops field in perf_mem_data_src structure
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
|
#
30d02551 |
| 02-Nov-2021 |
Dave Hansen <dave.hansen@linux.intel.com> |
x86/fpu: Optimize out sigframe xfeatures when in init state
tl;dr: AMX state is ~8k. Signal frames can have space for this ~8k and each signal entry writes out all 8k even if it is zeros. Skip writing zeros for AMX to speed up signal delivery by about 4% overall when AMX is in its init state.
This is a user-visible change to the sigframe ABI.
== Hardware XSAVE Background ==
XSAVE state components may be tracked by the processor as being in their initial configuration. Software can detect which features are in this configuration by looking at the XSTATE_BV field in an XSAVE buffer or with the XGETBV(1) instruction.
Both the XSAVE and XSAVEOPT instructions enumerate features as being in the initial configuration via the XSTATE_BV field in the XSAVE header. However, XSAVEOPT declines to actually write features in their initial configuration to the buffer. XSAVE writes the feature unconditionally, regardless of whether it is in the initial configuration or not.
Basically, XSAVE users never need to inspect XSTATE_BV to determine if the feature has been written to the buffer. XSAVEOPT users *do* need to inspect XSTATE_BV. They might also need to clear out the buffer if they want to make an isolated change to the state, like modifying one register.
== Software Signal / XSAVE Background ==
Signal frames have historically been written with XSAVE itself. Each state is written in its entirety, regardless of being in its initial configuration.
In other words, the signal frame ABI uses the XSAVE behavior, not the XSAVEOPT behavior.
== Problem ==
This means that any application which has acquired permission to use AMX via ARCH_REQ_XCOMP_PERM will write 8k of state to the signal frame. This 8k write will occur even when AMX was in its initial configuration and software *knows* this because of XSTATE_BV.
This problem also exists to a lesser degree with AVX-512 and its 2k of state. However, AVX-512 use does not require ARCH_REQ_XCOMP_PERM and is more likely to have existing users which would be impacted by any change in behavior.
== Solution ==
Stop writing out AMX xfeatures which are in their initial state to the signal frame. This effectively makes the signal frame XSAVE buffer look as if it were written with a combination of XSAVEOPT and XSAVE behavior. Userspace which handles XSAVEOPT-style buffers should be able to handle this naturally.
For now, include only the AMX xfeatures: XTILE and XTILEDATA in this new behavior. These require new ABI to use anyway, which makes their users very unlikely to be broken. This XSAVEOPT-like behavior should be expected for all future dynamic xfeatures. It may also be extended to legacy features like AVX-512 in the future.
Only attempt this optimization on systems with dynamic features. Disable dynamic feature support (XFD) if XGETBV1 is unavailable by adding a CPUID dependency.
This has been measured to reduce the *overall* cycle cost of signal delivery by about 4%.
Fixes: 2308ee57d93d ("x86/fpu/amx: Enable the AMX feature in 64-bit mode") Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: "Chang S. Bae" <chang.seok.bae@intel.com> Link: https://lore.kernel.org/r/20211102224750.FA412E26@davehans-spike.ostc.intel.com
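A rough illustration of the idea (not the kernel's implementation; xgetbv() and sigframe_mask() are made-up helpers for this sketch): XGETBV with ECX=1 reports which features are currently in use, and dynamic features reported as being in their init state can be dropped from the sigframe write mask.

    /* Hedged sketch: compute which xfeatures actually need to be written to
     * the signal frame, skipping dynamic features (e.g. AMX) that the CPU
     * reports as being in their init state via XINUSE (XGETBV(1)). */
    static inline unsigned long long xgetbv(unsigned int index)
    {
            unsigned int eax, edx;

            asm volatile("xgetbv" : "=a" (eax), "=d" (edx) : "c" (index));
            return eax | ((unsigned long long)edx << 32);
    }

    static unsigned long long sigframe_mask(unsigned long long requested,
                                            unsigned long long dynamic)
    {
            unsigned long long in_use = xgetbv(1);  /* XINUSE bitmap */

            /* Legacy features are still written unconditionally. */
            return requested & (~dynamic | in_use);
    }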
|
#
8cb1ae19 |
| 01-Nov-2021 |
Linus Torvalds <torvalds@linux-foundation.org> |
Merge tag 'x86-fpu-2021-11-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fpu updates from Thomas Gleixner:
- Cleanup of extable fixup handling to be more robust, which in turn allows the FPU exception fixups to be made more robust as well.
- Change the return code for signal frame related failures from explicit error codes to a boolean fail/success, as that's all the calling code evaluates.
- A large refactoring of the FPU code to prepare for adding AMX support:
- Disentangle the public header maze and in particular remove the misnamed kitchen-sink internal.h, which despite its name was included all over the place.
- Add a proper abstraction for the register buffer storage (struct fpstate) which allows the buffer to be sized dynamically at runtime, by flipping the pointer to the buffer container from the default container, which is embedded in task_struct::thread::fpu, to a dynamically allocated container with a larger register buffer.
- Convert the code over to the new fpstate mechanism.
- Consolidate the KVM FPU handling by moving the FPU related code into the FPU core, which reduces the number of exports and avoids adding even more exports when AMX has to be supported in KVM. This also removes duplicated code which was of course unnecessarily different and incomplete in the KVM copy.
- Simplify the KVM FPU buffer handling by utilizing the new fpstate container and just switching the buffer pointer from the user space buffer to the KVM guest buffer when entering vcpu_run() and flipping it back when leaving the function. This cuts the memory requirements of a vCPU for FPU buffers in half and avoids pointless memory copy operations.
This also solves the so far unresolved problem of adding AMX support because the current FPU buffer handling of KVM inflicted a circular dependency between adding AMX support to the core and to KVM. With the new scheme of switching fpstate AMX support can be added to the core code without affecting KVM.
- Replace various variables with proper data structures so the extra information required for adding dynamically enabled FPU features (AMX) can be added in one place
- Add AMX (Advanced Matrix eXtensions) support (finally):
AMX is a large XSTATE component which is going to be available with Sapphire Rapids Xeon CPUs. The feature comes with an extra MSR (MSR_XFD) which allows trapping the (first) use of an AMX related instruction, which has two benefits:
1) It allows the kernel to control access to the feature
2) It allows the kernel to dynamically allocate the large register state buffer instead of burdening every task with the extra 8K or larger state storage.
It would have been great to gain this kind of control already with AVX512.
The support comes with the following infrastructure components:
1) arch_prctl() to
   - read the supported features (equivalent to XGETBV(0))
   - read the permitted features for a task
   - request permission for a dynamically enabled feature
Permission is granted per process, inherited on fork() and cleared on exec(). The permission policy of the kernel is restricted to sigaltstack size validation, but the syscall obviously allows further restrictions via seccomp etc.
2) A stronger sigaltstack size validation for sys_sigaltstack(2) which takes granted permissions and the potentially resulting larger signal frame into account. This mechanism can also be used to enforce factual sigaltstack validation independent of dynamic features, to help with finding potential victims of the 2K sigaltstack size constant which has been broken since AVX512 support was added.
3) Exception handling for #NM traps to catch first use of an extended feature via a new cause MSR. If the exception was caused by the use of such a feature, the handler checks permission for that feature. If permission has not been granted, the handler sends a SIGILL like the #UD handler would do if the feature had been disabled in XCR0. If permission has been granted, then a new fpstate which fits the larger buffer requirement is allocated.
In the unlikely case that this allocation fails, the handler sends SIGSEGV to the task. That's not elegant, but unavoidable as the other discussed options of preallocation or full per task permissions come with their own set of horrors for kernel and/or userspace. So this is the lesser of the evils and SIGSEGV caused by unexpected memory allocation failures is not a fundamentally new concept either.
When allocation succeeds, the fpstate properties are filled in to reflect the extended feature set and the resulting sizes, the fpu::fpstate pointer is updated accordingly and the trap is disarmed for this task permanently.
4) Enumeration and size calculations
5) Trap switching via MSR_XFD
The XFD (eXtended Feature Disable) MSR is context switched with the same lifetime rules as the FPU register state itself. The mechanism is keyed off a static key which is disabled by default, so CPUs without AMX have zero overhead. On AMX enabled CPUs the overhead is limited by comparing the task's XFD value with a per-CPU shadow variable to avoid redundant MSR writes. In case of switching from an AMX-using task to a non-AMX-using task or vice versa, the extra MSR write is obviously inevitable.
All other places which need to be aware of the variable feature sets and resulting variable sizes are not affected at all because they retrieve the information (feature set, sizes) unconditionally from the fpstate properties.
6) Enable the new AMX states
Note, this is relatively new code despite the fact that AMX support has been in the works for more than a year now.
The big refactoring of the FPU code, which allowed a proper integration, was started exactly 3 weeks ago. Refactoring of the existing FPU code and of the original AMX patches took a week and has been subject to extensive review and testing. The only fallout which was not caught in review and testing right away was restricted to AMX enabled systems, which is completely irrelevant for anyone outside Intel and their early access program. There might be dragons lurking as usual, but so far the fine-grained refactoring has held up and any as yet undetected fallout is bisectable and should be easily addressable before the 5.16 release. Famous last words...
Many thanks to Chang Bae and Dave Hansen for working hard on this, and also to the various test teams at Intel who reserved extra capacity to follow the rapid development of this closely, which provides the confidence level required to offer this rather large update for inclusion into 5.16-rc1.
* tag 'x86-fpu-2021-11-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (110 commits)
  Documentation/x86: Add documentation for using dynamic XSTATE features
  x86/fpu: Include vmalloc.h for vzalloc()
  selftests/x86/amx: Add context switch test
  selftests/x86/amx: Add test cases for AMX state management
  x86/fpu/amx: Enable the AMX feature in 64-bit mode
  x86/fpu: Add XFD handling for dynamic states
  x86/fpu: Calculate the default sizes independently
  x86/fpu/amx: Define AMX state components and have it used for boot-time checks
  x86/fpu/xstate: Prepare XSAVE feature table for gaps in state component numbers
  x86/fpu/xstate: Add fpstate_realloc()/free()
  x86/fpu/xstate: Add XFD #NM handler
  x86/fpu: Update XFD state where required
  x86/fpu: Add sanity checks for XFD
  x86/fpu: Add XFD state to fpstate
  x86/msr-index: Add MSRs for XFD
  x86/cpufeatures: Add eXtended Feature Disabling (XFD) feature bit
  x86/fpu: Reset permission and fpstate on exec()
  x86/fpu: Prepare fpu_clone() for dynamically enabled features
  x86/fpu/signal: Prepare for variable sigframe length
  x86/signal: Use fpu::__state_user_size for sigalt stack validation
  ...
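The #NM handling policy described in point 3 above can be sketched roughly as follows (a simplified illustration, not the actual handler; xfeature_is_permitted() and fpstate_grow() are hypothetical helper names):

    /* Hedged sketch of the #NM policy: read the cause MSR, then either grow
     * the fpstate, send SIGILL (no permission) or SIGSEGV (allocation
     * failure), mirroring the behaviour described in the pull request. */
    void handle_xfd_nm(void)
    {
            u64 xfd_err;

            rdmsrl(MSR_IA32_XFD_ERR, xfd_err);      /* which feature trapped */
            wrmsrl(MSR_IA32_XFD_ERR, 0);

            if (!xfd_err)
                    return;                         /* ordinary #NM, not XFD */

            if (!xfeature_is_permitted(current, xfd_err))
                    force_sig(SIGILL);              /* as if disabled in XCR0 */
            else if (fpstate_grow(current, xfd_err))
                    force_sig(SIGSEGV);             /* allocation failure path */
            /* on success the larger fpstate is installed and the trap disarmed */
    }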
|
Revision tags: v5.15, v5.15-rc7 |
|
#
67236547 |
| 22-Oct-2021 |
Chang S. Bae <chang.seok.bae@intel.com> |
x86/fpu: Update XFD state where required
The IA32_XFD_MSR allows arming #NM traps for XSTATE components which are enabled in XCR0. The register has to be restored before the task's XSTATE is restored. The lifetime rules are the same as for FPU state.
XFD is updated on return to userspace only when the FPU state of the task is not up to date in the registers. It's updated before the XRSTORS so that any enabled dynamic features are restored as well and not brought into init state.
Also in signal handling for restoring FPU state from user space the correctness of the XFD state has to be ensured.
Add it to CPU initialization and resume as well.
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20211021225527.10184-17-chang.seok.bae@intel.com
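A compact sketch of the ordering rule described above (the restore_fpregs() wrapper and the exact os_xrstor() call are assumptions for illustration):

    /* Hedged sketch: write IA32_XFD for the incoming task before XRSTOR*
     * runs, so enabled dynamic features are restored instead of being
     * trapped or left in their init state. */
    static void restore_fpregs(struct fpu *fpu)
    {
            wrmsrl(MSR_IA32_XFD, fpu->fpstate->xfd);        /* arm/disarm #NM traps */
            os_xrstor(fpu->fpstate, XFEATURE_MASK_FPSTATE);
    }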
|
#
5529acf4 |
| 22-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu: Add sanity checks for XFD
Add debug functionality to ensure that the XFD MSR is up to date for XSAVE* and XRSTOR* operations.
[ tglx: Improve comment. ]
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211021225527.10184-16-chang.seok.bae@intel.com
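A minimal sketch of what such a sanity check could look like (the helper name is an assumption; the real check may compare against a cached value rather than reading the MSR back):

    /* Hedged sketch: on debug builds, verify that the live IA32_XFD value
     * matches what the fpstate about to be XSAVE*d/XRSTOR*ed expects. */
    static void xfd_validate_state(struct fpstate *fpstate)
    {
            u64 xfd;

            if (!IS_ENABLED(CONFIG_X86_DEBUG_FPU))
                    return;

            rdmsrl(MSR_IA32_XFD, xfd);
            WARN_ON_ONCE(xfd != fpstate->xfd);
    }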
|
#
8bf26758 |
| 22-Oct-2021 |
Chang S. Bae <chang.seok.bae@intel.com> |
x86/fpu: Add XFD state to fpstate
Add storage for XFD register state to struct fpstate. This will be used to store the XFD MSR state and to switch the XFD MSR when FPU content is restored.
Add a per-CPU variable to cache the current MSR value so the MSR only has to be written when the values differ.
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211021225527.10184-15-chang.seok.bae@intel.com
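The caching described here could be sketched as follows (variable and helper names are illustrative assumptions):

    /* Hedged sketch: cache the last IA32_XFD value written on this CPU so
     * the MSR is only touched when the incoming task needs a different
     * value. */
    static DEFINE_PER_CPU(u64, xfd_cache);

    static inline void xfd_set_state(u64 xfd)
    {
            if (__this_cpu_read(xfd_cache) != xfd) {
                    wrmsrl(MSR_IA32_XFD, xfd);
                    __this_cpu_write(xfd_cache, xfd);
            }
    }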
|
#
db8268df |
| 22-Oct-2021 |
Chang S. Bae <chang.seok.bae@intel.com> |
x86/arch_prctl: Add controls for dynamic XSTATE components
Dynamically enabled XSTATE features are by default disabled for all processes. A process has to request permission to use such a feature.
To support this, implement an architecture-specific prctl() with the following options:
- ARCH_GET_XCOMP_SUPP
Copies the supported feature bitmap into the user space provided u64 storage. The pointer is handed in via arg2
- ARCH_GET_XCOMP_PERM
Copies the process wide permitted feature bitmap into the user space provided u64 storage. The pointer is handed in via arg2
- ARCH_REQ_XCOMP_PERM
Request permission for a feature set. A feature set can be mapped to a facility, e.g. AMX, and can require one or more XSTATE components to be enabled.
The feature argument is the number of the highest XSTATE component which is required for a facility to work.
The request argument is not a user supplied bitmap because that makes filtering harder (think seccomp) and even impossible, since supporting 32-bit tasks would require the argument to be a pointer.
The permission mechanism works this way:
Task asks for permission for a facility and kernel checks whether that's supported. If supported it does:
1) Check whether permission has already been granted
2) Compute the required kernel and user space buffer (sigframe) sizes.
3) Validate that no task has a sigaltstack installed which is smaller than the resulting sigframe size
4) Add the requested feature bit(s) to the permission bitmap of current->group_leader->fpu and store the sizes in the group leader's fpu struct as well.
If that is successful then the feature is still not enabled for any of the tasks. The first usage of a related instruction will result in a #NM trap. The trap handler validates the permission bit of the task's group leader and, if permitted, installs a larger kernel buffer and transfers the permission and size info to the new fpstate container, which makes all the FPU functions which require per-task information aware of the extended feature set.
[ tglx: Adopted to new base code, added missing serialization, massaged namings, comments and changelog ]
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211021225527.10184-7-chang.seok.bae@intel.com
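A userspace sketch of the request flow described above (the numeric ARCH_* fallback values and the XTILEDATA component number are assumptions; prefer the definitions from <asm/prctl.h> where available):

    /* Hedged sketch: query supported and permitted dynamic XSTATE components,
     * then request permission for AMX tile data via arch_prctl(). */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef ARCH_GET_XCOMP_SUPP
    #define ARCH_GET_XCOMP_SUPP     0x1021  /* assumed values, see <asm/prctl.h> */
    #define ARCH_GET_XCOMP_PERM     0x1022
    #define ARCH_REQ_XCOMP_PERM     0x1023
    #endif
    #define XFEATURE_XTILEDATA      18      /* highest AMX state component */

    int main(void)
    {
            unsigned long long supported = 0, permitted = 0;

            syscall(SYS_arch_prctl, ARCH_GET_XCOMP_SUPP, &supported);
            syscall(SYS_arch_prctl, ARCH_GET_XCOMP_PERM, &permitted);
            printf("supported %#llx permitted %#llx\n", supported, permitted);

            /* Request everything up to and including XTILEDATA. */
            if (syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA))
                    perror("ARCH_REQ_XCOMP_PERM");
            return 0;
    }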
|
Revision tags: v5.15-rc6 |
|
#
d72c8701 |
| 15-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu/xstate: Move remaining xfeature helpers to core
Now that everything is mopped up, move all the helpers and prototypes into the core header. They are not required by the outside.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211014230739.514095101@linutronix.de
|
#
2bd264bc |
| 15-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu: Move xstate size to fpu_*_cfg
Use the new kernel and user space config storage to store and retrieve the XSTATE buffer sizes. The default and the maximum size are the same for now, but will change when support for dynamically enabled features is added.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211014230739.296830097@linutronix.de
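For illustration only, the config storage referred to above could look roughly like this (field and variable names are assumptions based on the description, not quoted from the patch):

    /* Hedged sketch: separate kernel-space and user-space (uabi/sigframe)
     * views of the XSTATE configuration, each with a default and a maximum
     * buffer size; the two sizes stay equal until dynamically enabled
     * features are added. */
    struct fpu_state_config {
            unsigned int    max_size;       /* buffer size with all features */
            unsigned int    default_size;   /* buffer size without dynamic features */
            u64             max_features;
            u64             default_features;
    };

    struct fpu_state_config fpu_kernel_cfg; /* compacted/supervisor format */
    struct fpu_state_config fpu_user_cfg;   /* uabi/sigframe format */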
|
#
49e4eb41 |
| 13-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu/xstate: Use fpstate for copy_uabi_to_xstate()
Prepare for dynamically enabled states per task. The function needs to retrieve the features and sizes which are valid in a fpstate context. Retrieve them from fpstate.
Move the function declarations to the core header as they are not required anywhere else.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211013145323.233529986@linutronix.de
|
#
3ac8d757 |
| 13-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu: Use fpstate in __copy_xstate_to_uabi_buf()
With dynamically enabled features the copy function must know the features and the size which is valid for the task. Retrieve them from fpstate.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211013145323.181495492@linutronix.de
|
#
0b2d39aa |
| 13-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu/xstate: Use fpstate for xsave_to_user_sigframe()
With dynamically enabled features the sigframe code must know the features which are enabled for the task. Get them from fpstate.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211013145323.077781448@linutronix.de
|
#
073e627a |
| 13-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu/xstate: Use fpstate for os_xsave()
With variable feature sets, XSAVE[S] needs to know the feature set for which the buffer is valid. Retrieve it from fpstate.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211013145323.025695590@linutronix.de
|
#
087df48c |
| 13-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu: Replace KVMs xstate component clearing
In order to prepare for the support of dynamically enabled FPU features, move the clearing of xstate components to the FPU core code.
No functional change.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: kvm@vger.kernel.org Link: https://lkml.kernel.org/r/20211013145322.399567049@linutronix.de
|
#
6415bb80 |
| 15-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu: Mop up the internal.h leftovers
Move the global interfaces to api.h and the rest into the core.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211015011539.948837194@linutronix.de
|
#
df95b0f1 |
| 15-Oct-2021 |
Thomas Gleixner <tglx@linutronix.de> |
x86/fpu: Move os_xsave() and os_xrstor() to core
Nothing outside the core code needs these.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20211015011539.513368075@linutronix.de
|