
Searched refs: L1D (results 1 – 25 of 27, sorted by relevance)

/linux/Documentation/admin-guide/hw-vuln/
l1d_flush.rst:1   L1D Flushing
5 leaks from the Level 1 Data cache (L1D) the kernel provides an opt-in
6 mechanism to flush the L1D cache on context switch.
10 (snooping of) from the L1D cache.
34 When PR_SPEC_L1D_FLUSH is enabled for a task a flush of the L1D cache is
38 If the underlying CPU supports L1D flushing in hardware, the hardware
44 The kernel command line allows to control the L1D flush mitigations at boot
58 The mechanism does not mitigate L1D data leaks between tasks belonging to
66 **NOTE** : The opt-in of a task for L1D flushing works only when the task's
68 requested L1D flushing is scheduled on a SMT-enabled core the kernel sends
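The opt-in described in l1d_flush.rst is made through the speculation-control prctl. A minimal sketch of a task enabling the flush for itself, assuming reasonably recent Linux headers (the fallback defines carry the values from include/uapi/linux/prctl.h):

    #include <stdio.h>
    #include <sys/prctl.h>

    /* Fallback defines with the values from include/uapi/linux/prctl.h,
     * in case the installed headers predate the L1D flush control.
     */
    #ifndef PR_SET_SPECULATION_CTRL
    #define PR_SET_SPECULATION_CTRL 53
    #endif
    #ifndef PR_SPEC_ENABLE
    #define PR_SPEC_ENABLE (1UL << 1)
    #endif
    #ifndef PR_SPEC_L1D_FLUSH
    #define PR_SPEC_L1D_FLUSH 2
    #endif

    int main(void)
    {
        /* Opt this task in: flush the L1D whenever it is switched out. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_L1D_FLUSH,
                  PR_SPEC_ENABLE, 0, 0) != 0) {
            perror("PR_SET_SPECULATION_CTRL");  /* fails if unsupported or not permitted */
            return 1;
        }
        puts("L1D flush on context switch enabled for this task");
        return 0;
    }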
l1tf.rst:97   share the L1 Data Cache (L1D) is important for this. As the flaw allows
98 only to attack data which is present in L1D, a malicious guest running
99 on one Hyperthread can attack the data which is brought into the L1D by
145 - L1D Flush mode:
148 'L1D vulnerable' L1D flushing is disabled
150 'L1D conditional cache flushes' L1D flush is conditionally enabled
152 'L1D cache flushes' L1D flush is unconditionally enabled
170 1. L1D flush on VMENTER
173 To make sure that a guest cannot attack data which is present in the L1D
174 the hypervisor flushes the L1D before entering the guest.
[all …]
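The active L1TF mitigation, including which of the VMENTER L1D flush modes listed above is in effect, is reported through the standard sysfs vulnerabilities interface. A small sketch that only prints the reported string (the exact wording depends on kernel version and configuration):

    #include <stdio.h>

    int main(void)
    {
        char buf[256];
        FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/l1tf", "r");

        if (!f) {
            perror("open l1tf sysfs file");
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            /* e.g. "Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable" */
            printf("L1TF status: %s", buf);
        fclose(f);
        return 0;
    }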
/linux/arch/alpha/kernel/
setup.c:1196   int L1I, L1D, L2, L3; in determine_cpu_caches() local
1206 L1D = L1I; in determine_cpu_caches()
1227 L1I = L1D = CSHAPE(8*1024, 5, 1); in determine_cpu_caches()
1242 L1I = L1D = CSHAPE(8*1024, 5, 1); in determine_cpu_caches()
1268 L1D = CSHAPE(8*1024, 5, 1); in determine_cpu_caches()
1271 L1D = CSHAPE(16*1024, 5, 1); in determine_cpu_caches()
1294 L1I = L1D = CSHAPE(64*1024, 6, 2); in determine_cpu_caches()
1301 L1I = L1D = CSHAPE(64*1024, 6, 2); in determine_cpu_caches()
1308 L1I = L1D = L2 = L3 = 0; in determine_cpu_caches()
1313 alpha_l1d_cacheshape = L1D; in determine_cpu_caches()
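The CSHAPE(total_size, log2_line_size, associativity) values built here end up in alpha_l1d_cacheshape. As a hypothetical illustration only, assuming CSHAPE packs the associativity in bits 0-3, the log2 of the line size in bits 4-7 and the total size in the higher bits (consistent with calls such as CSHAPE(8*1024, 5, 1) meaning 8 KiB, 32-byte lines, direct mapped), a decoder could look like:

    #include <stdio.h>

    /* Hypothetical decoder for a CSHAPE()-packed cache descriptor, assuming
     * the layout: bits 0-3 = associativity, bits 4-7 = log2(line size),
     * higher bits = total size in bytes with the low byte masked off.
     */
    static void print_cacheshape(const char *name, int shape)
    {
        if (shape <= 0) {
            printf("%s: not present\n", name);
            return;
        }
        printf("%s: %d KiB, %d-byte lines, %d-way\n",
               name, (shape & ~0xff) / 1024, 1 << ((shape >> 4) & 0xf), shape & 0xf);
    }

    int main(void)
    {
        /* CSHAPE(8*1024, 5, 1) from the snippet above: 8 KiB, 32-byte line, direct mapped */
        print_cacheshape("L1D", ((8 * 1024) & ~0xff) | (5 << 4) | 1);
        return 0;
    }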
/linux/arch/powerpc/perf/
e6500-pmu.c:36   [C(L1D)] = {
e500-pmu.c:39   [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power10-pmu.c:359   [C(L1D)] = {
460 [C(L1D)] = {
generic-compat-pmu.c:186   [ C(L1D) ] = {
mpc7450-pmu.c:366   [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power8-pmu.c:267   [ C(L1D) ] = {
power7-pmu.c:340   [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
ppc970-pmu.c:439   [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power6-pmu.c:500   [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power9-pmu.c:338   [ C(L1D) ] = {
power5-pmu.c:568   [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power5+-pmu.c:626   [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
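All of these per-PMU tables fill the same generic [cache][op][result] map that translates the generic hardware-cache events into raw event codes for the CPU; C(L1D) is shorthand for PERF_COUNT_HW_CACHE_L1D. From userspace the matching event is requested with PERF_TYPE_HW_CACHE. A minimal sketch counting L1D read misses over a busy loop, using the config encoding documented in perf_event_open(2) (glibc provides no wrapper, so the raw syscall is used):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    int main(void)
    {
        struct perf_event_attr attr;
        long long count = 0;
        volatile int sink = 0;
        int fd, i;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HW_CACHE;
        /* cache id | (op id << 8) | (result id << 16), per perf_event_open(2) */
        attr.config = PERF_COUNT_HW_CACHE_L1D |
                      (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                      (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
        attr.disabled = 1;
        attr.exclude_kernel = 1;

        /* Count for the calling thread, on any CPU. */
        fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) {
            perror("perf_event_open");
            return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        for (i = 0; i < 1000000; i++)
            sink += i;
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        if (read(fd, &count, sizeof(count)) == sizeof(count))
            printf("L1D read misses: %lld\n", count);
        close(fd);
        return 0;
    }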
/linux/Documentation/userspace-api/
spec_ctrl.rst:111   - PR_SPEC_L1D_FLUSH: Flush L1D Cache on context switch out of the task
114 For this control, PR_SPEC_ENABLE means that the **mitigation** is enabled (L1D
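A brief sketch of reading that control back with PR_GET_SPECULATION_CTRL (same constants as the opt-in example earlier; the fallback define carries the value from include/uapi/linux/prctl.h):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SPEC_L1D_FLUSH
    #define PR_SPEC_L1D_FLUSH 2   /* value from include/uapi/linux/prctl.h */
    #endif

    int main(void)
    {
        long state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_L1D_FLUSH, 0, 0, 0);

        if (state < 0) {
            perror("PR_GET_SPECULATION_CTRL");
            return 1;
        }
        printf("L1D flush mitigation is %s for this task\n",
               (state & PR_SPEC_ENABLE) ? "enabled" : "not enabled");
        return 0;
    }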
/linux/arch/sh/kernel/cpu/sh4a/
perf_event.c:116   [ C(L1D) ] = {
/linux/arch/sh/kernel/cpu/sh4/
perf_event.c:91   [ C(L1D) ] = {
/linux/tools/perf/Documentation/
perf-arm-spe.txt:159   only samples that have both 'L1D cache refill' AND 'TLB walk' are recorded. When
162 and 5 in 'inv_event_filter' will exclude any events that are either L1D cache
168 bit 2 - L1D access (FEAT_SPEv1p4)
169 bit 3 - L1D refill
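For example, since bit 3 is the L1D refill event, an event_filter value of 0x8 keeps only samples whose operation refilled the L1D, while the same mask in inv_event_filter drops exactly those samples instead; the precise perf record syntax for these SPE config terms depends on the perf and kernel versions in use.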
/linux/arch/sparc/kernel/
perf_event.c:221   [C(L1D)] = {
359 [C(L1D)] = {
494 [C(L1D)] = {
631 [C(L1D)] = {
/linux/tools/perf/util/
arm-spe.c:855   if (arm_spe_is_cache_hit(record->type, L1D)) { in arm_spe__synth_ld_memory_level()
878 } else if (arm_spe_is_cache_miss(record->type, L1D)) { in arm_spe__synth_ld_memory_level()
898 } else if (arm_spe_is_cache_level(record->type, L1D)) { in arm_spe__synth_st_memory_level()
900 data_src->mem_lvl |= arm_spe_is_cache_miss(record->type, L1D) ? in arm_spe__synth_st_memory_level()
/linux/arch/x86/events/intel/
core.c:556   [ C(L1D ) ] = {
702 [ C(L1D ) ] = {
930 [ C(L1D) ] = {
1086 [ C(L1D ) ] = {
1238 [ C(L1D) ] = {
1421 [ C(L1D) ] = {
1536 [ C(L1D) ] = {
1627 [ C(L1D) ] = {
1778 [ C(L1D) ] = {
1912 [C(L1D)] = {
[all …]
/linux/arch/loongarch/kernel/
perf_event.c:691   [C(L1D)] = {
/linux/Documentation/arch/x86/
mds.rst:81   clearing. Either the modified VERW instruction or via the L1D Flush
/linux/Documentation/admin-guide/
kernel-parameters.txt:3257   always: L1D cache flush on every VMENTER.
3258 cond: Flush L1D on VMENTER only when the code between
3270 Control mitigation for L1D based snooping vulnerability.
3296 hypervisors, i.e. unconditional L1D flush.
3298 SMT control and L1D flush control via the
3303 i.e. SMT enabled or L1D flush disabled.
3306 Same as 'full', but disables SMT and L1D
3314 L1D flush.
3316 SMT control and L1D flush control via the
3321 i.e. SMT enabled or L1D flush disabled.
[all …]
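Putting the quoted options together, a hypothetical boot line that forces the strictest L1TF handling, makes the VMENTER flush unconditional and enables the opt-in context-switch flush interface could look like this (the always/cond choices quoted at the top belong to kvm-intel.vmentry_l1d_flush=; availability of each option depends on kernel version and CPU):

    l1tf=full,force kvm-intel.vmentry_l1d_flush=always l1d_flush=on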
