
Searched refs: L1D (Results 1 – 25 of 27) sorted by relevance

/linux/Documentation/admin-guide/hw-vuln/
l1d_flush.rst
1 L1D Flushing
5 leaks from the Level 1 Data cache (L1D) the kernel provides an opt-in
6 mechanism to flush the L1D cache on context switch.
10 (snooping of) from the L1D cache.
34 When PR_SPEC_L1D_FLUSH is enabled for a task a flush of the L1D cache is
38 If the underlying CPU supports L1D flushing in hardware, the hardware
44 The kernel command line allows to control the L1D flush mitigations at boot
58 The mechanism does not mitigate L1D data leaks between tasks belonging to
66 **NOTE** : The opt-in of a task for L1D flushing works only when the task's
68 requested L1D flushing is scheduled on a SMT-enabled core the kernel sends
l1tf.rst
97 share the L1 Data Cache (L1D) is important for this. As the flaw allows
98 only to attack data which is present in L1D, a malicious guest running
99 on one Hyperthread can attack the data which is brought into the L1D by
145 - L1D Flush mode:
148 'L1D vulnerable' L1D flushing is disabled
150 'L1D conditional cache flushes' L1D flush is conditionally enabled
152 'L1D cache flushes' L1D flush is unconditionally enabled
170 1. L1D flush on VMENTER
173 To make sure that a guest cannot attack data which is present in the L1D
174 the hypervisor flushes the L1D before entering the guest.
[all …]
/linux/arch/alpha/kernel/
setup.c
1195 int L1I, L1D, L2, L3; in determine_cpu_caches() local
1205 L1D = L1I; in determine_cpu_caches()
1226 L1I = L1D = CSHAPE(8*1024, 5, 1); in determine_cpu_caches()
1241 L1I = L1D = CSHAPE(8*1024, 5, 1); in determine_cpu_caches()
1267 L1D = CSHAPE(8*1024, 5, 1); in determine_cpu_caches()
1270 L1D = CSHAPE(16*1024, 5, 1); in determine_cpu_caches()
1293 L1I = L1D = CSHAPE(64*1024, 6, 2); in determine_cpu_caches()
1300 L1I = L1D = CSHAPE(64*1024, 6, 2); in determine_cpu_caches()
1307 L1I = L1D = L2 = L3 = 0; in determine_cpu_caches()
1312 alpha_l1d_cacheshape = L1D; in determine_cpu_caches()
/linux/drivers/perf/
riscv_pmu_sbi.c
161 [C(L1D)] = {
164 C(OP_READ), C(L1D), SBI_PMU_EVENT_TYPE_CACHE, 0}},
166 C(OP_READ), C(L1D), SBI_PMU_EVENT_TYPE_CACHE, 0}},
170 C(OP_WRITE), C(L1D), SBI_PMU_EVENT_TYPE_CACHE, 0}},
172 C(OP_WRITE), C(L1D), SBI_PMU_EVENT_TYPE_CACHE, 0}},
176 C(OP_PREFETCH), C(L1D), SBI_PMU_EVENT_TYPE_CACHE, 0}},
178 C(OP_PREFETCH), C(L1D), SBI_PMU_EVENT_TYPE_CACHE, 0}},
/linux/arch/powerpc/perf/
e6500-pmu.c
36 [C(L1D)] = {
e500-pmu.c
39 [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power10-pmu.c
359 [C(L1D)] = {
460 [C(L1D)] = {
generic-compat-pmu.c
186 [ C(L1D) ] = {
mpc7450-pmu.c
366 [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power8-pmu.c
267 [ C(L1D) ] = {
power7-pmu.c
340 [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
ppc970-pmu.c
439 [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power6-pmu.c
500 [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power9-pmu.c
338 [ C(L1D) ] = {
power5-pmu.c
568 [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
power5+-pmu.c
626 [C(L1D)] = { /* RESULT_ACCESS RESULT_MISS */
/linux/arch/sh/kernel/cpu/sh4a/
perf_event.c
116 [ C(L1D) ] = {
/linux/arch/sh/kernel/cpu/sh4/
perf_event.c
91 [ C(L1D) ] = {
/linux/tools/perf/Documentation/
perf-arm-spe.txt
159 only samples that have both 'L1D cache refill' AND 'TLB walk' are recorded. When
162 and 5 in 'inv_event_filter' will exclude any events that are either L1D cache
168 bit 2 - L1D access (FEAT_SPEv1p4)
169 bit 3 - L1D refill
/linux/arch/sparc/kernel/
perf_event.c
221 [C(L1D)] = {
359 [C(L1D)] = {
494 [C(L1D)] = {
631 [C(L1D)] = {
/linux/tools/perf/util/
arm-spe.c
855 if (arm_spe_is_cache_hit(record->type, L1D)) { in arm_spe__synth_ld_memory_level()
878 } else if (arm_spe_is_cache_miss(record->type, L1D)) { in arm_spe__synth_ld_memory_level()
898 } else if (arm_spe_is_cache_level(record->type, L1D)) { in arm_spe__synth_st_memory_level()
900 data_src->mem_lvl |= arm_spe_is_cache_miss(record->type, L1D) ? in arm_spe__synth_st_memory_level()
/linux/arch/x86/events/intel/
core.c
635 [ C(L1D ) ] = {
737 [ C(L1D ) ] = {
877 [ C(L1D ) ] = {
1105 [ C(L1D) ] = {
1261 [ C(L1D ) ] = {
1413 [ C(L1D) ] = {
1596 [ C(L1D) ] = {
1711 [ C(L1D) ] = {
1802 [ C(L1D) ] = {
1953 [ C(L1D) ] = {
[all …]
/linux/arch/loongarch/kernel/
perf_event.c
691 [C(L1D)] = {
/linux/Documentation/arch/x86/
mds.rst
81 clearing. Either the modified VERW instruction or via the L1D Flush
/linux/Documentation/admin-guide/
kernel-parameters.txt
3328 always: L1D cache flush on every VMENTER.
3329 cond: Flush L1D on VMENTER only when the code between
3341 Control mitigation for L1D based snooping vulnerability.
3367 hypervisors, i.e. unconditional L1D flush.
3369 SMT control and L1D flush control via the
3374 i.e. SMT enabled or L1D flush disabled.
3377 Same as 'full', but disables SMT and L1D
3385 L1D flush.
3387 SMT control and L1D flush control via the
3392 i.e. SMT enabled or L1D flush disabled.
[all …]