/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-n1/

l2_cache.json
  4: …accesses. level 2 cache is a unified cache for data and instruction accesses. Accesses are for mis…
  8: …e for data and instruction accesses. Accesses are for misses in the level 1 caches or translation …
  20: …accesses due to memory read operations. level 2 cache is a unified cache for data and instruction …
  24: …accesses due to memory write operations. level 2 cache is a unified cache for data and instruction…
  28: …accesses due to memory read operation counted by L2D_CACHE_RD. level 2 cache is a unified cache fo…
  32: …accesses due to memory write operation counted by L2D_CACHE_WR. level 2 cache is a unified cache f…

metrics.json
  47: …lks to the total number of data TLB accesses. This gives an indication of the effectiveness of the…
  82: … total number of instruction TLB accesses. This gives an indication of the effectiveness of the in…
  89: …c measures the ratio of level 1 data cache accesses missed to the total number of level 1 data cac…
  96: …"BriefDescription": "This metric measures the number of level 1 data cache accesses missed per tho…
  103: …tric measures the ratio of level 1 data TLB accesses missed to the total number of level 1 data TL…
  110: …"BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed pe…
  117: … the ratio of level 1 instruction cache accesses missed to the total number of level 1 instruction…
  124: …"BriefDescription": "This metric measures the number of level 1 instruction cache accesses missed …
  131: …res the ratio of level 1 instruction TLB accesses missed to the total number of level 1 instructio…
  138: …"BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed pe…
  [all …]

tlb.json
  8: …accesses that resulted in TLB refills. If there are multiple misses in the TLB that are resolved b…
  12: …"PublicDescription": "Counts level 1 data TLB accesses caused by any memory load or store operatio…
  16: …ion TLB accesses, whether the access hits or misses in the TLB. This event counts both demand acce…
  24: …"PublicDescription": "Counts level 2 TLB accesses except those caused by TLB maintenance operation…
  36, 40: … counts for refills caused by preload instructions or hardware prefetch accesses. This event count…
  44: …"PublicDescription": "Counts level 1 data TLB accesses caused by memory read operations. This even…
  48: …"PublicDescription": "Counts any L1 data side TLB accesses caused by memory write operations. This…
  60: …"PublicDescription": "Counts level 2 TLB accesses caused by memory read operations from both data …
  64: …"PublicDescription": "Counts level 2 TLB accesses caused by memory write operations from both data…

memory.json
  4: …accesses issued by the CPU load store unit, where those accesses are issued due to load or store o…
  12: …"PublicDescription": "Counts accesses to another chip, which is implemented as a different CMN mes…
  16: …accesses issued by the CPU due to load operations. The event counts any memory load access, no mat…
  20: …accesses issued by the CPU due to store operations. The event counts any memory store access, no m…

l3_cache.json
  8: "PublicDescription": "Counts level 3 accesses that receive data from outside the L3 cache."
  12: …accesses. level 3 cache is a unified cache for data and instruction accesses. Accesses are for mis…

spec_operation.json
  16, 20, 24: …t counts unaligned accesses (as defined by the actual instruction), even if they are subsequently …

/linux/tools/perf/pmu-events/arch/arm64/arm/neoverse-v1/

l2_cache.json
  4: …accesses. level 2 cache is a unified cache for data and instruction accesses. Accesses are for mis…
  8: …e for data and instruction accesses. Accesses are for misses in the level 1 caches or translation …
  20: …accesses due to memory read operations. level 2 cache is a unified cache for data and instruction …
  24: …accesses due to memory write operations. level 2 cache is a unified cache for data and instruction…
  28: …accesses due to memory read operation counted by L2D_CACHE_RD. level 2 cache is a unified cache fo…
  32: …accesses due to memory write operation counted by L2D_CACHE_WR. level 2 cache is a unified cache f…

metrics.json
  54: …lks to the total number of data TLB accesses. This gives an indication of the effectiveness of the…
  93: … total number of instruction TLB accesses. This gives an indication of the effectiveness of the in…
  100: …c measures the ratio of level 1 data cache accesses missed to the total number of level 1 data cac…
  107: …"BriefDescription": "This metric measures the number of level 1 data cache accesses missed per tho…
  114: …tric measures the ratio of level 1 data TLB accesses missed to the total number of level 1 data TL…
  121: …"BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed pe…
  128: … the ratio of level 1 instruction cache accesses missed to the total number of level 1 instruction…
  135: …"BriefDescription": "This metric measures the number of level 1 instruction cache accesses missed …
  142: …res the ratio of level 1 instruction TLB accesses missed to the total number of level 1 instructio…
  149: …"BriefDescription": "This metric measures the number of level 1 instruction TLB accesses missed pe…
  [all …]

tlb.json
  8: …accesses that resulted in TLB refills. If there are multiple misses in the TLB that are resolved b…
  12: …"PublicDescription": "Counts level 1 data TLB accesses caused by any memory load or store operatio…
  16: …ion TLB accesses, whether the access hits or misses in the TLB. This event counts both demand acce…
  24: …"PublicDescription": "Counts level 2 TLB accesses except those caused by TLB maintenance operation…
  36, 40: … counts for refills caused by preload instructions or hardware prefetch accesses. This event count…
  44: …"PublicDescription": "Counts level 1 data TLB accesses caused by memory read operations. This even…
  48: …"PublicDescription": "Counts any L1 data side TLB accesses caused by memory write operations. This…
  60: …"PublicDescription": "Counts level 2 TLB accesses caused by memory read operations from both data …
  64: …"PublicDescription": "Counts level 2 TLB accesses caused by memory write operations from both data…

memory.json
  4: …accesses issued by the CPU load store unit, where those accesses are issued due to load or store o…
  12: …"PublicDescription": "Counts accesses to another chip, which is implemented as a different CMN mes…
  16: …accesses issued by the CPU due to load operations. The event counts any memory load access, no mat…
  20: …accesses issued by the CPU due to store operations. The event counts any memory store access, no m…

l3_cache.json
  8: "PublicDescription": "Counts level 3 accesses that receive data from outside the L3 cache."
  12: …accesses. level 3 cache is a unified cache for data and instruction accesses. Accesses are for mis…

spec_operation.json
  20, 24, 28: …t counts unaligned accesses (as defined by the actual instruction), even if they are subsequently …

/linux/tools/memory-model/Documentation/

ordering.txt
  15: 2. Ordered memory accesses. These operations order themselves
  16: against some or all of the CPU's prior accesses or some or all
  17: of the CPU's subsequent accesses, depending on the subcategory
  20: 3. Unordered accesses, as the name indicates, have no ordering
  48: a device driver, which must correctly order accesses to a physical
  68: accesses against all subsequent accesses from the viewpoint of all CPUs.
  89: CPU's accesses into three groups:
  242: Ordered Memory Accesses
  245: The Linux kernel provides a wide variety of ordered memory accesses:
  264: of the CPU's prior memory accesses. Release operations often provide
  [all …]

access-marking.txt
  1: MARKING SHARED-MEMORY ACCESSES
  5: normal accesses to shared memory, that is "normal" as in accesses that do
  7: document these accesses, both with comments and with special assertions
  18: 1. Plain C-language accesses (unmarked), for example, "a = b;"
  39: Neither plain C-language accesses nor data_race() (#1 and #2 above) place
  46: C-language accesses. It is permissible to combine #2 and #3, for example,
  51: C-language accesses, but marking all accesses involved in a given data
  60: data_race() and even plain C-language accesses is preferable to
  88: reads can enable better checking of the remaining accesses implementing
  135: the other accesses to the relevant shared variables. But please note
  [all …]

/linux/include/linux/

kcsan-checks.h
  4: * uninstrumented accesses, or change KCSAN checking behaviour of accesses.
  87: * Accesses within the atomic region may appear to race with other accesses but
  100: * Accesses within the atomic region may appear to race with other accesses but
  111: * kcsan_atomic_next - consider following accesses as atomic
  113: * Force treating the next n memory accesses for the current context as atomic
  116: * @n: number of following memory accesses to treat as atomic.
  123: * Set the access mask for all accesses for the current context if non-zero.
  163: * Scoped accesses are implemented by appending @sa to an internal list for the
  223: * Only use these to disable KCSAN for accesses in the current compilation unit;
  323: * Check for atomic accesses: if atomic accesses are not ignored, this simply
  [all …]

/linux/Documentation/core-api/

unaligned-memory-access.rst
  2: Unaligned Memory Accesses
  15: unaligned accesses, why you need to write code that doesn't cause them,
  22: Unaligned memory accesses occur when you try to read N bytes of data starting
  59: - Some architectures are able to perform unaligned memory accesses
  61: - Some architectures raise processor exceptions when unaligned accesses
  64: - Some architectures raise processor exceptions when unaligned accesses
  72: memory accesses to happen, your code will not work correctly on certain
  103: to pad structures so that accesses to fields are suitably aligned (assuming
  136: lead to unaligned accesses when accessing fields that do not satisfy
  183: Here is another example of some code that could cause unaligned accesses::
  [all …]

/linux/Documentation/arch/riscv/

hwprobe.rst
  283: the performance of misaligned scalar native word accesses on the selected set
  287: misaligned scalar accesses is unknown.
  290: accesses are emulated via software, either in or below the kernel. These
  291: accesses are always extremely slow.
  294: word sized accesses are slower than the equivalent quantity of byte
  295: accesses. Misaligned accesses may be supported directly in hardware, or
  299: word sized accesses are faster than the equivalent quantity of byte
  300: accesses.
  303: accesses are not supported at all and will generate a misaligned address
  315: performance of misaligned vector accesses on the selected set of processors.
  [all …]

/linux/tools/perf/pmu-events/arch/x86/elkhartlake/

frontend.json
  55: "EventName": "ICACHE.ACCESSES",
  56: … accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count …
  65: … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count…
  74: … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count…

/linux/tools/perf/pmu-events/arch/x86/snowridgex/

frontend.json
  55: "EventName": "ICACHE.ACCESSES",
  56: … accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count …
  65: … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count…
  74: … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count…

/linux/tools/testing/selftests/bpf/progs/

user_ringbuf_fail.c
  40, 63, 83, 103, 125, 145, 165, 183: /* A callback that accesses a dynptr in a bpf_user_ringbuf_drain callback should

/linux/kernel/kcsan/

permissive.h
  3: * Special rules for ignoring entire classes of data-racy memory accesses. None
  44: * Rules here are only for plain read accesses, so that we still report [in kcsan_ignore_data_race()]
  45: * data races between plain read-write accesses. [in kcsan_ignore_data_race()]
  60: * While it is still recommended that such accesses be marked [in kcsan_ignore_data_race()]
  66: * optimizations (including those that tear accesses), because no more [in kcsan_ignore_data_race()]
  67: * than 1 bit changed, the plain accesses are safe despite the presence [in kcsan_ignore_data_race()]

/linux/Documentation/i2c/

i2c-topology.rst
  83: This means that accesses to D2 are locked out for the full duration
  84: of the entire operation. But accesses to D3 are possibly interleaved
  165: This means that accesses to both D2 and D3 are locked out for the full
  231: When device D1 is accessed, accesses to D2 are locked out for the
  233: are locked). But accesses to D3 and D4 are possibly interleaved at
  236: Accesses to D3 lock out D1 and D2, but accesses to D4 are still possibly
  254: When device D1 is accessed, accesses to D2 and D3 are locked out
  256: root adapter). But accesses to D4 are possibly interleaved at any
  267: mux. In that case, any interleaved accesses to D4 might close M2
  288: When D1 is accessed, accesses to D2 are locked out for the full
  [all …]

/linux/arch/riscv/

Kconfig
  944: prompt "Unaligned Accesses Support"
  947: This determines the level of support for unaligned accesses. This
  957: speed of unaligned accesses. This probing will dynamically determine
  958: the speed of unaligned accesses on the underlying system. If unaligned
  959: memory accesses trap into the kernel as they are not supported by the
  960: system, the kernel will emulate the unaligned accesses to preserve the
  967: If unaligned memory accesses trap into the kernel as they are not
  969: accesses to preserve the UABI. When the underlying system does support
  970: unaligned accesses, the unaligned accesses are assumed to be slow.
  973: bool "Assume the system supports slow unaligned memory accesses"
  [all …]

/linux/tools/perf/pmu-events/arch/x86/alderlaken/

frontend.json
  15: "EventName": "ICACHE.ACCESSES",
  16: … accesses, so that multiple back to back fetches to the exact same cache line or byte chunk count …
  25: … accesses, so that multiple back to back fetches to the exact same cache line and byte chunk count…

/linux/tools/perf/pmu-events/arch/nds32/n13/

atcpmu.json
  75: "PublicDescription": "uITLB accesses",
  78: "BriefDescription": "V3 uITLB accesses"
  81: "PublicDescription": "uDTLB accesses",
  84: "BriefDescription": "V3 uDTLB accesses"
  87: "PublicDescription": "MTLB accesses",
  90: "BriefDescription": "V3 MTLB accesses"
  108: "BriefDescription": "V3 ILM accesses"