
Searched refs:loads (Results 1 – 25 of 166) sorted by relevance


/linux/tools/testing/selftests/drivers/net/hw/
devlink_port_split.py
60 ports = json.loads(stdout)['port']
84 values = list(json.loads(stdout)['port'].values())[0]
102 values = list(json.loads(stdout)['port'].values())[0]
266 validate_devlink_output(json.loads(stdout))
267 devs = json.loads(stdout)['dev']
/linux/kernel/sched/
loadavg.c
71 void get_avenrun(unsigned long *loads, unsigned long offset, int shift) in get_avenrun() argument
73 loads[0] = (avenrun[0] + offset) << shift; in get_avenrun()
74 loads[1] = (avenrun[1] + offset) << shift; in get_avenrun()
75 loads[2] = (avenrun[2] + offset) << shift; in get_avenrun()
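
A minimal sketch of how a caller typically consumes get_avenrun(), modeled on fs/proc/loadavg.c: avenrun[] holds fixed-point values with FSHIFT fractional bits, the FIXED_1/200 offset rounds to two decimal places, and the LOAD_INT()/LOAD_FRAC() helpers from <linux/sched/loadavg.h> split each value for printing. print_loadavg() is a hypothetical wrapper, not a kernel function.

    #include <linux/printk.h>
    #include <linux/sched/loadavg.h>

    static void print_loadavg(void)
    {
            unsigned long avnrun[3];

            /* offset of FIXED_1/200 rounds to two decimal places */
            get_avenrun(avnrun, FIXED_1 / 200, 0);
            pr_info("load average: %lu.%02lu %lu.%02lu %lu.%02lu\n",
                    LOAD_INT(avnrun[0]), LOAD_FRAC(avnrun[0]),
                    LOAD_INT(avnrun[1]), LOAD_FRAC(avnrun[1]),
                    LOAD_INT(avnrun[2]), LOAD_FRAC(avnrun[2]));
    }
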
/linux/arch/powerpc/perf/
power9-pmu.c
174 GENERIC_EVENT_ATTR(mem-loads, MEM_LOADS);
178 CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
182 CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
185 CACHE_EVENT_ATTR(LLC-loads, PM_DATA_FROM_L3);
188 CACHE_EVENT_ATTR(branch-loads, PM_BR_CMPL);
power8-pmu.c
134 CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
139 CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
143 CACHE_EVENT_ATTR(LLC-loads, PM_DATA_FROM_L3);
149 CACHE_EVENT_ATTR(branch-loads, PM_BRU_FIN);
power10-pmu.c
127 GENERIC_EVENT_ATTR(mem-loads, MEM_LOADS);
134 CACHE_EVENT_ATTR(L1-dcache-loads, PM_LD_REF_L1);
138 CACHE_EVENT_ATTR(L1-icache-loads, PM_INST_FROM_L1);
141 CACHE_EVENT_ATTR(LLC-loads, PM_DATA_FROM_L3);
146 CACHE_EVENT_ATTR(branch-loads, PM_BR_CMPL);
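
These GENERIC_EVENT_ATTR()/CACHE_EVENT_ATTR() tables map generic perf event names such as L1-dcache-loads onto raw POWER PMU codes. A hedged userspace sketch of counting that generic event through perf_event_open(2); there is no glibc wrapper, hence the raw syscall(), and error handling is kept minimal.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <linux/perf_event.h>

    int main(void)
    {
            struct perf_event_attr attr;
            long long count;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.type = PERF_TYPE_HW_CACHE;
            attr.size = sizeof(attr);
            /* encode L1-dcache-loads: cache id | op << 8 | result << 16 */
            attr.config = PERF_COUNT_HW_CACHE_L1D |
                          (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                          (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16);
            attr.disabled = 1;

            fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0)
                    return 1;

            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
            /* ... workload under measurement ... */
            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
            if (read(fd, &count, sizeof(count)) == sizeof(count))
                    printf("L1-dcache-loads: %lld\n", count);
            close(fd);
            return 0;
    }
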
/linux/arch/alpha/lib/
ev6-copy_user.S
64 EXI( ldbu $1,0($17) ) # .. .. .. L : Keep loads separate from stores
116 EXI ( ldbu $2,0($17) ) # .. .. .. L : No loads in the same quad
203 EXI ( ldbu $2,0($17) ) # .. .. .. L : No loads in the same quad
/linux/tools/testing/selftests/kvm/x86_64/
pmu_event_filter_test.c
56 uint64_t loads; member
421 const uint64_t loads = rdmsr(msr_base + 0); in masked_events_guest_test() local
432 pmc_results.loads = rdmsr(msr_base + 0) - loads; in masked_events_guest_test()
620 TEST_ASSERT(bool_eq(pmc_results.loads, test->flags & ALLOW_LOADS) && in run_masked_events_tests()
625 test->msg, pmc_results.loads, pmc_results.stores, in run_masked_events_tests()
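
As lines 421 and 432 show, the test counts loads by snapshotting the PMC via rdmsr() before and after the measured section and storing the difference. A condensed sketch of that pattern; run_workload() is a hypothetical stand-in for the measured code.

    uint64_t loads_before = rdmsr(msr_base + 0);    /* snapshot loads PMC */

    run_workload();                                 /* hypothetical measured section */

    /* delta = load events that occurred during the workload */
    pmc_results.loads = rdmsr(msr_base + 0) - loads_before;
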
/linux/tools/net/ynl/
cli.py
73 attrs = json.loads(args.json_text)
101 ops = [ (item[0], json.loads(item[1]), args.flags or []) for item in args.multi ]
/linux/scripts/atomic/kerneldoc/
read
6 * Atomically loads the value of @v with ${desc_order} ordering.
/linux/include/uapi/linux/
sysinfo.h
10 __kernel_ulong_t loads[3]; /* 1, 5, and 15 minute load averages */ member
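
The same loads[3] member is visible to userspace through sysinfo(2); the values are fixed point, scaled by 1 << SI_LOAD_SHIFT (65536). A minimal sketch:

    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void)
    {
            struct sysinfo si;

            if (sysinfo(&si) != 0)
                    return 1;
            /* loads[] is scaled by 1 << SI_LOAD_SHIFT == 65536 */
            printf("load averages: %.2f %.2f %.2f\n",
                   si.loads[0] / 65536.0,
                   si.loads[1] / 65536.0,
                   si.loads[2] / 65536.0);
            return 0;
    }
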
/linux/Documentation/arch/x86/
tsx_async_abort.rst
13 case certain loads may speculatively pass invalid data to dependent operations
15 Synchronization Extensions (TSX) transaction. This includes loads with no
16 fault or assist condition. Such loads may speculatively expose stale data from
/linux/tools/perf/Documentation/
perf-mem.txt
19 right set of options to display a memory access profile. By default, loads
20 and stores are sampled. Use the -t option to limit to loads or stores.
70 Specify desired latency for loads event. Supported on Intel and Arm64
/linux/include/linux/sched/
loadavg.h
16 extern void get_avenrun(unsigned long *loads, unsigned long offset, int shift);
/linux/Documentation/core-api/
refcount-vs-atomic.rst
41 A strong (full) memory ordering guarantees that all prior loads and
49 A RELEASE memory ordering guarantees that all prior loads and
57 An ACQUIRE memory ordering guarantees that all post loads and
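
An illustrative sketch of the ACQUIRE/RELEASE guarantees quoted above, using the kernel's smp_store_release()/smp_load_acquire(); data, ready, and consume() are hypothetical names.

    int data, ready;

    /* producer: all prior loads and stores complete before the release */
    data = 42;
    smp_store_release(&ready, 1);

    /* consumer: later loads and stores are ordered after the acquire */
    if (smp_load_acquire(&ready))
            consume(data);          /* guaranteed to observe data == 42 */
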
/linux/Documentation/tee/
amd-tee.rst
55 * TEE_CMD_ID_LOAD_TA - loads a Trusted Application (TA) binary into
72 * open_session - loads the TA binary and opens session with loaded TA.
/linux/Documentation/
memory-barriers.txt
177 perceived by the loads made by another CPU in the same order as the stores were
246 (*) Overlapping loads and stores within a particular CPU will appear to be
274 (*) It _must_not_ be assumed that independent loads and stores will be issued
368 deferral and combination of memory operations; speculative loads; speculative
387 to have any effect on loads.
405 case where two loads are performed such that the second depends on the
412 loads only; it is not required to have any effect on stores, independent
413 loads or overlapping loads.
421 that touched by the load will be perceptible to any loads issued after
438 dependency barriers. Nowadays, APIs for marking loads from shared
[all …]
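
For the "second load depends on the first" case mentioned at line 405, a marked pointer load heads an address dependency, so the dereference cannot be reordered before it. A sketch with hypothetical variables gp and q:

    struct foo *p;

    p = READ_ONCE(gp);              /* first load: the pointer */
    if (p)
            q = READ_ONCE(p->a);    /* second load depends on the first */
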
/linux/tools/memory-model/Documentation/
control-dependencies.txt
42 fuse the load from "a" with other loads. Without the WRITE_ONCE(),
219 (*) Control dependencies can order prior loads against later stores.
221 Not prior loads against later loads, nor prior stores against
224 stores and later loads, smp_mb().
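
A minimal control-dependency sketch matching the excerpt: the READ_ONCE() load from "a" is ordered before the dependent WRITE_ONCE() store to "b". As line 219 notes, this provides prior-load-to-later-store ordering only; anything stronger needs smp_mb().

    q = READ_ONCE(a);
    if (q)
            WRITE_ONCE(b, 1);       /* ordered after the load from "a" */
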
recipes.txt
46 tearing, load/store fusing, and invented loads and stores.
204 and another CPU execute a pair of loads from this same pair of variables,
311 smp_rmb() macro orders prior loads against later loads. Therefore, if
354 second, while another CPU loads from the second variable and then stores
475 that one CPU first stores to one variable and then loads from a second,
476 while another CPU stores to the second variable and then loads from the
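
The message-passing recipe the excerpt describes, sketched with the barriers paired: the writer's smp_wmb() orders the two stores, the reader's smp_rmb() orders the two loads, so a reader that sees flag == 1 also sees x == 1. x and flag are hypothetical shared variables.

    /* CPU 0 */
    WRITE_ONCE(x, 1);
    smp_wmb();                      /* order the stores */
    WRITE_ONCE(flag, 1);

    /* CPU 1 */
    r1 = READ_ONCE(flag);
    smp_rmb();                      /* order the loads */
    r2 = READ_ONCE(x);              /* r1 == 1 implies r2 == 1 */
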
explanation.txt
79 for the loads, the model will predict whether it is possible for the
80 code to run in such a way that the loads will indeed obtain the
142 shared memory locations and another CPU loads from those locations in
154 A memory model will predict what values P1 might obtain for its loads
197 Since r1 = 1, P0 must store 1 to flag before P1 loads 1 from
198 it, as loads can obtain values only from earlier stores.
200 P1 loads from flag before loading from buf, since CPUs execute
223 each CPU stores to its own shared location and then loads from the
272 X: P1 loads 1 from flag executes before
273 Y: P1 loads 0 from buf executes before
[all …]
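
The buf/flag test the excerpt walks through, reconstructed in litmus-test form; whether the outcome r1 == 1 && r2 == 0 is possible is exactly what the quoted reasoning explores.

    int buf = 0, flag = 0;

    /* P0 */
    WRITE_ONCE(buf, 1);
    WRITE_ONCE(flag, 1);

    /* P1 */
    r1 = READ_ONCE(flag);
    if (r1)
            r2 = READ_ONCE(buf);
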
access-marking.txt
44 using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
70 1. Data-racy loads from shared variables whose values are used only
101 In theory, plain C-language loads can also be used for this use case.
125 In theory, plain C-language loads can also be used for this use case.
136 that data_race() loads are subject to load fusing, which can result in
146 In theory, plain C-language loads can also be used for this use case.
189 5. Any other loads for which there is not supposed to be a concurrent
193 loads nor concurrent stores to that same variable.
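
A sketch of the two cases the excerpt distinguishes: data_race() for loads whose values feed only diagnostics (line 70), READ_ONCE() when the loaded value affects program logic. x, limit, and shrink() are hypothetical names.

    /* diagnostic-only load: racy by design, annotated for KCSAN */
    pr_info("x = %d\n", data_race(x));

    /* value feeds real logic: use a marked load instead */
    if (READ_ONCE(x) > limit)
            shrink();
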
/linux/kernel/debug/kdb/
kdb_main.c
2476 val->loads[0] = avenrun[0]; in kdb_sysinfo()
2477 val->loads[1] = avenrun[1]; in kdb_sysinfo()
2478 val->loads[2] = avenrun[2]; in kdb_sysinfo()
2515 LOAD_INT(val.loads[0]), LOAD_FRAC(val.loads[0]), in kdb_summary()
2516 LOAD_INT(val.loads[1]), LOAD_FRAC(val.loads[1]), in kdb_summary()
2517 LOAD_INT(val.loads[2]), LOAD_FRAC(val.loads[2])); in kdb_summary()
/linux/arch/mips/kernel/
mips-r2-to-r6-emul.c
1274 MIPS_R2_STATS(loads); in mipsr2_decoder()
1348 MIPS_R2_STATS(loads); in mipsr2_decoder()
1608 MIPS_R2_STATS(loads); in mipsr2_decoder()
1727 MIPS_R2_STATS(loads); in mipsr2_decoder()
2267 (unsigned long)__this_cpu_read(mipsr2emustats.loads), in mipsr2_emul_show()
2268 (unsigned long)__this_cpu_read(mipsr2bdemustats.loads)); in mipsr2_emul_show()
2324 __this_cpu_write((mipsr2emustats).loads, 0); in mipsr2_clear_show()
2325 __this_cpu_write((mipsr2bdemustats).loads, 0); in mipsr2_clear_show()
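
A sketch of the per-CPU statistics pattern visible above, assuming MIPS_R2_STATS() expands to roughly a per-CPU increment; the struct and field match mips-r2-to-r6-emul.h below, and s is a hypothetical seq_file for the debugfs summary.

    struct mips_r2_emulator_stats {
            u64 loads;
            /* ... */
    };
    DEFINE_PER_CPU(struct mips_r2_emulator_stats, mipsr2emustats);

    /* emulation fast path: bump this CPU's counter */
    __this_cpu_inc(mipsr2emustats.loads);

    /* debugfs reporting: read each counter back */
    seq_printf(s, "%lu\n",
               (unsigned long)__this_cpu_read(mipsr2emustats.loads));
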
/linux/arch/mips/include/asm/
mips-r2-to-r6-emul.h
22 u64 loads; member
fpu_emulator.h
26 unsigned long loads; member
/linux/drivers/tee/optee/
Kconfig
17 This loads the BL32 image for OP-TEE as firmware when the driver is
