| /linux/tools/perf/tests/shell/lib/ |
| perf_metric_validation.py | 13 self.workloads = [wl] # multiple workloads possible 24 … \tRelationship rule description: \'{5}\'".format(self.metric, self.collectedValue, self.workloads, 28 \tworkload(s): {1}".format(self.metric, self.workloads) 33 .format(self.metric, self.collectedValue, self.workloads, 49 self.workloads = [x for x in workload.split(",") if x] 50 self.wlidx = 0 # idx of current workloads 204 … [TestError([m], self.workloads[self.wlidx], negmetric[m], 0) for m in negmetric.keys()]) 279 …self.errlist.append(TestError([m['Name'] for m in rule['Metrics']], self.workloads[self.wlidx], [], 282 …self.errlist.append(TestError([m['Name'] for m in rule['Metrics']], self.workloads[self.wlidx], [v… 334 self.errlist.extend([TestError([name], self.workloads[self.wlidx], val, [all …]
|
| /linux/Documentation/driver-api/ |
| dma-buf.rst | 292 randomly hangs workloads until the timeout kicks in. Workloads, which from 305 workloads. This also means no implicit fencing for shared buffers in these 327 faults on GPUs are limited to pure compute workloads. 343 - Compute workloads can always be preempted, even when a page fault is pending 346 - DMA fence workloads and workloads which need page fault handling have 349 reservations for DMA fence workloads. 352 hardware resources for DMA fence workloads when they are in-flight. This must 357 all workloads must be flushed from the GPU when switching between jobs 361 made visible anywhere in the system, all compute workloads must be preempted 372 Note that workloads that run on independent hardware like copy engines or other [all …]
|
| /linux/Documentation/timers/ |
| no_hz.rst | 26 workloads, you will normally -not- want this option. 39 right approach, for example, in heavy workloads with lots of tasks 42 hundreds of microseconds). For these types of workloads, scheduling 56 are running light workloads, you should therefore read the following 118 computationally intensive short-iteration workloads: If any CPU is 228 aggressive real-time workloads, which have the option of disabling 230 some workloads will no doubt want to use adaptive ticks to 232 options for these workloads: 252 workloads, which have few such transitions. Careful benchmarking 253 will be required to determine whether or not other workloads
|
| /linux/Documentation/gpu/ |
| drm-compute.rst | 2 Long running workloads and compute 5 Long running workloads (compute) are workloads that will not complete in 10 7 This means that other techniques need to be used to manage those workloads,
|
| drm-vm-bind-async.rst | 103 exec functions. For long-running workloads, such pipelining of a bind 109 operations for long-running workloads will not allow for pipelining 110 anyway since long-running workloads don't allow for dma-fences as 121 deeply pipelined behind other VM_BIND operations and workloads
|
| /linux/Documentation/mm/damon/ |
| index.rst | 16 of the size of target workloads). 21 their workloads can write personalized applications for better understanding 22 and optimizations of their workloads and systems.
|
| /linux/drivers/accel/qaic/ |
| Kconfig | 14 designed to accelerate Deep Learning inference workloads. 17 for users to submit workloads to the devices.
|
| /linux/security/ |
| Kconfig.hardening | 105 sees a 1% slowdown, other systems and workloads may vary and you 149 for your workloads. 170 workloads have measured as high as 7%. 188 synthetic workloads have measured as high as 8%. 208 workloads. Image size growth depends on architecture, and should
|
| /linux/drivers/cpuidle/ |
| Kconfig | 33 Some workloads benefit from using it and it generally should be safe 45 Some virtualized workloads benefit from using it.
|
| /linux/Documentation/networking/device_drivers/ethernet/intel/ |
| idpf.rst | 81 Driver defaults are meant to fit a wide variety of workloads, but if further 89 is tuned for general workloads. The user can customize the interrupt rate 90 control for specific workloads, via ethtool, adjusting the number of
|
| /linux/Documentation/accounting/ |
| psi.rst | 10 When CPU, memory or IO devices are contended, workloads experience 19 such resource crunches and the time impact it has on complex workloads 23 scarcity aids users in sizing workloads to hardware--or provisioning
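The psi.rst hits above describe pressure stall information: per-resource accounting of how long workloads were stalled waiting for CPU, memory, or IO. As a rough, hedged illustration of consuming that interface, the C sketch below simply dumps the records exposed under /proc/pressure/; the "some"/"full" line format with avg10/avg60/avg300/total fields is the one documented in psi.rst, and error handling is kept minimal.

    #include <stdio.h>

    /*
     * Minimal sketch: print the PSI records for CPU, memory and IO.
     * Each file holds lines like
     *   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
     *   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
     * i.e. the share of wall time in which at least one task ("some") or
     * all non-idle tasks ("full") were stalled on that resource.
     */
    int main(void)
    {
        const char *files[] = {
            "/proc/pressure/cpu",
            "/proc/pressure/memory",
            "/proc/pressure/io",
        };

        for (unsigned int i = 0; i < sizeof(files) / sizeof(files[0]); i++) {
            char line[256];
            FILE *f = fopen(files[i], "r");

            if (!f) {
                perror(files[i]);
                continue;
            }
            printf("%s:\n", files[i]);
            while (fgets(line, sizeof(line), f))
                printf("  %s", line);
            fclose(f);
        }
        return 0;
    }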
|
| /linux/Documentation/admin-guide/pm/ |
| intel_uncore_frequency_scaling.rst | 23 Users may have some latency sensitive workloads where they do not want any 24 change to uncore frequency. Also, users may have workloads which require 133 latency sensitive workloads further tuning can be done by SW to
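The intel_uncore_frequency_scaling.rst hits mention pinning the uncore frequency for latency-sensitive workloads. A hedged sketch of doing that from userspace follows; the sysfs directory name (package_00_die_00) and the 1.8 GHz target are examples only, the exact instances vary by platform, and the path assumes a kernel with the intel-uncore-frequency driver present.

    #include <stdio.h>

    /*
     * Sketch: pin the uncore frequency of one package/die by writing the same
     * value to min_freq_khz and max_freq_khz. The directory name below is an
     * example; enumerate /sys/devices/system/cpu/intel_uncore_frequency/ to
     * find the instances on a given system. Depending on the current limits,
     * the two writes may need to be done in the opposite order.
     */
    static int write_khz(const char *path, unsigned int khz)
    {
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            return -1;
        }
        fprintf(f, "%u\n", khz);
        return fclose(f);
    }

    int main(void)
    {
        const char *base =
            "/sys/devices/system/cpu/intel_uncore_frequency/package_00_die_00";
        unsigned int khz = 1800000; /* example target: 1.8 GHz */
        char path[256];

        snprintf(path, sizeof(path), "%s/max_freq_khz", base);
        if (write_khz(path, khz))
            return 1;
        snprintf(path, sizeof(path), "%s/min_freq_khz", base);
        if (write_khz(path, khz))
            return 1;
        return 0;
    }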
|
| /linux/drivers/gpu/drm/xe/abi/ |
| guc_klvs_abi.h | 163 * workloads from different address spaces. By default, the GuC prioritizes 164 * RCS submissions over CCS ones, which can lead to CCS workloads being 269 * Virtualization config enabled for heavier workloads like AI/ML. 290 * on a 1PF+1VF Virtualization config enabled for heavier workloads like 325 * workloads.
|
| /linux/fs/squashfs/ |
| Kconfig | 106 poor performance on parallel I/O workloads when using multiple CPU 121 poor performance on parallel I/O workloads when using multiple CPU 159 reducing performance in workloads like fio-based benchmarks.
|
| /linux/Documentation/accel/qaic/ |
| qaic.rst | 158 and detach slice calls allows userspace to use a BO with multiple workloads. 164 client, and multiple clients can each consume one or more DBCs. Workloads 170 workloads. Attempts to access resources assigned to other clients will be
|
| /linux/tools/perf/tests/ |
| builtin-test.c | 146 static struct test_workload *workloads[] = { variable 158 for (unsigned i = 0; i < ARRAY_SIZE(workloads) && ({ workload = workloads[i]; 1; }); i++) 794 …OPT_STRING('w', "workload", &workload, "work", "workload to run for testing, use '--list-workloads… in cmd_test() 795 …OPT_BOOLEAN(0, "list-workloads", &list_workloads, "List the available builtin workloads to use wit… in cmd_test()
|
| tests.h | 215 * Define test workloads to be used in test suites. 233 /* The list of test workloads */
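The builtin-test.c and tests.h hits above show how perf registers built-in test workloads: a static table of descriptors that 'perf test -w <name>' scans by name (with '--list-workloads' printing the table). The stand-alone C sketch below mirrors only that lookup pattern; struct test_workload here is a simplified stand-in, and noploop/thloop are example entries rather than the actual perf definitions.

    #include <stdio.h>
    #include <string.h>

    /*
     * Simplified stand-in for perf's struct test_workload: a name plus an
     * entry point receiving the remaining command-line arguments.
     */
    struct test_workload {
        const char *name;
        int (*func)(int argc, const char **argv);
    };

    static int noploop(int argc, const char **argv) { (void)argc; (void)argv; return 0; }
    static int thloop(int argc, const char **argv)  { (void)argc; (void)argv; return 0; }

    /* Analogous to the workloads[] table seen in the builtin-test.c hit. */
    static const struct test_workload workloads[] = {
        { "noploop", noploop },
        { "thloop",  thloop  },
    };

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    /* Name-based dispatch, as done for 'perf test -w <name>'. */
    static int run_workload(const char *name, int argc, const char **argv)
    {
        for (unsigned int i = 0; i < ARRAY_SIZE(workloads); i++) {
            if (!strcmp(name, workloads[i].name))
                return workloads[i].func(argc, argv);
        }
        fprintf(stderr, "no workload named '%s'\n", name);
        return -1;
    }

    int main(int argc, const char **argv)
    {
        const char *name = argc > 1 ? argv[1] : "noploop";

        return run_workload(name, argc > 2 ? argc - 2 : 0,
                            argc > 2 ? argv + 2 : NULL);
    }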
|
| /linux/Documentation/driver-api/md/ |
| raid5-cache.rst | 58 completely avoid the overhead, so it's very helpful for some workloads. A 74 mode depending on the workloads. It's recommended to use a cache disk with at
|
| /linux/drivers/crypto/cavium/nitrox/ |
| Kconfig | 18 for accelerating crypto workloads.
|
| /linux/Documentation/admin-guide/hw-vuln/ |
| core-scheduling.rst | 9 workloads may benefit from running on the same core as they don't need the same 24 world workloads. In theory, core scheduling aims to perform at least as well as 30 total number of CPUs. Please measure the performance of your workloads always.
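The core-scheduling.rst hits concern letting mutually trusting workloads share an SMT core while excluding everything else. For illustration, here is a hedged sketch that gives the calling process its own core-scheduling cookie via prctl(PR_SCHED_CORE, ...); the fallback defines mirror include/uapi/linux/prctl.h and are only needed when the installed userspace headers predate the interface, and the call only succeeds on a kernel built with CONFIG_SCHED_CORE.

    #include <stdio.h>
    #include <sys/prctl.h>

    /*
     * Fallbacks for older userspace headers; the values mirror
     * include/uapi/linux/prctl.h on kernels that support core scheduling.
     */
    #ifndef PR_SCHED_CORE
    #define PR_SCHED_CORE                    62
    #define PR_SCHED_CORE_CREATE             1
    #define PR_SCHED_CORE_SCOPE_THREAD_GROUP 1
    #endif

    int main(void)
    {
        /*
         * Create a new core-scheduling cookie covering all threads of the
         * calling process (pid 0 means "self"). From then on these threads
         * only share an SMT core with tasks carrying the same cookie.
         */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                  PR_SCHED_CORE_SCOPE_THREAD_GROUP, 0)) {
            perror("prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE)");
            return 1;
        }
        printf("core scheduling cookie created for this process\n");
        return 0;
    }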
|
| /linux/drivers/infiniband/hw/mana/ |
| Kconfig | 8 for workloads (e.g. DPDK, MPI, etc.) that use RDMA verbs to directly
|
| /linux/Documentation/arch/x86/ |
| sgx.rst | 249 SGX workloads (or just any new workloads), and migrate all valuable 250 workloads. Although a machine reboot can recover all EPC memory, the bug
|
| /linux/tools/perf/pmu-events/arch/x86/jaketown/ |
| uncore-cache.json | 1310 …eads. This count only tracks the regular credits. Common high bandwidth workloads should be able t… 1320 …eads. This count only tracks the regular credits. Common high bandwidth workloads should be able t… 1330 …eads. This count only tracks the regular credits. Common high bandwidth workloads should be able t… 1340 …eads. This count only tracks the regular credits. Common high bandwidth workloads should be able t… 1350 …l' credits. This statistic is generally not interesting for general IA workloads, but may be of i… 1360 …l' credits. This statistic is generally not interesting for general IA workloads, but may be of i… 1370 …l' credits. This statistic is generally not interesting for general IA workloads, but may be of i… 1380 …l' credits. This statistic is generally not interesting for general IA workloads, but may be of i… 1903 …ites. This count only tracks the regular credits. Common high bandwidth workloads should be able t… 1913 …ites. This count only tracks the regular credits. Common high bandwidth workloads should be able t… [all …]
|
| /linux/drivers/gpu/drm/i915/gt/uc/ |
| intel_huc.c | 36 * a first auth for clear-media workloads via GuC and a second one for all 37 * workloads via GSC. 42 * impact power usage and/or performance of media workloads, depending on the 434 return partial ? "clear media" : "all workloads"; in auth_mode_string()
|
| /linux/Documentation/block/ |
| bfq-iosched.rst | 87 background workloads are being executed: 122 sequential workloads considered in our tests. With random workloads, 123 and with all the workloads on flash-based devices, BFQ achieves, 142 possibly heavy workloads are being served, BFQ guarantees:
|