Searched full:performance (Results 1 – 25 of 1788) sorted by relevance

/linux/Documentation/devicetree/bindings/dvfs/
performance-domain.yaml
4 $id: http://devicetree.org/schemas/dvfs/performance-domain.yaml#
7 title: Generic performance domains
13 This binding is intended for performance management of groups of devices or
14 CPUs that run in the same performance domain. Performance domains must not
15 be confused with power domains. A performance domain is defined by a set
16 of devices that always have to run at the same performance level. For a given
17 performance domain, there is a single point of control that affects all the
18 devices in the domain, making it impossible to set the performance level of
21 have a common frequency control, is said to be in the same performance
24 This device tree binding can be used to bind performance domain consumer
[all …]
/linux/drivers/acpi/
processor_perflib.c
26 #define ACPI_PROCESSOR_FILE_PERFORMANCE "performance"
79 ppc >= pr->performance->state_count) in acpi_processor_get_platform_limit()
97 qos_value = pr->performance->states[index].core_frequency * 1000; in acpi_processor_get_platform_limit()
113 * 0: success. OSPM is now using the performance state specified.
127 if (ignore_ppc || !pr->performance) { in acpi_processor_ppc_has_changed()
157 if (!pr || !pr->performance || !pr->performance->state_count) in acpi_processor_get_bios_limit()
160 *limit = pr->performance->states[pr->performance_platform_limit]. in acpi_processor_get_bios_limit()
200 if (!pr->performance) in acpi_processor_ppc_init()
259 memcpy(&pr->performance->control_register, obj.buffer.pointer, in acpi_processor_get_performance_control()
275 memcpy(&pr->performance->status_register, obj.buffer.pointer, in acpi_processor_get_performance_control()
[all …]
/linux/Documentation/admin-guide/acpi/
cppc_sysfs.rst
4 Collaborative Processor Performance Control (CPPC)
13 performance of a logical processor on a contiguous and abstract performance
14 scale. CPPC exposes a set of registers to describe abstract performance scale,
15 to request performance levels and to measure per-cpu delivered performance.
40 * highest_perf : Highest performance of this processor (abstract scale).
41 * nominal_perf : Highest sustained performance of this processor
43 * lowest_nonlinear_perf : Lowest performance of this processor with nonlinear
45 * lowest_perf : Lowest performance of this processor (abstract scale).
49 The above frequencies should only be used to report processor performance in
53 * feedback_ctrs : Includes both Reference and delivered performance counter.
[all …]
fan_performance_states.rst
4 ACPI Fan Performance States
10 These attributes list properties of fan performance states.
37 where each of the "state*" files represents one performance state of the fan
47 to this performance state (0-9).
71 Here user can look at fan performance states for a reference speed (speed_rpm)
74 not defined in the performance states.
80 This sysfs attribute is presented in the same directory as performance states.
82 ACPI Fan Performance Feedback
90 in the same directory as performance states.
/linux/Documentation/netlink/specs/
dev-energymodel.yaml
23 The performance state is inefficient. There is in this perf-domain,
24 another performance state with a higher frequency but a lower or
48 Information on a single performance domain.
57 A unique ID number for each performance domain.
62 Bitmask of performance domain flags.
69 CPUs that belong to this performance domain.
73 Performance states table.
79 A unique ID number for each performance domain.
88 Performance state of a performance domain.
94 name: performance
[all …]
/linux/tools/power/x86/x86_energy_perf_policy/
x86_energy_perf_policy.8
5 x86_energy_perf_policy \- Manage Energy vs. Performance Policy
20 .RB "value: # | default | performance | balance-performance | balance-power | power"
23 displays and updates energy-performance policy settings specific to
27 While \fBx86_energy_perf_policy\fP can manage energy-performance policy
35 and Processor Performance States (P-states).
38 Further, it allows the OS to influence energy/performance trade-offs where there
89 Set a policy with a normal balance between performance and energy efficiency.
90 The processor will tolerate minor performance compromise
95 .I performance
96 Set a policy for maximum performance,
[all …]
/linux/arch/powerpc/include/asm/
reg_fsl_emb.h
3 * Contains register definitions for the Freescale Embedded Performance
13 /* Performance Monitor Registers */
37 /* Freescale Book E Performance Monitor APU Registers */
38 #define PMRN_PMC0 0x010 /* Performance Monitor Counter 0 */
39 #define PMRN_PMC1 0x011 /* Performance Monitor Counter 1 */
40 #define PMRN_PMC2 0x012 /* Performance Monitor Counter 2 */
41 #define PMRN_PMC3 0x013 /* Performance Monitor Counter 3 */
42 #define PMRN_PMC4 0x014 /* Performance Monitor Counter 4 */
43 #define PMRN_PMC5 0x015 /* Performance Monitor Counter 5 */
84 #define PMRN_UPMC0 0x000 /* User Performance Monitor Counter 0 */
[all …]
/linux/Documentation/admin-guide/
perf-security.rst
9 Usage of Performance Counters for Linux (perf_events) [1]_ , [2]_ , [3]_
14 depends on the nature of data that perf_events performance monitoring
15 units (PMU) [2]_ and Perf collect and expose for performance analysis.
16 Collected system and performance data may be split into several
21 its topology, used kernel and Perf versions, performance monitoring
30 faults, CPU migrations), architectural hardware performance counters
46 So, perf_events performance monitoring and observability operations are
56 all kernel security permission checks so perf_events performance
70 as privileged processes with respect to perf_events performance
73 privilege [13]_ (POSIX 1003.1e: 2.2.2.39) for performance monitoring and
[all …]
/linux/Documentation/scheduler/
sched-util-clamp.rst
11 feature that allows user space to help in managing the performance requirement
16 performance requirements and restrictions of the tasks, thus it helps the
23 system run at a certain performance point.
26 performance constraints. It consists of two tunables:
31 These two bounds will ensure a task will operate within this performance range
36 performance point to operate at to deliver the desired user experience. Or one
38 much resources and should not go above a specific performance point. Viewing
39 the uclamp values as performance points rather than utilization is a better
44 performance point required by its display pipeline to ensure no frame is
58 resources background tasks are consuming by capping the performance point they
[all …]
/linux/arch/x86/events/
Kconfig
2 menu "Performance monitoring"
5 tristate "Intel uncore performance events"
9 Include support for Intel uncore performance events. These are
13 tristate "Intel/AMD rapl performance events"
17 Include support for Intel and AMD rapl performance events for power
21 tristate "Intel cstate performance events"
25 Include support for Intel cstate performance events for power
38 tristate "AMD Uncore performance events"
42 Include support for AMD uncore performance events for use with
/linux/Documentation/devicetree/bindings/cpufreq/
cpufreq-mediatek-hw.yaml
29 "#performance-domain-cells":
31 Number of cells in a performance domain specifier.
33 performance domains.
39 - "#performance-domain-cells"
53 performance-domains = <&performance 0>;
64 performance: performance-controller@11bc00 {
68 #performance-domain-cells = <1>;
/linux/Documentation/admin-guide/perf/
hns3-pmu.rst
2 HNS3 Performance Monitoring Unit (PMU)
5 HNS3 (HiSilicon network system 3) Performance Monitoring Unit (PMU) is an
6 End Point device to collect performance statistics of HiSilicon SoC NIC.
9 HNS3 PMU supports collection of performance statistics such as bandwidth,
48 Each performance statistic has a pair of events to get two values to
49 calculate real performance data in userspace.
57 computation to calculate real performance data is:::
82 PMU collect performance statistics for all HNS3 PCIe functions of IO DIE.
89 PMU collect performance statistic of one whole physical port. The port id
98 PMU collect performance statistic of one tc of physical port. The port id
[all …]
/linux/Documentation/ABI/testing/
sysfs-bus-event_source-devices-hv_gpci
100 runtime by setting "Enable Performance Information Collection" option.
107 * "-EPERM" : Partition is not permitted to retrieve performance information,
108 required to set "Enable Performance Information Collection" option.
132 runtime by setting "Enable Performance Information Collection" option.
139 * "-EPERM" : Partition is not permitted to retrieve performance information,
140 required to set "Enable Performance Information Collection" option.
164 runtime by setting "Enable Performance Information Collection" option.
171 * "-EPERM" : Partition is not permitted to retrieve performance information,
172 required to set "Enable Performance Information Collection" option.
196 runtime by setting "Enable Performance Information Collection" option.
[all …]
sysfs-class-platform-profile
21 and performance
22 balanced-performance Balance between performance and low
24 towards performance
25 performance High performance operation
26 max-power Higher performance operation that may exceed
/linux/kernel/power/
energy_model.c
23 * Mutex serializing the registrations of performance domains and letting
29 * Manage performance domains with IDs. One can iterate the performance domains
77 DEFINE_EM_DBG_SHOW(performance, performance);
107 debugfs_create_file("performance", 0444, d, &em_dbg[i], in em_debug_create_ps()
147 /* Create the directory of the performance domain */ in em_debug_create_pd()
164 /* Create a sub-directory for each performance state */ in em_debug_create_pd()
207 * @pd : EM performance domain for which this must be done
242 * Calculate the performance value for each frequency with in em_init_performance()
249 table[i].performance = div64_u64(max_cap * table[i].frequency, in em_init_performance()
264 /* Compute the cost of each performance state. */ in em_compute_costs()
[all …]
/linux/Documentation/arch/x86/
amd-hfi.rst
17 power capabilities: performance-oriented *classic cores* and power-efficient
27 threads to the classic cores. From a performance perspective, sending
32 performance impact.
37 The ``amd_hfi`` driver delivers the operating system a performance and energy
45 describes an efficiency and performance ranking for each classification.
48 represent thread performance/power characteristics that may benefit from
74 about the performance and energy efficiency of each CPU in the system. Each
76 performance value indicates higher performance capability, and a higher
77 efficiency value indicates more efficiency. Energy efficiency and performance
83 a reordering of the performance and efficiency ranking. Table updates happen
[all …]
/linux/tools/power/cpupower/bench/
README-BENCH
7 - Identify worst case performance loss when doing dynamic frequency
12 - Identify cpufreq related performance regressions between kernels
18 - Power saving related regressions (In fact as better the performance
28 For that purpose, it compares the performance governor to a configured
56 takes on this machine and needs to be run in a loop using the performance
58 Then the above test runs are processed using the performance governor
61 on full performance and you get the overall performance loss.
80 trigger of the cpufreq-bench, you will see no performance loss (compare with
84 will always see 50% loads and you get worst performance impact never
/linux/tools/power/cpupower/man/
cpupower-set.1
27 its policy for the relative importance of performance versus energy savings to
31 performance and 15 is maximum energy efficiency.
34 when it must select trade-offs between performance and
37 This policy hint does not supersede Processor Performance states
51 Setting the performance bias value on one CPU can modify the setting on
62 Sets the energy performance policy preference on supported Intel or AMD
68 default performance balance_performance balance_power power
/linux/drivers/perf/hisilicon/
Kconfig
6 Support for HiSilicon SoC L3 Cache performance monitor, Hydra Home
7 Agent performance monitor and DDR Controller performance monitor.
13 Provide support for HiSilicon PCIe performance monitoring unit (PMU)
23 Provide support for HNS3 performance monitoring unit (PMU) RCiEP
/linux/tools/testing/selftests/mm/
test_vmalloc.sh
9 # a) analyse performance of vmalloc allocations;
26 # Static templates for performance, stressing and smoke tests.
74 echo "Run performance tests to evaluate how fast vmalloc allocation is."
99 echo "for deep performance analysis as well as stress testing."
109 echo -n "Usage: $0 [ performance ] | [ stress ] | | [ smoke ] | "
133 echo "# Performance analysis"
134 echo "./${DRIVER}.sh performance"
187 if [[ "$1" = "performance" ]]; then
/linux/Documentation/admin-guide/pm/
intel_uncore_frequency_scaling.rst
17 performance, SoCs have internal algorithms for scaling uncore frequency. These
20 It is possible that users have different expectations of uncore performance and
22 the scaling min/max frequencies via cpufreq sysfs to improve CPU performance.
25 different core and uncore performance at distinct phases and they may want to
27 improve overall performance.
130 The Efficiency Latency Control (ELC) feature improves performance
134 get desired performance.
138 While this may result in the best performance per watt, workload may be
139 expecting higher performance at the expense of power. Consider an
143 target performance.
intel_epb.rst
5 Intel Performance and Energy Bias Hint
16 Intel Performance and Energy Bias Attribute in ``sysfs``
19 The Intel Performance and Energy Bias Hint (EPB) value for a given (logical) CPU
26 a value of 0 corresponds to a hint preference for highest performance
31 with one of the strings: "performance", "balance-performance", "normal",
/linux/Documentation/driver-api/cxl/linux/
access-coordinates.rst
10 A memory region's performance coordinates (latency and bandwidth) are typically
15 the performance coordinates by retrieving data from several components.
19 would be the CXL hostbridge. Using this association, the performance
22 performance coordinates between a CPU and a Generic Port (CXL hostbridge).
24 The :doc:`CDAT <../platform/cdat>` provides the performance coordinates for
28 performance coordinates that are tied to a DSMAD handle and this ties the two
29 table entries together to provide the performance coordinates for each DPA
31 then there would be different performance characteristics for each of those
34 If there's a CXL switch in the topology, then the performance coordinates for the
96 maximizes performance. When asymmetric topology is detected, the calculation
[all …]
/linux/Documentation/admin-guide/mm/
numaperf.rst
2 NUMA Memory Performance
10 as CPU cache coherence, but may have different performance. For example,
14 under different domains, or "nodes", based on locality and performance
36 performance when accessing a given memory target. Each initiator-target
56 nodes' access characteristics share the same performance relative to other
65 NUMA Performance
69 be allocated from based on the node's performance characteristics. If
79 The performance characteristics the kernel provides for the local initiators
103 performance characteristics in order to provide large address space of
129 attributes in order to maximize the performance out of such a setup.
/linux/fs/squashfs/
Kconfig
51 Doing so can significantly improve performance because
89 decompression performance and CPU and memory usage.
106 poor performance on parallel I/O workloads when using multiple CPU
110 using this option may improve overall I/O performance.
121 poor performance on parallel I/O workloads when using multiple CPU
159 reducing performance in workloads like fio-based benchmarks.
165 Enabling this option restores performance to pre-regression levels by
254 This, however, gives poor performance on MTD NAND devices where
259 performance for some file access patterns (e.g. sequential
