/linux/Documentation/hwmon/
  ibmpowernv.rst
    18: 'hwmon' populates the 'sysfs' tree having attribute files, each for a given
    21: All the nodes in the DT appear under "/ibm,opal/sensors" and each valid node in
    45: each OCC. Using this attribute each OCC can be asked to
    58: each OCC. Using this attribute each OCC can be asked to
    69: each OCC. Using this attribute each OCC can be asked to
    80: each OCC. Using this attribute each OCC can be asked to

/linux/drivers/net/ethernet/qlogic/qlcnic/
  qlcnic_dcb.c  (all hits in qlcnic_83xx_dcb_query_cee_param())
    570: struct qlcnic_dcb_param *each;  (local declaration)
    597: each = &mbx_out.type[j];
    599: each->hdr_prio_pfc_map[0] = cmd.rsp.arg[k++];
    600: each->hdr_prio_pfc_map[1] = cmd.rsp.arg[k++];
    601: each->prio_pg_map[0] = cmd.rsp.arg[k++];
    602: each->prio_pg_map[1] = cmd.rsp.arg[k++];
    603: each->pg_bw_map[0] = cmd.rsp.arg[k++];
    604: each->pg_bw_map[1] = cmd.rsp.arg[k++];
    605: each->pg_tsa_map[0] = cmd.rsp.arg[k++];
    606: each->pg_tsa_map[1] = cmd.rsp.arg[k++];
    [all …]

/linux/tools/testing/selftests/firmware/
  settings
    2: # 2 seconds). There are 3 test configs, each done with and without firmware
    3: # present, each with 2 "nowait" functions tested 5 times. Expected time for a
    5: # Additionally, fw_fallback may take 5 seconds for internal timeouts in each
    7: # 10 seconds for each testing config: 120 + 15 + 30

/linux/tools/perf/tests/shell/
  stat_bpf_counters_cgrp.sh
    15: if ! perf stat -a --bpf-counters --for-each-cgroup / true > /dev/null 2>&1; then
    18: perf --no-pager stat -a --bpf-counters --for-each-cgroup / true || true
    51: …check_system_wide_counted_output=$(perf stat -a --bpf-counters --for-each-cgroup ${test_cgroups} -…
    63: …check_cpu_list_counted_output=$(perf stat -C 0,1 --bpf-counters --for-each-cgroup ${test_cgroups} …

/linux/Documentation/devicetree/bindings/phy/
  apm-xgene-phy.txt
    19: Two set of 3-tuple setting for each (up to 3)
    25: Two set of 3-tuple setting for each (up to 3)
    28: gain control. Two set of 3-tuple setting for each
    32: each (up to 3) supported link speed on the host.
    36: 3-tuple setting for each (up to 3) supported link
    40: 3-tuple setting for each (up to 3) supported link
    46: - apm,tx-speed : Tx operating speed. One set of 3-tuple for each

/linux/scripts/
  find-unused-docs.sh
    44: for each in "${files_included[@]}"; do
    45: FILES_INCLUDED[$each]="$each"

/linux/Documentation/core-api/
  protection-keys.rst
    19: Pkeys work by dedicating 4 previously Reserved bits in each page table entry to
    22: Protections for each key are defined with a per-CPU user-accessible register
    24: and Write Disable) for each of 16 keys.
    26: Being a CPU register, PKRU is inherently thread-local, potentially giving each
    37: Pkeys use 3 bits in each page table entry, to encode a "protection key index",
    40: Protections for each key are defined with a per-CPU user-writable system
    42: overlay permissions for each protection key index.
    45: each thread a different set of protections from every other thread.

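    The protection-keys.rst hits above describe the userspace-visible mechanism: a key
    index stored in each page table entry plus a per-thread PKRU register holding
    Access-Disable/Write-Disable bits for 16 keys. A minimal illustrative sketch (not an
    excerpt from that document) using the glibc pkey wrappers, assuming glibc 2.27+ and
    pkey-capable x86 hardware:

        #define _GNU_SOURCE
        #include <sys/mman.h>
        #include <stdio.h>

        int main(void)
        {
                /* Allocate a protection key; 0, 0 = no initial access restrictions. */
                int pkey = pkey_alloc(0, 0);
                if (pkey < 0) {
                        perror("pkey_alloc");
                        return 1;
                }

                void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

                /* Tag the page with the key (the "protection key index" in its PTE). */
                pkey_mprotect(page, 4096, PROT_READ | PROT_WRITE, pkey);

                /* Deny writes through this key for the current thread only. */
                pkey_set(pkey, PKEY_DISABLE_WRITE);
                /* ... a store to 'page' here would now fault with SIGSEGV ... */
                pkey_set(pkey, 0);      /* restore write access */

                pkey_free(pkey);
                return 0;
        }
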
/linux/Documentation/devicetree/bindings/gpio/
  gpio-max3191x.txt
    18: - maxim,modesel-gpios: GPIO pins to configure modesel of each chip.
    20: (if each chip is driven by a separate pin) or 1
    22: - maxim,fault-gpios: GPIO pins to read fault of each chip.
    25: - maxim,db0-gpios: GPIO pins to configure debounce of each chip.
    28: - maxim,db1-gpios: GPIO pins to configure debounce of each chip.

/linux/Documentation/devicetree/bindings/pinctrl/
  pinctrl-bindings.txt
    9: designated client devices. Again, each client device must be represented as a
    16: device is inactive. Hence, each client device can define a set of named
    35: For each client device individually, every pin state is assigned an integer
    36: ID. These numbers start at 0, and are contiguous. For each state ID, a unique
    47: pinctrl-0: List of phandles, each pointing at a pin configuration
    52: from multiple nodes for a single pin controller, each
    65: pinctrl-1: List of phandles, each pointing at a pin configuration
    68: pinctrl-n: List of phandles, each pointing at a pin configuration
  fsl,imx-pinctrl.txt
    5: multiplexing the PAD input/output signals. For each PAD there are up to
    22: Please refer to each fsl,<soc>-pinctrl.txt binding doc for supported SoCs.
    25: - fsl,pins: each entry consists of 6 integers and represents the mux and config
    41: Please refer to each fsl,<soc>-pinctrl,txt binding doc for SoC specific part
    93: User should refer to each SoC spec to set the correct value.

/linux/Documentation/filesystems/nfs/
  pnfs.rst
    6: reference multiple devices, each of which can reference multiple data servers.
    20: We reference the header for the inode pointing to it, across each
    22: LAYOUTCOMMIT), and for each lseg held within.
    34: nfs4_deviceid_cache). The cache itself is referenced across each
    36: the lifetime of each lseg referencing them.
    66: layout types: "files", "objects", "blocks", and "flexfiles". For each

/linux/drivers/pinctrl/sophgo/
  Kconfig
    20: each pin. This driver can also be built as a module called
    31: each pin. This driver can also be built as a module called
    42: each pin. This driver can also be built as a module called
    53: each pin. This driver can also be built as a module called

/linux/Documentation/admin-guide/mm/damon/
  usage.rst
    60: figure, parents-children relations are represented with indentations, each
    61: directory is having ``/`` suffix, and files in each directory are separated by
    121: child directories named ``0`` to ``N-1``. Each directory represents each
    129: In each kdamond directory, two files (``state`` and ``pid``) and one directory
    143: - ``update_schemes_stats``: Update the contents of stats files for each
    147: action tried regions directory for each DAMON-based operation scheme of the
    154: action tried regions directory for each DAMON-based operation scheme of the
    157: ``effective_bytes`` files for each DAMON-based operation scheme of the
    172: ``0`` to ``N-1``. Each directory represents each monitoring context (refer to
    182: In each context directory, two files (``avail_operations`` and ``operations``)
    [all …]

/linux/Documentation/bpf/
  map_cgroup_storage.rst
    127: per-CPU variant will have different memory regions for each CPU for each
    128: storage. The non-per-CPU will have the same memory region for each storage.
    133: multiple attach types, and each attach creates a fresh zeroed storage. The
    136: There is a one-to-one association between the map of each type (per-CPU and
    138: each map can only be used by one BPF program and each BPF program can only use
    139: one storage map of each type. Because of map can only be used by one BPF
    153: However, the BPF program can still only associate with one map of each type

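    The map_cgroup_storage.rst hits above concern BPF_MAP_TYPE_CGROUP_STORAGE semantics:
    one storage map of each type per program, with a fresh zeroed storage per attachment.
    As an illustrative libbpf-style sketch written for this listing rather than quoted
    from that document, a cgroup egress program using such a map could look like:

        // SPDX-License-Identifier: GPL-2.0
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        /* A shared (non-per-CPU) cgroup storage map; its key type is fixed. */
        struct {
                __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
                __type(key, struct bpf_cgroup_storage_key);
                __type(value, __u64);
        } egress_count SEC(".maps");

        SEC("cgroup_skb/egress")
        int count_egress(struct __sk_buff *skb)
        {
                /* Storage for the cgroup/attach type this program is attached to. */
                __u64 *count = bpf_get_local_storage(&egress_count, 0);

                __sync_fetch_and_add(count, 1);
                return 1;       /* allow the packet */
        }

        char LICENSE[] SEC("license") = "GPL";
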
/linux/Documentation/userspace-api/media/v4l/
  ext-ctrls-detect.rst
    37: - The image is divided into a grid, each cell with its own motion
    41: - The image is divided into a grid, each cell with its own region
    55: Sets the motion detection thresholds for each cell in the grid. To
    61: Sets the motion detection region value for each cell in the grid. To

/linux/lib/
  Kconfig.kasan
    75: See Documentation/dev-tools/kasan.rst for details about each mode.
    109: comparison, as it embeds a tag into the top byte of each pointer.
    125: comparison, as it embeds a tag into the top byte of each pointer.
    138: is accessible before each memory access. Slower than KASAN_INLINE, but
    146: each memory access. Faster than KASAN_OUTLINE (gives ~x2 boost for
    213: recorded for each heap block at allocation and free time, and
    214: 8 bytes will be added to each metadata structure that records
    217: In Generic KASAN, each kmalloc-8 and kmalloc-16 object will add
    218: 16 bytes of additional memory consumption, and each kmalloc-32

/linux/Documentation/cpu-freq/
  cpufreq-stats.rst
    22: cpufreq-stats is a driver that provides CPU frequency statistics for each CPU.
    25: in /sysfs (<sysfs root>/devices/system/cpu/cpuX/cpufreq/stats/) for each CPU.
    65: This gives the amount of time spent in each of the frequencies supported by
    66: this CPU. The cat output will have "<frequency> <time>" pair in each line, which
    68: will have one line for each of the supported frequencies. usertime units here
    100: also contains the actual freq values for each row and column for better

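    The cpufreq-stats.rst hits above describe the time_in_state layout, one
    "<frequency> <time>" pair per line. As a small illustrative sketch (not part of that
    document), the pairs for cpu0 could be parsed like this, assuming the stats directory
    is present on the system:

        #include <stdio.h>

        int main(void)
        {
                const char *path =
                        "/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state";
                FILE *f = fopen(path, "r");
                unsigned long long freq_khz, ticks;

                if (!f) {
                        perror(path);
                        return 1;
                }

                /* One "<frequency> <time>" pair per line; time is in usertime units. */
                while (fscanf(f, "%llu %llu", &freq_khz, &ticks) == 2)
                        printf("%llu kHz: %llu usertime units\n", freq_khz, ticks);

                fclose(f);
                return 0;
        }
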
/linux/Documentation/devicetree/bindings/powerpc/4xx/
  cpm.txt
    16: - unused-units : specifier consist of one cell. For each
    20: - idle-doze : specifier consist of one cell. For each
    24: - standby : specifier consist of one cell. For each
    28: - suspend : specifier consist of one cell. For each

/linux/Documentation/leds/
  leds-qcom-lpg.rst
    16: channels. The output of each PWM channel is routed to other hardware
    19: The each PWM channel can operate with a period between 27us and 384 seconds and
    37: therefore be identical for each element in the pattern (except for the pauses
    39: transitions expected by the leds-trigger-pattern format, each entry in the
    73: mode, in which case each run through the pattern is performed by first running

/linux/Documentation/ABI/testing/
  procfs-smaps_rollup
    7: except instead of an entry for each VMA in a process,
    9: for which each field is the sum of the corresponding
    13: the sum of the Pss field of each type (anon, file, shmem).

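    The procfs-smaps_rollup hits above describe a single pre-summed record per process
    rather than one entry per VMA. As an illustrative sketch (not part of the ABI file),
    the rolled-up Pss fields could be read like this:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                FILE *f = fopen("/proc/self/smaps_rollup", "r");
                char line[256];

                if (!f) {
                        perror("smaps_rollup");
                        return 1;
                }

                /* Print the summed Pss fields (Pss, Pss_Anon, Pss_File, Pss_Shmem). */
                while (fgets(line, sizeof(line), f))
                        if (!strncmp(line, "Pss", 3))
                                fputs(line, stdout);

                fclose(f);
                return 0;
        }
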
/linux/tools/power/cpupower/
  TODO
    17: -> Bind forked process to each cpu.
    19: each cpu.
    22: each cpu.

/linux/Documentation/virt/acrn/
  io-request.rst
    14: For each User VM, there is a shared 4-KByte memory region used for I/O requests
    20: used as an array of 16 I/O request slots with each I/O request slot being 256
    27: GPA falls in a certain range. Multiple I/O clients can be associated with each
    28: User VM. There is a special client associated with each User VM, called the
    30: any other clients. The ACRN userspace acts as the default client for each User

/linux/Documentation/networking/
  scaling.rst
    30: applying a filter to each packet that assigns it to one of a small number
    31: of logical flows. Packets for each flow are steered to a separate receive
    41: implementation of RSS uses a 128-entry indirection table where each entry
    75: for each CPU if the device supports enough queues, or otherwise at least
    76: one for each memory domain, where a memory domain is a set of CPUs that
    94: that can route each interrupt to a particular CPU. The active mapping
    99: affinity of each interrupt see Documentation/core-api/irq/irq-affinity.rst. Some systems
    115: interrupts (and thus work) grows with each additional queue.
    118: processors with hyperthreading (HT), each hyperthread is represented as
    198: RPS may enqueue packets for processing. For each received packet,
    [all …]

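    The scaling.rst hits above cover RSS/RPS steering of each flow to a receive queue or
    CPU. As an illustrative sketch only (the device name and CPU mask are hypothetical;
    scaling.rst itself documents the sysfs path), RPS for one receive queue can be
    enabled by writing a hex CPU bitmap:

        #include <stdio.h>

        int main(void)
        {
                /* Hypothetical NIC/queue; mask 0xf steers rx-0's packets to CPUs 0-3. */
                const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
                FILE *f = fopen(path, "w");

                if (!f) {
                        perror(path);
                        return 1;
                }
                fprintf(f, "f\n");
                fclose(f);
                return 0;
        }
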
/linux/Documentation/PCI/endpoint/
  pci-ntb-function.rst
    10: with each other by exposing each host as a device to the other host.
    17: with each other by configuring the endpoint instances in such a way that
    24: communicate with each other using SoC as a bridge.
    63: exchange information with each other using this region. Config Region has
    135: NTB applications can start communicating with each other.
    187: Outbound Address Space for each of the interrupts. This region will
    216: Doorbell Registers are used by the hosts to interrupt each other.
    233: If one 32-bit BAR is allocated for each of these regions, the scheme would
    247: However if we allocate a separate BAR for each of the regions, there would not
    347: This is modeled the same was as MW1 but each of the additional memory windows

/linux/Documentation/devicetree/bindings/sound/
  nvidia,tegra30-ahub.txt
    8: - reg : Should contain the register physical address and length for each of
    13: - clocks : Must contain an entry for each entry in clock-names.
    18: - resets : Must contain an entry for each entry in reset-names.
    47: - dmas : Must contain an entry for each entry in clock-names.