/linux/Documentation/devicetree/bindings/scsi/hisilicon-sas.txt:
  31: (broadcast, phyup, and abnormal) in increasing order.
  34: The interrupts are ordered in increasing order.
  47: interrupt. The interrupts are ordered in increasing
  51: increasing order.

/linux/include/linux/topology.h:
  285: * for_each_node_numadist() - iterate over nodes in increasing distance
  290: * This macro iterates over NUMA node IDs in increasing distance from the
  299: * the latter iterates over nodes in increasing order of distance.
  315: * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance

/linux/drivers/media/test-drivers/vidtv/vidtv_common.c:
  45: …pr_err_ratelimited("overflow detected, skipping. Try increasing the buffer size. Needed %zu, had %… in vidtv_memcpy()
  81: …pr_err_ratelimited("overflow detected, skipping. Try increasing the buffer size. Needed %zu, had %… in vidtv_memset()

/linux/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json:
  864, 873, 883, 893, 903, 913, 923, 933, 943, 953: …onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. Th…
  [all …]

/linux/fs/btrfs/space-info.h:
  186: * Monotonically increasing counter of block group reclaim attempts
  192: * Monotonically increasing counter of reclaimed bytes
  198: * Monotonically increasing counter of reclaim errors

/linux/Documentation/block/deadline-iosched.rst:
  36: write) which are serviced in increasing sector order. To limit extra seeking,
  42: a value of 1 yields first-come first-served behaviour). Increasing fifo_batch

/linux/Documentation/networking/device_drivers/ethernet/intel/e1000.rst:
  193: by the driver. Increasing this value allows the driver to buffer more
  214: properly tuned for specific network traffic. Increasing this value adds
  265: Increasing this value allows the driver to queue more transmits. Each
  414: environments. If this is observed, increasing the application's socket buffer
  415: size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.

/linux/tools/testing/selftests/drivers/net/hw/pp_alloc_fail.py:
  95: raise KsftSkipEx("Allocation failures not increasing")
  100: raise KsftSkipEx("Allocation increasing too slowly", seen_fails,

/linux/tools/testing/selftests/drivers/net/hw/rss_ctx.py:
  449: ksft_pr(f"Increasing queue count {qcnt} -> {2 + 2 * ctx_cnt}")
  537: ksft_pr(f"Increasing queue count {qcnt} -> {2 + 2 * ctx_cnt}")
  610: ksft_pr(f"Increasing queue count {queue_cnt} -> 4")
  728: ksft_pr(f"Increasing queue count {queue_cnt} -> 4")
  771: ksft_pr(f"Increasing queue count {queue_cnt} -> 4")

/linux/drivers/gpu/drm/arm/display/komeda/komeda_kms.c:
  123: /* Considering the list sequence is zpos increasing, so if list is empty in komeda_plane_state_list_add()
  132: /* Build the list by zpos increasing */ in komeda_plane_state_list_add()
  177: /* Build a list by zpos increasing */ in komeda_crtc_normalize_zpos()

/linux/drivers/media/dvb-frontends/a8293.h:
  21: * @volt_slew_nanos_per_mv: Slew rate when increasing LNB voltage,

/linux/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json:
  824: "PublicDescription": "Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy."
  833, 843: same description, ending "This monitors DRS flits only."
  853, 863: same description, ending "This monitors HOM flits only."
  873, 883: same description, ending "This monitors NCB flits only."
  893, 903: same description, ending "This monitors NCS flits only."
  913: same description (truncated)
  [all...]

/linux/Documentation/devicetree/bindings/cpufreq/cpufreq-spear.txt:
  10: increasing order.

/linux/Documentation/filesystems/directory-locking.rst:
  11: always acquire the locks in order by increasing address. We'll call
  154: increasing address of inode
  158: increasing address of inode.

/linux/include/linux/usb/isp116x.h:
  22: prevents stopping internal clock, increasing

/linux/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json:
  823: "PublicDescription": "Counts the number of cycles that the QPI RxQ was not empty. Generally, when data is transmitted across QPI, it will bypass the RxQ and pass directly to the ring interface. If things back up getting transmitted onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. This event can be used in conjunction with the Flit Buffer Occupancy Accumulator event to calculate the average occupancy."
  832, 842: same description, ending "This monitors DRS flits only."
  852, 862: same description, ending "This monitors HOM flits only."
  872, 882: same description, ending "This monitors NCB flits only."
  892, 902: same description, ending "This monitors NCS flits only."
  912: same description (truncated)
  [all...]

/linux/tools/perf/pmu-events/arch/x86/jaketown/uncore-power.json:
  324: …ed by the transition. This event is calculated by or'ing together the increasing and decreasing e…
  337: "BriefDescription": "Cycles Increasing Voltage",
  342: …"PublicDescription": "Counts the number of cycles when the system is increasing voltage. There is…

/linux/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json:
  523, 692, 701, 710, 719, 728, 737, 746, 755, 764: …onto the ring, however, it may need to allocate into this buffer, thus increasing the latency. Th…
  [all …]

/linux/tools/testing/selftests/filesystems/statmount/listmount_test.c:
  22: /* Check that all mount ids are in increasing order. */

/linux/arch/mips/net/bpf_jit_comp.h:
  143: * pointer and increasing. The next depth to be written is returned.
  149: * pointer and increasing. The next depth to be read is returned.

/linux/drivers/media/dvb-core/Kconfig:
  38: Maximum number of DVB/ATSC adapters. Increasing this number

/linux/include/linux/platform_data/dma-dw.h:
  47: * @chan_priority: Set channel priority increasing from 0 to 7 or 7 to 0.

/linux/kernel/sched/ext_idle.c:
  166: * Traverse all nodes in order of increasing distance, starting in pick_idle_cpu_from_online_nodes()
  444: * increasing distance.
  595: * order of increasing distance. in scx_select_cpu_dfl()
  643: * increasing distance. in scx_select_cpu_dfl()
  1205: * order of increasing distance (unless SCX_PICK_IDLE_IN_NODE is specified,
  1287: * order of increasing distance (unless %SCX_PICK_IDLE_IN_NODE is specified,

/linux/drivers/mmc/core/pwrseq.c:
  35: "increasing module refcount failed\n"); in mmc_pwrseq_alloc()

/linux/Documentation/networking/device_drivers/fddi/defza.rst:
  28: increasing TURBOchannel slot numbers.