/linux/Documentation/trace/kprobes.rst
  192: instruction (the "optimized region") lies entirely within one function.
  197: jump into the optimized region. Specifically:
  202: optimized region -- Kprobes checks the exception tables to verify this);
  203: - there is no near jump to the optimized region (other than to the first
  206: - For each instruction in the optimized region, Kprobes verifies that
  218: - the instructions from the optimized region
  228: - Other instructions in the optimized region are probed.
  235: If the kprobe can be optimized, Kprobes enqueues the kprobe to an
  237: it. If the to-be-optimized probepoint is hit before being optimized,
  248: optimized region [3]_. As you know, synchronize_rcu() can ensure
  [all …]

/linux/arch/microblaze/Kconfig.platform
  11: bool "Optimized lib function"
  14: Turns on optimized library functions (memcpy and memmove).
  15: They are optimized by using word alignment. This will work
  22: bool "Optimized lib function ASM"
  27: Turns on optimized library functions (memcpy and memmove).

/linux/drivers/opp/ti-opp-supply.c
  26: * struct ti_opp_supply_optimum_voltage_table - optimized voltage table
  28: * @optimized_uv: Optimized voltage from efuse
  37: * @vdd_table: Optimized voltage mapping table
  69: * _store_optimized_voltages() - store optimized voltages
  73: * Picks up efuse based optimized voltages for VDD unique per device and
  158: * Some older samples might not have optimized efuse in _store_optimized_voltages()
  193: * Return: if a match is found, return optimized voltage, else return
  216: dev_err_ratelimited(dev, "%s: Failed optimized voltage match for %d\n", in _get_optimal_vdd_voltage()
  388: /* If we need optimized voltage */ in ti_opp_supply_probe()

/linux/Documentation/devicetree/bindings/opp/ti,omap-opp-supply.yaml
  37: - description: OMAP5+ optimized voltages in efuse(Class 0) VDD along with
  40: - description: OMAP5+ optimized voltages in efuse(class0) VDD but no VBB
  54: optimized efuse configuration.
  63: - description: efuse offset where the optimized voltage is located

/linux/arch/arm/kernel/io.c
  43: * This needs to be optimized.
  59: * This needs to be optimized.
  75: * This needs to be optimized.

/linux/lib/crc/Kconfig
  89: bool "Enable optimized CRC implementations" if EXPERT
  94: architecture-optimized implementations of any CRC variants that are
  113: optimized versions. If unsure, say N.

/linux/drivers/video/fbdev/aty/atyfb.h
  230: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_ld_le32()
  243: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_le32()
  257: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_le16()
  269: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_ld_8()
  281: /* Hack for bloc 1, should be cleanly optimized by compiler */ in aty_st_8()

/linux/Documentation/dev-tools/propeller.rst
  25: build-optimized".
  53: #. Optimized build: Build the AutoFDO or AutoFDO+ThinLTO optimized
  60: #. Deployment: The optimized kernel binary is deployed and used

/linux/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_opt_types.tc
  4: # description: Register/unregister optimized probe
  28: if echo $PROBE | grep -q OPTIMIZED; then

/linux/fs/crypto/Kconfig
  26: # algorithms, not any per-architecture optimized implementations. It is
  27: # strongly recommended to enable optimized implementations too.

/linux/Documentation/devicetree/bindings/memory-controllers/atmel,ebi.txt
  67: - atmel,smc-tdf-mode: "normal" or "optimized". When set to
  68: "optimized" the data float time is optimized

/linux/include/linux/crc32.h
  90: #define CRC32_LE_OPTIMIZATION BIT(0) /* crc32_le() is optimized */
  91: #define CRC32_BE_OPTIMIZATION BIT(1) /* crc32_be() is optimized */
  92: #define CRC32C_OPTIMIZATION BIT(2) /* crc32c() is optimized */

/linux/arch/x86/include/asm/qspinlock_paravirt.h
  12: * and restored. So an optimized version of __pv_queued_spin_unlock() is
  21: * Optimized assembly version of __raw_callee_save___pv_queued_spin_unlock

/linux/mm/hugetlb_vmemmap.c
  490: * hugetlb_vmemmap_restore_folio - restore previously optimized (by
  537: /* Add non-optimized folios to output list */ in hugetlb_vmemmap_restore_folios()
  548: /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
  587: * page could be to the OLD struct pages. Set the vmemmap optimized in __hugetlb_vmemmap_optimize_folio()
  618: * @folio: the folio whose vmemmap pages will be optimized.
  623: * vmemmap pages have been optimized.
  668: * Already optimized by pre-HVO, just map the in __hugetlb_vmemmap_optimize_folios()
  701: * skip any folios that already have the optimized flag in __hugetlb_vmemmap_optimize_folios()

/linux/arch/sparc/lib/strlen.S
  2: /* strlen.S: Sparc optimized strlen code
  3: * Hand optimized from GNU libc's strlen

/linux/arch/sparc/lib/M7memset.S
  2: * M7memset.S: SPARC M7 optimized memset.
  8: * M7memset.S: M7 optimized memset.
  100: * (can create a more optimized version later.)
  114: * (can create a more optimized version later.)

/linux/tools/testing/selftests/bpf/progs/test_global_map_resize.c
  57: /* see above; ensure this is not optimized out */ in bss_array_sum()
  75: /* see above; ensure this is not optimized out */ in data_array_sum()

/linux/kernel/kprobes.c
  422: * This must be called from arch-dep optimized caller.
  438: /* Free optimized instructions and optimized_kprobe */
  490: * Return an optimized kprobe whose optimizing code replaces
  679: /* Optimize kprobe if p is ready to be optimized */
  689: /* kprobes with 'post_handler' can not be optimized */ in optimize_kprobe()
  695: /* Check there is no other kprobes at the optimized instructions */ in optimize_kprobe()
  699: /* Check if it is already optimized. */ in optimize_kprobe()
  711: * 'op' must have OPTIMIZED flag in optimize_kprobe()
  728: /* Unoptimize a kprobe if p is optimized */
  734: return; /* This is not an optprobe nor optimized */ in unoptimize_kprobe()
  [all …]

/linux/kernel/irq/migration.c
  110: * and it should be optimized away when CONFIG_IRQ_DOMAIN_HIERARCHY is in __irq_move_irq()
  134: * Get the top level irq_data in the hierarchy, which is optimized in irq_can_move_in_process_context()

/linux/arch/arc/lib/memset-archs.S
  10: * The memset implementation below is optimized to use prefetchw and prealloc
  12: * If you want to implement optimized memset for other possible L1 data cache

/linux/arch/m68k/include/asm/delay.h
  72: * the const factor (4295 = 2**32 / 1000000) can be optimized out when
  88: * first constant multiplications gets optimized away if the delay is

/linux/Documentation/devicetree/bindings/i2c/snps,designware-i2c.yaml
  112: snps,clk-freq-optimized:
  144: snps,clk-freq-optimized;

/linux/arch/s390/include/asm/checksum.h
  8: * Martin Schwidefsky (heavily optimized CKSM version)
  53: * This is a version of ip_compute_csum() optimized for IP headers,

/linux/Documentation/locking/rt-mutex.rst
  40: RT-mutexes are optimized for fastpath operations and have no internal
  42: without waiters. The optimized fastpath operations require cmpxchg

/linux/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0_7_pptable.h
  79: …ER_MODE = 1 << SMU_13_0_7_ODCAP_POWER_MODE, //Optimized GPU Power Mode f…
  151: int16_t pm_setting[SMU_13_0_7_MAX_PMSETTING]; //Optimized power mode feature settings