Lines Matching +defs:a +defs:code

32 #define PF_WQ_WORKER			0x00000020	/* I'm a workqueue worker */
35 #define PF_KTHREAD			0x00200000	/* I am a kernel thread */
53 * lead to really confusing misbehaviors. Let's trigger a build failure.
141 * exit cleanly with the specified exit code being passed to user space.
143 #define scx_bpf_exit(code, fmt, args...) \
146 scx_bpf_exit_bstr(code, ___fmt, ___param, sizeof(___param)); \
177 * scx_bpf_dump_header() is a wrapper around scx_bpf_dump that adds a header
204 * bss, data, rodata) themselves are maps, a data section can be resized. If
205 * a data section has an array as its last element, the BTF info for that
210 * this array exists in a custom sub data section which can be resized
219 * MEMBER_VPTR - Obtain the verified pointer to a struct or array member
225 * compiler to generate a code sequence which first calculates the byte offset,
241 * be a pointer to the area. Use `MEMBER_VPTR(*ptr, .member)` instead of
250 "@base is smaller than @member, is @base a pointer?"); \
271 * It can be used in cases where a global array is defined with an initial
298 * No-op at runtime. The empty inline assembly with a read-write constraint
301 * 1. Compiler: treats @expr as both read and written, preventing dead-code
307 * and stack slot. While useful, precision means each distinct value creates a
329 * Example - keeping a reference alive::
367 bool (less)(struct bpf_rb_node *a, const struct bpf_rb_node *b),
488 * Access a cpumask in read-only mode (typically to check bits).
496 * Return true if task @p cannot migrate to a different CPU, false
502 * Testing p->migration_disabled in BPF code is tricky because
503 * migration is _always_ disabled while running BPF code.
505 * code execution disable and re-enable the migration of the current
508 * two or greater when sched_ext ops BPF code (e.g., ops.tick) is
509 * executed in the middle of another BPF program's execution.
546 * time_after - returns true if the time a is after time b.
547 * @a: first comparable as u64
551 * good compiler would generate better code (and a really good compiler
554 * Return: %true if time a is after time b, otherwise %false.
556 static inline bool time_after(u64 a, u64 b)
558 return (s64)(b - a) < 0;
562 * time_before - returns true if the time a is before time b.
563 * @a: first comparable as u64
566 * Return: %true if time a is before time b, otherwise %false.
568 static inline bool time_before(u64 a, u64 b)
570 return time_after(b, a);
574 * time_after_eq - returns true if the time a is after or the same as time b.
575 * @a: first comparable as u64
578 * Return: %true if time a is after or the same as time b, otherwise %false.
580 static inline bool time_after_eq(u64 a, u64 b)
582 return (s64)(a - b) >= 0;
586 * time_before_eq - returns true if the time a is before or the same as time b.
587 * @a: first comparable as u64
590 * Return: %true if time a is before or the same as time b, otherwise %false.
592 static inline bool time_before_eq(u64 a, u64 b)
594 return time_after_eq(b, a);
598 * time_in_range - Calculate whether a is in the range of [b, c].
599 * @a: time to test
603 * Return: %true if time a is in the range [b, c], otherwise %false.
605 static inline bool time_in_range(u64 a, u64 b, u64 c)
607 return time_after_eq(a, b) && time_before_eq(a, c);
611 * time_in_range_open - Calculate whether a is in the range of [b, c).
612 * @a: time to test
616 * Return: %true if time a is in the range [b, c), otherwise %false.
618 static inline bool time_in_range_open(u64 a, u64 b, u64 c)
620 return time_after_eq(a, b) && time_before(a, c);
680 * Prefer C11 _Generic for better compile-times and simpler code. Note: 'char'
681 * is not type-compatible with 'signed char', and we define a separate case.
686 * This is because LLVM has a bug where for lvalue (x), it does not get rid of
723 * With a larger @decay value, the moving average changes slowly, exhibiting
741 * log2_u32 - Compute the base 2 logarithm of a 32-bit exponential value.
758 * log2_u64 - Compute the base 2 logarithm of a 64-bit exponential value.
834 * Each isolated bit produces a unique 6-bit value, guaranteed by the
835 * De Bruijn property. Calculate a unique index into the lookup table
836 * using the magic constant and a right shift.
838 * Multiplying by the 64-bit constant "spreads out" that 1-bit into a
840 * exactly what a De Bruijn sequence guarantees: Every possible 6-bit
842 * the constant 0x03f79d71b4cb0a89ULL is carefully chosen to be a
848 * Lookup in a precomputed table. No collision is guaranteed by the
857 * Return a value proportionally scaled to the task's weight.
865 * Return a value inversely proportional to the task's weight.
874 * Get a random u64 from the kernel's pseudo-random generator.
882 * Define the shadow structure to avoid a compilation error when
884 * suffix is a CO-RE convention that tells the loader to match this
886 * preserve_access_index tells the compiler to generate a CO-RE
894 * It does advance during idle (the idle task counts as a running task
900 * frequency. For example, clock_pelt advances 2x slower on a CPU
905 * by subtracting lost_idle_time, yielding a clock that appears
912 * Subtracting this from clock_pelt gives rq_clock_pelt(): a
924 * Unlike irqtime->total (a plain kernel-side field), the live stolen
926 * kernel-side equivalent readable from BPF in a hypervisor-agnostic
938 * Define the shadow structure to avoid a compilation error when
951 * no-ops, so a plain BPF_CORE_READ of this field is safe.
959 * cpu_irqtime is a per-CPU variable defined only when
969 * This is a workaround to get an rq pointer since we decided to
997 * is a continuous, capacity-invariant clock safe for both task
1028 * bpf_per_cpu_ptr() call below dead code that the verifier never sees.