1 /* SPDX-License-Identifier: GPL-2.0 */
3 * BPF extensible scheduler class: Documentation/scheduler/sched-ext.rst
32 SCX_EXIT_UNREG = 64, /* user-space initiated unregistration */
33 SCX_EXIT_UNREG_BPF, /* BPF-initiated unregistration */
34 SCX_EXIT_UNREG_KERN, /* kernel-initiated unregistration */
50 * SYS ACT: System-defined exit actions
51 * SYS RSN: System-defined exit reasons
52 * USR : User-defined exit codes and reasons
55 * actions and/or system reasons with a user-defined exit code.
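As a hedged sketch of how these fields compose in practice: a scheduler could OR the kernel-defined SCX_ECODE_ACT_RESTART action bit (which sits in the SYS ACT field) with its own code in the USR field. MY_USR_RESTARTING is a made-up user code, and scx_bpf_exit() is the convenience macro from the scx tooling headers; both are assumptions of this sketch.

    /* Hypothetical user-defined exit code; occupies the USR field (low bits). */
    #define MY_USR_RESTARTING	1

    static void request_restart(void)
    {
            /* SCX_ECODE_ACT_RESTART in the SYS ACT field tells the user-space
             * loader that a restart is wanted after unregistration.
             */
            scx_bpf_exit(SCX_ECODE_ACT_RESTART | MY_USR_RESTARTING,
                         "restart requested");
    }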
80 /* %SCX_EXIT_* - broad category of the exit reason */
106 * Keep built-in idle tracking even if ops.update_idle() is implemented.
112 * keeps running the current task even after its slice expires. If this
140 * the default slice on enqueue. If this ops flag is set, they also go
144 * only select the current CPU. Also, p->cpus_ptr will only contain its
145 * current CPU while p->nr_cpus_allowed keeps tracking p->user_cpus_ptr
146 * and thus may disagree with cpumask_weight(p->cpus_ptr).
153 * previous CPU via IPI (inter-processor interrupt) to reduce cacheline
167 * If set, enable per-node idle cpumasks. If clear, use a single global
211 /* argument container for ops->cgroup_init() */
234 * Argument container for ops->cpu_acquire(). Currently empty, but may be
239 /* argument container for ops->cpu_release() */
260 * struct sched_ext_ops - Operation table for BPF scheduler implementation
277 * saves a small bit of overhead down the line.
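To make the op table concrete, here is a minimal sketch in the style of the scx_simple example scheduler: one shared DSQ feeding every CPU. SHARED_DSQ and the sketch_* callback names are inventions for illustration; the BPF_STRUCT_OPS*/SEC macros come from the scx tooling headers.

    #define SHARED_DSQ 0	/* arbitrary user DSQ id for this sketch */

    s32 BPF_STRUCT_OPS_SLEEPABLE(sketch_init)
    {
            /* create the shared queue on any NUMA node */
            return scx_bpf_create_dsq(SHARED_DSQ, -1);
    }

    void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p, u64 enq_flags)
    {
            /* single global FIFO with the default slice */
            scx_bpf_dsq_insert(p, SHARED_DSQ, SCX_SLICE_DFL, enq_flags);
    }

    void BPF_STRUCT_OPS(sketch_dispatch, s32 cpu, struct task_struct *prev)
    {
            /* refill this CPU's local DSQ from the shared queue */
            scx_bpf_dsq_move_to_local(SHARED_DSQ);
    }

    SEC(".struct_ops.link")
    struct sched_ext_ops sketch_ops = {
            .enqueue	= (void *)sketch_enqueue,
            .dispatch	= (void *)sketch_dispatch,
            .init		= (void *)sketch_init,
            .name		= "sketch",
    };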
321 * on the scheduling logic, this can lead to confusing behaviors - e.g.
341 * When not %NULL, @prev is an SCX task with its slice depleted. If
343 * @prev->scx.flags, it is not enqueued yet and will be enqueued after
354 * executing an SCX task. Setting @p->scx.slice to 0 will trigger an
365 * execution state transitions. A task becomes ->runnable() on a CPU,
366 * and then goes through one or more ->running() and ->stopping() pairs
367 * as it runs on the CPU, and eventually becomes ->quiescent() when it's
372 * - waking up (%SCX_ENQ_WAKEUP)
373 * - being moved from another CPU
374 * - being restored after temporarily taken off the queue for an attribute change
377 * This and ->enqueue() are related but not coupled. This operation
378 * notifies @p's state transition and may not be followed by ->enqueue()
381 * task may be ->enqueue()'d without being preceded by this operation
382 * e.g. after exhausting its slice.
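Since these notifiers bracket every stretch of execution, pure bookkeeping is their typical use. A hedged sketch that sums per-task CPU time across ->running()/->stopping() pairs; struct task_ctx and the task-storage map are assumptions of this sketch:

    struct task_ctx {
            u64 started_at;		/* last ->running() timestamp */
            u64 cpu_time;		/* accumulated execution time */
    };

    struct {
            __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
            __uint(map_flags, BPF_F_NO_PREALLOC);
            __type(key, int);
            __type(value, struct task_ctx);
    } task_ctxs SEC(".maps");

    void BPF_STRUCT_OPS(sketch_running, struct task_struct *p)
    {
            struct task_ctx *tc;

            tc = bpf_task_storage_get(&task_ctxs, p, 0,
                                      BPF_LOCAL_STORAGE_GET_F_CREATE);
            if (tc)
                    tc->started_at = bpf_ktime_get_ns();
    }

    void BPF_STRUCT_OPS(sketch_stopping, struct task_struct *p, bool runnable)
    {
            struct task_ctx *tc = bpf_task_storage_get(&task_ctxs, p, 0, 0);

            if (tc && tc->started_at)
                    tc->cpu_time += bpf_ktime_get_ns() - tc->started_at;
    }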
399 * See ->runnable() for explanation on the task state notifiers.
417 * See ->runnable() for explanation on the task state notifiers. If
418 * !@runnable, ->quiescent() will be invoked after this operation returns.
428 * See ->runnable() for explanation on the task state notifiers.
432 * - sleeping (%SCX_DEQ_SLEEP)
433 * - being moved to another CPU
434 * - being temporarily taken off the queue for an attribute change
437 * This and ->dequeue() are related but not coupled. This operation
438 * notifies @p's state transition and may not be preceded by ->dequeue()
453 * If @to is non-NULL, @from wants to yield the CPU to @to. If the BPF scheduler can implement the request, return %true; otherwise, %false.
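A hedged sketch of a yield implementation: for the undirected (@to == NULL) case, zeroing @from's remaining slice is the common pattern in the scx examples; the directed case is simply declined here.

    bool BPF_STRUCT_OPS(sketch_yield, struct task_struct *from,
                        struct task_struct *to)
    {
            if (!to) {
                    /* plain yield: give up the rest of the slice */
                    from->scx.slice = 0;
                    return false;
            }
            /* directed yield to @to: not implemented by this sketch */
            return false;
    }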
459 * @core_sched_before: Task ordering for core-sched
463 * Used by core-sched to determine the ordering between two tasks. See
464 * Documentation/admin-guide/hw-vuln/core-scheduling.rst for details on
465 * core-sched.
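As a hedged sketch, a core-sched ordering can reuse a vtime the scheduler already maintains; this assumes the enqueue path keeps p->scx.dsq_vtime up to date:

    bool BPF_STRUCT_OPS(sketch_core_sched_before, struct task_struct *a,
                        struct task_struct *b)
    {
            /* %true if @a should run before @b on SMT siblings of a core */
            return (s64)(a->scx.dsq_vtime - b->scx.dsq_vtime) < 0;
    }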
501 * state. By default, implementing this operation disables the built-in
504 * - scx_bpf_select_cpu_dfl()
505 * - scx_bpf_select_cpu_and()
506 * - scx_bpf_test_and_clear_cpu_idle()
507 * - scx_bpf_pick_idle_cpu()
512 * Specify the %SCX_OPS_KEEP_BUILTIN_IDLE flag to keep the built-in idle
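A hedged sketch of an ops.select_cpu() that keeps using the built-in tracking via scx_bpf_select_cpu_dfl() and direct-dispatches when an idle CPU is found, following the pattern of the scx example schedulers:

    s32 BPF_STRUCT_OPS(sketch_select_cpu, struct task_struct *p,
                       s32 prev_cpu, u64 wake_flags)
    {
            bool is_idle = false;
            s32 cpu;

            cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
            if (is_idle)
                    /* idle CPU found: dispatch now and skip ops.enqueue() */
                    scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
            return cpu;
    }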
535 * caller should consult @args->reason to determine the cause.
548 * Return 0 for success, -errno for failure. An error return while
555 * @exit_task: Exit a previously-running task from the system
622 * Return 0 for success, -errno for failure. An error return while
647 * Return 0 for success, -errno for failure. An error return aborts the
786 * Must be a non-zero valid BPF object name including only isalnum(), '_' and '.' chars.
841 * Total number of times a task's time slice was refilled with the default value (%SCX_SLICE_DFL).
864 * The event counters are in a per-CPU variable to minimize the
865 * accounting overhead. A system-wide view on the event counter is
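On kernels that expose this aggregation as a kfunc, a reader could pull the system-wide totals as below; scx_bpf_events() and the SCX_EV_ENQ_SLICE_DFL field of struct scx_event_stats are assumed to be available, matching the refill counter described above:

    static void dump_slice_refills(void)
    {
            struct scx_event_stats events;

            /* sums the per-CPU counters into one system-wide view */
            scx_bpf_events(&events, sizeof(events));
            bpf_printk("default-slice refills: %llu",
                       events.SCX_EV_ENQ_SLICE_DFL);
    }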
878 * The global DSQ (%SCX_DSQ_GLOBAL) is split per-node for scalability.
879 * This is to avoid live-locking in bypass mode where all tasks are
881 * per-node split isn't sufficient, it can be further split.
918 * scx_bpf_dsq_insert() with a local dsq as the target. The slice of the
928 * invoked in a ->cpu_release() callback, and the task is again
929 * dispatched back to %SCX_LOCAL_DSQ by this current ->enqueue(), the
931 * of the ->cpu_acquire() callback.
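A hedged sketch of the usual ->cpu_release() response: punt anything still sitting in the CPU's local DSQ back through ops.enqueue() so it can be placed elsewhere while a higher-priority sched class owns the CPU. scx_bpf_reenqueue_local() is the kfunc the scx examples use for this.

    void BPF_STRUCT_OPS(sketch_cpu_release, s32 cpu,
                        struct scx_cpu_release_args *args)
    {
            /* tasks left on this CPU's local DSQ would otherwise stall */
            scx_bpf_reenqueue_local();
    }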
941 * The BPF scheduler is responsible for triggering a follow-up
960 * The generic core-sched layer decided to execute the task even though
982 * current task of the target CPU is an SCX task, its ->scx.slice is cleared to zero before the scheduling path is invoked.
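A hedged sketch of triggering such a preemption from the BPF side; target_cpu is assumed to have been picked by other scheduler logic:

    static void preempt_remote(s32 target_cpu)
    {
            /* clears the remote SCX task's slice and forces rescheduling */
            scx_bpf_kick_cpu(target_cpu, SCX_KICK_PREEMPT);
    }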
1015 * sched_ext_entity->ops_state
1020 * NONE -> QUEUEING -> QUEUED -> DISPATCHING
1023 * \-------------------------------/
1048 * p->scx.ops_state is atomic_long_t which leaves 30 bits for QSEQ on
1056 #define SCX_OPSS_STATE_MASK ((1LU << SCX_OPSS_QSEQ_SHIFT) - 1)
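A hedged sketch of how the mask and shift split the word; the accessor names are made up and SCX_OPSS_QSEQ_SHIFT is the shift the mask above is built from:

    static unsigned long opss_state(unsigned long opss)
    {
            return opss & SCX_OPSS_STATE_MASK;	/* NONE/QUEUEING/QUEUED/... */
    }

    static unsigned long opss_qseq(unsigned long opss)
    {
            return opss >> SCX_OPSS_QSEQ_SHIFT;	/* brand of the QUEUED pass */
    }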
1072 return !current->scx.kf_mask; /* in scx_kf_allowed_if_unlocked() */
1077 return unlikely(rq->scx.flags & SCX_RQ_BYPASSING); /* in scx_rq_bypassing() */