Lines Matching defs:schedule

950 	 * Don't schedule slices shorter than 10000ns, that just
956 * If this is in the middle of schedule() only note the delay
1315 * and a round trip to schedule(). Now this could be optimized
2257 * A queue event has occurred, and we're going to schedule. In
3150 * proper CPU and schedule it away if the CPU it's executing on
3750 * schedule();
3754 * between set_current_state() and schedule(). In this case @p is still
3758 * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
3759 * then schedule() must still happen and p->state can be changed to
3760 * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
4085 * CPU0 (schedule) CPU1 (try_to_wake_up) CPU2 (schedule)
4127 * This function is atomic against schedule() which would dequeue the task.
4167 * schedule()'s block_task(), as such this must not observe
4243 * schedule()'s block_task() has 'happened' and p will no longer
4257 * If the owning (remote) CPU is still in the middle of schedule() with
4280 * If the owning (remote) CPU is still in the middle of schedule() with
4372 * - queued, and we're holding off schedule (rq->lock)
4373 * - running, and we're holding off de-schedule (rq->lock)
5193 * with the lock held can cause deadlocks; see schedule() for
5197 * local variables which were saved when this task called schedule() in the
5212 * schedule()
5229 * schedule one last time. The schedule call will never return, and
5259 * schedule between user->kernel->user threads without passing through
5313 * schedule for the first time in this path.
5941 * Various schedule()-time debugging checks and statistics:
6622 * After this, schedule() must not care about p->state any more.
6903 * rq lock. As a simple solution, just schedule rq->idle to give
6942 * So schedule rq->idle so that ttwu_runnable() can get the rq
6990 * 3. Wakeups don't really cause entry into schedule(). They add a
6994 * task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
7010 * - explicit schedule() call
7052 * schedule() set_tsk_thread_flag(p, TIF_SIGPENDING)
7272 asmlinkage __visible void __sched schedule(void)
7285 EXPORT_SYMBOL(schedule);
7292 * (schedule out non-voluntarily).
7326 schedule();
7339 schedule();
7375 * between schedule and now.
7382 * This is the entry point to schedule() from in-kernel preemption
7495 * This is the entry point to schedule() from kernel preemption
7754 * call schedule, and on return reacquire the lock.
7757 * operations here to prevent schedule() from being called twice (once via
8109 schedule();
8452 * schedule(). The next pick is obviously going to be the stop task
8625 * Clear the balance_push callback and prepare to schedule
9050 * Make us the idle thread. Technically, schedule() should not be
10706 * CPU. Tasks which schedule in before the task walk reaches them do the
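
The densest cluster of matches above (3750-3760, 4085-4127, 4243-4280, 6990-7052) all describe the same race window between set_current_state() and schedule(), and how try_to_wake_up() serializes against it via task_rq(p)->lock. As a reading aid, here is a minimal sketch of that canonical sleep/wake idiom in a kthread; worker_fn, work_ready and kick_worker are illustrative names, not identifiers from this file.

	#include <linux/kthread.h>
	#include <linux/module.h>
	#include <linux/sched.h>

	static struct task_struct *worker;	/* e.g. kthread_run(worker_fn, NULL, "sketch") */
	static bool work_ready;

	static int worker_fn(void *unused)
	{
		while (!kthread_should_stop()) {
			/*
			 * Publish the sleeping state *before* testing the
			 * condition. If the wakeup slips in between
			 * set_current_state() and schedule(), try_to_wake_up()
			 * simply resets us to TASK_RUNNING and the schedule()
			 * below returns promptly (the race noted at 3754-3760).
			 */
			set_current_state(TASK_INTERRUPTIBLE);
			if (!READ_ONCE(work_ready) && !kthread_should_stop()) {
				schedule();	/* block; cf. the block_task() notes at 4167/4243 */
				continue;
			}
			__set_current_state(TASK_RUNNING);
			WRITE_ONCE(work_ready, false);
			/* ... process the work ... */
		}
		return 0;
	}

	/* Waker side: make the condition true, then wake via try_to_wake_up(). */
	static void kick_worker(void)
	{
		WRITE_ONCE(work_ready, true);
		wake_up_process(worker);
	}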
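Similarly, the matches at 7754-7757 refer to the drop-lock/schedule/reacquire pattern behind cond_resched_lock(). A hedged usage sketch follows, with my_lock and scan_items as placeholder names rather than identifiers from the source.

	#include <linux/sched.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(my_lock);

	static void scan_items(int nr)
	{
		int i;

		spin_lock(&my_lock);
		for (i = 0; i < nr; i++) {
			/* ... examine item i under the lock ... */

			/*
			 * If a reschedule is pending, cond_resched_lock()
			 * drops my_lock, calls schedule, and reacquires the
			 * lock on return, taking care not to enter
			 * schedule() twice (the comments at 7754-7757).
			 */
			cond_resched_lock(&my_lock);
		}
		spin_unlock(&my_lock);
	}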