Lines Matching defs:thread

88 /* platform-specific routine to call when thread is enqueued */
94 pri_t intr_pri; /* interrupt thread priority base level */
118 * Parameter that determines how recently a thread must have run
126 * Parameter that determines how long (in nanoseconds) a thread must
149 * a thread because it was sitting on its run queue for a very short
189 * compute new interrupt thread base priority
267 * thread (or, as is often the case, task queue) is performing a task
555 * - That work isn't a thread undergoing a
601 kthread_t *t; /* taken thread */
656 * If there was a thread but we couldn't steal
670 * Preempt the currently running thread in favor of the highest
671 * priority thread. The class of the current thread controls
690 * this thread has already been chosen to be run on
715 * disp() - find the highest priority thread for this processor to run, and
733 * Find the highest priority loaded, runnable thread.
759 * Choose the idle thread if the CPU is quiesced.
795 ASSERT(tp->t_schedflag & TS_LOAD); /* thread must be swapped in */
813 * last runnable thread off the
836 * out this thread before we have a chance to run it.
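The disp() fragments above describe picking the highest-priority loaded, runnable thread and falling back to the idle thread when the CPU has nothing to do (or is quiesced). Below is a minimal user-space sketch of that selection, assuming a simplified queue-per-priority layout with an activity bitmap; the mini_*/mthread_t names, NPRI, and the use of a GCC/Clang builtin are illustrative, not the kernel's.

/*
 * Simplified model of a dispatch queue: one FIFO per priority plus a
 * bitmap whose bit p is set while queue p is non-empty.  mini_disp()
 * mirrors the idea in disp(): take the head of the highest-priority
 * non-empty queue, or fall back to the idle thread.
 */
#include <stdio.h>
#include <stddef.h>

#define NPRI    64                      /* 0 = lowest, 63 = highest */

typedef struct mthread {
        struct mthread  *t_link;        /* next thread on the run queue */
        int             t_pri;
        const char      *t_name;
} mthread_t;

typedef struct {
        mthread_t       *dq_head[NPRI];
        mthread_t       *dq_tail[NPRI];
        unsigned long long dq_actmap;   /* non-empty-queue bitmap */
} mdispq_t;

static mthread_t idle_thread = { NULL, -1, "idle" };

/* setbackdq()-style tail insert: thread becomes runnable at its priority. */
static void
mini_setbackdq(mdispq_t *dq, mthread_t *tp)
{
        int p = tp->t_pri;

        tp->t_link = NULL;
        if (dq->dq_tail[p] == NULL)
                dq->dq_head[p] = tp;
        else
                dq->dq_tail[p]->t_link = tp;
        dq->dq_tail[p] = tp;
        dq->dq_actmap |= 1ULL << p;
}

/* disp()-style selection: highest-priority head, else the idle thread. */
static mthread_t *
mini_disp(mdispq_t *dq)
{
        int p;
        mthread_t *tp;

        if (dq->dq_actmap == 0)
                return (&idle_thread);

        p = 63 - __builtin_clzll(dq->dq_actmap);        /* highest set bit */
        tp = dq->dq_head[p];
        if ((dq->dq_head[p] = tp->t_link) == NULL) {
                dq->dq_tail[p] = NULL;
                dq->dq_actmap &= ~(1ULL << p);
        }
        return (tp);
}

int
main(void)
{
        mdispq_t dq = { { NULL }, { NULL }, 0 };
        mthread_t a = { NULL, 10, "a" }, b = { NULL, 59, "b" };

        mini_setbackdq(&dq, &a);
        mini_setbackdq(&dq, &b);
        printf("%s\n", mini_disp(&dq)->t_name); /* "b": higher priority */
        printf("%s\n", mini_disp(&dq)->t_name); /* "a" */
        printf("%s\n", mini_disp(&dq)->t_name); /* "idle": queues empty */
        return (0);
}

The real routine also honors TS_LOAD, CPU binding, the kpreempt queue, and the dispatcher locks; the sketch only models the bitmap-driven priority scan.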
858 * Find best runnable thread and run it.
859 * Called with the current thread already switched to a new state,
878 * We are an interrupt thread. Set up and return

879 * the interrupted thread to be resumed.
918 * from this thread, set its t_waitrq if it is on a run
926 * restore mstate of thread that we are switching to
951 * There is however a window between where the thread
955 * thread, setting t_waitrq in the process. We detect
976 * Find best runnable thread and run it.
977 * Called with the current thread zombied.
1015 * Search the given dispatch queues for thread tp.
1045 * queues for thread tp. Return 1 if tp is found, otherwise return 0.
1083 * like swtch(), but switch to a specified thread taken from another CPU.
1113 * switching away from this thread, set its t_waitrq if it is on a run
1120 /* restore next thread to previously running microstate */
1171 * must match. When per-thread TS_RUNQMATCH flag is set, setbackdq() will
1172 * try to keep runqs perfectly balanced regardless of the thread priority.
1179 * Macro that evaluates to true if it is likely that the thread has cache
1181 * thread last ran. If that amount of time is less than "rechoose_interval"
1182 * ticks, then we decide that the thread has enough cache warmth to warrant
1185 #define THREAD_HAS_CACHE_WARMTH(thread) \
1186 ((thread == curthread) || \
1187 ((ddi_get_lbolt() - thread->t_disp_time) <= rechoose_interval))
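THREAD_HAS_CACHE_WARMTH() above calls a thread warm if it is curthread or last left a CPU within rechoose_interval lbolt ticks. Below is a rough user-space analogue, assuming a 100 Hz tick derived from CLOCK_MONOTONIC; the wthread_t type, helper names, and the 3-tick threshold are illustrative stand-ins.

/*
 * User-space analogue of the cache-warmth test: warm if this is the
 * current thread, or if it last ran within RECHOOSE_INTERVAL ticks.
 */
#include <stdio.h>
#include <time.h>

#define HZ                      100     /* pretend 100 ticks per second */
#define RECHOOSE_INTERVAL       3       /* illustrative; a tunable in the kernel */

typedef struct {
        long    t_disp_time;            /* tick when thread last left a CPU */
} wthread_t;

static long
get_lbolt(void)                         /* ticks since an arbitrary origin */
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (ts.tv_sec * HZ + ts.tv_nsec / (1000000000L / HZ));
}

static int
has_cache_warmth(const wthread_t *tp, const wthread_t *cur)
{
        return (tp == cur ||
            get_lbolt() - tp->t_disp_time <= RECHOOSE_INTERVAL);
}

int
main(void)
{
        wthread_t cur = { get_lbolt() };
        wthread_t old = { get_lbolt() - 50 };   /* off-CPU for 50 ticks */

        printf("cur warm: %d, old warm: %d\n",
            has_cache_warmth(&cur, &cur), has_cache_warmth(&old, &cur));
        return (0);
}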
1189 * Put the specified thread on the back of the dispatcher
1192 * Called with the thread in transition, onproc or stopped state
1194 * Returns with the thread in TS_RUN state and still locked.
1211 * If the thread is "swapped" or on the swap queue, don't
1236 * We'll generally let this thread continue to run where
1238 * - The thread probably doesn't have much cache warmth.
1241 * - The thread last ran outside its home lgroup.
1303 * A thread that is ONPROC may be temporarily placed on the run queue
1304 * but then chosen to run again by disp. If the thread we're placing on
1307 * situation, curthread is the only thread that could be in the ONPROC
1366 * this thread while we are in the middle of a
1379 * Put the specified thread on the front of the dispatcher
1382 * Called with the thread in transition, onproc or stopped state
1384 * Returns with the thread in TS_RUN state and still locked.
1400 * If the thread is "swapped" or on the swap queue, don't
1424 * We'll generally let this thread continue to run
1426 * - The thread last ran outside its home lgroup.
1430 * - The thread isn't the highest priority thread where
1463 * A thread that is ONPROC may be temporarily placed on the run queue
1464 * but then chosen to run again by disp. If the thread we're placing on
1467 * situation, curthread is the only thread that could be in the ONPROC
1526 * this thread while we are in the middle of a
1539 * Put a high-priority unbound thread on the kp queue
1617 * Remove a thread from the dispatcher queue if it is on it.
1637 * The thread is "swapped" or is on the swap queue and
1654 * Search for thread in queue.
1664 panic("dispdeq: thread not on queue");
1699 * incrementing/decrementing the sruncnts when a thread on
1703 * The caller MUST have the thread lock and therefore the dispatcher
1705 * the flag, the operation that checks the status of the thread to
1711 * Called by sched AFTER TS_LOAD flag is set on a swapped, runnable thread.
1725 * Called by sched BEFORE TS_LOAD flag is cleared on a runnable thread.
1737 * Change the dispatcher lock of thread to the "swapped_lock"
1738 * and return with thread lock still held.
1765 * This routine is called by setbackdq/setfrontdq if the thread is
1768 * Thread state TS_SLEEP implies that a swapped thread
1772 * thread is being increased by scheduling class (e.g. ts_update).
1785 * thread priority is above maxclsyspri.
1801 * Make a thread give up its processor. Find the processor on
1802 * which this thread is executing, and have that processor
1823 cpup = tp->t_disp_queue->disp_cpu; /* CPU thread dispatched to */
1849 * Make the target thread take an excursion through trap()
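cpu_surrender() does not yank a running thread off its CPU directly; it flags the CPU the thread was dispatched to for preemption (and for kernel preemption when the competing priority is high enough) and pokes that CPU so the thread takes an excursion through trap() and into swtch(). Below is a hedged sketch of that flag-setting, with invented mcpu_t/poke_cpu stand-ins and an illustrative kpreemptpri value.

/*
 * Sketch of the cpu_surrender() idea: find the CPU the thread was
 * dispatched to, mark it for (kernel) preemption, and poke it if it
 * is not the CPU we are already running on.
 */
#include <stdio.h>

typedef struct {
        int     cpu_id;
        int     cpu_runrun;     /* preempt at the next user-level boundary */
        int     cpu_kprunrun;   /* preempt even while in the kernel */
} mcpu_t;

typedef struct {
        mcpu_t  *t_cpu;         /* CPU the thread was dispatched to */
        int     t_pri;
} mthread_t;

static int kpreemptpri = 60;    /* illustrative threshold, not the kernel's */

static void
poke_cpu(mcpu_t *cp)            /* stand-in for a cross-CPU interrupt */
{
        printf("poke cpu %d\n", cp->cpu_id);
}

static void
mini_cpu_surrender(mthread_t *tp, int preempting_pri, mcpu_t *curcpu)
{
        mcpu_t *cp = tp->t_cpu;

        cp->cpu_runrun = 1;
        if (preempting_pri >= kpreemptpri)
                cp->cpu_kprunrun = 1;   /* urgent: preempt in the kernel too */
        if (cp != curcpu)
                poke_cpu(cp);           /* make it pass through trap() */
}

int
main(void)
{
        mcpu_t cpu0 = { 0, 0, 0 }, cpu1 = { 1, 0, 0 };
        mthread_t victim = { &cpu1, 30 };

        mini_cpu_surrender(&victim, 62, &cpu0);
        printf("cpu1 runrun=%d kprunrun=%d\n",
            cpu1.cpu_runrun, cpu1.cpu_kprunrun);
        return (0);
}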
1910 * If there is, dequeue the best thread and return.
1933 * Try to take a thread from the kp_queue.
1945 * priority unbound thread.
2002 * If there's only one thread and the CPU
2004 * or it's currently running the idle thread,
2147 * If a thread was found, set the priority and return.
2152 * pri holds the maximum unbound thread priority or -1.
2159 * disp_adjust_unbound_pri() - thread is becoming unbound, so we should
2172 * Don't do anything if the thread is not bound, or
2189 * De-queue the highest priority unbound runnable thread.
2190 * Returns with the thread unlocked and onproc but at splhigh (like disp()).
2192 * Returns T_DONTSTEAL if the thread was not stealable.
2211 * context switch of the only thread, return NULL.
2248 * The thread is a candidate for stealing from its run queue. We
2256 * to preserve its cache investment. For the thread to remain on
2266 * - it should be the only thread on the run queue (to prevent
2315 * Calculate when this thread becomes stealable
2320 * Calculate time when some thread becomes stealable
2347 * Found a runnable, unbound thread, so remove it from queue.
2348 * dispdeq() requires that we have the thread locked, and we do,
2350 * put the thread in transition state, thereby dropping the dispq
2375 * Set up the thread to run on the current CPU.
2385 * to preempt the current thread to run the enqueued thread while
2386 * disp_getbest() and disp_ratify() are changing the current thread
2387 * to the stolen thread. This may lead to a situation where
2388 * cpu_resched() tries to preempt the wrong thread and the
2389 * stolen thread continues to run on the CPU which has been tagged
2391 * Later the clock thread gets enqueued but doesn't get to run on the
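The disp_getbest() fragments above describe the brake on work stealing: a cache-warm thread that is the only one on its run queue is left alone until nosteal_nsec has elapsed since it became runnable, at which point a remote CPU may take it. Below is a simplified sketch of that decision using nanosecond timestamps; the sthread_t fields, mini_steal_ok name, and the 100-usec threshold are illustrative.

/*
 * Simplified model of the "should we steal this thread?" test: leave a
 * cache-warm, lone, recently queued thread on its own CPU's run queue
 * until it has waited at least nosteal_nsec nanoseconds.
 */
#include <stdio.h>
#include <stdint.h>

static int64_t nosteal_nsec = 100000;   /* 100 usec; illustrative default */

typedef struct {
        int64_t t_waitrq;       /* hrtime (ns) when thread became runnable */
        int     is_only_thread; /* only runnable thread on that queue? */
        int     has_warmth;     /* likely still cache-warm on that CPU? */
} sthread_t;

/* Return 1 if a remote CPU should steal tp now, 0 if it should wait. */
static int
mini_steal_ok(const sthread_t *tp, int64_t now)
{
        if (tp->has_warmth && tp->is_only_thread &&
            now - tp->t_waitrq < nosteal_nsec)
                return (0);     /* becomes stealable at t_waitrq + nosteal */
        return (1);
}

int
main(void)
{
        sthread_t warm = { 1000000, 1, 1 };     /* queued 50 usec ago */
        sthread_t cold = { 1000000, 1, 0 };

        printf("steal warm: %d\n", mini_steal_ok(&warm, 1050000));     /* 0 */
        printf("steal cold: %d\n", mini_steal_ok(&cold, 1050000));     /* 1 */
        return (0);
}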
2422 * pidlock to stop the thread list from changing (e.g., if
2439 * If an interrupt thread is busy, but the
2448 * Skip the idle thread for the CPU
2455 * Skip the pause thread for the CPU
2572 * disp_lowpri_cpu - find CPU running the lowest priority thread.
2575 * used CPU for the thread.
2579 * the thread priority will indicate whether the thread will actually run
2582 * remote CPU is chosen only if the thread will not run locally on a CPU
2583 * within the lgroup, but will run on the remote CPU. If the thread
2590 * behalf of the current thread. (curthread is looking for a new cpu)
2591 * In this case, cpu_dispatch_pri for this thread's cpu should be
2617 * Scan for a CPU currently running the lowest priority thread.
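disp_lowpri_cpu() scans for the CPU running the lowest-priority work, preferring a hinted (home or last-used) CPU so the thread keeps its locality when the thread would run there anyway. Below is a minimal sketch of that scan; the lcpu_t fields and the strictly-better tie-breaking are simplified assumptions, not the kernel's lgroup-aware logic.

/*
 * Minimal sketch of choosing the CPU running the lowest-priority work:
 * scan the candidates and keep the best one, preferring the hinted CPU
 * on ties so a thread tends to stay near its last/home CPU.
 */
#include <stdio.h>

typedef struct {
        int     cpu_id;
        int     cpu_dispatch_pri;       /* pri of what the CPU is doing now */
        int     cpu_maxrunpri;          /* best pri waiting on its run queue */
} lcpu_t;

static lcpu_t *
mini_disp_lowpri_cpu(lcpu_t *cpus, int ncpu, lcpu_t *hint)
{
        lcpu_t *best = hint;
        int i, bestpri, pri;

        bestpri = best->cpu_dispatch_pri > best->cpu_maxrunpri ?
            best->cpu_dispatch_pri : best->cpu_maxrunpri;

        for (i = 0; i < ncpu; i++) {
                pri = cpus[i].cpu_dispatch_pri > cpus[i].cpu_maxrunpri ?
                    cpus[i].cpu_dispatch_pri : cpus[i].cpu_maxrunpri;
                if (pri < bestpri) {            /* strictly better only */
                        best = &cpus[i];
                        bestpri = pri;
                }
        }
        return (best);
}

int
main(void)
{
        lcpu_t cpus[3] = {
                { 0, 60, 10 },  /* busy with high-priority work */
                { 1, 20, 0 },   /* nearly idle */
                { 2, 20, 30 },  /* running low pri but queue is deep */
        };

        printf("chose cpu %d\n",
            mini_disp_lowpri_cpu(cpus, 3, &cpus[0])->cpu_id);   /* cpu 1 */
        return (0);
}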