Lines Matching +defs:indent +defs:to

35  *   ULE is the last three letters in schedule.  It owes its name to a
136 * SCHED_TICK_SECS: Number of seconds to average the cpu usage across.
137 * SCHED_TICK_TARG: Number of hz ticks to average the cpu usage across.
139 * SCHED_TICK_SHIFT: Shift factor to avoid rounding away results.
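The three SCHED_TICK_* lines describe ULE's fixed-point %CPU bookkeeping: tick counts are kept scaled up by a shift factor so that averaging them over several seconds worth of hz ticks does not round small ratios to zero. A minimal userspace sketch of the idea, with constants and names assumed for illustration (they are not the kernel's macros):

#include <stdio.h>

#define EX_TICK_SHIFT 10        /* assumed shift factor */

/* Scale up before dividing so the ratio keeps EX_TICK_SHIFT bits of precision. */
static int
ex_pct_cpu(int cpu_ticks, int window_ticks)
{
        int fixed;

        fixed = (cpu_ticks << EX_TICK_SHIFT) / window_ticks;
        return ((fixed * 100) >> EX_TICK_SHIFT);
}

int
main(void)
{
        /* 40 busy ticks in a 1000-tick window: naive 40/1000 truncates to 0. */
        printf("%d%%\n", ex_pct_cpu(40, 1000));         /* prints 3% */
        return (0);
}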
153 * by the ratio of ticks to the tick total. NHALF priorities at the start
154 * and end of the MIN to MAX timeshare range are only reachable with negative
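The fragment above (file lines 153-154) says the timeshare priority is interpolated by the ratio of a thread's recent ticks to the tick total, with NHALF slots at each end reserved for nice values. Ignoring that nice reserve, a hedged sketch of the interpolation with invented names:

/* Illustrative only: map cpu_ticks/total_ticks onto [pri_min, pri_max];
 * heavier cpu use yields a numerically higher (worse) priority. */
static int
ex_timeshare_pri(int cpu_ticks, int total_ticks, int pri_min, int pri_max)
{
        int range = pri_max - pri_min + 1;

        if (total_ticks == 0)
                return (pri_min);
        return (pri_min + (cpu_ticks * (range - 1)) / total_ticks);
}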
180 * SLP_RUN_FORK: Maximum slp+run time to inherit at fork time.
203 * due to rounding would be unacceptably high.
227 * tdq - per processor runqs and statistics. A mutex synchronizes access to
240 * Ordered to improve efficiency of cpu_search() and switch().
241 * tdq_lock is padded to avoid false sharing with tdq_load and
245 struct cpu_group *tdq_cg; /* (c) Pointer to cpu topology. */
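The padding noted at file line 241 is a false-sharing guard: CPUs spinning on tdq_lock should not keep invalidating the cache line that holds tdq_load. A small C11 illustration of the same layout trick (sizes and names assumed; this is not the kernel's struct tdq):

#include <pthread.h>
#include <stdalign.h>

#define EX_CACHE_LINE 64                /* assumed cache line size */

struct ex_queue {
        /* The contended lock gets a cache line to itself... */
        alignas(EX_CACHE_LINE) pthread_mutex_t lock;
        /* ...so readers of the load counter never touch that line. */
        alignas(EX_CACHE_LINE) int load;
};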
355 struct cpu_group *cg, int indent);
444 * nothing to do.
473 * Add a thread to the actual run-queue. Keeps transferable counts up to
501 * batch. Use the whole queue to represent these values.
509 * ridx while we wait for threads to drain.
525 * is selected to run. Running threads are not on the queue and the
553 * for this thread to the referenced thread queue.
570 * Remove the load from a thread that is transitioning to a sleep state or
591 * consider the maximum latency as the sum of the threads waiting to run
601 * It is safe to use sys_load here because this is called from
614 * Set lowpri to its exact value by searching the run-queue and
637 * of the random state (in the low bits of our answer) to keep
652 cpuset_t *cs_mask; /* The mask of allowed CPUs to choose from. */
868 * it from here, so tell it to pick a new CPU by itself.
891 /* Go to next high if we found no less loaded CPU. */
894 /* Transfer thread from high to low. */
925 * Lock two thread queues using their address to maintain lock order.
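File line 925 states the standard deadlock-avoidance rule for taking two runqueue locks at once: order the acquisitions by address so every CPU locks any given pair the same way. A hedged userspace sketch:

#include <pthread.h>
#include <stdint.h>

/* Always take the lower-addressed lock first; two threads locking the same
 * pair can then never wait on each other. */
static void
ex_lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
        if ((uintptr_t)a < (uintptr_t)b) {
                pthread_mutex_lock(a);
                pthread_mutex_lock(b);
        } else {
                pthread_mutex_lock(b);
                pthread_mutex_lock(a);
        }
}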
963 * Transfer a thread from high to low.
970 * the new load, possibly sending an IPI to force it to
986 * Move a thread from one thread queue to another. Returns -1 if the source
988 * the destination queue prior to the addition of the new thread. In the latter
989 * case, this priority can be used to determine whether an IPI needs to be
993 tdq_move(struct tdq *from, struct tdq *to)
999 TDQ_LOCK_ASSERT(to, MA_OWNED);
1001 cpu = TDQ_ID(to);
1013 td->td_lock = TDQ_LOCKPTR(to);
1015 return (tdq_add(to, td, SRQ_YIELDING));
1019 * This tdq has idled. Try to steal a thread from another cpu and switch
1020 * to it.
1040 * 0 here will cause our caller to switch to it.
1046 * We found no CPU to steal from in this group. Escalate to
1082 * Try to lock both queues. If we are assigned a thread while
1083 * waiting for the lock, switch to it now instead of stealing.
1111 * Steal the thread and switch to it.
1116 * We failed to acquire a thread even though it looked
1117 * like one was available. This could be due to affinity
1120 * above does not restore this CPU to the set due to the
1135 * the queue prior to the addition of the new thread.
1150 * Check to see if the newly added thread should preempt the one
1157 * Make sure that our caller's earlier update to tdq_load is
1164 * Try to figure out if we can signal the idle thread instead of sending
1258 * Attempt to steal a thread in priority order from a thread queue.
1275 * Sets the thread lock and ts_cpu to match the requested cpu. Unlocks the
1298 * The hard case, migration, we need to block the thread first to
1315 SCHED_STAT_DEFINE(pickcpu_local, "Migrated to current cpu");
1339 * Prefer to run interrupt threads on the processors that generate
1407 * Try hard to keep interrupts within the found LLC. Search the LLC for
1435 KASSERT(cpu >= 0, ("sched_pickcpu: Failed to find a cpu."));
1438 * Compare the lowest loaded cpu to current cpu.
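File line 1438 hints at a stay-local bias in CPU selection: migrating costs cache affinity, so the current CPU should win unless a remote one is clearly less loaded. The actual threshold is not visible in these matched lines; the fragment below only illustrates the bias, with invented names:

/* Hedged sketch: count the thread being placed against the remote queue. */
cpu = (remote_load + 1 < self_load) ? remote_cpu : self_cpu;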
1567 * tickincr is shifted out by 10 to avoid rounding errors due to
1593 * on past behavior. It is the ratio of sleep time to run time scaled to
1621 * The score is only needed if this is likely to be an interactive
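File lines 1593 and 1621 concern the interactivity score, the ratio of voluntary sleep time to run time folded into a bounded range so that sleepers land below the midpoint and cpu hogs above it. The sketch below restates that shape with assumed constants; it is not the kernel's scoring function, just the same idea:

#include <stdint.h>

#define EX_INTERACT_MAX  100
#define EX_INTERACT_HALF (EX_INTERACT_MAX / 2)

static int
ex_interact_score(uint64_t slptime, uint64_t runtime)
{
        uint64_t div;

        if (runtime > slptime) {                /* cpu hog: upper half */
                div = runtime / EX_INTERACT_HALF;
                if (div == 0)
                        div = 1;
                return (EX_INTERACT_HALF +
                    (EX_INTERACT_HALF - (int)(slptime / div)));
        }
        if (slptime > runtime) {                /* sleeper: lower half */
                div = slptime / EX_INTERACT_HALF;
                if (div == 0)
                        div = 1;
                return ((int)(runtime / div));
        }
        return (runtime != 0 ? EX_INTERACT_HALF : 0);
}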
1650 * Scale the scheduling priority according to the "interactivity" of this
1663 * priorities. These threads are not subject to nice restrictions.
1670 * score. Negative nice values make it easier for a thread to be
1702 * function is ugly due to integer math.
1747 * to learn that the compiler is behaving badly very quickly.
1766 * Called from proc0_init() to setup the scheduler fields.
1784 * schedinit_ap() is needed prior to calling sched_throw(NULL) to ensure that
1786 * initialization and sched_throw(). One can safely add threads to the queue
1812 /* Convert sched_slice from stathz to hz. */
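The conversion at file line 1812 exists because the slice length is tuned in stathz ticks but consumed in hz ticks. A hedged restatement of the rescaling, with assumed variable names:

/* Rescale, and never let integer division shrink the slice to zero. */
sched_slice = (sched_slice * hz) / realstathz;
if (sched_slice < 1)
        sched_slice = 1;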
1845 * Adjust the priority of a thread. Move it to the appropriate run-queue
1870 * If the priority has been elevated due to priority
1871 * propagation, we may have to move ourselves to a new
1872 * queue. This could be optimized to not re-add in some
1882 * If the thread is currently running we may have to adjust the lowpri
1913 * needs to have to satisfy other possible priority lending
1936 * Standard entry for setting the priority to an absolute value.
2004 * Like the above but first check if there is anything to do.
2020 * This tdq is about to idle. Try to steal a thread from another CPU before
2036 /* We don't want to be preempted while we're iterating. */
2051 * We found no CPU to steal from in this group. Escalate to
2084 * At this point unconditionally exit the loop to bound
2091 * Try to lock both queues. If we are assigned a thread while
2092 * waiting for the lock, switch to it now instead of stealing.
2111 * If we fail to acquire one due to affinity restrictions,
2112 * bail out and let the idle thread do a more complete search
2148 * Do the lock dance required to avoid LOR. We have an
2173 * Switch threads. This function has to handle threads coming in while
2175 * migrating a thread from one queue to another as running threads may
2236 /* This thread must be going to sleep. */
2259 * We enter here with the thread blocked and assigned to the
2270 * Call the MD code to switch contexts if necessary.
2281 * If DTrace has set the active vtime enum to anything
2283 * function to call.
2346 * Schedule a thread to resume execution and record how long it voluntarily
2446 /* Attempt to quickly learn interactivity. */
2467 * Return some of the child's priority and interactivity to the parent.
2482 * Penalize another thread for the time spent on this one. This helps to
2485 * causes new processes spawned by it to receive worse scores immediately.
2494 * Give the child's runtime to the parent without returning the
2495 * sleep time as a penalty to the parent. This causes shells that
2496 * launch expensive things to mark their children as expensive.
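File lines 2467-2496 describe the fork/exit bookkeeping: when a child exits, its run time is folded back into the parent without any sleep-time credit, so a shell that keeps launching cpu-bound children soon scores as cpu-bound itself. A rough sketch of that accounting, with every name hypothetical:

#include <stdint.h>

struct ex_hist {
        uint64_t runtime;       /* decayed cpu time */
        uint64_t slptime;       /* decayed voluntary sleep time */
};

/* On child exit: charge its cpu time to the parent, grant no sleep credit. */
static void
ex_exit_to_parent(struct ex_hist *parent, const struct ex_hist *child)
{
        parent->runtime += child->runtime;
        /* parent->slptime deliberately left unchanged */
}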
2533 * Fix priorities on return to user-space. Priorities may be elevated due
2534 * to static priorities in msleep() or similar.
2549 "Interrupt thread preemptions due to time-sharing");
2589 * If there is some activity, seed it to reflect that.
2595 * Advance the insert index once for each tick to ensure that all
2596 * threads get a chance to run.
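File lines 2595-2596 refer to the rotating timeshare queue: threads are inserted at an offset from an index that advances once per tick, so even the worst-priority slot eventually reaches the head. Schematically, with the queue size and names invented:

#define EX_NQUEUES 64

static unsigned ex_idx;                 /* rotating head of the calendar */

/* Called once per clock tick: rotate the calendar by one slot. */
static void
ex_tick(void)
{
        ex_idx = (ex_idx + 1) % EX_NQUEUES;
}

/* Worse priorities land further from the head, but the rotation bounds how
 * long any slot can be bypassed. */
static unsigned
ex_insert_slot(unsigned pri_offset)
{
        return ((ex_idx + pri_offset) % EX_NQUEUES);
}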
2610 * We used a tick; charge it to the thread so
2676 * Choose the highest priority thread to run. The thread is removed from
2724 * Add a thread to a thread queue. Select the appropriate runq and add the
2725 * thread to it. This is the internal function called when the tdq is
2736 ("sched_add: trying to run inhibited thread"));
2751 * Select the target thread queue and add a thread to it. Request
2780 * Pick the destination cpu and if it isn't ours transfer to the
2793 * Now that the thread is moving to the run-queue, set the lock
2794 * to the scheduler's lock.
2862 * Enforce affinity settings for a thread. Called after adjustments to
2883 * Force a switch before returning to userspace. If the
2884 * target thread is not running locally, send an IPI to force
2894 * Bind a thread to a target cpu.
3046 * to avoid races with tdq_notify().
3051 * threads often enough to make it worthwhile to do so in
3052 * order to avoid calling cpu_idle().
3063 * other wakeup reasons equal to context switches.
3074 * sched_throw_grab() chooses a thread from the queue to switch to
3075 * next. It returns with the tdq lock dropped in a spinlock section to
3172 * Create on first use to catch odd startup conditions.
3204 * Build the CPU topology dump string. It is called recursively to collect
3209 int indent)
3214 sbuf_printf(sb, "%*s<group level=\"%d\" cache-level=\"%d\">\n", indent,
3215 "", 1 + indent / 2, cg->cg_level);
3216 sbuf_printf(sb, "%*s <cpu count=\"%d\" mask=\"%s\">", indent, "",
3231 sbuf_printf(sb, "%*s <flags>", indent, "");
3244 sbuf_printf(sb, "%*s <children>\n", indent, "");
3247 &cg->cg_child[i], indent+2);
3248 sbuf_printf(sb, "%*s </children>\n", indent, "");
3250 sbuf_printf(sb, "%*s</group>\n", indent, "");
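From the sbuf_printf() format strings above (file lines 3214-3250), each recursion level prints a <group> element, an indented <cpu> line, optional <flags>, and then recurses into <children> with indent increased by two. The dump therefore nests roughly like this (attribute values elided; the caller chooses the initial indent):

<group level="1" cache-level="...">
 <cpu count="..." mask="...">...</cpu>
 <children>
  <group level="2" cache-level="...">
   ...
  </group>
 </children>
</group>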
3318 "Assign static kernel priorities to sleeping threads");
3326 "Number of hz ticks to keep thread affinity for");
3331 "Average period in stathz ticks to run the long-term balancer");
3333 "Attempts to steal work from other cores before idling");