History log of /freebsd/sys/kern/sched_ule.c (Results 326 – 350 of 810)
Revision Date Author Comments
# 58590eb0 16-Oct-2007 Sam Leffler <sam@FreeBSD.org>

ULE works fine on arm; allow it to be used

Reviewed by: jeff, cognet, imp
MFC after: 1 week


# 88f530cc 09-Oct-2007 Jeff Roberson <jeff@FreeBSD.org>

- Bail out of tdq_idled if !smp_started or idle stealing is disabled. This
fixes a bug on UP machines with SMP kernels where the idle thread
constantly switches after trying to steal work from the local cpu.
- Make the idle stealing code more robust against self selection.
- Prefer to steal from the cpu with the highest load that has at least one
transferable thread. Before we selected the cpu with the highest
transferable count which excludes bound threads.

Collaborated with: csjp
Approved by: re



# 05dc0eb2 09-Oct-2007 Jeff Roberson <jeff@FreeBSD.org>

- Restore historical sched_yield() behavior by changing sched_relinquish()
to simply switch rather than lowering priority and switching. This allows
threads of equal priority to run but not lesser priority.

Discussed with: davidxu
Reported by: NIIMI Satoshi <sa2c@sa2c.net>
Approved by: re



# 59c68134 02-Oct-2007 Jeff Roberson <jeff@FreeBSD.org>

- Reassign the thread queue lock to newtd prior to switching. Assigning
after the switch leads to a race where the outgoing thread still owns
the local queue lock while another cpu may switch it in. This race
is only possible on machines where cpu_switch can take significantly
longer on different cpus which in practice means HTT machines with
unfair thread scheduling algorithms.

Found by: kris (of course)
Approved by: re



# 7fcf154a 02-Oct-2007 Jeff Roberson <jeff@FreeBSD.org>

- Move the rebalancer back into hardclock to prevent potential softclock
starvation caused by unbalanced interrupt loads.
- Change the rebalancer to work on stathz ticks but retain randomization.
- Simplify locking in tdq_idled() to use the tdq_lock_pair() rather than
complex sequences of locks to avoid deadlock.

Reported by: kris
Approved by: re



# 02e2d6b4 27-Sep-2007 Jeff Roberson <jeff@FreeBSD.org>

- Honor the PREEMPTION and FULL_PREEMPTION flags by setting the default
value for kern.sched.preempt_thresh appropriately. It can still be
adjusted at runtime. ULE will still use IPI_PREEMPT in certain
migration situations.
- Assert that we're not trying to compile ULE on an unsupported
architecture. To date, I believe only i386 and amd64 have implemented
the third cpu switch argument required.

Approved by: re



# e270652b 24-Sep-2007 Jeff Roberson <jeff@FreeBSD.org>

- Bound the interactivity score so that it cannot become negative.

Approved by: re


# a5423ea3 22-Sep-2007 Jeff Roberson <jeff@FreeBSD.org>

- Improve grammar. s/it's/its/.
- Improve the long-term load balancer by always IPIing exactly once.
Previously the delay after rebalancing could cause problems with
uneven workloads.
- Allow nice to have a linear effect on the interactivity score. This
allows negatively niced programs to stay interactive longer. It may be
useful with very expensive Xorg servers under high loads. In general
it should not be necessary to alter the nice level to improve interactive
response. We may also want to consider never allowing positively niced
processes to become interactive at all.
- Initialize ccpu to 0 rather than 0.0. The decimal point was leftover
from when the code was copied from 4bsd. ccpu is 0 in ULE because ULE
only exports weighted cpu values.

Reported by: Steve Kargl (Load balancing problem)
Approved by: re



# 54b0e65f 21-Sep-2007 Jeff Roberson <jeff@FreeBSD.org>

- Redefine p_swtime and td_slptime as p_swtick and td_slptick. This
changes the units from seconds to the value of 'ticks' when swapped
in/out. ULE does not have a periodic timer that scans all threads in
the system and as such maintaining a per-second counter is difficult.
- Change computations requiring the unit in seconds to subtract ticks
and divide by hz. This does make the wraparound condition hz times
more frequent but this is still in the range of several months to
years and the adverse effects are minimal.

Approved by: re



# b61ce5b0 17-Sep-2007 Jeff Roberson <jeff@FreeBSD.org>

- Move all of the PS_ flags into either p_flag or td_flags.
- p_sflag was mostly protected by PROC_LOCK rather than the PROC_SLOCK or
previously the sched_lock. These bugs have existed for some time.
- Allow swapout to try each thread in a process individually and then
swapin the whole process if any of these fail. This allows us to move
most scheduler related swap flags into td_flags.
- Keep ki_sflag for backwards compat but change all in source tools to
use the new and more correct location of P_INMEM.

Reported by: pho
Reviewed by: attilio, kib
Approved by: re (kensmith)



# 9862717a 20-Aug-2007 Jeff Roberson <jeff@FreeBSD.org>

- Set steal_thresh to log2(ncpus). This improves idle-time load balancing
on 2cpu machines by reducing it to 1 by default. This improves loaded
operation on 8cpu machines by increasing it to 3 where the extra idle
time is not as critical.

Approved by: re



# 3a78f965 04-Aug-2007 Jeff Roberson <jeff@FreeBSD.org>

- Fix one line that erroneously crept in my last commit.

Approved by: re


# c47f202b 04-Aug-2007 Jeff Roberson <jeff@FreeBSD.org>

- Share scheduler locks between hyper-threaded cores to protect the
tdq_group structure. Hyper-threaded cores won't really benefit from
separate locks anyway.
- Separate out the migration case from sched_switch to simplify the main
switch code. We only migrate here if called via sched_bind().
- When preempted place the preempted thread back in the same queue at
the head.
- Improve the cpu group and topology infrastructure.

Tested by: many on current@
Approved by: re



# 28994a58 19-Jul-2007 Jeff Roberson <jeff@FreeBSD.org>

- Refine the load balancer to improve buildkernel times on dual core
machines.
- Leave the long-term load balancer running by default once per second.
- Enable stealing load from the idle thread only when the remote processor
has more than two transferable tasks. Setting this to one further
improves buildworld. Setting it higher improves mysql.
- Remove the bogus pick_zero option. I had not intended to commit this.
- Entirely disallow migration for threads with SRQ_YIELDING set. This
balances out the extra migration allowed for with the load balancers.
It also makes pick_pri perform better as I had anticipated.

Tested by: Dmitry Morozovsky <marck@rinet.ru>
Approved by: re



# 08c9a16c 19-Jul-2007 Jeff Roberson <jeff@FreeBSD.org>

- When newtd is specified to sched_switch() it was not being initialized
properly. We have to temporarily unlock the TDQ lock so we can lock
the thread and add it to the run queue. This is used only for KSE.
- When we add a thread from the tdq_move() via sched_balance() we need to
ipi the target if it's sitting in the idle thread or it'll never run.

Reported by: Rene Landan
Approved by: re



# ae7a6b38 18-Jul-2007 Jeff Roberson <jeff@FreeBSD.org>

ULE 3.0: Fine grain scheduler locking and affinity improvements. This has
been in development for over 6 months as SCHED_SMP.
- Implement one spin lock per thread-queue. Threads assigned to a
run-queue point to this lock via td_lock.
- Improve the facility for assigning threads to CPUs now that sched_lock
contention no longer dominates scheduling decisions on larger SMP
machines.
- Re-write idle time stealing in an attempt to make it less damaging to
general performance. This is still disabled by default. See
kern.sched.steal_idle.
- Call the long-term load balancer from a callout rather than sched_clock()
so there are no locks held. This is disabled by default. See
kern.sched.balance.
- Parameterize many scheduling decisions via sysctls. Try to document
these via sysctl descriptions.
- General structural and naming cleanups.
- Document each function with comments.

Tested by: current@ amd64, x86, UP, SMP.
Approved by: re



# dda713df 15-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

- Fix an off by one error in sched_pri_range.
- In tdq_choose() only assert that a thread does not have too high a
priority (low value) for the queue we removed it from. This will catch
bugs in priority elevation. It's not a serious error for the thread
to have too low a priority as we don't change queues in this case as
an optimization.

Reported by: kris



# fe54587f 12-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

- Move some common code out of sched_fork_exit() and back into fork_exit().


# 710eacdc 06-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

- Placing the 'volatile' on the right side of the * in the td_lock
declaration removes the need for __DEVOLATILE().

Pointed out by: tegge


# 95e3a0bc 05-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

- Better fix for previous error; use DEVOLATILE on the td_lock pointer as
it can actually sometimes be something other than sched_lock even on
schedulers which rely on a global scheduler lock.

Tested by: kan



# c219b097 05-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

- Pass &sched_lock as the third argument to cpu_switch() as this will
always be the correct lock and we don't get volatile warnings this
way.

Pointed out by: kan


# 36b36916 05-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

- Define TDQ_ID() for the !SMP case.
- Default pick_pri to off. It is not faster in most cases.


# 7b20fb19 05-Jun-2007 Jeff Roberson <jeff@FreeBSD.org>

Commit 1/14 of sched_lock decomposition.
- Move all scheduler locking into the schedulers utilizing a technique
similar to Solaris's container locking.
- A per-process spinlock is now used to protect the queue of threads,
thread count, suspension count, p_sflags, and other process
related scheduling fields.
- The new thread lock is actually a pointer to a spinlock for the
container that the thread is currently owned by. The container may
be a turnstile, sleepqueue, or run queue.
- thread_lock() is now used to protect access to thread related scheduling
fields. thread_unlock() unlocks the lock and thread_set_lock()
implements the transition from one lock to another.
- A new "blocked_lock" is used in cases where it is not safe to hold the
actual thread's lock yet we must prevent access to the thread.
- sched_throw() and sched_fork_exit() are introduced to allow the
schedulers to fix-up locking at these points.
- Add some minor infrastructure for optionally exporting scheduler
statistics that were invaluable in solving performance problems with
this patch. Generally these statistics allow you to differentiate
between different causes of context switches.

Tested by: kris, current@
Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)



# fb1e3ccd 20-Apr-2007 Kip Macy <kmacy@FreeBSD.org>

Schedule the ithread on the same cpu as the interrupt

Tested by: kmacy
Submitted by: jeffr


# 52bc574c 18-Mar-2007 Jeff Roberson <jeff@FreeBSD.org>

- Handle the case where slptime == runtime.

Submitted by: Antoine Brodin

