History log of /freebsd/sys/kern/sched_ule.c (Results 401 – 425 of 810)
Revision    Date    Author    Comments
# 8ffb8f55 14-Dec-2004 Jeff Roberson <jeff@FreeBSD.org>

- In kseq_choose(), don't recalculate slice values for processes with a
nice of 0. Doing so can cause an infinite loop because they should be
running, but a nice -20 process could prevent them from doing so.
- Add a new flag KEF_PRIOELEV to flag a thread that has had its priority
elevated due to priority propagation. If a thread has had its priority
elevated, we assume that it must go on the current queue and it must
get a slice.
- In sched_userret() if our priority was elevated and we shouldn't have
a timeslice, yield here until we should.

Found/Tested by: glebius
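
As an aside, a tiny user-space model of the KEF_PRIOELEV idea may help. Only the flag name comes from the commit message above; the structure, helper, and slice handling below are illustrative stand-ins, not the committed kernel code.

#include <stdio.h>

#define KEF_PRIOELEV	0x0001	/* priority was elevated by propagation */

struct fake_kse {
	int	flags;
	int	slice;		/* remaining timeslice; 0 means "should not run" */
};

/*
 * Model of the sched_userret() check described above: once the borrowed
 * priority is no longer needed, a thread that holds no slice yields.
 */
static void
userret_model(struct fake_kse *ke, void (*yield)(void))
{
	if ((ke->flags & KEF_PRIOELEV) && ke->slice == 0)
		yield();
	ke->flags &= ~KEF_PRIOELEV;
}

static void
fake_yield(void)
{
	printf("yielding until a new slice is assigned\n");
}

int
main(void)
{
	struct fake_kse ke = { KEF_PRIOELEV, 0 };

	userret_model(&ke, fake_yield);
	return (0);
}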



# 2d59a44d 13-Dec-2004 Jeff Roberson <jeff@FreeBSD.org>

- Take up a 'slot' while we're on the assigned queue, waiting to be
posted to another processor. Otherwise, kern_switch() gets confused
and tries to sched_add(NULL).


# 3ba5c2fa 11-Nov-2004 Jeff Roberson <jeff@FreeBSD.org>

- Temporarily disable the nice -20 throttling code. It has some interaction
with APM that I do not understand yet.

Reported & Tested by: glebius


Revision tags: release/5.3.0_cvs, release/5.3.0
# 0516c8dd 30-Oct-2004 Jeff Roberson <jeff@FreeBSD.org>

- When choosing a thread on the run queue, check to see if its nice is
outside of the nice threshold due to a recently awoken thread with a
lower nice value. This further reduces the amount of time a positively
niced thread gets while running in conjunction with a workload that has
many short sleeps (ie buildworld).
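
A small self-contained model of that re-check follows; the window width, slice bounds, and function name are assumptions for illustration, not the actual kseq_choose() code.

#include <stdio.h>

#define NICE_WINDOW	20	/* assumed width of the nice threshold */
#define SLICE_MIN	1	/* illustrative minimum slice, in ticks */
#define SLICE_MAX	10	/* illustrative maximum slice, in ticks */

/*
 * Model: a thread whose nice is more than NICE_WINDOW above the lowest
 * nice currently on the run queue only gets the minimum slice.
 */
static int
slice_for(int td_nice, int min_nice_on_queue)
{
	int slice;

	if (td_nice - min_nice_on_queue > NICE_WINDOW)
		return (SLICE_MIN);
	slice = SLICE_MAX - (td_nice - min_nice_on_queue);
	return (slice < SLICE_MIN ? SLICE_MIN : slice);
}

int
main(void)
{
	/* A nice +5 thread, before and after a nice -20 thread wakes up. */
	printf("lowest nice   0: slice %d\n", slice_for(5, 0));
	printf("lowest nice -20: slice %d\n", slice_for(5, -20));
	return (0);
}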



# 6bd0c7fd 30-Oct-2004 Jeff Roberson <jeff@FreeBSD.org>

- In sched_prio() check to see if the kse is assigned to a runq as the
check for TD_ON_RUNQ() no longer means the thread is really on a run-
queue. I suspect this state should be re-evaluated as it must mean
something else now. This fixes ULE+KSE+PREEMPTION on UP x86.



# f8135176 06-Oct-2004 Julian Elischer <julian@FreeBSD.org>

Fix whitespace botch that only showed up in the commit message diff :-/

MFC after: 4 days


# c20c691b 06-Oct-2004 Julian Elischer <julian@FreeBSD.org>

When preempting a thread, put it back on the HEAD of its run queue.
(Only really implemented in 4bsd)

MFC after: 4 days


# c5c3fb33 05-Oct-2004 Julian Elischer <julian@FreeBSD.org>

Oops. Left out part of the diff.

MFC after: 4 days


# d39063f2 05-Oct-2004 Julian Elischer <julian@FreeBSD.org>

Use some macros to track available scheduler slots to allow
easier debugging.

MFC after: 4 days


# 14f0e2e9 16-Sep-2004 Julian Elischer <julian@FreeBSD.org>

Clean up thread runq accounting a bit.

MFC after: 3 days


# 1e7fad6b 11-Sep-2004 Scott Long <scottl@FreeBSD.org>

Revert the previous round of changes to td_pinned. The scheduler isn't
fully initialized when the pmap layer tries to call sched_pin() early in
the boot, which results in a quick panic. Use ke_pinned instead as was
originally done with Tor's patch.

Approved by: julian



# 513efa5b 11-Sep-2004 Julian Elischer <julian@FreeBSD.org>

Try committing from the right tree this time
MFC after: 2 days


# 5c854acc 11-Sep-2004 Julian Elischer <julian@FreeBSD.org>

Make up my mind whether cpu pinning is stored in the thread structure or the
scheduler-specific extension to it. Put it in the extension, as the
implementation details of how the pinning is done needn't be visible
outside the scheduler.

Submitted by: tegge (of course!) (with changes)
MFC after: 3 days



# 3389af30 10-Sep-2004 Julian Elischer <julian@FreeBSD.org>

Add some code to allow threads to nominate a sibling to run if they are going to sleep.

MFC after: 1 week


# ed062c8d 05-Sep-2004 Julian Elischer <julian@FreeBSD.org>

Refactor a bunch of scheduler code to give basically the same behaviour
but with slightly cleaned up interfaces.

The KSE structure has become the same as the "per thread scheduler
private data" structure. In order not to make the diffs too great,
one is #defined as the other at this time.

The KSE (or td_sched) structure is now allocated per thread and has no
allocation code of its own.

Concurrency for a KSEGRP is now tracked via a simple pair of counters
rather than using KSE structures as tokens.

Since the KSE structure is different in each scheduler, kern_switch.c
is now included at the end of each scheduler. Nothing outside the
scheduler knows the contents of the KSE (aka td_sched) structure.

The fields in the ksegrp structure that have to do with the scheduler's
queueing mechanisms are now moved to the kg_sched structure
(the per-ksegrp scheduler private data structure). In other words, how the
scheduler queues and keeps track of threads is no one's business except
the scheduler's. This should allow people to write experimental
schedulers with completely different internal structuring.

A scheduler call sched_set_concurrency(kg, N) has been added that
notifies the scheduler that no more than N threads from that ksegrp
should be allowed to be concurrently scheduled. This is also
used to enforce 'fairness' at this time, so that a ksegrp with
10000 threads cannot swamp the run queue and force out a process
with 1 thread, since the current code will not set the concurrency above
NCPU, and both schedulers will not allow more than that many
onto the system run queue at a time. Each scheduler should eventually develop
its own method of doing this now that they are effectively separated.

Rejig libthr's kernel interface to follow the same code paths as
libkse for scope system threads. This has slightly hurt libthr's performance
but I will work to recover as much of it as I can.

Thread exit code has been cleaned up greatly.
The exit and exec code now transitions a process back to
'standard non-threaded mode' before taking the next step.
Reviewed by: scottl, peter
MFC after: 1 week
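
A toy model of the counter scheme sketched above follows; the field and helper names are simplified stand-ins rather than the kernel's, and only sched_set_concurrency(kg, N) is taken from the message itself.

#include <stdio.h>

/* Simplified stand-in for the per-ksegrp scheduler data. */
struct fake_ksegrp {
	int	concurrency;	/* max threads allowed on the run queue at once */
	int	avail_slots;	/* slots currently free */
};

/* Model of sched_set_concurrency(kg, N). */
static void
set_concurrency(struct fake_ksegrp *kg, int n)
{
	kg->avail_slots += n - kg->concurrency;
	kg->concurrency = n;
}

/* Returns 1 if the thread may be placed on the system run queue now. */
static int
take_slot(struct fake_ksegrp *kg)
{
	if (kg->avail_slots <= 0)
		return (0);	/* over the cap; stays on the ksegrp's own queue */
	kg->avail_slots--;
	return (1);
}

static void
release_slot(struct fake_ksegrp *kg)
{
	kg->avail_slots++;
}

int
main(void)
{
	struct fake_ksegrp kg = { 0, 0 };

	set_concurrency(&kg, 2);	/* e.g. capped at the number of CPUs */
	printf("%d\n", take_slot(&kg));	/* 1 */
	printf("%d\n", take_slot(&kg));	/* 1 */
	printf("%d\n", take_slot(&kg));	/* 0: cap reached */
	release_slot(&kg);
	printf("%d\n", take_slot(&kg));	/* 1 again */
	return (0);
}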



# 9923b511 02-Sep-2004 Scott Long <scottl@FreeBSD.org>

Turn PREEMPTION into a kernel option. Make sure that it's defined if
FULL_PREEMPTION is defined. Add a runtime warning to ULE if PREEMPTION is
enabled (code inspired by the PREEMPTION warning in kern_switch.c). This
is a possible MT5 candidate.



# 2630e4c9 01-Sep-2004 Julian Elischer <julian@FreeBSD.org>

Give setrunqueue() and sched_add() more of a clue as to
where they are coming from and what is expected from them.

MFC after: 2 days


# 91c1172a 28-Aug-2004 Peter Wemm <peter@FreeBSD.org>

Commit Jeff's suggested changes for avoiding a bug that is exposed by
preemption and/or the rev 1.79 kern_switch.c change that was backed out.

The thread was being assigned to a runq without adding in the load, which
would cause the counter to hit -1.
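
The pairing the fix restores can be shown with a toy counter; the names below are illustrative, not the kern_switch.c or sched_ule.c code in question.

#include <assert.h>
#include <stdio.h>

/* Toy per-CPU queue: every assignment must also add to the load count. */
struct fake_kseq {
	int	load;
};

static void
runq_assign(struct fake_kseq *kseq)
{
	kseq->load++;		/* the part the buggy path skipped */
}

static void
runq_remove(struct fake_kseq *kseq)
{
	kseq->load--;
	assert(kseq->load >= 0);	/* without the add above this hits -1 */
}

int
main(void)
{
	struct fake_kseq kseq = { 0 };

	runq_assign(&kseq);
	runq_remove(&kseq);
	printf("load = %d\n", kseq.load);
	return (0);
}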



# f2b74cbf 12-Aug-2004 Jeff Roberson <jeff@FreeBSD.org>

- Introduce a new flag KEF_HOLD that prevents sched_add() from doing a
migration. Use this in sched_prio() and sched_switch() to stop us from
migrating threads that are in short term sleeps or are runnable. These
extra migrations were added in the patches to support KSE.
- Only set NEEDRESCHED if the thread we're adding in sched_add() has a
lower priority and is being placed on the current queue.
- Fix some minor whitespace problems.
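
A self-contained illustration of the KEF_HOLD behaviour: only the flag name is taken from the entry above; the CPU-selection helper and fields are stand-ins for what sched_add() actually does.

#include <stdio.h>

#define KEF_HOLD	0x0002	/* do not migrate on this sched_add() */

struct fake_kse {
	int	flags;
	int	cpu;		/* CPU the thread last ran on */
};

/* Model: choose a CPU for a thread that just became runnable. */
static int
pick_cpu(struct fake_kse *ke, int idlest_cpu)
{
	if (ke->flags & KEF_HOLD) {
		/* Short-term sleep or priority change: stay where we are. */
		ke->flags &= ~KEF_HOLD;
		return (ke->cpu);
	}
	/* Otherwise the balancer may move the thread to a less loaded CPU. */
	return (idlest_cpu);
}

int
main(void)
{
	struct fake_kse ke = { KEF_HOLD, 1 };

	printf("with KEF_HOLD:    cpu %d\n", pick_cpu(&ke, 3));
	printf("without KEF_HOLD: cpu %d\n", pick_cpu(&ke, 3));
	return (0);
}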



# 2454aaf5 10-Aug-2004 Jeff Roberson <jeff@FreeBSD.org>

- Use a new flag, KEF_XFERABLE, to record with certainty that this kse had
contributed to the transferable load count. This prevents any potential
problems with sched_pin() being used around calls to setrunqueue().
- Change the sched_add() load balancing algorithm to try to migrate on
wakeup. This attempts to place threads that communicate with each other
on the same CPU.
- Don't clear the idle counts in kseq_transfer(); let the cpus do that when
they call sched_add() from kseq_assign().
- Correct a few out of date comments.
- Make sure the ke_cpu field is correct when we preempt.
- Call kseq_assign() from sched_clock() to catch any assignments that were
done without IPI. Presently all assignments are done with an IPI, but I'm
trying a patch that limits that.
- Don't migrate a thread if it is still runnable in sched_add(). Previously,
this could only happen for KSE threads, but due to changes to
sched_switch() all threads went through this path.
- Remove some code that was added with preemption but is not necessary.
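
The first item lends itself to a small model: only KEF_XFERABLE is taken from the entry above, and the counter and helpers are simplified stand-ins. The point is that the flag, not the current pin state, decides whether the transferable count is decremented, so a pin or unpin that lands while the thread sits on the queue cannot unbalance it.

#include <stdio.h>

#define KEF_XFERABLE	0x0004	/* this kse was counted as transferable */

struct fake_kseq {
	int	transferable;
};

struct fake_kse {
	int	flags;
	int	pinned;
};

static void
load_add(struct fake_kseq *kseq, struct fake_kse *ke)
{
	if (!ke->pinned) {
		kseq->transferable++;
		ke->flags |= KEF_XFERABLE;
	}
}

static void
load_rem(struct fake_kseq *kseq, struct fake_kse *ke)
{
	/* Trust the flag, not the current pin state, to keep the count exact. */
	if (ke->flags & KEF_XFERABLE) {
		kseq->transferable--;
		ke->flags &= ~KEF_XFERABLE;
	}
}

int
main(void)
{
	struct fake_kseq kseq = { 0 };
	struct fake_kse ke = { 0, 1 };	/* added while pinned: not counted */

	load_add(&kseq, &ke);
	ke.pinned = 0;			/* unpinned before removal */
	load_rem(&kseq, &ke);
	printf("transferable = %d\n", kseq.transferable);	/* 0, not -1 */
	return (0);
}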



# 00fbcda8 28-Jul-2004 Alexander Kabaev <kan@FreeBSD.org>

Avoid casts as lvalues.


# e038d354 24-Jul-2004 Scott Long <scottl@FreeBSD.org>

Clean up whitespace, increase consistency and correctness.

Submitted by: bde


# 55d44f79 19-Jul-2004 Julian Elischer <julian@FreeBSD.org>

When calling scheduler entrypoints for creating new threads and processes,
specify "us" as the thread not the process/ksegrp/kse.
You can always find the others from the thread but the converse is not true.
Theoretically this would lead to runtime being allocated to the wrong
entity in some cases, though it is not clear how often this actually happened.
(It would only affect threaded processes and would probably be pretty benign,
but it WAS a bug.)

Reviewed by: peter



# 52eb8464 16-Jul-2004 John Baldwin <jhb@FreeBSD.org>

- Move TDF_OWEPREEMPT, TDF_OWEUPC, and TDF_USTATCLOCK over to td_pflags
since they are only accessed by curthread and thus do not need any
locking.
- Move pr_addr and pr_ticks out of struct uprof (which is per-process)
and directly into struct thread as td_profil_addr and td_profil_ticks
as these variables are really per-thread. (They are used to defer an
addupc_intr() that was too "hard" until ast()).
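
A small model of that deferral, assuming simplified stand-in helpers; only td_profil_addr, td_profil_ticks, and the addupc_intr()/ast() split are names from the entry above.

#include <stdint.h>
#include <stdio.h>

/* Simplified per-thread fields, named after the ones in the message. */
struct fake_thread {
	uintptr_t	td_profil_addr;		/* deferred sample PC */
	int		td_profil_ticks;	/* deferred tick count */
	int		owes_profile;		/* stand-in for the pending flag */
};

/* Interrupt time: too "hard" to update the profile buffer, so just record. */
static void
profile_intr(struct fake_thread *td, uintptr_t pc, int ticks)
{
	td->td_profil_addr = pc;
	td->td_profil_ticks += ticks;
	td->owes_profile = 1;
}

/* ast(): safe context, flush the deferred sample. */
static void
profile_ast(struct fake_thread *td)
{
	if (!td->owes_profile)
		return;
	printf("charge %d tick(s) at pc %#lx\n",
	    td->td_profil_ticks, (unsigned long)td->td_profil_addr);
	td->td_profil_ticks = 0;
	td->owes_profile = 0;
}

int
main(void)
{
	struct fake_thread td = { 0, 0, 0 };

	profile_intr(&td, 0x401000, 1);
	profile_ast(&td);
	return (0);
}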



# 2c3490b1 10-Jul-2004 Marcel Moolenaar <marcel@FreeBSD.org>

Update for the KDB framework:
o Call kdb_backtrace() instead of backtrace().

