#
ed062c8d |
| 05-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
Refactor a bunch of scheduler code to give basically the same behaviour but with slightly cleaned up interfaces.
The KSE structure has become the same as the "per thread scheduler private data" structure. In order not to make the diffs too great, one is #defined as the other at this time.
The KSE (or td_sched) structure is now allocated per thread and has no allocation code of its own.
Concurrency for a KSEGRP is now tracked via a simple pair of counters rather than by using KSE structures as tokens.
Since the KSE structure is different in each scheduler, kern_switch.c is now included at the end of each scheduler. Nothing outside the scheduler knows the contents of the KSE (aka td_sched) structure.
The fields in the ksegrp structure that are to do with the scheduler's queueing mechanisms have been moved to the kg_sched structure (the per-ksegrp scheduler private data structure). In other words, how the scheduler queues and keeps track of threads is nobody's business except the scheduler's. This should allow people to write experimental schedulers with completely different internal structuring.
A scheduler call, sched_set_concurrency(kg, N), has been added that notifies the scheduler that no more than N threads from that ksegrp should be allowed to be concurrently scheduled. This is also used to enforce 'fairness' at this time, so that a ksegrp with 10000 threads cannot swamp the run queue and force out a process with 1 thread, since the current code will not set the concurrency above NCPU and neither scheduler will allow more than that many onto the system run queue at a time (a counter sketch follows below). Each scheduler should eventually develop its own methods to do this now that they are effectively separated.
Rejig libthr's kernel interface to follow the same code paths as libkse for scope system threads. This has slightly hurt libthr's performance, but I will work to recover as much of it as I can.
Thread exit code has been cleaned up greatly. The exit and exec code now transitions a process back to 'standard non-threaded mode' before taking the next step.
Reviewed by: scottl, peter
MFC after: 1 week
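
As an illustration of the counter-based concurrency tracking described above, here is a minimal sketch under assumptions: only sched_set_concurrency() is named in the commit message, and every other identifier below is hypothetical, not the committed code.

/*
 * Illustrative sketch only -- not the committed code.  The idea: instead
 * of handing out KSE structures as tokens that let a thread onto a run
 * queue, keep two counters in the per-ksegrp scheduler data: how many
 * slots the group is allowed (sched_set_concurrency()) and how many are
 * currently in use.
 */
struct kg_sched_sketch {
	int	skg_concurrency;	/* max threads allowed on run queues */
	int	skg_runnable;		/* slots currently in use */
};

/* Limit how many threads of this ksegrp may be scheduled concurrently. */
static void
sketch_set_concurrency(struct kg_sched_sketch *skg, int n)
{
	skg->skg_concurrency = n;	/* callers would clamp this to NCPU */
}

/* Take a slot before putting a thread on the system run queue. */
static int
sketch_acquire_slot(struct kg_sched_sketch *skg)
{
	if (skg->skg_runnable >= skg->skg_concurrency)
		return (0);		/* group already at its limit */
	skg->skg_runnable++;
	return (1);
}

/* Give the slot back when the thread leaves the run queue. */
static void
sketch_release_slot(struct kg_sched_sketch *skg)
{
	skg->skg_runnable--;
}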
|
#
00b0483d |
| 03-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
Don't declare a function we are not defining.
|
#
37c28a02 |
| 03-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
fix compile for UP
|
#
293968d8 |
| 03-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
Oops, finish the last commit: moved the variables but not the declarations.
|
#
82a1dfc1 |
| 03-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
Move 4bsd-specific experimental IP code into the 4bsd file. Move the sysctls into kern.sched.
|
#
6804a3ab |
| 01-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
Give the 4bsd scheduler the ability to wake up idle processors when there is new work to be done.
MFC after: 5 days
|
#
2630e4c9 |
| 01-Sep-2004 |
Julian Elischer <julian@FreeBSD.org> |
Give setrunqueue() and sched_add() more of a clue as to where they are coming from and what is expected from them.
MFC after: 2 days
|
#
ad59c36b |
| 22-Aug-2004 |
Julian Elischer <julian@FreeBSD.org> |
Diff reduction for an upcoming patch. Use a macro that masks some of the odd goings-on with sub-structures, because they will go away anyhow.
|
#
0f54f482 |
| 11-Aug-2004 |
Julian Elischer <julian@FreeBSD.org> |
Properly keep track of how many KSEs are on the system run queue(s).
|
#
732d9528 |
| 09-Aug-2004 |
Julian Elischer <julian@FreeBSD.org> |
Increase the amount of data exported by KTR in the KTR_RUNQ setting. This extra data is needed to really follow what is going on in the threaded case.
|
#
e038d354 |
| 24-Jul-2004 |
Scott Long <scottl@FreeBSD.org> |
Clean up whitespace, increase consistency and correctness.
Submitted by: bde
|
#
55d44f79 |
| 19-Jul-2004 |
Julian Elischer <julian@FreeBSD.org> |
When calling scheduler entrypoints for creating new threads and processes, specify "us" as the thread, not the process/ksegrp/kse. You can always find the others from the thread, but the converse is not true. Theoretically this would lead to runtime being allocated to the wrong entity in some cases, though it is not clear how often this actually happened. (It would only affect threaded processes and would probably be pretty benign, but it WAS a bug.)
Reviewed by: peter
|
#
52eb8464 |
| 16-Jul-2004 |
John Baldwin <jhb@FreeBSD.org> |
- Move TDF_OWEPREEMPT, TDF_OWEUPC, and TDF_USTATCLOCK over to td_pflags since they are only accessed by curthread and thus do not need any locking.
- Move pr_addr and pr_ticks out of struct uprof (which is per-process) and directly into struct thread as td_profil_addr and td_profil_ticks, as these variables are really per-thread. (They are used to defer an addupc_intr() that was too "hard" until ast().)
|
#
6942d433 |
| 13-Jul-2004 |
John Baldwin <jhb@FreeBSD.org> |
Set TDF_NEEDRESCHED when a higher priority thread is scheduled in sched_add() rather than just doing it in sched_wakeup(). The old ithread preemption code used to set NEEDRESCHED unconditionally if it didn't preempt, which masked this bug in SCHED_4BSD.
Noticed by: jake
Reported by: kensmith, marcel
|
#
0c0b25ae |
| 02-Jul-2004 |
John Baldwin <jhb@FreeBSD.org> |
Implement preemption of kernel threads natively in the scheduler rather than as one-off hacks in various other parts of the kernel:
- Add a function maybe_preempt() that is called from sched_add() to determine if a thread about to be added to a run queue should be preempted to directly. If it is not safe to preempt or if the new thread does not have a high enough priority, then the function returns false and sched_add() adds the thread to the run queue. If the thread should be preempted to but the current thread is in a nested critical section, then the flag TDF_OWEPREEMPT is set and the thread is added to the run queue. Otherwise, mi_switch() is called immediately and the thread is never added to the run queue, since it is switched to directly. When exiting an outermost critical section, if TDF_OWEPREEMPT is set, then clear it and call mi_switch() to perform the deferred preemption (see the sketch below).
- Remove explicit preemption from ithread_schedule(), as calling setrunqueue() now does all the correct work. This also removes the do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the architecture supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a chance to run if the architecture supports native preemption, since the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for architectures that support native preemption, as it is unnecessary.
- Native preemption is enabled on the same archs that supported ithread preemption, namely alpha, i386, and amd64.
This change should largely be a NOP for the default case as committed except that we will do fewer context switches in a few cases and will avoid the run queues completely when preempting.
Approved by: scottl (with his re@ hat)
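
The sketch below mirrors the three outcomes described in the first bullet above; apart from the names maybe_preempt(), sched_add(), mi_switch(), and TDF_OWEPREEMPT, every identifier is a hypothetical stand-in, and the real function operates on struct thread and real priorities rather than these boolean flags.

/*
 * Hypothetical stand-in, not the real maybe_preempt(); it only mirrors
 * the three outcomes the commit message describes.
 */
enum preempt_outcome {
	QUEUED,		/* not safe / not higher priority: goes on the run queue */
	OWED,		/* higher priority, but inside a critical section */
	SWITCHED	/* preempted to directly, never queued */
};

static enum preempt_outcome
sketch_maybe_preempt(int safe, int higher_prio, int critnest, int *td_flags)
{
	if (!safe || !higher_prio)
		return (QUEUED);	/* sched_add() enqueues as before */
	if (critnest > 0) {
		*td_flags |= 0x1;	/* stands in for TDF_OWEPREEMPT */
		return (OWED);		/* deferred until the outermost critical section exits */
	}
	return (SWITCHED);		/* caller performs mi_switch() immediately */
}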
|
#
bf0acc27 |
| 02-Jul-2004 |
John Baldwin <jhb@FreeBSD.org> |
- Change mi_switch() and sched_switch() to accept an optional thread to switch to. If a non-NULL thread pointer is passed in, then the CPU will switch to that thread directly rather than calling choosethread() to pick one (sketched below).
- Make sched_switch() aware of idle threads and know to do TD_SET_CAN_RUN() instead of sticking them on the run queue, rather than requiring all callers of mi_switch() to know to do this if they can be called from an idle thread.
- Move constants for arguments to mi_switch() and thread_single() out of the middle of the function prototypes and up above into their own section.
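
A minimal sketch of the calling-convention change described in the first bullet above, using stand-in names only; the real functions take struct thread pointers and interact with the run queues, so this shows just the "explicit thread versus choosethread()" decision.

/*
 * Stand-in types and names; only the control flow matters: a non-NULL
 * thread is switched to directly, otherwise choosethread() picks one.
 */
struct thread_sketch {
	const char *ts_name;
};

static struct thread_sketch sketch_idle = { "idle" };

static struct thread_sketch *
sketch_choosethread(void)
{
	return (&sketch_idle);		/* the scheduler's choice, stubbed here */
}

static void
sketch_mi_switch(struct thread_sketch *newtd)
{
	if (newtd == NULL)
		newtd = sketch_choosethread();	/* old path: ask the scheduler */
	/* cpu_switch() to newtd would happen here */
	(void)newtd;
}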
|
#
36c6fd1c |
| 22-Jun-2004 |
Scott Long <scottl@FreeBSD.org> |
Fix another typo in the previous commit.
|
#
c38dd4b6 |
| 22-Jun-2004 |
Scott Long <scottl@FreeBSD.org> |
Fix a typo that somehow crept into the previous commit.
|
#
dc095794 |
| 22-Jun-2004 |
Scott Long <scottl@FreeBSD.org> |
Add the sysctl node 'kern.sched.name' that has the name of the scheduler currently in use. Move the 4bsd kern.quantum node to kern.sched.quantum for consistency.
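
The new node can be read from userland with sysctlbyname(3); a minimal example, assuming only that kern.sched.name returns the scheduler's name as a string.

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
	char name[64];
	size_t len = sizeof(name);

	/* Ask the kernel which scheduler it was built with. */
	if (sysctlbyname("kern.sched.name", name, &len, NULL, 0) == -1) {
		perror("sysctlbyname");
		return (1);
	}
	printf("scheduler: %s\n", name);
	return (0);
}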
|
#
fa885116 |
| 16-Jun-2004 |
Julian Elischer <julian@FreeBSD.org> |
Nice is a property of a process as a whole. I mistakenly moved it to the ksegroup when breaking up the process structure. Put it back in the proc structure.
|
Revision tags: release/4.10.0_cvs, release/4.10.0 |
|
#
7f8a436f |
| 05-Apr-2004 |
Warner Losh <imp@FreeBSD.org> |
Remove advertising clause from University of California Regent's license, per letter dated July 22, 1999.
Approved by: core
|
#
7d5ea13f |
| 05-Apr-2004 |
Doug Rabson <dfr@FreeBSD.org> |
Try not to crash instantly when signalling a libthr program to death.
|
#
8cbec0c8 |
| 05-Mar-2004 |
Robert Watson <rwatson@FreeBSD.org> |
The roundrobin callout from sched_4bsd is MPSAFE, so set up the callout as MPSAFE to avoid grabbing Giant.
Reviewed by: jhb
|
#
44f3b092 |
| 27-Feb-2004 |
John Baldwin <jhb@FreeBSD.org> |
Switch the sleep/wakeup and condition variable implementations to use the sleep queue interface:
- Sleep queues attempt to merge some of the benefits of both sleep queues and condition variables. Having sleep queues in a hash table avoids having to allocate a queue head for each wait channel. Thus, struct cv has shrunk down to just a single char * pointer now. However, the hash table does not hold threads directly, but queue heads. This means that once you have located a queue in the hash bucket, you no longer have to walk the rest of the hash chain looking for threads. Instead, you have a list of all the threads sleeping on that wait channel (a toy model follows below).
- Outside of the sleepq code and the sleep/cv code, the kernel no longer differentiates between cv's and sleep/wakeup. For example, calls to abortsleep() and cv_abort() are replaced with a call to sleepq_abort(). Thus, the TDF_CVWAITQ flag is removed. Also, calls to unsleep() and cv_waitq_remove() have been replaced with calls to sleepq_remove().
- The sched_sleep() function no longer accepts a priority argument, as sleeps no longer inherently bump the priority. Instead, this is solely a property of msleep(), which explicitly calls sched_prio() before blocking.
- The TDF_ONSLEEPQ flag has been dropped as it was never used. The associated TDF_SET_ONSLEEPQ and TDF_CLR_ON_SLEEPQ macros have also been dropped and replaced with a single explicit clearing of td_wchan. TD_SET_ONSLEEPQ() would really have only made sense if it had taken the wait channel and message as arguments anyway. Now that that only happens in one place, a macro would be overkill.
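
A toy model of the hash-table point in the first bullet above: wait channels hash to buckets of queue heads, and each queue head owns the full list of threads sleeping on one channel, so a lookup walks queue heads rather than unrelated threads. All names and sizes here are illustrative assumptions, not the kernel's sleepq implementation.

#include <stdint.h>

#define SQ_HASH_SIZE	128

struct sleeper_sketch {
	struct sleeper_sketch	*s_next;
	void			*s_wchan;	/* what this thread sleeps on */
};

struct sq_head_sketch {
	struct sq_head_sketch	*sq_next;	/* hash chain of queue heads */
	void			*sq_wchan;	/* channel this queue serves */
	struct sleeper_sketch	*sq_sleepers;	/* every thread on that channel */
};

static struct sq_head_sketch *sq_table[SQ_HASH_SIZE];

static struct sq_head_sketch *
sq_lookup_sketch(void *wchan)
{
	struct sq_head_sketch *sq;

	/* The bucket chain holds queue heads, not individual threads. */
	for (sq = sq_table[(uintptr_t)wchan % SQ_HASH_SIZE]; sq != NULL;
	    sq = sq->sq_next)
		if (sq->sq_wchan == wchan)
			return (sq);	/* list of every sleeper on wchan */
	return (NULL);
}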
|
Revision tags: release/5.2.1_cvs, release/5.2.1 |
|
#
f2f51f8a |
| 01-Feb-2004 |
Jeff Roberson <jeff@FreeBSD.org> |
- Disable ithread binding in all cases for now. This doesn't make as much sense with sched_4bsd as it does with sched_ule.
- Use P_NOLOAD instead of the absence of td->td_ithd to determine whether or not a thread should be accounted for in sched_tdcnt.
|