b2ae7ed7 | 27-Mar-2004 | Marcel Moolenaar <marcel@FreeBSD.org>
Change the type of the various CPU masks to cpumask_t. Note that as long as there are still explicit uses of int, whether in types or in function names (such as atomic_set_int() in sched_ule.c), we cannot change cpumask_t to be anything other than u_int. See also the commit log for sys/sys/types.h, revision 1.84.
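
A minimal sketch of the constraint this message describes; kseq_idle and mark_cpu_idle() are illustrative stand-ins, and kernel context is assumed for atomic_set_int():

#include <sys/types.h>
#include <machine/atomic.h>

typedef u_int cpumask_t;                /* must stay u_int for now */

static volatile cpumask_t kseq_idle;

static void
mark_cpu_idle(int cpuid)
{
        /*
         * atomic_set_int() operates on a u_int; this call only
         * type-checks while cpumask_t and u_int are the same type,
         * which is the dependency named above.  (Assumes cpuid < 32.)
         */
        atomic_set_int(&kseq_idle, 1 << cpuid);
}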

b003da79 | 21-Mar-2004 | David E. O'Brien <obrien@FreeBSD.org>
Give a more reasonable CPU time to the threads which are using scheduler activation (i.e., applications are using libpthread). This is because SCHED_ULE sometimes puts P_SA processes into ksq_next unnecessarily, which doesn't give a fair amount of CPU time to processes which are using scheduler-activation-based threads when other (semi-)CPU-intensive, non-P_SA processes are running.
Further work will no doubt be done by jeffr at a later date.

Submitted by: Taku YAMAMOTO <taku@cent.saitama-u.ac.jp>
Reviewed by: rwatson, freebsd-current@

44f3b092 | 27-Feb-2004 | John Baldwin <jhb@FreeBSD.org>
Switch the sleep/wakeup and condition variable implementations to use the sleep queue interface:
- Sleep queues attempt to merge some of the benefits of both sleep queues and condition variables. Having sleep queues in a hash table avoids having to allocate a queue head for each wait channel. Thus, struct cv has shrunk down to just a single char * pointer now. However, the hash table does not hold threads directly, but queue heads. This means that once you have located a queue in the hash bucket, you no longer have to walk the rest of the hash chain looking for threads. Instead, you have a list of all the threads sleeping on that wait channel.
- Outside of the sleepq code and the sleep/cv code the kernel no longer differentiates between cv's and sleep/wakeup. For example, calls to abortsleep() and cv_abort() are replaced with a call to sleepq_abort(). Thus, the TDF_CVWAITQ flag is removed. Also, calls to unsleep() and cv_waitq_remove() have been replaced with calls to sleepq_remove().
- The sched_sleep() function no longer accepts a priority argument as sleeps no longer inherently bump the priority. Instead, this is solely a property of msleep(), which explicitly calls sched_prio() before blocking.
- The TDF_ONSLEEPQ flag has been dropped as it was never used. The associated TDF_SET_ONSLEEPQ and TDF_CLR_ON_SLEEPQ macros have also been dropped and replaced with a single explicit clearing of td_wchan. TD_SET_ONSLEEPQ() would really have only made sense if it had taken the wait channel and message as arguments anyway. Now that that only happens in one place, a macro would be overkill.
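
A compact sketch of the hashed sleep-queue layout the first bullet describes; the struct fields, hash function, and sleepq_lookup() are illustrative, not the actual sleepq(9) code:

#include <stdint.h>
#include <sys/types.h>
#include <sys/queue.h>

struct thread;                          /* opaque for this sketch */

struct sleepqueue {
        void *sq_wchan;                 /* wait channel this queue serves */
        TAILQ_HEAD(, thread) sq_blocked;/* every thread asleep on sq_wchan */
        LIST_ENTRY(sleepqueue) sq_hash; /* queue heads chained per bucket */
};

#define SQ_HASHSIZE     128
#define SQ_HASH(wc)     (((uintptr_t)(wc) >> 8) % SQ_HASHSIZE)

static LIST_HEAD(sqbucket, sleepqueue) sq_table[SQ_HASHSIZE];

/*
 * One walk of the bucket chain finds the queue head; sq_blocked then
 * already lists every thread on that wait channel, so no further
 * per-thread chain walking is needed.
 */
static struct sleepqueue *
sleepq_lookup(void *wchan)
{
        struct sleepqueue *sq;

        LIST_FOREACH(sq, &sq_table[SQ_HASH(wchan)], sq_hash)
                if (sq->sq_wchan == wchan)
                        return (sq);
        return (NULL);
}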

Revision tags: release/5.2.1_cvs, release/5.2.1

0392e39d | 01-Feb-2004 | Jeff Roberson <jeff@FreeBSD.org>
- Allow interactive tasks to use the maximum time-slice. This is not as detrimental as I thought it would be in the case of massive process storms from a shell, and it makes regular desktop usage noticeably better.

33916c36 | 01-Feb-2004 | Jeff Roberson <jeff@FreeBSD.org>
- Add a new member to struct kseq called ksq_sysload. This is intended to track the load for the sched_load() function. In the SMP case this member is not defined because it would be redundant with the ksg_load member, which already tracks the non-ithd load.
- For sched_load() in the UP case simply return ksq_sysload. In the SMP case traverse the list of kseq groups and sum up their ksg_load fields.
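
The resulting split, paraphrased from the description above; ksg_maxid, KSEQ_GROUP(), and KSEQ_SELF() are assumed to be the scheduler's existing accessors, so treat the exact bodies as a sketch:

int
sched_load(void)
{
#ifdef SMP
        int total, i;

        /* Sum the per-group loads; ksq_sysload does not exist here. */
        total = 0;
        for (i = 0; i <= ksg_maxid; i++)
                total += KSEQ_GROUP(i)->ksg_load;
        return (total);
#else
        /* UP: the single kseq's system load is the answer. */
        return (KSEQ_SELF()->ksq_sysload);
#endif
}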

c77ac1fd | 25-Jan-2004 | Jeff Roberson <jeff@FreeBSD.org>
- sched_strict has been dead for a long time now. Get rid of it.

c494ddc8 | 25-Jan-2004 | Jeff Roberson <jeff@FreeBSD.org>
- Clean up KASSERTS.

29bcc451 | 25-Jan-2004 | Jeff Roberson <jeff@FreeBSD.org>
- Add a flags parameter to mi_switch. The value of flags may be SW_VOL or SW_INVOL. Assert that one of these is set in mi_switch() and properly adjust the rusage statistics. This simplifies the large number of users of this interface, which were previously all required to adjust the proper counter prior to calling mi_switch(). This also facilitates more switch and locking optimizations.
- Change all callers of mi_switch() to pass the appropriate parameter and remove direct references to the process statistics.
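
Schematically, the change moves the accounting from every call site into mi_switch() itself; the before/after fragment below is illustrative, with surrounding locking omitted:

/* Before: each caller bumped the right rusage counter itself. */
p->p_stats->p_ru.ru_nvcsw++;            /* voluntary context switch */
mi_switch();

/* After: the caller states its reason and mi_switch() accounts. */
mi_switch(SW_VOL);                      /* or SW_INVOL when preempted */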

Revision tags: release/5.2.0_cvs, release/5.2.0

249e0bea | 20-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Make our transfer decisions based on load and not transferable load. A cpu could have been bogged down with non-transferable load and still not migrated a new thread to an idle cpu. This required some benchmarking and tuning to get right, as the comment above it suggests.

e7a976f4 | 20-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Enable ithread migration on x86. This is done to work around a bug in the IO APIC on Xeons that prevents round-robin interrupt assignment from working.

670c524f | 20-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- In kseq_transfer() return if smp has not been started.
- In sched_add(), do the idle check prior to the transfer check so that we don't try to transfer load from an idle cpu. This fixes panics caused by IPIs on UP machines running SMP kernels.
Reported/Debugged by: seanc

9b5f6f62 | 20-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Running interactive tasks with the minimum time-slice is fine for vi and sh, but not so great for mozilla, X, etc. Add a fixed define for the slice size granted to interactive KSEs.

86e1c22a | 14-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Assign the ke_cpu field in kseq_notify() so that all of our callers do not have to do it.
- Set the ke_runq to NULL in sched_add() before calling kseq_notify(). Otherwise we may panic in sched_add() if INVARIANTS is on.

cac77d04 | 12-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Now that we have kseq groups, balance them separately.
- The new sched_balance_groups() function does intra-group balancing while sched_balance() balances the available groups.
- Pick a random time between 0 ticks and hz * 2 ticks to restart each balancing process. Each balancer has its own timeout.
- Pick a random place in the list of groups to start the search for lowest and highest group loads. This prevents us from preferring a group based on numeric position.
- Use a nasty hack to stop us from preferring cpu 0. The problem is that softclock always runs on cpu 0, so it always has a little extra load. We ignore this load in the balancer for now. In the future softclock should run on a random cpu and these hacks can go away.
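
A sketch of the randomized restart, under the assumption that each balancer is driven by its own callout; kseq_lb_callout and the interval arithmetic are illustrative:

static struct callout kseq_lb_callout;

static void
sched_balance(void *arg)
{
        /* ... find the most- and least-loaded groups and migrate ... */

        /* Re-arm at a random point within the next hz * 2 ticks. */
        callout_reset(&kseq_lb_callout, random() % (hz * 2),
            sched_balance, NULL);
}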

2e227f04 | 11-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Don't let the pctcpu rate limiter throttle us if we have recorded over SCHED_CPU_TICKS ticks. This was allowing processes to display (1/SCHED_CPU_TIME * 100) % more cpu than they had used.

b11fdad0 | 11-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- In sched_switch(), if a thread has been assigned, don't touch the runqueues or load. These things have already been taken care of in sched_bind(), which should be the only place that we're switching in an assigned thread.

80f86c9f | 11-Dec-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Add support for CPU groups to ule. All SMT cores on the same physical cpu are added to a group.
- Don't place a cpu into the kseq_idle bitmask until all cpus in that group have idled.
- Prefer idle groups over idle group members in the new kseq_transfer() function. In this way we will prefer to balance load across full cores rather than add further load to a partial core.
- Before a cpu goes idle, check the other group members for threads. Since SMT cpus may freely share threads, this is cheap.
- SMT cores may be individually pinned and bound to now. This contrasts with the old mechanism, where binding or pinning would have allowed a thread to run on any available cpu.
- Remove some unnecessary logic from sched_switch(). Priority propagation should be properly taken care of in sched_prio() now.
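
The grouping can be pictured roughly as below; the field names are illustrative, not the committed struct kseq_group layout:

struct kseq_group {
        cpumask_t ksg_cpumask;          /* mask of all member cpus */
        cpumask_t ksg_idlemask;         /* mask of currently idle members */
        int       ksg_id;               /* group index */
};

/*
 * Idle path: only advertise the cpu's group in kseq_idle once every
 * SMT sibling has idled, so a set bit always means a fully idle core.
 */
if (ksg->ksg_idlemask == ksg->ksg_cpumask)
        atomic_set_int(&kseq_idle, 1 << ksg->ksg_id);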

a2640c9b | 07-Dec-2003 | Peter Wemm <peter@FreeBSD.org>
rqb_bits[] may be an int64_t (e.g., on alpha, and recently on amd64). Be sure to shift (long)1 << 33 and higher, not (int)1. Otherwise bad things happen(TM). This is why beast.freebsd.org panicked with ULE.
Reviewed by: jeff
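
The bug in miniature, paraphrased; the real macro lives in the runq bitmap code and the fix casts the constant to the rqb word type:

#define RQB_BPW 64      /* bits per rqb_bits[] word on 64-bit platforms */

/* Broken: shifts an int, undefined once the bit index reaches 32. */
#define RQB_BIT_OLD(pri)        (1 << ((pri) & (RQB_BPW - 1)))

/* Fixed: promote the constant to the word's width before shifting. */
#define RQB_BIT_NEW(pri)        ((long)1 << ((pri) & (RQB_BPW - 1)))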

b6c71225 | 03-Dec-2003 | John Baldwin <jhb@FreeBSD.org>
Fix all users of mp_maxid to use the same semantics, namely:
1) mp_maxid is a valid FreeBSD CPU ID in the range 0 .. MAXCPU - 1.
2) For all active CPUs in the system, PCPU_GET(cpuid) <= mp_maxid.

Approved by: re (scottl)
Tested on: i386, amd64, alpha
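
A typical consumer loop under these semantics: since mp_maxid is itself a valid CPU ID, the bound is inclusive, and CPU_ABSENT() skips holes in the ID space (the wrapper function is illustrative):

#include <sys/param.h>
#include <sys/smp.h>

static void
for_each_present_cpu(void)
{
        int i;

        for (i = 0; i <= mp_maxid; i++) {
                if (CPU_ABSENT(i))
                        continue;
                /* ... per-cpu work ... */
        }
}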

fa9c9717 | 17-Nov-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Mark ksq_assigned as volatile so that when this code is used without sched_lock we can be sure that we'll pick up the new value.
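
In effect, shown schematically inside struct kseq; note that placing the qualifier after the * makes the pointer itself volatile, forcing a fresh load on every access:

struct kseq {
        /* ... */
        struct kse * volatile ksq_assigned;     /* read without sched_lock */
};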

093c05e3 | 17-Nov-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Remove long dead code. rslices hasn't been used in some time and neither has sched_pickcpu().

155b9987 | 15-Nov-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Introduce kseq_runq_{add,rem}() which are used to insert and remove kses from the run queues. Also, on SMP, we track the transferable count here. Threads are transferable only as long as they are on the run queue.
- Previously, we adjusted our load balancing based on the transferable count minus the number of actual cpus. This was done to account for the threads which were likely to be running. All of this logic is simpler now that transferable accounts for only those threads which can actually be taken. Updated various places in sched_add() and kseq_balance() to account for this.
- Rename kseq_{add,rem} to kseq_load_{add,rem} to reflect what they're really doing. The load is accounted for separately from the runq because the load is accounted for even as the thread is running.
- Fix a bug in sched_class() where we weren't properly using the PRI_BASE() version of the kg_pri_class.
- Add a large comment that describes the impact of a seemingly simple conditional in sched_add().
- Also in sched_add(), check the transferable count and KSE_CAN_MIGRATE() prior to checking kseq_idle. This reduces the frequency of access for kseq_idle, which is a shared resource.
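
A sketch of the paired helpers in their simplest form consistent with the first bullet; the real functions may also maintain per-group counts, and KSE_CAN_MIGRATE() is assumed to be the existing predicate:

static __inline void
kseq_runq_add(struct kseq *kseq, struct kse *ke)
{
#ifdef SMP
        /* Count the kse transferable only while it sits on a runq. */
        if (KSE_CAN_MIGRATE(ke))
                kseq->ksq_transferable++;
#endif
        runq_add(ke->ke_runq, ke);
}

static __inline void
kseq_runq_rem(struct kseq *kseq, struct kse *ke)
{
#ifdef SMP
        if (KSE_CAN_MIGRATE(ke))
                kseq->ksq_transferable--;
#endif
        runq_remove(ke->ke_runq, ke);
}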

f28b3340 | 06-Nov-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Somehow I botched my last commit. Add an extra ( to fix things up. I'm still not sure how this happened.
Reported by: ps

a70d729b | 06-Nov-2003 | Jeff Roberson <jeff@FreeBSD.org>
- Remove the local definition of sched_pin and unpin. They are provided in sched.h now.
- Respect the td pin count.
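
The sched.h inlines referenced above amount to a per-thread nesting count, roughly as below; respecting the count then means migration checks like the trailing fragment (all paraphrased, not the verbatim header):

static __inline void
sched_pin(void)
{
        curthread->td_pinned++;
}

static __inline void
sched_unpin(void)
{
        curthread->td_pinned--;
}

/* Respecting the count: a pinned thread must not change cpus. */
if (td->td_pinned != 0)
        return;         /* leave the thread on its current cpu */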

46f8b265 | 05-Nov-2003 | Jeff Roberson <jeff@FreeBSD.org>
- It's ok if sched_runnable() has races in it; we don't need the sched_lock here unless we have something on the assigned queue.