#
d14c38ce |
| 05-Nov-2024 |
Mark Johnston <markj@FreeBSD.org> |
sys: Avoid relying on pollution from refcount.h
Fix up headers which previously assumed that refcount.h includes systm.h, and make them more self-contained. Then, replace the systm.h include in refcount with kassert.h.
Reviewed by: imp, kib
MFC after: 1 week
Differential Revision: https://reviews.freebsd.org/D47450
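A rough before/after sketch of the include change described above (file contents are illustrative; the full list of headers touched is in the linked review):
```c
/* sys/refcount.h, before: pulled in all of <sys/systm.h> just for KASSERT(). */
#include <sys/systm.h>

/* sys/refcount.h, after: only the assertion machinery is needed. */
#include <sys/kassert.h>

/*
 * A header or .c file that previously relied on refcount.h dragging in
 * systm.h must now include what it actually uses itself.
 */
#include <sys/systm.h>		/* e.g. for printf() */
#include <sys/refcount.h>
```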
|
Revision tags: release/13.4.0, release/14.1.0, release/13.3.0, release/14.0.0 |
|
#
95ee2897 |
| 16-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
sys: Remove $FreeBSD$: two-line .h pattern
Remove /^\s*\*\n \*\s+\$FreeBSD\$$\n/
|
Revision tags: release/13.2.0, release/12.4.0, release/13.1.0, release/12.3.0, release/13.0.0, release/12.2.0, release/11.4.0, release/12.1.0, release/11.3.0 |
|
#
0269ae4c |
| 06-Jun-2019 |
Alan Somers <asomers@FreeBSD.org> |
MFHead @348740
Sponsored by: The FreeBSD Foundation
|
#
e2e050c8 |
| 20-May-2019 |
Conrad Meyer <cem@FreeBSD.org> |
Extract eventfilter declarations to sys/_eventfilter.h
This allows replacing "sys/eventfilter.h" includes with "sys/_eventfilter.h" in other header files (e.g., sys/{bus,conf,cpu}.h) and reduces header pollution substantially.
EVENTHANDLER_DECLARE and EVENTHANDLER_LIST_DECLAREs were moved out of .c files into appropriate headers (e.g., sys/proc.h, powernv/opal.h).
As a side effect of the reduced header pollution, many .c files and headers no longer pick up definitions they need. The remainder of the patch adds the appropriate includes to fix those files.
LOCK_DEBUG and LOCK_FILE_LINE_ARG are moved to sys/_lock.h, as required by sys/mutex.h since r326106 (but silently protected by header pollution prior to this change).
No functional change (intended). Of course, any out-of-tree modules that relied on header pollution for sys/eventhandler.h, sys/lock.h, or sys/mutex.h inclusion need to be fixed. __FreeBSD_version has been bumped.
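A minimal sketch of the underscore-header pattern this relies on; the declarations shown are illustrative, not the actual contents of sys/_eventfilter.h:
```c
/*
 * sys/_eventfilter.h (illustrative): only the bare declarations that other
 * headers need in order to declare function pointers and prototypes.
 */
struct knote;
typedef int	filter_attach_t(struct knote *);
typedef void	filter_detach_t(struct knote *);

/*
 * Headers such as sys/bus.h, sys/conf.h, or sys/cpu.h can then include the
 * lightweight header instead of the full one, reducing pollution.
 */
#include <sys/_eventfilter.h>
```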
|
Revision tags: release/12.0.0, release/11.2.0 |
|
#
c4e20cad |
| 27-Nov-2017 |
Pedro F. Giffuni <pfg@FreeBSD.org> |
sys/sys: further adoption of SPDX licensing ID tags.
Mainly focus on files that use the BSD 2-Clause license; however, the tool I was using misidentified many licenses, so this was mostly a manual, error-prone task.
The Software Package Data Exchange (SPDX) group provides a specification to make it easier for automated tools to detect and summarize well-known open source licenses. We are gradually adopting the specification, noting that the tags are considered only advisory and do not, in any way, supersede or replace the license texts.
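The change amounts to adding a single advisory tag at the top of each license block, for example (the exact identifier depends on the file's license):
```c
/*-
 * SPDX-License-Identifier: BSD-2-Clause
 *
 * Copyright (c) <year> <copyright holder>
 * ... existing BSD 2-Clause license text remains unchanged ...
 */
```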
|
Revision tags: release/10.4.0, release/11.1.0, release/11.0.1, release/11.0.0, release/10.3.0, release/10.2.0, release/10.1.0, release/9.3.0, release/10.0.0, release/9.2.0, release/8.4.0, release/9.1.0, release/8.3.0_cvs, release/8.3.0, release/9.0.0, release/7.4.0_cvs, release/8.2.0_cvs, release/7.4.0, release/8.2.0, release/8.1.0_cvs, release/8.1.0, release/7.3.0_cvs, release/7.3.0, release/8.0.0_cvs, release/8.0.0, release/7.2.0_cvs, release/7.2.0, release/7.1.0_cvs, release/7.1.0, release/6.4.0_cvs, release/6.4.0 |
|
#
90356491 |
| 15-May-2008 |
Attilio Rao <attilio@FreeBSD.org> |
- Embed the recursion counter for any locking primitive directly in the lock_object, using a unified field called lo_data.
- Replace lo_type usage with w_name and, at init time, pass the lock "type" directly to witness_init() from the parent lock init function. Handle delayed initialization before witness_initialize() is called through the witness_pendhelp structure.
- Axe LO_ENROLLPEND as it is not really needed. The case where a mutex whose init was delayed wants to be destroyed can't happen because witness_destroy() checks for witness_cold and panics in that case.
- In enroll(), if we cannot allocate a new object from the freelist, notify userspace through a printf().
- Modify the depart function to return nothing, as in the current CVS version it always returns true, and adjust callers accordingly.
- Fix the witness_addgraph() argument name in the prototype.
- Remove useless code from itismychild().
This commit shrinks struct lock_object and thus the locks themselves, in particular on amd64, where 2 uintptr_t (16 bytes per primitive) are saved.
Reviewed by: jhb
|
Revision tags: release/7.0.0_cvs, release/7.0.0, release/6.3.0_cvs, release/6.3.0 |
|
#
eea4f254 |
| 16-Dec-2007 |
Jeff Roberson <jeff@FreeBSD.org> |
- Re-implement lock profiling in such a way that it no longer breaks the ABI when enabled. There is no longer an embedded lock_profile_object in each lock. Instead, a list of lock_profile_objects is kept per-thread for each lock it may own. The cnt_hold statistic is now always 0 to facilitate this.
- Support shared locking by tracking individual lock instances and statistics in the per-thread per-instance lock_profile_object.
- Make the lock profiling hash table a per-cpu singly linked list with a per-cpu static lock_prof allocator. This removes the need for an array of spinlocks and reduces cache contention between cores.
- Use a separate hash for spinlocks and other locks so that only a critical_enter() is required, and not a spinlock_enter(), to modify the per-cpu tables.
- Count time spent spinning in the lock statistics.
- Remove the LOCK_PROFILE_SHARED option as it is always supported now.
- Specifically drop and release the scheduler locks in both schedulers since we track owners now.
In collaboration with: Kip Macy
Sponsored by: Nokia
|
Revision tags: release/6.2.0_cvs, release/6.2.0 |
|
#
61bd5e21 |
| 13-Nov-2006 |
Kip Macy <kmacy@FreeBSD.org> |
track lock class name in a way that doesn't break WITNESS
|
#
7c0435b9 |
| 11-Nov-2006 |
Kip Macy <kmacy@FreeBSD.org> |
MUTEX_PROFILING has been generalized to LOCK_PROFILING. We now profile wait (time waited to acquire) and hold times for *all* kernel locks. If the architecture has a system-synchronized TSC, the profiling code will use it, thereby minimizing profiling overhead. Large chunks of profiling code have been moved out of line; the overhead measured on the T1 when it is compiled in but not enabled is < 1%.
Approved by: scottl (standing in for mentor rwatson)
Reviewed by: des and jhb
|
Revision tags: release/5.5.0_cvs, release/5.5.0, release/6.1.0_cvs, release/6.1.0 |
|
#
3c6decc3 |
| 06-Jan-2006 |
John Baldwin <jhb@FreeBSD.org> |
Trim another pointer from struct lock_object (and thus from struct mtx and struct sx). Instead of storing a direct pointer to our lock_class struct in lock_object, reserve 4 bits in the lo_flags field to serve as an index into a global lock_classes array that contains pointers to the lock classes. Only debugging code such as WITNESS or INVARIANTS checks and KTR logging needs to access the lock_class member, so this shouldn't add any overhead to production kernels. It might add some slight overhead to kernels using those debug options, however.
As with the previous set of changes to lock_object, this is going to completely obliterate the kernel ABI, so be sure to recompile all your modules.
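An illustrative sketch of that scheme; the real macro names, field widths, and bit positions live in sys/lock.h and may differ:
```c
/* 4 bits of lo_flags select the lock class (values are illustrative). */
#define	LO_CLASSSHIFT		24
#define	LO_CLASSMASK		(0xfU << LO_CLASSSHIFT)
#define	LO_CLASSINDEX(lock)	\
	((((lock)->lo_flags) & LO_CLASSMASK) >> LO_CLASSSHIFT)

/* Global table of lock class pointers, indexed by the bits above. */
extern struct lock_class *lock_classes[];

/*
 * Debugging code (WITNESS, INVARIANTS checks, KTR logging) looks the class
 * up on demand instead of dereferencing a per-lock pointer.
 */
#define	LOCK_CLASS(lock)	(lock_classes[LO_CLASSINDEX(lock)])
```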
|
#
5d2162b2 |
| 05-Dec-2005 |
John Baldwin <jhb@FreeBSD.org> |
Tweak witness handling of lock objects to shave 2 pointers off of each lock object (and thus off of each mutex and sx lock):
- Rename the all_locks list to pending_locks and only put locks initialized before SI_SUB_WITNESS on the list, so that they can be added to witness once it starts up at SI_SUB_WITNESS.
- Now that pending_locks is only used during early startup, change it from a TAILQ to an STAILQ. This removes a pointer from the STAILQ_ENTRY in struct lock_object.
- Since the pending_locks list is only used during the single-threaded early boot, it no longer needs to be protected by a mutex, so remove all_mtx.
- Since the lo_list member of struct lock_object is now only used during early boot before witness is running, collapse lo_list and lo_witness into a union. This shaves the second pointer off of struct lock_object.
- Axe lock_cur_cnt and lock_max_cnt.
With these changes, struct mtx shrinks from 36 to 28 bytes on 32-bit platforms and from 72 to 56 bytes on 64-bit platforms. Note that this commit will completely and utterly destroy the kernel ABI, so no MFC.
Tested on: alpha, amd64, i386, sparc64
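A rough sketch of the resulting layout, using the field names from the description above (the authoritative definition is in sys/_lock.h):
```c
#include <sys/types.h>		/* u_int */
#include <sys/queue.h>		/* STAILQ_ENTRY() */

struct witness;

struct lock_object {
	const char	*lo_name;	/* Individual lock name. */
	const char	*lo_type;	/* General lock type. */
	u_int		 lo_flags;
	union {				/* The two members are never needed
					   at the same time. */
		STAILQ_ENTRY(lock_object) lod_list;	/* pending_locks,
							   early boot only */
		struct witness	*lod_witness;		/* valid once witness
							   is up */
	} lo_witness_data;
};
```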
|
Revision tags: release/6.0.0_cvs, release/6.0.0, release/5.4.0_cvs, release/5.4.0, release/4.11.0_cvs, release/4.11.0, release/5.3.0_cvs, release/5.3.0 |
|
#
7a637a63 |
| 19-Jun-2004 |
Bruce Evans <bde@FreeBSD.org> |
Include <sys/_lock.h>'s prerequisite <sys/queue.h> before including the former, not after.
Don't hide this bug by including <sys/queue.h> in <sys/_lock.h>.
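In other words, consumers spell out the prerequisite themselves rather than relying on <sys/_lock.h> to pull it in:
```c
#include <sys/queue.h>	/* prerequisite: supplies the queue macros used below */
#include <sys/_lock.h>
```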
|
#
bade128f |
| 16-Jun-2004 |
Poul-Henning Kamp <phk@FreeBSD.org> |
I am not quite sure what broke compiling LINT:mcount.c, but a nested include of <sys/queue.h> here solves it.
|
Revision tags: release/4.10.0_cvs, release/4.10.0, release/5.2.1_cvs, release/5.2.1, release/5.2.0_cvs, release/5.2.0, release/4.9.0_cvs, release/4.9.0, release/5.1.0_cvs, release/5.1.0, release/4.8.0_cvs, release/4.8.0, release/5.0.0_cvs, release/5.0.0, release/4.7.0_cvs, release/4.6.2_cvs, release/4.6.2, release/4.6.1, release/4.6.0_cvs |
|
#
1fe7722c |
| 07-Jun-2002 |
Bruce Evans <bde@FreeBSD.org> |
Renamed the idempotency identifier to match the file name.
|
#
48849938 |
| 06-Jun-2002 |
John Baldwin <jhb@FreeBSD.org> |
Change the all locks list from a STAILQ to a TAILQ. This bloats struct lock_object by another pointer (though all of lock_object should be conditional on LOCK_DEBUG anyway) in exchange for an O(1) TAILQ_REMOVE() in witness_destroy() (called for every mtx_destroy() and sx_destroy()) instead of an O(n) STAILQ_REMOVE(). Since WITNESS is so dog slow as it is, the speed-up is worth the space cost.
Suggested by: iedowse
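The trade-off follows from queue(3): a singly-linked tail queue entry has no back link, so removal must walk the list from the head, while a TAILQ entry keeps one and removes in constant time. A minimal illustration (not the actual witness code):
```c
#include <sys/queue.h>

struct lobj {
	TAILQ_ENTRY(lobj) lo_list;	/* prev + next: O(1) TAILQ_REMOVE() */
	/* STAILQ_ENTRY(lobj) lo_list;	   next only: O(n) STAILQ_REMOVE() */
};

static TAILQ_HEAD(, lobj) all_locks = TAILQ_HEAD_INITIALIZER(all_locks);

static void
witness_destroy_demo(struct lobj *lo)
{
	/* No list walk is needed to find the predecessor. */
	TAILQ_REMOVE(&all_locks, lo, lo_list);
}
```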
|
#
b6396e16 |
| 04-Apr-2002 |
John Baldwin <jhb@FreeBSD.org> |
Add a new char * pointer, lo_type, to struct lock_object that points to a more generic name for a lock, one more suitable for use by witness when grouping locks. For example, although network driver locks use the interface name for the name of each lock, they should all use the same witness and be treated the same by witness. Another example is that all UMA zone locks should be treated the same. The witness code has also been updated to print out the lock type in addition to the lock name in a few places where it is relevant.
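The split shows up in the mtx_init(9) interface, which takes both a per-instance name and a shared type. A hypothetical driver fragment (struct foo_softc and foo_attach_locks() are made up for illustration):
```c
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/lock.h>
#include <sys/mutex.h>

struct foo_softc {
	struct mtx	sc_mtx;
	/* ... */
};

static void
foo_attach_locks(struct foo_softc *sc, device_t dev)
{
	/*
	 * The name differs per device ("foo0", "foo1", ...), but the shared
	 * type string lets witness treat all of them as one lock class.
	 */
	mtx_init(&sc->sc_mtx, device_get_nameunit(dev), "network driver",
	    MTX_DEF);
}
```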
|
Revision tags: release/4.5.0_cvs, release/4.4.0_cvs |
|
#
fb63feef |
| 19-Oct-2001 |
John Baldwin <jhb@FreeBSD.org> |
- Move the definition of LOCK_DEBUG back to sys/lock.h from sys/_lock.h.
- Change LOCK_DEBUG so that it is always on if KTR is compiled in, regardless of the state of KTR_COMPILE. This means that we no longer need to include sys/ktr.h before sys/lock.h to ensure a valid setting for LOCK_DEBUG.
- Change the use of LOCK_DEBUG so that it is now always defined and its value is used instead of merely its definition. That is, instead of #ifdef LOCK_DEBUG, code should now use #if LOCK_DEBUG > 0.
- Use this last change to #error out in sys/mutex.h if sys/lock.h isn't included before sys/mutex.h, to ensure that the proper versions of the mutex operations are used.
- As a result of (2), sys/mutex.h no longer includes sys/ktr.h in the KERNEL case.
Requested by: bde (1)
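A simplified sketch of the resulting convention (the real logic in sys/lock.h and sys/mutex.h involves more options):
```c
/* sys/lock.h: LOCK_DEBUG is now always defined, either 0 or 1. */
#if defined(KTR) || defined(WITNESS) || defined(INVARIANTS)
#define	LOCK_DEBUG	1
#else
#define	LOCK_DEBUG	0
#endif

/* sys/mutex.h: catch consumers that did not include <sys/lock.h> first. */
#ifndef LOCK_DEBUG
#error	"include <sys/lock.h> before <sys/mutex.h>"
#endif

#if LOCK_DEBUG > 0	/* note: a value test, not #ifdef */
/* ... file/line-recording versions of the mutex operations ... */
#else
/* ... lean versions ... */
#endif
```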
|
#
9ba567a0 |
| 26-Sep-2001 |
John Baldwin <jhb@FreeBSD.org> |
Move the definition of LOCK_DEBUG from sys/lock.h to sys/_lock.h.
|
#
5752bffd |
| 05-Sep-2001 |
David E. O'Brien <obrien@FreeBSD.org> |
style(9) the structure definitions.
|
#
2d96f0b1 |
| 04-May-2001 |
John Baldwin <jhb@FreeBSD.org> |
- Move state about lock objects out of struct lock_object and into a new struct lock_instance that is stored in the per-process and per-CPU lock lists. Previously, the lock lists just kept a pointer to each lock held. That pointer is now replaced by a lock instance which contains a pointer to the lock object, the file and line of the last acquisition of a lock, and various flags about a lock including its recursion count.
- If we sleep while holding a sleepable lock, then mark that lock instance as having slept and ignore any lock order violations that occur while acquiring Giant when we wake up with slept locks. This is ok because of Giant's special nature.
- Allow witness to differentiate between shared and exclusive locks and unlocks of a lock. Witness will now detect the case when a lock is acquired first in one mode and then in another. Mutexes are always locked and unlocked exclusively. Witness will also now detect the case where a process attempts to unlock a shared lock while holding an exclusive lock and vice versa.
- Fix a bug in the lock list implementation where we used the wrong constant to detect the case where a lock list entry was full.
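A sketch of what such a lock instance might carry, based on the description above (field names are illustrative; the real structure lives with the witness code):
```c
#include <sys/types.h>		/* u_int */

struct lock_object;

struct lock_instance {
	struct lock_object	*li_lock;	/* the lock held */
	const char		*li_file;	/* file of last acquisition */
	int			 li_line;	/* line of last acquisition */
	u_int			 li_flags;	/* recursion count, shared vs.
						   exclusive, "slept" mark */
};
```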
|
#
fb919e4d |
| 01-May-2001 |
Mark Murray <markm@FreeBSD.org> |
Undo part of the tangle of having sys/lock.h and sys/mutex.h included in other "system" header files.
Also help the deprecation of lockmgr.h by making it a sub-include of sys/lock.h and removing sys/lockmgr.h from kernel .c files.
Sort sys/*.h includes where possible in affected files.
OK'ed by: bde (with reservations)
|