| /linux/Documentation/locking/ |
| H A D | robust-futex-ABI.rst |
      9  futexes, for kernel assist of cleanup of held locks on task exit.
     12  linked list in user space, where it can be updated efficiently as locks
     19  2) internal kernel code at exit, to handle any listed locks held
     32  to do so, then improperly listed locks will not be cleaned up on exit,
     34  waiting on the same locks.
     88  specified 'offset'. Should a thread die while holding any such locks,
     89  the kernel will walk this list, mark any such locks with a bit
    106  robust_futexes used by that thread. The thread should link those locks
    108  other links between the locks, such as the reverse side of a double
    111  By keeping its locks linked this way, on a list starting with a 'head'
    [all …]
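The ABI excerpted above is driven from user space through the set_robust_list() syscall. Below is a minimal sketch of registering an (empty) per-thread robust list, assuming Linux with the uapi `<linux/futex.h>` header; an empty list is simply a head whose next pointer refers back to itself:

```c
#define _GNU_SOURCE
#include <linux/futex.h>   /* struct robust_list_head (uapi) */
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Per-thread head of the robust futex list. On task exit the kernel
 * walks this list and marks any held lock words as owner-died. */
static struct robust_list_head head;

/* Register an empty robust list for the calling thread; returns 0 on success. */
static long register_robust_list(void)
{
	head.list.next = &head.list;  /* empty circular list */
	head.futex_offset = 0;        /* offset from a list entry to its lock word */
	head.list_op_pending = NULL;  /* no lock/unlock operation in flight */
	return syscall(SYS_set_robust_list, &head, sizeof(head));
}
```

In practice glibc already registers a robust list for every thread at startup, so applications normally get this behaviour through pthread mutexes with the PTHREAD_MUTEX_ROBUST attribute rather than issuing the syscall themselves.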
|
| H A D | lockdep-design.rst |
     11  The basic object the validator operates upon is a 'class' of locks.
     13  A class of locks is a group of locks that are logically the same with
     14  respect to locking rules, even if the locks may have multiple (possibly
     24  perspective, the two locks (L1 and L2) are not necessarily related; that
    111  Unused locks (e.g., mutexes) cannot be part of the cause of an error.
    143  Furthermore, two locks can not be taken in inverse order::
    149  deadlock - as attempts to acquire the two locks form a circle which
    153  operations; the validator will still find whether these locks can be
    170  any rule violation between the new lock and any of the held locks.
    188  could interrupt _any_ of the irq-unsafe or hardirq-unsafe locks, which
    [all …]
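The central idea in the lockdep excerpt — record the order in which lock classes are taken and reject an inverse order before it can deadlock — can be sketched in plain C. This toy validator (all names hypothetical, userspace only) records direct "A held while taking B" edges and refuses a take that would invert a recorded edge; real lockdep tracks classes with far more state and walks the full dependency graph rather than checking only direct two-class inversions:

```c
#include <stdbool.h>

/* Toy analogue of lockdep's ordering check. */
#define MAX_CLASSES 8

static bool after[MAX_CLASSES][MAX_CLASSES]; /* after[a][b]: b was taken while a held */
static int held[MAX_CLASSES], nheld;

/* Returns false if taking 'class' would invert a previously seen order. */
static bool toy_lock(int class)
{
	for (int i = 0; i < nheld; i++) {
		if (after[class][held[i]])   /* inverse edge already recorded */
			return false;        /* would complete an ABBA circle */
		after[held[i]][class] = true;
	}
	held[nheld++] = class;
	return true;
}

static void toy_unlock(int class)
{
	for (int i = 0; i < nheld; i++)
		if (held[i] == class)
			held[i] = held[--nheld];
}
```

Taking classes 0 then 1 records the edge 0→1; a later attempt to take 0 while holding 1 is then rejected, which is exactly the "two locks can not be taken in inverse order" rule from the excerpt.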
|
| H A D | mutex-design.rst |
     15  or similar theoretical text books. Mutexes are sleeping locks which
     27  and implemented in kernel/locking/mutex.c. These locks use an atomic variable
     69  While formally kernel mutexes are sleepable locks, it is path (ii) that
     86  - Memory areas where held locks reside must not be freed.
     98  list of all locks held in the system, printout of them.
    100  - Detects self-recursing locks and prints out all relevant info.
    102  locks and tasks (and only those tasks).
    104  Mutexes - and most other sleeping locks like rwsems - do not provide an
    161  locks in the kernel. E.g: on x86-64 it is 32 bytes, where 'struct semaphore'
|
| H A D | pi-futex.rst |
     32  Firstly, sharing locks between multiple tasks is a common programming
     46  short-held locks: for example, a highprio audio playback thread is
     51  So once we accept that synchronization objects (locks) are an
     53  apps have a very fair expectation of being able to use locks, we've got
     58  inheritance only apply to kernel-space locks. But user-space locks are
     64  locks (such as futex-based pthread mutexes) is priority inheritance:
     80  normal futex-based locks: a 0 value means unlocked, and a value==TID
|
| H A D | robust-futexes.rst |
     11  what futexes are: normal futexes are special types of locks that in the
     45  (and in most cases there is none, futexes being fast lightweight locks)
     90  robust locks that userspace is holding (maintained by glibc) - which
     94  locks to be cleaned up?
    101  walks the list [not trusting it], and marks all locks that are owned by
    133  - no registration of individual locks is needed: robust mutexes don't
    152  million (!) held locks, using the new method [on a 2GHz CPU]:
    162  (1 million held locks are unheard of - we expect at most a handful of
    163  locks to be held at a time. Nevertheless it's nice to know that this
|
| /linux/include/drm/ |
| H A D | drm_modeset_lock.h |
     38  * @locked: list of held locks
     42  * Each thread competing for a set of locks must use one acquire
     52  * drm_modeset_backoff() which drops locks and slow-locks the
     64  * list of held locks (drm_modeset_lock)
    152  * DRM_MODESET_LOCK_ALL_BEGIN - Helper to acquire modeset locks
    158  * Use these macros to simplify grabbing all modeset locks using a local
    162  * Any code run between BEGIN and END will be holding the modeset locks.
    167  * Drivers can acquire additional modeset locks. If any lock acquisition
    185  * DRM_MODESET_LOCK_ALL_END - Helper to release and cleanup modeset locks
    198  * successfully acquire the locks, ret will be whatever your code sets it to. If
    [all …]
|
| /linux/kernel/locking/ |
| H A D | test-ww_mutex.c |
    391  struct ww_mutex *locks;  member
    440  struct ww_mutex *locks = stress->locks;  in stress_inorder_work() local
    459  err = ww_mutex_lock(&locks[order[n]], &ctx);  in stress_inorder_work()
    467  ww_mutex_unlock(&locks[order[contended]]);  in stress_inorder_work()
    470  ww_mutex_unlock(&locks[order[n]]);  in stress_inorder_work()
    474  ww_mutex_lock_slow(&locks[order[contended]], &ctx);  in stress_inorder_work()
    498  LIST_HEAD(locks);  in stress_reorder_work()
    513  ll->lock = &stress->locks[order[n]];  in stress_reorder_work()
    514  list_add(&ll->link, &locks);  in stress_reorder_work()
    522  list_for_each_entry(ll, &locks, link) {  in stress_reorder_work()
    [all …]
|
| H A D | lockdep_proc.c |
    299  * All irq-safe locks may nest inside irq-unsafe locks,  in lockdep_stats_show()
    339  seq_printf(m, " hardirq-safe locks: %11lu\n",  in lockdep_stats_show()
    341  seq_printf(m, " hardirq-unsafe locks: %11lu\n",  in lockdep_stats_show()
    343  seq_printf(m, " softirq-safe locks: %11lu\n",  in lockdep_stats_show()
    345  seq_printf(m, " softirq-unsafe locks: %11lu\n",  in lockdep_stats_show()
    347  seq_printf(m, " irq-safe locks: %11lu\n",  in lockdep_stats_show()
    349  seq_printf(m, " irq-unsafe locks: %11lu\n",  in lockdep_stats_show()
    352  seq_printf(m, " hardirq-read-safe locks: %11lu\n",  in lockdep_stats_show()
    354  seq_printf(m, " hardirq-read-unsafe locks: %11lu\n",  in lockdep_stats_show()
    356  seq_printf(m, " softirq-read-safe locks: %11lu\n",  in lockdep_stats_show()
    [all …]
|
| /linux/tools/testing/selftests/ftrace/test.d/ftrace/ |
| H A D | fgraph-multi-filter.tc |
     98  locks_clock_cnt=`function_count locks clock`
     99  clock_locks_cnt=`function_count clock locks`
    160  # Enable all functions but those that have "locks"
    161  set_fgraph $INSTANCE1 '' '*locks*'
    166  # If a function has "locks" it should not have "clock"
    167  check_cnt $locks_clock_cnt locks clock
    169  # If a function has "clock" it should not have "locks"
    170  check_cnt $clock_locks_cnt clock locks
|
| /linux/drivers/gpu/drm/ |
| H A D | drm_modeset_lock.c |
     65  * where all modeset locks need to be taken through drm_modeset_lock_all_ctx().
     73  * On top of these per-object locks using &ww_mutex there's also an overall
     77  * Finally there's a bunch of dedicated locks to protect drm core internal
    131  * drm_modeset_lock_all - take all modeset locks
    134  * This function takes all modeset locks, suitable where a more fine-grained
    135  * scheme isn't (yet) implemented. Locks must be dropped by calling the
    176  * We hold the locks now, so it is safe to stash the acquisition  in drm_modeset_lock_all()
    186  * drm_modeset_unlock_all - drop all modeset locks
    189  * This function drops all modeset locks taken by a previous call to the
    218  * drm_warn_on_modeset_not_all_locked - check that all modeset locks are locked
    [all …]
|
| /linux/tools/testing/selftests/bpf/progs/ |
| H A D | res_spin_lock.c |
     87  struct bpf_res_spin_lock *locks[48] = {};  in res_spin_lock_test_held_lock_max() local
     92  _Static_assert(ARRAY_SIZE(((struct rqspinlock_held){}).locks) == 31,  in res_spin_lock_test_held_lock_max()
    104  locks[i] = &e->lock;  in res_spin_lock_test_held_lock_max()
    116  locks[i] = &e->lock;  in res_spin_lock_test_held_lock_max()
    121  if (bpf_res_spin_lock(locks[i]))  in res_spin_lock_test_held_lock_max()
    128  ret = bpf_res_spin_lock(locks[34]);  in res_spin_lock_test_held_lock_max()
    130  bpf_res_spin_unlock(locks[34]);  in res_spin_lock_test_held_lock_max()
    139  bpf_res_spin_unlock(locks[i]);  in res_spin_lock_test_held_lock_max()
|
| /linux/tools/testing/selftests/filelock/ |
| H A D | ofdlocks.c |
     15  fl->l_pid = 0; // needed for OFD locks  in lock_set()
     27  fl->l_pid = 0; // needed for OFD locks  in lock_get()
     59  /* Make sure read locks do not conflict on different fds. */  in main()
     67  ksft_print_msg("[FAIL] read locks conflicted\n");  in main()
     70  /* Make sure read/write locks do conflict on different fds. */  in main()
     79  ("[SUCCESS] read and write locks conflicted\n");  in main()
     82  ("[SUCCESS] read and write locks not conflicted\n");  in main()
    121  /* Get info about the lock on second fd - no locks on it. */  in main()
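The selftest above exercises the defining property of OFD (open file description) locks: ownership follows the open file description rather than the process, so locks taken through two separate open() calls of the same file can conflict even within one process. A condensed sketch of the lock-set step, assuming Linux's F_OFD_SETLK command (l_pid must be zero for OFD locks, as the selftest's comments note):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>

/* Try to take a whole-file OFD lock of the given type (F_RDLCK or F_WRLCK)
 * on fd without blocking; returns 0 on success, -1 on conflict/error. */
static int ofd_lock(int fd, short type)
{
	struct flock fl;

	memset(&fl, 0, sizeof(fl));
	fl.l_type = type;
	fl.l_whence = SEEK_SET;
	fl.l_start = 0;
	fl.l_len = 0;            /* 0 length = lock the whole file */
	fl.l_pid = 0;            /* required for OFD locks */
	return fcntl(fd, F_OFD_SETLK, &fl);
}
```

With two fds from separate opens of one file, read locks on both succeed, while a write lock on the second fd fails because it conflicts with the first description's read lock.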
|
| /linux/lib/ |
| H A D | bucket_locks.c |
     10  * the number of locks per CPU to allocate. The size is rounded up
     14  int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *locks_mask,  in __alloc_bucket_spinlocks() argument
     43  *locks = tlocks;  in __alloc_bucket_spinlocks()
     50  void free_bucket_spinlocks(spinlock_t *locks)  in free_bucket_spinlocks() argument
     52  kvfree(locks);  in free_bucket_spinlocks()
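The pattern behind __alloc_bucket_spinlocks() is simple: allocate a power-of-two sized array of locks and hand back a mask, so callers can hash any object to a lock with `hash & mask` and unrelated objects rarely contend on the same lock. A userspace sketch of the same idea with pthread mutexes (the function names here are illustrative, not the kernel API):

```c
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t *bucket_locks;
static unsigned int bucket_mask;

/* Allocate at least 'requested' locks, rounded up to a power of two,
 * and remember the index mask; returns 0 on success. */
static int alloc_bucket_locks(unsigned int requested)
{
	unsigned int size = 1;

	while (size < requested)
		size <<= 1;
	bucket_locks = malloc(size * sizeof(*bucket_locks));
	if (!bucket_locks)
		return -1;
	for (unsigned int i = 0; i < size; i++)
		pthread_mutex_init(&bucket_locks[i], NULL);
	bucket_mask = size - 1;
	return 0;
}

/* Map an object's hash to its bucket lock. */
static pthread_mutex_t *bucket_lock_for(unsigned long hash)
{
	return &bucket_locks[hash & bucket_mask];
}
```

The power-of-two rounding is what makes the cheap `& mask` indexing valid in place of a modulo.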
|
| /linux/drivers/scsi/ |
| H A D | scsi_devinfo.c |
     60  {"Aashima", "IMAGERY 2400SP", "1.03", BLIST_NOLUN},  /* locks up */
     61  {"CHINON", "CD-ROM CDS-431", "H42", BLIST_NOLUN},  /* locks up */
     62  {"CHINON", "CD-ROM CDS-535", "Q14", BLIST_NOLUN},  /* locks up */
     63  {"DENON", "DRD-25X", "V", BLIST_NOLUN},  /* locks up */
     66  {"IBM", "2104-DU3", NULL, BLIST_NOLUN},  /* locks up */
     67  {"IBM", "2104-TU3", NULL, BLIST_NOLUN},  /* locks up */
     68  {"IMS", "CDD521/10", "2.06", BLIST_NOLUN},  /* locks up */
     69  {"MAXTOR", "XT-3280", "PR02", BLIST_NOLUN},  /* locks up */
     70  {"MAXTOR", "XT-4380S", "B3C", BLIST_NOLUN},  /* locks up */
     71  {"MAXTOR", "MXT-1240S", "I1.2", BLIST_NOLUN},  /* locks up */
    [all …]
|
| /linux/drivers/md/ |
| H A D | dm-bio-prison-v2.h |
     73  * Shared locks have a bio associated with them.
    103  * Locks a cell. No associated bio. Exclusive locks get priority. These
    104  * locks constrain whether the io locks are granted according to level.
    106  * Shared locks will still be granted if the lock_level is > (not = to) the
    141  * There may be shared locks still held at this point even if you quiesced
|
| /linux/drivers/hwspinlock/ |
| H A D | sun6i_hwspinlock.c |
    135  * to 0x4 represent 32, 64, 128 and 256 locks  in sun6i_hwspinlock_probe()
    136  * but later datasheets (H5, H6) say 00, 01, 10, 11 represent 32, 64, 128 and 256 locks,  in sun6i_hwspinlock_probe()
    137  * but that would mean H5 and H6 have 64 locks, while their datasheets talk about 32 locks  in sun6i_hwspinlock_probe()
    138  * all the time, not a single mentioning of 64 locks  in sun6i_hwspinlock_probe()
    143  * this is the reason 0x1 is considered being 32 locks and bit 30 is taken into account  in sun6i_hwspinlock_probe()
    144  * verified on H2+ (datasheet 0x1 = 32 locks) and H5 (datasheet 01 = 64 locks)  in sun6i_hwspinlock_probe()
|
| /linux/Documentation/gpu/rfc/ |
| H A D | gpusvm.rst |
     13  * No driver specific locks other than locks for hardware interaction in
     15  invent driver defined locks to seal core MM races.
     27  * Only looking at physical memory data structures and locks as opposed to
     28  looking at virtual memory data structures and locks.
     38  pagetable locks/mmu notifier range lock/whatever we end up calling
     41  should not be handled on the fault side by trying to hold locks;
|
| /linux/tools/memory-model/ |
| H A D | linux-kernel.bell |
     58  unmatched-locks = Rcu-lock \ domain(matched)
     60  and unmatched = unmatched-locks | unmatched-unlocks
     62  and unmatched-locks-to-unlocks =
     63  [unmatched-locks] ; po ; [unmatched-unlocks]
     64  and matched = matched | (unmatched-locks-to-unlocks \
|
| /linux/fs/ceph/ |
| H A D | locks.c |
     59  /* clear error when all locks are released */  in ceph_fl_release_lock()
    189  * rely on locks (dir mutex) held by our caller.  in ceph_lock_wait_for_completion()
    398  doutc(cl, "counted %d flock locks and %d fcntl locks\n",  in ceph_count_locks()
    438  * Encode the flock and fcntl locks for the given inode into the ceph_filelock
    454  doutc(cl, "encoding %d flock and %d fcntl locks\n", num_flock_locks,  in ceph_encode_locks_to_buffer()
    489  * Copy the encoded flock and fcntl locks into the pagelist.
    490  * Format is: #fcntl locks, sequential fcntl locks, #flock locks,
    491  * sequential flock locks.
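The comment at lines 489-491 describes a count-prefixed layout: a count of fcntl locks, then those records back to back, then a count of flock locks, then those records. The sketch below illustrates that layout only; the record type is a made-up stand-in, not the real struct ceph_filelock, and the function name is hypothetical:

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for a serialized lock record. */
struct fake_filelock {
	uint64_t start, length;
	uint32_t type;
};

/* Write: [#fcntl][fcntl records...][#flock][flock records...]
 * into buf; returns the number of bytes written. */
static size_t encode_locks(char *buf,
			   const struct fake_filelock *fcntl_locks, uint32_t nfcntl,
			   const struct fake_filelock *flock_locks, uint32_t nflock)
{
	char *p = buf;

	memcpy(p, &nfcntl, sizeof(nfcntl));
	p += sizeof(nfcntl);
	memcpy(p, fcntl_locks, nfcntl * sizeof(*fcntl_locks));
	p += nfcntl * sizeof(*fcntl_locks);
	memcpy(p, &nflock, sizeof(nflock));
	p += sizeof(nflock);
	memcpy(p, flock_locks, nflock * sizeof(*flock_locks));
	p += nflock * sizeof(*flock_locks);
	return p - buf;
}
```

The count prefixes are what let the decoder walk the two variable-length runs without any delimiters between records.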
|
| /linux/include/linux/ |
| H A D | mmap_lock.h |
     88  * VMA locks do not behave like most ordinary locks found in the kernel, so we
     91  * Read locks act as shared locks which exclude an exclusive lock being
     94  * Write locks are acquired exclusively per-VMA, but released in a shared
     98  * We therefore cannot track write locks per-VMA, nor do we try. Mitigating this
    164  * VMA which has excluded all VMA read locks.
    403  * read locks simlutaneous to us.  in vma_assert_stabilised()
    477  * Locks next vma pointed by the iterator. Confirms the locked vma has not
    527  /* If no VMA locks, then either mmap lock suffices to stabilise. */  in vma_assert_stabilised()
    562  * Drop all currently-held per-VMA locks.
    567  * *all* VMA write locks, including ones from further up the stack.
|
| H A D | blockgroup_lock.h |
     24  struct bgl_lock locks[NR_BG_LOCKS];  member
     32  spin_lock_init(&bgl->locks[i].lock);  in bgl_lock_init()
     38  return &bgl->locks[block_group & (NR_BG_LOCKS-1)].lock;  in bgl_lock_ptr()
|
| /linux/tools/perf/pmu-events/arch/x86/icelakex/ |
| H A D | uncore-cache.json |
   6729  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   6740  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   6760  …pecified by the subevent. Does not include addressless requests such as locks and interrupts. : …
   6771  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   6780  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   6790  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   6801  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   6811  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   6833  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   6844  … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
   [all …]
|
| /linux/Documentation/filesystems/ |
| H A D | directory-locking.rst |
      7  kinds of locks - per-inode (->i_rwsem) and per-filesystem
     11  always acquire the locks in order by increasing address. We'll call
     47  * take the locks that need to be taken (exclusive), in inode pointer order
    137  There is a ranking on the locks, such that all primitives take
    146  * among the locks on different filesystems use the relative
    168  contended locks in the minimal deadlock will be of the same rank,
    187  only 3 possible operations: directory removal (locks parent, then
    195  have changed since the moment directory locks had been acquired,
    248  the locks) and voila - we have a deadlock.
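The "acquire the locks in order by increasing address" rule from the excerpt reduces to a small helper: when two same-rank locks must both be held, sort by address first so every caller agrees on the acquisition order and no ABBA cycle can form. A userspace sketch with pthread mutexes (the helper names are illustrative):

```c
#include <pthread.h>
#include <stdint.h>

/* Lock two mutexes in increasing address order, so that concurrent
 * callers passing the same pair in either order cannot deadlock. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	if (a == b) {
		pthread_mutex_lock(a);
		return;
	}
	if ((uintptr_t)a > (uintptr_t)b) {   /* order by address */
		pthread_mutex_t *t = a;
		a = b;
		b = t;
	}
	pthread_mutex_lock(a);
	pthread_mutex_lock(b);
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
	pthread_mutex_unlock(a);
	if (a != b)
		pthread_mutex_unlock(b);
}
```

This is the same shape as the VFS's lock_two_nondirectories()-style helpers: the comparison key (here the address, in the excerpt the inode pointer) only has to be a total order that all callers share.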
|
| /linux/drivers/md/dm-vdo/ |
| H A D | recovery-journal.h |
     62  * counters are used as locks to prevent premature reaping of journal blocks. Each time a new
     94  /* The number of logical zones which may hold locks */
     96  /* The number of physical zones which may hold locks */
     98  /* The number of locks */
     99  block_count_t locks;  member
    148  /* The slab depot which can hold locks on this journal */
    150  /* The block map which can hold locks on this journal */
    214  /* The locks for each on-disk block */
|
| /linux/drivers/net/wireless/realtek/rtlwifi/rtl8192d/ |
| H A D | phy_common.h |
     39  spin_lock_irqsave(&rtlpriv->locks.cck_and_rw_pagea_lock, *flag);  in rtl92d_acquire_cckandrw_pagea_ctl()
     51  spin_unlock_irqrestore(&rtlpriv->locks.cck_and_rw_pagea_lock,  in rtl92d_release_cckandrw_pagea_ctl()
     99  spin_lock(&rtlpriv->locks.rf_lock);  in rtl92d_pci_lock()
    105  spin_unlock(&rtlpriv->locks.rf_lock);  in rtl92d_pci_unlock()
|