
Searched full:we (Results 1 – 25 of 6983) sorted by relevance


/linux/net/ipv4/
tcp_vegas.c
15 * o We do not change the loss detection or recovery mechanisms of
19 * only every-other RTT during slow start, we increase during
22 * we use the rate at which ACKs come back as the "actual"
24 * o To speed convergence to the right rate, we set the cwnd
25 * to achieve the right ("actual") rate when we exit slow start.
26 * o To filter out the noise caused by delayed ACKs, we use the
55 /* There are several situations when we must "re-start" Vegas:
60 * o when we send a packet and there is no outstanding
63 * In these circumstances we cannot do a Vegas calculation at the
64 * end of the first RTT, because any calculation we do is using
[all …]
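The mechanism these comments sketch -- compare the rate we expect from cwnd against the "actual" rate at which ACKs come back, and adjust cwnd by the difference -- can be shown as a small standalone model. This is a sketch of the published Vegas algorithm with the textbook alpha/beta thresholds assumed; it is not the kernel's tcp_vegas.c.

    /* Toy model of one Vegas congestion-avoidance step (not tcp_vegas.c). */
    #include <stdio.h>

    #define VEGAS_ALPHA 2   /* assumed textbook thresholds, in segments */
    #define VEGAS_BETA  4

    /* Run once per RTT: diff estimates segments queued in the network. */
    static unsigned int vegas_update_cwnd(unsigned int cwnd,
                                          unsigned int base_rtt_us,
                                          unsigned int last_rtt_us)
    {
        /* target = the cwnd that would match the "actual" ACK rate */
        unsigned int target = cwnd * base_rtt_us / last_rtt_us;
        unsigned int diff = cwnd - target;

        if (diff < VEGAS_ALPHA)
            cwnd++;         /* too little data in flight: grow */
        else if (diff > VEGAS_BETA)
            cwnd--;         /* queues are building: back off */
        return cwnd;        /* otherwise the rate is about right */
    }

    int main(void)
    {
        /* RTT doubled over base: diff = 10 - 5 = 5 > beta, so cwnd shrinks. */
        printf("cwnd: %u\n", vegas_update_cwnd(10, 100000, 200000));
        return 0;
    }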
/linux/arch/powerpc/mm/nohash/
tlb_low_64e.S
91 /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */
93 /* We do the user/kernel test for the PID here along with the RW test
95 /* We pre-test some combination of permissions to avoid double
98 * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE
103 * writeable, we will take a new fault later, but that should be
106 * We also move ESR_ST in _PAGE_DIRTY position
109 * MAS1 is preset for all we need except for TID that needs to
137 * We are entered with:
176 /* Now we build the MAS:
219 /* We need to check if it was an instruction miss */
[all …]
/linux/fs/xfs/scrub/
agb_bitmap.c
19 * We know that the btree query_all function starts at the left edge and walks
20 * towards the right edge of the tree. Therefore, we know that we can walk up
22 * to the first record/key in that block, we haven't seen this block before;
23 * and therefore we need to remember that we saw this block in the btree.
32 * the first btree record, we'll observe that bc_levels[0].ptr == 1, so we
33 * record that we saw block 1. Then we observe that bc_levels[1].ptr == 1, so
34 * we record block 4. The list is [1, 4].
36 * For the second btree record, we see that bc_levels[0].ptr == 2, so we exit
39 * For the 101st btree record, we've moved onto leaf block 2. Now
40 * bc_levels[0].ptr == 1 again, so we record that we saw block 2. We see that
[all …]
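The walk these comments describe is easy to see in miniature: at each leaf record, ptr == 1 at a level means we just stepped into that block for the first time, so we record it and check the parent; any other value means the block was recorded on an earlier record, and we stop climbing. A sketch with hypothetical types, not the XFS code, reproducing the document's [1, 4] example:

    /* Sketch of the "record a block on first visit" walk (not XFS code). */
    #include <stdio.h>

    struct cur_level {
        int ptr;              /* 1-based index of current rec/key in block */
        unsigned long block;  /* block number backing this level */
    };

    /* Called once per leaf record; stops at the first already-seen block. */
    static void record_new_blocks(const struct cur_level *levels, int nlevels)
    {
        for (int level = 0; level < nlevels; level++) {
            if (levels[level].ptr != 1)
                break;
            printf("first visit to block %lu (level %d)\n",
                   levels[level].block, level);
        }
    }

    int main(void)
    {
        /* First record: ptr == 1 at both levels, so the list becomes [1, 4]. */
        struct cur_level levels[2] = { { 1, 1 }, { 1, 4 } };
        record_new_blocks(levels, 2);

        /* Second record: leaf ptr == 2, so we exit without recording. */
        levels[0].ptr = 2;
        record_new_blocks(levels, 2);
        return 0;
    }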
alloc_repair.c
48 * AG. Therefore, we can recreate the free extent records in an AG by looking
60 * walking the rmapbt records, we create a second bitmap @not_allocbt_blocks to
72 * The OWN_AG bitmap itself isn't needed after this point, so what we really do
83 * written to the new btree indices. We reconstruct both bnobt and cntbt at
84 * the same time since we've already done all the work.
86 * We use the prefix 'xrep_abt' here because we regenerate both free space
118 * Next block we anticipate seeing in the rmap records. If the next
119 * rmap record is greater than next_agbno, we have found unused space.
126 /* Longest free extent we found in the AG. */
139 * Make sure the busy extent list is clear because we can't put extents in xrep_setup_ag_allocbt()
[all …]
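The next_agbno trick mentioned in the snippet works like this: with reverse-mapping records sorted by start block, any gap between the end of the space seen so far and the start of the next record has no owner and must be free. A small sketch with hypothetical record types, not the XFS repair code:

    /* Find free extents as gaps between sorted reverse mappings (sketch). */
    #include <stdio.h>

    struct rmap_rec {
        unsigned long start;
        unsigned long len;
    };

    static void find_free_extents(const struct rmap_rec *recs, int n,
                                  unsigned long ag_len)
    {
        unsigned long next_agbno = 0;   /* next block we expect to see */

        for (int i = 0; i < n; i++) {
            if (recs[i].start > next_agbno)
                printf("free extent: [%lu, %lu)\n",
                       next_agbno, recs[i].start);
            if (recs[i].start + recs[i].len > next_agbno)
                next_agbno = recs[i].start + recs[i].len;
        }
        if (next_agbno < ag_len)        /* the tail of the AG is free too */
            printf("free extent: [%lu, %lu)\n", next_agbno, ag_len);
    }

    int main(void)
    {
        const struct rmap_rec recs[] = { { 0, 4 }, { 10, 6 }, { 16, 2 } };

        find_free_extents(recs, 3, 32); /* prints [4, 10) and [18, 32) */
        return 0;
    }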
fscounters.c
32 * The basics of filesystem summary counter checking are that we iterate the
35 * Then we compare what we computed against the in-core counters.
38 * While we /could/ freeze the filesystem and scramble around the AGs counting
39 * the free blocks, in practice we prefer not do that for a scan because
40 * freezing is costly. To get around this, we added a per-cpu counter of the
41 * delalloc reservations so that we can rotor around the AGs relatively
42 * quickly, and we allow the counts to be slightly off because we're not taking
43 * any locks while we do this.
45 * So the first thing we do is warm up the buffer cache in the setup routine by
48 * structures as quickly as it can. We snapshot the percpu counters before and
[all …]
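The comparison the snippet describes can be modeled simply: sum the per-AG counts, then accept the in-core counter if it lands within some slack of that sum, since nothing is frozen while we count. The structures and numbers below are illustrative, not the XFS scrub code:

    /* Tolerant summary-counter check (illustration, not xfs/scrub code). */
    #include <stdbool.h>
    #include <stdio.h>

    struct ag_header {
        unsigned long freeblks;
    };

    static bool fdblocks_look_sane(const struct ag_header *ags, int agcount,
                                   unsigned long incore, unsigned long slack)
    {
        unsigned long sum = 0;

        for (int i = 0; i < agcount; i++)
            sum += ags[i].freeblks;

        /* Allow |sum - incore| <= slack: the counts move under us. */
        unsigned long diff = sum > incore ? sum - incore : incore - sum;
        return diff <= slack;
    }

    int main(void)
    {
        const struct ag_header ags[] = { { 100 }, { 250 }, { 75 } };

        printf("counters ok? %d\n", fdblocks_look_sane(ags, 3, 427, 8));
        return 0;
    }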
/linux/fs/btrfs/
fiemap.c
30 * - Cache the next entry to be emitted to the fiemap buffer, so that we can
35 * buffer is memory mapped to the fiemap target file, we don't deadlock
36 * during btrfs_page_mkwrite(). This is because during fiemap we are locking
40 * if the fiemap buffer is memory mapped to the file we are running fiemap
53 * the next file extent item we must search for in the inode's subvolume
59 * This matches struct fiemap_extent_info::fi_mapped_extents, we use it
61 * fiemap_fill_next_extent() because we buffer ready fiemap entries at
62 * the @entries array, and we want to stop as soon as we hit the max
86 * Ignore 1 (reached max entries) because we keep track of that in flush_fiemap_cache()
102 * And only when we fail to merge, the cached one will be submitted as
[all …]
locking.h
23 * We are limited in number of subclasses by MAX_LOCKDEP_SUBCLASSES, which at
24 * the time of this patch is 8, which is how many we use. Keep this in mind if
31 * When we COW a block we are holding the lock on the original block,
33 * when we lock the newly allocated COW'd block. Handle this by having
39 * Oftentimes we need to lock adjacent nodes on the same level while
40 * still holding the lock on the original node we searched to, such as
43 * Because of this we need to indicate to lockdep that this is
51 * When splitting we will be holding a lock on the left/right node when
52 * we need to cow that node, thus we need a new set of subclasses for
59 * When splitting we may push nodes to the left or right, but still use
[all …]
/linux/drivers/usb/dwc2/
hcd_queue.c
32 /* If we get a NAK, wait this long before retrying */
121 * @num_bits: The number of bits we need per period we want to reserve
123 * @interval: How often we need to be scheduled for the reservation this
127 * the interval or we return failure right away.
128 * @only_one_period: Normally we'll allow picking a start anywhere within the
129 * first interval, since we can still make all repetition
131 * here then we'll return failure if we can't fit within
134 * The idea here is that we want to schedule time for repeating events that all
139 * To keep things "simple", we'll represent our schedule with a bitmap that
141 * but does mean that we need to handle things specially (and non-ideally) if
[all …]
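The bitmap scheduling these comments describe boils down to: pick a start slot inside the first interval such that num_bits consecutive slots are free in every repetition at that interval. A simplified byte-per-slot model that assumes the interval divides the schedule length; the dwc2 code packs bits and handles the non-ideal cases the comments mention:

    /* Simplified periodic-reservation scheduler (not the dwc2 code). */
    #include <stdbool.h>
    #include <stdio.h>

    #define SCHED_SLOTS 64          /* total slots in the schedule (assumed) */

    static bool slots_free(const bool *map, int start, int num_bits)
    {
        for (int i = 0; i < num_bits; i++)
            if (map[start + i])
                return false;
        return true;
    }

    /* Returns the chosen start slot, or -1 if the reservation won't fit.
     * Assumes interval divides SCHED_SLOTS evenly. */
    static int sched_find_start(bool *map, int num_bits, int interval)
    {
        for (int start = 0; start + num_bits <= interval; start++) {
            int rep;

            /* The same offset must be free in every repetition. */
            for (rep = 0; rep * interval < SCHED_SLOTS; rep++)
                if (!slots_free(map, rep * interval + start, num_bits))
                    break;
            if (rep * interval < SCHED_SLOTS)
                continue;            /* some repetition was busy */
            for (rep = 0; rep * interval < SCHED_SLOTS; rep++)
                for (int i = 0; i < num_bits; i++)
                    map[rep * interval + start + i] = true;
            return start;
        }
        return -1;
    }

    int main(void)
    {
        bool map[SCHED_SLOTS] = { false };

        map[0] = true;              /* slot 0 of the first frame is taken */
        printf("reserved at slot %d\n", sched_find_start(map, 2, 16));
        return 0;
    }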
/linux/drivers/misc/vmw_vmci/
vmci_route.c
33 * which comes from the VMX, so we know it is coming from a in vmci_route()
36 * To avoid inconsistencies, test these once. We will test in vmci_route()
37 * them again when we do the actual send to ensure that we do in vmci_route()
49 * If this message already came from a guest then we in vmci_route()
57 * We must be acting as a guest in order to send to in vmci_route()
63 /* And we cannot send if the source is the host context. */ in vmci_route()
71 * then they probably mean ANY, in which case we in vmci_route()
87 * If it is not from a guest but we are acting as a in vmci_route()
88 * guest, then we need to send it down to the host. in vmci_route()
89 * Note that if we are also acting as a host then this in vmci_route()
[all …]
/linux/Documentation/filesystems/
directory-locking.rst
10 When taking the i_rwsem on multiple non-directory objects, we
11 always acquire the locks in order by increasing address. We'll call
22 * lock the directory we are accessing (shared)
26 * lock the directory we are accessing (exclusive)
73 in its own right; it may happen as part of lookup. We speak of the
74 operations on directory trees, but we obviously do not have the full
75 picture of those - especially for network filesystems. What we have
77 Trees grow as we do operations; memory pressure prunes them. Normally
78 that's not a problem, but there is a nasty twist - what should we do
83 possibility that directory we see in one place gets moved by the server
[all …]
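The address-ordering rule in the first lines of this document is the classic ABBA-deadlock avoidance discipline: whichever order the callers name the objects, the locks are taken lowest address first. A minimal sketch with pthread mutexes standing in for i_rwsem:

    /* Lock-by-increasing-address discipline (illustration, not VFS code). */
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    struct object {
        pthread_mutex_t lock;
    };

    /* Always take the lower-addressed lock first to avoid ABBA deadlock. */
    static void lock_two(struct object *a, struct object *b)
    {
        if ((uintptr_t)a > (uintptr_t)b) {
            struct object *tmp = a;
            a = b;
            b = tmp;
        }
        pthread_mutex_lock(&a->lock);
        if (a != b)
            pthread_mutex_lock(&b->lock);
    }

    static void unlock_two(struct object *a, struct object *b)
    {
        pthread_mutex_unlock(&a->lock);
        if (a != b)
            pthread_mutex_unlock(&b->lock);
    }

    int main(void)
    {
        struct object x = { PTHREAD_MUTEX_INITIALIZER };
        struct object y = { PTHREAD_MUTEX_INITIALIZER };

        lock_two(&x, &y);       /* both call orders take the locks */
        unlock_two(&x, &y);     /* in the same address order */
        lock_two(&y, &x);
        unlock_two(&y, &x);
        printf("no deadlock\n");
        return 0;
    }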
idmappings.rst
23 on, we will always prefix ids with ``u`` or ``k`` to make it clear whether
24 we're talking about an id in the upper or lower idmapset.
42 that make it easier to understand how we can translate between idmappings. For
43 example, we know that the inverse idmapping is an order isomorphism as well::
49 Given that we are dealing with order isomorphisms plus the fact that we're
50 dealing with subsets we can embed idmappings into each other, i.e. we can
51 sensibly translate between different idmappings. For example, assume we've been
61 Because we're dealing with order isomorphic subsets it is meaningful to ask
64 mapping ``k11000`` up to ``u1000``. Afterwards, we can map ``u1000`` down using
69 If we were given the same task for the following three idmappings::
[all …]
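The up/down translation described here reduces to offset arithmetic within a range. A sketch with hypothetical helpers, not the kernel's API, that reproduces the document's k11000 -> u1000 example under an assumed idmapping u0:k10000:r10000:

    /* Translating ids through an idmapping (sketch, not the kernel API). */
    #include <stdio.h>

    #define INVALID_ID ((unsigned long)-1)

    struct idmap {
        unsigned long u_first;  /* first id in the upper idmapset */
        unsigned long k_first;  /* first id in the lower idmapset */
        unsigned long range;
    };

    static unsigned long map_id_up(const struct idmap *m, unsigned long k)
    {
        if (k < m->k_first || k >= m->k_first + m->range)
            return INVALID_ID;
        return m->u_first + (k - m->k_first);
    }

    static unsigned long map_id_down(const struct idmap *m, unsigned long u)
    {
        if (u < m->u_first || u >= m->u_first + m->range)
            return INVALID_ID;
        return m->k_first + (u - m->u_first);
    }

    int main(void)
    {
        const struct idmap m = { 0, 10000, 10000 };  /* u0:k10000:r10000 */

        /* k11000 maps up to u1000, and u1000 maps back down to k11000. */
        printf("k11000 -> u%lu\n", map_id_up(&m, 11000));
        printf("u1000  -> k%lu\n", map_id_down(&m, 1000));
        return 0;
    }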
/linux/arch/openrisc/mm/
fault.c
59 * We fault-in kernel-space virtual memory on-demand. The in do_page_fault()
62 * NOTE! We MUST NOT take any locks for this case. We may in do_page_fault()
68 * mappings we don't have to walk all processes pgdirs and in do_page_fault()
69 * add the high mappings all at once. Instead we do it as they in do_page_fault()
82 /* If exceptions were enabled, we can reenable them here */ in do_page_fault()
100 * If we're in an interrupt or have no user in do_page_fault()
101 * context, we must not take the fault.. in do_page_fault()
125 * we get page-aligned addresses so we can only check in do_page_fault()
126 * if we're within a page from usp, but that might be in do_page_fault()
137 * Ok, we have a good vm_area for this memory access, so in do_page_fault()
[all …]
/linux/Documentation/driver-api/thermal/
cpu-idle-cooling.rst
25 because of the OPP density, we can only choose an OPP with a power
35 If we can remove the static and the dynamic leakage for a specific
38 injection period, we can mitigate the temperature by modulating the
47 At a specific OPP, we can assume that injecting idle cycle on all CPUs
49 idle state target residency, we end up dropping the static and the
69 We use a fixed duration of idle injection that gives an acceptable
132 - It is less than or equal to the latency we tolerate when the
134 user experience, reactivity vs performance trade off we want. This
137 - It is greater than the idle state’s target residency we want to go
138 for thermal mitigation, otherwise we end up consuming more energy.
[all …]
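If the idle duration is fixed, the running duration is what sets the mitigation ratio idle / (idle + running); solving for the running time gives the small helper below. The relationship is implied by the surrounding text rather than stated in the snippet, so treat this as an assumption:

    /* Idle-injection duty-cycle arithmetic (illustrative values only). */
    #include <stdio.h>

    /* Running time needed for a given idle duration and target percent:
     * percent = 100 * idle / (idle + running)  =>
     * running = idle * (100 - percent) / percent. */
    static unsigned int running_us(unsigned int idle_us, unsigned int percent)
    {
        return idle_us * (100 - percent) / percent;
    }

    int main(void)
    {
        unsigned int idle_us = 10000;   /* fixed injection duration */

        /* A 25% idle ratio means 10 ms idle, 30 ms running per period. */
        printf("running: %u us\n", running_us(idle_us, 25));
        return 0;
    }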
/linux/drivers/scsi/aic7xxx/
aic79xx.seq
85 * If we have completions stalled waiting for the qfreeze
109 * ENSELO is cleared by a SELDO, so we must test for SELDO
149 * We have received good status for this transaction. There may
169 * Since this status did not consume a FIFO, we have to
170 * be a bit more diligent in how we check for FIFOs pertaining
178 * count in the SCB. In this case, we allow the routine servicing
183 * we detect case 1, we will properly defer the post of the SCB
222 * bad SCSI status (currently only for underruns), we
223 * queue the SCB for normal completion. Otherwise, we
258 * If we have relatively few commands outstanding, don't
[all …]
/linux/fs/jbd2/
transaction.c
70 * have an existing running transaction: we only make a new transaction
71 * once we have started to commit the old one).
74 * The journal MUST be locked. We don't perform atomic mallocs on the
75 * new transaction and we can't block without protecting against other
175 * We don't call jbd2_might_wait_for_commit() here as there's no in wait_transaction_switching()
198 * Wait until we can add credits for handle to the running transaction. Called
200 * transaction. Returns 1 if we had to wait, j_state_lock is dropped, and
204 * value, we need to fake out sparse so ti doesn't complain about a
229 * potential buffers requested by this operation, we need to in add_transaction_credits()
236 * then start to commit it: we can then go back and in add_transaction_credits()
[all …]
/linux/Documentation/arch/powerpc/
pci_iov_resource_on_powernv.rst
40 The following section provides a rough description of what we have on P8
52 For DMA, MSIs and inbound PCIe error messages, we have a table (in
55 We call this the RTT.
57 - For DMA we then provide an entire address space for each PE that can
63 - For MSIs, we have two windows in the address space (one at the top of
87 32-bit PCIe accesses. We configure that window at boot from FW and
91 reserved for MSIs but this is not a problem at this point; we just
93 ignores that however and will forward in that space if we try).
100 Now, this is the "main" window we use in Linux today (excluding
101 SR-IOV). We basically use the trick of forcing the bridge MMIO windows
[all …]
/linux/scripts/
generate_builtin_ranges.awk
12 # If we have seen this object before, return information from the cache.
36 # name (e.g. core). We check the associated module file name, and if
55 # We use a modified absolute start address (soff + base) as index because we
58 # So, we use (addr << 1) + 1 to allow a possible anchor record to be placed at
75 # and we record the object name "crypto/lzo-rle".
89 # We collect the base address of the section in order to convert all addresses
92 # We collect the address of the anchor (or first symbol in the section if there
96 # We collect the start address of any sub-section (section included in the top
108 # which format we are
[all …]
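The "(addr << 1) + 1" indexing mentioned near the end appears to work by shifting each symbol address left one bit so that a tie-breaking anchor record can occupy the even key (addr << 1) and sort strictly before any symbol at the same address; the snippet is truncated, so this reading is an assumption. A sketch of the keying in C:

    /* The shifted-key tie-break trick (assumed reading of the comment). */
    #include <stdio.h>

    static unsigned long symbol_key(unsigned long addr)
    {
        return (addr << 1) + 1;     /* odd keys: ordinary symbols */
    }

    static unsigned long anchor_key(unsigned long addr)
    {
        return addr << 1;           /* even keys: anchor records */
    }

    int main(void)
    {
        unsigned long addr = 0x1000;

        /* The anchor always sorts before a symbol at the same address. */
        printf("anchor %#lx < symbol %#lx: %d\n",
               anchor_key(addr), symbol_key(addr),
               anchor_key(addr) < symbol_key(addr));
        return 0;
    }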
/linux/arch/powerpc/kvm/
book3s_hv_rm_xics.c
70 * We start the search from our current CPU Id in the core map
71 * and go in a circle until we get back to our ID looking for a
102 * visible before we return to caller (and the in grab_next_hostcore()
147 * if we can't find one, set up state to eventually return too hard. in icp_rm_set_vcpu_irq()
193 * the state already. This is why we never clear the interrupt output in icp_rm_try_update()
194 * here, we only ever set it. The clear only happens prior to doing in icp_rm_try_update()
195 * an update and only by the processor itself. Currently we do it in icp_rm_try_update()
198 * We also do not try to figure out whether the EE state has changed, in icp_rm_try_update()
199 * we unconditionally set it if the new state calls for it. The reason in icp_rm_try_update()
200 * for that is that we opportunistically remove the pending interrupt in icp_rm_try_update()
[all …]
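The circular scan in the first two lines is a standard pattern: start just past our own id and wrap around until we either find a usable entry or arrive back where we started. A stand-in sketch, not the KVM XICS code:

    /* Circular search starting past our own id (sketch, not KVM code). */
    #include <stdio.h>

    #define NR_CPUS 8

    /* First available cpu after @self, or -1 after a full circle. */
    static int grab_next_cpu(const int *available, int self)
    {
        for (int i = 1; i < NR_CPUS; i++) {
            int cpu = (self + i) % NR_CPUS;

            if (available[cpu])
                return cpu;
        }
        return -1;      /* back at our own id without finding one */
    }

    int main(void)
    {
        const int available[NR_CPUS] = { 0, 0, 0, 0, 0, 1, 0, 0 };

        printf("next cpu: %d\n", grab_next_cpu(available, 6));  /* 5 */
        return 0;
    }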
/linux/kernel/irq/
spurious.c
51 * All handlers must agree on IRQF_SHARED, so we test just the in try_one_irq()
165 * We need to take desc->lock here. note_interrupt() is called in __report_bad_irq()
166 * w/o desc->lock held, but IRQ_PROGRESS set. We might race in __report_bad_irq()
197 /* We didn't actually handle the IRQ - see if it was misrouted? */ in try_misrouted_irq()
202 * But for 'irqfixup == 2' we also do it for handled interrupts if in try_misrouted_irq()
213 * Since we don't get the descriptor lock, "action" can in try_misrouted_irq()
235 * We cannot call note_interrupt from the threaded handler in note_interrupt()
236 * because we need to look at the compound of all handlers in note_interrupt()
238 * shared case we have no serialization against an incoming in note_interrupt()
239 * hardware interrupt while we are dealing with a threaded in note_interrupt()
[all …]
/linux/fs/xfs/
xfs_reflink.c
43 * alter the blocks in a different file; the way that we'll do that is
45 * means that when we want to write to a shared block, we allocate a new
46 * block, write the data to the new block, and if that succeeds we map the
52 * for bigger chunks less often, which is exactly what we want for CoW.
55 * writable (write_begin or page_mkwrite). If the offset is not mapped, we
80 * We want to adapt the delalloc mechanism for copy-on-write, since the
83 * the mappings must be stored in a separate CoW fork because we do not want
84 * to disturb the mapping in the data fork until we're sure that the write
93 * Just prior to submitting the actual disk write requests, we convert
102 * because we don't want to destroy the old data fork map until we're sure
[all …]
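The write path the snippet describes -- never touch a shared block in place, write a fresh block, and switch the mapping over only once the write succeeded -- can be outlined with hypothetical stand-in types:

    /* Copy-on-write outline (hypothetical stand-ins, not xfs_reflink.c). */
    #include <stdbool.h>
    #include <stdio.h>

    struct mapping {
        long block;      /* physical block backing this file offset */
        bool shared;     /* more than one file references the block */
    };

    static long alloc_block(void) { static long next = 100; return next++; }
    static int write_block(long block) { (void)block; return 0; /* ok */ }

    static int cow_write(struct mapping *map)
    {
        if (!map->shared)
            return write_block(map->block);   /* overwrite in place */

        long new_block = alloc_block();

        /* Write the new copy first; only remap if that succeeded. */
        if (write_block(new_block) != 0)
            return -1;                        /* old data still intact */
        map->block = new_block;
        map->shared = false;
        return 0;
    }

    int main(void)
    {
        struct mapping map = { .block = 7, .shared = true };

        cow_write(&map);
        printf("now mapped to block %ld, shared=%d\n", map.block, map.shared);
        return 0;
    }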
xfs_inode.c
56 * inode is in b-tree format, then we need to lock the inode exclusively until
58 * our parallelism unnecessarily, though. What we do instead is check to see
126 * The difference in mmap_lock locking order mean that we cannot hold the
133 * Hence to serialise fully against both syscall and mmap based IO, we need to
135 * both taken in places where we need to invalidate the page cache in a race
234 * that we know which locks to drop.
296 * Sometimes we assert the ILOCK is held exclusively, but we're in in xfs_assert_ilocked()
297 * a workqueue, so lockdep doesn't know we're the owner. in xfs_assert_ilocked()
335 * parent locking. Care must be taken to ensure we don't overrun the subclass
336 * storage fields in the class mask we build.
[all …]
/linux/arch/powerpc/kexec/
core_64.c
47 * Since we use the kernel fault handlers and paging code to in machine_kexec_prepare()
48 * handle the virtual mode, we must make sure no destination in machine_kexec_prepare()
55 /* We also should not overwrite the tce tables */ in machine_kexec_prepare()
88 * We rely on kexec_load to create a lists that properly in copy_segments()
90 * We will still crash if the list is wrong, but at least in copy_segments()
123 * After this call we may not use anything allocated in dynamic in kexec_copy_flush()
131 * we need to clear the icache for all dest pages sometime, in kexec_copy_flush()
148 mb(); /* make sure our irqs are disabled before we say they are */ in kexec_smp_down()
155 * Now every CPU has IRQs off, we can clear out any pending in kexec_smp_down()
173 /* Make sure each CPU has at least made it to the state we need. in kexec_prepare_cpus_wait()
[all …]
/linux/mm/
mmap_lock.c
109 * We should use WRITE_ONCE() here because we can have concurrent reads in __vma_start_write()
111 * We don't really care about the correctness of that early check, but in __vma_start_write()
112 * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy. in __vma_start_write()
133 * We are the only writer, so no need to use vma_refcount_put(). in vma_mark_detached()
153 * locked result to avoid performance overhead, in which case we fall back to
156 * reused and attached to a different mm before we lock it.
172 * We can use READ_ONCE() for the mm_lock_seq here, and don't need in vma_start_read()
174 * we don't rely on for anything - the mm_lock_seq read against which we in vma_start_read()
202 * False unlocked result is impossible because we modify and check in vma_start_read()
206 * We must use ACQUIRE semantics for the mm_lock_seq so that if we are in vma_start_read()
[all …]
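The mm_lock_seq reasoning in these comments follows the sequence-count pattern: a reader samples a sequence number, performs its reads, and retries if a writer bumped the count in the meantime. A simplified C11 illustration of that retry loop; the kernel's memory-ordering details are subtler than this:

    /* Sequence-count read/retry pattern (simplified; not the mm code). */
    #include <stdatomic.h>
    #include <stdio.h>

    struct guarded {
        atomic_uint seq;    /* odd while a writer is mid-update */
        int value;
    };

    static int read_value(struct guarded *g)
    {
        unsigned int s;
        int v = 0;

        do {
            s = atomic_load_explicit(&g->seq, memory_order_acquire);
            if (s & 1)
                continue;               /* writer in progress: retry */
            v = g->value;
            atomic_thread_fence(memory_order_acquire);
        } while ((s & 1) ||
                 atomic_load_explicit(&g->seq, memory_order_relaxed) != s);
        return v;
    }

    static void write_value(struct guarded *g, int v)
    {
        atomic_fetch_add_explicit(&g->seq, 1, memory_order_relaxed);
        atomic_thread_fence(memory_order_release);
        g->value = v;
        atomic_fetch_add_explicit(&g->seq, 1, memory_order_release);
    }

    int main(void)
    {
        struct guarded g = { 0, 42 };

        write_value(&g, 43);
        printf("read: %d\n", read_value(&g));
        return 0;
    }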
/linux/kernel/futex/
pi.c
114 * We need to check the following states:
134 * [1] Indicates that the kernel can acquire the futex atomically. We
218 * We get here with hb->lock held, and having found a in attach_to_pi_state()
227 * free pi_state before we can take a reference ourselves. in attach_to_pi_state()
232 * Now that we have a pi_state, we can acquire wait_lock in attach_to_pi_state()
240 * still is what we expect it to be, otherwise retry the entire in attach_to_pi_state()
383 * This creates pi_state, we have hb->lock held, this means nothing can in __attach_to_pi_owner()
419 * We are the first waiter - try to look up the real owner and attach in attach_to_pi_owner()
437 * We need in attach_to_pi_owner()
[all …]
/linux/include/linux/
lru_cache.h
29 We use an LRU policy if it is necessary to "cool down" a region currently in
30 the active set before we can "heat" a previously unused region.
33 As it actually Tracks Objects in an Active SeT, we could also call it
35 backend storage upon next resync, if we don't get it right).
39 We replicate IO (more or less synchronously) to local and remote disk.
42 we need to resync all regions that have been target of in-flight WRITE IO
43 (in use, or "hot", regions), as we don't know whether or not those WRITEs
46 To avoid a "full resync", we need to persistently track these regions.
57 If we set a hard limit on the area that may be "hot" at any given time, we
61 we need to resync all blocks that have been changed on the other replica
[all …]
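The "hard limit on the hot area" idea reads naturally as a fixed-capacity set with LRU eviction: touching a new region when the set is full first cools the least recently used member. A tiny array-based model, not the DRBD code:

    /* Fixed-size hot set with LRU cooling (model, not lru_cache.h). */
    #include <stdio.h>

    #define HOT_SET_SIZE 3

    struct hot_set {
        long region[HOT_SET_SIZE];   /* most recent first, -1 = empty */
    };

    static void touch_region(struct hot_set *s, long nr)
    {
        int i;

        /* Already hot?  Just move it to the front. */
        for (i = 0; i < HOT_SET_SIZE && s->region[i] != nr; i++)
            ;
        if (i == HOT_SET_SIZE) {
            i = HOT_SET_SIZE - 1;    /* evict the coldest entry */
            if (s->region[i] != -1)
                printf("cooling region %ld\n", s->region[i]);
        }
        for (; i > 0; i--)
            s->region[i] = s->region[i - 1];
        s->region[0] = nr;
    }

    int main(void)
    {
        struct hot_set s = { { -1, -1, -1 } };

        touch_region(&s, 1);
        touch_region(&s, 2);
        touch_region(&s, 3);
        touch_region(&s, 4);    /* set is full: region 1 gets cooled */
        return 0;
    }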
