
Searched full:we (Results 1 – 25 of 7299) sorted by relevance


/linux/fs/btrfs/
space-info.c
28 * 1) space_info. This is the ultimate arbiter of how much space we can use.
31 * reservations we care about total_bytes - SUM(space_info->bytes_) when
36 * metadata reservation we have. You can see the comment in the block_rsv
40 * 3) btrfs_calc*_size. These are the worst case calculations we used based
41 * on the number of items we will want to modify. We have one for changing
42 * items, and one for inserting new items. Generally we use these helpers to
48 * We call into either btrfs_reserve_data_bytes() or
49 * btrfs_reserve_metadata_bytes(), depending on which we're looking for, with
50 * num_bytes we want to reserve.
67 * Assume we are unable to simply make the reservation because we do not have
[all …]
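
The accounting this snippet describes reduces to a simple invariant: usable space is total_bytes minus the sum of the bytes_ counters. A minimal userspace sketch of that arbiter, where space_available and reserve_bytes are illustrative names, not the btrfs API:

    #include <stdio.h>

    struct space_info {
            unsigned long long total_bytes;
            unsigned long long bytes_used;
            unsigned long long bytes_reserved;
            unsigned long long bytes_pinned;
            unsigned long long bytes_may_use;
    };

    static unsigned long long space_available(const struct space_info *si)
    {
            return si->total_bytes - (si->bytes_used + si->bytes_reserved +
                                      si->bytes_pinned + si->bytes_may_use);
    }

    static int reserve_bytes(struct space_info *si, unsigned long long num_bytes)
    {
            if (num_bytes > space_available(si))
                    return -1;              /* would overcommit: flush or fail */
            si->bytes_may_use += num_bytes; /* the reservation itself */
            return 0;
    }

    int main(void)
    {
            struct space_info si = { .total_bytes = 1ULL << 30 };

            printf("reserve 4k: %d\n", reserve_bytes(&si, 4096));
            return 0;
    }
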
fiemap.c
30 * - Cache the next entry to be emitted to the fiemap buffer, so that we can
35 * buffer is memory mapped to the fiemap target file, we don't deadlock
36 * during btrfs_page_mkwrite(). This is because during fiemap we are locking
40 * if the fiemap buffer is memory mapped to the file we are running fiemap
53 * the next file extent item we must search for in the inode's subvolume
59 * This matches struct fiemap_extent_info::fi_mapped_extents, we use it
61 * fiemap_fill_next_extent() because we buffer ready fiemap entries at
62 * the @entries array, and we want to stop as soon as we hit the max
86 * Ignore 1 (reached max entries) because we keep track of that in flush_fiemap_cache()
102 * And only when we fail to merge, the cached one will be submitted as
[all …]
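
The caching trick described here is a classic merge-or-flush: keep one pending entry, extend it while the next extent is contiguous with matching flags, and submit it only when a merge fails. A hedged sketch with hypothetical types; cached_extent and cache_extent are not the btrfs implementation:

    #include <stdbool.h>
    #include <stdio.h>

    struct cached_extent {
            unsigned long long offset, phys, len;
            unsigned int flags;
            bool cached;
    };

    static void emit(const struct cached_extent *e)
    {
            printf("extent: off=%llu phys=%llu len=%llu\n",
                   e->offset, e->phys, e->len);
    }

    static void cache_extent(struct cached_extent *c, unsigned long long offset,
                             unsigned long long phys, unsigned long long len,
                             unsigned int flags)
    {
            if (c->cached && c->offset + c->len == offset &&
                c->phys + c->len == phys && c->flags == flags) {
                    c->len += len;          /* contiguous: merge */
                    return;
            }
            if (c->cached)
                    emit(c);                /* merge failed: submit cached one */
            *c = (struct cached_extent){ offset, phys, len, flags, true };
    }

    int main(void)
    {
            struct cached_extent c = { .cached = false };

            cache_extent(&c, 0, 4096, 4096, 0);
            cache_extent(&c, 4096, 8192, 4096, 0); /* merges with the first */
            cache_extent(&c, 16384, 0, 4096, 1);   /* flushes the merged pair */
            emit(&c);                              /* final flush */
            return 0;
    }
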
tree-log.c
49 * log, we must force a full commit before doing an fsync of the directory
63 * 2) we must log any new names for any file or dir that is in the fsync
66 * 2a) we must log any new names for any file or dir during rename
73 * 3) after a crash, we must go through any directories with a link count
90 * stage (0) is to only pin down the blocks we find
92 * we find in the log are created in the subvolume.
126 * Instead of doing a tree commit on every fsync, we use the
150 * We're holding a transaction handle whether we are logging or in btrfs_iget_logging()
151 * replaying a log tree, so we must make sure NOFS semantics apply in btrfs_iget_logging()
252 * returns 0 if there was a log transaction running and we were able
[all …]
/linux/fs/xfs/
xfs_log_cil.c
23 * recover, so we don't allow failure here. Also, we allocate in a context that
24 * we don't want to be issuing transactions from, so we need to tell the
27 * We don't reserve any space for the ticket - we are going to steal whatever
28 * space we require from transactions as they commit. To ensure we reserve all
29 * the space required, we need to set the current reservation of the ticket to
30 * zero so that we know to steal the initial transaction overhead from the
42 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc()
62 * We can't rely on just the log item being in the CIL, we have to check
80 * current sequence, we're in a new checkpoint. in xlog_item_in_current_chkpt()
140 * We're in the middle of switching cil contexts. Reset the in xlog_cil_push_pcp_aggregate()
[all …]
xfs_log_priv.h
74 * By covering, we mean changing the h_tail_lsn in the last on-disk
83 * might include space beyond the EOF. So if we just push the EOF a
91 * system is idle. We need two dummy transactions because the h_tail_lsn
103 * we are done covering previous transactions.
104 * NEED -- logging has occurred and we need a dummy transaction
106 * DONE -- we were in the NEED state and have committed a dummy
108 * NEED2 -- we detected that a dummy transaction has gone to the
110 * DONE2 -- we committed a dummy transaction when in the NEED2 state.
112 * There are two places where we switch states:
114 * 1.) In xfs_sync, when we detect an idle log and are in NEED or NEED2.
[all …]
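
The covering states above form a small state machine: real logging resets to NEED, an idle log commits a dummy (NEED -> DONE, NEED2 -> DONE2), and the first dummy reaching disk advances DONE -> NEED2. A sketch under those assumptions; names are illustrative, not the xfs code:

    #include <stdio.h>

    enum cover_state { COVER_IDLE, COVER_NEED, COVER_DONE,
                       COVER_NEED2, COVER_DONE2 };

    /* An idle log commits a dummy transaction (the xfs_sync case). */
    static enum cover_state cover_on_idle(enum cover_state s)
    {
            if (s == COVER_NEED)  return COVER_DONE;
            if (s == COVER_NEED2) return COVER_DONE2;
            return s;
    }

    /* The first dummy transaction has gone to the on-disk log. */
    static enum cover_state cover_on_dummy_on_disk(enum cover_state s)
    {
            return s == COVER_DONE ? COVER_NEED2 : s;
    }

    /* Any real logging activity restarts covering. */
    static enum cover_state cover_on_log(enum cover_state s)
    {
            (void)s;
            return COVER_NEED;
    }

    int main(void)
    {
            enum cover_state s = cover_on_log(COVER_IDLE);

            s = cover_on_idle(s);           /* NEED  -> DONE  */
            s = cover_on_dummy_on_disk(s);  /* DONE  -> NEED2 */
            s = cover_on_idle(s);           /* NEED2 -> DONE2 */
            printf("covered after two dummies: %d\n", s == COVER_DONE2);
            return 0;
    }
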
xfs_log.c
78 * We need to make sure the buffer pointer returned is naturally aligned for the
79 * biggest basic data type we put into it. We have already accounted for this
82 * However, this padding does not get written into the log, and hence we have to
87 * We also add space for the xlog_op_header that describes this region in the
88 * log. This prepends the data region we return to the caller to copy their data
90 * is not 8 byte aligned, we have to be careful to ensure that we align the
91 * start of the buffer such that the region we return to the call is 8 byte
172 * we have overrun available reservation space, return 0. The memory barrier
290 * path. Hence any lock will be globally hot if we take it unconditionally on
293 * As tickets are only ever moved on and off head->waiters under head->lock, we
[all …]
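
The alignment requirement described here amounts to rounding the write pointer up to the size of the largest basic type stored in the region (8 bytes), with the padding itself never written to the log. A minimal sketch assuming a simple byte-offset head; log_reserve is a hypothetical helper:

    #include <stddef.h>
    #include <stdio.h>

    #define LOG_ALIGN ((size_t)8)   /* biggest basic type we store */

    /* Returns the new head; *data_off is where the caller copies data. */
    static size_t log_reserve(size_t head, size_t len, size_t *data_off)
    {
            size_t aligned = (head + LOG_ALIGN - 1) & ~(LOG_ALIGN - 1);

            *data_off = aligned;    /* padding before this is never written */
            return aligned + len;
    }

    int main(void)
    {
            size_t off, head = 13;  /* deliberately unaligned head */

            head = log_reserve(head, 100, &off);
            printf("data at %zu, new head %zu\n", off, head);  /* 16, 116 */
            return 0;
    }
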
xfs_log_recover.c
78 * Pass log block 0 since we don't have an addr yet, buffer will be in xlog_alloc_buffer()
88 * We do log I/O in units of log sectors (a power-of-2 multiple of the in xlog_alloc_buffer()
89 * basic block size), so we round up the requested size to accommodate in xlog_alloc_buffer()
97 * blocks (sector size 1). But otherwise we extend the buffer by one in xlog_alloc_buffer()
249 * h_fs_uuid is null, we assume this log was last mounted in xlog_header_check_mount()
328 * range of basic blocks we'll be examining. If that fails, in xlog_find_verify_cycle()
329 * try a smaller size. We need to be able to read at least in xlog_find_verify_cycle()
330 * a log sector, or we're out of luck. in xlog_find_verify_cycle()
385 * a good log record. Therefore, we subtract one to get the block number
387 * of blocks we would have read on a previous read. This happens when the
[all …]
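
Rounding a requested block count up to a power-of-two log sector size, as described above, is a one-liner. A sketch; round_up_to_sector is an illustrative name:

    #include <stdio.h>

    /* sectbb: log sector size as a power-of-two count of basic blocks */
    static unsigned int round_up_to_sector(unsigned int nbblks,
                                           unsigned int sectbb)
    {
            return (nbblks + sectbb - 1) & ~(sectbb - 1);
    }

    int main(void)
    {
            printf("%u\n", round_up_to_sector(5, 4));  /* 5 blocks -> 8 */
            return 0;
    }
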
/linux/drivers/md/bcache/
journal.h
9 * never spans two buckets. This means (not implemented yet) we can resize the
15 * We also keep some things in the journal header that are logically part of the
20 * rewritten when we want to move/wear level the main journal.
22 * Currently, we don't journal BTREE_REPLACE operations - this will hopefully be
25 * moving gc we work around it by flushing the btree to disk before updating the
35 * We track this by maintaining a refcount for every open journal entry, in a
38 * zero, we pop it off - thus, the size of the fifo tells us the number of open
41 * We take a refcount on a journal entry when we add some keys to a journal
42 * entry that we're going to insert (held by struct btree_op), and then when we
43 * insert those keys into the btree the btree write we're setting up takes a
[all …]
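
The refcount fifo described here can be sketched in a few lines: one count per open journal entry, oldest at the tail, popped once it drops to zero, so the fifo length equals the number of open entries. A hypothetical structure, not the bcache one:

    #include <stdio.h>

    #define JFIFO_MAX 16

    struct journal_fifo {
            unsigned int ref[JFIFO_MAX];
            unsigned int head, tail;        /* tail indexes the oldest entry */
    };

    /* The fifo size is the number of open journal entries. */
    static unsigned int jfifo_open_entries(const struct journal_fifo *f)
    {
            return f->head - f->tail;
    }

    /* Queuing keys into an entry takes a ref (the btree_op case above)... */
    static void jentry_get(struct journal_fifo *f, unsigned int idx)
    {
            f->ref[idx % JFIFO_MAX]++;
    }

    /* ...and the btree write that persists them drops it. */
    static void jentry_put(struct journal_fifo *f, unsigned int idx)
    {
            --f->ref[idx % JFIFO_MAX];
            while (f->tail != f->head && !f->ref[f->tail % JFIFO_MAX])
                    f->tail++;              /* pop fully released entries */
    }

    int main(void)
    {
            struct journal_fifo f = { .ref = { 1, 1 }, .head = 2 };

            jentry_get(&f, 1);      /* a second insert pins entry 1 */
            jentry_put(&f, 0);      /* entry 0 released and popped */
            printf("open entries: %u\n", jfifo_open_entries(&f)); /* 1 */
            return 0;
    }
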
bset.h
17 * We use two different functions for validating bkeys, bch_ptr_invalid and
27 * them on disk, just unnecessary work - so we filter them out when resorting
30 * We can't filter out stale keys when we're resorting, because garbage
32 * unless we're rewriting the btree node those stale keys still exist on disk.
34 * We also implement functions here for removing some number of sectors from the
44 * There could be many of them on disk, but we never allow there to be more than
45 * 4 in memory - we lazily resort as needed.
47 * We implement code here for creating and maintaining auxiliary search trees
48 * (described below) for searching an individual bset, and on top of that we
62 * Since keys are variable length, we can't use a binary search on a bset - we
[all …]
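
Why binary search fails on a bset: with variable-length keys, the offset of the n-th key depends on the length of every key before it, so the only direct lookup is a forward walk (which is what the auxiliary search trees mentioned above exist to avoid). A sketch with a hypothetical key layout:

    #include <stdio.h>
    #include <stddef.h>

    struct vkey {
            unsigned char len;      /* payload length in bytes */
            unsigned char data[];
    };

    /* The n-th key's offset depends on every key before it, so we walk. */
    static const struct vkey *nth_key(const unsigned char *set, size_t size,
                                      unsigned int n)
    {
            const unsigned char *p = set, *end = set + size;

            while (n-- && p < end)
                    p += 1 + ((const struct vkey *)p)->len;
            return p < end ? (const struct vkey *)p : NULL;
    }

    int main(void)
    {
            /* three keys: lengths 3, 1, 2 */
            unsigned char set[] = { 3, 'a', 'b', 'c', 1, 'x', 2, 'h', 'i' };
            const struct vkey *k = nth_key(set, sizeof(set), 2);

            printf("key 2 has len %u\n", k ? k->len : 0);  /* 2 */
            return 0;
    }
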
bcache.h
29 * "cached" data is always dirty. The end result is that we get thin
38 * operation all of our available space will be allocated. Thus, we need an
39 * efficient way of deleting things from the cache so we can write new things to
42 * To do this, we first divide the cache device up into buckets. A bucket is the
51 * The priority is used to implement an LRU. We reset a bucket's priority when
52 * we allocate it or on a cache hit, and every so often we decrement the priority
59 * we have to do is increment its gen (and write its new gen to disk; we batch
62 * Bcache is entirely COW - we never write twice to a bucket, even buckets that
110 * Our unit of allocation is a bucket, and we can't arbitrarily allocate and
113 * (If buckets are really big we'll only use part of the bucket for a btree node
[all …]
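
The priority/generation scheme described here can be shown with two small structs: pointers into a bucket record the gen they were created with, and bumping the bucket's gen invalidates them all at once. Illustrative fields, not the bcache structures:

    #include <stdbool.h>
    #include <stdio.h>

    struct bucket {
            unsigned short prio;    /* reset on allocate / cache hit (LRU) */
            unsigned char gen;      /* bumped to invalidate the bucket */
    };

    struct bkey_ptr {
            unsigned int bucket_nr;
            unsigned char gen;      /* bucket's gen when the key was made */
    };

    static bool ptr_stale(const struct bucket *b, const struct bkey_ptr *k)
    {
            return k->gen != b[k->bucket_nr].gen;
    }

    int main(void)
    {
            struct bucket buckets[1] = { { .prio = 100, .gen = 3 } };
            struct bkey_ptr k = { .bucket_nr = 0, .gen = 3 };

            buckets[0].gen++;       /* frees everything the bucket held */
            printf("stale: %d\n", ptr_stale(buckets, &k));  /* 1 */
            return 0;
    }
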
/linux/net/ipv4/
tcp_vegas.c
15 * o We do not change the loss detection or recovery mechanisms of
19 * only every-other RTT during slow start, we increase during
22 * we use the rate at which ACKs come back as the "actual"
24 * o To speed convergence to the right rate, we set the cwnd
25 * to achieve the right ("actual") rate when we exit slow start.
26 * o To filter out the noise caused by delayed ACKs, we use the
55 /* There are several situations when we must "re-start" Vegas:
60 * o when we send a packet and there is no outstanding
63 * In these circumstances we cannot do a Vegas calculation at the
64 * end of the first RTT, because any calculation we do is using
[all …]
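
The Vegas idea sketched in these comments compares the expected rate (cwnd/base_rtt) with the actual rate inferred from returning ACKs (cwnd/rtt); their difference estimates packets queued in the network. An illustrative version of that arithmetic, not the kernel module's exact expression:

    #include <stdio.h>

    /* (expected - actual) * base_rtt == cwnd * (rtt - base_rtt) / rtt,
     * i.e. an estimate of packets sitting in queues along the path. */
    static long vegas_diff(long cwnd, long base_rtt, long rtt)
    {
            return cwnd * (rtt - base_rtt) / rtt;
    }

    int main(void)
    {
            long diff = vegas_diff(20, 100, 125);

            /* grow cwnd if diff < alpha, shrink if diff > beta */
            printf("estimated queued packets: %ld\n", diff);  /* 4 */
            return 0;
    }
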
/linux/arch/powerpc/mm/nohash/
tlb_low_64e.S
91 /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */
93 /* We do the user/kernel test for the PID here along with the RW test
95 /* We pre-test some combination of permissions to avoid double
98 * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE
103 * writeable, we will take a new fault later, but that should be
106 * We also move ESR_ST in _PAGE_DIRTY position
109 * MAS1 is preset for all we need except for TID that needs to
137 * We are entered with:
176 /* Now we build the MAS:
219 /* We need to check if it was an instruction miss */
[all …]
/linux/fs/xfs/scrub/
agb_bitmap.c
19 * We know that the btree query_all function starts at the left edge and walks
20 * towards the right edge of the tree. Therefore, we know that we can walk up
22 * to the first record/key in that block, we haven't seen this block before;
23 * and therefore we need to remember that we saw this block in the btree.
32 * the first btree record, we'll observe that bc_levels[0].ptr == 1, so we
33 * record that we saw block 1. Then we observe that bc_levels[1].ptr == 1, so
34 * we record block 4. The list is [1, 4].
36 * For the second btree record, we see that bc_levels[0].ptr == 2, so we exit
39 * For the 101st btree record, we've moved onto leaf block 2. Now
40 * bc_levels[0].ptr == 1 again, so we record that we saw block 2. We see that
[all …]
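
The walk-up rule in this example is compact: after visiting a record, climb the cursor levels while ptr == 1, because ptr == 1 means this is the first record or key seen in the block at that level. A sketch with a hypothetical cursor layout reproducing the [1, 4] example above:

    #include <stdio.h>

    struct cur_level { int ptr; long block; };

    static void mark_seen(long b) { printf("saw block %ld\n", b); }

    /* Climb while ptr == 1: first record/key in that block. */
    static void record_new_blocks(const struct cur_level *levels, int nlevels)
    {
            for (int i = 0; i < nlevels; i++) {
                    if (levels[i].ptr != 1)
                            break;          /* block already seen */
                    mark_seen(levels[i].block);
            }
    }

    int main(void)
    {
            struct cur_level lv[2] = { { 1, 1 }, { 1, 4 } };

            record_new_blocks(lv, 2);       /* prints blocks 1 and 4 */
            return 0;
    }
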
alloc_repair.c
48 * AG. Therefore, we can recreate the free extent records in an AG by looking
60 * walking the rmapbt records, we create a second bitmap @not_allocbt_blocks to
72 * The OWN_AG bitmap itself isn't needed after this point, so what we really do
83 * written to the new btree indices. We reconstruct both bnobt and cntbt at
84 * the same time since we've already done all the work.
86 * We use the prefix 'xrep_abt' here because we regenerate both free space
118 * Next block we anticipate seeing in the rmap records. If the next
119 * rmap record is greater than next_agbno, we have found unused space.
126 /* Longest free extent we found in the AG. */
139 * Make sure the busy extent list is clear because we can't put extents in xrep_setup_ag_allocbt()
[all …]
fscounters.c
32 * The basics of filesystem summary counter checking are that we iterate the
35 * Then we compare what we computed against the in-core counters.
38 * While we /could/ freeze the filesystem and scramble around the AGs counting
39 * the free blocks, in practice we prefer not to do that for a scan because
40 * freezing is costly. To get around this, we added a per-cpu counter of the
41 * delalloc reservations so that we can rotor around the AGs relatively
42 * quickly, and we allow the counts to be slightly off because we're not taking
43 * any locks while we do this.
45 * So the first thing we do is warm up the buffer cache in the setup routine by
48 * structures as quickly as it can. We snapshot the percpu counters before and
[all …]
/linux/drivers/usb/dwc2/
hcd_queue.c
32 /* If we get a NAK, wait this long before retrying */
121 * @num_bits: The number of bits we need per period we want to reserve
123 * @interval: How often we need to be scheduled for the reservation this
127 * the interval or we return failure right away.
128 * @only_one_period: Normally we'll allow picking a start anywhere within the
129 * first interval, since we can still make all repetition
131 * here then we'll return failure if we can't fit within
134 * The idea here is that we want to schedule time for repeating events that all
139 * To keep things "simple", we'll represent our schedule with a bitmap that
141 * but does mean that we need to handle things specially (and non-ideally) if
[all …]
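
The reservation search described here is a bitmap placement problem: try each start offset within the first interval and accept it only if every repetition of the slot is free. A sketch under those assumptions; find_start is illustrative, not the dwc2 scheduler:

    #include <stdbool.h>
    #include <stdio.h>

    #define SCHED_BITS 64   /* one bit per schedulable slot */

    static bool slot_free(const unsigned char *map, int bit)
    {
            return !(map[bit / 8] & (1 << (bit % 8)));
    }

    /* Returns a start bit within the first interval, or -1. */
    static int find_start(const unsigned char *map, int num_bits, int interval)
    {
            for (int start = 0; start + num_bits <= interval; start++) {
                    bool ok = true;

                    for (int rep = start; ok && rep + num_bits <= SCHED_BITS;
                         rep += interval)
                            for (int b = 0; b < num_bits; b++)
                                    if (!slot_free(map, rep + b)) {
                                            ok = false;
                                            break;
                                    }
                    if (ok)
                            return start;
            }
            return -1;
    }

    int main(void)
    {
            unsigned char map[SCHED_BITS / 8] = { 0x01 };  /* slot 0 busy */

            printf("start=%d\n", find_start(map, 2, 16));  /* 1 */
            return 0;
    }
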
/linux/drivers/misc/vmw_vmci/
vmci_route.c
33 * which comes from the VMX, so we know it is coming from a in vmci_route()
36 * To avoid inconsistencies, test these once. We will test in vmci_route()
37 * them again when we do the actual send to ensure that we do in vmci_route()
49 * If this message already came from a guest then we in vmci_route()
57 * We must be acting as a guest in order to send to in vmci_route()
63 /* And we cannot send if the source is the host context. */ in vmci_route()
71 * then they probably mean ANY, in which case we in vmci_route()
87 * If it is not from a guest but we are acting as a in vmci_route()
88 * guest, then we need to send it down to the host. in vmci_route()
89 * Note that if we are also acting as a host then this in vmci_route()
[all …]
/linux/Documentation/filesystems/
directory-locking.rst
10 When taking the i_rwsem on multiple non-directory objects, we
11 always acquire the locks in order by increasing address. We'll call
22 * lock the directory we are accessing (shared)
26 * lock the directory we are accessing (exclusive)
73 in its own right; it may happen as part of lookup. We speak of the
74 operations on directory trees, but we obviously do not have the full
75 picture of those - especially for network filesystems. What we have
77 Trees grow as we do operations; memory pressure prunes them. Normally
78 that's not a problem, but there is a nasty twist - what should we do
83 * possibility that the directory we see in one place gets moved by the server
[all …]
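
The ordering rule quoted here (lock multiple non-directory objects by increasing address) is easy to demonstrate with a pthread stand-in for i_rwsem; lock_two is a hypothetical helper:

    #include <pthread.h>

    struct object { pthread_rwlock_t i_rwsem; };

    /* Take both locks in increasing-address order to avoid ABBA deadlock. */
    static void lock_two(struct object *a, struct object *b)
    {
            if (a > b) {
                    struct object *t = a; a = b; b = t;
            }
            pthread_rwlock_wrlock(&a->i_rwsem);
            if (a != b)
                    pthread_rwlock_wrlock(&b->i_rwsem);
    }

    int main(void)
    {
            struct object x = { PTHREAD_RWLOCK_INITIALIZER };
            struct object y = { PTHREAD_RWLOCK_INITIALIZER };

            lock_two(&x, &y);       /* same order regardless of call order */
            return 0;
    }
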
propagate_umount.txt
3 Umount propagation starts with a set of mounts we are already going to
4 take out. Ideally, we would like to add all downstream cognates to
39 is in the set, it will be resolved. However, we rely upon umount_tree()
51 We are given a closed set U and we want to find all mounts that have
64 subtrees of U, in which case we'd end up examining the same candidates
70 Note that if we run into a candidate we'd already seen, it must've been
73 if we find a child already added to the set, we know that everything
93 keep walking Propagation(p) from q until we find something
96 would get rid of that problem, but we need a sane implementation of
99 skip_them() being "repeat the forward-and-up part until we get NULL
[all …]
idmappings.rst
23 on, we will always prefix ids with ``u`` or ``k`` to make it clear whether
24 we're talking about an id in the upper or lower idmapset.
42 that make it easier to understand how we can translate between idmappings. For
43 example, we know that the inverse idmapping is an order isomorphism as well::
49 Given that we are dealing with order isomorphisms plus the fact that we're
50 dealing with subsets we can embed idmappings into each other, i.e. we can
51 sensibly translate between different idmappings. For example, assume we've been
61 Because we're dealing with order isomorphic subsets it is meaningful to ask
64 mapping ``k11000`` up to ``u1000``. Afterwards, we can map ``u1000`` down using
69 If we were given the same task for the following three idmappings::
[all …]
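
The up/down translations in this example can be modelled with a single extent { lower, upper, count }: mapping down adds the kernel base, mapping up subtracts it. A sketch reproducing the k11000 -> u1000 step and a map-down through a second idmapping; the extents here are made up for illustration:

    #include <stdio.h>

    struct idmap { unsigned lower, upper, count; }; /* uLOWER:kUPPER:rCOUNT */

    static long map_down(struct idmap m, unsigned u)  /* u -> k */
    {
            return (u >= m.lower && u - m.lower < m.count)
                    ? (long)(u - m.lower + m.upper) : -1;
    }

    static long map_up(struct idmap m, unsigned k)    /* k -> u */
    {
            return (k >= m.upper && k - m.upper < m.count)
                    ? (long)(k - m.upper + m.lower) : -1;
    }

    int main(void)
    {
            struct idmap m1 = { 0, 10000, 10000 };  /* u0:k10000:r10000 */
            struct idmap m2 = { 0, 20000, 10000 };  /* u0:k20000:r10000 */
            long u = map_up(m1, 11000);             /* k11000 -> u1000 */

            printf("k11000 -> u%ld -> k%ld\n", u, map_down(m2, (unsigned)u));
            return 0;
    }
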
/linux/drivers/gpu/drm/i915/gt/
intel_execlists_submission.c
24 * shouldn't we just need a set of those per engine command streamer? This is
35 * Regarding the creation of contexts, we have:
43 * like before) we need:
50 * more complex, because we don't know at creation time which engine is going
51 * to use them. To handle this, we have implemented a deferred creation of LR
55 * gets populated for a given engine once we receive an execbuffer. If later
56 * on we receive another execbuffer ioctl for the same context but a different
57 * engine, we allocate/populate a new ringbuffer and context backing object and
61 * only allowed with the render ring, we can allocate & populate them right
96 * we use a NULL second context) or the first two requests have unique IDs.
[all …]
/linux/arch/openrisc/mm/
fault.c
59 * We fault-in kernel-space virtual memory on-demand. The in do_page_fault()
62 * NOTE! We MUST NOT take any locks for this case. We may in do_page_fault()
68 * mappings we don't have to walk all processes pgdirs and in do_page_fault()
69 * add the high mappings all at once. Instead we do it as they in do_page_fault()
82 /* If exceptions were enabled, we can reenable them here */ in do_page_fault()
100 * If we're in an interrupt or have no user in do_page_fault()
101 * context, we must not take the fault.. in do_page_fault()
125 * we get page-aligned addresses so we can only check in do_page_fault()
126 * if we're within a page from usp, but that might be in do_page_fault()
137 * Ok, we have a good vm_area for this memory access, so in do_page_fault()
[all …]
/linux/Documentation/driver-api/thermal/
cpu-idle-cooling.rst
25 because of the OPP density, we can only choose an OPP with a power
35 If we can remove the static and the dynamic leakage for a specific
38 injection period, we can mitigate the temperature by modulating the
47 At a specific OPP, we can assume that injecting idle cycles on all CPUs
49 idle state target residency, we end up dropping the static and the
69 We use a fixed duration of idle injection that gives an acceptable
132 - It is less than or equal to the latency we tolerate when the
134 user experience, reactivity vs performance trade off we want. This
137 - It is greater than the idle state’s target residency we want to go
138 for thermal mitigation, otherwise we end up consuming more energy.
[all …]
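
Behind these constraints sits simple duty-cycle arithmetic: given a fixed idle duration (at least the idle state's target residency, at most the tolerable latency) and a target mitigation ratio, the running time between injections follows directly. A sketch; running_us is an illustrative helper:

    #include <stdio.h>

    /* idle_us: fixed injection duration; ratio_pct: % of the period idle */
    static unsigned int running_us(unsigned int idle_us, unsigned int ratio_pct)
    {
            return idle_us * (100 - ratio_pct) / ratio_pct;
    }

    int main(void)
    {
            /* 10 ms idle at a 33% duty cycle -> ~20 ms running per period */
            printf("run for %u us\n", running_us(10000, 33));
            return 0;
    }
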
/linux/drivers/scsi/aic7xxx/
aic79xx.seq
85 * If we have completions stalled waiting for the qfreeze
109 * ENSELO is cleared by a SELDO, so we must test for SELDO
149 * We have received good status for this transaction. There may
169 * Since this status did not consume a FIFO, we have to
170 * be a bit more diligent in how we check for FIFOs pertaining
178 * count in the SCB. In this case, we allow the routine servicing
183 * we detect case 1, we will properly defer the post of the SCB
222 * bad SCSI status (currently only for underruns), we
223 * queue the SCB for normal completion. Otherwise, we
258 * If we have relatively few commands outstanding, don't
[all …]
/linux/arch/powerpc/kexec/
core_64.c
47 * Since we use the kernel fault handlers and paging code to in machine_kexec_prepare()
48 * handle the virtual mode, we must make sure no destination in machine_kexec_prepare()
55 /* We also should not overwrite the tce tables */ in machine_kexec_prepare()
88 * We rely on kexec_load to create a list that properly in copy_segments()
90 * We will still crash if the list is wrong, but at least in copy_segments()
123 * After this call we may not use anything allocated in dynamic in kexec_copy_flush()
131 * we need to clear the icache for all dest pages sometime, in kexec_copy_flush()
148 mb(); /* make sure our irqs are disabled before we say they are */ in kexec_smp_down()
155 * Now every CPU has IRQs off, we can clear out any pending in kexec_smp_down()
173 /* Make sure each CPU has at least made it to the state we need. in kexec_prepare_cpus_wait()
[all …]
