/linux/fs/btrfs/ |
H A D | space-info.c | 27 * 1) space_info. This is the ultimate arbiter of how much space we can use. 30 * reservations we care about total_bytes - SUM(space_info->bytes_) when 35 * metadata reservation we have. You can see the comment in the block_rsv 39 * 3) btrfs_calc*_size. These are the worst case calculations we use based 40 * on the number of items we will want to modify. We have one for changing 41 * items, and one for inserting new items. Generally we use these helpers to 47 * We call into either btrfs_reserve_data_bytes() or 48 * btrfs_reserve_metadata_bytes(), depending on which we're looking for, with 49 * num_bytes we want to reserve. 66 * Assume we are unable to simply make the reservation because we do not have [all …]
|
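The btrfs_calc*_size helpers quoted above size a reservation for the worst case: every item touched may force a copy-on-write of one tree block per level, and an insertion may additionally split a block at every level. A minimal userspace sketch of that idea, where the node size, level count, and multipliers are assumptions for illustration rather than the kernel's actual constants:

#include <stdio.h>

/* Illustrative constants; the real values live in the btrfs headers. */
#define SKETCH_NODESIZE   16384ULL   /* bytes per tree block (assumed)   */
#define SKETCH_MAX_LEVEL  8          /* worst-case tree height (assumed) */

/* Worst case for modifying existing items: COW one block per level. */
static unsigned long long calc_change_bytes(unsigned int num_items)
{
        return (unsigned long long)num_items * SKETCH_NODESIZE * SKETCH_MAX_LEVEL;
}

/* Worst case for inserting new items: COW plus a split at every level. */
static unsigned long long calc_insert_bytes(unsigned int num_items)
{
        return (unsigned long long)num_items * SKETCH_NODESIZE * SKETCH_MAX_LEVEL * 2;
}

int main(void)
{
        printf("change 4 items: %llu bytes\n", calc_change_bytes(4));
        printf("insert 4 items: %llu bytes\n", calc_insert_bytes(4));
        return 0;
}
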
H A D | delalloc-space.c | 23 * We call into btrfs_reserve_data_bytes() for the user request bytes that 24 * they wish to write. We make this reservation and add it to 25 * space_info->bytes_may_use. We set EXTENT_DELALLOC on the inode io_tree 27 * make a real allocation if we are pre-allocating or doing O_DIRECT. 30 * At writepages()/prealloc/O_DIRECT time we will call into 31 * btrfs_reserve_extent() for some part or all of this range of bytes. We 35 * may allocate a smaller on disk extent than we previously reserved. 46 * This is the simplest case, we haven't completed our operation and we know 47 * how much we reserved, we can simply call 60 * We keep track of two things on a per inode basis [all …]
|
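The flow in the delalloc-space.c excerpt is: reserve the user's bytes up front (they land in space_info->bytes_may_use), then back parts of the range with real extents at writepages()/prealloc/O_DIRECT time, possibly with smaller extents than were reserved. A toy accounting model of that two-step flow, with all names invented for illustration:

#include <assert.h>
#include <stdio.h>

/* Toy stand-in for the space accounting the comment describes. */
struct toy_space_info {
        unsigned long long bytes_may_use;  /* reserved but not yet allocated */
        unsigned long long bytes_reserved; /* backed by a chosen on-disk extent */
};

static void reserve_data(struct toy_space_info *s, unsigned long long n)
{
        s->bytes_may_use += n;              /* reservation step */
}

static void allocate_extent(struct toy_space_info *s, unsigned long long n)
{
        assert(s->bytes_may_use >= n);
        s->bytes_may_use -= n;              /* writepages()/prealloc/O_DIRECT step */
        s->bytes_reserved += n;
}

int main(void)
{
        struct toy_space_info s = { 0, 0 };

        reserve_data(&s, 1 << 20);          /* user asks to write 1 MiB          */
        allocate_extent(&s, 512 << 10);     /* on-disk extent may be smaller ... */
        allocate_extent(&s, 512 << 10);     /* ... so the range needs several    */
        printf("may_use=%llu reserved=%llu\n", s.bytes_may_use, s.bytes_reserved);
        return 0;
}
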
H A D | direct-io.c | 62 * We're concerned with the entire range that we're going to be in lock_extent_direct() 63 * doing DIO to, so we need to make sure there's no ordered in lock_extent_direct() 70 * We need to make sure there are no buffered pages in this in lock_extent_direct() 71 * range either, we could have raced between the invalidate in in lock_extent_direct() 90 * If we are doing a DIO read and the ordered extent we in lock_extent_direct() 91 * found is for a buffered write, we can not wait for it in lock_extent_direct() 92 * to complete and retry, because if we do so we can in lock_extent_direct() 99 * range and this range started (we unlock the ranges in lock_extent_direct() 112 * We could trigger writeback for this range (and wait in lock_extent_direct() 118 * ordered dio extent we created before but did not have in lock_extent_direct() [all …]
|
H A D | fiemap.c | 30 * - Cache the next entry to be emitted to the fiemap buffer, so that we can 35 * buffer is memory mapped to the fiemap target file, we don't deadlock 36 * during btrfs_page_mkwrite(). This is because during fiemap we are locking 40 * if the fiemap buffer is memory mapped to the file we are running fiemap 53 * the next file extent item we must search for in the inode's subvolume 59 * This matches struct fiemap_extent_info::fi_mapped_extents, we use it 61 * fiemap_fill_next_extent() because we buffer ready fiemap entries at 62 * the @entries array, and we want to stop as soon as we hit the max 86 * Ignore 1 (reached max entries) because we keep track of that in flush_fiemap_cache() 102 * And only when we fail to merge, the cached one will be submitted as [all …]
|
/linux/fs/xfs/ |
H A D | xfs_log_cil.c | 23 * recover, so we don't allow failure here. Also, we allocate in a context that 24 * we don't want to be issuing transactions from, so we need to tell the 27 * We don't reserve any space for the ticket - we are going to steal whatever 28 * space we require from transactions as they commit. To ensure we reserve all 29 * the space required, we need to set the current reservation of the ticket to 30 * zero so that we know to steal the initial transaction overhead from the 42 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc() 62 * We can't rely on just the log item being in the CIL, we have to check 80 * current sequence, we're in a new checkpoint. in xlog_item_in_current_chkpt() 140 * We're in the middle of switching cil contexts. Reset the in xlog_cil_push_pcp_aggregate() [all …]
|
H A D | xfs_log_priv.h | 74 * By covering, we mean changing the h_tail_lsn in the last on-disk 83 * might include space beyond the EOF. So if we just push the EOF a 91 * system is idle. We need two dummy transactions because the h_tail_lsn 103 * we are done covering previous transactions. 104 * NEED -- logging has occurred and we need a dummy transaction 106 * DONE -- we were in the NEED state and have committed a dummy 108 * NEED2 -- we detected that a dummy transaction has gone to the 110 * DONE2 -- we committed a dummy transaction when in the NEED2 state. 112 * There are two places where we switch states: 114 * 1.) In xfs_sync, when we detect an idle log and are in NEED or NEED2. [all …]
|
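A compact sketch of the covering state machine described in the xfs_log_priv.h excerpt: NEED and NEED2 advance to DONE and DONE2 when a dummy transaction is committed on an idle log, and any real logging activity sends the machine back to NEED. The enum and helper names are made up; only the transitions come from the comment:

#include <stdio.h>

enum cover_state { COVER_IDLE, COVER_NEED, COVER_DONE, COVER_NEED2, COVER_DONE2 };

/* Called when we notice the log is idle and may commit a dummy transaction. */
static enum cover_state cover_on_idle(enum cover_state s)
{
        switch (s) {
        case COVER_NEED:  return COVER_DONE;   /* committed first dummy  */
        case COVER_NEED2: return COVER_DONE2;  /* committed second dummy */
        default:          return s;
        }
}

/* Called when real logging activity happens: covering must start over. */
static enum cover_state cover_on_logging(enum cover_state s)
{
        (void)s;
        return COVER_NEED;
}

/* Once the first dummy has reached disk, DONE moves on to NEED2. */
static enum cover_state cover_on_dummy_on_disk(enum cover_state s)
{
        return s == COVER_DONE ? COVER_NEED2 : s;
}

int main(void)
{
        enum cover_state s = COVER_NEED;

        s = cover_on_idle(s);            /* NEED  -> DONE  */
        s = cover_on_dummy_on_disk(s);   /* DONE  -> NEED2 */
        s = cover_on_idle(s);            /* NEED2 -> DONE2 */
        printf("covered: %s\n", s == COVER_DONE2 ? "yes" : "no");
        return 0;
}
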
H A D | xfs_log.c | 77 * We need to make sure the buffer pointer returned is naturally aligned for the 78 * biggest basic data type we put into it. We have already accounted for this 81 * However, this padding does not get written into the log, and hence we have to 86 * We also add space for the xlog_op_header that describes this region in the 87 * log. This prepends the data region we return to the caller to copy their data 89 * is not 8 byte aligned, we have to be careful to ensure that we align the 90 * start of the buffer such that the region we return to the caller is 8 byte 171 * we have overrun available reservation space, return 0. The memory barrier 289 * path. Hence any lock will be globally hot if we take it unconditionally on 292 * As tickets are only ever moved on and off head->waiters under head->lock, we [all …]
|
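One way to picture the alignment concern in the xfs_log.c excerpt: the caller's data region sits behind an op header, so the padding needed to keep that region 8-byte aligned depends on where the copy starts. The header size and helper below are assumptions for illustration only:

#include <stdint.h>
#include <stdio.h>

#define TOY_OP_HDR_SIZE 12u   /* assumed size of the per-region op header */

/*
 * Given where the next copy into the log buffer would start, return how much
 * padding to add so the caller's data region (after the op header) lands on
 * an 8-byte boundary.
 */
static unsigned int toy_align_pad(uintptr_t copy_start)
{
        uintptr_t data_start = copy_start + TOY_OP_HDR_SIZE;

        return (unsigned int)((8 - (data_start & 7)) & 7);
}

int main(void)
{
        uintptr_t starts[] = { 0, 4, 12, 20 };

        for (unsigned int i = 0; i < 4; i++)
                printf("copy at %2lu -> pad %u\n",
                       (unsigned long)starts[i], toy_align_pad(starts[i]));
        return 0;
}
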
H A D | xfs_log_recover.c | 78 * Pass log block 0 since we don't have an addr yet, buffer will be in xlog_alloc_buffer() 88 * We do log I/O in units of log sectors (a power-of-2 multiple of the in xlog_alloc_buffer() 89 * basic block size), so we round up the requested size to accommodate in xlog_alloc_buffer() 97 * blocks (sector size 1). But otherwise we extend the buffer by one in xlog_alloc_buffer() 249 * h_fs_uuid is null, we assume this log was last mounted in xlog_header_check_mount() 328 * range of basic blocks we'll be examining. If that fails, in xlog_find_verify_cycle() 329 * try a smaller size. We need to be able to read at least in xlog_find_verify_cycle() 330 * a log sector, or we're out of luck. in xlog_find_verify_cycle() 385 * a good log record. Therefore, we subtract one to get the block number 387 * of blocks we would have read on a previous read. This happens when the [all …]
|
/linux/drivers/md/bcache/ |
H A D | journal.h | 9 * never spans two buckets. This means (not implemented yet) we can resize the 15 * We also keep some things in the journal header that are logically part of the 20 * rewritten when we want to move/wear level the main journal. 22 * Currently, we don't journal BTREE_REPLACE operations - this will hopefully be 25 * moving gc we work around it by flushing the btree to disk before updating the 35 * We track this by maintaining a refcount for every open journal entry, in a 38 * zero, we pop it off - thus, the size of the fifo tells us the number of open 41 * We take a refcount on a journal entry when we add some keys to a journal 42 * entry that we're going to insert (held by struct btree_op), and then when we 43 * insert those keys into the btree the btree write we're setting up takes a [all …]
|
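A userspace sketch of the open-entry tracking described in journal.h: one refcount per open journal entry, held in a FIFO, with fully released entries popped off the front so the FIFO's size equals the number of open entries. All structures and names here are invented stand-ins:

#include <assert.h>
#include <stdio.h>

#define MAX_OPEN 16

/* Toy FIFO of refcounts, one slot per open journal entry. */
struct toy_journal {
        unsigned int ref[MAX_OPEN];
        unsigned int head, tail;        /* pop at head, push at tail */
};

static unsigned int open_entries(const struct toy_journal *j)
{
        return j->tail - j->head;
}

static unsigned int journal_push(struct toy_journal *j)
{
        assert(open_entries(j) < MAX_OPEN);
        j->ref[j->tail % MAX_OPEN] = 0;
        return j->tail++;               /* index of the new open entry */
}

/* Take a ref when keys are added to an entry ... */
static void journal_get(struct toy_journal *j, unsigned int idx)
{
        j->ref[idx % MAX_OPEN]++;
}

/* ... and drop it once the btree write holding those keys has hit disk. */
static void journal_put(struct toy_journal *j, unsigned int idx)
{
        assert(j->ref[idx % MAX_OPEN] > 0);
        j->ref[idx % MAX_OPEN]--;
        /* Pop fully released entries off the front of the FIFO. */
        while (open_entries(j) && j->ref[j->head % MAX_OPEN] == 0)
                j->head++;
}

int main(void)
{
        struct toy_journal j = { .head = 0, .tail = 0 };
        unsigned int a = journal_push(&j), b = journal_push(&j);

        journal_get(&j, a);
        journal_get(&j, b);
        journal_put(&j, a);                              /* oldest entry closes  */
        printf("open entries: %u\n", open_entries(&j));  /* 1 */
        journal_put(&j, b);
        printf("open entries: %u\n", open_entries(&j));  /* 0 */
        return 0;
}
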
H A D | bset.h | 17 * We use two different functions for validating bkeys, bch_ptr_invalid and 27 * them on disk, just unnecessary work - so we filter them out when resorting 30 * We can't filter out stale keys when we're resorting, because garbage 32 * unless we're rewriting the btree node those stale keys still exist on disk. 34 * We also implement functions here for removing some number of sectors from the 44 * There could be many of them on disk, but we never allow there to be more than 45 * 4 in memory - we lazily resort as needed. 47 * We implement code here for creating and maintaining auxiliary search trees 48 * (described below) for searching an individual bset, and on top of that we 62 * Since keys are variable length, we can't use a binary search on a bset - we [all …]
|
/linux/net/ipv4/ |
H A D | tcp_vegas.c | 15 * o We do not change the loss detection or recovery mechanisms of 19 * only every-other RTT during slow start, we increase during 22 * we use the rate at which ACKs come back as the "actual" 24 * o To speed convergence to the right rate, we set the cwnd 25 * to achieve the right ("actual") rate when we exit slow start. 26 * o To filter out the noise caused by delayed ACKs, we use the 55 /* There are several situations when we must "re-start" Vegas: 60 * o when we send a packet and there is no outstanding 63 * In these circumstances we cannot do a Vegas calculation at the 64 * end of the first RTT, because any calculation we do is using [all …]
|
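The tcp_vegas.c excerpt is describing the classic Vegas idea: compare the expected rate (cwnd over the base RTT) against the actual rate inferred from returning ACKs, and nudge cwnd once per RTT. A toy version of that textbook rule, with assumed alpha/beta thresholds and no claim to match the kernel's fixed-point implementation:

#include <stdio.h>

#define VEGAS_ALPHA 2   /* "extra" packets queued that we tolerate (assumed) */
#define VEGAS_BETA  4   /* queueing level at which we back off (assumed)     */

/*
 * One per-RTT Vegas step.  rtt and base_rtt are in microseconds (base_rtt is
 * the minimum observed, so rtt >= base_rtt), cwnd is in packets.  diff
 * estimates how many packets we have sitting in router queues:
 * (expected - actual) * base_rtt = cwnd * (rtt - base_rtt) / rtt.
 */
static int vegas_update(int cwnd, int base_rtt, int rtt)
{
        int diff = cwnd * (rtt - base_rtt) / rtt;

        if (diff < VEGAS_ALPHA)
                cwnd++;                 /* path looks underused, speed up    */
        else if (diff > VEGAS_BETA && cwnd > 2)
                cwnd--;                 /* we are building a queue, back off */
        return cwnd;                    /* otherwise hold cwnd steady        */
}

int main(void)
{
        int cwnd = 10;

        cwnd = vegas_update(cwnd, 10000, 10500);   /* little queueing -> grow   */
        cwnd = vegas_update(cwnd, 10000, 20000);   /* heavy queueing  -> shrink */
        printf("cwnd = %d\n", cwnd);
        return 0;
}
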
/linux/arch/powerpc/mm/nohash/ |
H A D | tlb_low_64e.S | 91 /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */ 93 /* We do the user/kernel test for the PID here along with the RW test 95 /* We pre-test some combination of permissions to avoid double 98 * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE 103 * writeable, we will take a new fault later, but that should be 106 * We also move ESR_ST in _PAGE_DIRTY position 109 * MAS1 is preset for all we need except for TID that needs to 137 * We are entered with: 176 /* Now we build the MAS: 219 /* We need to check if it was an instruction miss */ [all …]
|
/linux/Documentation/filesystems/xfs/ |
H A D | xfs-delayed-logging-design.rst | 15 We begin with an overview of transactions in XFS, followed by describing how 16 transaction reservations are structured and accounted, and then move into how we 18 reservations bounds. At this point we need to explain how relogging works. With 113 individual modification is atomic, the chain is *not atomic*. If we crash half 140 complete, we can explicitly tag a transaction as synchronous. This will trigger 145 throughput to the IO latency limitations of the underlying storage. Instead, we 161 available to write the modification into the journal before we start making 164 log in the worst case. This means that if we are modifying a btree in the 165 transaction, we have to reserve enough space to record a full leaf-to-root split 166 of the btree. As such, the reservations are quite complex because we have to [all …]
|
/linux/fs/xfs/scrub/ |
H A D | agb_bitmap.c | 19 * We know that the btree query_all function starts at the left edge and walks 20 * towards the right edge of the tree. Therefore, we know that we can walk up 22 * to the first record/key in that block, we haven't seen this block before; 23 * and therefore we need to remember that we saw this block in the btree. 32 * the first btree record, we'll observe that bc_levels[0].ptr == 1, so we 33 * record that we saw block 1. Then we observe that bc_levels[1].ptr == 1, so 34 * we record block 4. The list is [1, 4]. 36 * For the second btree record, we see that bc_levels[0].ptr == 2, so we exit 39 * For the 101st btree record, we've moved onto leaf block 2. Now 40 * bc_levels[0].ptr == 1 again, so we record that we saw block 2. We see that [all …]
|
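A tiny sketch of the visit-tracking rule in the agb_bitmap.c comment: walking records left to right, a block is seen for the first time exactly when the cursor points at its first record/key, so we record blocks level by level until we hit a level whose pointer is not 1. The structures are invented stand-ins for the real btree cursor:

#include <stdio.h>

/* Invented stand-in for a btree cursor level: ptr is the 1-based position
 * within the block, blockno identifies the block at that level. */
struct toy_level {
        unsigned int ptr;
        unsigned int blockno;
};

/* Record every block we are visiting for the first time. */
static void record_new_blocks(const struct toy_level *levels, int nlevels)
{
        for (int l = 0; l < nlevels; l++) {
                if (levels[l].ptr != 1)
                        break;          /* seen this block (and all above it) before */
                printf("saw block %u (level %d)\n", levels[l].blockno, l);
        }
}

int main(void)
{
        /* First record of the tree: leaf block 1 under node block 4. */
        struct toy_level first[]  = { { 1, 1 }, { 1, 4 } };
        /* Second record: still leaf block 1, so nothing new is recorded. */
        struct toy_level second[] = { { 2, 1 }, { 1, 4 } };

        record_new_blocks(first, 2);    /* prints blocks 1 and 4 */
        record_new_blocks(second, 2);   /* prints nothing        */
        return 0;
}
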
/linux/fs/bcachefs/ |
H A D | bset.h | 21 * We use two different functions for validating bkeys, bkey_invalid and 27 * them on disk, just unnecessary work - so we filter them out when resorting 30 * We can't filter out stale keys when we're resorting, because garbage 32 * unless we're rewriting the btree node those stale keys still exist on disk. 34 * We also implement functions here for removing some number of sectors from the 44 * There could be many of them on disk, but we never allow there to be more than 45 * 4 in memory - we lazily resort as needed. 47 * We implement code here for creating and maintaining auxiliary search trees 48 * (described below) for searching an individual bset, and on top of that we 62 * Since keys are variable length, we can't use a binary search on a bset - we [all …]
|
/linux/drivers/gpu/drm/i915/ |
H A D | i915_request.c | 72 * We could extend the life of a context to beyond that of all in i915_fence_get_timeline_name() 74 * or we just give them a false name. Since in i915_fence_get_timeline_name() 129 * freed when the slab cache itself is freed, and so we would get in i915_fence_release() 138 * We do not hold a reference to the engine here and so have to be in i915_fence_release() 139 * very careful in what rq->engine we poke. The virtual engine is in i915_fence_release() 140 * referenced via the rq->context and we released that ref during in i915_fence_release() 141 * i915_request_retire(), ergo we must not dereference a virtual in i915_fence_release() 142 * engine here. Not that we would want to, as the only consumer of in i915_fence_release() 147 * we know that it will have been processed by the HW and will in i915_fence_release() 153 * power-of-two we assume that rq->engine may still be a virtual in i915_fence_release() [all …]
|
/linux/drivers/misc/vmw_vmci/ |
H A D | vmci_route.c | 33 * which comes from the VMX, so we know it is coming from a in vmci_route() 36 * To avoid inconsistencies, test these once. We will test in vmci_route() 37 * them again when we do the actual send to ensure that we do in vmci_route() 49 * If this message already came from a guest then we in vmci_route() 57 * We must be acting as a guest in order to send to in vmci_route() 63 /* And we cannot send if the source is the host context. */ in vmci_route() 71 * then they probably mean ANY, in which case we in vmci_route() 87 * If it is not from a guest but we are acting as a in vmci_route() 88 * guest, then we need to send it down to the host. in vmci_route() 89 * Note that if we are also acting as a host then this in vmci_route() [all …]
|
/linux/Documentation/filesystems/ |
H A D | directory-locking.rst | 10 When taking the i_rwsem on multiple non-directory objects, we 11 always acquire the locks in order by increasing address. We'll call 22 * lock the directory we are accessing (shared) 26 * lock the directory we are accessing (exclusive) 73 in its own right; it may happen as part of lookup. We speak of the 74 operations on directory trees, but we obviously do not have the full 75 picture of those - especially for network filesystems. What we have 77 Trees grow as we do operations; memory pressure prunes them. Normally 78 that's not a problem, but there is a nasty twist - what should we do 83 possibility that directory we see in one place gets moved by the server [all …]
|
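The first rule quoted from directory-locking.rst, taking the i_rwsem on multiple non-directory objects in order of increasing address, can be illustrated with plain mutexes standing in for i_rwsem. The helper below is invented, but whichever order callers pass the two objects, the locks are always taken in the same address order, so two tasks locking the same pair cannot deadlock:

#include <pthread.h>
#include <stdio.h>

/* Toy inode: only the lock matters for this sketch. */
struct toy_inode {
        pthread_mutex_t i_rwsem;
};

/* Lock two distinct non-directory "inodes" in order of increasing address. */
static void lock_two_nondirs(struct toy_inode *a, struct toy_inode *b)
{
        if (a > b) {
                struct toy_inode *tmp = a;
                a = b;
                b = tmp;
        }
        pthread_mutex_lock(&a->i_rwsem);   /* lower address first   */
        pthread_mutex_lock(&b->i_rwsem);   /* higher address second */
}

static void unlock_two(struct toy_inode *a, struct toy_inode *b)
{
        pthread_mutex_unlock(&a->i_rwsem);
        pthread_mutex_unlock(&b->i_rwsem);
}

int main(void)
{
        struct toy_inode x = { PTHREAD_MUTEX_INITIALIZER };
        struct toy_inode y = { PTHREAD_MUTEX_INITIALIZER };

        /* Both call orders acquire the locks in the same (address) order. */
        lock_two_nondirs(&x, &y);
        unlock_two(&x, &y);
        lock_two_nondirs(&y, &x);
        unlock_two(&x, &y);
        printf("no deadlock\n");
        return 0;
}
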
H A D | idmappings.rst | 23 on, we will always prefix ids with ``u`` or ``k`` to make it clear whether 24 we're talking about an id in the upper or lower idmapset. 42 that make it easier to understand how we can translate between idmappings. For 43 example, we know that the inverse idmapping is an order isomorphism as well:: 49 Given that we are dealing with order isomorphisms plus the fact that we're 50 dealing with subsets we can embed idmappings into each other, i.e. we can 51 sensibly translate between different idmappings. For example, assume we've been 61 Because we're dealing with order isomorphic subsets it is meaningful to ask 64 mapping ``k11000`` up to ``u1000``. Afterwards, we can map ``u1000`` down using 69 If we were given the same task for the following three idmappings:: [all …]
|
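A compact model of the up/down translation idmappings.rst describes, using its u<first>:k<first>:r<range> notation: k11000 maps up through u0:k10000:r10000 to u1000, which can then be mapped down through a second idmapping. The struct, the helpers, and the second idmapping are invented for illustration:

#include <stdio.h>

/* An idmapping u<ufirst>:k<kfirst>:r<range>, e.g. u0:k10000:r10000. */
struct idmap {
        unsigned int ufirst;
        unsigned int kfirst;
        unsigned int range;
};

/* Map a kernel id up into the upper (u) idmapset; returns -1 if unmapped. */
static long map_id_up(const struct idmap *m, unsigned int kid)
{
        if (kid < m->kfirst || kid >= m->kfirst + m->range)
                return -1;
        return m->ufirst + (kid - m->kfirst);
}

/* Map an upper id down into the kernel (k) idmapset; returns -1 if unmapped. */
static long map_id_down(const struct idmap *m, unsigned int uid)
{
        if (uid < m->ufirst || uid >= m->ufirst + m->range)
                return -1;
        return m->kfirst + (uid - m->ufirst);
}

int main(void)
{
        struct idmap first  = { 0, 10000, 10000 };     /* u0:k10000:r10000          */
        struct idmap second = { 0, 20000, 10000 };     /* u0:k20000:r10000 (made up) */

        long u = map_id_up(&first, 11000);             /* k11000 -> u1000  */
        long k = map_id_down(&second, (unsigned)u);    /* u1000  -> k21000 */

        printf("k11000 -> u%ld -> k%ld\n", u, k);
        return 0;
}
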
/linux/drivers/gpu/drm/i915/gt/ |
H A D | intel_execlists_submission.c | 24 * shouldn't we just need a set of those per engine command streamer? This is 35 * Regarding the creation of contexts, we have: 43 * like before) we need: 50 * more complex, because we don't know at creation time which engine is going 51 * to use them. To handle this, we have implemented a deferred creation of LR 55 * gets populated for a given engine once we receive an execbuffer. If later 56 * on we receive another execbuffer ioctl for the same context but a different 57 * engine, we allocate/populate a new ringbuffer and context backing object and 61 * only allowed with the render ring, we can allocate & populate them right 96 * we use a NULL second context) or the first two requests have unique IDs. [all …]
|
/linux/kernel/irq/ |
H A D | spurious.c | 26 * We wait here for a poller to finish. 28 * If the poll runs on this CPU, then we yell loudly and return 32 * We wait until the poller is done and then recheck disabled and 33 * action (about to be disabled). Only if it's still active, we return 86 * All handlers must agree on IRQF_SHARED, so we test just the in try_one_irq() 209 * We need to take desc->lock here. note_interrupt() is called in __report_bad_irq() 210 * w/o desc->lock held, but IRQ_PROGRESS set. We might race in __report_bad_irq() 244 /* We didn't actually handle the IRQ - see if it was misrouted? */ in try_misrouted_irq() 249 * But for 'irqfixup == 2' we also do it for handled interrupts if in try_misrouted_irq() 260 * Since we don't get the descriptor lock, "action" can in try_misrouted_irq() [all …]
|
/linux/drivers/scsi/aic7xxx/ |
H A D | aic79xx.seq | 85 * If we have completions stalled waiting for the qfreeze 109 * ENSELO is cleared by a SELDO, so we must test for SELDO 149 * We have received good status for this transaction. There may 169 * Since this status did not consume a FIFO, we have to 170 * be a bit more diligent in how we check for FIFOs pertaining 178 * count in the SCB. In this case, we allow the routine servicing 183 * we detect case 1, we will properly defer the post of the SCB 222 * bad SCSI status (currently only for underruns), we 223 * queue the SCB for normal completion. Otherwise, we 258 * If we have relatively few commands outstanding, don't [all …]
|
/linux/fs/jbd2/ |
H A D | transaction.c | 70 * have an existing running transaction: we only make a new transaction 71 * once we have started to commit the old one). 74 * The journal MUST be locked. We don't perform atomic mallocs on the 75 * new transaction and we can't block without protecting against other 179 * We don't call jbd2_might_wait_for_commit() here as there's no in wait_transaction_switching() 202 * Wait until we can add credits for handle to the running transaction. Called 204 * transaction. Returns 1 if we had to wait, j_state_lock is dropped, and 208 * value, we need to fake out sparse so it doesn't complain about a 233 * potential buffers requested by this operation, we need to in add_transaction_credits() 240 * then start to commit it: we can then go back and in add_transaction_credits() [all …]
|
/linux/arch/openrisc/mm/ |
H A D | fault.c | 59 * We fault-in kernel-space virtual memory on-demand. The in do_page_fault() 62 * NOTE! We MUST NOT take any locks for this case. We may in do_page_fault() 68 * mappings we don't have to walk all processes pgdirs and in do_page_fault() 69 * add the high mappings all at once. Instead we do it as they in do_page_fault() 82 /* If exceptions were enabled, we can reenable them here */ in do_page_fault() 100 * If we're in an interrupt or have no user in do_page_fault() 101 * context, we must not take the fault.. in do_page_fault() 125 * we get page-aligned addresses so we can only check in do_page_fault() 126 * if we're within a page from usp, but that might be in do_page_fault() 137 * Ok, we have a good vm_area for this memory access, so in do_page_fault() [all …]
|
/linux/Documentation/dev-tools/kunit/ |
H A D | run_wrapper.rst | 7 We can either run KUnit tests using kunit_tool or can run tests 10 As long as we can build the kernel, we can run KUnit. 21 We should see the following: 29 We may want to use the following options: 44 kunit_tool. This is useful if we have several different groups of 45 tests we want to run independently, or if we want to use pre-defined 64 If we want to run a specific set of tests (rather than those listed 65 in the KUnit ``defconfig``), we can provide Kconfig options in the 83 We can then add any other Kconfig options. For example: 90 set in the kernel ``.config`` before running the tests. It warns if we [all …]
|