/linux/fs/xfs/

  xfs_log_cil.c
      23: * recover, so we don't allow failure here. Also, we allocate in a context that
      24: * we don't want to be issuing transactions from, so we need to tell the
      27: * We don't reserve any space for the ticket - we are going to steal whatever
      28: * space we require from transactions as they commit. To ensure we reserve all
      29: * the space required, we need to set the current reservation of the ticket to
      30: * zero so that we know to steal the initial transaction overhead from the
      42: * set the current reservation to zero so we know to steal the basic   [in xlog_cil_ticket_alloc()]
      62: * We can't rely on just the log item being in the CIL, we have to check
      80: * current sequence, we're in a new checkpoint.   [in xlog_item_in_current_chkpt()]
      140: * We're in the middle of switching cil contexts. Reset the   [in xlog_cil_push_pcp_aggregate()]
      [all …]
|
  xfs_log.c
      122: * we have overrun available reservation space, return 0. The memory barrier
      240: * path. Hence any lock will be globally hot if we take it unconditionally on
      243: * As tickets are only ever moved on and off head->waiters under head->lock, we
      244: * only need to take that lock if we are going to add the ticket to the queue
      245: * and sleep. We can avoid taking the lock if the ticket was never added to
      246: * head->waiters because the t_queue list head will be empty and we hold the
      263: * logspace before us. Wake up the first waiters, if we do not wake   [in xlog_grant_head_check()]
      325: * This is a new transaction on the ticket, so we need to change the   [in xfs_log_regrant()]
      327: * the log. Just add one to the existing tid so that we can see chains   [in xfs_log_regrant()]
      348: * If we are failing, make sure the ticket doesn't have any current   [in xfs_log_regrant()]
      [all …]
|
  xfs_log_recover.c
      78: * Pass log block 0 since we don't have an addr yet, buffer will be   [in xlog_alloc_buffer()]
      88: * We do log I/O in units of log sectors (a power-of-2 multiple of the   [in xlog_alloc_buffer()]
      89: * basic block size), so we round up the requested size to accommodate   [in xlog_alloc_buffer()]
      97: * blocks (sector size 1). But otherwise we extend the buffer by one   [in xlog_alloc_buffer()]
      249: * h_fs_uuid is null, we assume this log was last mounted   [in xlog_header_check_mount()]
      328: * range of basic blocks we'll be examining. If that fails,   [in xlog_find_verify_cycle()]
      329: * try a smaller size. We need to be able to read at least   [in xlog_find_verify_cycle()]
      330: * a log sector, or we're out of luck.   [in xlog_find_verify_cycle()]
      385: * a good log record. Therefore, we subtract one to get the block number
      387: * of blocks we would have read on a previous read. This happens when the
      [all …]
|
/linux/fs/btrfs/

  space-info.c
      29: * 1) space_info. This is the ultimate arbiter of how much space we can use.
      32: * reservations we care about total_bytes - SUM(space_info->bytes_) when
      37: * metadata reservation we have. You can see the comment in the block_rsv
      41: * 3) btrfs_calc*_size. These are the worst case calculations we used based
      42: * on the number of items we will want to modify. We have one for changing
      43: * items, and one for inserting new items. Generally we use these helpers to
      49: * We call into either btrfs_reserve_data_bytes() or
      50: * btrfs_reserve_metadata_bytes(), depending on which we're looking for, with
      51: * num_bytes we want to reserve.
      68: * Assume we are unable to simply make the reservation because we do not have
      [all …]
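The arbiter check the space-info.c excerpt describes (total_bytes minus the sum of the bytes_* counters must cover the request) reduces to a one-line predicate. A minimal Python sketch with illustrative names, not btrfs's actual code:

```python
def can_reserve(total_bytes, bytes_counters, num_bytes):
    """space_info-style arbiter: a reservation fits only if the space left
    after summing every bytes_* counter still covers the request."""
    return total_bytes - sum(bytes_counters) >= num_bytes
```

When this returns False, btrfs falls into the flushing/ticketing path the comment goes on to describe.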
|
  fiemap.c
      30: * - Cache the next entry to be emitted to the fiemap buffer, so that we can
      35: * buffer is memory mapped to the fiemap target file, we don't deadlock
      36: * during btrfs_page_mkwrite(). This is because during fiemap we are locking
      40: * if the fiemap buffer is memory mapped to the file we are running fiemap
      53: * the next file extent item we must search for in the inode's subvolume
      59: * This matches struct fiemap_extent_info::fi_mapped_extents, we use it
      61: * fiemap_fill_next_extent() because we buffer ready fiemap entries at
      62: * the @entries array, and we want to stop as soon as we hit the max
      86: * Ignore 1 (reached max entries) because we keep track of that   [in flush_fiemap_cache()]
      102: * And only when we fail to merge, cached one will be submitted as
      [all …]
|
  locking.h
      23: * We are limited in number of subclasses by MAX_LOCKDEP_SUBCLASSES, which at
      24: * the time of this patch is 8, which is how many we use. Keep this in mind if
      31: * When we COW a block we are holding the lock on the original block,
      33: * when we lock the newly allocated COW'd block. Handle this by having
      39: * Oftentimes we need to lock adjacent nodes on the same level while
      40: * still holding the lock on the original node we searched to, such as
      43: * Because of this we need to indicate to lockdep that this is
      51: * When splitting we will be holding a lock on the left/right node when
      52: * we need to cow that node, thus we need a new set of subclasses for
      59: * When splitting we may push nodes to the left or right, but still use
      [all …]
|
/linux/net/ipv4/

  tcp_vegas.c
      15: * o We do not change the loss detection or recovery mechanisms of
      19: * only every-other RTT during slow start, we increase during
      22: * we use the rate at which ACKs come back as the "actual"
      24: * o To speed convergence to the right rate, we set the cwnd
      25: * to achieve the right ("actual") rate when we exit slow start.
      26: * o To filter out the noise caused by delayed ACKs, we use the
      55: /* There are several situations when we must "re-start" Vegas:
      60: * o when we send a packet and there is no outstanding
      63: * In these circumstances we cannot do a Vegas calculation at the
      64: * end of the first RTT, because any calculation we do is using
      [all …]
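The "expected vs. actual rate" comparison these tcp_vegas.c comments refer to can be sketched in a few lines. This is an illustrative float-based Python model of the classic Vegas update (the kernel uses scaled integer arithmetic, and the alpha/beta thresholds here are assumed defaults, not taken from the excerpt):

```python
def vegas_cwnd_update(cwnd, base_rtt, rtt, alpha=2, beta=4):
    """One congestion-avoidance step of a Vegas-style estimator.

    base_rtt is the minimum RTT seen (an uncongested path), rtt the
    latest measurement; both in the same time unit.
    """
    expected = cwnd / base_rtt          # rate if no queueing at all
    actual = cwnd / rtt                 # rate implied by the measured RTT
    diff = (expected - actual) * base_rtt  # estimated segments queued in the net
    if diff < alpha:
        return cwnd + 1                 # network underused: grow
    if diff > beta:
        return cwnd - 1                 # queues building: back off
    return cwnd                         # in the sweet spot: hold
```

With rtt == base_rtt the estimator sees no queueing and grows the window; a doubled RTT signals queue build-up and shrinks it.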
|
/linux/arch/powerpc/mm/nohash/

  tlb_low_64e.S
      91: /* We need _PAGE_PRESENT and _PAGE_ACCESSED set */
      93: /* We do the user/kernel test for the PID here along with the RW test
      95: /* We pre-test some combination of permissions to avoid double
      98: * We move the ESR:ST bit into the position of _PAGE_BAP_SW in the PTE
      103: * writeable, we will take a new fault later, but that should be
      106: * We also move ESR_ST in _PAGE_DIRTY position
      109: * MAS1 is preset for all we need except for TID that needs to
      137: * We are entered with:
      176: /* Now we build the MAS:
      219: /* We need to check if it was an instruction miss */
      [all …]
|
/linux/drivers/usb/dwc2/

  hcd_queue.c
      32: /* If we get a NAK, wait this long before retrying */
      121: * @num_bits: The number of bits we need per period we want to reserve
      123: * @interval: How often we need to be scheduled for the reservation this
      127: * the interval or we return failure right away.
      128: * @only_one_period: Normally we'll allow picking a start anywhere within the
      129: * first interval, since we can still make all repetition
      131: * here then we'll return failure if we can't fit within
      134: * The idea here is that we want to schedule time for repeating events that all
      139: * To keep things "simple", we'll represent our schedule with a bitmap that
      141: * but does mean that we need to handle things specially (and non-ideally) if
      [all …]
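The bitmap-based periodic reservation these hcd_queue.c comments describe can be modeled briefly: pick a start offset inside the first interval such that the same run of bits is free in every repetition. A simplified Python sketch (illustrative only; the real scheduler handles spill-over and partial-interval cases this version does not):

```python
def find_periodic_slot(schedule, interval, num_bits):
    """schedule: list of bools (True = slot taken), length a multiple of interval.

    Return a start offset in the first interval such that num_bits
    consecutive slots are free in *every* repetition, or None on failure.
    """
    periods = len(schedule) // interval
    for start in range(interval - num_bits + 1):
        if all(not schedule[p * interval + start + b]
               for p in range(periods)
               for b in range(num_bits)):
            return start
    return None
```

A caller reserving the slot would then set those same bits in every period, so later reservations see them as taken.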
|
/linux/drivers/misc/vmw_vmci/

  vmci_route.c
      33: * which comes from the VMX, so we know it is coming from a   [in vmci_route()]
      36: * To avoid inconsistencies, test these once. We will test   [in vmci_route()]
      37: * them again when we do the actual send to ensure that we do   [in vmci_route()]
      49: * If this message already came from a guest then we   [in vmci_route()]
      57: * We must be acting as a guest in order to send to   [in vmci_route()]
      63: /* And we cannot send if the source is the host context. */   [in vmci_route()]
      71: * then they probably mean ANY, in which case we   [in vmci_route()]
      87: * If it is not from a guest but we are acting as a   [in vmci_route()]
      88: * guest, then we need to send it down to the host.   [in vmci_route()]
      89: * Note that if we are also acting as a host then this   [in vmci_route()]
      [all …]
|
/linux/fs/xfs/scrub/

  alloc_repair.c
      48: * AG. Therefore, we can recreate the free extent records in an AG by looking
      60: * walking the rmapbt records, we create a second bitmap @not_allocbt_blocks to
      72: * The OWN_AG bitmap itself isn't needed after this point, so what we really do
      83: * written to the new btree indices. We reconstruct both bnobt and cntbt at
      84: * the same time since we've already done all the work.
      86: * We use the prefix 'xrep_abt' here because we regenerate both free space
      118: * Next block we anticipate seeing in the rmap records. If the next
      119: * rmap record is greater than next_agbno, we have found unused space.
      126: /* Longest free extent we found in the AG. */
      139: * Make sure the busy extent list is clear because we can't put extents   [in xrep_setup_ag_allocbt()]
      [all …]
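The core idea in the alloc_repair.c excerpt — if the next rmap record starts past next_agbno, the gap is free space — is a gap-scan over sorted allocated extents. A hedged Python sketch of that reconstruction step (simplified: real rmap records can overlap and carry owners; here they are plain sorted (start, len) pairs):

```python
def free_extents(rmap_records, ag_len):
    """Recover free-space extents as the gaps between allocated extents.

    rmap_records: sorted (start, length) pairs of allocated space.
    Returns (start, length) pairs of free space within [0, ag_len).
    """
    free = []
    next_agbno = 0                      # next block we anticipate seeing
    for start, length in rmap_records:
        if start > next_agbno:          # record starts past expectation:
            free.append((next_agbno, start - next_agbno))  # found a gap
        next_agbno = max(next_agbno, start + length)
    if next_agbno < ag_len:             # trailing free space at AG end
        free.append((next_agbno, ag_len - next_agbno))
    return free
```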
|
  fscounters.c
      32: * The basics of filesystem summary counter checking are that we iterate the
      35: * Then we compare what we computed against the in-core counters.
      38: * While we /could/ freeze the filesystem and scramble around the AGs counting
      39: * the free blocks, in practice we prefer not to do that for a scan because
      40: * freezing is costly. To get around this, we added a per-cpu counter of the
      41: * delalloc reservations so that we can rotor around the AGs relatively
      42: * quickly, and we allow the counts to be slightly off because we're not taking
      43: * any locks while we do this.
      45: * So the first thing we do is warm up the buffer cache in the setup routine by
      48: * structures as quickly as it can. We snapshot the percpu counters before and
      [all …]
|
/linux/Documentation/filesystems/

  directory-locking.rst
      10: When taking the i_rwsem on multiple non-directory objects, we
      11: always acquire the locks in order by increasing address. We'll call
      22: * lock the directory we are accessing (shared)
      26: * lock the directory we are accessing (exclusive)
      73: in its own right; it may happen as part of lookup. We speak of the
      74: operations on directory trees, but we obviously do not have the full
      75: picture of those - especially for network filesystems. What we have
      77: Trees grow as we do operations; memory pressure prunes them. Normally
      78: that's not a problem, but there is a nasty twist - what should we do
      83: possibility that directory we see in one place gets moved by the server
      [all …]
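The address-ordered locking rule quoted from directory-locking.rst (lines 10-11) is a classic deadlock-avoidance discipline: every task takes the locks in the same global order, so no cycle of waiters can form. A small Python illustration using object identity as a stand-in for kernel addresses (illustrative model, not VFS code):

```python
def lock_order(objects):
    """Order in which to take i_rwsem on multiple non-directory objects:
    always by increasing (simulated) address.  Two tasks locking any
    subset of the same objects then agree on the order, so neither can
    hold one lock while waiting for a lock the other holds first."""
    return sorted(objects, key=id)
```

The key property is that the result depends only on the set of objects, never on the order the caller happened to discover them in.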
|
  idmappings.rst
      23: on, we will always prefix ids with ``u`` or ``k`` to make it clear whether
      24: we're talking about an id in the upper or lower idmapset.
      42: that make it easier to understand how we can translate between idmappings. For
      43: example, we know that the inverse idmapping is an order isomorphism as well::
      49: Given that we are dealing with order isomorphisms plus the fact that we're
      50: dealing with subsets we can embed idmappings into each other, i.e. we can
      51: sensibly translate between different idmappings. For example, assume we've been
      61: Because we're dealing with order isomorphic subsets it is meaningful to ask
      64: mapping ``k11000`` up to ``u1000``. Afterwards, we can map ``u1000`` down using
      69: If we were given the same task for the following three idmappings::
      [all …]
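The crossmapping the idmappings.rst excerpt walks through (map ``k11000`` up to ``u1000``, then map ``u1000`` down through another idmapping) is two range translations chained together. A minimal Python sketch, modeling an idmapping as a ``(u_first, k_first, count)`` triple as the document's ``uX:kY:rZ`` notation suggests:

```python
def map_id_up(idmap, kid):
    """Translate a lower (kernel) id into the upper idmapset; None if unmapped."""
    u_first, k_first, count = idmap
    if k_first <= kid < k_first + count:
        return u_first + (kid - k_first)
    return None

def map_id_down(idmap, uid):
    """Translate an upper id into the lower (kernel) idmapset; None if unmapped."""
    u_first, k_first, count = idmap
    if u_first <= uid < u_first + count:
        return k_first + (uid - u_first)
    return None

def crossmap(kid, from_map, to_map):
    """Embed one idmapping in another: map up through the first,
    then down through the second."""
    uid = map_id_up(from_map, kid)
    return None if uid is None else map_id_down(to_map, uid)
```

With ``u0:k10000:r10000``, ``k11000`` maps up to ``u1000`` exactly as the excerpt says; crossmapping into ``u0:k20000:r10000`` then lands on ``k21000``.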
|
/linux/drivers/gpu/drm/i915/gt/

  intel_execlists_submission.c
      24: * shouldn't we just need a set of those per engine command streamer? This is
      35: * Regarding the creation of contexts, we have:
      43: * like before) we need:
      50: * more complex, because we don't know at creation time which engine is going
      51: * to use them. To handle this, we have implemented a deferred creation of LR
      55: * gets populated for a given engine once we receive an execbuffer. If later
      56: * on we receive another execbuffer ioctl for the same context but a different
      57: * engine, we allocate/populate a new ringbuffer and context backing object and
      61: * only allowed with the render ring, we can allocate & populate them right
      96: * we use a NULL second context) or the first two requests have unique IDs.
      [all …]
|
/linux/arch/openrisc/mm/

  fault.c
      59: * We fault-in kernel-space virtual memory on-demand. The   [in do_page_fault()]
      62: * NOTE! We MUST NOT take any locks for this case. We may   [in do_page_fault()]
      68: * mappings we don't have to walk all processes pgdirs and   [in do_page_fault()]
      69: * add the high mappings all at once. Instead we do it as they   [in do_page_fault()]
      82: /* If exceptions were enabled, we can reenable them here */   [in do_page_fault()]
      100: * If we're in an interrupt or have no user   [in do_page_fault()]
      101: * context, we must not take the fault..   [in do_page_fault()]
      125: * we get page-aligned addresses so we can only check   [in do_page_fault()]
      126: * if we're within a page from usp, but that might be   [in do_page_fault()]
      137: * Ok, we have a good vm_area for this memory access, so   [in do_page_fault()]
      [all …]
|
/linux/drivers/scsi/aic7xxx/

  aic79xx.seq
      85: * If we have completions stalled waiting for the qfreeze
      109: * ENSELO is cleared by a SELDO, so we must test for SELDO
      149: * We have received good status for this transaction. There may
      169: * Since this status did not consume a FIFO, we have to
      170: * be a bit more diligent in how we check for FIFOs pertaining
      178: * count in the SCB. In this case, we allow the routine servicing
      183: * we detect case 1, we will properly defer the post of the SCB
      222: * bad SCSI status (currently only for underruns), we
      223: * queue the SCB for normal completion. Otherwise, we
      258: * If we have relatively few commands outstanding, don't
      [all …]
|
/linux/Documentation/driver-api/thermal/

  cpu-idle-cooling.rst
      25: because of the OPP density, we can only choose an OPP with a power
      35: If we can remove the static and the dynamic leakage for a specific
      38: injection period, we can mitigate the temperature by modulating the
      47: At a specific OPP, we can assume that injecting idle cycle on all CPUs
      49: idle state target residency, we lead to dropping the static and the
      69: We use a fixed duration of idle injection that gives an acceptable
      132: - It is less than or equal to the latency we tolerate when the
      134: user experience, reactivity vs performance trade off we want. This
      137: - It is greater than the idle state's target residency we want to go
      138: for thermal mitigation, otherwise we end up consuming more energy.
      [all …]
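The modulation the cpu-idle-cooling.rst excerpt describes — a fixed idle duration whose share of the injection period sets the mitigation ratio — is simple arithmetic: if ratio = idle / (idle + running), then running = idle * (100 - ratio) / ratio. A worked Python sketch (integer microseconds, illustrative helper name):

```python
def running_duration_us(idle_duration_us, ratio_pct):
    """Running time per period so that the fixed idle duration makes up
    ratio_pct percent of the period:
        ratio = idle / (idle + running)
     => running = idle * (100 - ratio) / ratio
    """
    return idle_duration_us * (100 - ratio_pct) // ratio_pct
```

At a 50% ratio the running slice equals the idle slice; lowering the ratio to 25% stretches the running slice to three times the idle duration.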
|
/linux/fs/jbd2/

  transaction.c
      70: * have an existing running transaction: we only make a new transaction
      71: * once we have started to commit the old one).
      74: * The journal MUST be locked. We don't perform atomic mallocs on the
      75: * new transaction and we can't block without protecting against other
      175: * We don't call jbd2_might_wait_for_commit() here as there's no   [in wait_transaction_switching()]
      198: * Wait until we can add credits for handle to the running transaction. Called
      200: * transaction. Returns 1 if we had to wait, j_state_lock is dropped, and
      204: * value, we need to fake out sparse so it doesn't complain about a
      229: * potential buffers requested by this operation, we need to   [in add_transaction_credits()]
      236: * then start to commit it: we can then go back and   [in add_transaction_credits()]
      [all …]
|
/linux/drivers/md/bcache/

  bcache.h
      29: * "cached" data is always dirty. The end result is that we get thin
      38: * operation all of our available space will be allocated. Thus, we need an
      39: * efficient way of deleting things from the cache so we can write new things to
      42: * To do this, we first divide the cache device up into buckets. A bucket is the
      51: * The priority is used to implement an LRU. We reset a bucket's priority when
      52: * we allocate it or on cache it, and every so often we decrement the priority
      59: * we have to do is increment its gen (and write its new gen to disk; we batch
      62: * Bcache is entirely COW - we never write twice to a bucket, even buckets that
      110: * Our unit of allocation is a bucket, and we can't arbitrarily allocate and
      113: * (If buckets are really big we'll only use part of the bucket for a btree node
      [all …]
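The generation trick in the bcache.h excerpt (line 59) is worth spelling out: pointers into a bucket embed the bucket's generation, so invalidating the bucket is a single increment that makes every outstanding pointer stale at once. A toy Python model of the idea (illustrative, not bcache's on-disk format):

```python
class Bucket:
    """A cache bucket; reusing it only requires bumping its generation."""
    def __init__(self):
        self.gen = 0

def make_pointer(bucket):
    # A pointer embeds the bucket generation it was created against.
    return (bucket, bucket.gen)

def invalidate(bucket):
    # Incrementing gen invalidates every existing pointer to this bucket
    # in O(1); no per-pointer bookkeeping is needed.
    bucket.gen += 1

def ptr_stale(ptr):
    bucket, gen = ptr
    return gen != bucket.gen
```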
|
/linux/Documentation/arch/powerpc/

  pci_iov_resource_on_powernv.rst
      40: The following section provides a rough description of what we have on P8
      52: For DMA, MSIs and inbound PCIe error messages, we have a table (in
      55: We call this the RTT.
      57: - For DMA we then provide an entire address space for each PE that can
      63: - For MSIs, we have two windows in the address space (one at the top of
      87: 32-bit PCIe accesses. We configure that window at boot from FW and
      91: reserved for MSIs but this is not a problem at this point; we just
      93: ignores that however and will forward in that space if we try).
      100: Now, this is the "main" window we use in Linux today (excluding
      101: SR-IOV). We basically use the trick of forcing the bridge MMIO windows
      [all …]
|
/linux/scripts/

  generate_builtin_ranges.awk
      12: # If we have seen this object before, return information from the cache.
      36: # name (e.g. core). We check the associated module file name, and if
      55: # We use a modified absolute start address (soff + base) as index because we
      58: # So, we use (addr << 1) + 1 to allow a possible anchor record to be placed at
      75: # and we record the object name "crypto/lzo-rle".
      89: # We collect the base address of the section in order to convert all addresses
      92: # We collect the address of the anchor (or first symbol in the section if there
      96: # We collect the start address of any sub-section (section included in the top
      108: # which format we ar
      [all …]
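The ``(addr << 1) + 1`` indexing trick quoted from generate_builtin_ranges.awk (line 58) shifts every symbol key left by one bit so that an anchor record for the same address can occupy the even slot ``addr << 1`` and sort strictly before it. A two-line Python restatement of the key function (names are illustrative):

```python
def sort_key(addr, is_anchor=False):
    """Modified address index: symbols use (addr << 1) + 1 so an anchor
    record at the same address can be placed at (addr << 1), guaranteeing
    the anchor sorts immediately before the first symbol there."""
    return (addr << 1) + (0 if is_anchor else 1)
```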
/linux/arch/powerpc/kvm/

  book3s_hv_rm_xics.c
      70: * We start the search from our current CPU Id in the core map
      71: * and go in a circle until we get back to our ID looking for a
      102: * visible before we return to caller (and the   [in grab_next_hostcore()]
      147: * if we can't find one, set up state to eventually return too hard.   [in icp_rm_set_vcpu_irq()]
      193: * the state already. This is why we never clear the interrupt output   [in icp_rm_try_update()]
      194: * here, we only ever set it. The clear only happens prior to doing   [in icp_rm_try_update()]
      195: * an update and only by the processor itself. Currently we do it   [in icp_rm_try_update()]
      198: * We also do not try to figure out whether the EE state has changed,   [in icp_rm_try_update()]
      199: * we unconditionally set it if the new state calls for it. The reason   [in icp_rm_try_update()]
      200: * for that is that we opportunistically remove the pending interrupt   [in icp_rm_try_update()]
      [all …]
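The circular scan described at lines 70-71 (start at our own core id, wrap around, stop when we get back) is a modular-index loop. A minimal Python sketch of the search pattern (illustrative names and a plain ready-flag list, not the kernel's core map structure):

```python
def next_ready_core(core_ready, start):
    """Circular scan of the core map: begin at our own id and wrap
    around; return the first ready core, or None if we get all the
    way back without finding one."""
    n = len(core_ready)
    for i in range(n):
        core = (start + i) % n      # wraps past the end back to index 0
        if core_ready[core]:
            return core
    return None
```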
|
/linux/kernel/irq/

  spurious.c
      51: * All handlers must agree on IRQF_SHARED, so we test just the   [in try_one_irq()]
      165: * We need to take desc->lock here. note_interrupt() is called   [in __report_bad_irq()]
      166: * w/o desc->lock held, but IRQ_PROGRESS set. We might race   [in __report_bad_irq()]
      197: /* We didn't actually handle the IRQ - see if it was misrouted? */   [in try_misrouted_irq()]
      202: * But for 'irqfixup == 2' we also do it for handled interrupts if   [in try_misrouted_irq()]
      213: * Since we don't get the descriptor lock, "action" can   [in try_misrouted_irq()]
      235: * We cannot call note_interrupt from the threaded handler   [in note_interrupt()]
      236: * because we need to look at the compound of all handlers   [in note_interrupt()]
      238: * shared case we have no serialization against an incoming   [in note_interrupt()]
      239: * hardware interrupt while we are dealing with a threaded   [in note_interrupt()]
      [all …]
|
/linux/kernel/futex/

  pi.c
      114: * We need to check the following states:
      134: * [1] Indicates that the kernel can acquire the futex atomically. We
      218: * We get here with hb->lock held, and having found a   [in attach_to_pi_state()]
      227: * free pi_state before we can take a reference ourselves.   [in attach_to_pi_state()]
      232: * Now that we have a pi_state, we can acquire wait_lock   [in attach_to_pi_state()]
      240: * still is what we expect it to be, otherwise retry the entire   [in attach_to_pi_state()]
      383: * This creates pi_state, we have hb->lock held, this means nothing can   [in __attach_to_pi_owner()]
      419: * We are the first waiter - try to look up the real owner and attach   [in attach_to_pi_owner()]
      437: * We need to look at the task state to figure out, whether the   [in attach_to_pi_owner()]
      439: * in futex_exit_release(), we do this protected by p->pi_lock:   [in attach_to_pi_owner()]
      [all …]
|