Lines Matching full:we (fs/xfs/xfs_log_cil.c)

23 * recover, so we don't allow failure here. Also, we allocate in a context that
24 * we don't want to be issuing transactions from, so we need to tell the
27 * We don't reserve any space for the ticket - we are going to steal whatever
28 * space we require from transactions as they commit. To ensure we reserve all
29 * the space required, we need to set the current reservation of the ticket to
30 * zero so that we know to steal the initial transaction overhead from the
42 * set the current reservation to zero so we know to steal the basic in xlog_cil_ticket_alloc()
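
Taken together, the fragments above describe a ticket that starts with a zero
current reservation so that the first transaction to commit also funds the
checkpoint overhead. A minimal user-space sketch of that accounting trick,
using hypothetical names (cil_ticket, cil_charge) rather than the kernel's:

#include <stdio.h>

struct cil_ticket {
	int curr_res;	/* space reserved so far; zero marks "first commit" */
};

/* Steal the checkpoint's space from a committing transaction. */
static int cil_charge(struct cil_ticket *tic, int ckpt_overhead, int len)
{
	int steal = len;

	/* A zero current reservation means the checkpoint overhead has not
	 * been funded yet, so steal that from this commit too. */
	if (tic->curr_res == 0)
		steal += ckpt_overhead;
	tic->curr_res += steal;
	return steal;	/* amount removed from the transaction's reservation */
}

int main(void)
{
	struct cil_ticket tic = { 0 };

	printf("first commit donates %d\n", cil_charge(&tic, 100, 40));  /* 140 */
	printf("second commit donates %d\n", cil_charge(&tic, 100, 40)); /* 40 */
	return 0;
}
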
62 * We can't rely on just the log item being in the CIL, we have to check
80 * current sequence, we're in a new checkpoint. in xlog_item_in_current_chkpt()
140 * We're in the middle of switching cil contexts. Reset the in xlog_cil_push_pcp_aggregate()
141 * counter we use to detect when the current context is nearing in xlog_cil_push_pcp_aggregate()
151 * limit threshold so we can switch to atomic counter aggregation for accurate
167 * We can race with other cpus setting cil_pcpmask. However, we've in xlog_cil_insert_pcp_aggregate()
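
A rough user-space analogue of the per-cpu accounting hinted at above,
assuming made-up names and a made-up threshold policy: each CPU accumulates
into a private counter and folds it into a shared atomic only once the total
nears the limit, when accuracy starts to matter more than contention:

#include <stdatomic.h>
#include <stdio.h>

#define NCPUS		4
#define SPACE_LIMIT	1000

static long pcp_used[NCPUS];		/* stand-ins for per-cpu counters */
static atomic_long shared_used;		/* aggregated global counter */

static void cil_account(int cpu, long len)
{
	pcp_used[cpu] += len;

	/* Nearing the limit: switch to accurate, aggregated counting by
	 * folding this CPU's private count into the shared counter. */
	if (atomic_load(&shared_used) + pcp_used[cpu] > SPACE_LIMIT / 2) {
		atomic_fetch_add(&shared_used, pcp_used[cpu]);
		pcp_used[cpu] = 0;
	}
}

/* Context switch: drain whatever the CPUs still hold privately. */
static long cil_aggregate(void)
{
	long sum = atomic_load(&shared_used);

	for (int cpu = 0; cpu < NCPUS; cpu++) {
		sum += pcp_used[cpu];
		pcp_used[cpu] = 0;	/* reset for the new context */
	}
	return sum;
}

int main(void)
{
	cil_account(0, 100);	/* stays in the per-cpu counter */
	cil_account(1, 600);	/* over the fold threshold: aggregated */
	printf("context used %ld bytes\n", cil_aggregate());	/* 700 */
	return 0;
}
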
194 * After the first stage of log recovery is done, we know where the head and
195 * tail of the log are. We need this log initialisation done before we can
198 * Here we allocate a log ticket to track space usage during a CIL push. This
199 * ticket is passed to xlog_write() directly so that we don't slowly leak log
230 * If we do this allocation within xlog_cil_insert_format_items(), it is done
232 * the memory allocation. This means that we have a potential deadlock situation
233 * under low memory conditions when we have lots of dirty metadata pinned in
234 * the CIL and we need a CIL commit to occur to free memory.
236 * To avoid this, we need to move the memory allocation outside the
243 * process, we cannot share the buffer between the transaction commit (which
246 * unreliable, but we most definitely do not want to be allocating and freeing
253 * the incoming modification. Then during the formatting of the item we can swap
254 * the active buffer with the new one if we can't reuse the existing buffer. We
256 * its size is right, otherwise we'll free and reallocate it at that point.
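
Condensed into a sketch, the double-buffering described above looks roughly
like this: allocate a big-enough shadow buffer while unlocked and sleepable,
then swap it in under the CIL context lock only if the active buffer can't be
reused. The types and helpers are stand-ins, and error handling is omitted:

#include <stdlib.h>

struct log_vec {
	size_t	size;
	char	*buf;
};

struct log_item {
	struct log_vec	*active;	/* vector the CIL currently tracks */
	struct log_vec	*shadow;	/* preallocated while unlocked */
};

/* Unlocked, sleepable context: allocation is safe here. */
static void alloc_shadow(struct log_item *lip, size_t need)
{
	if (lip->shadow && lip->shadow->size >= need)
		return;				/* existing shadow is big enough */

	/* Free-and-alloc rather than realloc: the old contents are stale,
	 * so copying them would be wasted work, and no zeroing is needed. */
	if (lip->shadow) {
		free(lip->shadow->buf);
		free(lip->shadow);
	}
	lip->shadow = malloc(sizeof(*lip->shadow));
	lip->shadow->buf = malloc(need);
	lip->shadow->size = need;
}

/* Under the CIL context lock: no allocation allowed, only a swap. */
static struct log_vec *format_item(struct log_item *lip, size_t need)
{
	if (!lip->active || lip->active->size < need) {
		struct log_vec *old = lip->active;

		lip->active = lip->shadow;	/* switch to the shadow */
		lip->shadow = old;		/* old buffer reused or freed later */
	}
	return lip->active;
}
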
289 * Ordered items need to be tracked but we do not wish to write in xlog_cil_alloc_shadow_bufs()
290 * them. We need a logvec to track the object, but we do not in xlog_cil_alloc_shadow_bufs()
300 * We 64-bit align the length of each iovec so that the start of in xlog_cil_alloc_shadow_bufs()
301 * the next one is naturally aligned. We'll need to account for in xlog_cil_alloc_shadow_bufs()
304 * We also add the xlog_op_header to each region when in xlog_cil_alloc_shadow_bufs()
306 * at this point. Hence we'll need an additional number of bytes in xlog_cil_alloc_shadow_bufs()
318 * that space to ensure we can align it appropriately and not in xlog_cil_alloc_shadow_bufs()
324 * if we have no shadow buffer, or it is too small, we need to in xlog_cil_alloc_shadow_bufs()
330 * We free and allocate here as a realloc would copy in xlog_cil_alloc_shadow_bufs()
331 * unnecessary data. We don't use kvzalloc() for the in xlog_cil_alloc_shadow_bufs()
332 * same reason - we don't need to zero the data area in in xlog_cil_alloc_shadow_bufs()
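
The alignment and op-header accounting mentioned in these fragments amounts to
a worst-case size calculation along these lines; the header size and names are
illustrative, not the kernel's:

#include <stddef.h>
#include <stdio.h>

#define OP_HDR_SIZE	16	/* illustrative; not sizeof(struct xlog_op_header) */

/* Round a region length up to the next 64-bit boundary. */
static size_t round_up_64(size_t len)
{
	return (len + 7) & ~(size_t)7;
}

/* Worst-case buffer size: aligned payload plus one op header per region. */
static size_t shadow_buf_size(const size_t *region_len, int nregions)
{
	size_t bytes = 0;

	for (int i = 0; i < nregions; i++)
		bytes += round_up_64(region_len[i]) + OP_HDR_SIZE;
	return bytes;
}

int main(void)
{
	size_t regions[] = { 13, 100, 7 };

	/* (16 + 16) + (104 + 16) + (8 + 16) = 176 */
	printf("%zu\n", shadow_buf_size(regions, 3));
	return 0;
}
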
384 * If there is no old LV, this is the first time we've seen the item in in xfs_cil_prepare_item()
385 * this CIL context and so we need to pin it. If we are replacing the in xfs_cil_prepare_item()
387 * buffer for later freeing. In both cases we are now switching to the in xfs_cil_prepare_item()
406 * CIL, store the sequence number on the log item so we can in xfs_cil_prepare_item()
417 * For delayed logging, we need to hold a formatted buffer containing all the
425 * guaranteed to be large enough for the current modification, but we will only
426 * use that if we can't reuse the existing lv. If we can't reuse the existing
427 * lv, then simply swap it out for the shadow lv. We don't free it - that is
430 * We don't set up region headers during this process; we simply copy the
431 * regions into the flat buffer. We can do this because we still have to do a
433 * ophdrs during the iclog write means that we can support splitting large
437 * Hence what we need to do now is rewrite the vector array to point
438 * to the copied region inside the buffer we just allocated. This allows us to
450 /* Bail out if we didn't find a log item. */ in xlog_cil_insert_format_items()
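
The "rewrite the vector array" step reads like the following sketch: copy each
region into a flat buffer and repoint the vector entries at the copies.
struct region and flatten_regions() are hypothetical stand-ins:

#include <string.h>

struct region {
	void	*addr;	/* before: caller's buffer; after: inside 'buf' */
	size_t	len;
};

/* Copy every region into the flat buffer and repoint the vector at the
 * copies, keeping each copy 64-bit aligned for the next region. */
static size_t flatten_regions(struct region *vec, int n, char *buf)
{
	size_t used = 0;

	for (int i = 0; i < n; i++) {
		memcpy(buf + used, vec[i].addr, vec[i].len);
		vec[i].addr = buf + used;
		used += (vec[i].len + 7) & ~(size_t)7;
	}
	return used;	/* bytes consumed in 'buf' */
}
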
540 * as well. Remove the amount of space we added to the checkpoint ticket from
562 * We can do this safely because the context can't checkpoint until we in xlog_cil_insert_items()
563 * are done so it doesn't matter exactly how we update the CIL. in xlog_cil_insert_items()
568 * Subtract the space released by intent cancelation from the space we in xlog_cil_insert_items()
569 * consumed so that we remove it from the CIL space and add it back to in xlog_cil_insert_items()
575 * Grab the per-cpu pointer for the CIL before we start any accounting. in xlog_cil_insert_items()
576 * That ensures that we are running with pre-emption disabled and so we in xlog_cil_insert_items()
588 * We need to take the CIL checkpoint unit reservation on the first in xlog_cil_insert_items()
589 * commit into the CIL. Test the XLOG_CIL_EMPTY bit first so we don't in xlog_cil_insert_items()
590 * unnecessarily do an atomic op in the fast path here. We can clear the in xlog_cil_insert_items()
591 * XLOG_CIL_EMPTY bit as we are under the xc_ctx_lock here and that in xlog_cil_insert_items()
599 * Check if we need to steal iclog headers. atomic_read() is not a in xlog_cil_insert_items()
600 * locked atomic operation, so we can check the value before we do any in xlog_cil_insert_items()
601 * real atomic ops in the fast path. If we've already taken the CIL unit in xlog_cil_insert_items()
602 * reservation from this commit, we've already got one iclog header in xlog_cil_insert_items()
603 * space reserved so we have to account for that, otherwise we risk in xlog_cil_insert_items()
606 * If the CIL is already at the hard limit, we might need more header in xlog_cil_insert_items()
608 * commit that occurs once we are over the hard limit to ensure the CIL in xlog_cil_insert_items()
611 * This can steal more than we need, but that's OK. in xlog_cil_insert_items()
642 * If we just transitioned over the soft limit, we need to in xlog_cil_insert_items()
657 * We do this here so we only need to take the CIL lock once during in xlog_cil_insert_items()
674 * If we've overrun the reservation, dump the tx details before we move in xlog_cil_insert_items()
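
The XLOG_CIL_EMPTY fast path described above follows a common pattern: a cheap
unlocked read first, then an atomic RMW only when the bit looks set. A
self-contained C11 sketch, with names that are not the kernel's:

#include <stdatomic.h>
#include <stdbool.h>

#define CIL_EMPTY	(1u << 0)

static atomic_uint cil_flags = CIL_EMPTY;	/* starts empty */

/*
 * Returns true for exactly one caller: whoever clears the bit takes the
 * checkpoint's unit reservation for this context.
 */
static bool take_unit_reservation(void)
{
	/* Cheap unlocked read first: skip the atomic RMW in the common
	 * case where someone already claimed the reservation. */
	if (!(atomic_load_explicit(&cil_flags, memory_order_relaxed) & CIL_EMPTY))
		return false;

	/* The fetch-and is atomic, so only one caller observes the bit set. */
	return atomic_fetch_and(&cil_flags, ~CIL_EMPTY) & CIL_EMPTY;
}
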
719 * not the commit record LSN. This is because we can pipeline multiple
736 * If we are called with the aborted flag set, it is because a log write during
740 * iclog write error even though we haven't started any IO yet. Hence in this
741 * case all we need to do is iop_committed processing, followed by an
768 * higher LSN than the current head. We do this before insertion of the in xlog_cil_ail_insert()
770 * space that this checkpoint has already consumed. We call in xlog_cil_ail_insert()
784 * We move the AIL head forwards to account for the space used in the in xlog_cil_ail_insert()
785 * log before we remove that space from the grant heads. This prevents a in xlog_cil_ail_insert()
817 * if we are aborting the operation, no point in inserting the in xlog_cil_ail_insert()
818 * object into the AIL as we are in a shutdown situation. in xlog_cil_ail_insert()
832 * we have the ail lock. Then unpin the item. This does in xlog_cil_ail_insert()
855 /* make sure we insert the remainder! */ in xlog_cil_ail_insert()
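
A compact sketch of the abort-aware insertion loop these fragments describe;
the real code batches insertions and holds the AIL lock across the unpin,
which the stand-in types below only gesture at:

#include <stdbool.h>

struct ail_item {
	unsigned long	lsn;	/* 0 means "not in the AIL" */
	int		pins;
};

/*
 * Caller holds the (conceptual) AIL lock for the whole loop, so the unpin
 * cannot race with an AIL pusher seeing a half-inserted item.
 */
static void ail_insert_items(struct ail_item *items, int n,
			     unsigned long start_lsn, bool aborted)
{
	for (int i = 0; i < n; i++) {
		/* Shutdown/abort: no point tracking the item for
		 * writeback, just drop the pin. */
		if (!aborted)
			items[i].lsn = start_lsn;
		items[i].pins--;
	}
}
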
879 * Mark all items committed and clear busy extents. We free the log vector
880 * chains in a separate pass so that we unpin the log items as quickly as
891 * If the I/O failed, we're aborting the commit and already shutdown. in xlog_cil_committed()
892 * Wake any commit waiters before aborting the log items so we don't in xlog_cil_committed()
939 * Record the LSN of the iclog we were just granted space to start writing into.
956 * The LSN we need to pass to the log items on transaction in xlog_cil_set_ctx_write_state()
958 * the commit lsn. If we use the commit record lsn then we can in xlog_cil_set_ctx_write_state()
967 * Make sure the metadata we are about to overwrite in the log in xlog_cil_set_ctx_write_state()
978 * Take a reference to the iclog for the context so that we still hold in xlog_cil_set_ctx_write_state()
986 * iclog for an entire commit record, so we can attach the context in xlog_cil_set_ctx_write_state()
987 * callbacks now. This needs to be done before we make the commit_lsn in xlog_cil_set_ctx_write_state()
997 * Now we can record the commit LSN and wake anyone waiting for this in xlog_cil_set_ctx_write_state()
1031 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_order_write()
1138 * Build a checkpoint transaction header to begin the journal transaction. We
1142 * This is the only place we write a transaction header, so we also build the
1144 * transaction header. We keep the start record in its own log vector rather
1196 * CIL item reordering compare function. We want to order in ascending ID order,
1197 * but we want to leave items with the same ID in the order they were added to
1198 * the list. This is important for operations like reflink where we log 4 order-
1199 * dependent intents in a single transaction when we overwrite an existing
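
A sketch of a comparator satisfying the contract described above: ascending by
ID, and 0 for ties so a stable sort keeps same-ID items in commit order (the
kernel's list_sort() is a stable merge sort). The item layout is hypothetical:

/* Only the comparator contract matters; the struct is a stand-in. */
struct cil_item {
	unsigned long	order_id;
};

static int cil_item_cmp(const struct cil_item *a, const struct cil_item *b)
{
	if (a->order_id > b->order_id)
		return 1;
	if (a->order_id < b->order_id)
		return -1;
	return 0;	/* equal IDs keep their insertion order */
}
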
1217 * the CIL. We don't need the CIL lock here because it's only needed on the
1220 * If a log item is marked with a whiteout, we do not need to write it to the
1221 * journal and so we just move it to the whiteout list for the caller to
1247 /* we don't write ordered log vectors */ in xlog_cil_build_lv_chain()
1275 * If the current sequence is the same as xc_push_seq we need to do a flush. If
1277 * flushed and we don't need to do anything - the caller will wait for it to
1281 * Hence we can allow log forces to run racily and not issue pushes for the
1282 * same sequence twice. If we get a race between multiple pushes for the same
1287 * allocation context. However, we do not want to block on memory reclaim
1289 * by memory reclaim itself. Hence we really need to run under full GFP_NOFS
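
One way to get the "full GFP_NOFS" behaviour these fragments call for is the
kernel's scoped-allocation API. The sketch below uses the real
memalloc_nofs_save()/memalloc_nofs_restore() interfaces from
linux/sched/mm.h, but the surrounding function is a made-up stand-in and will
only build inside a kernel tree:

#include <linux/sched/mm.h>
#include <linux/slab.h>

static void example_push_work(size_t len)
{
	unsigned int nofs_flags;
	void *buf;

	/* Every allocation in this scope implicitly drops __GFP_FS, so
	 * memory reclaim cannot recurse back into the filesystem. */
	nofs_flags = memalloc_nofs_save();
	buf = kmalloc(len, GFP_KERNEL);
	/* ... format the checkpoint into buf ... */
	kfree(buf);
	memalloc_nofs_restore(nofs_flags);
}
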
1324 * As we are about to switch to a new, empty CIL context, we no longer in xlog_cil_push_work()
1337 * Check if we have anything to push. If there is nothing, then we don't in xlog_cil_push_work()
1338 * move on to a new sequence number and so we have to be able to push in xlog_cil_push_work()
1355 * We are now going to push this context, so add it to the committing in xlog_cil_push_work()
1356 * list before we do anything else. This ensures that anyone waiting on in xlog_cil_push_work()
1365 * waiting on. If the CIL is not empty, we get put on the committing in xlog_cil_push_work()
1367 * an empty CIL and an unchanged sequence number means we jumped out in xlog_cil_push_work()
1384 * Switch the contexts so we can drop the context lock and move out in xlog_cil_push_work()
1385 * of a shared context. We can't just go straight to the commit record, in xlog_cil_push_work()
1386 * though - we need to synchronise with previous and future commits so in xlog_cil_push_work()
1388 * that we process items during log IO completion in the correct order. in xlog_cil_push_work()
1390 * For example, if we get an EFI in one checkpoint and the EFD in the in xlog_cil_push_work()
1391 * next (e.g. due to log forces), we do not want the checkpoint with in xlog_cil_push_work()
1393 * we must strictly order the commit records of the checkpoints so in xlog_cil_push_work()
1398 * Hence we need to add this context to the committing context list so in xlog_cil_push_work()
1404 * committing list. This also ensures that we can do unlocked checks in xlog_cil_push_work()
1414 * Sort the log vector chain before we add the transaction headers. in xlog_cil_push_work()
1415 * This ensures we always have the transaction headers at the start in xlog_cil_push_work()
1422 * begin the transaction. We need to account for the space used by the in xlog_cil_push_work()
1424 * Add the lvhdr to the head of the lv chain we pass to xlog_write() so in xlog_cil_push_work()
1446 * Grab the ticket from the ctx so we can ungrant it after releasing the in xlog_cil_push_work()
1447 * commit_iclog. The ctx may be freed by the time we return from in xlog_cil_push_work()
1449 * callback run) so we can't reference the ctx after the call to in xlog_cil_push_work()
1456 * to complete before we submit the commit_iclog. We can't use state in xlog_cil_push_work()
1460 * In the latter case, if it's a future iclog and we wait on it, then we in xlog_cil_push_work()
1462 * wakeup until this commit_iclog is written to disk. Hence we use the in xlog_cil_push_work()
1463 * iclog header lsn and compare it to the commit lsn to determine if we in xlog_cil_push_work()
1474 * iclogs older than ic_prev. Hence we only need to wait in xlog_cil_push_work()
1482 * We need to issue a pre-flush so that the ordering for this in xlog_cil_push_work()
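
The ticket/ctx lifetime rule above boils down to a simple pattern: copy out
what you need before the call that can free the object. A user-space sketch
with stand-in types and helpers:

#include <stdlib.h>

struct ticket { long reserved; };
struct cil_ctx { struct ticket *tic; };

/* Stand-in: completion callbacks may run here and free the context. */
static void release_commit_iclog(struct cil_ctx *ctx)
{
	free(ctx);
}

static void ungrant(struct ticket *tic)
{
	tic->reserved = 0;
}

static void push_finish(struct cil_ctx *ctx)
{
	/* Take the ticket out first: once release_commit_iclog() returns,
	 * ctx may already have been freed by log I/O completion. */
	struct ticket *tic = ctx->tic;

	release_commit_iclog(ctx);
	ungrant(tic);		/* safe: we hold our own pointer */
}
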
1539 * We need to push the CIL every so often so we don't cache more than we can fit in
1553 * The cil won't be empty because we are called while holding the in xlog_cil_push_background()
1554 * context lock so whatever we added to the CIL will still be there. in xlog_cil_push_background()
1559 * We are done if: in xlog_cil_push_background()
1560 * - we haven't used up all the space available yet; or in xlog_cil_push_background()
1561 * - we've already queued up a push; and in xlog_cil_push_background()
1562 * - we're not over the hard limit; and in xlog_cil_push_background()
1565 * If so, we don't need to take the push lock as there's nothing to do. in xlog_cil_push_background()
1582 * Drop the context lock now; we can't hold that if we need to sleep in xlog_cil_push_background()
1583 * because we are over the blocking threshold. The push_lock is still in xlog_cil_push_background()
1590 * If we are well over the space limit, throttle the work that is being in xlog_cil_push_background()
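
The "done if" checks above sketch out to a small policy function; the
thresholds, names, and return values below are illustrative only:

#include <stdbool.h>

enum push_action { PUSH_NONE, PUSH_QUEUE_BG, PUSH_THROTTLE };

struct cil_state {
	long	used;		/* space consumed by the current context */
	long	soft_limit;	/* start a background push here */
	long	hard_limit;	/* throttle committers here */
	bool	push_queued;
};

static enum push_action push_background(struct cil_state *cil)
{
	if (cil->used <= cil->soft_limit)
		return PUSH_NONE;		/* plenty of room: nothing to do */

	if (!cil->push_queued) {
		cil->push_queued = true;	/* schedule the background push */
		if (cil->used < cil->hard_limit)
			return PUSH_QUEUE_BG;
	}

	/* Hard limit: make the committer sleep until the push drains. */
	if (cil->used >= cil->hard_limit)
		return PUSH_THROTTLE;
	return PUSH_NONE;
}
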
1615 * If the caller is performing a synchronous force, we will flush the workqueue
1620 * If the caller is performing an async push, we need to ensure that the
1621 * checkpoint is fully flushed out of the iclogs when we finish the push. If we
1625 * mechanism. Hence in this case we need to pass a flag to the push work to
1648 * If this is an async flush request, we always need to set the in xlog_cil_push_now()
1657 * If the CIL is empty or we've already pushed the sequence then in xlog_cil_push_now()
1658 * there's no more work that we need to do. in xlog_cil_push_now()
1687 * committed in the current (same) CIL checkpoint, we don't need to write either
1689 * journalled atomically within this checkpoint. As we cannot remove items from
1725 * To do this, we need to format the item, pin it in memory if required and
1726 * account for the space used by the transaction. Once we have done that we
1728 * transaction to the checkpoint context so we carry the busy extents through
1747 * Do all necessary memory allocation before we lock the CIL. in xlog_cil_commit()
1772 * This needs to be done before we drop the CIL context lock because we in xlog_cil_commit()
1774 * to disk. If we don't, then the CIL checkpoint can race with us and in xlog_cil_commit()
1775 * we can run checkpoint completion before we've updated and unlocked in xlog_cil_commit()
1817 * We only need to push if we haven't already pushed the sequence number given.
1818 * Hence the only time we will trigger a push here is if the push sequence is
1821 * We return the current commit lsn to allow the callers to determine if a
1840 * check to see if we need to force out the current context. in xlog_cil_force_seq()
1848 * See if we can find a previous sequence still committing. in xlog_cil_force_seq()
1849 * We need to wait for all previous sequence commits to complete in xlog_cil_force_seq()
1856 * Avoid getting stuck in this loop because we were woken by the in xlog_cil_force_seq()
1881 * Hence by the time we have got here, our sequence may not have been in xlog_cil_force_seq()
1887 * Hence if we don't find the context in the committing list and the in xlog_cil_force_seq()
1891 * it means we haven't yet started the push, because if it had started in xlog_cil_force_seq()
1892 * we would have found the context on the committing list. in xlog_cil_force_seq()
1904 * We detected a shutdown in progress. We need to trigger the log force in xlog_cil_force_seq()
1906 * we are already in a shutdown state. Hence we can't return in xlog_cil_force_seq()
1908 * LSN is already stable), so we return a zero LSN instead. in xlog_cil_force_seq()
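
Finally, the force-to-sequence logic in these fragments can be sketched as:
push only if the sequence hasn't been pushed yet, then wait on each committing
context at or below it. Everything below is a stand-in, including stubbed
wait/push helpers so the sketch terminates:

struct cil_seq_ctx {
	unsigned long		sequence;
	unsigned long		commit_lsn;	/* 0 until commit record hits disk */
	struct cil_seq_ctx	*next;		/* committing list */
};

struct cil {
	unsigned long		push_seq;	/* highest sequence already pushed */
	struct cil_seq_ctx	*committing;
};

static void push_now(struct cil *cil, unsigned long seq)
{
	(void)cil; (void)seq;	/* stub: queue the push work */
}

static void wait_for_commit(struct cil_seq_ctx *ctx)
{
	ctx->commit_lsn = 1;	/* stub: the real code sleeps, then re-checks */
}

static unsigned long force_seq(struct cil *cil, unsigned long seq)
{
	unsigned long lsn = 0;

	/* Only trigger a push if this sequence hasn't been pushed yet. */
	if (seq > cil->push_seq) {
		cil->push_seq = seq;
		push_now(cil, seq);
	}

	/* Wait for every committing context at or below 'seq'; wakeups can
	 * be spurious, so the commit LSN check sits in a loop. */
	for (struct cil_seq_ctx *ctx = cil->committing; ctx; ctx = ctx->next) {
		if (ctx->sequence > seq)
			continue;
		while (!ctx->commit_lsn)
			wait_for_commit(ctx);
		if (ctx->sequence == seq)
			lsn = ctx->commit_lsn;
	}
	return lsn;
}
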