Lines matching the full word "we" in fs/xfs/xfs_log_recover.c

78 * Pass log block 0 since we don't have an addr yet, buffer will be in xlog_alloc_buffer()
88 * We do log I/O in units of log sectors (a power-of-2 multiple of the in xlog_alloc_buffer()
89 * basic block size), so we round up the requested size to accommodate in xlog_alloc_buffer()
97 * blocks (sector size 1). But otherwise we extend the buffer by one in xlog_alloc_buffer()
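
Taken together, the fragments above describe how xlog_alloc_buffer() sizes
its buffer: log I/O is done in whole log sectors (a power-of-2 multiple of
the basic block size), so the requested block count is rounded up, and an
extra sector is added when the request may be unaligned. A minimal
userspace sketch of that round-up, with an illustrative helper name and an
assumed power-of-2 sector size:

#include <assert.h>
#include <stdio.h>

/*
 * Round a basic-block count up to whole log sectors. sectbb is the
 * sector size expressed in basic blocks; per the comments above it
 * is a power of two.
 */
static int round_up_to_sectors(int nbblks, int sectbb)
{
        assert((sectbb & (sectbb - 1)) == 0);
        return (nbblks + sectbb - 1) & ~(sectbb - 1);
}

int main(void)
{
        printf("%d\n", round_up_to_sectors(5, 4));      /* prints 8 */
        printf("%d\n", round_up_to_sectors(8, 1));      /* sector size 1: prints 8 */
        return 0;
}
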
249 * h_fs_uuid is null, we assume this log was last mounted in xlog_header_check_mount()
328 * range of basic blocks we'll be examining. If that fails, in xlog_find_verify_cycle()
329 * try a smaller size. We need to be able to read at least in xlog_find_verify_cycle()
330 * a log sector, or we're out of luck. in xlog_find_verify_cycle()
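
The fallback these lines describe, retrying with progressively smaller
buffers until at least one log sector fits, amounts to a halving loop. A
sketch, assuming plain malloc() stands in for the kernel allocator and
512-byte basic blocks:

#include <stdlib.h>

#define BBSIZE  512     /* basic block size */

/*
 * Halve the request on allocation failure; give up once it can no
 * longer cover a single log sector of sectbb basic blocks, mirroring
 * the "or we're out of luck" case above.
 */
static char *alloc_cycle_buffer(int *bufblks, int sectbb)
{
        char *buf;

        while (!(buf = malloc((size_t)*bufblks * BBSIZE))) {
                *bufblks >>= 1;
                if (*bufblks < sectbb)
                        return NULL;
        }
        return buf;
}
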
385 * a good log record. Therefore, we subtract one to get the block number
387 * of blocks we would have read on a previous read. This happens when the
450 * We hit the beginning of the physical log & still no header. Return in xlog_find_verify_log_record()
460 * We have the final block of the good log (the first block in xlog_find_verify_log_record()
461 * of the log record _before_ the head). So we check the uuid. in xlog_find_verify_log_record()
467 * We may have found a log record header before we expected one. in xlog_find_verify_log_record()
468 * last_blk will be the 1st block # with a given cycle #. We may end in xlog_find_verify_log_record()
469 * up reading an entire log record. In this case, we don't want to in xlog_find_verify_log_record()
471 * record do we update last_blk. in xlog_find_verify_log_record()
487 * eliminated when calculating the head. We aren't guaranteed that previous
488 * LRs have complete transactions. We only know that a cycle number of
489 * current cycle number -1 won't be present in the log if we start writing
523 * log so we can store the uuid in there in xlog_find_head()
555 * we set it to log_bbnum, which is an invalid block number, but this in xlog_find_head()
563 * In this case we believe that the entire log should have in xlog_find_head()
564 * cycle number last_half_cycle. We need to scan backwards in xlog_find_head()
566 * containing last_half_cycle - 1. If we find such a hole, in xlog_find_head()
581 * In the 256k log case, we will read from the beginning to the in xlog_find_head()
583 * We don't worry about the x+1 blocks that we encounter, in xlog_find_head()
584 * because we know that they cannot be the head since the log in xlog_find_head()
591 * In this case we want to find the first block with cycle in xlog_find_head()
592 * number matching last_half_cycle. We expect the log to be in xlog_find_head()
596 * be where the new head belongs. First we do a binary search in xlog_find_head()
598 * search may not be totally accurate, so then we scan back in xlog_find_head()
601 * the log, then we look for occurrences of last_half_cycle - 1 in xlog_find_head()
602 * at the end of the log. The cases we're looking for look in xlog_find_head()
606 * ^ but we want to locate this spot in xlog_find_head()
610 * ^ we want to locate this spot in xlog_find_head()
624 * we actually look at the block size of the filesystem. in xlog_find_head()
629 * We are guaranteed that the entire check can be performed in xlog_find_head()
641 * We are going to scan backwards in the log in two parts. in xlog_find_head()
642 * First we scan the physical end of the log. In this part in xlog_find_head()
643 * of the log, we are looking for blocks with cycle number in xlog_find_head()
645 * If we find one, then we know that the log starts there, as in xlog_find_head()
646 * we've found a hole that didn't get written in going around in xlog_find_head()
651 * last_half_cycle, then we check the blocks at the start of in xlog_find_head()
652 * the log looking for occurrences of last_half_cycle. If we in xlog_find_head()
654 * first occurrence of last_half_cycle is wrong and we move in xlog_find_head()
655 * back to the hole we've found. This case looks like in xlog_find_head()
658 * Another case we need to handle that only occurs in 256k in xlog_find_head()
663 * x + 1 blocks. We need to skip past those since that is in xlog_find_head()
665 * last_half_cycle-1 we accomplish that. in xlog_find_head()
696 * Now we need to make sure head_blk is not pointing to a block in in xlog_find_head()
716 /* We hit the beginning of the log during our search */ in xlog_find_head()
740 * When returning here, we have a good block number. Bad block in xlog_find_head()
741 * means that during a previous crash, we didn't have a clean break in xlog_find_head()
742 * from cycle number N to cycle number N-1. In this case, we need in xlog_find_head()
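
The binary search these comments refer to narrows in on the first block
carrying last_half_cycle, and the follow-up backward scan exists because
the search alone is not exact. A simplified model of the search step over
an in-memory array of cycle numbers, assuming the layout the diagram
fragments show (newer-cycle blocks first, then last_half_cycle blocks):

/*
 * Return the index of the first block whose cycle number equals
 * last_half_cycle, assuming at least one block of each cycle and a
 * monotone layout of (last_half_cycle + 1) blocks followed by
 * last_half_cycle blocks. Illustrative only: the kernel reads cycle
 * numbers from disk and then scans back to verify the result.
 */
static int find_cycle_start(const unsigned int *cycles, int nblocks,
                            unsigned int last_half_cycle)
{
        int lo = 0, hi = nblocks - 1;

        while (lo < hi) {
                int mid = lo + (hi - lo) / 2;

                if (cycles[mid] == last_half_cycle)
                        hi = mid;       /* first match is at mid or before */
                else
                        lo = mid + 1;   /* still in the newer cycle */
        }
        return lo;
}
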
757 * Given a starting log block, walk backwards until we find the provided number
782 * Walk backwards from the head block until we hit the tail or the first in xlog_rseek_logrec_hdr()
800 * If we haven't hit the tail block or the log record header count, in xlog_rseek_logrec_hdr()
830 * Given head and tail blocks, walk forward from the tail block until we find
856 * Walk forward from the tail block until we hit the head or the last in xlog_seek_logrec_hdr()
874 * If we haven't hit the head block or the log record header count, in xlog_seek_logrec_hdr()
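
Both walks boil down to scanning block by block for a log record header,
identified by the magic number in the first word of a block, and stopping
once the requested number of headers is found or the tail/head boundary is
hit. A userspace sketch of the backward variant, with the per-block first
words modeled as an array (the magic value is XLOG_HEADER_MAGIC_NUM from
the XFS headers):

#include <stdint.h>

#define XLOG_HEADER_MAGIC_NUM   0xFEEDbabeU

/*
 * Walk backwards from head_blk toward tail_blk, remembering the block
 * of each record header found, until count headers have been seen.
 * Returns how many were found; fewer than count means the tail was
 * hit first, as the fragments above describe.
 */
static int rseek_logrec_hdr(const uint32_t *first_words,
                            int head_blk, int tail_blk,
                            int count, int *found_blk)
{
        int blk, found = 0;

        for (blk = head_blk - 1; blk >= tail_blk; blk--) {
                if (first_words[blk] == XLOG_HEADER_MAGIC_NUM) {
                        *found_blk = blk;
                        if (++found == count)
                                break;
                }
        }
        return found;
}
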
920 * We also have to handle the case where the tail was pinned and the head
926 * recovery because we have no way to know the tail was updated if the
965 * Run a CRC check from the tail to the head. We can't just check in xlog_verify_tail()
1011 * CRC verification. While we can't always be certain that CRC verification
1012 * failure is due to a torn write vs. an unrelated corruption, we do know that
1039 * head until we hit the tail or the maximum number of log record I/Os in xlog_verify_head()
1041 * we don't trash the rhead/buffer pointers from the caller. in xlog_verify_head()
1062 * We've hit a potential torn write. Reset the error and warn in xlog_verify_head()
1109 * We need to make sure we handle log wrapping properly, so we can't use the
1113 * The log is limited to 32-bit sizes, so we use the appropriate modulus
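
The wrap handling mentioned just above reduces to modular arithmetic on
block numbers: any distance between head and tail must be computed modulo
the physical log size. A sketch of the wrap-safe distance, with log_bbnum
as the log size in basic blocks:

/*
 * Forward distance from tail_blk to head_blk in a circular log of
 * log_bbnum blocks; wraps through block 0 when head < tail.
 */
static unsigned int log_distance(unsigned int head_blk,
                                 unsigned int tail_blk,
                                 unsigned int log_bbnum)
{
        return (head_blk + log_bbnum - tail_blk) % log_bbnum;
}

For example, with a 1000-block log, log_distance(10, 990, 1000) is 20: the
head has wrapped through block 0 while the tail has not yet.
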
1152 * Look for unmount record. If we find it, then we know there was a in xlog_check_unmount_rec()
1154 * log, we convert to a log block before comparing to the head_blk. in xlog_check_unmount_rec()
1157 * below. We won't want to clear the unmount record if there is one, so in xlog_check_unmount_rec()
1158 * we pass the lsn of the unmount record rather than the block after it. in xlog_check_unmount_rec()
1200 * Reset log values according to the state of the log when we in xlog_set_state()
1201 * crashed. In the case where head_blk == 0, we bump curr_cycle in xlog_set_state()
1204 * point we have guaranteed that all partial log records have been in xlog_set_state()
1205 * accounted for. Therefore, we know that the last good log record in xlog_set_state()
1228 * lsn. The entire log record does not need to be valid. We only care
1231 * We could speed up the search by using the current head_blk buffer, but it is not
1274 * seriously wrong if we can't find it. in xlog_find_tail()
1303 * Verify the log head if the log is not clean (e.g., we have anything in xlog_find_tail()
1308 * Note that we can only run CRC verification when the log is dirty in xlog_find_tail()
1334 * Note that the unmount was clean. If the unmount was not clean, we in xlog_find_tail()
1336 * headers if we have a filesystem using non-persistent counters. in xlog_find_tail()
1344 * because we allow multiple outstanding log writes concurrently, in xlog_find_tail()
1347 * We use the lsn from before modifying it so that we'll never in xlog_find_tail()
1350 * Do this only if we are going to recover the filesystem in xlog_find_tail()
1353 * However on Linux, we can & do recover a read-only filesystem. in xlog_find_tail()
1354 * We only skip recovery if NORECOVERY is specified on mount, in xlog_find_tail()
1355 * in which case we would not be here. in xlog_find_tail()
1358 * We can't recover this device anyway, so it won't matter. in xlog_find_tail()
1376 * the X blocks. This will cut down on the number of reads we need to do.
1427 /* we have a partially zeroed log */ in xlog_find_zeroed()
1436 * we scan over the defined maximum blocks. At this point, the maximum in xlog_find_zeroed()
1447 * We search for any instances of cycle number 0 that occur before in xlog_find_zeroed()
1448 * our current estimate of the head. What we're trying to detect is in xlog_find_zeroed()
1459 * Potentially backup over partial log record write. We don't need in xlog_find_zeroed()
1460 * to search the end of the log because we know it is zero. in xlog_find_zeroed()
1524 * a smaller size. We need to be able to write at least a in xlog_write_log_records()
1525 * log sector, or we're out of luck. in xlog_write_log_records()
1536 /* We may need to do a read at the start to fill in part of in xlog_write_log_records()
1555 /* We may need to do a read at the end to fill in part of in xlog_write_log_records()
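
These two fragments describe the read-modify-write fix-up in
xlog_write_log_records(): when the range to write is not sector aligned,
the first and/or last sector must be read first so that bytes outside the
range survive the rewrite. A sketch of the alignment bookkeeping (struct
and helper names are illustrative; sectbb is again a power-of-2 sector
size in basic blocks):

struct write_plan {
        int balign;     /* sector-aligned start block */
        int ealign;     /* sector-aligned end block */
        int rmw_head;   /* nonzero: pre-read the first sector */
        int rmw_tail;   /* nonzero: pre-read the last sector */
};

static struct write_plan plan_log_write(int start_blk, int nbblks,
                                        int sectbb)
{
        struct write_plan p;
        int end_blk = start_blk + nbblks;

        p.balign = start_blk & ~(sectbb - 1);                   /* round down */
        p.ealign = (end_blk + sectbb - 1) & ~(sectbb - 1);      /* round up */
        p.rmw_head = (p.balign != start_blk);
        p.rmw_tail = (p.ealign != end_blk);
        return p;
}
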
1588 * in front of the log head. We do this so that we won't become confused
1589 * if we come up, write only a little bit more, and then crash again.
1590 * If we leave the partial log records out there, this situation could
1592 * have the current cycle number. We get rid of them by overwriting them
1597 * the log so that we will not write over the unmount record after a
1599 * any valid log records in it until a new one was written. If we crashed
1600 * during that time we would not be able to recover.
1620 * and the tail. We want to write over any blocks beyond the in xlog_clear_stale_blocks()
1621 * head that we may have written just before the crash, but in xlog_clear_stale_blocks()
1622 * we don't want to overwrite the tail of the log. in xlog_clear_stale_blocks()
1651 * If the head is right up against the tail, we can't clear in xlog_clear_stale_blocks()
1662 * we could have and the distance to the tail to clear out. in xlog_clear_stale_blocks()
1663 * We take the smaller so that we don't overwrite the tail and in xlog_clear_stale_blocks()
1664 * we don't waste all day writing from the head to the tail in xlog_clear_stale_blocks()
1671 * We can stomp all the blocks we need to without in xlog_clear_stale_blocks()
1684 * We need to wrap around the end of the physical log in in xlog_clear_stale_blocks()
1700 * This writes the remainder of the blocks we want to clear. in xlog_clear_stale_blocks()
1701 * It uses the current cycle number since we're now on the in xlog_clear_stale_blocks()
1702 * same cycle as the head so that we get: in xlog_clear_stale_blocks()
1704 * ^^^^^ blocks we're writing in xlog_clear_stale_blocks()
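
The sizing rule spelled out across these comments is a min(): clear at
most the furthest the head could have advanced before the crash, but
never past the tail, wrapping through the physical end of the log when
the tail is behind the head. A sketch, with max_distance standing for
that furthest-possible head advance:

/*
 * How many blocks past the head to clear: the wrap-aware distance to
 * the tail, capped at max_distance so no time is wasted clearing
 * blocks that could not have been written before the crash.
 */
static int blocks_to_clear(int head_blk, int tail_blk,
                           int log_bbnum, int max_distance)
{
        int tail_distance;

        if (head_blk < tail_blk)
                tail_distance = tail_blk - head_blk;
        else
                tail_distance = tail_blk + log_bbnum - head_blk;

        return tail_distance < max_distance ? tail_distance : max_distance;
}
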
1767 * Get an inode so that we can recover a log operation.
1770 * Check that the generation number matches the intent item like we do for
1848 * there's nothing to replay from them so we can simply cull them
1849 * from the transaction. However, we can't do that until after we've
1862 * in a "free" state before we remove the unlinked inode list pointer.
1867 * But there's a problem with that - we can't tell an inode allocation buffer
1868 * apart from a regular buffer, so we can't separate them. We can, however,
1869 * tell an inode unlink buffer from the others, and so we can separate them out
1878 * Note that we add objects to the tail of the lists so that first-to-last
1880 * list means when we traverse from the head we walk them in last-to-first
1882 * but for all other items there may be specific ordering that we need to
2147 * This works because all regions must be 32-bit aligned. Therefore, we
2148 * either have both fields or we have neither field. In the case where we have
2149 * neither field, the data part of the region is zero length. We only have
2151 * later. If we have at least 4 bytes, then we can determine how many regions
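
The alignment argument here means a partially copied region prefix never
splits a 4-byte field: either a whole field has arrived or none of it has.
A sketch of the resulting peek logic, with an illustrative two-field
prefix rather than the exact struct xfs_trans_header layout:

#include <stdint.h>
#include <string.h>

struct trans_hdr_prefix {
        uint32_t magic;         /* illustrative fields, 32-bit aligned */
        uint32_t num_items;
};

/*
 * Because regions are 32-bit aligned, len is a multiple of 4, so each
 * field is either fully present or fully absent. Report the item count
 * once enough bytes have accumulated; otherwise wait for more data
 * from a continuation record.
 */
static int peek_num_items(const char *data, size_t len, uint32_t *num_items)
{
        struct trans_hdr_prefix hdr;

        if (len < sizeof(hdr))
                return 0;
        memcpy(&hdr, data, sizeof(hdr));
        *num_items = hdr.num_items;
        return 1;
}
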
2168 /* we need to catch log corruptions here */ in xlog_recover_add_to_trans()
2184 * records. If we don't have the whole thing here, copy what we in xlog_recover_add_to_trans()
2290 * Callees must not free the trans structure. We'll decide if we need to in xlog_recovery_process_trans()
2305 /* success or fail, we are now done with this transaction. */ in xlog_recovery_process_trans()
2331 * Either way, return what we found during the lookup - an existing transaction
2352 * skip over non-start transaction headers - we could be in xlog_recover_ophdr_to_trans()
2393 /* Do we understand who wrote this op? */ in xlog_recover_process_ophdr()
2419 * The recovered buffer queue is drained only once we know that all in xlog_recover_process_ophdr()
2432 * In other words, we are allowed to submit a buffer from log recovery in xlog_recover_process_ophdr()
2433 * once per current LSN. Otherwise, we may incorrectly skip recovery in xlog_recover_process_ophdr()
2436 * We don't know up front whether buffers are updated multiple times per in xlog_recover_process_ophdr()
2455 * transaction structure is in a normal state. We have either seen the
2456 * start of the transaction or the last operation we added was not a partial
2457 * operation. If the last operation we added to the transaction was a
2458 * partial operation, we need to mark r_state with XLOG_WAS_CONT_TRANS.
2479 /* check the log format matches our own - else we can't recover */ in xlog_recover_process_data()
2522 * to regrant every roll so that we can make forward progress in xlog_finish_defer_ops()
2537 * Transfer to this new transaction all the dfops we captured in xlog_finish_defer_ops()
2569 * corresponding log done items should be in the AIL. What we do now is update
2572 * Since we process the log intent items in normal transactions, they will be
2574 * down the list processing each one. We'll use a flag in the intent item to
2575 * skip those that we've already processed and use the AIL iteration mechanism's
2578 * When we start, we know that the intents are the only things in the AIL. As we
2579 * process them, however, other items are added to the AIL. Hence we know we
2580 * have started recovery on all the pending intents when we find a non-intent in xlog_recover_process_intents()
2600 * We should never see a redo item with a LSN higher than in xlog_recover_process_intents()
2601 * the last transaction we found in the log at the start in xlog_recover_process_intents()
2612 * The recovery function can free the log item, so we must not in xlog_recover_process_intents()
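
The AIL walk these comments describe can be modeled as a list traversal
that stops at the first non-intent item and skips intents already flagged
as processed. Note how the next pointer is saved before the recovery
callback runs, since, as the last fragment says, that callback may free
the item. A simplified sketch with illustrative types (the kernel walks
the AIL with a cursor rather than a bare list):

#include <stdbool.h>

struct ail_item {
        struct ail_item *next;
        bool is_intent;
        bool recovered;         /* skip flag for already-processed intents */
};

static int process_intents(struct ail_item *head,
                           int (*recover)(struct ail_item *))
{
        struct ail_item *lip = head, *next;
        int error;

        while (lip) {
                next = lip->next;               /* recover() may free lip */
                if (!lip->is_intent)
                        break;                  /* past all pending intents */
                if (!lip->recovered) {
                        lip->recovered = true;
                        error = recover(lip);
                        if (error)
                                return error;
                }
                lip = next;
        }
        return 0;
}
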
2635 * A cancel occurs when the mount has failed and we're bailing out. Release all
2636 * pending log intent items that we haven't started recovery on so they don't
2746 * before we continue so that it won't race with in xlog_recover_iunlink_bucket()
2778 * This is called during recovery to process any inodes which we unlinked but
2780 * AGI blocks. What we do here is scan all the AGIs and fully truncate and free
2785 * If everything we touch in the agi processing loop is already in memory, this
2788 * until we either run out of inodes to process, run low on memory, or we run out
2794 * space. Hence we need to yield the CPU when there is other kernel work
2812 * We should probably mark the filesystem as corrupt after we've in xlog_recover_iunlink_ag()
2813 * recovered all the AGs we can... in xlog_recover_iunlink_ag()
2820 * the transaction to truncate and free each inode. Because we are not in xlog_recover_iunlink_ag()
2821 * racing with anyone else here for the AGI buffer, we don't even need in xlog_recover_iunlink_ag()
2823 * the buffer. We keep buffer reference though, so that it stays pinned in xlog_recover_iunlink_ag()
2824 * in memory while we need the buffer. in xlog_recover_iunlink_ag()
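
The scan these comments outline visits every AGI and drains each of its
unlinked-list buckets, yielding the CPU between inodes because everything
may already be cached. A bare-bones sketch of the iteration shape
(XFS_AGI_UNLINKED_BUCKETS is the on-disk bucket count; the callback is
illustrative):

#define XFS_AGI_UNLINKED_BUCKETS        64

/*
 * Visit every unlinked bucket of every AG; the callback stands in for
 * truncating and freeing each inode chained off that bucket. In the
 * kernel, a cond_resched() between inodes lets other work run.
 */
static void recover_unlinked(unsigned int agcount,
                             void (*drain_bucket)(unsigned int agno,
                                                  unsigned int bucket))
{
        unsigned int agno, bucket;

        for (agno = 0; agno < agcount; agno++)
                for (bucket = 0; bucket < XFS_AGI_UNLINKED_BUCKETS; bucket++)
                        drain_bucket(agno, bucket);
}
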
2901 * sets old_crc to 0 so we must consider this valid even on v5 supers. in xlog_recover_process()
2912 * We're in the normal recovery path. Issue a warning if and only if the in xlog_recover_process()
3034 * h_size (iclog size) is hardcoded to 32k. Now that we in xlog_do_recovery_pass()
3058 * This open codes xlog_logrec_hblks so that we can reuse the in xlog_do_recovery_pass()
3059 * fixed up h_size value calculated above. Without that we'd in xlog_do_recovery_pass()
3089 * we can't do a sequential recovery. in xlog_do_recovery_pass()
3121 * - we increased the buffer size originally in xlog_do_recovery_pass()
3126 * - we read the log end (LR header start) in xlog_do_recovery_pass()
3183 * - we increased the buffer size originally in xlog_do_recovery_pass()
3188 * - we read the log end (LR header start) in xlog_do_recovery_pass()
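
The repeated bullet fragments above come from xlog_do_recovery_pass()'s
handling of records that straddle the physical end of the log: the read is
split at the log size, with the remainder fetched from block 0. A sketch
of that split, assuming 512-byte basic blocks and an injected block-read
callback:

#include <stddef.h>

#define BBSIZE  512

/*
 * Read nbblks blocks starting at blk_no from a circular log of
 * log_bbnum blocks, splitting the read in two when it crosses the
 * physical end of the log.
 */
static int read_wrapped(char *buf, int blk_no, int nbblks, int log_bbnum,
                        int (*read_blocks)(char *dst, int blk, int n))
{
        int split = 0, error;

        if (blk_no + nbblks > log_bbnum)
                split = blk_no + nbblks - log_bbnum;

        error = read_blocks(buf, blk_no, nbblks - split);
        if (error || !split)
                return error;
        return read_blocks(buf + (size_t)(nbblks - split) * BBSIZE, 0, split);
}
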
3251 * If there has been an item recovery error then we in xlog_do_recovery_pass()
3253 * occur. We might have multiple checkpoints with the in xlog_do_recovery_pass()
3289 * Do the recovery of the log. We actually do this in two phases.
3298 * and freed at this level, since only here do we know when all of
3364 * We now update the tail_lsn since much of the recovery has completed in xlog_do_recover()
3366 * iunlinks, we can free up the entire log. This was set in in xlog_do_recover()
3369 * AIL; so we look at the AIL to determine how to set the tail_lsn. in xlog_do_recover()
3374 * Now that we've finished replaying all buffer and inode updates, in xlog_do_recover()
3436 * called), we just go ahead and recover. We do this all in xlog_recover()
3437 * under the vfs layer, so we can get away with it unless in xlog_recover()
3438 * the device itself is read-only, in which case we fail. in xlog_recover()
3445 * Version 5 superblock log feature mask validation. We know the in xlog_recover()
3447 * in what we need to recover. If there are unknown features in xlog_recover()
3488 * In the first part of recovery we replay inodes and buffers and build up the
3489 * list of intents which need to be processed. Here we process the intents and
3492 * from disk in between the two stages. This is necessary so that we can free
3495 * We run this whole process under GFP_NOFS allocation context. We do a
3496 * combination of non-transactional and transactional work, yet we really don't
3511 * Cancel all the unprocessed intent items now so that we don't in xlog_recover_finish()
3514 * (inode reclaim does this) before we get around to in xlog_recover_finish()
3535 * staging extents as we have not permitted any user modifications. in xlog_recover_finish()
3545 * If we get an error here, make sure the log is shut down in xlog_recover_finish()