Lines Matching full:bad
3 * Bad block management
20 * The purpose of badblocks set/clear is to manage bad blocks ranges which are
23 * When the caller of badblocks_set() wants to set a range of bad blocks, the
27 * more complicated when the setting range covers multiple already set bad block
28 * ranges, with restrictions on the maximum length of each bad range and the bad
32 * for setting a large range of bad blocks, we can handle it by dividing the
34 * bad table full conditions. Each time only a smaller piece of the bad range
36 * possible overlapped or adjacent already set bad block ranges. Then the hard
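A minimal sketch of that loop shape, simplified from _badblocks_set() (handle_one_piece() is a hypothetical stand-in for the per-piece merge/insert work; locking and failure handling are omitted):

    /*
     * Consume the setting range in pieces no longer than BB_MAX_LEN,
     * carrying the table-index hint forward so each lookup stays cheap.
     */
    do {
            int len = handle_one_piece(bb, s, sectors, acknowledged, &hint);

            s += len;       /* advance past the handled piece */
            sectors -= len; /* shrink the remaining setting range */
    } while (sectors > 0);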
39 * When setting a range of bad blocks to the bad table, the simplified situations
40 * to be considered are, (The already set bad blocks ranges are named with
41 * prefix E, and the setting bad blocks range is named with prefix S)
43 * 1) A setting range is not overlapped or adjacent to any other already set bad
51 * For this situation if the bad blocks table is not full, just allocate a
52 * free slot from the bad blocks table to mark the setting range S. The
57 * 2) A setting range starts exactly at the start LBA of an already set bad blocks
67 * be merged into existing bad range E. The result is,
77 * An extra slot from the bad blocks table will be allocated for S, and head
84 * be merged into existing bad range E. The result is,
94 * bad blocks range E. The result is,
117 * 3) A setting range starts before the start LBA of an already set bad blocks
138 * 4) A setting range starts after the start LBA of an already set bad blocks
140 * 4.1) If the setting range S exactly matches the tail part of an already set bad
187 * 4.3) If the setting bad blocks range S is overlapped with an already set bad
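As a concrete instance of the 4.3 overlap (the numbers are illustrative, not from the source): suppose E = [40, 60) is already set and S = [50, 70) is being set with an ack value that permits overwriting. The setting range is first trimmed to the overlapped part, and the leftover [60, 70) is handled by the next iteration of the outer loop:

    /* trim S so it ends where E ends; here 60 - 50 = 10 sectors */
    bad->len = min_t(sector_t, bad->len, BB_END(p[prev]) - bad->start);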
208 * 5) A setting bad blocks range S is adjacent to one or more already set bad
210 * 5.1) Front merge: If the already set bad blocks range E is before setting
225 * range S right after already set range E into the bad blocks table. The
231 * 6.1) Multiple already set ranges may merge into fewer ones in a full bad table
239 * In the above example, when the bad blocks table is full, inserting the
241 * can be allocated from the bad blocks table. In this situation a proper
242 * setting method should be to go through all the setting bad blocks ranges and
244 * is an available slot in the bad blocks table, re-try to handle as
245 * many remaining setting bad blocks ranges as possible.
254 * to no space in the bad blocks table, but the following E1, E2 and E3 ranges
256 * 1 free slot in bad blocks table.
260 * Since the bad blocks table is not full anymore, re-try for the
262 * bad blocks table with the slot previously freed by the multiple-range merge.
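A hedged sketch of that re-try logic (try_merge_all() is a hypothetical name for the merge pass, not a function in this file):

    /*
     * No free slot is left: try to merge already set ranges first.
     * If a slot is freed, fall through and re-try inserting the
     * remaining pieces of the setting range.
     */
    if (badblocks_full(bb) && !try_merge_all(bb))
            return 1;       /* still full: the rest of S cannot be set */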
264 * In the following example, in the bad blocks table, E1 is an acked bad blocks
265 * range and E2 is an unacked bad blocks range; therefore they are not able
266 * to merge into a larger range. The setting bad blocks range S is acked,
275 * the bad blocks table should be (E3 is the remaining part of E2 which is not
281 * The above result is correct but not perfect. Ranges E1 and S in the bad
283 * occupy less bad blocks table space and make badblocks_check() faster.
290 * 6.3) Behind merge: If the already set bad blocks range E is behind the setting
313 * S in front of the already set range E in the bad blocks table. The result
320 * the bad block range setting conditions. Maybe there is some rare corner case
322 * to no space, or some ranges are not merged to save bad blocks table space.
326 * which starts before or at the current setting range. Since the setting bad blocks
332 * return the correct bad blocks table index immediately.
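The fast path that makes the hint pay off can be sketched as below; this is a simplification of prev_badblocks(), which really scans a small window from 'hint' before falling back to the bisect:

    /*
     * If the hinted entry starts at-or-before 's' and the next entry
     * (if any) starts after 's', the hint is already the answer.
     */
    if (hint >= 0 && hint < bb->count &&
        BB_OFFSET(p[hint]) <= s &&
        (hint + 1 == bb->count || BB_OFFSET(p[hint + 1]) > s))
            return hint;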
335 * Clearing a bad blocks range from the bad block table has a similar idea as
337 * when the clearing range hits the middle of a bad block range, the existing bad
339 * bad block table. The simplified situations to be considered are, (The already
340 * set bad blocks ranges in the bad block table are named with prefix E, and the
341 * clearing bad blocks range is named with prefix C)
343 * 1) A clearing range is not overlapped with any already set ranges in the bad block
351 * For the above situations, there is no bad block to be cleared and no failure
353 * 2) The clearing range hits the middle of an already set bad blocks range in
354 * the bad block table.
361 * In this situation if the bad block table is not full, the range E will be
366 * 3) The clearing range starts exactly at the same LBA as an already set bad block range
367 * from the bad block table.
377 * item deleted from bad block table. The result is,
388 * For this situation the whole bad blocks range E will be cleared and its
389 * corresponding item is deleted from the bad block table.
390 * 4) The clearing range exactly ends at the same LBA as an already set bad block
404 * 5) The clearing range is partially overlapped with an already set bad block
405 * range from the bad block table.
406 * 5.1) The already set bad block range is front overlapped with the clearing
425 * 5.2) The already set bad block range is behind overlapped with the clearing
446 * All bad blocks range clearing can be simplified into the above 5 situations
448 * while-loop. The idea is similar to bad blocks range setting but much
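As a worked example of situation 2 (illustrative numbers): with E = [40, 70) in the table, clearing C = [50, 60) must split E into two entries, which costs one extra table slot. In effect (after the entries behind 'prev' are shifted to make room):

    p[prev]     = BB_MAKE(40, 50 - 40, BB_ACK(p[prev]));   /* head [40, 50) */
    p[prev + 1] = BB_MAKE(60, 70 - 60, BB_ACK(p[prev]));   /* tail [60, 70) */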
453 * Find the range that starts at-or-before 's' in the bad table. The search
454 * starts from index 'hint' and stops at index 'hint_end' from the bad
476 * Find the range that starts at-or-before bad->start. If 'hint' is provided
477 * (hint >= 0) then search the bad table from 'hint' first. It is
478 * very likely that the wanted bad range can be found at the hint index,
481 static int prev_badblocks(struct badblocks *bb, struct badblocks_context *bad, in prev_badblocks() argument
484 sector_t s = bad->start; in prev_badblocks()
508 /* Do bisect search in bad table */ in prev_badblocks()
531 * Return 'true' if the range indicated by 'bad' can be backward merged
532 * with the bad range (from the bad table) indexed by 'behind'.
535 struct badblocks_context *bad, int behind) in can_merge_behind() argument
537 sector_t sectors = bad->len; in can_merge_behind()
538 sector_t s = bad->start; in can_merge_behind()
544 BB_ACK(p[behind]) == bad->ack) in can_merge_behind()
550 * Do backward merge for range indicated by 'bad' and the bad range
551 * (from the bad table) indexed by 'behind'. The return value is merged
552 * sectors from bad->len.
554 static int behind_merge(struct badblocks *bb, struct badblocks_context *bad, in behind_merge() argument
557 sector_t sectors = bad->len; in behind_merge()
558 sector_t s = bad->start; in behind_merge()
567 p[behind] = BB_MAKE(s, BB_LEN(p[behind]) + merged, bad->ack); in behind_merge()
576 * Return 'true' if the range indicated by 'bad' can be forward
577 * merged with the bad range (from the bad table) indexed by 'prev'.
580 struct badblocks_context *bad) in can_merge_front() argument
582 sector_t s = bad->start; in can_merge_front()
585 if (BB_ACK(p[prev]) == bad->ack && in can_merge_front()
593 * Do forward merge for range indicated by 'bad' and the bad range
594 * (from bad table) indexed by 'prev'. The return value is sectors
595 * merged from bad->len.
597 static int front_merge(struct badblocks *bb, int prev, struct badblocks_context *bad) in front_merge() argument
599 sector_t sectors = bad->len; in front_merge()
600 sector_t s = bad->start; in front_merge()
616 BB_LEN(p[prev]) + merged, bad->ack); in front_merge()
624 * handle: If a bad range (indexed by 'prev' from bad table) exactly
625 * starts at bad->start, and the bad range ahead of 'prev' (indexed by
626 * 'prev - 1' from bad table) exactly ends at where 'prev' starts, and
628 * these two bad ranges (from bad table) can be combined.
630 * Return 'true' if bad ranges indexed by 'prev' and 'prev - 1' from bad
634 struct badblocks_context *bad) in can_combine_front() argument
639 (BB_OFFSET(p[prev]) == bad->start) && in can_combine_front()
648 * Combine the bad ranges indexed by 'prev' and 'prev - 1' (from bad
649 * table) into one larger bad range, and the new range is indexed by
666 * Return 'true' if the range indicated by 'bad' is exactly forward
667 * overlapped with the bad range (from bad table) indexed by 'front'.
668 * Exactly forward overlap means the bad range (from bad table) indexed
669 * by 'front' does not cover the whole range indicated by 'bad'.
672 struct badblocks_context *bad) in overlap_front() argument
676 if (bad->start >= BB_OFFSET(p[front]) && in overlap_front()
677 bad->start < BB_END(p[front])) in overlap_front()
683 * Return 'true' if the range indicated by 'bad' is exactly backward
684 * overlapped with the bad range (from bad table) indexed by 'behind'.
686 static bool overlap_behind(struct badblocks *bb, struct badblocks_context *bad, in overlap_behind() argument
691 if (bad->start < BB_OFFSET(p[behind]) && in overlap_behind()
692 (bad->start + bad->len) > BB_OFFSET(p[behind])) in overlap_behind()
698 * Return 'true' if the range indicated by 'bad' can overwrite the bad
699 * range (from bad table) indexed by 'prev'.
701 * The range indicated by 'bad' can overwrite the bad range indexed by
703 * 1) The whole range indicated by 'bad' can cover partial or whole bad
704 * range (from bad table) indexed by 'prev'.
705 * 2) The ack value of 'bad' is larger than or equal to the ack value of bad
708 * If the overwriting doesn't cover the whole bad range (from bad table)
709 * indexed by 'prev', a new range might be split from the existing bad range,
710 * 1) The overwrite covers the head or tail part of the existing bad range, 1
711 * extra bad range will be split and added into the bad table.
712 * 2) The overwrite covers the middle of the existing bad range, 2 extra bad
714 * added into the bad table.
719 struct badblocks_context *bad, int *extra) in can_front_overwrite() argument
724 WARN_ON(!overlap_front(bb, prev, bad)); in can_front_overwrite()
726 if (BB_ACK(p[prev]) >= bad->ack) in can_front_overwrite()
729 if (BB_END(p[prev]) <= (bad->start + bad->len)) { in can_front_overwrite()
730 len = BB_END(p[prev]) - bad->start; in can_front_overwrite()
731 if (BB_OFFSET(p[prev]) == bad->start) in can_front_overwrite()
736 bad->len = len; in can_front_overwrite()
738 if (BB_OFFSET(p[prev]) == bad->start) in can_front_overwrite()
743 * one, an extra slot is needed from the bad table. in can_front_overwrite()
755 * Do the overwrite from the range indicated by 'bad' to the bad range
756 * (from bad table) indexed by 'prev'.
758 * extra bad range(s) might be split and added into the bad table. All
759 * the splitting cases in the bad table will be handled here.
762 struct badblocks_context *bad, int extra) in front_overwrite() argument
771 bad->ack); in front_overwrite()
774 if (BB_OFFSET(p[prev]) == bad->start) { in front_overwrite()
776 bad->len, bad->ack); in front_overwrite()
779 p[prev + 1] = BB_MAKE(bad->start + bad->len, in front_overwrite()
784 bad->start - BB_OFFSET(p[prev]), in front_overwrite()
793 p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack); in front_overwrite()
798 bad->start - BB_OFFSET(p[prev]), in front_overwrite()
807 p[prev + 1] = BB_MAKE(bad->start, bad->len, bad->ack); in front_overwrite()
816 return bad->len; in front_overwrite()
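For instance (illustrative numbers, not from the source): if E = [40, 80) is unacked and S = [50, 60) is acked, the middle-overwrite case above leaves three entries, i.e. two extra slots are consumed:

    p[prev]     = BB_MAKE(40, 10, 0);   /* [40, 50) unacked head        */
    p[prev + 1] = BB_MAKE(50, 10, 1);   /* [50, 60) acked, taken from S */
    p[prev + 2] = BB_MAKE(60, 20, 0);   /* [60, 80) unacked tail        */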
820 * Explicitly insert a range indicated by 'bad' into the bad table, where
823 static int insert_at(struct badblocks *bb, int at, struct badblocks_context *bad) in insert_at() argument
830 len = min_t(sector_t, bad->len, BB_MAX_LEN); in insert_at()
833 p[at] = BB_MAKE(bad->start, len, bad->ack); in insert_at()
858 /* Do the exact work to set a bad block range into the bad block table */
864 struct badblocks_context bad; in _badblocks_set() local
892 bad.ack = acknowledged; in _badblocks_set()
896 bad.start = s; in _badblocks_set()
897 bad.len = sectors; in _badblocks_set()
901 len = insert_at(bb, 0, &bad); in _badblocks_set()
907 prev = prev_badblocks(bb, &bad, hint); in _badblocks_set()
913 if (bad.len > (BB_OFFSET(p[0]) - bad.start)) in _badblocks_set()
914 bad.len = BB_OFFSET(p[0]) - bad.start; in _badblocks_set()
915 len = insert_at(bb, 0, &bad); in _badblocks_set()
923 if (overlap_behind(bb, &bad, 0)) { in _badblocks_set()
924 if (can_merge_behind(bb, &bad, 0)) { in _badblocks_set()
925 len = behind_merge(bb, &bad, 0); in _badblocks_set()
940 if (can_combine_front(bb, prev, &bad)) { in _badblocks_set()
948 if (overlap_front(bb, prev, &bad)) { in _badblocks_set()
949 if (can_merge_front(bb, prev, &bad)) { in _badblocks_set()
950 len = front_merge(bb, prev, &bad); in _badblocks_set()
955 if (!can_front_overwrite(bb, prev, &bad, &extra)) { in _badblocks_set()
962 len = front_overwrite(bb, prev, &bad, extra); in _badblocks_set()
966 if (can_combine_front(bb, prev, &bad)) { in _badblocks_set()
975 if (can_merge_front(bb, prev, &bad)) { in _badblocks_set()
976 len = front_merge(bb, prev, &bad); in _badblocks_set()
986 overlap_behind(bb, &bad, prev + 1) && in _badblocks_set()
1000 /* cannot merge and there is space in bad table */ in _badblocks_set()
1002 overlap_behind(bb, &bad, prev + 1)) in _badblocks_set()
1003 bad.len = min_t(sector_t, in _badblocks_set()
1004 bad.len, BB_OFFSET(p[prev + 1]) - bad.start); in _badblocks_set()
1006 len = insert_at(bb, prev + 1, &bad); in _badblocks_set()
1067 * Clear the bad block range from bad block table which is front overlapped
1069 * already set bad block range are cleared. If the whole bad block range is
1074 struct badblocks_context *bad, int *deleted) in front_clear() argument
1076 sector_t sectors = bad->len; in front_clear()
1077 sector_t s = bad->start; in front_clear()
1113 * bad block range from bad block table. In this condition the existing bad
1117 struct badblocks_context *bad) in front_splitting_clear() argument
1122 sector_t sectors = bad->len; in front_splitting_clear()
1123 sector_t s = bad->start; in front_splitting_clear()
1133 /* Do the exact work to clear bad block range from the bad block table */
1136 struct badblocks_context bad; in _badblocks_clear() local
1156 * However it is better to think a block is bad when it
1157 * isn't than to think a block is not bad when it is. in _badblocks_clear()
1167 bad.ack = true; in _badblocks_clear()
1171 bad.start = s; in _badblocks_clear()
1172 bad.len = sectors; in _badblocks_clear()
1181 prev = prev_badblocks(bb, &bad, hint); in _badblocks_clear()
1185 if (overlap_behind(bb, &bad, 0)) { in _badblocks_clear()
1192 * Both situations are to clear non-bad range, in _badblocks_clear()
1200 if ((prev + 1) >= bb->count && !overlap_front(bb, prev, &bad)) { in _badblocks_clear()
1206 /* Clear will split a bad record but the table is full */ in _badblocks_clear()
1207 if (badblocks_full(bb) && (BB_OFFSET(p[prev]) < bad.start) && in _badblocks_clear()
1208 (BB_END(p[prev]) > (bad.start + sectors))) { in _badblocks_clear()
1213 if (overlap_front(bb, prev, &bad)) { in _badblocks_clear()
1214 if ((BB_OFFSET(p[prev]) < bad.start) && in _badblocks_clear()
1215 (BB_END(p[prev]) > (bad.start + bad.len))) { in _badblocks_clear()
1218 len = front_splitting_clear(bb, prev, &bad); in _badblocks_clear()
1228 len = front_clear(bb, prev, &bad, &deleted); in _badblocks_clear()
1238 if ((prev + 1) < bb->count && overlap_behind(bb, &bad, prev + 1)) { in _badblocks_clear()
1239 len = BB_OFFSET(p[prev + 1]) - bad.start; in _badblocks_clear()
1241 /* Clearing a non-bad range should be treated as successful */ in _badblocks_clear()
1248 /* Clearing a non-bad range should be treated as successful */ in _badblocks_clear()
1273 /* Do the exact work to check bad blocks range from the bad block table */
1279 struct badblocks_context bad; in _badblocks_check() local
1304 bad.start = s; in _badblocks_check()
1305 bad.len = sectors; in _badblocks_check()
1312 prev = prev_badblocks(bb, &bad, hint); in _badblocks_check()
1316 ((prev + 1) >= bb->count) && !overlap_front(bb, prev, &bad)) { in _badblocks_check()
1322 if ((prev >= 0) && overlap_front(bb, prev, &bad)) { in _badblocks_check()
1342 if ((prev + 1) < bb->count && overlap_behind(bb, &bad, prev + 1)) { in _badblocks_check()
1343 len = BB_OFFSET(p[prev + 1]) - bad.start; in _badblocks_check()
1374 * badblocks_check() - check a given range for bad sectors
1381 * We can record which blocks on each device are 'bad' and so just
1383 * Entries in the bad-block table are 64 bits wide. This comprises:
1384 * Length of bad-range, in sectors: 0-511 for lengths 1-512
1385 * Start of bad-range, sector offset, 54 bits (allows 8 exbibytes)
1390 * Locking of the bad-block table uses a seqlock so badblocks_check
1392 * We will sometimes want to check for bad blocks in a bi_end_io function,
1395 * When looking for a bad block we specify a range and want to
1396 * know if any block in the range is bad. So we binary-search
1402 * 0: there are no known bad blocks in the range
1403 * 1: there are known bad blocks which are all acknowledged
1404 * -1: there are bad blocks which have not yet been acknowledged in metadata.
1405 * plus the start/length of the first bad section we overlap.
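The packed 64-bit entry format described above corresponds to the BB_* macros from include/linux/badblocks.h, which the code in this file uses throughout:

    #define BB_LEN_MASK     (0x00000000000001FFULL) /* low 9 bits: length - 1 */
    #define BB_OFFSET_MASK  (0x7FFFFFFFFFFFFE00ULL) /* 54 bits: start sector  */
    #define BB_ACK_MASK     (0x8000000000000000ULL) /* top bit: acknowledged  */
    #define BB_MAX_LEN      512
    #define BB_OFFSET(x)    (((x) & BB_OFFSET_MASK) >> 9)
    #define BB_LEN(x)       (((x) & BB_LEN_MASK) + 1)
    #define BB_ACK(x)       (!!((x) & BB_ACK_MASK))
    #define BB_END(x)       (BB_OFFSET(x) + BB_LEN(x))
    #define BB_MAKE(a, l, ack) (((a) << 9) | ((l) - 1) | ((u64)(!!(ack)) << 63))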
1415 * badblocks_set() - Add a range of bad blocks to the table.
1417 * @s: first sector to mark as bad
1418 * @sectors: number of sectors to mark as bad
1419 * @acknowledged: whether to mark the bad sectors as acknowledged
1437 * badblocks_clear() - Remove a range of bad blocks from the table.
1439 * @s: first sector to clear
1440 * @sectors: number of sectors to clear
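A hedged usage sketch tying the set/check/clear entry points together, given an initialized struct badblocks *bb (return-value handling follows the conventions quoted above; in this version badblocks_set() and badblocks_clear() return non-zero on failure):

    sector_t first_bad;
    int bad_sectors;

    /* mark 8 sectors starting at sector 1024 as bad and acknowledged */
    if (badblocks_set(bb, 1024, 8, 1))
            pr_warn("badblocks: table full, range not recorded\n");

    /* 1 means all overlapping bad blocks are acknowledged */
    if (badblocks_check(bb, 1024, 8, &first_bad, &bad_sectors) == 1)
            pr_info("acked bad range starts at sector %llu\n",
                    (unsigned long long)first_bad);

    /* the sectors were repaired: drop them from the table */
    badblocks_clear(bb, 1024, 8);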
1457 * ack_all_badblocks() - Acknowledge all bad blocks in a list.
1489 * badblocks_show() - sysfs access to bad-blocks list
1538 * badblocks_store() - sysfs access to bad-blocks list