RAID 4/5/6 cache
================
Raid 4/5/6 could include an extra disk for data cache besides the normal RAID
disks. The role of the RAID disks isn't changed with the cache disk. The cache
disk caches data to the RAID disks. The cache can be in write-through mode
(supported since 4.4) or write-back mode (supported since 4.10). mdadm
(supported since 3.4) has a new option '--write-journal' to create an array
with a cache. Please refer to the mdadm manual for details. By default (when
the RAID array starts), the cache is in write-through mode. A user can switch
it to write-back mode by::
    echo "write-back" > /sys/block/md0/md/journal_mode
And switch it back to write-through mode by::
    echo "write-through" > /sys/block/md0/md/journal_mode
In both modes, all writes to the array hit the cache disk first. This means
the cache disk must be fast and able to sustain the array's full write load.
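As a concrete illustration, creating a RAID5 array with a journal (cache)
device and checking the active cache mode might look like the following
sketch; the device names are placeholders, substitute your own:

```shell
# Create a 3-disk RAID5 array with an NVMe device as the write journal.
# Device names here are examples only.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sda /dev/sdb /dev/sdc \
      --write-journal /dev/nvme0n1

# Inspect the current cache mode via sysfs; the active mode is
# typically shown in brackets.
cat /sys/block/md0/md/journal_mode
```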
write-through mode
==================
This mode mainly addresses the 'write hole' problem: if the system crashes in
the middle of a stripe write, data and parity on the RAID disks can be left
inconsistent, and a later disk failure may then return corrupted data.

The write-through cache will cache all data on the cache disk first. After the
data is safe on the cache disk, the data will be flushed onto the RAID disks.
This two-step write guarantees MD can recover correct data after an unclean
shutdown even if the array is degraded. Thus the cache can close the 'write
hole'.
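The two-step write can be sketched with ordinary files; this is a toy model of
the ordering guarantee, not the real MD code path:

```shell
# Toy model of the two-step write: data becomes durable in a journal
# file before it is applied to the "array" file, so a crash between
# the two steps can be recovered by replaying the journal.
workdir=$(mktemp -d)
printf 'D1' > "$workdir/journal"         # step 1: data lands in the log
sync                                     # make the log durable
# -- a crash here loses nothing: the log already holds the data --
cp "$workdir/journal" "$workdir/array"   # step 2: data reaches the array
cat "$workdir/array"                     # prints: D1
```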
In write-through mode, MD reports IO completion to the upper layer (usually a
filesystem) only after the data is safe on the RAID disks, so a cache disk
failure doesn't cause data loss. Of course, a cache disk failure does mean the
array loses its write-hole protection until the journal device is replaced.

In write-through mode, the cache disk isn't required to be big. Several
hundred megabytes are enough.
write-back mode
===============
write-back mode fixes the 'write hole' issue too, since all write data is
cached on the cache disk. But the main goal of the write-back cache is to
speed up writes. If a write covers all RAID disks of a stripe, we call it a
full-stripe write. For non-full-stripe writes, MD must read old data before
the new parity can be calculated, and these synchronous reads hurt write
throughput. The write-back cache aggregates data and flushes it to the RAID
disks only once it accumulates into full-stripe writes, which avoids this read
overhead. This is very helpful for some workloads; a typical example is
sequential writes followed by fsync.
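The read overhead can be seen in a toy XOR-parity calculation. This is only a
sketch: real MD operates on disk chunks, not small integers, but the parity
algebra is the same:

```shell
# Parity is the XOR of all data chunks in a stripe.
d0=5; d1=9; d2=12
parity=$(( d0 ^ d1 ^ d2 ))

# Full-stripe write: every chunk is new, so parity is computed from
# the new data alone; no reads from the RAID disks are needed.
n0=7; n1=2; n2=3
full_parity=$(( n0 ^ n1 ^ n2 ))

# Partial write of d0 only: MD must first READ old d0 and old parity,
# then compute new parity = old_parity XOR old_d0 XOR new_d0.
rmw_parity=$(( parity ^ d0 ^ n0 ))

echo "$full_parity $rmw_parity"          # prints: 6 2
```

Note that `rmw_parity` equals the XOR of the new d0 with the untouched d1 and
d2, exactly what a full recomputation would have produced.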
In write-back mode, MD reports IO completion to the upper layer (usually a
filesystem) right after the data hits the cache disk. The data is flushed to
the RAID disks later, after specific conditions are met. So a cache disk
failure in this mode can cause data loss.
In write-back mode, MD also caches data in memory. The memory cache holds the
same data stored on the cache disk, so a power loss doesn't cause data loss.
The memory cache size has a performance impact on the array, and a larger size
is recommended. A user can configure the size by::
    echo "2048" > /sys/block/md0/md/stripe_cache_size
A cache disk that is too small makes write aggregation less efficient in this
mode, depending on the workload. It's recommended to use a cache disk of at
least several gigabytes in write-back mode.
The implementation
==================

The write-through and write-back cache use the same disk format. The cache
disk is organized as a simple write log of metadata/data records; after an
unclean shutdown, MD scans the log for valid records and uses them to recover
the array.
The first step of recovery is to replay the log: with the log, MD redoes the
order in which data was written to the cache disk and the RAID disks.
Specifically, in write-through mode, MD calculates parity for the IO data,
writes both the IO data and parity to the log, writes the data and parity to
the RAID disks after they are settled down in the log, and finally completes
the IO. Read IO reads data from the RAID disks as usual.
In write-back mode, MD writes IO data to the log and reports IO completion.
The data is also fully cached in memory at that time, so read IO can be served
directly from the memory cache. Once the flush conditions are met, MD flushes
the data to the RAID disks and releases the memory cache. The flush conditions
include: a stripe becomes a full-stripe write, free cache disk space is low,
or free in-kernel memory cache space is low.
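The replay step can be sketched as a tiny shell loop. This is a toy model:
real log records additionally carry metadata such as checksums, but the core
idea, that replaying records in write order reproduces the latest state of
each block, is the same:

```shell
# Toy replay: the log is an ordered list of block:value records;
# applying them in order reproduces the final state, and a block
# written twice ends up with its last value.
log="0:A 1:B 0:C"          # block 0 is overwritten; last write wins
b0=""; b1=""
for rec in $log; do
    case "${rec%%:*}" in
        0) b0=${rec##*:} ;;
        1) b1=${rec##*:} ;;
    esac
done
echo "$b0$b1"              # prints: CB
```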