Lines Matching +full:re +full:- +full:attached

11 This is the git repository of bcache-tools:
12 https://git.kernel.org/pub/scm/linux/kernel/git/colyli/bcache-tools.git/
17 It's designed around the performance characteristics of SSDs - it only allocates
25 great lengths to protect your data - it reliably handles unclean shutdown. (It
27 writes as completed until they're on stable storage).
29 Writeback caching can use most of the cache for buffering writes - writing
36 average is above the cutoff it will skip all IO from that task - instead of
47 You'll need bcache util from the bcache-tools repository. Both the cache device
50 bcache make -B /dev/sdb
51 bcache make -C /dev/sdc
53 `bcache make` can format multiple devices at the same time - if
57 bcache make -B /dev/sda /dev/sdb -C /dev/sdc
59 If your bcache-tools is not updated to the latest version and lacks the
60 unified `bcache` utility, you may use the legacy `make-bcache` utility to
61 format bcache devices with the same -B and -C parameters.
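As a sketch of the legacy equivalents (device names below are illustrative placeholders, not real disks; the block only prints the commands rather than running them, since formatting is destructive and needs root):

```shell
# Sketch: legacy make-bcache invocations matching the unified
# `bcache make` commands above. Device names are placeholders;
# commands are printed, not executed.
BACKING=/dev/sdb
CACHE=/dev/sdc
echo "make-bcache -B $BACKING"              # format the backing device
echo "make-bcache -C $CACHE"                # format the cache device
echo "make-bcache -B $BACKING -C $CACHE"    # format both in one call
```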
63 bcache-tools now ships udev rules, and bcache devices are known to the kernel
83 /dev/bcache/by-uuid/<uuid>
84 /dev/bcache/by-label/<label>
92 You can also control them through /sys/fs/bcache/<cset-uuid>/.
99 ---------
102 must be attached to your cache set to enable caching. Attaching a backing
106 echo <CSET-UUID> > /sys/block/bcache0/bcache/attach
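The attach step can be wrapped in a small helper, sketched below. The sysfs path follows the pattern above; since writing the real /sys/block/<dev>/bcache/attach node requires root and a registered bcache device, the demo writes to a temporary file standing in for it:

```shell
# Sketch: attach a backing device to a cache set by writing the
# cache set UUID to the device's sysfs attach node.
attach_backing() {
    cset_uuid=$1
    attach_node=$2    # normally /sys/block/bcache0/bcache/attach
    echo "$cset_uuid" > "$attach_node"
}

# Demo against a temp file instead of the real sysfs node:
demo_node=$(mktemp)
attach_backing "0226553a-37cf-41d5-b3ce-8b1e944543a8" "$demo_node"
result=$(cat "$demo_node")
echo "$result"
rm -f "$demo_node"
```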
110 /dev/bcache<N> device won't be created until the cache shows up - particularly
113 If you're booting up and your cache device is gone and never coming back, you
119 /sys/block/bcache0, because bcache0 doesn't exist yet. If you're using a
124 cache, don't expect the filesystem to be recoverable - you will have massive
128 --------------
135 - For reads from the cache, if they error we just retry the read from the
138 - For writethrough writes, if the write to the cache errors we just switch to
142 - For writeback writes, we currently pass that error back up to the
143 filesystem/userspace. This could be improved - we could retry it as a write
146 - When we detach, we first try to flush any dirty data (if we were running in
152 --------------
173 host:/sys/block/md5/bcache# echo 0226553a-37cf-41d5-b3ce-8b1e944543a8 > attach
175 [ 1933.478179] bcache: __cached_dev_store() Can't attach 0226553a-37cf-41d5-b3ce-8b1e944543a8
179 or disappeared and came back, and needs to be (re-)registered::
187 Please report it to the bcache development list: linux-bcache@vger.kernel.org
197 of the backing device created with --offset 8K, or any value defined by
198 --data-offset when you originally formatted bcache with `bcache make`.
202 losetup -o 8192 /dev/loop0 /dev/your_bcache_backing_dev
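Where the 8192 above comes from: --data-offset is given in 512-byte sectors, and the default data offset is 8 KiB, i.e. 16 sectors. If you formatted with a custom --data-offset, recompute the byte offset for losetup accordingly:

```shell
# Compute the losetup byte offset from bcache's --data-offset,
# which is expressed in 512-byte sectors (default: 16 sectors = 8 KiB).
data_offset_sectors=16                      # value passed to --data-offset
offset_bytes=$((data_offset_sectors * 512))
echo "$offset_bytes"
# losetup -o "$offset_bytes" /dev/loop0 /dev/your_bcache_backing_dev  # needs root
```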
214 host:~# wipefs -a /dev/sdh2
220 host:~# bcache make -C /dev/sdh2
221 UUID: 7be7e175-8f4c-4f99-94b2-9c904d227045
222 Set UUID: 5bc072a8-ab17-446d-9744-e247949913c1
239 host:/sys/block/md5/bcache# echo 5bc072a8-ab17-446d-9744-e247949913c1 > attach
240 … bcache: bch_cached_dev_attach() Caching md5 as bcache0 on set 5bc072a8-ab17-446d-9744-e247949913c1
248 host:~# wipefs -a /dev/nvme0n1p4
254 host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# ls -l cache0
255 …lrwxrwxrwx 1 root root 0 Feb 25 18:33 cache0 -> ../../../devices/pci0000:00/0000:00:1d.0/0000:70:0…
256 host:/sys/fs/bcache/b7ba27a1-2398-4649-8ae3-0959f57ba128# echo 1 > stop
257 …kernel: [ 917.041908] bcache: cache_set_free() Cache set b7ba27a1-2398-4649-8ae3-0959f57ba128 unr…
261 host:~# wipefs -a /dev/nvme0n1p4
265 G) dm-crypt and bcache
275 fdisk run and re-register a changed partition table, which won't work
294 bcache: cache_set_free() Cache set 5bc072a8-ab17-446d-9744-e247949913c1 unregistered
302 host:/sys/fs/bcache# ls -l */{cache?,bdev?}
303 …xrwx 1 root root 0 Mar 5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/bdev1 -> ../../../devices/vir…
304 …xrwx 1 root root 0 Mar 5 09:39 0226553a-37cf-41d5-b3ce-8b1e944543a8/cache0 -> ../../../devices/vi…
305 …lrwxrwxrwx 1 root root 0 Mar 5 09:39 5bc072a8-ab17-446d-9744-e247949913c1/cache0 -> ../../../devi…
310 host:/sys/fs/bcache/5bc072a8-ab17-446d-9744-e247949913c1# echo 1 > stop
318 ---------------------------
321 be reasonable for typical desktop and server workloads, but they're not what you
324 - Backing device alignment
328 width using `bcache make --data-offset`. If you intend to expand your
338 volume to the following data-spindle counts without re-aligning::
342 - Bad write performance
351 - Bad performance, or traffic not going to the SSD that you'd expect
353 By default, bcache doesn't cache everything. It tries to skip sequential IO -
359 writing an 8 gigabyte test file - so you want to disable that::
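Sketch of disabling the cutoff: the knob lives in the backing device's sysfs directory, and writing 0 turns off sequential bypass so benchmark IO is cached. The demo below writes to a temp file standing in for /sys/block/bcache0/bcache/sequential_cutoff, since the real node needs root:

```shell
# Disable the sequential cutoff (0 = never bypass the cache).
cutoff_node=$(mktemp)    # stand-in for /sys/block/bcache0/bcache/sequential_cutoff
echo 0 > "$cutoff_node"
new_cutoff=$(cat "$cutoff_node")
echo "$new_cutoff"
rm -f "$cutoff_node"
```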
367 - Traffic's still going to the spindle/still getting cache misses
369 In the real world, SSDs don't always keep up with disks - particularly with
385 - Still getting cache misses, of the same data
395 nodes are huge and index large regions of the device). But when you're
396 benchmarking, if you're trying to warm the cache by reading a bunch of data
397 and there's no other traffic - that can be a problem.
403 Sysfs - backing device
404 ----------------------
407 (if attached) /sys/fs/bcache/<cset-uuid>/bdev*
454 no cache: Has never been attached to a cache set.
479 Rate in sectors per second - if writeback_percent is nonzero, background
488 Sysfs - backing device stats
492 versions that decay over the past day, hour and 5 minutes; they're also
511 Sysfs - cache set
514 Available at /sys/fs/bcache/<cset-uuid>
520 Symlink to each of the attached backing devices.
564 Write to this file to shut down the cache set - waits until all attached
574 Sysfs - cache set internal
598 was reused and invalidated - i.e. where the pointer was stale after the read
604 Sysfs - Cache device
610 Minimum granularity of writes - should match hardware sector size.