What:		/sys/block/<disk>/alignment_offset
Date:		April 2009
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Storage devices may report a physical block size that is
		bigger than the logical block size (for instance a drive
		with 4KB physical sectors exposing 512-byte logical
		blocks to the operating system).  This parameter
		indicates how many bytes the beginning of the device is
		offset from the disk's natural alignment.


What:		/sys/block/<disk>/discard_alignment
Date:		May 2011
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Devices that support discard functionality may
		internally allocate space in units that are bigger than
		the exported logical block size. The discard_alignment
		parameter indicates how many bytes the beginning of the
		device is offset from the internal allocation unit's
		natural alignment.


What:		/sys/block/<disk>/diskseq
Date:		February 2021
Contact:	Matteo Croce <mcroce@microsoft.com>
Description:
		The /sys/block/<disk>/diskseq file reports the disk
		sequence number, which is a monotonically increasing
		number assigned to every drive.
		Some devices, like the loop device, refresh this number
		every time the backing file is changed.
		The value type is a 64-bit unsigned integer.
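		For example, user space can obtain the sequence number with a
		plain sysfs read. The snippet below is a minimal sketch; the
		disk name "sda" is illustrative only::

			/* Read /sys/block/sda/diskseq and print it as a 64-bit value. */
			#include <inttypes.h>
			#include <stdio.h>

			int main(void)
			{
				uint64_t seq;
				FILE *f = fopen("/sys/block/sda/diskseq", "r");

				if (!f || fscanf(f, "%" SCNu64, &seq) != 1)
					return 1;
				fclose(f);
				printf("disk sequence number: %" PRIu64 "\n", seq);
				return 0;
			}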


What:		/sys/block/<disk>/inflight
Date:		October 2009
Contact:	Jens Axboe <axboe@kernel.dk>, Nikanth Karthikesan <knikanth@suse.de>
Description:
		Reports the number of I/O requests currently in progress
		(pending / in flight) in a device driver. This can be less
		than the number of requests queued in the block device queue.
		The report contains 2 fields: one for read requests
		and one for write requests.
		The value type is unsigned int.
		Cf. Documentation/block/stat.rst, which contains a single value
		for requests in flight.
		This is related to /sys/block/<disk>/queue/nr_requests
		and, for SCSI devices, also to their queue_depth.
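		The two counters can be parsed with two integer conversions; a
		minimal sketch (the disk name "sda" is only an example)::

			/* Print the number of read and write requests currently in flight. */
			#include <stdio.h>

			int main(void)
			{
				unsigned int reads, writes;
				FILE *f = fopen("/sys/block/sda/inflight", "r");

				if (!f || fscanf(f, "%u %u", &reads, &writes) != 2)
					return 1;
				fclose(f);
				printf("in flight: %u reads, %u writes\n", reads, writes);
				return 0;
			}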


What:		/sys/block/<disk>/integrity/device_is_integrity_capable
Date:		July 2014
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Indicates whether a storage device is capable of storing
		integrity metadata. Set if the device is T10 PI-capable.


What:		/sys/block/<disk>/integrity/format
Date:		June 2008
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Metadata format for integrity capable block device.
		E.g. T10-DIF-TYPE1-CRC.


What:		/sys/block/<disk>/integrity/protection_interval_bytes
Date:		July 2015
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Describes the number of data bytes which are protected
		by one integrity tuple. Typically the device's logical
		block size.


What:		/sys/block/<disk>/integrity/read_verify
Date:		June 2008
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Indicates whether the block layer should verify the
		integrity of read requests serviced by devices that
		support sending integrity metadata.


What:		/sys/block/<disk>/integrity/tag_size
Date:		June 2008
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Number of bytes of integrity tag space available per
		512 bytes of data.


What:		/sys/block/<disk>/integrity/write_generate
Date:		June 2008
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Indicates whether the block layer should automatically
		generate checksums for write requests bound for
		devices that support receiving integrity metadata.


What:		/sys/block/<disk>/<partition>/alignment_offset
Date:		April 2009
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Storage devices may report a physical block size that is
		bigger than the logical block size (for instance a drive
		with 4KB physical sectors exposing 512-byte logical
		blocks to the operating system).  This parameter
		indicates how many bytes the beginning of the partition
		is offset from the disk's natural alignment.


What:		/sys/block/<disk>/<partition>/discard_alignment
Date:		May 2011
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		Devices that support discard functionality may
		internally allocate space in units that are bigger than
		the exported logical block size. The discard_alignment
		parameter indicates how many bytes the beginning of the
		partition is offset from the internal allocation unit's
		natural alignment.

What:		/sys/block/<disk>/<partition>/stat
Date:		February 2008
Contact:	Jerome Marchand <jmarchan@redhat.com>
Description:
		The /sys/block/<disk>/<partition>/stat file displays the
		I/O statistics of partition <partition>. The format is the
		same as the format of /sys/block/<disk>/stat.


What:		/sys/block/<disk>/queue/add_random
Date:		June 2010
Contact:	linux-block@vger.kernel.org
Description:
		[RW] This file allows one to turn off the disk entropy
		contribution. The default value of this file is '1' (on).

What:		/sys/block/<disk>/queue/chunk_sectors
Date:		September 2016
Contact:	Hannes Reinecke <hare@suse.com>
Description:
		[RO] chunk_sectors has a different meaning depending on the type
		of the disk. For a RAID device (dm-raid), chunk_sectors
		indicates the size in 512B sectors of the RAID volume stripe
		segment. For a zoned block device, either host-aware or
		host-managed, chunk_sectors indicates the size in 512B sectors
		of the zones of the device, with the possible exception of the
		last zone of the device, which may be smaller.

What:		/sys/block/<disk>/queue/crypto/
Date:		February 2022
Contact:	linux-block@vger.kernel.org
Description:
		The presence of this subdirectory of /sys/block/<disk>/queue/
		indicates that the device supports inline encryption.  This
		subdirectory contains files which describe the inline encryption
		capabilities of the device.  For more information about inline
		encryption, refer to Documentation/block/inline-encryption.rst.


What:		/sys/block/<disk>/queue/crypto/max_dun_bits
Date:		February 2022
Contact:	linux-block@vger.kernel.org
Description:
		[RO] This file shows the maximum length, in bits, of data unit
		numbers accepted by the device in inline encryption requests.


What:		/sys/block/<disk>/queue/crypto/modes/<mode>
Date:		February 2022
Contact:	linux-block@vger.kernel.org
Description:
		[RO] For each crypto mode (i.e., encryption/decryption
		algorithm) the device supports with inline encryption, a file
		will exist at this location.  It will contain a hexadecimal
		number that is a bitmask of the supported data unit sizes, in
		bytes, for that crypto mode.

		Currently, the crypto modes that may be supported are:

		   * AES-256-XTS
		   * AES-128-CBC-ESSIV
		   * Adiantum

		For example, if a device supports AES-256-XTS inline encryption
		with data unit sizes of 512 and 4096 bytes, the file
		/sys/block/<disk>/queue/crypto/modes/AES-256-XTS will exist and
		will contain "0x1200".
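		Each set bit in the mask corresponds to a supported power-of-two
		data unit size. A minimal sketch of that decoding, using the
		example value above, could look like this::

			/* Decode a data-unit-size bitmask such as 0x1200 (512 and 4096 bytes). */
			#include <stdio.h>

			int main(void)
			{
				unsigned long mask = 0x1200;	/* value read from modes/<mode> */
				unsigned int bit;

				for (bit = 0; bit < 8 * sizeof(mask); bit++)
					if (mask & (1UL << bit))
						printf("supported data unit size: %lu bytes\n",
						       1UL << bit);
				return 0;
			}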


What:		/sys/block/<disk>/queue/crypto/num_keyslots
Date:		February 2022
Contact:	linux-block@vger.kernel.org
Description:
		[RO] This file shows the number of keyslots the device has for
		use with inline encryption.


What:		/sys/block/<disk>/queue/dax
Date:		June 2016
Contact:	linux-block@vger.kernel.org
Description:
		[RO] This file indicates whether the device supports Direct
		Access (DAX), used by CPU-addressable storage to bypass the
		pagecache.  It shows '1' if true, '0' if not.


What:		/sys/block/<disk>/queue/discard_granularity
Date:		May 2011
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		[RO] Devices that support discard functionality may internally
		allocate space using units that are bigger than the logical
		block size. The discard_granularity parameter indicates the size
		of the internal allocation unit in bytes if reported by the
		device. Otherwise the discard_granularity will be set to match
		the device's physical block size. A discard_granularity of 0
		means that the device does not support discard functionality.

What:		/sys/block/<disk>/queue/discard_max_bytes
Date:		May 2011
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		[RW] While discard_max_hw_bytes is the hardware limit for the
		device, this setting is the software limit. Some devices exhibit
		large latencies when large discards are issued; setting this
		value lower will make Linux issue smaller discards and
		potentially help reduce latencies induced by large discard
		operations.

What:		/sys/block/<disk>/queue/discard_max_hw_bytes
Date:		July 2015
Contact:	linux-block@vger.kernel.org
Description:
		[RO] Devices that support discard functionality may have
		internal limits on the number of bytes that can be trimmed or
		unmapped in a single operation.  The `discard_max_hw_bytes`
		parameter is set by the device driver to the maximum number of
		bytes that can be discarded in a single operation.  Discard
		requests issued to the device must not exceed this limit.  A
		`discard_max_hw_bytes` value of 0 means that the device does not
		support discard functionality.


What:		/sys/block/<disk>/queue/discard_zeroes_data
Date:		May 2011
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		[RO] Will always return 0.  Don't rely on any specific behavior
		for discards, and don't read this file.


What:		/sys/block/<disk>/queue/dma_alignment
Date:		May 2022
Contact:	linux-block@vger.kernel.org
Description:
		Reports the alignment that user space addresses must have to be
		used for raw block device access with O_DIRECT and other driver
		specific passthrough mechanisms.
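		A user space sketch that checks a buffer against this constraint
		might look as follows. It assumes the file reports an alignment
		mask (the required alignment minus one) and uses "sda" purely as
		an example::

			/* Check an O_DIRECT buffer against the queue's dma_alignment value. */
			#include <stdint.h>
			#include <stdio.h>
			#include <stdlib.h>

			int main(void)
			{
				unsigned long mask;
				void *buf;
				FILE *f = fopen("/sys/block/sda/queue/dma_alignment", "r");

				if (!f || fscanf(f, "%lu", &mask) != 1)
					return 1;
				fclose(f);

				/* Assumption: the value is a mask, so align to mask + 1 bytes. */
				if (posix_memalign(&buf, mask + 1, 4096))
					return 1;
				printf("buffer %p is %saligned\n", buf,
				       ((uintptr_t)buf & mask) ? "not " : "");
				free(buf);
				return 0;
			}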


What:		/sys/block/<disk>/queue/fua
Date:		May 2018
Contact:	linux-block@vger.kernel.org
Description:
		[RO] Whether or not the block driver supports the FUA flag for
		write requests.  FUA stands for Force Unit Access. If the FUA
		flag is set, the write request must bypass the volatile cache of
		the storage device.

What:		/sys/block/<disk>/queue/hw_sector_size
Date:		January 2008
Contact:	linux-block@vger.kernel.org
Description:
		[RO] This is the hardware sector size of the device, in bytes.


What:		/sys/block/<disk>/queue/independent_access_ranges/
Date:		October 2021
Contact:	linux-block@vger.kernel.org
Description:
		[RO] The presence of this sub-directory of the
		/sys/block/xxx/queue/ directory indicates that the device is
		capable of executing requests targeting different sector ranges
		in parallel. For instance, single LUN multi-actuator hard-disks
		will have an independent_access_ranges directory if the device
		correctly advertises the sector ranges of its actuators.

		The independent_access_ranges directory contains one directory
		per access range, with each range described using the sector
		(RO) attribute file to indicate the first sector of the range
		and the nr_sectors (RO) attribute file to indicate the total
		number of sectors in the range starting from the first sector of
		the range.  For example, a dual-actuator hard-disk will have the
		following independent_access_ranges entries::

			$ tree /sys/block/<disk>/queue/independent_access_ranges/
			/sys/block/<disk>/queue/independent_access_ranges/
			|-- 0
			|   |-- nr_sectors
			|   `-- sector
			`-- 1
			    |-- nr_sectors
			    `-- sector

		The sector and nr_sectors attributes use a 512B sector unit,
		regardless of the actual block size of the device. Independent
		access ranges do not overlap and include all sectors within the
		device capacity. The access ranges are numbered in increasing
		order of the range start sector, that is, the sector attribute
		of range 0 always has the value 0.
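		The ranges can be enumerated from user space by walking this
		directory; a minimal sketch, with "sda" used only as an example
		disk name::

			/* List the start sector and length of each independent access range. */
			#include <dirent.h>
			#include <stdio.h>

			/* Read a single unsigned long long from a sysfs attribute file. */
			static int read_ull(const char *path, unsigned long long *val)
			{
				FILE *f = fopen(path, "r");
				int ok = f && fscanf(f, "%llu", val) == 1;

				if (f)
					fclose(f);
				return ok;
			}

			int main(void)
			{
				const char *base = "/sys/block/sda/queue/independent_access_ranges";
				struct dirent *ent;
				DIR *dir = opendir(base);

				if (!dir)
					return 1;
				while ((ent = readdir(dir))) {
					char path[512];
					unsigned long long sector, nr_sectors;

					if (ent->d_name[0] == '.')
						continue;
					snprintf(path, sizeof(path), "%s/%s/sector", base, ent->d_name);
					if (!read_ull(path, &sector))
						continue;
					snprintf(path, sizeof(path), "%s/%s/nr_sectors", base, ent->d_name);
					if (!read_ull(path, &nr_sectors))
						continue;
					printf("range %s: first sector %llu, %llu sectors\n",
					       ent->d_name, sector, nr_sectors);
				}
				closedir(dir);
				return 0;
			}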


What:		/sys/block/<disk>/queue/io_poll
Date:		November 2015
Contact:	linux-block@vger.kernel.org
Description:
		[RW] When read, this file shows whether polling is enabled (1)
		or disabled (0).  Writing '0' to this file will disable polling
		for this device.  Writing any non-zero value will enable this
		feature.


What:		/sys/block/<disk>/queue/io_poll_delay
Date:		November 2016
Contact:	linux-block@vger.kernel.org
Description:
		[RW] If polling is enabled, this controls what kind of polling
		will be performed. It defaults to -1, which is classic polling.
		In this mode, the CPU will repeatedly ask for completions
		without giving up any time.  If set to 0, a hybrid polling mode
		is used, where the kernel will attempt to make an educated guess
		at when the IO will complete. Based on this guess, the kernel
		will put the process issuing IO to sleep for an amount of time,
		before entering a classic poll loop. This mode might be a little
		slower than pure classic polling, but it will be more efficient.
		If set to a value larger than 0, the kernel will put the process
		issuing IO to sleep for this many microseconds before entering
		classic polling.

What:		/sys/block/<disk>/queue/io_timeout
Date:		November 2018
Contact:	Weiping Zhang <zhangweiping@didiglobal.com>
Description:
		[RW] io_timeout is the request timeout in milliseconds. If a
		request does not complete in this time then the block driver
		timeout handler is invoked. That timeout handler can decide to
		retry the request, to fail it or to start a device recovery
		strategy.


What:		/sys/block/<disk>/queue/iostats
Date:		January 2009
Contact:	linux-block@vger.kernel.org
Description:
		[RW] This file is used to control (on/off) the iostats
		accounting of the disk.


What:		/sys/block/<disk>/queue/logical_block_size
Date:		May 2009
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		[RO] This is the smallest unit the storage device can address.
		It is typically 512 bytes.


What:		/sys/block/<disk>/queue/max_active_zones
Date:		July 2020
Contact:	Niklas Cassel <niklas.cassel@wdc.com>
Description:
		[RO] For zoned block devices (zoned attribute indicating
		"host-managed" or "host-aware"), the sum of zones belonging to
		any of the zone states: EXPLICIT OPEN, IMPLICIT OPEN or CLOSED,
		is limited by this value. If this value is 0, there is no limit.

		If the host attempts to exceed this limit, the driver should
		report this error with BLK_STS_ZONE_ACTIVE_RESOURCE, which user
		space may see as the EOVERFLOW errno.


What:		/sys/block/<disk>/queue/max_discard_segments
Date:		February 2017
Contact:	linux-block@vger.kernel.org
Description:
		[RO] The maximum number of DMA scatter/gather entries in a
		discard request.


What:		/sys/block/<disk>/queue/max_hw_sectors_kb
Date:		September 2004
Contact:	linux-block@vger.kernel.org
Description:
		[RO] This is the maximum number of kilobytes supported in a
		single data transfer.


What:		/sys/block/<disk>/queue/max_integrity_segments
Date:		September 2010
Contact:	linux-block@vger.kernel.org
Description:
		[RO] Maximum number of elements in a DMA scatter/gather list
		with integrity data that will be submitted by the block layer
		core to the associated block driver.


What:		/sys/block/<disk>/queue/max_open_zones
Date:		July 2020
Contact:	Niklas Cassel <niklas.cassel@wdc.com>
Description:
		[RO] For zoned block devices (zoned attribute indicating
		"host-managed" or "host-aware"), the sum of zones belonging to
		any of the zone states: EXPLICIT OPEN or IMPLICIT OPEN, is
		limited by this value. If this value is 0, there is no limit.


What:		/sys/block/<disk>/queue/max_sectors_kb
Date:		September 2004
Contact:	linux-block@vger.kernel.org
Description:
		[RW] This is the maximum number of kilobytes that the block
		layer will allow for a filesystem request. Must be smaller than
		or equal to the maximum size allowed by the hardware. Write 0
		to use default kernel settings.


What:		/sys/block/<disk>/queue/max_segment_size
Date:		March 2010
Contact:	linux-block@vger.kernel.org
Description:
		[RO] Maximum size in bytes of a single element in a DMA
		scatter/gather list.


What:		/sys/block/<disk>/queue/max_segments
Date:		March 2010
Contact:	linux-block@vger.kernel.org
Description:
		[RO] Maximum number of elements in a DMA scatter/gather list
		that is submitted to the associated block driver.

What:		/sys/block/<disk>/queue/minimum_io_size
Date:		April 2009
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		[RO] Storage devices may report a granularity or preferred
		minimum I/O size which is the smallest request the device can
		perform without incurring a performance penalty.  For disk
		drives this is often the physical block size.  For RAID arrays
		it is often the stripe chunk size.  A properly aligned multiple
		of minimum_io_size is the preferred request size for workloads
		where a high number of I/O operations is desired.


What:		/sys/block/<disk>/queue/nomerges
Date:		January 2010
Contact:	linux-block@vger.kernel.org
Description:
		[RW] Standard I/O elevator operations include attempts to merge
		contiguous I/Os. For known random I/O loads these attempts will
		always fail and result in extra cycles being spent in the
		kernel. This allows one to turn off this behavior in one of two
		ways: When set to 1, complex merge checks are disabled, but the
		simple one-shot merges with the previous I/O request are
		enabled. When set to 2, all merge tries are disabled. The
		default value is 0 - which enables all types of merge tries.

What:		/sys/block/<disk>/queue/nr_requests
Date:		July 2003
Contact:	linux-block@vger.kernel.org
Description:
		[RW] This controls how many requests may be allocated in the
		block layer for read or write requests. Note that the total
		allocated number may be twice this amount, since it applies only
		to reads or writes (not the accumulated sum).

		To avoid priority inversion through request starvation, a
		request queue maintains a separate request pool per each cgroup
		when CONFIG_BLK_CGROUP is enabled, and this parameter applies to
		each such per-block-cgroup request pool.  IOW, if there are N
		block cgroups, each request queue may have up to N request
		pools, each independently regulated by nr_requests.


What:		/sys/block/<disk>/queue/nr_zones
Date:		November 2018
Contact:	Damien Le Moal <damien.lemoal@wdc.com>
Description:
		[RO] nr_zones indicates the total number of zones of a zoned
		block device ("host-aware" or "host-managed" zone model). For
		regular block devices, the value is always 0.


What:		/sys/block/<disk>/queue/optimal_io_size
Date:		April 2009
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		[RO] Storage devices may report an optimal I/O size, which is
		the device's preferred unit for sustained I/O.  This is rarely
		reported for disk drives.  For RAID arrays it is usually the
		stripe width or the internal track size.  A properly aligned
		multiple of optimal_io_size is the preferred request size for
		workloads where sustained throughput is desired.  If no optimal
		I/O size is reported this file contains 0.


What:		/sys/block/<disk>/queue/physical_block_size
Date:		May 2009
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		[RO] This is the smallest unit a physical storage device can
		write atomically.  It is usually the same as the logical block
		size but may be bigger.  One example is SATA drives with 4KB
		sectors that expose a 512-byte logical block size to the
		operating system.  For stacked block devices the
		physical_block_size variable contains the maximum
		physical_block_size of the component devices.


What:		/sys/block/<disk>/queue/read_ahead_kb
Date:		May 2004
Contact:	linux-block@vger.kernel.org
Description:
		[RW] Maximum number of kilobytes to read-ahead for filesystems
		on this block device.

What:		/sys/block/<disk>/queue/rotational
Date:		January 2009
Contact:	linux-block@vger.kernel.org
Description:
		[RW] This file is used to state whether the device is of the
		rotational type or the non-rotational type.


What:		/sys/block/<disk>/queue/rq_affinity
Date:		September 2008
Contact:	linux-block@vger.kernel.org
Description:
		[RW] If this option is '1', the block layer will migrate request
		completions to the cpu "group" that originally submitted the
		request. For some workloads this provides a significant
		reduction in CPU cycles due to caching effects.

		For storage configurations that need to maximize distribution of
		completion processing, setting this option to '2' forces the
		completion to run on the requesting cpu (bypassing the "group"
		aggregation logic).

What:		/sys/block/<disk>/queue/scheduler
Date:		October 2004
Contact:	linux-block@vger.kernel.org
Description:
		[RW] When read, this file will display the current and available
		IO schedulers for this block device. The currently active IO
		scheduler will be enclosed in [] brackets. Writing an IO
		scheduler name to this file will switch control of this block
		device to that new IO scheduler. Note that writing an IO
		scheduler name to this file will attempt to load that IO
		scheduler module, if it isn't already present in the system.
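		For example, the active scheduler can be switched from user space
		by writing the scheduler name to this file. A minimal sketch; the
		disk name "sda" and the scheduler name "mq-deadline" are
		illustrative only::

			/* Switch the active I/O scheduler by writing its name to the file. */
			#include <stdio.h>
			#include <string.h>

			int main(void)
			{
				const char *name = "mq-deadline";	/* example scheduler name */
				FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");

				if (!f)
					return 1;
				if (fwrite(name, strlen(name), 1, f) != 1) {
					fclose(f);
					return 1;
				}
				return fclose(f) ? 1 : 0;
			}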


What:		/sys/block/<disk>/queue/stable_writes
Date:		September 2020
Contact:	linux-block@vger.kernel.org
Description:
		[RW] This file will contain '1' if memory must not be modified
		while it is being used in a write request to this device.  When
		this is the case and the kernel is performing writeback of a
		page, the kernel will wait for writeback to complete before
		allowing the page to be modified again, rather than allowing
		immediate modification as is normally the case.  This
		restriction arises when the device accesses the memory multiple
		times where the same data must be seen every time -- for
		example, once to calculate a checksum and once to actually write
		the data.  If no such restriction exists, this file will contain
		'0'.  This file is writable for testing purposes.

What:		/sys/block/<disk>/queue/throttle_sample_time
Date:		March 2017
Contact:	linux-block@vger.kernel.org
Description:
		[RW] This is the time window over which blk-throttle samples
		data, in milliseconds.  blk-throttle makes decisions based on
		these samples. A lower time means cgroups have smoother
		throughput, but higher CPU overhead. This exists only when
		CONFIG_BLK_DEV_THROTTLING_LOW is enabled.

What:		/sys/block/<disk>/queue/virt_boundary_mask
Date:		April 2021
Contact:	linux-block@vger.kernel.org
Description:
		[RO] This file shows the I/O segment memory alignment mask for
		the block device.  I/O requests to this device will be split
		between segments wherever either the memory address of the end
		of the previous segment or the memory address of the beginning
		of the current segment is not aligned to virt_boundary_mask + 1
		bytes.


What:		/sys/block/<disk>/queue/wbt_lat_usec
Date:		November 2016
Contact:	linux-block@vger.kernel.org
Description:
		[RW] If the device is registered for writeback throttling, then
		this file shows the target minimum read latency. If this latency
		is exceeded in a given window of time (see wb_window_usec), then
		the writeback throttling will start scaling back writes. Writing
		a value of '0' to this file disables the feature. Writing a
		value of '-1' to this file resets the value to the default
		setting.

What:		/sys/block/<disk>/queue/write_cache
Date:		April 2016
Contact:	linux-block@vger.kernel.org
Description:
		[RW] When read, this file will display whether the device has
		write back caching enabled or not. It will return "write back"
		for the former case, and "write through" for the latter. Writing
		to this file can change the kernel's view of the device, but it
		doesn't alter the device state. This means that it might not be
		safe to toggle the setting from "write back" to "write through",
		since that will also eliminate cache flushes issued by the
		kernel.

What:		/sys/block/<disk>/queue/write_same_max_bytes
Date:		January 2012
Contact:	Martin K. Petersen <martin.petersen@oracle.com>
Description:
		[RO] Some devices support a write same operation in which a
		single data block can be written to a range of several
		contiguous blocks on storage. This can be used to wipe areas on
		disk or to initialize drives in a RAID configuration.
		write_same_max_bytes indicates how many bytes can be written in
		a single write same command. If write_same_max_bytes is 0, write
		same is not supported by the device.

What:		/sys/block/<disk>/queue/write_zeroes_max_bytes
Date:		November 2016
Contact:	Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Description:
		[RO] Some devices support a write zeroes operation in which a
		single request can be issued to zero out a range of contiguous
		blocks on storage without carrying any payload in the request.
		This can be used to optimize writing zeroes to the device.
		write_zeroes_max_bytes indicates how many bytes can be written
		in a single write zeroes command. If write_zeroes_max_bytes is
		0, write zeroes is not supported by the device.

What:		/sys/block/<disk>/queue/zone_append_max_bytes
Date:		May 2020
Contact:	linux-block@vger.kernel.org
Description:
		[RO] This is the maximum number of bytes that can be written to
		a sequential zone of a zoned block device using a zone append
		write operation (REQ_OP_ZONE_APPEND). This value is always 0 for
		regular block devices.

What:		/sys/block/<disk>/queue/zone_write_granularity
Date:		January 2021
Contact:	linux-block@vger.kernel.org
Description:
		[RO] This indicates the alignment constraint, in bytes, for
		write operations in sequential zones of zoned block devices
		(devices with a zoned attribute that reports "host-managed" or
		"host-aware"). This value is always 0 for regular block devices.

What:		/sys/block/<disk>/queue/zoned
Date:		September 2016
Contact:	Damien Le Moal <damien.lemoal@wdc.com>
Description:
		[RO] zoned indicates if the device is a zoned block device and
		the zone model of the device if it is indeed zoned.  The
		possible values indicated by zoned are "none" for regular block
		devices and "host-aware" or "host-managed" for zoned block
		devices. The characteristics of host-aware and host-managed
		zoned block devices are described in the ZBC (Zoned Block
		Commands) and ZAC (Zoned Device ATA Command Set) standards.
		These standards also define the "drive-managed" zone model.
		However, since drive-managed zoned block devices do not support
		zone commands, they will be treated as regular block devices and
		zoned will report "none".
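		A user space sketch that reads the zone model together with the
		related queue attributes described above ("sda" is only an
		example disk name)::

			/* Report the zone model, zone count and zone size of a block device. */
			#include <stdio.h>

			static unsigned long long read_attr(const char *path)
			{
				unsigned long long val = 0;
				FILE *f = fopen(path, "r");

				if (f) {
					if (fscanf(f, "%llu", &val) != 1)
						val = 0;
					fclose(f);
				}
				return val;
			}

			int main(void)
			{
				char model[32];
				FILE *f = fopen("/sys/block/sda/queue/zoned", "r");

				if (!f || fscanf(f, "%31s", model) != 1)
					return 1;
				fclose(f);
				printf("zone model: %s, %llu zones of %llu sectors\n", model,
				       read_attr("/sys/block/sda/queue/nr_zones"),
				       read_attr("/sys/block/sda/queue/chunk_sectors"));
				return 0;
			}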


What:		/sys/block/<disk>/stat
Date:		February 2008
Contact:	Jerome Marchand <jmarchan@redhat.com>
Description:
		The /sys/block/<disk>/stat file displays the I/O
		statistics of disk <disk>. It contains 17 fields:

		==  ==============================================
		 1  reads completed successfully
		 2  reads merged
		 3  sectors read
		 4  time spent reading (ms)
		 5  writes completed
		 6  writes merged
		 7  sectors written
		 8  time spent writing (ms)
		 9  I/Os currently in progress
		10  time spent doing I/Os (ms)
		11  weighted time spent doing I/Os (ms)
		12  discards completed
		13  discards merged
		14  sectors discarded
		15  time spent discarding (ms)
		16  flush requests completed
		17  time spent flushing (ms)
		==  ==============================================

		For more details refer to Documentation/admin-guide/iostats.rst
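		The fields are whitespace separated and can be parsed
		positionally. A minimal sketch follows; it assumes that older
		kernels may report fewer fields (the discard and flush fields
		are newer additions) and uses "sda" only as an example::

			/* Parse the I/O statistics of a disk from its sysfs stat file. */
			#include <stdio.h>

			int main(void)
			{
				unsigned long long v[17] = { 0 };
				int n;
				FILE *f = fopen("/sys/block/sda/stat", "r");

				if (!f)
					return 1;
				for (n = 0; n < 17; n++)
					if (fscanf(f, "%llu", &v[n]) != 1)
						break;
				fclose(f);

				printf("reads completed: %llu, writes completed: %llu\n", v[0], v[4]);
				printf("sectors read: %llu, sectors written: %llu\n", v[2], v[6]);
				if (n >= 15)
					printf("discards completed: %llu\n", v[11]);
				if (n >= 17)
					printf("flush requests completed: %llu\n", v[15]);
				return 0;
			}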