.\"
.\" CDDL HEADER START
.\"
.\" The contents of this file are subject to the terms of the
.\" Common Development and Distribution License (the "License").
.\" You may not use this file except in compliance with the License.
.\"
.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
.\" or http://www.opensolaris.org/os/licensing.
.\" See the License for the specific language governing permissions
.\" and limitations under the License.
.\"
.\" When distributing Covered Code, include this CDDL HEADER in each
.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
.\" If applicable, add the following below this CDDL HEADER, with the
.\" fields enclosed by brackets "[]" replaced with your own identifying
.\" information: Portions Copyright [yyyy] [name of copyright owner]
.\"
.\" CDDL HEADER END
.\"
.\"
.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
.\" Copyright 2017 Nexenta Systems, Inc.
.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
.\"
.Dd August 9, 2019
.Dt ZPOOL-IOSTAT 8
.Os
.Sh NAME
.Nm zpool Ns Pf - Cm iostat
.Nd Display logical I/O statistics for the given ZFS storage pools/vdevs
.Sh SYNOPSIS
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Sh DESCRIPTION
.Bl -tag -width Ds
.It Xo
.Nm
.Cm iostat
.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw
.Op Fl T Sy u Ns | Ns Sy d
.Op Fl ghHLnpPvy
.Oo Oo Ar pool Ns ... Oc Ns | Ns Oo Ar pool vdev Ns ... Oc Ns | Ns Oo Ar vdev Ns ... Oc Oc
.Op Ar interval Op Ar count
.Xc
Displays logical I/O statistics for the given pools/vdevs.
Physical I/Os may be observed via
.Xr iostat 1 .
If writes are located nearby, they may be merged into a single larger
operation.
Additional I/O may be generated depending on the level of vdev redundancy.
To filter output, you may pass in a list of pools, a pool and list of vdevs
in that pool, or a list of any vdevs from any pool.
If no items are specified, statistics for every pool in the system are shown.
When given an
.Ar interval ,
the statistics are printed every
.Ar interval
seconds until ^C is pressed.
If the
.Fl n
flag is specified, the headers are displayed only once, otherwise they are
displayed periodically.
If
.Ar count
is specified, the command exits after
.Ar count
reports are printed.
The first report printed is always the statistics since boot, regardless of
whether
.Ar interval
and
.Ar count
are passed.
However, this behavior can be suppressed with the
.Fl y
flag.
Also note that the units of
.Sy K ,
.Sy M ,
.Sy G ...
that are printed in the report are in base 1024.
To get the raw values, use the
.Fl p
flag.
.Bl -tag -width Ds
.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns ...
Run a script (or scripts) on each vdev and include the output as a new column
in the
.Nm zpool Cm iostat
output.
Users can run any script found in their
.Pa ~/.zpool.d
directory or from the system
.Pa /etc/zfs/zpool.d
directory.
Script names containing the slash (/) character are not allowed.
The default search path can be overridden by setting the
.Sy ZPOOL_SCRIPTS_PATH
environment variable.
A privileged user can run
.Fl c
if they have the
.Sy ZPOOL_SCRIPTS_AS_ROOT
environment variable set.
If a script requires the use of a privileged command, like
.Xr smartctl 8 ,
then it's recommended you allow the user access to it in
.Pa /etc/sudoers
or add the user to the
.Pa /etc/sudoers.d/zfs
file.
.Pp
If
.Fl c
is passed without a script name, it prints a list of all scripts.
.Fl c
also sets verbose mode
.No \&( Ns Fl v Ns No \&).
.Pp
Script output should be in the form of "name=value".
The column name is set to "name" and the value is set to "value".
Multiple lines can be used to output multiple columns.
The first line of output not in the "name=value" format is displayed without
a column title, and no more output after that is displayed.
This can be useful for printing error messages.
Blank or NULL values are printed as a '-' to make output awk-able.
.Pp
The following environment variables are set before running each script:
.Bl -tag -width "VDEV_ENC_SYSFS_PATH"
.It Sy VDEV_PATH
Full path to the vdev.
.It Sy VDEV_UPATH
Underlying path to the vdev (/dev/sd*).
For use with device mapper, multipath, or partitioned vdevs.
.It Sy VDEV_ENC_SYSFS_PATH
The sysfs path to the enclosure for the vdev (if any).
.El
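As a sketch, a minimal script (hypothetical; the script name and its output
column are illustrative, not a shipped script) placed in
.Pa ~/.zpool.d
could add a column showing each vdev's underlying device node:

```shell
#!/bin/sh
# Hypothetical ~/.zpool.d/devnode script for `zpool iostat -c devnode`.
# VDEV_UPATH is exported by zpool iostat before the script runs; fall
# back to "-" so a blank value stays awk-able, as described above.
echo "devnode=${VDEV_UPATH:--}"
```

Each "name=value" line the script prints becomes one column in the
.Nm zpool Cm iostat
output.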
.It Fl T Sy u Ns | Ns Sy d
Display a time stamp.
Specify
.Sy u
for a printed representation of the internal representation of time.
See
.Xr time 2 .
Specify
.Sy d
for standard date format.
See
.Xr date 1 .
.It Fl g
Display vdev GUIDs instead of the normal device names.
These GUIDs can be used in place of device names for the zpool
detach/offline/remove/replace commands.
.It Fl H
Scripted mode.
Do not display headers, and separate fields by a single tab instead of
arbitrary space.
.It Fl L
Display real paths for vdevs, resolving all symbolic links.
This can be used to look up the current block device name regardless of the
.Pa /dev/disk/
path used to open it.
.It Fl n
Print headers only once when passed.
.It Fl p
Display numbers in parsable (exact) values.
Time values are in nanoseconds.
.It Fl P
Display full paths for vdevs instead of only the last component of the path.
This can be used in conjunction with the
.Fl L
flag.
.It Fl r
Print request size histograms for the leaf vdev's I/O.
This includes histograms of individual I/Os (ind) and aggregate I/Os (agg).
These stats can be useful for observing how well I/O aggregation is working.
Note that TRIM I/Os may exceed 16M, but will be counted as 16M.
.It Fl v
Verbose statistics.
Reports usage statistics for individual vdevs within the pool, in addition to
the pool-wide statistics.
.It Fl y
Omit statistics since boot.
Normally the first line of output reports the statistics since boot.
This option suppresses that first line of output.
.It Fl w
Display latency histograms:
.Pp
.Ar total_wait :
Total I/O time (queuing + disk I/O time).
.Ar disk_wait :
Disk I/O time (time reading/writing the disk).
.Ar syncq_wait :
Amount of time I/O spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Amount of time I/O spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Amount of time I/O spent in scrub queue.
Does not include disk time.
.It Fl l
Include average latency statistics:
.Pp
.Ar total_wait :
Average total I/O time (queuing + disk I/O time).
.Ar disk_wait :
Average disk I/O time (time reading/writing the disk).
.Ar syncq_wait :
Average amount of time I/O spent in synchronous priority queues.
Does not include disk time.
.Ar asyncq_wait :
Average amount of time I/O spent in asynchronous priority queues.
Does not include disk time.
.Ar scrub :
Average queuing time in scrub queue.
Does not include disk time.
.Ar trim :
Average queuing time in trim queue.
Does not include disk time.
.It Fl q
Include active queue statistics.
Each priority queue has both pending
.Pq Ar pend
and active
.Pq Ar activ
I/Os.
Pending I/Os are waiting to be issued to the disk, and active I/Os have been
issued to disk and are waiting for completion.
These stats are broken out by priority queue:
.Pp
.Ar syncq_read/write :
Current number of entries in synchronous priority queues.
.Ar asyncq_read/write :
Current number of entries in asynchronous priority queues.
.Ar scrubq_read :
Current number of entries in scrub queue.
.Ar trimq_write :
Current number of entries in trim queue.
.Pp
All queue statistics are instantaneous measurements of the number of entries
in the queues.
If you specify an interval, the measurements will be sampled from the end of
the interval.
.El
.El
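The scripted
.Pq Fl H
and parsable
.Pq Fl p
modes pair well with standard text tools.
The default columns are pool name, alloc, free, read/write operations, and
read/write bandwidth, so the bandwidth columns can be summed with
.Xr awk 1 .
A sketch (the figures below are invented stand-in data, not real pool output):

```shell
# Invented stand-in for `zpool iostat -Hp` output: tab-separated fields
# (pool, alloc, free, read ops, write ops, read bytes/s, write bytes/s).
# Sum the read and write bandwidth columns (fields 6 and 7) across pools.
printf 'tank\t1024\t2048\t10\t20\t4096\t8192\nbackup\t512\t4096\t5\t2\t1024\t2048\n' |
awk -F'\t' '{ r += $6; w += $7 } END { print "read=" r, "write=" w }'
# -> read=5120 write=10240
```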
.Sh SEE ALSO
.Xr iostat 1 ,
.Xr smartctl 8 ,
.Xr zpool-list 8 ,
.Xr zpool-status 8