# 89f4f91d | 05-Mar-2024 | Alan Somers <asomers@FreeBSD.org>
zfsd: Use vdev prop values for fault/degrade thresholds

ZED uses vdev props for setting disk fault/degrade thresholds; this
patch enables zfsd to use the same vdev props for the same tasks.

OpenZFS on Linux uses vdev props for ZED's disk fault/degrade
thresholds. Originally the supported thresholds covered io and
checksum events; recently this was extended to slow io events as well,
see
https://github.com/openzfs/zfs/commit/cbe882298e4ddc3917dfaf239eca475fe06d62d4

This patch enables zfsd to use the same vdev props that ZED uses.
After this patch is merged, both OSs will use the same vdev props to
set retirement thresholds.

Note that the threshold defaults differ between the OSs. The existing
zfsd defaults are kept and deliberately NOT matched to ZED's.

Submitted by: Alek Pinchuk <apinchuk@axcient.com>
Reviewed by: asomers, allanjude
Relnotes: yes
MFC after: 2 weeks
Sponsored by: Axcient
Differential Revision: https://reviews.freebsd.org/D44043
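
These props come in pairs that encode an "N events within T seconds"
retirement test (io_n/io_t, checksum_n/checksum_t, and the newer
slow_io_n/slow_io_t in OpenZFS's naming). A minimal C++ sketch of that
test, with illustrative names rather than zfsd's actual classes:

    // Sketch only: the "N events within T seconds" check that the
    // *_n/*_t vdev props configure. Not zfsd's real implementation.
    #include <cstdint>
    #include <deque>

    class ErrorWindow
    {
    public:
        ErrorWindow(uint64_t n, uint64_t tSecs) : m_n(n), m_t(tSecs) {}

        // Record one error event at 'now' (in seconds); return true
        // once the vdev has crossed the retirement threshold.
        bool RecordEvent(uint64_t now)
        {
            m_events.push_back(now);
            // Expire events that fell outside the T-second window.
            while (!m_events.empty() && now - m_events.front() > m_t)
                m_events.pop_front();
            return (m_events.size() >= m_n);
        }

    private:
        uint64_t m_n;   // the *_n prop: error count that trips it
        uint64_t m_t;   // the *_t prop: window length in seconds
        std::deque<uint64_t> m_events;
    };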

Revision tags: release/13.3.0, release/14.0.0

# d565784a | 12-Jul-2023 | Alan Somers <asomers@FreeBSD.org>
zfsd: fault disks that generate too many I/O delay events

If ZFS reports that a disk had at least 8 I/O operations over 60s that
were each delayed by at least 30s (implying a queue depth > 4 or I/O
aggregation), fault that disk. Disks that respond this slowly can
degrade the entire system's performance.

Reviewed by: delphij
MFC after: 2 weeks
Sponsored by: Axcient
Differential Revision: https://reviews.freebsd.org/D42825
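
The queue-depth figure follows from simple arithmetic: 8 I/Os, each
outstanding for at least 30 s, completing within one 60 s window,
represent at least 240 I/O-seconds of work in 60 s of wall time. A
trivial C++ rendering of that back-of-envelope (numbers from the
commit message; the snippet is illustrative, not zfsd code):

    #include <iostream>

    int main()
    {
        // 8 delayed I/Os x 30 s each = 240 I/O-seconds inside a 60 s
        // window, i.e. an average of at least 4 I/Os in flight.
        const double ios = 8, delaySecs = 30, windowSecs = 60;
        std::cout << "implied average queue depth >= "
                  << ios * delaySecs / windowSecs << '\n'; // prints 4
        return 0;
    }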

# b3e76948 | 16-Aug-2023 | Warner Losh <imp@FreeBSD.org>
Remove $FreeBSD$: two-line .h pattern
Remove /^\s*\*\n \*\s+\$FreeBSD\$$\n/

Revision tags: release/13.2.0

# f698c1e9 | 05-Apr-2023 | Alan Somers <asomers@FreeBSD.org>
zfsd: add support for hotplugging spares

If you remove an unused spare and then reinsert it, zfsd will now
online it in all pools.

Do not MFC without 2a58b312b62 (but it's OK to MFC that one without
this one).

Submitted by: Ameer Hamza <ahamza@ixsystems.com> (zfsd), Me (tests)
MFC after: 2 weeks
MFC with: 2a58b312b62f908ec92311d1bd8536dbaeb8e55b
Sponsored by: iX Systems, Axcient
Pull Request: https://github.com/freebsd/freebsd-src/pull/697

Revision tags: release/12.4.0, release/13.1.0, release/12.3.0, release/13.0.0, release/12.2.0, release/11.4.0, release/12.1.0, release/11.3.0, release/12.0.0, release/11.2.0

# d727d8b0 | 15-Feb-2018 | Alan Somers <asomers@FreeBSD.org>
Optimize zfsd for the happy case

If there are no damaged pools, then ignore all GEOM events; we only
use them to fix damaged pools. However, still pay attention to ZFS
events.

MFC after: 20 days
X-MFC-With: 329284
Sponsored by: Spectra Logic Corp
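
A minimal sketch of that fast path, using invented names (zfsd's real
event and case-file classes live in libdevdctl and case_file.cc): GEOM
events are dropped outright while no case files are open, and ZFS
events are always evaluated.

    #include <vector>

    enum class EventSource { GEOM, ZFS };
    struct Event { EventSource source; };

    // Stand-in for zfsd's case files, one per damaged vdev.
    struct CaseFile
    {
        static bool Empty() { return s_activeCases.empty(); }
        static std::vector<CaseFile> s_activeCases;
    };
    std::vector<CaseFile> CaseFile::s_activeCases;

    void HandleEvent(const Event &ev)
    {
        // Happy case: no damaged pools, so a GEOM arrival can't help.
        if (ev.source == EventSource::GEOM && CaseFile::Empty())
            return;
        /* ... full event evaluation would happen here ... */
    }

    int main()
    {
        HandleEvent(Event{EventSource::GEOM}); // ignored: no open cases
        HandleEvent(Event{EventSource::ZFS});  // always evaluated
        return 0;
    }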

# c2c014f2 | 07-Nov-2017 | Hans Petter Selasky <hselasky@FreeBSD.org>
Merge ^/head r323559 through r325504.

# 3c5ab8c1 | 30-Oct-2017 | Enji Cooper <ngie@FreeBSD.org>
MFhead@r325119

# 12a88a3d | 26-Oct-2017 | Alan Somers <asomers@FreeBSD.org>
zfsd should be able to online an L2ARC that disappears and returns

Previously, this didn't work because L2ARC devices' labels don't
contain pool GUIDs. Modify zfsd so that the pool GUID won't be
required:

lib/libdevdctl/guid.h
    Change INVALID_GUID from a uint64_t constant to a function that
    returns an invalid Guid object. Remove the void constructor;
    nothing uses it, and it violates RAII.

cddl/usr.sbin/zfsd/case_file.h
cddl/usr.sbin/zfsd/case_file.cc
    Allow CaseFile::Find to match a CaseFile based on Vdev GUID alone.
    In CaseFile::ReEvaluate, attempt to online devices even if the
    newly arrived device has no pool GUID.

cddl/usr.sbin/zfsd/vdev_iterator.cc
    Iterate through a pool's cache devices as well as its regular
    devices.

Reported by: avg
Reviewed by: avg
MFC after: 3 weeks
Sponsored by: Spectra Logic Corp
Differential Revision: https://reviews.freebsd.org/D12791
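
The guid.h change reads roughly like the following reconstruction
(assumed details, not the verbatim libdevdctl code): the raw constant
and the default constructor give way to a factory function, so every
Guid is fully constructed.

    #include <cstdint>

    class Guid
    {
    public:
        explicit Guid(uint64_t value) : m_value(value) {}

        // Replaces the old 'INVALID_GUID' uint64_t constant.
        static Guid InvalidGuid() { return Guid(INVALID_VALUE); }

        bool IsValid() const { return m_value != INVALID_VALUE; }

    private:
        static constexpr uint64_t INVALID_VALUE = ~0ULL;
        uint64_t m_value;
    };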

Revision tags: release/10.4.0, release/11.1.0, release/11.0.1, release/11.0.0

# 7a0c41d5 | 28-May-2016 | Alan Somers <asomers@FreeBSD.org>
zfsd(8), the ZFS fault management daemon

Add zfsd, which deals with hard drive faults in ZFS pools. It manages
hotspares and replacements in drive slots that publish physical paths.

cddl/usr.sbin/zfsd
    Add zfsd(8) and its unit tests.

cddl/usr.sbin/Makefile
    Add zfsd to the build.

lib/libdevdctl
    A C++ library that helps devd clients process events.

lib/Makefile
share/mk/bsd.libnames.mk
share/mk/src.libnames.mk
    Add libdevdctl to the build. It's a private library, unusable by
    out-of-tree software.

etc/defaults/rc.conf
    By default, set zfsd_enable to NO.

etc/mtree/BSD.include.dist
    Add a directory for libdevdctl's include files.

etc/mtree/BSD.tests.dist
    Add a directory for zfsd's unit tests.

etc/mtree/BSD.var.dist
    Add /var/db/zfsd/cases, where zfsd stores case files while it's
    shut down.

etc/rc.d/Makefile
etc/rc.d/zfsd
    Add zfsd's rc script.

sys/cddl/contrib/opensolaris/uts/common/fs/zfs/vdev.c
    Fix the resource.fs.zfs.statechange message. It had a number of
    problems:

    It was only being emitted on a transition to the HEALTHY state.
    That made it impossible for zfsd to take actions based on drives
    getting sicker.

    It compared the new state to vdev_prevstate, which is the state
    that the vdev had the last time it was opened. That doesn't make
    sense, because a vdev can change state multiple times without
    being reopened.

    vdev_set_state contains logic that will change the device's new
    state based on various conditions. However, the statechange event
    was being posted _before_ that logic took effect. Now it's posted
    after; see the sketch after this entry.

Submitted by: gibbs, asomers, mav, allanjude
Reviewed by: mav, delphij
Relnotes: yes
Sponsored by: Spectra Logic Corp, iX Systems
Differential Revision: https://reviews.freebsd.org/D6564
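
The ordering fix boils down to posting the event from the state that
vdev_set_state actually settles on. An illustrative C++ reconstruction
with invented helper names (the real logic is in vdev.c):

    enum vdev_state { VDEV_ST_HEALTHY, VDEV_ST_DEGRADED, VDEV_ST_FAULTED };
    struct vdev { vdev_state state; bool lastReplica; };

    // Stand-in for the adjustments vdev_set_state applies internally,
    // e.g. declining to fully fault a device whose loss would be fatal.
    static vdev_state
    AdjustState(const vdev &vd, vdev_state requested)
    {
        if (requested == VDEV_ST_FAULTED && vd.lastReplica)
            return VDEV_ST_DEGRADED;
        return requested;
    }

    static void PostStatechangeEvent(const vdev &) { /* notify devd */ }

    void VdevSetState(vdev &vd, vdev_state requested)
    {
        vd.state = AdjustState(vd, requested);
        PostStatechangeEvent(vd);   // posted after the adjustment, and
                                    // on every transition
    }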