67738a4f | 08-Jul-2016 |
Arne Jansen <jansen@webgods.de> |
This commit fixes https://www.illumos.org/issues/6779.
Chronology of the events:
1. we have an empty directory
2. the filesystem is nearly out of quota
3. a snapshot prevents any space from being freed
4. a remote client calls opendir via nfs4
5. the same client removes the directory
6. the client proceeds with readdir --> boom.
During 4, the client obtains a FH for the directory. During 5, the directory entry gets removed, the inode is added to the unlinked set, and z_unlinked is set. Then the inode contents are deleted (dmu_free_long_range). Deletion of the inode itself fails because the fs runs out of space at exactly this moment. The inode is left in the unlinked set and the znode is freed. During 6, the client's readdir instantiates a new znode for the (now empty) inode, this time with z_unlinked unset. During the zap read, the contents of the empty file are interpreted as a fatzap, which leads to the crash.
This patch addresses the problem twofold:
1. when instantiating a znode, set z_unlinked when z_links == 0
2. mark the deletion of the inode as netfree (in zfs_rmnode)
refs #3026
|
c3a1418d | 30-Jun-2016 |
Arne Jansen <jansen@webgods.de> |
ZFS: vdev_avoid_read
Avoid reading from a disk if possible. This can be used during resilver to avoid reading from a bad data disk, but leave it in the pool for redundancy. Configure via mdb:
  ::spa -v                            to find the vdev
  <vdev>::print -a vdev_avoid_read    to get the address
  <addr>/v 1                          to set the feature
refs #3106
|
a6c71776 | 14-Dec-2015 |
Simon Klinkert <klinkert@webgods.de> |
zev: truncate offset
There is a race between truncate events. zev_get_checksums() relies on a locked znode and its znode->z_size, but this is not always consistent for truncate events when multiple truncates are running. The first truncate may extend the size of a file and the second truncate could shrink it again; the first event would then deliver a broken checksum.
We lock the znode for the whole operation now.
refs #2921
|
b69b4dd0 | 22-Oct-2015 |
Paul Dagnelie <pcd@delphix.com> |
6370 ZFS send fails to transmit some holes Reviewed by: Matthew Ahrens <mahrens@delphix.com> Reviewed by: Chris Williamson <chris.williamson@delphix.com>
In certain circumstances, "zfs send -i" (incremental send) can produce a stream which will result in incorrect sparse file contents on the target.
The problem manifests as regions of the received file that should be sparse (and read as zero-filled) actually containing data from a file that was deleted (and which happened to share this file's object ID).
Note: this can happen only with filesystems (not zvols, because zvols do not free, and thus cannot reuse, object IDs).
Note: This can happen only if, since the incremental source (FromSnap), a file was deleted and then another file was created, and the new file is sparse (i.e. has areas that were never written to and should be implicitly zero-filled).
We suspect that this was introduced by 4370 (applies only if hole_birth feature is enabled), and made worse by 5243 (applies if hole_birth feature is disabled, and we never send any holes).
The bug is caused by the hole birth feature. When an object is deleted and replaced, all the holes in the object have birth time zero. However, zfs send cannot tell that the holes are new since the file was replaced, so it doesn't send them in an incremental. As a result, you can end up with invalid data when you receive incremental send streams. As a short-term fix, we can always send holes with birth time 0 (unless it's a zvol or a dataset where we can guarantee that no objects have been reused).
Conflicts:
usr/src/test/zfs-tests/tests/functional/cli_root/zfs_receive/zfs_receive_010_pos.ksh
|
8dd201a5 | 08-Sep-2015 |
Arne Jansen <sensille@gmx.net> |
6220 memleak in l2arc on debug build Reviewed by: Saso Kiselkov <saso.kiselkov@nexenta.com> Reviewed by: Simon Klinkert <simon.klinkert@gmail.com> Reviewed by: George Wilson <george@delphix.com> Approved by: Robert Mustacchi <rm@joyent.com>
|