4f978603 | 02-Jun-2025 | Dmitry Torokhov <dmitry.torokhov@gmail.com>
Merge branch 'next' into for-linus
Prepare input updates for 6.16 merge window.

Revision tags: v6.15, v6.15-rc7

d51b9d81 | 16-May-2025 | Dmitry Torokhov <dmitry.torokhov@gmail.com>
Merge tag 'v6.15-rc6' into next
Sync up with mainline to bring in xpad controller changes.

Revision tags: v6.15-rc6, v6.15-rc5

844e31bb | 29-Apr-2025 | Rob Clark <robdclark@chromium.org>
Merge remote-tracking branch 'drm-misc/drm-misc-next' into msm-next
Merge drm-misc-next to get commit fec450ca15af ("drm/display: hdmi: provide central data authority for ACR params").
Signed-off-by: Rob Clark <robdclark@chromium.org>

Revision tags: v6.15-rc4

3ab7ae8e | 24-Apr-2025 | Thomas Hellström <thomas.hellstrom@linux.intel.com>
Merge drm/drm-next into drm-xe-next
Backmerge to bring in linux 6.15-rc.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

Revision tags: v6.15-rc3, v6.15-rc2

1afba39f | 07-Apr-2025 | Thomas Zimmermann <tzimmermann@suse.de>
Merge drm/drm-next into drm-misc-next
Backmerging to get v6.15-rc1 into drm-misc-next. Also fixes a build issue when enabling CONFIG_DRM_SCHED_KUNIT_TEST.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

9f13acb2 | 11-Apr-2025 | Ingo Molnar <mingo@kernel.org>
Merge tag 'v6.15-rc1' into x86/cpu, to refresh the branch with upstream changes
Signed-off-by: Ingo Molnar <mingo@kernel.org>

6ce0fdaa | 09-Apr-2025 | Ingo Molnar <mingo@kernel.org>
Merge tag 'v6.15-rc1' into x86/asm, to refresh the branch
Signed-off-by: Ingo Molnar <mingo@kernel.org>

1260ed77 | 08-Apr-2025 | Thomas Zimmermann <tzimmermann@suse.de>
Merge drm/drm-fixes into drm-misc-fixes
Backmerging to get updates from v6.15-rc1.
Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Revision tags: v6.15-rc1

c148bc75 | 27-Mar-2025 | Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'xfs-6.15-merge' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
Pull xfs updates from Carlos Maiolino:
- XFS zoned allocator: Enables XFS to support zoned devices using its real-time allocator
- Use folios/vmalloc for buffer cache backing memory
- Some code cleanups and bug fixes

* tag 'xfs-6.15-merge' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux: (70 commits)
  xfs: remove the flags argument to xfs_buf_get_uncached
  xfs: remove the flags argument to xfs_buf_read_uncached
  xfs: remove xfs_buf_free_maps
  xfs: remove xfs_buf_get_maps
  xfs: call xfs_buf_alloc_backing_mem from _xfs_buf_alloc
  xfs: remove unnecessary NULL check before kvfree()
  xfs: don't wake zone space waiters without m_zone_info
  xfs: don't increment m_generation for all errors in xfs_growfs_data
  xfs: fix a missing unlock in xfs_growfs_data
  xfs: Remove duplicate xfs_rtbitmap.h header
  xfs: trigger zone GC when out of available rt blocks
  xfs: trace what memory backs a buffer
  xfs: cleanup mapping tmpfs folios into the buffer cache
  xfs: use vmalloc instead of vm_map_area for buffer backing memory
  xfs: buffer items don't straddle pages anymore
  xfs: kill XBF_UNMAPPED
  xfs: convert buffer cache to use high order folios
  xfs: remove the kmalloc to page allocator fallback
  xfs: refactor backing memory allocations for buffers
  xfs: remove xfs_buf_is_vmapped
  ...

Revision tags: v6.14

8e641546 | 18-Mar-2025 | Carlos Maiolino <cem@kernel.org>
Merge branch 'xfs-6.15-zoned_devices' into XFS-for-linus-6.15-merge
Merge Zoned allocator for XFS.
Signed-off-by: Carlos Maiolino <cem@kernel.org>

Revision tags: v6.14-rc7

32f6987f | 10-Mar-2025 | Carlos Maiolino <cem@kernel.org>
Merge branch 'xfs-6.15-merge' into for-next
XFS code for 6.15 to be merged into linux-next
Signed-off-by: Carlos Maiolino <cem@kernel.org>

358cab79 | 10-Mar-2025 | Carlos Maiolino <cem@kernel.org>
Merge branch 'xfs-6.15-zoned_devices' into xfs-6.15-merge
Merge Zoned devices support for XFS
Signed-off-by: Carlos Maiolino <cem@kernel.org>

Revision tags: v6.14-rc6

4c6283ec | 04-Mar-2025 | Carlos Maiolino <cem@kernel.org>
Merge tag 'xfs-zoned-allocator-2025-03-03' of git://git.infradead.org/users/hch/xfs into xfs-6.15-zoned_devices
xfs: add support for zoned devices
Add support for the new zoned space allocator and thus for zoned devices:
https://zonedstorage.io/docs/introduction/zoned-storage
to XFS. This has been developed for and tested on both SMR hard drives, which are the oldest and most common class of zoned devices:
https://zonedstorage.io/docs/introduction/smr
and ZNS SSDs:
https://zonedstorage.io/docs/introduction/zns
It has not been tested with zoned UFS devices, as their current capacity points and performance characteristics aren't too interesting for XFS use cases (but never say never).
Sequential write only zones are only supported for data using a new allocator for the RT device, which maps each zone to a rtgroup which is written sequentially. All metadata and (for now) the log require using randomly writable space. This means a realtime device is required to support zoned storage, but for the common case of SMR hard drives that contain random writable zones and sequential write required zones on the same block device, the concept of an internal RT device is added which means using XFS on a SMR HDD is as simple as:
$ mkfs.xfs /dev/sda
$ mount /dev/sda /mnt
When using NVMe ZNS SSDs that do not support conventional zones, the traditional multi-device RT configuration is required. E.g. for an SSD with a conventional namespace 1 and a zoned namespace 2:
$ mkfs.xfs -r rtdev=/dev/nvme0n2 /dev/nvme0n1
$ mount -o rtdev=/dev/nvme0n2 /dev/nvme0n1 /mnt
The zoned allocator can also be used on conventional block devices, or on conventional zones (e.g. when using an SMR HDD as the external RT device). For example using zoned XFS on normal SSDs shows very nice performance advantages and write amplification reduction for intelligent workloads like RocksDB.
Some work is still in progress or planned, but should not affect the integration with the rest of XFS or the on-disk format:
- support for quotas
- support for reflinks
Note that the I/O path already supports reflink, but garbage collection isn't refcount aware yet and would unshare shared blocks, thus rendering the feature useless.

Revision tags: v6.14-rc5, v6.14-rc4, v6.14-rc3, v6.14-rc2, v6.14-rc1

64d03611 | 31-Jan-2025 | Hans Holmberg <hans.holmberg@wdc.com>
xfs: support write life time based data placement
Add a file write life time data placement allocation scheme that aims to minimize fragmentation and thereby to do two things:
a) separate file data to different zones when possible.
b) colocate file data of similar life times when feasible.
To get best results, average file sizes should align with the zone capacity that is reported through the XFS_IOC_FSGEOMETRY ioctl.
This improvement in data placement efficiency reduces the number of blocks requiring relocation by GC, and thus decreases overall write amplification. The impact on performance varies depending on how full the file system is.
For RocksDB using leveled compaction, the lifetime hints can improve throughput for overwrite workloads at 80% file system utilization by ~10%, but for lower file system utilization there won't be as much benefit in application performance as there is less need for garbage collection to start with.
Lifetime hints can be disabled using the nolifetime mount option.
Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
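
As a concrete illustration of the application side, here is a minimal sketch that attaches a write lifetime hint to a file using the standard Linux fcntl(2) F_SET_RW_HINT interface (exposed by glibc under _GNU_SOURCE); the choice of RWH_WRITE_LIFE_SHORT is an arbitrary example, and whether the hint influences placement depends on the filesystem, as described above. Zone capacity itself would be queried separately via the XFS_IOC_FSGEOMETRY ioctl mentioned in the commit message.

    #define _GNU_SOURCE
    #include <fcntl.h>     /* open(), fcntl(), F_SET_RW_HINT, RWH_* */
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_WRONLY | O_CREAT, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Tell the kernel this file's data is expected to be short-lived,
         * so an allocator that honors hints can colocate it with other
         * short-lived data. */
        uint64_t hint = RWH_WRITE_LIFE_SHORT;
        if (fcntl(fd, F_SET_RW_HINT, &hint) < 0)
            perror("F_SET_RW_HINT");

        close(fd);
        return 0;
    }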

080d01c4 | 15-Feb-2025 | Christoph Hellwig <hch@lst.de>
xfs: implement zoned garbage collection
RT groups on a zoned file system need to be completely empty before their space can be reused. This means that partially empty groups need to be emptied entirely to free up space if no entirely free groups are available.
Add a garbage collection thread that moves all data out of the least-used zone when not enough free zones are available, and which resets all zones that have been emptied. To find the zones to empty, a simple set of 10 buckets based on the amount of space used in each zone is used. To empty zones, the rmap is walked to find the owners and the data is read and then written to the new place.
To automatically defragment files, the rmap records are sorted by inode and logical offset. This means defragmentation of parallel writes into a single zone happens automatically when performing garbage collection. Because holding the iolock over the entire GC cycle would inject very noticeable latency for other accesses to the inodes, the iolock is not taken while performing I/O. Instead the I/O completion handler checks that the mapping hasn't changed from the one recorded at the start of the GC cycle, and doesn't update the mapping if it has changed.
Co-developed-by: Hans Holmberg <hans.holmberg@wdc.com>
Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
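
To make the completion-time revalidation concrete, here is a small compilable model of the pattern described in the previous paragraph; all names (struct gc_copy, gc_io_done) are hypothetical stand-ins for illustration, not the actual XFS symbols.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical model: a mapping sampled at the start of a GC cycle is
     * revalidated at I/O completion before the new location is installed. */

    struct mapping {
        uint64_t block;            /* current on-disk location */
    };

    struct gc_copy {
        struct mapping *map;       /* mapping sampled when GC started */
        uint64_t old_block;        /* where the data lived when GC read it */
        uint64_t new_block;        /* where GC wrote the copy */
    };

    /* Runs at I/O completion. The iolock was not held during the copy, so
     * a racing write may have remapped the range in the meantime. */
    static void gc_io_done(struct gc_copy *c)
    {
        if (c->map->block == c->old_block) {
            c->map->block = c->new_block;   /* still valid: install it */
        } else {
            /* The mapping changed under us: the GC copy is stale and is
             * simply dropped; a later zone reset reclaims its space. */
        }
    }

    int main(void)
    {
        struct mapping m = { .block = 100 };
        struct gc_copy c = { .map = &m, .old_block = 100, .new_block = 500 };

        gc_io_done(&c);
        printf("mapping now points at block %llu\n",
               (unsigned long long)m.block);   /* prints 500 */
        return 0;
    }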

0bb21930 | 13-Feb-2025 | Christoph Hellwig <hch@lst.de>
xfs: add support for zoned space reservations
For zoned file systems, garbage collection (GC) has to take the iolock and mmaplock after moving data to a new place to synchronize with readers. This means that waiting for garbage collection while holding the iolock can deadlock.
To avoid this, the worst-case required blocks have to be reserved before taking the iolock, which is done using a new RTAVAILABLE counter that tracks blocks that are free to write into and don't require garbage collection. The new helpers try to take these available blocks, and if there aren't enough available they wake GC and wait for it. This is done using a list of on-stack reservations to ensure fairness.
Co-developed-by: Hans Holmberg <hans.holmberg@wdc.com>
Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
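
Here is a compilable sketch of the reservation scheme described above, under the stated assumptions: a single counter of available blocks and a FIFO list of on-stack reservations for fairness. The names (rtavailable, reserve_available, kick_gc) are hypothetical, not the actual XFS symbols.

    #include <pthread.h>
    #include <stdint.h>
    #include <sys/queue.h>               /* TAILQ_*: simple FIFO list */

    /* Hypothetical model: reservations live on the caller's stack and are
     * served strictly in FIFO order so no writer can be starved. */

    struct reservation {
        uint64_t blocks;                 /* worst-case blocks required */
        TAILQ_ENTRY(reservation) link;   /* chained on the waiter list */
    };

    static void kick_gc(void)
    {
        /* Stub: wake the garbage collection thread to free up zones. */
    }

    static struct {
        pthread_mutex_t lock;
        pthread_cond_t  wake;
        uint64_t        rtavailable;     /* blocks writable without GC */
        TAILQ_HEAD(, reservation) waiters;
    } zi = {
        .lock    = PTHREAD_MUTEX_INITIALIZER,
        .wake    = PTHREAD_COND_INITIALIZER,
        .waiters = TAILQ_HEAD_INITIALIZER(zi.waiters),
    };

    /* Taken BEFORE the iolock: a writer that waits for GC here cannot
     * deadlock against GC taking the iolock later. */
    static void reserve_available(struct reservation *res)
    {
        pthread_mutex_lock(&zi.lock);
        TAILQ_INSERT_TAIL(&zi.waiters, res, link);
        /* Only the head of the queue may claim blocks, even if a later,
         * smaller request would already fit: first come, first served. */
        while (TAILQ_FIRST(&zi.waiters) != res ||
               zi.rtavailable < res->blocks) {
            kick_gc();
            pthread_cond_wait(&zi.wake, &zi.lock);
        }
        zi.rtavailable -= res->blocks;
        TAILQ_REMOVE(&zi.waiters, res, link);
        pthread_cond_broadcast(&zi.wake);   /* let the next head re-check */
        pthread_mutex_unlock(&zi.lock);
    }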

4e4d5207 | 13-Feb-2025 | Christoph Hellwig <hch@lst.de>
xfs: add the zoned space allocator
For zoned RT devices, space is always allocated at the write pointer, that is, right after the last written block, and is only recorded on I/O completion.
The actual allocation algorithm is very simple: it just involves picking a good zone, preferably the one used for the last write to the inode. As the number of zones that can be written at the same time is usually limited by the hardware, selecting a zone is done as late as possible, from the iomap dio and buffered writeback bio submission helpers, just before submitting the bio.
Given that the writers already took a reservation before acquiring the iolock, space will always be readily available if an open zone slot is available. A new structure is used to track these open zones, and is pointed to by the xfs_rtgroup. Because zoned file systems don't have a rsum cache, the space for that pointer can be reused.
Allocations are only recorded at I/O completion time. The scheme used for that is very similar to the reflink COW end I/O path.
Co-developed-by: Hans Holmberg <hans.holmberg@wdc.com>
Signed-off-by: Hans Holmberg <hans.holmberg@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: "Darrick J. Wong" <djwong@kernel.org>
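
To make the late zone selection concrete, here is a small sketch under the assumptions stated in the message: a bounded set of open zones, a preference for the zone an inode last wrote to, and allocation at the write pointer that is recorded only at I/O completion. All names are hypothetical, not the actual XFS symbols.

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_OPEN_ZONES 8   /* hardware limit on simultaneously open zones */

    struct open_zone {
        uint64_t zone_no;
        uint64_t write_pointer;    /* next block to write in this zone */
        uint64_t capacity_left;
    };

    struct zone_info {
        struct open_zone open[MAX_OPEN_ZONES];
        size_t nr_open;
    };

    /* Called just before bio submission, after the reservation was taken:
     * space is already guaranteed, only the placement is decided here. */
    static struct open_zone *select_zone(struct zone_info *zi,
                                         uint64_t last_zone_for_inode,
                                         uint64_t blocks)
    {
        struct open_zone *fallback = NULL;

        for (size_t i = 0; i < zi->nr_open; i++) {
            struct open_zone *oz = &zi->open[i];

            if (oz->capacity_left < blocks)
                continue;
            /* Prefer the zone this inode wrote to last, keeping a file's
             * data together and reducing later GC work. */
            if (oz->zone_no == last_zone_for_inode)
                return oz;
            if (!fallback)
                fallback = oz;
        }
        return fallback;   /* NULL => caller must open a new zone */
    }

    /* Blocks are carved off at the write pointer; the file's mapping is
     * recorded only when the I/O completes. */
    static uint64_t alloc_at_write_pointer(struct open_zone *oz, uint64_t blocks)
    {
        uint64_t start = oz->write_pointer;

        oz->write_pointer += blocks;
        oz->capacity_left -= blocks;
        return start;
    }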