History log of /linux/Documentation/mm/transhuge.rst (Results 51 – 75 of 78)
Revision    Date    Author    Comments
# 6599e683 28-Dec-2022 Mauro Carvalho Chehab <mchehab@kernel.org>

Merge tag 'v6.2-rc1' into media_tree

Linux 6.2-rc1

* tag 'v6.2-rc1': (14398 commits)
Linux 6.2-rc1
treewide: Convert del_timer*() to timer_shutdown*()
pstore: Properly assign mem_type property
pstore: Make sure CONFIG_PSTORE_PMSG selects CONFIG_RT_MUTEXES
cfi: Fix CFI failure with KASAN
perf python: Fix splitting CC into compiler and options
afs: Stop implementing ->writepage()
afs: remove afs_cache_netfs and afs_zap_permits() declarations
afs: remove variable nr_servers
afs: Fix lost servers_outstanding count
ALSA: usb-audio: Add new quirk FIXED_RATE for JBL Quantum810 Wireless
ALSA: azt3328: Remove the unused function snd_azf3328_codec_outl()
gcov: add support for checksum field
test_maple_tree: add test for mas_spanning_rebalance() on insufficient data
maple_tree: fix mas_spanning_rebalance() on insufficient data
hugetlb: really allocate vma lock for all sharable vmas
kmsan: export kmsan_handle_urb
kmsan: include linux/vmalloc.h
mm/mempolicy: fix memory leak in set_mempolicy_home_node system call
mm, mremap: fix mremap() expanding vma with addr inside vma
...



# d0e99511 17-Jan-2023 Kalle Valo <kvalo@kernel.org>

Merge wireless into wireless-next

Due to the two cherry picked commits from wireless to wireless-next we have
several conflicts in mt76. To avoid any bugs with conflicts merge wireless into
wireless-next.

96f134dc1964 wifi: mt76: handle possible mt76_rx_token_consume failures
fe13dad8992b wifi: mt76: dma: do not increment queue head if mt76_dma_add_buf fails



# 2c55d703 03-Jan-2023 Maxime Ripard <maxime@cerno.tech>

Merge drm/drm-fixes into drm-misc-fixes

Let's start the fixes cycle.

Signed-off-by: Maxime Ripard <maxime@cerno.tech>


Revision tags: v6.2-rc1
# c183e6c3 21-Dec-2022 Jakub Kicinski <kuba@kernel.org>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>


# 1a931707 16-Dec-2022 Arnaldo Carvalho de Melo <acme@redhat.com>

Merge remote-tracking branch 'torvalds/master' into perf/core

To resolve a trivial merge conflict with c302378bc157f6a7 ("libbpf:
Hashmap interface update to allow both long and void* keys/values"),
where a function present upstream was removed in the perf tools
development tree.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>



# e2ca6ba6 14-Dec-2022 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'mm-stable-2022-12-13' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull MM updates from Andrew Morton:

- More userfaultfd work from Peter Xu

- Several convert-to-folios series from Sidhartha Kumar and Huang Ying

- Some filemap cleanups from Vishal Moola

- David Hildenbrand added the ability to selftest anon memory COW
handling

- Some cpuset simplifications from Liu Shixin

- Addition of vmalloc tracing support by Uladzislau Rezki

- Some pagecache folioifications and simplifications from Matthew
Wilcox

- A pagemap cleanup from Kefeng Wang: we have VM_ACCESS_FLAGS, so use
it

- Miguel Ojeda contributed some cleanups for our use of the
__no_sanitize_thread__ gcc keyword.

This series should have been in the non-MM tree, my bad

- Naoya Horiguchi improved the interaction between memory poisoning and
memory section removal for huge pages

- DAMON cleanups and tuneups from SeongJae Park

- Tony Luck fixed the handling of COW faults against poisoned pages

- Peter Xu utilized the PTE marker code for handling swapin errors

- Hugh Dickins reworked compound page mapcount handling, simplifying it
and making it more efficient

- Removal of the autonuma savedwrite infrastructure from Nadav Amit and
David Hildenbrand

- zram support for multiple compression streams from Sergey Senozhatsky

- David Hildenbrand reworked the GUP code's R/O long-term pinning so
that drivers no longer need to use the FOLL_FORCE workaround which
didn't work very well anyway

- Mel Gorman altered the page allocator so that local IRQs can remain
enabled during per-cpu page allocations

- Vishal Moola removed the try_to_release_page() wrapper

- Stefan Roesch added some per-BDI sysfs tunables which are used to
prevent network block devices from dirtying excessive amounts of
pagecache

- David Hildenbrand did some cleanup and repair work on KSM COW
breaking

- Nhat Pham and Johannes Weiner have implemented writeback in zswap's
zsmalloc backend

- Brian Foster has fixed a longstanding corner-case oddity in
file[map]_write_and_wait_range()

- sparse-vmemmap changes for MIPS, LoongArch and NIOS2 from Feiyang
Chen

- Shiyang Ruan has done some work on fsdax, to make its reflink mode
work better under xfstests. Better, but still not perfect

- Christoph Hellwig has removed the .writepage() method from several
filesystems. They only need .writepages()

- Yosry Ahmed wrote a series which fixes the memcg reclaim target
beancounting

- David Hildenbrand has fixed some of our MM selftests for 32-bit
machines

- Many singleton patches, as usual

* tag 'mm-stable-2022-12-13' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (313 commits)
mm/hugetlb: set head flag before setting compound_order in __prep_compound_gigantic_folio
mm: mmu_gather: allow more than one batch of delayed rmaps
mm: fix typo in struct pglist_data code comment
kmsan: fix memcpy tests
mm: add cond_resched() in swapin_walk_pmd_entry()
mm: do not show fs mm pc for VM_LOCKONFAULT pages
selftests/vm: ksm_functional_tests: fixes for 32bit
selftests/vm: cow: fix compile warning on 32bit
selftests/vm: madv_populate: fix missing MADV_POPULATE_(READ|WRITE) definitions
mm/gup_test: fix PIN_LONGTERM_TEST_READ with highmem
mm,thp,rmap: fix races between updates of subpages_mapcount
mm: memcg: fix swapcached stat accounting
mm: add nodes= arg to memory.reclaim
mm: disable top-tier fallback to reclaim on proactive reclaim
selftests: cgroup: make sure reclaim target memcg is unprotected
selftests: cgroup: refactor proactive reclaim code to reclaim_until()
mm: memcg: fix stale protection of reclaim target memcg
mm/mmap: properly unaccount memory on mas_preallocate() failure
omfs: remove ->writepage
jfs: remove ->writepage
...



# 4f2c0a4a 14-Dec-2022 Nick Terrell <terrelln@fb.com>

Merge branch 'main' into zstd-linus


Revision tags: v6.1, v6.1-rc8, v6.1-rc7
# 4b51634c 22-Nov-2022 Hugh Dickins <hughd@google.com>

mm,thp,rmap: subpages_mapcount COMPOUND_MAPPED if PMD-mapped

Can the lock_compound_mapcount() bit_spin_lock apparatus be removed now?
Yes. Not by atomic64_t or cmpxchg games, those get difficult on 32-bit;
but if we slightly abuse subpages_mapcount by additionally demanding that
one bit be set there when the compound page is PMD-mapped, then a cascade
of two atomic ops is able to maintain the stats without bit_spin_lock.

This is harder to reason about than when bit_spin_locked, but I believe
safe; and no drift in stats detected when testing. When there are racing
removes and adds, of course the sequence of operations is less well-
defined; but each operation on subpages_mapcount is atomically good. What
might be disastrous, is if subpages_mapcount could ever fleetingly appear
negative: but the pte lock (or pmd lock) these rmap functions are called
under, ensures that a last remove cannot race ahead of a first add.

Continue to make an exception for hugetlb (PageHuge) pages, though that
exception can be easily removed by a further commit if necessary: leave
subpages_mapcount 0, don't bother with COMPOUND_MAPPED in its case, just
carry on checking compound_mapcount too in folio_mapped(), page_mapped().

Evidence is that this way goes slightly faster than the previous
implementation in all cases (pmds after ptes now taking around 103ms); and
relieves us of worrying about contention on the bit_spin_lock.

Link: https://lkml.kernel.org/r/3978f3ca-5473-55a7-4e14-efea5968d892@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dan Carpenter <error27@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Cc: Peter Xu <peterx@redhat.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
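
The scheme reads more easily as code. Below is a minimal userspace model of
the idea, assuming illustrative names (COMPOUND_MAPPED, the model_* helpers);
it is not the kernel implementation, only a sketch of how one marker bit plus
a cascade of two atomic ops can keep the counters consistent without
bit_spin_lock.

/*
 * Minimal userspace model (not kernel code) of the scheme above: one high
 * bit of subpages_mapcount is set while the compound page is PMD-mapped,
 * and the low bits count PTE mappings, so two atomic ops maintain the
 * stats with no lock.  COMPOUND_MAPPED's value and all model_* names are
 * illustrative, not the kernel's.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define COMPOUND_MAPPED (1u << 30)              /* "is PMD-mapped" marker bit */
#define SUBPAGES_MASK   (COMPOUND_MAPPED - 1)

struct model_folio {
        atomic_uint compound_mapcount;          /* number of PMD mappings */
        atomic_uint subpages_mapcount;          /* marker bit + PTE map count */
};

static void model_add_pmd_map(struct model_folio *f)
{
        /* the first PMD map also sets the marker bit: two atomic ops, no lock */
        if (atomic_fetch_add(&f->compound_mapcount, 1) == 0)
                atomic_fetch_add(&f->subpages_mapcount, COMPOUND_MAPPED);
}

static void model_add_pte_map(struct model_folio *f)
{
        atomic_fetch_add(&f->subpages_mapcount, 1);
}

static bool model_folio_mapped(struct model_folio *f)
{
        /* nonzero means PMD-mapped (marker bit) and/or PTE-mapped subpages */
        return atomic_load(&f->subpages_mapcount) != 0;
}

int main(void)
{
        struct model_folio f = { 0 };

        model_add_pmd_map(&f);
        model_add_pte_map(&f);
        printf("mapped=%d, pte maps=%u\n", model_folio_mapped(&f),
               atomic_load(&f.subpages_mapcount) & SUBPAGES_MASK);
        return 0;
}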



# be5ef2d9 22-Nov-2022 Hugh Dickins <hughd@google.com>

mm,thp,rmap: subpages_mapcount of PTE-mapped subpages

Patch series "mm,thp,rmap: rework the use of subpages_mapcount", v2.


This patch (of 3):

Following suggestion from Linus, instead of counting every PTE map of a
compound page in subpages_mapcount, just count how many of its subpages
are PTE-mapped: this yields the exact number needed for NR_ANON_MAPPED and
NR_FILE_MAPPED stats, without any need for a locked scan of subpages; and
requires updating the count less often.

This does then revert total_mapcount() and folio_mapcount() to needing a
scan of subpages; but they are inherently racy, and need no locking, so
Linus is right that the scans are much better done there. Plus (unlike in
6.1 and previous) subpages_mapcount lets us avoid the scan in the common
case of no PTE maps. And page_mapped() and folio_mapped() remain scanless
and just as efficient with the new meaning of subpages_mapcount: those are
the functions which I most wanted to remove the scan from.

The updated page_dup_compound_rmap() is no longer suitable for use by anon
THP's __split_huge_pmd_locked(); but page_add_anon_rmap() can be used for
that, so long as its VM_BUG_ON_PAGE(!PageLocked) is deleted.

Evidence is that this way goes slightly faster than the previous
implementation for most cases; but significantly faster in the (now
scanless) pmds after ptes case, which started out at 870ms and was brought
down to 495ms by the previous series, now takes around 105ms.

Link: https://lkml.kernel.org/r/a5849eca-22f1-3517-bf29-95d982242742@google.com
Link: https://lkml.kernel.org/r/eec17e16-4e1-7c59-f1bc-5bca90dac919@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dan Carpenter <error27@gmail.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: James Houghton <jthoughton@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Cc: Peter Xu <peterx@redhat.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
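
A minimal userspace sketch of the counting rule described above, under the
assumption that only a subpage's 0->1 and 1->0 _mapcount transitions touch
the folio-level counter; the model_* names are illustrative, not the
kernel's.

/*
 * Userspace model (not kernel code): subpages_mapcount counts how many
 * subpages are PTE-mapped, rather than counting every individual PTE map,
 * so the folio-level counter is only updated on first-map and last-unmap
 * of each subpage.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_SUBPAGES 512                         /* e.g. a 2MB THP of 4KB pages */

struct model_folio {
        atomic_int subpages_mapcount;           /* subpages with >= 1 PTE map */
        atomic_int subpage_mapcount[NR_SUBPAGES]; /* per-subpage map counts */
};

static void model_page_add_pte_rmap(struct model_folio *f, int subpage)
{
        /* only a subpage's first mapping bumps the folio-level counter */
        if (atomic_fetch_add(&f->subpage_mapcount[subpage], 1) == 0)
                atomic_fetch_add(&f->subpages_mapcount, 1);
}

static void model_page_remove_pte_rmap(struct model_folio *f, int subpage)
{
        /* and only its last unmapping drops it again */
        if (atomic_fetch_sub(&f->subpage_mapcount[subpage], 1) == 1)
                atomic_fetch_sub(&f->subpages_mapcount, 1);
}

int main(void)
{
        static struct model_folio f;

        model_page_add_pte_rmap(&f, 0);
        model_page_add_pte_rmap(&f, 0);         /* second map of same subpage */
        model_page_add_pte_rmap(&f, 1);
        printf("PTE-mapped subpages: %d\n", atomic_load(&f.subpages_mapcount));
        model_page_remove_pte_rmap(&f, 0);      /* subpage 0 still mapped once */
        printf("PTE-mapped subpages: %d\n", atomic_load(&f.subpages_mapcount));
        return 0;
}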



Revision tags: v6.1-rc6, v6.1-rc5, v6.1-rc4
# 9bd3155e 03-Nov-2022 Hugh Dickins <hughd@google.com>

mm,thp,rmap: lock_compound_mapcounts() on THP mapcounts

Fix the races in maintaining compound_mapcount, subpages_mapcount and
subpage _mapcount by using PG_locked in the first tail of any compound
page for a bit_spin_lock() on such modifications; skipping the usual
atomic operations on those fields in this case.

Bring page_remove_file_rmap() and page_remove_anon_compound_rmap() back
into page_remove_rmap() itself. Rearrange page_add_anon_rmap() and
page_add_file_rmap() and page_remove_rmap() to follow the same "if
(compound) {lock} else if (PageCompound) {lock} else {atomic}" pattern
(with a PageTransHuge in the compound test, like before, to avoid BUG_ONs
and optimize away that block when THP is not configured). Move all the
stats updates outside, after the bit_spin_locked section, so that it is
sure to be a leaf lock.

Add page_dup_compound_rmap() to manage compound locking versus atomics in
sync with the rest. In particular, hugetlb pages are still using the
atomics: to avoid unnecessary interference there, and because they never
have subpage mappings; but this exception can easily be changed.
Conveniently, page_dup_compound_rmap() turns out to suit an anon THP's
__split_huge_pmd_locked() too.

bit_spin_lock() is not popular with PREEMPT_RT folks: but PREEMPT_RT
sensibly excludes TRANSPARENT_HUGEPAGE already, so its only exposure is to
the non-hugetlb non-THP pte-mapped compound pages (with large folios being
currently dependent on TRANSPARENT_HUGEPAGE). There is never any scan of
subpages in this case; but we have chosen to use PageCompound tests rather
than PageTransCompound tests to gate the use of lock_compound_mapcounts(),
so that page_mapped() is correct on all compound pages, whether or not
TRANSPARENT_HUGEPAGE is enabled: could that be a problem for PREEMPT_RT,
when there is contention on the lock - under heavy concurrent forking for
example? If so, then it can be turned into a sleeping lock (like
folio_lock()) when PREEMPT_RT.

A simple 100 X munmap(mmap(2GB, MAP_SHARED|MAP_POPULATE, tmpfs), 2GB) took
18 seconds on small pages, and used to take 1 second on huge pages, but
now takes 115 milliseconds on huge pages. Mapping by pmds a second time
used to take 860ms and now takes 86ms; mapping by pmds after mapping by
ptes (when the scan is needed) used to take 870ms and now takes 495ms.
Mapping huge pages by ptes is largely unaffected but variable: between 5%
faster and 5% slower in what I've recorded. Contention on the lock is
likely to behave worse than contention on the atomics behaved.

Link: https://lkml.kernel.org/r/1b42bd1a-8223-e827-602f-d466c2db7d3c@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: James Houghton <jthoughton@google.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Cc: Peter Xu <peterx@redhat.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
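
The dispatch pattern quoted above ("if (compound) {lock} else if
(PageCompound) {lock} else {atomic}") can be sketched as follows; this is a
userspace model in which a pthread mutex stands in for the bit_spin_lock on
the first tail page, and all names are illustrative rather than the kernel's.

/*
 * Userspace model (not kernel code) of the locking pattern above:
 * compound-page mapcount updates take a per-folio lock (standing in for
 * bit_spin_lock on PG_locked of the first tail page), while ordinary
 * small pages keep using a single atomic op.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct model_page {
        bool compound_head;                     /* stands in for PageCompound() */
        atomic_int mapcount;                    /* small-page _mapcount */
        pthread_mutex_t lock;                   /* stands in for bit_spin_lock */
        int compound_mapcount;                  /* protected by lock */
        int subpages_mapcount;                  /* protected by lock */
};

static void model_page_add_rmap(struct model_page *p, bool compound)
{
        if (compound) {
                /* PMD mapping of a THP: take the lock, update the compound count */
                pthread_mutex_lock(&p->lock);
                p->compound_mapcount++;
                pthread_mutex_unlock(&p->lock);
        } else if (p->compound_head) {
                /* PTE mapping of a subpage of a compound page */
                pthread_mutex_lock(&p->lock);
                p->subpages_mapcount++;
                pthread_mutex_unlock(&p->lock);
        } else {
                /* ordinary small page: single atomic op, no lock */
                atomic_fetch_add(&p->mapcount, 1);
        }
}

int main(void)
{
        struct model_page thp = { .compound_head = true };

        pthread_mutex_init(&thp.lock, NULL);
        model_page_add_rmap(&thp, true);        /* pmd map */
        model_page_add_rmap(&thp, false);       /* pte map of a subpage */
        printf("compound=%d subpages=%d\n",
               thp.compound_mapcount, thp.subpages_mapcount);
        return 0;
}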



# cb67f428 03-Nov-2022 Hugh Dickins <hughd@google.com>

mm,thp,rmap: simplify compound page mapcount handling

Compound page (folio) mapcount calculations have been different for anon
and file (or shmem) THPs, and involved the obscure PageDoubleMap flag.
And each huge mapping and unmapping of a file (or shmem) THP involved
atomically incrementing and decrementing the mapcount of every subpage of
that huge page, dirtying many struct page cachelines.

Add subpages_mapcount field to the struct folio and first tail page, so
that the total of subpage mapcounts is available in one place near the
head: then page_mapcount() and total_mapcount() and page_mapped(), and
their folio equivalents, are so quick that anon and file and hugetlb don't
need to be optimized differently. Delete the unloved PageDoubleMap.

page_add and page_remove rmap functions must now maintain the
subpages_mapcount as well as the subpage _mapcount, when dealing with pte
mappings of huge pages; and correct maintenance of NR_ANON_MAPPED and
NR_FILE_MAPPED statistics still needs reading through the subpages, using
nr_subpages_unmapped() - but only when first or last pmd mapping finds
subpages_mapcount raised (double-map case, not the common case).

But are those counts (used to decide when to split an anon THP, and in
vmscan's pagecache_reclaimable heuristic) correctly maintained? Not
quite: since page_remove_rmap() (and also split_huge_pmd()) is often
called without page lock, there can be races when a subpage pte mapcount
0<->1 while compound pmd mapcount 0<->1 is scanning - races which the
previous implementation had prevented. The statistics might become
inaccurate, and even drift down until they underflow through 0. That is
not good enough, but is better dealt with in a followup patch.

Update a few comments on first and second tail page overlaid fields.
hugepage_add_new_anon_rmap() has to "increment" compound_mapcount, but
subpages_mapcount and compound_pincount are already correctly at 0, so
delete its reinitialization of compound_pincount.

A simple 100 X munmap(mmap(2GB, MAP_SHARED|MAP_POPULATE, tmpfs), 2GB) took
18 seconds on small pages, and used to take 1 second on huge pages, but
now takes 119 milliseconds on huge pages. Mapping by pmds a second time
used to take 860ms and now takes 92ms; mapping by pmds after mapping by
ptes (when the scan is needed) used to take 870ms and now takes 495ms.
But there might be some benchmarks which would show a slowdown, because
tail struct pages now fall out of cache until final freeing checks them.

Link: https://lkml.kernel.org/r/47ad693-717-79c8-e1ba-46c3a6602e48@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: James Houghton <jthoughton@google.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Cc: Peter Xu <peterx@redhat.com>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zach O'Keefe <zokeefe@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
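
A minimal userspace model of the bookkeeping described above: keeping a
subpages_mapcount total next to the head means folio_mapped()-style checks
need no scan of per-subpage counts, whereas the old-style total still walks
every subpage. All names here are illustrative, not the kernel's.

/*
 * Userspace model (not kernel code): a running subpages_mapcount total
 * near the head makes "is this folio mapped?" a two-field check, while
 * the old total_mapcount() had to touch every subpage's cacheline.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_SUBPAGES 512

struct model_folio {
        int compound_mapcount;                  /* PMD (whole-folio) mappings */
        int subpages_mapcount;                  /* running total of PTE maps */
        int subpage_mapcount[NR_SUBPAGES];      /* per-subpage counts */
};

/* Old style: walk every subpage to learn the total. */
static int model_total_mapcount_slow(const struct model_folio *f)
{
        int total = f->compound_mapcount;

        for (int i = 0; i < NR_SUBPAGES; i++)
                total += f->subpage_mapcount[i];
        return total;
}

/* New style: the head-adjacent total answers the question directly. */
static bool model_folio_mapped(const struct model_folio *f)
{
        return f->compound_mapcount > 0 || f->subpages_mapcount > 0;
}

int main(void)
{
        static struct model_folio f;

        f.compound_mapcount = 1;                /* one PMD mapping */
        f.subpage_mapcount[3] = 1;              /* one PTE mapping of subpage 3 */
        f.subpages_mapcount = 1;
        printf("slow total=%d, mapped=%d\n",
               model_total_mapcount_slow(&f), model_folio_mapped(&f));
        return 0;
}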



Revision tags: v6.1-rc3, v6.1-rc2
# 14e77332 22-Oct-2022 Nick Terrell <terrelln@fb.com>

Merge branch 'main' into zstd-next


Revision tags: v6.1-rc1
# 5f8f8574 10-Oct-2022 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge branch 'next' into for-linus

Prepare input updates for 6.1 merge window.


# edd1533d 05-Oct-2022 Benjamin Tissoires <benjamin.tissoires@redhat.com>

Merge branch 'for-6.1/logitech' into for-linus

- Add handling of all Bluetooth HID++ devices and fixes in hid++
(Bastien Nocera)


# 97acb6a8 03-Oct-2022 Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Merge drm/drm-next into drm-intel-gt-next

Daniele needs 84d4333c1e28 ("misc/mei: Add NULL check to component match
callback functions") in order to merge the DG2 HuC patches.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>



Revision tags: v6.0
# 305a72ef 02-Oct-2022 Dan Williams <dan.j.williams@intel.com>

Merge branch 'for-6.1/nvdimm' into libnvdimm-for-next

Add v6.1 content on top of some straggling updates that missed v6.0.


# 70d1b1a7 27-Sep-2022 Leon Romanovsky <leonro@nvidia.com>

Merge branch 'mlx5-vfio' into mlx5-next

Merge net/mlx5 dependencies for device DMA logging.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>


Revision tags: v6.0-rc7
# b3bbcc5d 25-Sep-2022 Dan Williams <dan.j.williams@intel.com>

Merge branch 'for-6.0/dax' into libnvdimm-fixes

Pick up another "Soft Reservation" fix for v6.0-final on top of some
straggling nvdimm fixes that missed v5.19.


# 74656d03 21-Sep-2022 Ingo Molnar <mingo@kernel.org>

Merge tag 'v6.0-rc6' into locking/core, to refresh the branch

Signed-off-by: Ingo Molnar <mingo@kernel.org>


Revision tags: v6.0-rc6
# a108772d 14-Sep-2022 Maxime Ripard <maxime@cerno.tech>

Merge drm/drm-next into drm-misc-next

We need 6.0-rc1 to merge the backlight rework PR.

Signed-off-by: Maxime Ripard <maxime@cerno.tech>


Revision tags: v6.0-rc5
# 2a906db2 06-Sep-2022 Tony Lindgren <tony@atomide.com>

Merge branch 'am5748-fix' into fixes


Revision tags: v6.0-rc4
# 10438976 02-Sep-2022 Ingo Molnar <mingo@kernel.org>

Merge branch 'linus' into x86/mm, to refresh the branch

This branch is ~14k commits behind upstream, and has an old merge base
from early into the merge window, refresh it to v6.0-rc3+fixes before
queueing up new commits.

Signed-off-by: Ingo Molnar <mingo@kernel.org>



# 53aa930d 30-Aug-2022 Ingo Molnar <mingo@kernel.org>

Merge branch 'sched/warnings' into sched/core, to pick up WARN_ON_ONCE() conversion commit

Merge in the BUG_ON() => WARN_ON_ONCE() conversion commit.

Signed-off-by: Ingo Molnar <mingo@kernel.org>


# 917bda9a 29-Aug-2022 Jani Nikula <jani.nikula@intel.com>

Merge drm/drm-next into drm-intel-next

Sync drm-intel-next with v6.0-rc as well as recent drm-intel-gt-next.

Since drm-next does not have commit f0c70d41e4e8 ("drm/i915/guc: remove
runtime info printing from time stamp logging") yet, only
drm-intel-gt-next, will need to do that as part of the merge here to
build.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>



Revision tags: v6.0-rc3, v6.0-rc2
# 93fbff11 17-Aug-2022 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge branch 'i2c/make_remove_callback_void-immutable' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux into next

Sync up with the latest I2C code base to get updated prototype of I2C
bus remove() method.


