rust: Introduce atomic API helpers

In order to support LKMM atomics in Rust, add rust_helper_* for atomic
APIs. These helpers ensure the implementation of LKMM atomics in Rust is
the same as in C. This could save the maintenance burden of having two
similar atomic implementations in asm.

Originally-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Link: https://lore.kernel.org/all/20250719030827.61357-2-boqun.feng@gmail.com/
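For illustration, such a helper is a thin C wrapper that Rust code can
call instead of re-implementing the atomic in asm. A minimal sketch of
the shape (the helper names and exact set here are assumptions, not the
generated list):

  #include <linux/atomic.h>

  /* Sketch only: the rust_helper_* shape assumed from the description. */
  int rust_helper_atomic_read(const atomic_t *v)
  {
          return atomic_read(v);
  }

  int rust_helper_atomic_fetch_add_relaxed(int i, atomic_t *v)
  {
          return atomic_fetch_add_relaxed(i, v);
  }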
locking/atomic: scripts: fix ${atomic}_sub_and_test() kerneldoc

For ${atomic}_sub_and_test() the @i parameter is the value to subtract,
not add. Fix the typo in the kerneldoc template and generate the headers
with this update.

Fixes: ad8110706f38 ("locking/atomic: scripts: generate kerneldoc comments")
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Carlos Llamas <cmllamas@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20240515133844.3502360-1-cmllamas@google.com
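For reference, the corrected comment as instantiated for the plain
atomic_sub_and_test() would read roughly as follows (a sketch in the
generated kerneldoc style; the exact wording is assumed):

  /**
   * atomic_sub_and_test() - atomic subtract and test if zero with full ordering
   * @i: int value to subtract
   * @v: pointer to atomic_t
   *
   * Atomically updates @v to (@v - @i) with full ordering.
   *
   * Return: @true if the resulting value of @v is zero, @false otherwise.
   */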
locking/atomic: scripts: Clarify ordering of conditional atomics

Conditional atomic operations (e.g. cmpxchg()) only provide ordering
when the condition holds; when the condition does not hold, the location
is not modified and relaxed ordering is provided. Where ordering is
needed for failed conditional atomics, it is necessary to use
smp_mb__before_atomic() and/or smp_mb__after_atomic().

This is explained tersely in memory-barriers.txt, and is implied but not
explicitly stated in the kerneldoc comments for the conditional
operations. The lack of an explicit statement has led to some off-list
queries about the ordering semantics of failing conditional operations,
so evidently this is confusing.

Update the kerneldoc comments to explicitly describe the lack of ordering
for failed conditional atomic operations.

For most conditional atomic operations, this is written as:

| If (${condition}), atomically updates @v to (${new}) with ${desc_order} ordering.
| Otherwise, @v is not modified and relaxed ordering is provided.

For the try_cmpxchg() operations, this is written as:

| If (${condition}), atomically updates @v to @new with ${desc_order} ordering.
| Otherwise, @v is not modified, @old is updated to the current value of @v,
| and relaxed ordering is provided.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
Link: https://lore.kernel.org/r/20240209124010.2096198-1-mark.rutland@arm.com
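As a hypothetical illustration of the documented semantics (claim_slot()
is invented for this example, not kernel code):

  static bool claim_slot(atomic_t *owner, int me)
  {
          int free = 0;

          if (atomic_try_cmpxchg(owner, &free, me))
                  return true;    /* success: fully ordered RMW */

          /*
           * Failure: @owner is not modified and only relaxed ordering
           * is provided, so callers needing ordering on this path must
           * add it explicitly, per the guidance above.
           */
          smp_mb__after_atomic();
          return false;
  }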
locking/atomic: Add generic support for sync_try_cmpxchg() and its fallback

Provide the generic sync_try_cmpxchg() function from the
raw_ prefixed version, also adding explicit instrumentation.

The patch amends existing scripts to generate the sync_try_cmpxchg()
locking primitive and its raw_sync_try_cmpxchg() fallback, while
leaving existing macros from the try_cmpxchg() family unchanged.

The target can define its own arch_sync_try_cmpxchg() to override the
generic version of raw_sync_try_cmpxchg(). This allows the target
to generate more optimal assembly than the generic version.

Additionally, the patch renames two scripts to better reflect
what they really do.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
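The fallback presumably mirrors the existing try_cmpxchg() pattern,
deriving the boolean form from raw_sync_cmpxchg(); a minimal sketch
under that assumption:

  #define raw_sync_try_cmpxchg(_ptr, _oldp, _new) \
  ({ \
          typeof(*(_ptr)) *___op = (_oldp), ___o = *___op, ___r; \
          ___r = raw_sync_cmpxchg((_ptr), ___o, (_new)); \
          if (unlikely(___r != ___o)) \
                  *___op = ___r; /* update @old on failure */ \
          likely(___r == ___o); \
  })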
locking/atomic: scripts: fix fallback ifdeffery

Since commit:

  9257959a6e5b4fca ("locking/atomic: scripts: restructure fallback ifdeffery")

the ordering fallbacks for atomic*_read_acquire() and
atomic*_set_release() erroneously fall back to the implicitly relaxed
atomic*_read() and atomic*_set() variants respectively, without any
additional barriers. This loses the ACQUIRE and RELEASE ordering
semantics, which can result in a wide variety of problems, even on
strongly-ordered architectures where the implementation of
atomic*_read() and/or atomic*_set() allows the compiler to reorder those
relative to other accesses.

In practice this has been observed to break bit spinlocks on arm64,
resulting in dentry cache corruption.

The fallback logic was intended to allow ACQUIRE/RELEASE/RELAXED ops to
be defined in terms of FULL ops, but where an op had RELAXED ordering by
default, this unintentionally permitted the ACQUIRE/RELEASE ops to be
defined in terms of the implicitly RELAXED default.

This patch corrects the logic to avoid falling back to implicitly
RELAXED ops, resulting in the same behaviour as prior to commit
9257959a6e5b4fca.

I've verified the resulting assembly on arm64 by generating outlined
wrappers of the atomics. Prior to this patch the compiler generates
sequences using relaxed load (LDR) and store (STR) instructions, e.g.

| <outlined_atomic64_read_acquire>:
|         ldr     x0, [x0]
|         ret
|
| <outlined_atomic64_set_release>:
|         str     x1, [x0]
|         ret

With this patch applied the compiler generates sequences using the
intended load-acquire (LDAR) and store-release (STLR) instructions, e.g.

| <outlined_atomic64_read_acquire>:
|         ldar    x0, [x0]
|         ret
|
| <outlined_atomic64_set_release>:
|         stlr    x1, [x0]
|         ret

To make sure that there were no other victims of the ifdeffery rewrite,
I generated outlined copies of all of the {atomic,atomic64,atomic_long}
atomic operations before and after commit 9257959a6e5b4fca. A diff of
the generated assembly on arm64 shows that only the read_acquire() and
set_release() operations were changed, and only lost their intended
ordering:

| [mark@lakrids:~/src/linux]% diff -u \
|     <(aarch64-linux-gnu-objdump -d before-9257959a6e5b4fca.o) \
|     <(aarch64-linux-gnu-objdump -d after-9257959a6e5b4fca.o)
| --- /proc/self/fd/11    2023-09-19 16:51:51.114779415 +0100
| +++ /proc/self/fd/16    2023-09-19 16:51:51.114779415 +0100
| @@ -1,5 +1,5 @@
|
| -before-9257959a6e5b4fca.o:     file format elf64-littleaarch64
| +after-9257959a6e5b4fca.o:     file format elf64-littleaarch64
|
|
| Disassembly of section .text:
| @@ -9,7 +9,7 @@
|        4: d65f03c0        ret
|
| 0000000000000008 <outlined_atomic_read_acquire>:
| -      8: 88dffc00        ldar    w0, [x0]
| +      8: b9400000        ldr     w0, [x0]
|        c: d65f03c0        ret
|
| 0000000000000010 <outlined_atomic_set>:
| @@ -17,7 +17,7 @@
|       14: d65f03c0        ret
|
| 0000000000000018 <outlined_atomic_set_release>:
| -     18: 889ffc01        stlr    w1, [x0]
| +     18: b9000001        str     w1, [x0]
|       1c: d65f03c0        ret
|
| 0000000000000020 <outlined_atomic_add>:
| @@ -1230,7 +1230,7 @@
|     1070: d65f03c0        ret
|
| 0000000000001074 <outlined_atomic64_read_acquire>:
| -   1074: c8dffc00        ldar    x0, [x0]
| +   1074: f9400000        ldr     x0, [x0]
|     1078: d65f03c0        ret
|
| 000000000000107c <outlined_atomic64_set>:
| @@ -1238,7 +1238,7 @@
|     1080: d65f03c0        ret
|
| 0000000000001084 <outlined_atomic64_set_release>:
| -   1084: c89ffc01        stlr    x1, [x0]
| +   1084: f9000001        str     x1, [x0]
|     1088: d65f03c0        ret
|
| 000000000000108c <outlined_atomic64_add>:
| @@ -2427,7 +2427,7 @@
|     207c: d65f03c0        ret
|
| 0000000000002080 <outlined_atomic_long_read_acquire>:
| -   2080: c8dffc00        ldar    x0, [x0]
| +   2080: f9400000        ldr     x0, [x0]
|     2084: d65f03c0        ret
|
| 0000000000002088 <outlined_atomic_long_set>:
| @@ -2435,7 +2435,7 @@
|     208c: d65f03c0        ret
|
| 0000000000002090 <outlined_atomic_long_set_release>:
| -   2090: c89ffc01        stlr    x1, [x0]
| +   2090: f9000001        str     x1, [x0]
|     2094: d65f03c0        ret
|
| 0000000000002098 <outlined_atomic_long_add>:

I've build tested this with a variety of configs for alpha, arm, arm64,
csky, i386, m68k, microblaze, mips, nios2, openrisc, powerpc, riscv,
s390, sh, sparc, x86_64, and xtensa, for which I've seen no issues. I
was unable to build test for ia64 and parisc due to existing build
breakage in v6.6-rc2.

Fixes: 9257959a6e5b4fca ("locking/atomic: scripts: restructure fallback ifdeffery")
Reported-by: Ming Lei <ming.lei@redhat.com>
Reported-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Baokun Li <libaokun1@huawei.com>
Link: https://lkml.kernel.org/r/20230919171430.2697727-1-mark.rutland@arm.com
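For concreteness, the corrected fallback for one ordering variant might
look like the following sketch (structure assumed from the surrounding
commits; the generated code may differ in detail):

  static __always_inline int
  raw_atomic_read_acquire(const atomic_t *v)
  {
  #if defined(arch_atomic_read_acquire)
          return arch_atomic_read_acquire(v);
  #else
          int ret;

          /* Never fall back to the implicitly relaxed read alone. */
          if (__native_word(atomic_t)) {
                  ret = smp_load_acquire(&(v)->counter);
          } else {
                  ret = raw_atomic_read(v);
                  __atomic_acquire_fence();
          }

          return ret;
  #endif
  }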
locking/atomic: scripts: fix ${atomic}_dec_if_positive() kerneldoc

The ${atomic}_dec_if_positive() ops are unlike all the other conditional
atomic ops. Rather than returning a boolean success value, these return
the value that the atomic variable would be updated to, even when no
update is performed.

We missed this when adding kerneldoc comments, and the documentation for
${atomic}_dec_if_positive() erroneously states:

| Return: @true if @v was updated, @false otherwise.

Ideally we'd clean this up by aligning ${atomic}_dec_if_positive() with
the usual atomic op conventions: with ${atomic}_fetch_dec_if_positive()
for those who care about the value of the variable, and
${atomic}_dec_if_positive() returning a boolean success value.

In the meantime, align the documentation with the current reality.

Fixes: ad8110706f381170 ("locking/atomic: scripts: generate kerneldoc comments")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
Link: https://lore.kernel.org/r/20230615132734.1119765-1-mark.rutland@arm.com
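A hypothetical caller illustrating the return convention (put_ref() is
invented for this example):

  static void put_ref(atomic_t *refs)
  {
          /* Returns what @refs would be updated to, not a boolean. */
          int ret = atomic_dec_if_positive(refs);

          if (ret < 0) {
                  /* @refs was <= 0 and was not modified */
          } else if (ret == 0) {
                  /* we decremented the last reference away */
          }
  }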
locking/atomic: scripts: generate kerneldoc comments

Currently the atomics are documented in Documentation/atomic_t.txt, and
have no kerneldoc comments. There are a sufficient number of gotchas
(e.g. semantics, noinstr-safety) that it would be nice to have comments
to call these out, and it would be nice to have kerneldoc comments such
that these can be collated.

While it's possible to derive the semantics from the code, this can be
painful given the amount of indirection we currently have (e.g. fallback
paths), and it's easy to be misled by naming, e.g.

* The unconditional void-returning ops *only* have relaxed variants
  without a _relaxed suffix, and can easily be mistaken for being fully
  ordered.

  It would be nice to give these a _relaxed() suffix, but this would
  result in significant churn throughout the kernel.

* Our naming of conditional and unconditional+test ops is rather
  inconsistent, and it can be difficult to derive the name of an
  operation, or to identify where an op is conditional or
  unconditional+test.

  Some ops are clearly conditional:
  - dec_if_positive
  - add_unless
  - dec_unless_positive
  - inc_unless_negative

  Some ops are clearly unconditional+test:
  - sub_and_test
  - dec_and_test
  - inc_and_test

  However, what exactly those test is not obvious. A _test_zero suffix
  might be clearer.

  Others could be read ambiguously:
  - inc_not_zero   // conditional
  - add_negative   // unconditional+test

  It would probably be worth renaming these, e.g. to inc_unless_zero and
  add_test_negative.

As a step towards making this more consistent and easier to understand,
this patch adds kerneldoc comments for all generated *atomic*_*()
functions. These are generated from templates, with some common text
shared, making it easy to extend these in future if necessary.

I've tried to make these as consistent and clear as possible, and I've
deliberately ensured:

* All ops have their ordering explicitly mentioned in the short and
  long description.

* All test ops have "test" in their short description.

* All ops are described as an expression using their usual C operator.
  For example:

  andnot: "Atomically updates @v to (@v & ~@i)"
  inc:    "Atomically updates @v to (@v + 1)"

  Which may be clearer to non-native English speakers, and allows all
  the operations to be described in the same style.

* All conditional ops have their condition described as an expression
  using the usual C operators. For example:

  add_unless: "If (@v != @u), atomically updates @v to (@v + @i)"
  cmpxchg:    "If (@v == @old), atomically updates @v to @new"

  Which may be clearer to non-native English speakers, and allows all
  the operations to be described in the same style.

* All bitwise ops (and,andnot,or,xor) explicitly mention that they are
  bitwise in their short description, so that they are not mistaken for
  performing their logical equivalents.

* The noinstr safety of each op is explicitly described, with a
  description of whether or not to use the raw_ form of the op.

There should be no functional change as a result of this patch.

Reported-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-26-mark.rutland@arm.com
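A sketch of the resulting style for a single op, assembled from the
template wording quoted above (the exact generated text is assumed):

  /**
   * atomic_inc() - atomic increment with relaxed ordering
   * @v: pointer to atomic_t
   *
   * Atomically updates @v to (@v + 1) with relaxed ordering.
   *
   * Unsafe to use in noinstr code; use raw_atomic_inc() there.
   *
   * Return: Nothing.
   */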
locking/atomic: scripts: simplify raw_atomic*() definitions

Currently each ordering variant has several potential definitions,
with a mixture of preprocessor and C definitions, including several
copies of its C prototype, e.g.

| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
|         int ret = arch_atomic_fetch_andnot_relaxed(i, v);
|         __atomic_acquire_fence();
|         return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
|         return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Make this a bit simpler by defining the C prototype once, and writing
the various potential definitions as plain C code guarded by ifdeffery.
For example, the above becomes:

| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
| #if defined(arch_atomic_fetch_andnot_acquire)
|         return arch_atomic_fetch_andnot_acquire(i, v);
| #elif defined(arch_atomic_fetch_andnot_relaxed)
|         int ret = arch_atomic_fetch_andnot_relaxed(i, v);
|         __atomic_acquire_fence();
|         return ret;
| #elif defined(arch_atomic_fetch_andnot)
|         return arch_atomic_fetch_andnot(i, v);
| #else
|         return raw_atomic_fetch_and_acquire(~i, v);
| #endif
| }

Which is far easier to read. As we now always have a single copy of the
C prototype wrapping all the potential definitions, we now have an
obvious single location for kerneldoc comments.

At the same time, the fallbacks for raw_atomic*_xchg() are made to use
'new' rather than 'i' as the name of the new value. This is what the
existing fallback template used, and is more consistent with the
raw_atomic{_try,}cmpxchg() fallbacks.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-24-mark.rutland@arm.com
locking/atomic: scripts: simplify raw_atomic_long*() definitions

Currently, atomic-long is split into two sections, one defining the
raw_atomic_long_*() ops for CONFIG_64BIT, and one defining the
raw_atomic_long_*() ops for !CONFIG_64BIT.

With many lines elided, this looks like:

| #ifdef CONFIG_64BIT
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
|         return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| }
| ...
| #else /* CONFIG_64BIT */
| ...
| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
|         return raw_atomic_try_cmpxchg(v, (int *)old, new);
| }
| ...
| #endif

The two definitions are spread far apart in the file, and duplicate the
prototype, making it hard to have a legible set of kerneldoc comments.

Make this simpler by defining the C prototype once, and writing the two
definitions inline. For example, the above becomes:

| static __always_inline bool
| raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new)
| {
| #ifdef CONFIG_64BIT
|         return raw_atomic64_try_cmpxchg(v, (s64 *)old, new);
| #else
|         return raw_atomic_try_cmpxchg(v, (int *)old, new);
| #endif
| }

As we now always have a single copy of the C prototype wrapping all the
potential definitions, we now have an obvious single location for
kerneldoc comments. As a bonus, both the script and the generated file
are somewhat shorter.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-23-mark.rutland@arm.com
locking/atomic: scripts: split pfx/name/sfx/order

Currently gen-atomic-long.sh's gen_proto_order_variant() function
combines the pfx/name/sfx/order variables immediately, unlike other
functions in gen-atomic-*.sh.

This is fine today, but subsequent patches will require the individual
pfx/name/sfx/order variables within gen-atomic-long.sh's
gen_proto_order_variant() function. In preparation for this, split the
variables in the style of other gen-atomic-*.sh scripts.

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-22-mark.rutland@arm.com
locking/atomic: scripts: restructure fallback ifdeffery

Currently the various ordering variants of an atomic operation are
defined in groups of full/acquire/release/relaxed ordering variants with
some shared ifdeffery and several potential definitions of each ordering
variant in different branches of the shared ifdeffery.

As an ordering variant can have several potential definitions down
different branches of the shared ifdeffery, it can be painful for a
human to find a relevant definition, and we don't have a good location
to place anything common to all definitions of an ordering variant (e.g.
kerneldoc).

Historically the grouping of full/acquire/release/relaxed ordering
variants was necessary as we filled in the missing atomics in the same
namespace as the architecture used. It would be easy to accidentally
define one ordering fallback in terms of another ordering fallback with
redundant barriers, and avoiding that would otherwise require a lot of
baroque ifdeffery.

With recent changes we no longer need to fill in the missing atomics in
the arch_atomic*_<op>() namespace, and only need to fill in the
raw_atomic*_<op>() namespace. Due to this, there's no risk of a
namespace collision, and we can define each raw_atomic*_<op> ordering
variant with its own ifdeffery checking for the arch_atomic*_<op>
ordering variants.

Restructure the fallbacks in this way, with each ordering variant having
its own ifdeffery of the form:

| #if defined(arch_atomic_fetch_andnot_acquire)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot_acquire
| #elif defined(arch_atomic_fetch_andnot_relaxed)
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
|         int ret = arch_atomic_fetch_andnot_relaxed(i, v);
|         __atomic_acquire_fence();
|         return ret;
| }
| #elif defined(arch_atomic_fetch_andnot)
| #define raw_atomic_fetch_andnot_acquire arch_atomic_fetch_andnot
| #else
| static __always_inline int
| raw_atomic_fetch_andnot_acquire(int i, atomic_t *v)
| {
|         return raw_atomic_fetch_and_acquire(~i, v);
| }
| #endif

Note that where there's no relevant arch_atomic*_<op>() ordering
variant, we'll define the operation in terms of a distinct
raw_atomic*_<otherop>(), as this itself might have been filled in with a
fallback.

As we now generate the raw_atomic*_<op>() implementations directly, we
no longer need the trivial wrappers, so they are removed.

This makes the ifdeffery easier to follow, and will allow for further
improvements in subsequent patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-21-mark.rutland@arm.com
locking/atomic: scripts: build raw_atomic_long*() directly

Now that arch_atomic*() usage is limited to the atomic headers, we no
longer have any users of arch_atomic_long_*(), and can generate
raw_atomic_long_*() directly.

Generate the raw_atomic_long_*() ops directly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-20-mark.rutland@arm.com
locking/atomic: scripts: add trivial raw_atomic*_<op>()

Currently a number of arch_atomic*_<op>() functions are optional, and
where an arch does not provide a given arch_atomic*_<op>() we will
define an implementation of arch_atomic*_<op>() in
atomic-arch-fallback.h.

Filling in the missing ops requires special care as we want to select
the optimal definition of each op (e.g. preferentially defining ops in
terms of their relaxed form rather than their fully-ordered form). The
ifdeffery necessary for this requires us to group ordering variants
together, which can be a bit painful to read, and is painful for
kerneldoc generation.

It would be easier to handle this if we generated ops into a separate
namespace, as this would remove the need to take special care with the
ifdeffery, and allow each ordering variant to be generated separately.

This patch adds a new set of raw_atomic_<op>() definitions, which are
currently trivial wrappers of their arch_atomic_<op>() equivalent. This
will allow us to move treewide users of arch_atomic_<op>() over to the
raw atomic ops before we rework the fallback generation to generate
raw_atomic_<op>() directly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-18-mark.rutland@arm.com
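A sketch of the trivial wrapper shape for one op (the real patch
generates these across the whole API):

  static __always_inline int
  raw_atomic_fetch_add(int i, atomic_t *v)
  {
          /* Trivial pass-through to the arch op, for now. */
          return arch_atomic_fetch_add(i, v);
  }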
locking/atomic: scripts: factor out order template generation

Currently gen_proto_order_variants() hard codes the path for the
templates used for order fallbacks. Factor this out into a helper so
that it can be reused elsewhere.

This results in no change to the generated headers, so there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-17-mark.rutland@arm.com
locking/atomic: scripts: remove leftover "${mult}"

We removed cmpxchg_double() and variants in commit:

  b4cf83b2d1da40b2 ("arch: Remove cmpxchg_double")

which removed the need for "${mult}" in the instrumentation logic.
Unfortunately we missed an instance of "${mult}".

There is no change to the generated header.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-16-mark.rutland@arm.com
locking/atomic: scripts: remove bogus order parameter

At the start of gen_proto_order_variants(), the ${order} variable is not
yet defined, and will be substituted with an empty string.

Replace the current bogus use of ${order} with an empty string instead.

This results in no change to the generated headers.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-15-mark.rutland@arm.com
locking/atomic: make atomic*_{cmp,}xchg optional

Most architectures define the atomic/atomic64 xchg and cmpxchg
operations in terms of arch_xchg and arch_cmpxchg respectively.

Add fallbacks for these cases and remove the trivial cases from arch
code. On some architectures the existing definitions are kept as these
are used to build other arch_atomic*() operations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-5-mark.rutland@arm.com
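A sketch of the kind of fallback added, assuming the usual guarded form:

  #ifndef arch_atomic_xchg
  static __always_inline int
  arch_atomic_xchg(atomic_t *v, int new)
  {
          /* Build the atomic op from the plain arch_xchg(). */
          return arch_xchg(&v->counter, new);
  }
  #define arch_atomic_xchg arch_atomic_xchg
  #endif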
locking/atomic: remove fallback comments

Currently a subset of the fallback templates have kerneldoc comments,
resulting in a haphazard set of generated kerneldoc comments as only
some operations have fallback templates to begin with.

We'd like to generate more consistent kerneldoc comments, and to do so
we'll need to restructure the way the fallback code is generated.

To minimize churn and to make it easier to restructure the fallback
code, this patch removes the existing kerneldoc comments from the
fallback templates. We can add new kerneldoc comments in subsequent
patches.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20230605070124.3741859-3-mark.rutland@arm.com
arch: Remove cmpxchg_double

No moar users, remove the monster.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Heiko Carstens <hca@linux.ibm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230531132323.991907085@infradead.org
instrumentation: Wire up cmpxchg128()

Wire up the cmpxchg128 family in the atomic wrapper scripts.

These provide the generic cmpxchg128 family of functions from the
arch_ prefixed version, adding explicit instrumentation where needed.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230531132323.519237070@infradead.org
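The instrumented wrapper presumably follows the existing cmpxchg64()
pattern; a minimal sketch under that assumption:

  #define cmpxchg128(ptr, ...) \
  ({ \
          typeof(ptr) __ai_ptr = (ptr); \
          kcsan_mb(); \
          instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
          arch_cmpxchg128(__ai_ptr, __VA_ARGS__); \
  })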
locking/atomic: Correct (cmp)xchg() instrumentation

All xchg() and cmpxchg() ops are atomic RMWs, but currently we
instrument these with instrument_atomic_write() rather than
instrument_atomic_read_write(), missing the read aspect.

Similarly, all try_cmpxchg() ops are non-atomic RMWs on *oldp, but we
instrument these accesses with instrument_atomic_write() rather than
instrument_read_write(), missing the read aspect and erroneously marking
these as atomic.

Fix the instrumentation for both points.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20230413160644.490976-1-mark.rutland@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
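A sketch of the corrected try_cmpxchg() wrapper, assuming the generated
form: the RMW on @ptr is instrumented as an atomic read-write, and the
plain update of *@oldp as a non-atomic read-write:

  #define try_cmpxchg(ptr, oldp, ...) \
  ({ \
          typeof(ptr) __ai_ptr = (ptr); \
          typeof(oldp) __ai_oldp = (oldp); \
          kcsan_mb(); \
          instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr)); \
          instrument_read_write(__ai_oldp, sizeof(*__ai_oldp)); \
          arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
  })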
locking/atomic: Add generic try_cmpxchg{,64}_local() support

Add generic support for try_cmpxchg{,64}_local() and their fallbacks.

These provide the generic try_cmpxchg_local family of functions
from the arch_ prefixed version, also adding explicit instrumentation.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230405141710.3551-2-ubizjak@gmail.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
atomics: Provide atomic_add_negative() variants

atomic_add_negative() does not provide the relaxed/acquire/release
variants.

Provide them in preparation for a new scalable reference count algorithm.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20230323102800.101763813@linutronix.de
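One of the new variants' fallbacks presumably derives from the
corresponding add_return form; a sketch under that assumption:

  #ifndef arch_atomic_add_negative_relaxed
  static __always_inline bool
  arch_atomic_add_negative_relaxed(int i, atomic_t *v)
  {
          /* True if the result of (@v + @i) is negative. */
          return arch_atomic_add_return_relaxed(i, v) < 0;
  }
  #define arch_atomic_add_negative_relaxed arch_atomic_add_negative_relaxed
  #endif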
Fix up more non-executable files marked executable

Joe found another DT file that shouldn't be executable, and that
frustrated me enough that I went hunting with this script:

    git ls-files -s |
        grep '^100755' |
        cut -f2 |
        xargs grep -L '^#!'

and that found another file that shouldn't have been marked executable
either, despite being in the scripts directory.

Maybe these two are the last ones at least for now. But I'm sure we'll
be back in a few years, fixing things up again.

Fixes: 8c6789f4e2d4 ("ASoC: dt-bindings: Add Everest ES8326 audio CODEC")
Fixes: 4d8e5cd233db ("locking/atomics: Fix scripts/atomic/ script permissions")
Reported-by: Joe Perches <joe@perches.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
kbuild: check sha1sum just once for each atomic header

There is no need to check the sha1sum every time. Create timestamp
files to manage it.

Add '.' to clean-dirs because 'make clean' must visit ./Kbuild to
clean up the timestamp files.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>