| c0d597e0 | 12-Nov-2025 |
Ard Biesheuvel <ardb@kernel.org> |
lib/crypto: arm/blake2b: Move to scoped ksimd API
Even though ARM's versions of kernel_neon_begin()/_end() are not being changed, update the newly migrated ARM blake2b code to the scoped ksimd API so that all ARM and arm64 code in lib/crypto remains consistent in this respect.
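For context, a minimal sketch of what such a conversion looks like, assuming the scoped_ksimd() form implied by the subject line (the macro name and the surrounding function are illustrative, not taken from the patch):

    /* Before: explicit begin/end pair around the NEON call */
    static void blake2b_compress_old(struct blake2b_state *state,
                                     const u8 *data, size_t nblocks, u32 inc)
    {
            kernel_neon_begin();
            blake2b_compress_neon(state, data, nblocks, inc);
            kernel_neon_end();
    }

    /* After: the scoped form ends the SIMD section automatically when
     * control leaves the block */
    static void blake2b_compress_new(struct blake2b_state *state,
                                     const u8 *data, size_t nblocks, u32 inc)
    {
            scoped_ksimd() {
                    blake2b_compress_neon(state, data, nblocks, inc);
            }
    }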
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 95ce85de | 02-Nov-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: arm, arm64: Drop filenames from file comments
Remove self-references to filenames from assembly files in lib/crypto/arm/ and lib/crypto/arm64/. This follows the recommended practice and eliminates an outdated reference to sha2-ce-core.S.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251102014809.170713-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| ba6617bd | 18-Oct-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: arm/blake2b: Migrate optimized code into library
Migrate the arm-optimized BLAKE2b code from arch/arm/crypto/ to lib/crypto/arm/. This makes the BLAKE2b library able to use it, and it also simplifies the code because it's easier to integrate with the library than crypto_shash.
This temporarily makes the arm-optimized BLAKE2b code unavailable via crypto_shash. A later commit reimplements the blake2b-* crypto_shash algorithms on top of the BLAKE2b library API, making it available again.
Note that as per the lib/crypto/ convention, the optimized code is now enabled by default. So, this also fixes the longstanding issue where the optimized BLAKE2b code was not enabled by default.
To see the diff from arch/arm/crypto/blake2b-neon-glue.c to lib/crypto/arm/blake2b.h, view this commit with 'git show -M10'.
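For illustration, hashing a buffer with the library API, assuming the new BLAKE2b functions mirror the existing BLAKE2s ones (the function name, header path, and parameter order are assumptions):

    #include <crypto/blake2b.h>

    /* Unkeyed one-shot 256-bit hash; the optimized compression code is
     * selected inside the library. */
    static void blake2b_demo(const u8 *msg, size_t msg_len,
                             u8 digest[BLAKE2B_256_HASH_SIZE])
    {
            blake2b(digest, msg, NULL /* key */, BLAKE2B_256_HASH_SIZE,
                    msg_len, 0 /* keylen */);
    }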
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251018043106.375964-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 5385bcbf | 18-Oct-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: blake2s: Drop excessive const & rename block => data
A couple more small cleanups to the BLAKE2s code before these things get propagated into the BLAKE2b code:
- Drop 'const' from some non-pointer function parameters. It was a bit excessive and not conventional.
- Rename 'block' argument of blake2s_compress*() to 'data'. This is for consistency with the SHA-* code, and also to avoid the implication that it points to a singular "block".
No functional changes.
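Concretely, the prototype change looks roughly like this (reconstructed from the description above; the exact parameter list is an assumption):

    /* Before: */
    void blake2s_compress(struct blake2s_state *state, const u8 *block,
                          size_t nblocks, const u32 inc);

    /* After: 'const' dropped from the by-value parameter, 'block' renamed: */
    void blake2s_compress(struct blake2s_state *state, const u8 *data,
                          size_t nblocks, u32 inc);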
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251018043106.375964-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 68546e56 | 06-Sep-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: curve25519: Consolidate into single module
Reorganize the Curve25519 library code:
- Build a single libcurve25519 module, instead of up to three modules: libcurve25519, libcurve25519-generic, and an arch-specific module.
- Move the arch-specific Curve25519 code from arch/$(SRCARCH)/crypto/ to lib/crypto/$(SRCARCH)/. Centralize the build rules into lib/crypto/Makefile and lib/crypto/Kconfig.
- Include the arch-specific code directly in lib/crypto/curve25519.c via a header, rather than using a separate .c file.
- Eliminate the entanglement with CRYPTO. CRYPTO_LIB_CURVE25519 no longer selects CRYPTO, and the arch-specific Curve25519 code no longer depends on CRYPTO.
This brings Curve25519 in line with the latest conventions for lib/crypto/, used by other algorithms. The exception is that I kept the generic code in separate translation units for now. (Some of the function names collide between the x86 and generic Curve25519 code. And the Curve25519 functions are very long anyway, so inlining doesn't matter as much for Curve25519 as it does for some other algorithms.)
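A minimal sketch of the header-inclusion pattern from the third bullet, as it would appear in lib/crypto/curve25519.c (the config symbol and function names are assumptions based on the conventions described here):

    #ifdef CONFIG_CRYPTO_LIB_CURVE25519_ARCH
    /* Resolves to lib/crypto/$(SRCARCH)/curve25519.h, which supplies the
     * arch-optimized curve25519_arch()/curve25519_base_arch(). */
    #include "curve25519.h"
    #endif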
Link: https://lore.kernel.org/r/20250906213523.84915-11-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 39ee3970 | 27-Aug-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: blake2s: Consolidate into single C translation unit
As was done with the other algorithms, reorganize the BLAKE2s code so that the generic implementation and the arch-specific "glue" code are consolidated into a single translation unit, so that the compiler will inline the functions and automatically decide whether or not to include the generic code in the resulting binary.
Similarly, also consolidate the build rules into lib/crypto/{Makefile,Kconfig}. This removes the last uses of lib/crypto/{arm,x86}/{Makefile,Kconfig}, so remove those too.
Don't keep the !KMSAN dependency. It was needed only for other algorithms such as ChaCha that initialize memory from assembly code.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250827151131.27733-12-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 56e48d4e | 27-Aug-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: blake2s: Always enable arch-optimized BLAKE2s code
When support for a crypto algorithm is enabled, the arch-optimized implementation of that algorithm should be enabled too. We've learned this the hard way many times over the years: people regularly forget to enable the arch-optimized implementations of the crypto algorithms, resulting in significant performance being left on the table.
Currently, BLAKE2s support is always enabled ('obj-y'), since random.c uses it. Therefore, the arch-optimized BLAKE2s code, which exists for ARM and x86_64, should be always enabled too. Let's do that.
Note that the effect on kernel image size is very small and should not be a concern. On ARM, enabling CRYPTO_BLAKE2S_ARM actually *shrinks* the kernel size by about 1200 bytes, since the ARM-optimized blake2s_compress() completely replaces the generic blake2s_compress(). On x86_64, enabling CRYPTO_BLAKE2S_X86 increases the kernel size by about 1400 bytes, as the generic blake2s_compress() is still included as a fallback; however, for context, that is only about a quarter the size of the generic blake2s_compress(). The x86_64 optimized BLAKE2s code uses much less icache at runtime than the generic code.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250827151131.27733-10-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 13cecc52 | 27-Aug-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: chacha: Consolidate into single module
Consolidate the ChaCha code into a single module (excluding chacha-block-generic.c which remains always built-in for random.c), similar to various other algorithms:
- Each arch now provides a header file lib/crypto/$(SRCARCH)/chacha.h, replacing lib/crypto/$(SRCARCH)/chacha*.c. The header defines chacha_crypt_arch() and hchacha_block_arch(). It is included by lib/crypto/chacha.c, and thus the code gets built into the single libchacha module, with improved inlining in some cases.
- Whether arch-optimized ChaCha is buildable is now controlled centrally by lib/crypto/Kconfig instead of by lib/crypto/$(SRCARCH)/Kconfig. The conditions for enabling it remain the same as before, and it remains enabled by default.
- Any additional arch-specific translation units for the optimized ChaCha code, such as assembly files, are now compiled by lib/crypto/Makefile instead of lib/crypto/$(SRCARCH)/Makefile.
This removes the last use for the Makefile and Kconfig files in the arm64, mips, powerpc, riscv, and s390 subdirectories of lib/crypto/. So also remove those files and the references to them.
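A hedged sketch of what such an arch header provides, based on the function names given above (the static key 'have_chacha_simd' and the helper 'chacha_crypt_simd' are hypothetical):

    /* lib/crypto/$(SRCARCH)/chacha.h */
    static void chacha_crypt_arch(struct chacha_state *state, u8 *dst,
                                  const u8 *src, unsigned int bytes,
                                  int nrounds)
    {
            if (static_branch_likely(&have_chacha_simd) && may_use_simd())
                    chacha_crypt_simd(state, dst, src, bytes, nrounds);
            else
                    chacha_crypt_generic(state, dst, src, bytes, nrounds);
    }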
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250827151131.27733-7-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| b646b782 | 29-Aug-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: poly1305: Consolidate into single module
Consolidate the Poly1305 code into a single module, similar to various other algorithms (SHA-1, SHA-256, SHA-512, etc.):
- Each arch now provides a header file lib/crypto/$(SRCARCH)/poly1305.h, replacing lib/crypto/$(SRCARCH)/poly1305*.c. The header defines poly1305_block_init(), poly1305_blocks(), poly1305_emit(), and optionally poly1305_mod_init_arch(). It is included by lib/crypto/poly1305.c, and thus the code gets built into the single libpoly1305 module, with improved inlining in some cases.
- Whether arch-optimized Poly1305 is buildable is now controlled centrally by lib/crypto/Kconfig instead of by lib/crypto/$(SRCARCH)/Kconfig. The conditions for enabling it remain the same as before, and it remains enabled by default. (The PPC64 one remains unconditionally disabled due to 'depends on BROKEN'.)
- Any additional arch-specific translation units for the optimized Poly1305 code, such as assembly files, are now compiled by lib/crypto/Makefile instead of lib/crypto/$(SRCARCH)/Makefile.
A special consideration is needed because the Adiantum code uses the poly1305_core_*() functions directly. For now, just carry forward that approach. This means retaining the CRYPTO_LIB_POLY1305_GENERIC kconfig symbol, and keeping the poly1305_core_*() functions in separate translation units. So it's not quite as streamlined as what I've done with the other hash functions, but we still get a single libpoly1305 module.
Note: to see the diff from the arm, arm64, and x86 .c files to the new .h files, view this commit with 'git show -M10'.
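For reference, the per-arch header interface named in the first bullet looks roughly like this (parameter lists reconstructed, not copied from the patch):

    /* lib/crypto/$(SRCARCH)/poly1305.h -- each arch defines: */
    static void poly1305_block_init(struct poly1305_block_state *state,
                                    const u8 raw_key[POLY1305_BLOCK_SIZE]);
    static void poly1305_blocks(struct poly1305_block_state *state,
                                const u8 *src, unsigned int len, u32 padbit);
    static void poly1305_emit(const struct poly1305_state *state,
                              u8 digest[POLY1305_DIGEST_SIZE],
                              const u32 nonce[4]);
    /* plus, optionally, poly1305_mod_init_arch() for CPU feature setup */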
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250829152513.92459-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| df220cc5 | 29-Aug-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: poly1305: Remove unused function poly1305_is_arch_optimized()
poly1305_is_arch_optimized() is unused, so remove it.
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250829152513.92459-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| bce58166 | 01-Aug-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: sha512: Use underlying functions instead of crypto_simd_usable()
Since sha512_kunit tests the fallback code paths without using crypto_simd_disabled_for_test, make the SHA-512 code just use the underlying may_use_simd() and irq_fpu_usable() functions directly instead of crypto_simd_usable(). This eliminates an unnecessary layer.
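A hedged sketch of the resulting pattern on x86 (the static key and function names are illustrative):

    static void sha512_blocks(struct sha512_block_state *state,
                              const u8 *data, size_t nblocks)
    {
            /* Was gated on crypto_simd_usable(); now the underlying
             * irq_fpu_usable() (x86) / may_use_simd() (arm, arm64). */
            if (static_branch_likely(&have_sha512_insns) &&
                irq_fpu_usable()) {
                    kernel_fpu_begin();
                    sha512_blocks_asm(state, data, nblocks);
                    kernel_fpu_end();
            } else {
                    sha512_blocks_generic(state, data, nblocks);
            }
    }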
Link: https://lore.kernel.org/r/20250731223651.136939-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 70cb6ca5 | 13-Jul-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: arm/sha1: Migrate optimized code into library
Instead of exposing the arm-optimized SHA-1 code via arm-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be arm-optimized, and it fixes the longstanding issue where the arm-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.
To match sha1_blocks(), change the type of the nblocks parameter of the assembly functions from int to size_t. The assembly functions actually already treated it as size_t.
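The corresponding prototype change (the assembly symbol name is the historical ARM one; the state type is an assumption):

    /* before */
    void sha1_block_data_order(struct sha1_block_state *state,
                               const u8 *data, int nblocks);
    /* after: matches sha1_blocks(), and what the asm already assumed */
    void sha1_block_data_order(struct sha1_block_state *state,
                               const u8 *data, size_t nblocks);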
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 6e07c5e1 | 11-Jul-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: arm/poly1305: Remove unneeded empty weak function
Fix poly1305-armv4.pl to not do '.globl poly1305_blocks_neon' when poly1305_blocks_neon() is not defined. Then, remove the empty __weak definition of poly1305_blocks_neon(), which was still needed only because of that unnecessary globl statement. (It also used to be needed because the compiler could generate calls to it when CONFIG_KERNEL_MODE_NEON=n, but that has been fixed.)
Thanks to Arnd Bergmann for reporting that the globl statement in the asm file was still depending on the weak symbol.
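The kind of empty __weak definition being removed, reconstructed for illustration (the exact prototype is an assumption):

    /* Empty fallback that existed only to pair with the stray .globl
     * when the NEON code was not built: */
    void __weak poly1305_blocks_neon(struct poly1305_block_state *state,
                                     const u8 *src, unsigned int len,
                                     u32 padbit)
    {
    }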
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250711212822.6372-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| 52c3e242 | 07-Jul-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: arm/poly1305: Fix register corruption in no-SIMD contexts
Restore the SIMD usability check that was removed by commit 773426f4771b ("crypto: arm/poly1305 - Add block-only interface").
This safety check is cheap, and eliminating this footgun is well worth it. While the Poly1305 functions should not be called when SIMD registers are unusable, if they are anyway, they should just do the right thing instead of corrupting random tasks' registers and/or computing incorrect MACs. Fixing this is also needed for poly1305_kunit to pass.
Just use may_use_simd() instead of the original crypto_simd_usable(), since poly1305_kunit won't rely on crypto_simd_disabled_for_test.
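A hedged sketch of the restored guard in the ARM glue code (the 'have_neon' static key and the scalar fallback name are assumptions):

    void poly1305_blocks_arch(struct poly1305_block_state *state,
                              const u8 *src, unsigned int len, u32 padbit)
    {
            if (!static_branch_likely(&have_neon) || !may_use_simd()) {
                    poly1305_blocks_arm(state, src, len, padbit); /* scalar */
                    return;
            }
            kernel_neon_begin();
            poly1305_blocks_neon(state, src, len, padbit);
            kernel_neon_end();
    }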
Fixes: 773426f4771b ("crypto: arm/poly1305 - Add block-only interface")
Cc: stable@vger.kernel.org
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250706231100.176113-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
| e96cb950 | 30-Jun-2025 |
Eric Biggers <ebiggers@kernel.org> |
lib/crypto: sha256: Consolidate into single module
Consolidate the CPU-based SHA-256 code into a single module, following what I did with SHA-512:
- Each arch now provides a header file lib/crypto/$(SRCARCH)/sha256.h, replacing lib/crypto/$(SRCARCH)/sha256.c. The header defines sha256_blocks() and optionally sha256_mod_init_arch(). It is included by lib/crypto/sha256.c, and thus the code gets built into the single libsha256 module, with proper inlining and dead code elimination.
- sha256_blocks_generic() is moved from lib/crypto/sha256-generic.c into lib/crypto/sha256.c. It's now a static function marked with __maybe_unused, so the compiler automatically eliminates it in any cases where it's not used.
- Whether arch-optimized SHA-256 is buildable is now controlled centrally by lib/crypto/Kconfig instead of by lib/crypto/$(SRCARCH)/Kconfig. The conditions for enabling it remain the same as before, and it remains enabled by default.
- Any additional arch-specific translation units for the optimized SHA-256 code (such as assembly files) are now compiled by lib/crypto/Makefile instead of lib/crypto/$(SRCARCH)/Makefile.
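Putting the pieces above together, lib/crypto/sha256.c plausibly ends up shaped like this (a simplified sketch, not the actual file):

    /* Generic fallback; 'static' plus __maybe_unused lets the compiler
     * drop it whenever the arch header never calls it. */
    static void __maybe_unused
    sha256_blocks_generic(struct sha256_block_state *state,
                          const u8 *data, size_t nblocks)
    {
            /* portable C compression loop, elided */
    }

    #ifdef CONFIG_CRYPTO_LIB_SHA256_ARCH
    #include "sha256.h"   /* lib/crypto/$(SRCARCH)/sha256.h */
    #else
    static void sha256_blocks(struct sha256_block_state *state,
                              const u8 *data, size_t nblocks)
    {
            sha256_blocks_generic(state, data, nblocks);
    }
    #endif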
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160645.3198-13-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>