History log of /linux/lib/crypto/Makefile (Results 1 – 25 of 109)
Revision Date Author Comments
# 370c3883 14-Apr-2026 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux

Pull crypto library updates from Eric Biggers:

- Migrate more hash algorithms from the traditional crypto subsystem to
lib/crypto/

Like the algorithms migrated earlier (e.g. SHA-*), this simplifies
the implementations, improves performance, enables further
simplifications in calling code, and solves various other issues:

- AES CBC-based MACs (AES-CMAC, AES-XCBC-MAC, and AES-CBC-MAC)

- Support these algorithms in lib/crypto/ using the AES library
and the existing arm64 assembly code

- Reimplement the traditional crypto API's "cmac(aes)",
"xcbc(aes)", and "cbcmac(aes)" on top of the library

- Convert mac80211 to use the AES-CMAC library. Note: several
other subsystems can use it too and will be converted later

- Drop the broken, nonstandard, and likely unused support for
"xcbc(aes)" with key lengths other than 128 bits

- Enable optimizations by default

- GHASH

- Migrate the standalone GHASH code into lib/crypto/

- Integrate the GHASH code more closely with the very similar
POLYVAL code, and improve the generic GHASH implementation to
resist cache-timing attacks and use much less memory

- Reimplement the AES-GCM library and the "gcm" crypto_aead
template on top of the GHASH library. Remove "ghash" from the
crypto_shash API, as it's no longer needed

- Enable optimizations by default

- SM3

- Migrate the kernel's existing SM3 code into lib/crypto/, and
reimplement the traditional crypto API's "sm3" on top of it

- I don't recommend using SM3, but this cleanup is worthwhile
to organize the code the same way as other algorithms

- Testing improvements:

- Add a KUnit test suite for each of the new library APIs

- Migrate the existing ChaCha20Poly1305 test to KUnit

- Make the KUnit all_tests.config enable all crypto library tests

- Move the test kconfig options to the Runtime Testing menu

- Other updates to arch-optimized crypto code:

- Optimize SHA-256 for Zhaoxin CPUs using the Padlock Hash Engine

- Remove some MD5 implementations that are no longer worth keeping

- Drop big endian and voluntary preemption support from the arm64
code, as those configurations are no longer supported on arm64

- Make jitterentropy and samples/tsm-mr use the crypto library APIs

* tag 'libcrypto-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux: (66 commits)
lib/crypto: arm64: Assume a little-endian kernel
arm64: fpsimd: Remove obsolete cond_yield macro
lib/crypto: arm64/sha3: Remove obsolete chunking logic
lib/crypto: arm64/sha512: Remove obsolete chunking logic
lib/crypto: arm64/sha256: Remove obsolete chunking logic
lib/crypto: arm64/sha1: Remove obsolete chunking logic
lib/crypto: arm64/poly1305: Remove obsolete chunking logic
lib/crypto: arm64/gf128hash: Remove obsolete chunking logic
lib/crypto: arm64/chacha: Remove obsolete chunking logic
lib/crypto: arm64/aes: Remove obsolete chunking logic
lib/crypto: Include <crypto/utils.h> instead of <crypto/algapi.h>
lib/crypto: aesgcm: Don't disable IRQs during AES block encryption
lib/crypto: aescfb: Don't disable IRQs during AES block encryption
lib/crypto: tests: Migrate ChaCha20Poly1305 self-test to KUnit
lib/crypto: sparc: Drop optimized MD5 code
lib/crypto: mips: Drop optimized MD5 code
lib: Move crypto library tests to Runtime Testing menu
crypto: sm3 - Remove 'struct sm3_state'
crypto: sm3 - Remove the original "sm3_block_generic()"
crypto: sm3 - Remove sm3_base.h
...


# d2a68aba 27-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: tests: Migrate ChaCha20Poly1305 self-test to KUnit

Move the ChaCha20Poly1305 test from an ad-hoc self-test to a KUnit test.

Keep the same test logic for now, just translated to KUnit.

Moving to KUnit has multiple benefits, such as:

- Consistency with the rest of the lib/crypto/ tests.

- Kernel developers familiar with KUnit, which is used kernel-wide, can
quickly understand the test and how to enable and run it.

- The test will be automatically run by anyone using
lib/crypto/.kunitconfig or KUnit's all_tests.config.

- Results are reported using the standard KUnit mechanism.

- It eliminates one of the few remaining back-references to crypto/ from
lib/crypto/, specifically a reference to CONFIG_CRYPTO_SELFTESTS.
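
In concrete terms, the suites can then be driven with KUnit's standard tooling. A sketch of one such invocation from a kernel source tree (exact flag behavior varies by kernel version):

```sh
# Build a UML kernel using the config fragment shipped in lib/crypto/
# and run every test it enables, reporting results in KTAP format.
./tools/testing/kunit/kunit.py run --kunitconfig=lib/crypto
```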

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260327224229.137532-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 23e5c306 26-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: sparc: Drop optimized MD5 code

MD5 is obsolete. Continuing to maintain architecture-optimized
implementations of MD5 is unnecessary and risky. It diverts resources
from the modern algorithms that are actually important.

While there was demand for continuing to maintain the PowerPC optimized
MD5 code to accommodate userspace programs that are misusing AF_ALG
(https://lore.kernel.org/linux-crypto/c4191597-341d-4fd7-bc3d-13daf7666c41@csgroup.eu/),
no such demand has been seen for the SPARC optimized MD5 code.

Thus, let's drop it and focus effort on the more modern SHA algorithms,
which already have optimized code for SPARC.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260326203341.60393-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 17ba6108 21-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: x86/sm3: Migrate optimized code into library

Instead of exposing the x86-optimized SM3 code via an x86-specific
crypto_shash algorithm, just implement the sm3_blocks() library
function. This is much simpler, makes the SM3 library functions
x86-optimized, and fixes the longstanding issue where the
x86-optimized, and it fixes the longstanding issue where the
x86-optimized SM3 code was disabled by default. SM3 still remains
available through crypto_shash, but individual architectures no longer
need to handle it.

Tweak the prototype of sm3_transform_avx() to match what the library
expects, including changing the block count to size_t. Note that the
assembly code actually already treated this argument as size_t.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260321040935.410034-10-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 5f6bbba5 21-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: riscv/sm3: Migrate optimized code into library

Instead of exposing the riscv-optimized SM3 code via a riscv-specific
crypto_shash algorithm, just implement the sm3_blocks() library
function. This is much simpler, makes the SM3 library functions
riscv-optimized, and fixes the longstanding issue where the
riscv-optimized SM3 code was disabled by default. SM3 still remains
available through crypto_shash, but individual architectures no longer
need to handle it.

Tweak the prototype of sm3_transform_zvksh_zvkb() to match what the
library expects, including changing the block count to size_t.
Note that the assembly code already treated it as size_t.

Note: to see the diff from arch/riscv/crypto/sm3-riscv64-glue.c to
lib/crypto/riscv/sm3.h, view this commit with 'git show -M10'.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260321040935.410034-9-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 9f69f52b 21-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm64/sm3: Migrate optimized code into library

Instead of exposing the arm64-optimized SM3 code via arm64-specific
crypto_shash algorithms, just implement the sm3_blocks() library
function. This is much simpler, makes the SM3 library functions
arm64-optimized, and fixes the longstanding issue where the
arm64-optimized SM3 code was disabled by default. SM3 still remains
available through crypto_shash, but individual architectures no longer
need to handle it.

Tweak the SM3 assembly function prototypes to match what the library
expects, including changing the block count from 'int' to 'size_t'.
sm3_ce_transform() had to be updated to access 'x2' instead of 'w2',
while sm3_neon_transform() already used 'x2'.

Remove the CFI stubs which are no longer needed because the SM3 assembly
functions are no longer ever indirectly called.

Remove the dependency on KERNEL_MODE_NEON. It was unnecessary, because
KERNEL_MODE_NEON is always enabled on arm64.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260321040935.410034-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 3e79c8ec 19-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: x86/ghash: Migrate optimized code into library

Remove the "ghash-pclmulqdqni" crypto_shash algorithm. Move the
corresponding assembly code into lib/crypto/, and wire it up to the
GHASH library.

This optimizes the GHASH library with x86's carryless
multiplication instructions. It also greatly reduces the amount of
x86-specific glue code that is needed, and it fixes the issue where this
GHASH optimization was disabled by default.

Rename and adjust the prototypes of the assembly functions to make them
fit better with the library. Remove the byte-swaps (pshufb
instructions) that are no longer necessary because the library keeps the
accumulator in POLYVAL format rather than GHASH format.

Rename clmul_ghash_mul() to polyval_mul_pclmul() to reflect that it
really does a POLYVAL style multiplication. Wire it up to both
ghash_mul_arch() and polyval_mul_arch().

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260319061723.1140720-15-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# af413d71 19-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: riscv/ghash: Migrate optimized code into library

Remove the "ghash-riscv64-zvkg" crypto_shash algorithm. Move the
corresponding assembly code into lib/crypto/, modify it to take the
length in blocks instead of bytes, and wire it up to the GHASH library.

This optimizes the GHASH library with the RISC-V Vector
Cryptography Extension. It also greatly reduces the amount of
riscv-specific glue code that is needed, and it fixes the issue where
this optimized GHASH code was disabled by default.

Note that this RISC-V code has multiple opportunities for improvement,
such as adding more parallelism, providing an optimized multiplication
function, and directly supporting POLYVAL. But for now, this commit
simply tweaks ghash_zvkg() slightly to make it compatible with the
library, then wires it up to ghash_blocks_arch().

ghash_preparekey_arch() is also implemented to store the copy of the raw
key needed by the vghsh.vv instruction.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260319061723.1140720-13-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 73f315c1 19-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: powerpc/ghash: Migrate optimized code into library

Remove the "p8_ghash" crypto_shash algorithm. Move the corresponding
assembly code into lib/crypto/, and wire it up to the GHASH library.

This optimizes the GHASH library for POWER8. It also greatly
reduces the amount of powerpc-specific glue code that is needed, and it
fixes the issue where this optimized GHASH code was disabled by default.

Note that previously the C code defined the POWER8 GHASH key format as
"u128 htable[16]", despite the assembly code only using four entries.
Fix the C code to use the correct key format. To fulfill the library
API contract, also make the key preparation work in all contexts.

Note that the POWER8 assembly code takes the accumulator in GHASH
format, but it actually byte-reflects it to get it into POLYVAL format.
The library already works with POLYVAL natively. For now, just wire up
this existing code by converting it to/from GHASH format in C code.
This should be cleaned up to eliminate the unnecessary conversion later.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260319061723.1140720-12-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# a336c01f 19-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm64/ghash: Migrate optimized code into library

Remove the "ghash-neon" crypto_shash algorithm. Move the corresponding
assembly code into lib/crypto/, and wire it up to the GHASH library.

This optimizes the GHASH library on arm64 (though only with
NEON, not PMULL; for now the goal is just parity with crypto_shash). It
greatly reduces the amount of arm64-specific glue code that is needed,
and it fixes the issue where this optimization was disabled by default.

To integrate the assembly code correctly with the library, make the
following tweaks:

- Change the type of 'blocks' from int to size_t
- Change the types of 'dg' and 'h' to polyval_elem. Note that this
simply reflects the format that the code was already using.
- Remove the 'head' argument, which is no longer needed.
- Remove the CFI stubs, as indirect calls are no longer used.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260319061723.1140720-10-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 71e59795 19-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm/ghash: Migrate optimized code into library

Remove the "ghash-neon" crypto_shash algorithm. Move the corresponding
assembly code into lib/crypto/, and wire it up to the GHASH library.

This optimizes the GHASH library on arm (though only with NEON,
not PMULL; for now the goal is just parity with crypto_shash). It
greatly reduces the amount of arm-specific glue code that is needed, and
it fixes the issue where this optimization was disabled by default.

To integrate the assembly code correctly with the library, make the
following tweaks:

- Change the type of 'blocks' from int to size_t.
- Change the types of 'dg' and 'h' to polyval_elem. Note that this
simply reflects the format that the code was already using, at least
on little endian CPUs. For big endian CPUs, add byte-swaps.
- Remove the 'head' argument, which is no longer needed.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260319061723.1140720-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 61f66c52 19-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: gf128hash: Rename polyval module to gf128hash

Currently, the standalone GHASH code is coupled with crypto_shash. This
has resulted in unnecessary complexity and overhead, as well as the code
being unavailable to library code such as the AES-GCM library. As was
done with POLYVAL, it needs to find a new home in lib/crypto/.

GHASH and POLYVAL are closely related and can each be implemented in
terms of each other. Optimized code for one can be reused with the
other. But also since GHASH tends to be difficult to implement directly
due to its unnatural bit order, most modern GHASH implementations
(including the existing arm, arm64, powerpc, and x86 optimized GHASH
code, and the new generic GHASH code I'll be adding) actually
reinterpret the GHASH computation as an equivalent POLYVAL computation,
pre- and post-processing the inputs and outputs to map to/from POLYVAL.

Given this close relationship, it makes sense to group the GHASH and
POLYVAL code together in the same module. This gives us a wide range of
options for implementing them, reusing code between the two and properly
utilizing whatever instructions each architecture provides.

Thus, GHASH support will be added to the library module that is
currently called "polyval". Rename it to an appropriate name:
"gf128hash". Rename files, options, functions, etc. where appropriate
to reflect the upcoming sharing with GHASH. (Note: polyval_kunit is not
renamed, as ghash_kunit will be added alongside it instead.)

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260319061723.1140720-2-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# c2db2288 14-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm64: Drop checks for CONFIG_KERNEL_MODE_NEON

CONFIG_KERNEL_MODE_NEON is always enabled on arm64, and it always has
been since its introduction in 2013. Given that and the fact that the
usefulness of kernel-mode NEON has only been increasing over time,
checking for this option in arm64-specific code is unnecessary. Remove
these checks from lib/crypto/ to simplify the code and prevent any
future bugs where e.g. code gets disabled due to a typo in this logic.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260314175049.26931-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# d5b66179 17-Mar-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: powerpc: Add powerpc/aesp8-ppc.S to clean-files

Ensure the generated file powerpc/aesp8-ppc.S is removed by 'make clean'.

Fixes: 7cf2082e74ce ("lib/crypto: powerpc/aes: Migrate POWER8 optimized code into library")
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260317044925.104184-1-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 4b908403 18-Feb-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm64/aes: Move assembly code for AES modes into libaes

To migrate the support for CBC-based MACs into libaes, the corresponding
arm64 assembly code needs to be moved there. However, the arm64 AES
assembly code groups many AES modes together; individual modes aren't
easily separable. (This isn't unique to arm64; other architectures
organize their AES modes similarly.)

Since the other AES modes will be migrated into the library eventually
too, just move the full assembly files for the AES modes into the
library. (This is similar to what I already did for PowerPC and SPARC.)

Specifically: move the assembly files aes-ce.S, aes-modes.S, and
aes-neon.S and their build rules; declare the assembly functions in
<crypto/aes.h>; and export the assembly functions from libaes.

Note that the exports and public declarations of the assembly functions
are temporary. They exist only to keep arch/arm64/crypto/ working until
the AES modes are fully moved into the library.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260218213501.136844-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 24eb22d8 12-Jan-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: x86/aes: Add AES-NI optimization

Optimize the AES library with x86 AES-NI instructions.

The relevant existing assembly functions, aesni_set_key(), aesni_enc(),
and aesni_dec(), are a bit difficult to extract into the library:

- They're coupled to the code for the AES modes.
- They operate on struct crypto_aes_ctx. The AES library now uses
different structs.
- They assume the key is 16-byte aligned. The AES library only
*prefers* 16-byte alignment; it doesn't require it.

Moreover, they're not all that great in the first place:

- They use unrolled loops, which isn't a great choice on x86.
- They use the 'aeskeygenassist' instruction, which is unnecessary, is
slow on Intel CPUs, and forces the loop to be unrolled.
- They have special code for AES-192 key expansion, despite that being
kind of useless. AES-128 and AES-256 are the ones used in practice.

These are small functions anyway.

Therefore, I opted to just write replacements of these functions for the
library. They address all the above issues.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-18-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 293c7cd5 12-Jan-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: sparc/aes: Migrate optimized code into library

Move the SPARC64 AES assembly code into lib/crypto/, wire the key
expansion and single-block en/decryption functions up to the AES library
API, and remove the "aes-sparc64" crypto_cipher algorithm.

The result is that both the AES library and crypto_cipher APIs use the
SPARC64 AES opcodes, whereas previously only crypto_cipher did (and it
wasn't enabled by default, which this commit fixes as well).

Note that some of the functions in the SPARC64 AES assembly code are
still used by the AES mode implementations in
arch/sparc/crypto/aes_glue.c. For now, just export these functions.
These exports will go away once the AES mode implementations are
migrated to the library as well. (Trying to split up the assembly file
seemed like much more trouble than it would be worth.)

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-17-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# a4e573db 12-Jan-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: riscv/aes: Migrate optimized code into library

Move the aes_encrypt_zvkned() and aes_decrypt_zvkned() assembly
functions into lib/crypto/, wire them up to the AES library API, and
remove the "aes-riscv64-zvkned" crypto_cipher algorithm.

To make this possible, change the prototypes of these functions to
take (rndkeys, key_len) instead of a pointer to crypto_aes_ctx, and
change the RISC-V AES-XTS code to implement tweak encryption using the
AES library instead of directly calling aes_encrypt_zvkned().

The result is that both the AES library and crypto_cipher APIs use
RISC-V's AES instructions, whereas previously only crypto_cipher did
(and it wasn't enabled by default, which this commit fixes as well).

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-15-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 7cf2082e 12-Jan-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: powerpc/aes: Migrate POWER8 optimized code into library

Move the POWER8 AES assembly code into lib/crypto/, wire the key
expansion and single-block en/decryption functions up to the AES library
API, and remove the superseded "p8_aes" crypto_cipher algorithm.

The result is that both the AES library and crypto_cipher APIs are now
optimized for POWER8, whereas previously only crypto_cipher was (and
optimizations weren't enabled by default, which this commit fixes too).

Note that many of the functions in the POWER8 assembly code are still
used by the AES mode implementations in arch/powerpc/crypto/. For now,
just export these functions. These exports will go away once the AES
modes are migrated to the library as well. (Trying to split up the
assembly file seemed like much more trouble than it would be worth.)

Another challenge with this code is that the POWER8 assembly code uses a
custom format for the expanded AES key. Since that code is imported
from OpenSSL and is also targeted to POWER8 (rather than POWER9 which
has better data movement and byteswap instructions), that is not easily
changed. For now I've just kept the custom format. To maintain full
correctness, this requires executing some slow fallback code in the case
where the usability of VSX changes between key expansion and use. This
should be tolerable, as this case shouldn't happen in practice.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-14-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 0892c91b 12-Jan-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: powerpc/aes: Migrate SPE optimized code into library

Move the PowerPC SPE AES assembly code into lib/crypto/, wire the key
expansion and single-block en/decryption functions up to the AES library
API, and remove the superseded "aes-ppc-spe" crypto_cipher algorithm.

The result is that both the AES library and crypto_cipher APIs are now
optimized with SPE, whereas previously only crypto_cipher was (and
optimizations weren't enabled by default, which this commit fixes too).

Note that many of the functions in the PowerPC SPE assembly code are
still used by the AES mode implementations in arch/powerpc/crypto/. For
now, just export these functions. These exports will go away once the
AES modes are migrated to the library as well. (Trying to split up the
assembly files seemed like much more trouble than it would be worth.)

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-13-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# 2b1ef7ae 12-Jan-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm64/aes: Migrate optimized code into library

Move the ARM64 optimized AES key expansion and single-block AES
en/decryption code into lib/crypto/, wire it up to the AES library API,
and remove the superseded crypto_cipher algorithms.

The result is that both the AES library and crypto_cipher APIs are now
optimized for ARM64, whereas previously only crypto_cipher was (and the
optimizations weren't enabled by default, which this fixes as well).

Note: to see the diff from arch/arm64/crypto/aes-ce-glue.c to
lib/crypto/arm64/aes.h, view this commit with 'git show -M10'.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-12-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# fa229775 12-Jan-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm/aes: Migrate optimized code into library

Move the ARM optimized single-block AES en/decryption code into
lib/crypto/, wire it up to the AES library API, and remove the
superseded "aes-arm" crypto_cipher algorithm.

The result is that both the AES library and crypto_cipher APIs are now
optimized for ARM, whereas previously only crypto_cipher was (and the
optimizations weren't enabled by default, which this fixes as well).

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-11-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# a22fd0e3 12-Jan-2026 Eric Biggers <ebiggers@kernel.org>

lib/crypto: aes: Introduce improved AES library

The kernel's AES library currently has the following issues:

- It doesn't take advantage of the architecture-optimized AES code,
including the implementations using AES instructions.

- It's much slower than even the other software AES implementations: 2-4
times slower than "aes-generic", "aes-arm", and "aes-arm64".

- It requires that both the encryption and decryption round keys be
computed and cached. This is wasteful for users that need only the
forward (encryption) direction of the cipher: the key struct is 484
bytes when only 244 are actually needed. This missed optimization is
very common, as many AES modes (e.g. GCM, CFB, CTR, CMAC, and even the
tweak key in XTS) use the cipher only in the forward (encryption)
direction even when doing decryption.

- It doesn't provide the flexibility to customize the prepared key
format. The API is defined to do key expansion, and several callers
in drivers/crypto/ use it specifically to expand the key. This is an
issue when integrating the existing powerpc, s390, and sparc code,
which is necessary to provide full parity with the traditional API.

To resolve these issues, I'm proposing the following changes:

1. New structs 'aes_key' and 'aes_enckey' are introduced, with
corresponding functions aes_preparekey() and aes_prepareenckey().

Generally these structs will include the encryption+decryption round
keys and the encryption round keys, respectively. However, the exact
format will be under control of the architecture-specific AES code.

(The verb "prepare" is chosen over "expand" since key expansion isn't
necessarily done. It's also consistent with hmac*_preparekey().)

2. aes_encrypt() and aes_decrypt() will be changed to operate on the new
structs instead of struct crypto_aes_ctx.

3. aes_encrypt() and aes_decrypt() will use architecture-optimized code
when available, or else fall back to a new generic AES implementation
that unifies the existing two fragmented generic AES implementations.

The new generic AES implementation uses tables for both SubBytes and
MixColumns, making it almost as fast as "aes-generic". However,
instead of aes-generic's huge 8192-byte tables per direction, it uses
only 1024 bytes for encryption and 1280 bytes for decryption (similar
to "aes-arm"). The cost is just some extra rotations.

The new generic AES implementation also includes table prefetching,
giving it some "constant-time hardening". That's an improvement over
aes-generic, which has no constant-time hardening at all.

It does slightly regress in constant-time hardening vs. the old
lib/crypto/aes.c, which had smaller tables, and vs. aes-fixed-time,
which disabled IRQs on top of that. But I think this is tolerable.
The real solutions for constant-time AES are AES instructions or
bit-slicing. The table-based code remains a best-effort fallback for
the increasingly-rare case where a real solution is unavailable.

4. crypto_aes_ctx and aes_expandkey() will remain for now, but only for
callers that are using them specifically for the AES key expansion
(as opposed to en/decrypting data with the AES library).

This commit begins the migration process by introducing the new structs
and functions, backed by the new generic AES implementation.

To allow callers to be incrementally converted, aes_encrypt() and
aes_decrypt() are temporarily changed into macros that use a _Generic
expression to call either the old functions (which take crypto_aes_ctx)
or the new functions (which take the new types). Once all callers have
been updated, these macros will go away, the old functions will be
removed, and the "_new" suffix will be dropped from the new functions.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# a229d832 11-Dec-2025 Eric Biggers <ebiggers@kernel.org>

lib/crypto: x86/nh: Migrate optimized code into library

Migrate the x86_64 implementations of NH into lib/crypto/. This makes
the nh() function optimized on x86_64 kernels.

Note: this temporarily makes the adiantum template not utilize the
x86_64 optimized NH code. This is resolved in a later commit that
converts the adiantum template to use nh() instead of "nhpoly1305".

Link: https://lore.kernel.org/r/20251211011846.8179-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>


# b4a8528d 11-Dec-2025 Eric Biggers <ebiggers@kernel.org>

lib/crypto: arm64/nh: Migrate optimized code into library

Migrate the arm64 NEON implementation of NH into lib/crypto/. This
makes the nh() function optimized on arm64 kernels.

Note: this temporarily makes the adiantum template not utilize the arm64
optimized NH code. This is resolved in a later commit that converts the
adiantum template to use nh() instead of "nhpoly1305".

Link: https://lore.kernel.org/r/20251211011846.8179-5-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
