# 71e59795 | 19-Mar-2026 | Eric Biggers <ebiggers@kernel.org>
lib/crypto: arm/ghash: Migrate optimized code into library

Remove the "ghash-neon" crypto_shash algorithm. Move the corresponding assembly code into lib/crypto/, and wire it up to the GHASH library.
This makes the GHASH library be optimized on arm (though only with NEON, not PMULL; for now the goal is just parity with crypto_shash). It greatly reduces the amount of arm-specific glue code that is needed, and it fixes the issue where this optimization was disabled by default.
To integrate the assembly code correctly with the library, make the following tweaks:
- Change the type of 'blocks' from int to size_t.
- Change the types of 'dg' and 'h' to polyval_elem. Note that this simply reflects the format that the code was already using, at least on little endian CPUs. For big endian CPUs, add byte-swaps.
- Remove the 'head' argument, which is no longer needed.
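For illustration, here is a prototype-level sketch of these interface changes. The function names are placeholders and the polyval_elem layout is assumed, not taken from the patch; only the parameter-type changes mirror the list above.

    #include <linux/types.h>

    /* Assumed layout of one 128-bit GF(2^128) element; illustrative only. */
    struct polyval_elem {
            u64 lo;
            u64 hi;
    };

    /* Before (roughly): int block count, raw u64 state/key, extra 'head' pointer. */
    void ghash_update_asm_old(int blocks, u64 dg[2], const u8 *src,
                              const u64 h[2], const u8 *head);

    /*
     * After: size_t block count, polyval_elem state and key, no 'head' argument.
     * On big-endian kernels the C glue byte-swaps the state before and after
     * calling the assembly.
     */
    void ghash_update_asm_new(size_t blocks, struct polyval_elem *dg,
                              const u8 *src, const struct polyval_elem *h);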

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260319061723.1140720-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# fa229775 | 12-Jan-2026 | Eric Biggers <ebiggers@kernel.org>
lib/crypto: arm/aes: Migrate optimized code into library

Move the ARM optimized single-block AES en/decryption code into lib/crypto/, wire it up to the AES library API, and remove the superseded "aes-arm" crypto_cipher algorithm.
The result is that both the AES library and crypto_cipher APIs are now optimized for ARM, whereas previously only crypto_cipher was (and the optimizations weren't enabled by default, which this fixes as well).

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-11-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# a4e4e446 | 12-Jan-2026 | Eric Biggers <ebiggers@kernel.org>
crypto: arm/aes-neonbs - Use AES library for single blocks

aes-neonbs-glue.c calls __aes_arm_encrypt() and __aes_arm_decrypt() to en/decrypt single blocks for CBC encryption, XTS tweak encryption, and XTS ciphertext stealing. In preparation for making the AES library use this same ARM-optimized single-block AES en/decryption code and making it an internal implementation detail of the AES library, replace the calls to these functions with calls to the AES library.
Note that this reduces the size of the aesbs_cbc_ctx and aesbs_xts_ctx structs, since unnecessary decryption round keys are no longer included.
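A minimal sketch of what a single-block fallback looks like once it goes through the AES library (the helper below is hypothetical; aes_encrypt() and aes_expandkey() are the lib/crypto AES API):

    #include <crypto/aes.h>

    /*
     * Hypothetical helper: encrypt one block through the AES library instead of
     * calling the ARM scalar assembly directly.
     */
    static void encrypt_one_block(const struct crypto_aes_ctx *key,
                                  u8 dst[AES_BLOCK_SIZE],
                                  const u8 src[AES_BLOCK_SIZE])
    {
            /* Previously this would have called __aes_arm_encrypt() directly. */
            aes_encrypt(key, dst, src);
    }

Under this assumption, the key would have been expanded once at setkey time with aes_expandkey().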

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20260112192035.10427-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# 29e39a11 | 11-Dec-2025 | Eric Biggers <ebiggers@kernel.org>
lib/crypto: arm/nh: Migrate optimized code into library

Migrate the arm32 NEON implementation of NH into lib/crypto/. This makes the nh() function be optimized on arm32 kernels.
Note: this temporarily makes the adiantum template not utilize the arm32 optimized NH code. This is resolved in a later commit that converts the adiantum template to use nh() instead of "nhpoly1305".

Link: https://lore.kernel.org/r/20251211011846.8179-4-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# ba6617bd | 18-Oct-2025 | Eric Biggers <ebiggers@kernel.org>
lib/crypto: arm/blake2b: Migrate optimized code into library

Migrate the arm-optimized BLAKE2b code from arch/arm/crypto/ to lib/crypto/arm/. This makes the BLAKE2b library able to use it, and it also simplifies the code because it's easier to integrate with the library than crypto_shash.
This temporarily makes the arm-optimized BLAKE2b code unavailable via crypto_shash. A later commit reimplements the blake2b-* crypto_shash algorithms on top of the BLAKE2b library API, making it available again.
Note that as per the lib/crypto/ convention, the optimized code is now enabled by default. So, this also fixes the longstanding issue where the optimized BLAKE2b code was not enabled by default.
To see the diff from arch/arm/crypto/blake2b-neon-glue.c to lib/crypto/arm/blake2b.h, view this commit with 'git show -M10'.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20251018043106.375964-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# 68546e56 | 06-Sep-2025 | Eric Biggers <ebiggers@kernel.org>
lib/crypto: curve25519: Consolidate into single module

Reorganize the Curve25519 library code:
- Build a single libcurve25519 module, instead of up to three modules: libcurve25519, libcurve25519-generic, and an arch-specific module.
- Move the arch-specific Curve25519 code from arch/$(SRCARCH)/crypto/ to lib/crypto/$(SRCARCH)/. Centralize the build rules into lib/crypto/Makefile and lib/crypto/Kconfig.
- Include the arch-specific code directly in lib/crypto/curve25519.c via a header, rather than using a separate .c file.
- Eliminate the entanglement with CRYPTO. CRYPTO_LIB_CURVE25519 no longer selects CRYPTO, and the arch-specific Curve25519 code no longer depends on CRYPTO.
This brings Curve25519 in line with the latest conventions for lib/crypto/, used by other algorithms. The exception is that I kept the generic code in separate translation units for now. (Some of the function names collide between the x86 and generic Curve25519 code. And the Curve25519 functions are very long anyway, so inlining doesn't matter as much for Curve25519 as it does for some other algorithms.)

Link: https://lore.kernel.org/r/20250906213523.84915-11-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# 11efae10 | 06-Sep-2025 | Eric Biggers <ebiggers@kernel.org>
crypto: arm/curve25519 - Remove unused kpp support

Curve25519 is used only via the library API, not the crypto_kpp API. In preparation for removing the unused crypto_kpp API for Curve25519, remove the unused "curve25519-neon" kpp algorithm.
Note that the underlying NEON optimized Curve25519 code remains fully supported and accessible via the library API.
It's also worth noting that even if the kpp support for Curve25519 comes back later, there is no need for arch-specific kpp glue code like this, as a single kpp algorithm that wraps the library API is sufficient.
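For reference, a minimal sketch of a caller using the Curve25519 library API directly, which is all a generic kpp wrapper would need (the helper below is hypothetical; curve25519() is the library interface and returns false when the result is the all-zero point):

    #include <linux/errno.h>
    #include <crypto/curve25519.h>

    /* Hypothetical helper: compute an X25519 shared secret via the library API. */
    static int example_shared_secret(u8 shared[CURVE25519_KEY_SIZE],
                                     const u8 my_secret[CURVE25519_KEY_SIZE],
                                     const u8 their_public[CURVE25519_KEY_SIZE])
    {
            if (!curve25519(shared, my_secret, their_public))
                    return -EINVAL;    /* reject the all-zero shared secret */
            return 0;
    }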

Link: https://lore.kernel.org/r/20250906213523.84915-3-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# 70cb6ca5 | 13-Jul-2025 | Eric Biggers <ebiggers@kernel.org>
lib/crypto: arm/sha1: Migrate optimized code into library

Instead of exposing the arm-optimized SHA-1 code via arm-specific crypto_shash algorithms, just implement the sha1_blocks() library function. This is much simpler, it makes the SHA-1 library functions be arm-optimized, and it fixes the longstanding issue where the arm-optimized SHA-1 code was disabled by default. SHA-1 still remains available through crypto_shash, but individual architectures no longer need to handle it.
To match sha1_blocks(), change the type of the nblocks parameter of the assembly functions from int to size_t. The assembly functions actually already treated it as size_t.
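A rough sketch of the kind of glue this implies in lib/crypto/ follows; the state struct, helper names, and the exact SIMD check are assumptions for illustration, not the literal patch:

    #include <asm/neon.h>
    #include <asm/simd.h>
    #include <linux/types.h>

    struct sha1_state_example {                /* assumed: the five 32-bit H words */
            u32 h[5];
    };

    /* Assembly block functions; note the size_t block count. */
    void sha1_blocks_scalar(struct sha1_state_example *state,
                            const u8 *data, size_t nblocks);
    void sha1_blocks_neon(struct sha1_state_example *state,
                          const u8 *data, size_t nblocks);

    static void sha1_blocks_example(struct sha1_state_example *state,
                                    const u8 *data, size_t nblocks)
    {
            if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && likely(may_use_simd())) {
                    kernel_neon_begin();
                    sha1_blocks_neon(state, data, nblocks);
                    kernel_neon_end();
            } else {
                    sha1_blocks_scalar(state, data, nblocks);
            }
    }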

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250712232329.818226-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# 24c91b62 | 30-Jun-2025 | Eric Biggers <ebiggers@kernel.org>
lib/crypto: arm/sha512: Migrate optimized SHA-512 code to library

Instead of exposing the arm-optimized SHA-512 code via arm-specific crypto_shash algorithms, just implement the sha512_blocks() library function. This is much simpler, it makes the SHA-512 (and SHA-384) library functions be arm-optimized, and it fixes the longstanding issue where the arm-optimized SHA-512 code was disabled by default. SHA-512 still remains available through crypto_shash, but individual architectures no longer need to handle it.
To match sha512_blocks(), change the type of the nblocks parameter of the assembly functions from int to size_t. The assembly functions actually already treated it as size_t.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250630160320.2888-8-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@kernel.org>

# ca4477e4 | 28-Apr-2025 | Eric Biggers <ebiggers@google.com>
crypto: arm/sha256 - implement library instead of shash

Instead of providing crypto_shash algorithms for the arch-optimized SHA-256 code, implement the SHA-256 library. This is much simpler, it makes the SHA-256 library functions be arch-optimized, and it fixes the longstanding issue where the arch-optimized SHA-256 was disabled by default. SHA-256 still remains available through crypto_shash, but individual architectures no longer need to handle it.
To merge the scalar, NEON, and CE code all into one module cleanly, add !CPU_V7M as a direct dependency of the CE code. Previously, !CPU_V7M was only a direct dependency of the scalar and NEON code. The result is still the same because CPU_V7M implies !KERNEL_MODE_NEON, so !CPU_V7M was already an indirect dependency of the CE code.
To match sha256_blocks_arch(), change the type of the nblocks parameter of the assembly functions from int to size_t. The assembly functions actually already treated it as size_t.
While renaming the assembly files, also fix the naming quirk where "sha2" meant sha256. (SHA-512 is also part of SHA-2.)

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 714656a8 | 22-Apr-2025 | Eric Biggers <ebiggers@google.com>
crypto: arm - move library functions to arch/arm/lib/crypto/

Continue disentangling the crypto library functions from the generic crypto infrastructure by moving the arm BLAKE2s, ChaCha, and Poly1305 library functions into a new directory arch/arm/lib/crypto/ that does not depend on CRYPTO. This mirrors the distinction between crypto/ and lib/crypto/.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 1f81c582 | 13-Apr-2025 | Eric Biggers <ebiggers@google.com>
crypto: arm/poly1305 - remove redundant shash algorithm

Since crypto/poly1305.c now registers a poly1305-$(ARCH) shash algorithm that uses the architecture's Poly1305 library functions, individual architectures no longer need to do the same. Therefore, remove the redundant shash algorithm from the arch-specific code and leave just the library functions there.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 08820553 | 05-Apr-2025 | Eric Biggers <ebiggers@google.com>
crypto: arm/chacha - remove the redundant skcipher algorithms

Since crypto/chacha.c now registers chacha20-$(ARCH), xchacha20-$(ARCH), and xchacha12-$(ARCH) skcipher algorithms that use the architecture's ChaCha and HChaCha library functions, individual architectures no longer need to do the same. Therefore, remove the redundant skcipher algorithms and leave just the library functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 7c79bdf9 | 03-Apr-2025 | Ard Biesheuvel <ardb@kernel.org>
crypto: arm/aes-neonbs - stop using the SIMD helper

Now that ARM permits use of the NEON unit in softirq context as well as task context, there is no longer a need to rely on the SIMD helper module to construct async skciphers wrapping the sync ones, as the latter can always be called directly.
So remove these wrappers and the dependency on the SIMD helper. This permits the use of these algorithms by callers that only support synchronous use.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# e77fe9cc | 03-Apr-2025 | Ard Biesheuvel <ardb@kernel.org>
crypto: arm/aes-ce - stop using the SIMD helper

Now that ARM permits use of the NEON unit in softirq context as well as task context, there is no longer a need to rely on the SIMD helper module to construct async skciphers wrapping the sync ones, as the latter can always be called directly.
So remove these wrappers and the dependency on the SIMD helper. This permits the use of these algorithms by callers that only support synchronous use.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 17ec3e71 | 27-Feb-2025 | Herbert Xu <herbert@gondor.apana.org.au>
crypto: lib/Kconfig - Hide arch options from user

The ARCH_MAY_HAVE patch missed arm64, mips and s390. But it may also lead to arch options being enabled but ineffective because of modular/built-in conflicts.
Since wireguard, the primary user of all these options, selects the arch options anyway, make the same selections at the lib/crypto option level and hide the arch options from the user.
Instead of selecting them centrally from lib/crypto, simply set the default of each arch option as suggested by Eric Biggers.
Change the Crypto API generic algorithms to select the top-level lib/crypto options instead of the generic one as otherwise there is no way to enable the arch options (Eric Biggers). Introduce a set of INTERNAL options to work around dependency cycles on the CONFIG_CRYPTO symbol.

Fixes: 1047e21aecdf ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Arnd Bergmann <arnd@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 1047e21a | 12-Feb-2025 | Herbert Xu <herbert@gondor.apana.org.au>
crypto: lib/Kconfig - Fix lib built-in failure when arch is modular

The HAVE_ARCH Kconfig options in lib/crypto try to solve the modular versus built-in problem, but they still fail when the LIB option (e.g., CRYPTO_LIB_CURVE25519) is selected externally.
Fix this by introducing a level of indirection with ARCH_MAY_HAVE Kconfig options; these then go on to select the ARCH_HAVE options if the ARCH Kconfig option matches that of the LIB option.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202501230223.ikroNDr1-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 1684e829 | 02-Dec-2024 | Eric Biggers <ebiggers@google.com>
arm/crc-t10dif: expose CRC-T10DIF function through lib

Move the arm CRC-T10DIF assembly code into the lib directory and wire it up to the library interface. This allows it to be used without going through the crypto API. It also remains usable via the crypto API, through the shash algorithms that use the library interface. Thus all the arch-specific "shash" code becomes unnecessary and is removed.
Note: to see the diff from arch/arm/crypto/crct10dif-ce-glue.c to arch/arm/lib/crc-t10dif-glue.c, view this commit with 'git show -M10'.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20241202012056.209768-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>

# 1e1b6dbc | 02-Dec-2024 | Eric Biggers <ebiggers@google.com>
arm/crc32: expose CRC32 functions through lib

Move the arm CRC32 assembly code into the lib directory and wire it up to the library interface. This allows it to be used without going through the crypto API. It also remains usable via the crypto API, through the shash algorithms that use the library interface. Thus all the arch-specific "shash" code becomes unnecessary and is removed.
Note: to see the diff from arch/arm/crypto/crc32-ce-glue.c to arch/arm/lib/crc32-glue.c, view this commit with 'git show -M10'.

Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20241202010844.144356-6-ebiggers@kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>

# f235bc11 | 10-Aug-2024 | Eric Biggers <ebiggers@google.com>
crypto: arm/aes-neonbs - go back to using aes-arm directly

In aes-neonbs, instead of going through the crypto API for the parts that the bit-sliced AES code doesn't handle, namely AES-CBC encryption and single-block AES, just call the ARM scalar AES cipher directly.
This basically goes back to the original approach that was used before commit b56f5cbc7e08 ("crypto: arm/aes-neonbs - resolve fallback cipher at runtime"). Calling the ARM scalar AES cipher directly is faster, simpler, and avoids any chance of bugs specific to the use of fallback ciphers, such as the module loading deadlocks that have happened twice. The deadlocks turned out to be fixable in other ways, but there's no need to rely on anything so fragile in the first place.
The rationale for the above-mentioned commit was to allow people to choose to use a time-invariant AES implementation for the fallback cipher. There are a couple problems with that rationale, though:
- In practice the ARM scalar AES cipher (aes-arm) was used anyway, since it has a higher priority than aes-fixed-time. Users *could* go out of their way to disable or blacklist aes-arm, or to lower its priority using NETLINK_CRYPTO, but very few users customize the crypto API to this extent. Systems with the ARMv8 Crypto Extensions used aes-ce, but the bit-sliced algorithms are irrelevant on such systems anyway.
- Since commit 913a3aa07d16 ("crypto: arm/aes - add some hardening against cache-timing attacks"), the ARM scalar AES cipher is partially hardened against cache-timing attacks. It actually works like aes-fixed-time, in that it disables interrupts and prefetches its lookup table. It does use a larger table than aes-fixed-time, but even so, it is not clear that aes-fixed-time is meaningfully more time-invariant than aes-arm. And of course, the real solution for time-invariant AES is to use a CPU that supports AES instructions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# b575b5a1 | 16-Jan-2023 | Ard Biesheuvel <ardb@kernel.org>
ARM: 9286/1: crypto: Implement fused AES-CTR/GHASH version of GCM

On 32-bit ARM, AES in GCM mode takes full advantage of the ARMv8 Crypto Extensions when available, resulting in a performance of 6-7 cycles per byte for typical IPsec frames on cores such as Cortex-A53, using the generic GCM template encapsulating the accelerated AES-CTR and GHASH implementations.
At such high rates, any time spent copying data or doing other poorly optimized work in the generic layer hurts disproportionately, and we can get a significant performance improvement by combining the optimized AES-CTR and GHASH implementations into a single GCM driver.
On Cortex-A53, this results in a performance improvement of around 75%, and AES-256-GCM-128 with RFC4106 encapsulation runs in 4 cycles per byte.
Note that this code takes advantage of the fact that kernel mode NEON is now supported in softirq context as well, and therefore does not provide a non-NEON fallback path at all. (AEADs are only callable in process or softirq context)

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>

# 61c581a4 | 03-Nov-2022 | Ard Biesheuvel <ardb@kernel.org>
crypto: move gf128mul library into lib/crypto

The gf128mul library does not depend on the crypto API at all, so it can be moved into lib/crypto. This will allow us to use it in other library code in a subsequent patch without having to depend on CONFIG_CRYPTO.
While at it, change the Kconfig symbol name to align with other crypto library implementations. However, the source file name is retained, as it is reflected in the module .ko filename, and changing this might break things for users.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# cf514b2a | 20-Aug-2022 | Robert Elliott <elliott@hpe.com>
crypto: Kconfig - simplify cipher entries

Shorten menu titles and make them consistent:
- acronym
- name
- architecture features in parentheses
- no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized"
Simplify help text descriptions, update references, and ensure that https references are still valid.

Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# 3f342a23 | 20-Aug-2022 | Robert Elliott <elliott@hpe.com>
crypto: Kconfig - simplify hash entries

Shorten menu titles and make them consistent:
- acronym
- name
- architecture features in parentheses
- no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized"
Simplify help text descriptions, update references, and ensure that https references are still valid.

Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

# ec84348d | 20-Aug-2022 | Robert Elliott <elliott@hpe.com>
crypto: Kconfig - simplify CRC entries

Shorten menu titles and make them consistent:
- acronym
- name
- architecture features in parentheses
- no suffixes like "<something> algorithm", "support", "hardware acceleration", or "optimized"
Simplify help text descriptions, update references, and ensure that https references are still valid.

Signed-off-by: Robert Elliott <elliott@hpe.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>