All implementations of chacha_init_arch() just call
chacha_init_generic(), so it is pointless. Just delete it, and replace
chacha_init() with what was previously chacha_init_generic().
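For reference, a minimal sketch of what the unified chacha_init() now
does, i.e. the former chacha_init_generic() body (RFC 7539 state layout:
four constants, eight key words, then counter and nonce taken from the
16-byte IV); the constant spellings here are illustrative:

    static inline void chacha_init(u32 *state, const u32 *key, const u8 *iv)
    {
            state[0]  = 0x61707865; /* "expa" */
            state[1]  = 0x3320646e; /* "nd 3" */
            state[2]  = 0x79622d32; /* "2-by" */
            state[3]  = 0x6b206574; /* "te k" */
            memcpy(&state[4], key, 8 * sizeof(u32)); /* key words 4..11 */
            state[12] = get_unaligned_le32(iv +  0); /* block counter */
            state[13] = get_unaligned_le32(iv +  4); /* nonce */
            state[14] = get_unaligned_le32(iv +  8);
            state[15] = get_unaligned_le32(iv + 12);
    }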
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
For many users, it's easier to supply a folio than an SG list,
since a folio is what they already have. Add support for folios to
the acomp interface.
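A sketch of what a folio-based caller might look like (the setter names
below are assumptions based on this description, not confirmed API):

    /* Sketch: feed folios to acomp directly instead of building an SG list. */
    acomp_request_set_src_folio(req, src_folio, offset, slen);
    acomp_request_set_dst_folio(req, dst_folio, 0, dlen);
    err = crypto_acomp_compress(req);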
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add ACOMP_REQUEST_ALLOC, which is a wrapper around acomp_request_alloc
that falls back to a synchronous on-stack request if the allocation
fails.
Also add ACOMP_REQUEST_ON_STACK which stores the request on the stack
only.
The request should be freed with acomp_request_free.
Finally add acomp_request_alloc_extra which gives the user extra
memory to use in conjunction with the request.
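A hypothetical usage sketch based on the description above (the macro
shapes are assumptions):

    /* Sketch: try a heap allocation first, falling back to a synchronous
     * on-stack request if it fails; both cases are freed the same way. */
    ACOMP_REQUEST_ALLOC(req, tfm, GFP_KERNEL);

    /* ... set up and submit the request ... */

    acomp_request_free(req); /* knows not to kfree() an on-stack request */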
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As the only user of acomp/scomp uses a trivial single-page SG
list, remove support for everything else in preparation for the
addition of virtual address support.
However, keep support for non-trivial source SG lists as that
user is currently jumping through hoops in order to linearise
the source data.
Limit the source SG linearisation buffer to a single page, as that
user never goes over that. The only other potential user (IPComp)
is also unlikely to exceed a page, and it can easily do its own
linearisation if necessary.
Also keep the destination SG linearisation for IPComp.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Curiously, the Crypto API scatterwalk incremented pages by hand
rather than using nth_page. Possibly because scatterwalk predates
nth_page (the following commit is from the history tree):
commit 3957f2b34960d85b63e814262a8be7d5ad91444d
Author: James Morris <jmorris@intercode.com.au>
Date: Sun Feb 2 07:35:32 2003 -0800
[CRYPTO]: in/out scatterlist support for ciphers.
Fix this by using nth_page.
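The change is essentially of the following shape ('walk->page' and 'n'
are illustrative):

    /* Before: assumes struct pages are virtually contiguous */
    struct page *page = walk->page + n;

    /* After: correct even when mem_map is not contiguous
     * (e.g. SPARSEMEM without VMEMMAP) */
    struct page *page = nth_page(walk->page, n);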
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Now that the address returned by scatterwalk_map() is always being
stored into the same struct scatter_walk that is passed in, make
scatterwalk_map() do so itself and return void.
Similarly, now that scatterwalk_unmap() is always being passed the
address field within a struct scatter_walk, make scatterwalk_unmap()
take a pointer to struct scatter_walk instead of the address directly.
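In other words, the calling convention changes roughly like this (a
sketch, with surrounding code elided):

    /* Before: the caller shuttles the mapped address around by hand */
    walk->addr = scatterwalk_map(walk);
    ...
    scatterwalk_unmap(walk->addr);

    /* After: the walk owns its mapped address */
    scatterwalk_map(walk);   /* stores the address in the walk */
    ...
    scatterwalk_unmap(walk); /* unmaps the walk's stored address */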
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Split the HKDF functions out into a separate module to make them
available to other callers.
Also add a testsuite to the module, with test vectors from RFC 5869
(plus additional vectors for SHA384 and SHA512), to ensure the
integrity of the algorithm.
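A sketch of how a caller might use the split-out helpers (the exact
signatures may differ, and error handling is elided):

    #include <crypto/hash.h>
    #include <crypto/hkdf.h>
    #include <crypto/sha2.h>

    struct crypto_shash *hmac = crypto_alloc_shash("hmac(sha256)", 0, 0);
    u8 prk[SHA256_DIGEST_SIZE], okm[32];

    hkdf_extract(hmac, ikm, ikm_len, salt, salt_len, prk); /* PRK per RFC 5869 */
    crypto_shash_setkey(hmac, prk, sizeof(prk));           /* key the tfm with PRK */
    hkdf_expand(hmac, info, info_len, okm, sizeof(okm));   /* OKM = T(1) | T(2) | ... */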
Signed-off-by: Hannes Reinecke <hare@kernel.org>
Acked-by: Eric Biggers <ebiggers@kernel.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Keith Busch <kbusch@kernel.org>
This adds request chaining and virtual address support to the
acomp interface.
It is identical to the ahash interface, except that a new flag
CRYPTO_ACOMP_REQ_NONDMA has been added to indicate that the
virtual addresses are not suitable for DMA. This is because
all existing and potential acomp users can provide memory that
is suitable for DMA so there is no need for a fall-back copy
path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than allocating the stream memory in the request object,
move it into a per-cpu buffer managed by scomp. This relieves
users of having to manage large request objects and set up their
own per-cpu buffers to do so.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The tfm argument is completely unused and meaningless, as the
same stream object is shared across all transforms of a given
algorithm. Remove it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mark the src.virt.addr field in struct skcipher_walk as a pointer
to const data. This guarantees that the user won't modify the data
through it; writes should go through dst.virt.addr so that flushing
is done when necessary.
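A sketch of the intended usage pattern (crypto_xor_cpy() shown as one
example of a read-src/write-dst helper):

    const u8 *src = walk->src.virt.addr; /* read-only view of the input */
    u8 *dst = walk->dst.virt.addr;       /* all writes go through dst */

    crypto_xor_cpy(dst, src, keystream, nbytes); /* never write via src */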
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reuse the addr field from struct scatter_walk for skcipher_walk.
Keep the existing virt.addr fields but make them const for the
user to access the mapped address.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Rather than returning the address and storing the length into an
argument pointer, add an address field to the walk struct and use
that to store the address. The length is returned directly.
Change the done functions to use this stored address instead of
getting it from the caller.
Split the address into two using a union. The user should only
access the const version so that it is never changed.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the definition of struct crypto_type into internal.h as it
is only used by API implementors and not algorithm implementors.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Implement the camellia128-cts-cmac and camellia256-cts-cmac enctypes from
rfc6803.
Note that the test vectors in rfc6803 for encryption are incomplete,
lacking the key usage number needed to derive Ke and Ki, and there are
errata for this:
https://www.rfc-editor.org/errata_search.php?rfc=6803
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Implement the aes128-cts-hmac-sha256-128 and aes256-cts-hmac-sha384-192
enctypes from rfc8009, overriding the rfc3961 kerberos 5 simplified crypto
scheme.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide key derivation interface functions and a helper to implement the
PRF+ function from rfc4402.
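In outline, the construction being implemented is (the RFC specifies
the exact counter encoding and initial value):

    /*
     * PRF+(K, L, S) = truncate(L, T1 || T2 || ... || Tn)
     *     where Ti = PRF(K, i || S) for an incrementing counter i,
     *     generating blocks until at least L octets are available.
     */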
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an API by which users of the krb5 crypto library can perform crypto
requests, such as encrypt, decrypt, get_mic and verify_mic. These
functions take the previously prepared crypto objects to work on.
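A hypothetical sketch of an encrypt call in this style (all identifiers
and parameters here are assumptions, not the library's confirmed
signatures):

    /* Sketch: encrypt in place within a scatterlist-described buffer,
     * using a previously prepared AEAD object. */
    ssize_t ret = crypto_krb5_encrypt(krb5, aead, sg, nr_sg, sg_len,
                                      data_offset, data_len, true);
    if (ret < 0)
            return ret; /* otherwise the length of the encrypted region */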
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an API by which users of the krb5 crypto library can get an allocated
and keyed crypto object.
For encryption-mode operation, an AEAD object is returned; for
checksum-mode operation, a synchronous hash object is returned.
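A hypothetical sketch of such calls (the function names are assumptions
based on this description):

    struct crypto_aead *aead =
            crypto_krb5_prepare_encryption(krb5, &TK, usage, GFP_KERNEL);
    struct crypto_shash *shash =
            crypto_krb5_prepare_checksum(krb5, &TK, usage, GFP_KERNEL);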
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide some functions to allow the caller to find out about the layout
of the crypto section (a hypothetical sketch of the interface follows
the list):
(1) Calculate, for a given size of data, how big a buffer will be
required to hold it and where the data will be within it.
(2) Calculate, for an amount of buffer, what's the maximum size of data
that will fit therein, and where it will start.
(3) Determine where the data will be in a received message.
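A hypothetical sketch of what such an interface could look like (all
names and types here are assumptions):

    /* (1) buffer size and data offset needed for a given amount of data */
    size_t crypto_krb5_how_much_buffer(const struct krb5_enctype *krb5,
                                       enum krb5_crypto_mode mode,
                                       size_t data_size, size_t *_offset);

    /* (2) maximum data that fits into a given buffer, and where it starts */
    size_t crypto_krb5_how_much_data(const struct krb5_enctype *krb5,
                                     enum krb5_crypto_mode mode,
                                     size_t *_buffer_size, size_t *_offset);

    /* (3) locate the data within a received message */
    void crypto_krb5_where_is_the_data(const struct krb5_enctype *krb5,
                                       enum krb5_crypto_mode mode,
                                       size_t *_offset, size_t *_len);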
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Provide core structures, an encoding-type registry and basic module and
config bits for a generic Kerberos crypto library.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add an AEAD template that does hash-then-cipher (unlike authenc that does
cipher-then-hash). This is required for a number of Kerberos 5 encoding
types.
[!] Note that the net/sunrpc/auth_gss/ implementation gets a pair of
ciphers, one non-CTS and one CTS, using the former to do all the aligned
blocks and the latter to do the last two blocks if they aren't also
aligned. It may be necessary to do this here too for performance reasons -
but there are considerations both ways:
(1) firstly, there is an optimised assembly version of cts(cbc(aes)) on
x86_64 that should be used instead of having two ciphers;
(2) secondly, none of the hardware offload drivers seem to offer CTS
support (Intel QAT does not, for instance).
However, I don't know if it's possible to query the crypto API to find out
whether there's an optimised CTS algorithm available.
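For illustration, such a template would be instantiated like any other
AEAD; the template and algorithm names below are assumptions styled
after an rfc6803 enctype:

    /* Sketch: hash-then-cipher AEAD from a CMAC hash and a CTS cipher. */
    struct crypto_aead *aead =
            crypto_alloc_aead("krb5enc(cmac(camellia),cts(cbc(camellia)))",
                              0, 0);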
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Add some constants from the sunrpc headers.
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Herbert Xu <herbert@gondor.apana.org.au>
cc: "David S. Miller" <davem@davemloft.net>
cc: Chuck Lever <chuck.lever@oracle.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Eric Dumazet <edumazet@google.com>
cc: Jakub Kicinski <kuba@kernel.org>
cc: Paolo Abeni <pabeni@redhat.com>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
cc: linux-nfs@vger.kernel.org
cc: linux-crypto@vger.kernel.org
cc: netdev@vger.kernel.org
Rather than working around the aliasing issue that leads to
unnecessary reloads when 'alg' is accessed directly, use the
__restrict keyword to explicitly tell the compiler that there is no
aliasing.
This generates equivalent if not superior code on x86 with gcc 12.
Note that in skcipher_walk_virt the alg assignment is moved after
might_sleep_if because that function is a compiler barrier and
forces a reload.
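The resulting signatures look like this (sketch):

    /* __restrict promises the compiler that 'walk' and 'req' never alias,
     * so values loaded through one need not be reloaded after a store
     * through the other. */
    int skcipher_walk_virt(struct skcipher_walk *__restrict walk,
                           struct skcipher_request *__restrict req,
                           bool atomic);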
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not
actually map anything, and the address it returns is just the address
from the kernel's direct map, where each sg entry's data is virtually
contiguous. To improve performance, stop unnecessarily clamping data
segments to page boundaries in this case.
For now, still limit segments to PAGE_SIZE. This is needed to prevent
preemption from being disabled for too long when SIMD is used, and to
support the alignmask case which still uses a page-sized bounce buffer.
Even so, this change still helps a lot in cases where messages cross a
page boundary. For example, testing IPsec with AES-GCM on x86_64, the
messages are 1424 bytes which is less than PAGE_SIZE, but on the Rx side
over a third cross a page boundary. These ended up being processed in
three parts, with the middle part going through skcipher_next_slow which
uses a 16-byte bounce buffer. That was causing a significant amount of
overhead which unnecessarily reduced the performance benefit of the new
x86_64 AES-GCM assembly code. This change solves the problem; all these
messages now get passed to the assembly code in one part.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove various functions that are no longer used.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a function that creates a scatterlist that represents the remaining
data in a walk. This will be used to replace chain_to_walk() in
net/tls/tls_device_fallback.c so that it will no longer need to reach
into the internals of struct scatter_walk.
Cc: Boris Pismenny <borisp@nvidia.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable
versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1
respectively. They follow the same argument order as memcpy_from_page()
and memcpy_to_page() from <linux/highmem.h>. Note that in the case of
memcpy_from_sglist(), this also happens to be the same argument order
that scatterwalk_map_and_copy() uses.
The new code is also faster, mainly because it builds the scatter_walk
directly without creating a temporary scatterlist. E.g., a 20%
performance improvement is seen for copying the AES-GCM auth tag.
Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist()
and memcpy_to_sglist(). Callers of scatterwalk_map_and_copy() should be
updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but
there are a lot of them so they aren't all being updated right away.
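Based on that description, the wrapper reduces to something like this
(a sketch matching the argument orders described above):

    void scatterwalk_map_and_copy(void *buf, struct scatterlist *sg,
                                  unsigned int start, unsigned int nbytes,
                                  int out)
    {
            if (out)
                    memcpy_to_sglist(sg, start, buf, nbytes);
            else
                    memcpy_from_sglist(buf, sg, start, nbytes);
    }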
Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk()
which are similar but operate on a scatter_walk instead of a
scatterlist. These will replace scatterwalk_copychunks() with the 'out'
argument 0 and 1 respectively. Their behavior differs slightly from
scatterwalk_copychunks() in that they automatically take care of
flushing the dcache when needed, making them easier to use.
scatterwalk_copychunks() itself is left unchanged for now. It will be
removed after its callers are updated to use other functions instead.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(). Also add scatterwalk_done_src() and
scatterwalk_done_dst() which consolidate scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done() or scatterwalk_pagedone().
A later patch will remove scatterwalk_done() and scatterwalk_pagedone().
The new code eliminates the error-prone 'more' parameter. Advancing to
the next sg entry now only happens just-in-time in scatterwalk_next().
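A sketch of the resulting source-walk loop ('process' is a placeholder;
the mapped address is stored in the walk itself):

    while (nbytes) {
            unsigned int len = scatterwalk_next(&walk, nbytes); /* clamp + map */

            process(walk.addr, len);
            scatterwalk_done_src(&walk, len); /* unmap + advance */
            nbytes -= len;
    }
    /* for destination walks, scatterwalk_done_dst() additionally flushes
     * the dcache when needed */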
The new code also pairs the dcache flush more closely with the actual
write, similar to memcpy_to_page(). Previously it was paired with
advancing to the next page. This is currently causing bugs where the
dcache flush is incorrectly being skipped, usually due to
scatterwalk_copychunks() being called without a following
scatterwalk_done(). The dcache flush may have been placed where it was
in order to not call flush_dcache_page() redundantly when visiting a
page more than once. However, that case is rare in practice, and most
architectures either do not implement flush_dcache_page() anyway or
implement it lazily where it just clears a page flag.
Another limitation of the old code was that by the time the flush
happened, there was no way to tell if more than one page needed to be
flushed. That has been sufficient because the code goes page by page,
but I would like to optimize that on !HIGHMEM platforms. The new code
makes this possible, and a later patch will implement this optimization.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add scatterwalk_skip() to skip the given number of bytes in a
scatter_walk. Previously support for skipping was provided through
scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was
confusing and less efficient.
Also add scatterwalk_start_at_pos() which starts a scatter_walk at the
given position, equivalent to scatterwalk_start() + scatterwalk_skip().
This addresses another common need in a more streamlined way.
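A sketch of the two equivalent ways of starting a walk at an offset:

    scatterwalk_start(&walk, sg);
    scatterwalk_skip(&walk, pos);

    scatterwalk_start_at_pos(&walk, sg, pos); /* same result in one call */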
Later patches will convert various users to use these functions.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The scatterwalk_* functions are designed to advance to the next sg entry
only when there is more data from the request to process. Compared to
the alternative of advancing after each step if !sg_is_last(sg), this
has the advantage that it doesn't cause problems if users accidentally
don't terminate their scatterlist with the end marker (which is an easy
mistake to make, and there are examples of this).
Currently, the advance to the next sg entry happens in
scatterwalk_done(), which is called after each "step" of the walk. It
requires the caller to pass in a boolean 'more' that indicates whether
there is more data. This works when the caller immediately knows
whether there is more data, though it adds some complexity. However in
the case of scatterwalk_copychunks() it's not immediately known whether
there is more data, so the call to scatterwalk_done() has to happen
higher up the stack. This is error-prone, and indeed the needed call to
scatterwalk_done() is not always made, e.g. scatterwalk_copychunks() is
sometimes called multiple times in a row. This causes a zero-length
step to get added in some cases, which is unexpected and seems to work
only by accident.
This patch begins the switch to a less error-prone approach where the
advance to the next sg entry happens just in time instead. For now,
that means just doing the advance in scatterwalk_clamp() if it's needed
there. Initially this is redundant, but it's needed to keep the tree in
a working state as later patches change things to the final state.
Later patches will similarly move the dcache flushing logic out of
scatterwalk_done() and then remove scatterwalk_done() entirely.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add a reqsize field to struct ahash_alg and use it to set the
default reqsize so that algorithms with a static reqsize are
not forced to create an init_tfm function.
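A sketch of the simplification this enables ('my_req_ctx' and 'my_alg'
are illustrative):

    /* Before: an init_tfm hook existed only to set the request size */
    static int my_init_tfm(struct crypto_ahash *tfm)
    {
            crypto_ahash_set_reqsize(tfm, sizeof(struct my_req_ctx));
            return 0;
    }

    /* After: declare it statically in the algorithm definition */
    static struct ahash_alg my_alg = {
            .reqsize = sizeof(struct my_req_ctx),
            /* ... */
    };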
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch adds virtual address support to ahash. Virtual addresses
were previously only supported through shash. The user may choose
to use virtual addresses with ahash by calling ahash_request_set_virt
instead of ahash_request_set_crypt.
The API will take care of translating this to an SG list if necessary,
unless the algorithm declares that it supports chaining. Therefore
in order for an ahash algorithm to support chaining, it must also
support virtual addresses directly.
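Usage then looks like this (a sketch; the argument order mirrors
ahash_request_set_crypt() with virtual addresses):

    /* Sketch: hash a kernel virtual buffer without building an SG list. */
    ahash_request_set_virt(req, data, digest, len);
    err = crypto_ahash_digest(req);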
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This adds request chaining to the ahash interface. Request chaining
allows multiple requests to be submitted in one shot. An algorithm
can elect to receive chained requests by setting the flag
CRYPTO_ALG_REQ_CHAIN. If this bit is not set, the API will break
up chained requests and submit them one-by-one.
A new err field is added to struct crypto_async_request to record
the return value for each individual request.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since unaligned operations are supported by the underlying algorithm,
ahash_save_req() and ahash_restore_req() can be greatly simplified to
preserve only the callback and data.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Set the request tfm directly in SYNC_SKCIPHER_REQUEST_ON_STACK since
the tfm is already available.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The callers of crypto_sig_sign() assume that the signature size is
always equivalent to the key size.
This happens to be true for RSA, which is currently the only algorithm
implementing the ->sign() callback. But it is false e.g. for X9.62
encoded ECDSA signatures because they have variable length.
Prepare for addition of a ->sign() callback to such algorithms by
letting the callback return the signature size (or a negative integer
on error). When testing the ->sign() callback in test_sig_one(),
use crypto_sig_maxsize() instead of crypto_sig_keysize() to verify that
the test vector's signature does not exceed an algorithm's maximum
signature size.
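Callers therefore change along these lines (sketch):

    int len = crypto_sig_sign(tfm, digest, dlen, sig, sig_buf_len);
    if (len < 0)
            return len; /* negative integer on error */
    /* 'len' is the number of signature bytes actually produced,
     * not necessarily the key size */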
There has been a relatively recent effort to upstream ECDSA signature
generation support which may benefit from this change:
https://lore.kernel.org/linux-crypto/20220908200036.2034-1-ignat@cloudflare.com/
However the main motivation for this commit is to reduce the number of
crypto_sig_keysize() callers: This function is about to be changed to
return the size in bits instead of bytes and that will require amending
most callers to divide the return value by 8.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Cc: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
skcipher_walk_done() has an unusual calling convention, and some of its
local variables have unclear names. Document it and rename variables to
make it a bit clearer what is going on. No change in behavior.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Due to the removal of the Niagara2 SPU driver, crypto_hash_walk_first(),
crypto_hash_walk_done(), crypto_hash_walk_last(), and struct
crypto_hash_walk are now only used in crypto/ahash.c. Therefore, make
them all private to crypto/ahash.c. I.e. un-export the two functions
that were exported, make the functions static, and move the struct
definition to the .c file. As part of this, move the functions to
earlier in the file to avoid needing to add forward declarations.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
gf128mul_4k_bbe(), gf128mul_bbe() and gf128mul_init_4k_bbe()
are part of the library originally added in 2006 by
commit c494e0705d67 ("[CRYPTO] lib: table driven multiplications in
GF(2^128)")
but have never been used.
Remove them.
(BBE stands for big-endian byte / big-endian bit order. Note that the
64k table version is still used, so I've left that in.)
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the physical address support in skcipher_walk is not used anymore,
remove all the code associated with it. This includes:
- The skcipher_walk_async() and skcipher_walk_complete() functions;
- The SKCIPHER_WALK_PHYS flag and everything conditional on it;
- The buffers, phys, and virt.page fields in struct skcipher_walk;
- struct skcipher_walk_buffer.
As a result, skcipher_walk now just supports virtual addresses.
Physical address support in skcipher_walk is unneeded because drivers
that need physical addresses just use the scatterlists directly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Merge tag 'v6.13-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"API:
- Add sig driver API
- Remove signing/verification from akcipher API
- Move crypto_simd_disabled_for_test to lib/crypto
- Add WARN_ON for return values from driver that indicates memory
corruption
Algorithms:
- Provide crc32-arch and crc32c-arch through Crypto API
- Optimise crc32c code size on x86
- Optimise crct10dif on arm/arm64
- Optimise p10-aes-gcm on powerpc
- Optimise aegis128 on x86
- Output full sample from test interface in jitter RNG
- Retry without padata when it fails in pcrypt
Drivers:
- Add support for Airoha EN7581 TRNG
- Add support for STM32MP25x platforms in stm32
- Enable iproc-r200 RNG driver on BCMBCA
- Add Broadcom BCM74110 RNG driver"
* tag 'v6.13-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (112 commits)
crypto: marvell/cesa - fix uninit value for struct mv_cesa_op_ctx
crypto: cavium - Fix an error handling path in cpt_ucode_load_fw()
crypto: aesni - Move back to module_init
crypto: lib/mpi - Export mpi_set_bit
crypto: aes-gcm-p10 - Use the correct bit to test for P10
hwrng: amd - remove reference to removed PPC_MAPLE config
crypto: arm/crct10dif - Implement plain NEON variant
crypto: arm/crct10dif - Macroify PMULL asm code
crypto: arm/crct10dif - Use existing mov_l macro instead of __adrl
crypto: arm64/crct10dif - Remove remaining 64x64 PMULL fallback code
crypto: arm64/crct10dif - Use faster 16x64 bit polynomial multiply
crypto: arm64/crct10dif - Remove obsolete chunking logic
crypto: bcm - add error check in the ahash_hmac_init function
crypto: caam - add error check to caam_rsa_set_priv_key_form
hwrng: bcm74110 - Add Broadcom BCM74110 RNG driver
dt-bindings: rng: add binding for BCM74110 RNG
padata: Clean up in padata_do_multithreaded()
crypto: inside-secure - Fix the return value of safexcel_xcbcmac_cra_init()
crypto: qat - Fix missing destroy_workqueue in adf_init_aer()
crypto: rsassa-pkcs1 - Reinstate support for legacy protocols
...
encrypt_blob(), decrypt_blob() and create_signature() were some of the
functions added in 2018 by
commit 5a30771832aa ("KEYS: Provide missing asymmetric key subops for new
key type ops [ver #2]").
However, they have never been used.
Remove them.
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Alternatively to the X9.62 encoding of ecdsa signatures, which uses
ASN.1 and is already supported by the kernel, there's another common
encoding called P1363. It stores r and s as the concatenation of two
big endian, unsigned integers. The name originates from IEEE P1363.
Add a P1363 template in support of the forthcoming SPDM library
(Security Protocol and Data Model) for PCI device authentication.
P1363 is prescribed by SPDM 1.2.1 margin no 44:
"For ECDSA signatures, excluding SM2, in SPDM, the signature shall be
the concatenation of r and s. The size of r shall be the size of
the selected curve. Likewise, the size of s shall be the size of
the selected curve. See BaseAsymAlgo in NEGOTIATE_ALGORITHMS for
the size of r and s. The byte order for r and s shall be in big
endian order. When placing ECDSA signatures into an SPDM signature
field, r shall come first followed by s."
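Concretely, for a curve with a 32-byte field size such as NIST P-256,
the encoding is just a fixed-width concatenation (a sketch; identifiers
are illustrative):

    /* r and s as big-endian byte strings, each zero-padded to the curve
     * size; r first, then s, with no ASN.1 framing. */
    u8 sig[64];

    memcpy(sig,      r_be, 32);
    memcpy(sig + 32, s_be, 32);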
Link: https://www.dmtf.org/sites/default/files/standards/documents/DSP0274_1.2.1.pdf
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
software_key_query() returns the maximum signature and digest size for a
given key to user space. When it only supported RSA keys, calculating
those sizes was trivial as they were always equivalent to the key size.
However when ECDSA was added, the function grew somewhat complicated
calculations which take the ASN.1 encoding and curve into account.
This doesn't scale well and adjusting the calculations is easily
forgotten when adding support for new encodings or curves. In fact,
when NIST P521 support was recently added, the function was initially
not amended:
https://lore.kernel.org/all/b749d5ee-c3b8-4cbd-b252-7773e4536e07@linux.ibm.com/
Introduce a ->max_size() callback to struct sig_alg and take advantage
of it to move the signature size calculations to ecdsa-x962.c.
Introduce a ->digest_size() callback to struct sig_alg and move the
maximum ECDSA digest size to ecdsa.c. It is common across ecdsa-x962.c
and the upcoming ecdsa-p1363.c and thus inherited by both of them.
For all other algorithms, continue using the key size as maximum
signature and digest size.
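A sketch of the two new callbacks in struct sig_alg (surrounding members
elided; the signatures are assumptions based on this description):

    struct sig_alg {
            /* ... */
            unsigned int (*max_size)(struct crypto_sig *tfm);    /* max signature size */
            unsigned int (*digest_size)(struct crypto_sig *tfm); /* max digest size */
            /* ... */
    };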
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
crypto_sig_maxsize() is a bit of a misnomer as it doesn't return the
maximum signature size, but rather the key size.
Rename it as well as all implementations of the ->max_size callback.
A subsequent commit introduces a crypto_sig_maxsize() function which
returns the actual maximum signature size.
While at it, change the return type of crypto_sig_keysize() from int to
unsigned int for consistency with crypto_akcipher_maxsize(). None of
the callers checks for a negative return value and an error condition
can always be indicated by returning zero.
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>