Cryptographers and cryptography engineers love to talk about the latest attacks and how to mitigate them. LadderLeak breaks ECDSA with less than 1 bit of nonce leakage? Raccoon attack brings the Hidden Number attack to finite field Diffie-Hellman in TLS?
And while this sort of research is important and fun, most software developers have much bigger problems to contend with when it comes to the cryptographic security of their products and services.
So let’s start by talking about Java cryptography.
Art by Khia.
Cryptography in Java
In Java, the way you’re supposed to encrypt data using symmetric cryptography is with the javax.crypto.Cipher class. So to encrypt with AES-GCM, you’d call Cipher.getInstance("AES/GCM/NoPadding") and use the resulting object to process your data. javax.crypto.Cipher can be used for a lot of ill-advised modes (including ECB mode) and ciphers (including DES).
Can you guess what class you’d use to encrypt data with RSA in Java?
https://www.youtube.com/watch?v=9nSQs0Gr9FA
That’s right! RSA goes in the Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding") hole.
(Or, more likely, Cipher.getInstance("RSA/ECB/PKCS1Padding"), which is not great.)
Art by Khia.
Also, as a reminder: You don’t want to encrypt data with RSA directly. You want to encrypt symmetric keys with RSA, and then encrypt your actual data with those symmetric keys. Preferably using a KEM+DEM paradigm.
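To make that concrete, here is a minimal sketch of the key-wrapping approach using only standard JCA classes. It is not a full KEM+DEM construction and not production code: key management, nonce bookkeeping, and error handling are all elided, and the class and method names are my own.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.PublicKey;
import java.security.SecureRandom;

public final class HybridEncryptSketch {
    // Returns { wrappedKey, nonce, ciphertext }; a real API would define a proper type for this.
    public static byte[][] encrypt(PublicKey rsaPublicKey, byte[] plaintext) throws Exception {
        // 1. Generate a fresh 256-bit AES key for this message.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey dataKey = keyGen.generateKey();

        // 2. Encrypt (wrap) the AES key with RSA-OAEP.
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.WRAP_MODE, rsaPublicKey);
        byte[] wrappedKey = rsa.wrap(dataKey);

        // 3. Encrypt the actual data with AES-GCM under a random 96-bit nonce.
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, dataKey, new GCMParameterSpec(128, nonce));
        byte[] ciphertext = aes.doFinal(plaintext);

        return new byte[][] { wrappedKey, nonce, ciphertext };
    }
}
```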
Fun anecdote: The naming of RSA/ECB/$padding is misleading because it doesn’t actually implement any sort of ECB mode. However, a few projects over the years missed that memo and decided to implement RSA-ECB so they could be “compatible” with what they thought Java did. That is: They broke long messages into equal-sized chunks (237 bytes for 2048-bit RSA), encrypted them independently, and then concatenated the ciphertexts together.
But it doesn’t end there. AES-GCM explodes brilliantly if you ever reuse a nonce. Naturally, the Cipher class shifts all of this burden onto the unwitting developer who calls it, which results in a regurgitated mess that looks like this (from the Java documentation):
```java
GCMParameterSpec s = ...;
cipher.init(..., s);

// If the GCM parameters were generated by the provider, it can
// be retrieved by:
//   cipher.getParameters().getParameterSpec(GCMParameterSpec.class);

cipher.updateAAD(...);  // AAD
cipher.update(...);     // Multi-part update
cipher.doFinal(...);    // conclusion of operation

// Use a different IV value for every encryption
byte[] newIv = ...;
s = new GCMParameterSpec(s.getTLen(), newIv);
cipher.init(..., s);
...
```
If you fail to perform this kata perfectly, you’ll introduce a nonce reuse vulnerability into your application.
And if you look a little deeper, you’ll also learn that their software implementation of AES (which is used on any platform without hardware AES available, such as Android on older hardware) isn’t hardened against cache-timing attacks… although their GHASH implementation is (which implies cache-timing attacks are within their threat model). But that’s an implementation problem, not a design problem.
Kludgey, hard-to-use, easy-to-misuse. It doesn’t have to be this way.
Learning From PHP
In 2015, when PHP 7 was being discussed on their mailing list, someone had the brilliant idea of creating a simple, cross-platform, extension-independent interface for getting random bytes and integers.
This effort would become random_bytes() and random_int() in PHP 7. (If you want to see how messy things were before PHP 7, take a look at the appropriate polyfill library.)
However, the initial design for this feature sucked really badly. Imagine the following code snippet:
```php
function makePassword(int $length = 20): string
{
    $password = '';
    for ($i = 0; $i < $length; ++$i) {
        $password .= chr(random_int(33, 124));
    }
    return $password;
}
```
If your operating system’s random number generator failed (e.g. you’re in a chroot and cannot access /dev/urandom), then the random_int() call would have returned false.
Because of type shenanigans in earlier versions of PHP, chr(false) returns a NUL byte. (This has been fixed since PHP 7.4 when strict_types is enabled, but the latest version at the time was PHP 5.6.)
After a heated debate on both the GitHub issue tracker for PHP and the internal mailing lists, the PHP project did the right thing: these functions now throw an Exception if they cannot safely generate random data.
Exceptions are developer-friendly: If you do not catch the exception, it kills the script immediately. If you decide to catch them, you can handle them in whatever graceful way you prefer. (Pretty error pages are often better than a white page and HTTP 500 status code, after all.)
Art by Khia.
In version 7.2 of the PHP programming language, they scored another win: libsodium was promoted into the PHP standard library.
However, this feature isn’t as good as it could be: It’s easy to mix up inputs to the libsodium API, since it expects string arguments instead of dedicated types (X25519SecretKey vs. X25519PublicKey). To address this, the open source community has provided PHP libraries that avoid this mistake.
(I bet you weren’t expecting to hear that PHP is doing better with cryptography than Java in 2021, but here we are!)
Art by Khia.
Towards Usable Cryptography Interfaces
How can we do better? At a minimum, we need to internalize Avi Douglen’s rule of usable security.
Security at the expense of usability comes at the expense of security.
I’d like to propose a set of tenets that cryptography libraries can use to self-evaluate the usability of their own designs and implementations. Keep in mind that these are tenets, not a checklist, so the spirit of the law means more than the literal narrowly-scoped interpretation.
1. Follow the Principle of Least Astonishment
Cryptography code should be boring, never astonishing.
For example: If you’re comparing cryptographic outputs, it should always be done in constant time, even when timing leaks would not help attackers (e.g. when validating a password hash).
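A minimal sketch of what that looks like in Java: lean on MessageDigest.isEqual(), which performs a constant-time comparison in modern JDKs, instead of hand-rolling a loop that bails out at the first mismatch.

```java
import java.security.MessageDigest;

public final class ConstantTimeCompare {
    // Compare an expected MAC tag against a received one without leaking
    // (via timing) how many leading bytes matched.
    public static boolean tagsMatch(byte[] expectedTag, byte[] receivedTag) {
        return MessageDigest.isEqual(expectedTag, receivedTag);
    }
}
```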
2. Provide High-Level Abstractions
For example: Sealed boxes.
Most developers don’t need to fiddle with RSA and AES to construct their own hybrid public-key encryption designs (as you’d need to with Java). What they really need is a simple way to say, “Encrypt this so that only the recipient can decrypt it, but not the sender.”
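Here is a hypothetical interface sketch (not an existing library’s API) of that level of abstraction; the key types are illustrative placeholders in the spirit of tenet 4 below.

```java
public interface SealedBox {
    // Placeholder key types; assume they validate and wrap the raw key bytes.
    record X25519PublicKey(byte[] bytes) {}
    record X25519SecretKey(byte[] bytes) {}

    // Anyone can seal a message to the recipient's public key...
    byte[] seal(byte[] plaintext, X25519PublicKey recipient);

    // ...but only the recipient, holding the matching secret key, can unseal it.
    byte[] unseal(byte[] ciphertext, X25519PublicKey recipientPublic, X25519SecretKey recipientSecret);
}
```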
This requires talking to your users and figuring out what their needs are instead of just assuming you know best.
3. Logically Separate Algorithm Categories
Put simply: Asymmetric cryptography belongs in a different API than symmetric cryptography. A similar separation should probably exist for specialized algorithms (e.g. multi-party computation and Shamir Secret Sharing).
Java may have got this one horribly wrong, but they’re not alone. The JWE specification (RFC 7518) also allows you to encrypt keys with:
- RSA with PKCS#1 v1.5 padding (asymmetric encryption)
- RSA with OAEP padding (asymmetric encryption)
- ECDH (asymmetric key agreement)
- AES-GCM (symmetric encryption)
If your users aren’t using a high-level abstraction, at least give them separate APIs for different algorithm types. If nothing else, it saves your users from having to ever type RSA/ECB again.
N.b. this means we should also collectively stop using the simple sign and verify verbs for symmetric authentication (i.e. HMAC) when those verbs imply digital signature algorithms (which are inherently asymmetric). Qualified verbs (verify -> sign_verify, mac_verify) are okay here.
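As a hypothetical sketch (again, not an existing API), the qualified-verb approach might look like this, so that a MAC check and a signature check can never be spelled the same way:

```java
interface SymmetricAuthenticator {
    byte[] auth(byte[] message);                     // compute an HMAC tag
    boolean authVerify(byte[] message, byte[] tag);  // check an HMAC tag
}

interface SignatureScheme {
    byte[] sign(byte[] message);                            // asymmetric digital signature
    boolean signVerify(byte[] message, byte[] signature);   // verify a digital signature
}
```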
4. Use Type-Safe Interfaces
If you allow any arbitrary string or byte array to be passed as the key, IV, etc. in your cryptography library, someone will misuse it.
Instead, you should have dedicated {structs, classes, types} (select appropriate) for each different kind of cryptography key your library expects. These keys should also provide guarantees about their contents (e.g. an Aes256GcmKey is always 32 bytes).
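A minimal sketch of such a dedicated key type follows; the class name Aes256GcmKey is illustrative, and the point is that an instance simply cannot exist unless its invariant (exactly 32 bytes of key material) holds.

```java
public final class Aes256GcmKey {
    private final byte[] keyMaterial;

    public Aes256GcmKey(byte[] keyMaterial) {
        if (keyMaterial == null || keyMaterial.length != 32) {
            throw new IllegalArgumentException("AES-256-GCM keys must be exactly 32 bytes");
        }
        // Defensive copy so callers can't mutate the key out from under us.
        this.keyMaterial = keyMaterial.clone();
    }

    public byte[] getBytes() {
        return keyMaterial.clone();
    }
}
```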
5. Defaults Matter
If you instantiate a new SymmetricCipher class, its default state should be an authenticated mode; never ECB.
The default settings are the ones that 80% of real world users should be using, if not a higher percentage.
6. Reduce Cognitive Load
If you’re encrypting data, why should your users even need to know what a nonce is to use it safely?
You MAY allow an explicit nonce if it makes sense for their application, but if they don’t provide one, generate a random nonce and handle it for them.
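A minimal sketch of what that can look like with AES-GCM and the standard JCA: the wrapper generates a random 96-bit nonce internally and prepends it to the ciphertext, so the caller never has to know nonces exist. (This assumes the usual GCM limits on how many messages you encrypt under one key; key management and error handling are elided.)

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.ByteBuffer;
import java.security.SecureRandom;

public final class SimpleAeadSketch {
    private static final SecureRandom RNG = new SecureRandom();

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] nonce = new byte[12];
        RNG.nextBytes(nonce); // the caller never sees or chooses this

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Store as nonce || ciphertext so the matching decrypt() can split them apart.
        return ByteBuffer.allocate(nonce.length + ciphertext.length)
                .put(nonce)
                .put(ciphertext)
                .array();
    }
}
```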
Aside: There’s an argument that we should have a standard for committing, deterministic authenticated encryption. If you need non-determinism, stick a random 256-bit nonce in the AAD and you get that property too. I like to call this combination AHEAD (Authenticated Hedged Encryption with Associated Data), in the same vein as Hedged Signatures.
The fewer choices a user has to make to get their code working correctly, the less likely they are to accidentally introduce a subtle flaw that makes their application hideously insecure.
7. Don’t Fail at Failure
If what you’re doing is sensitive to error oracles (e.g. padding), you have to be very careful about how you fail.
For example: RSA decryption with PKCS#1 v1.5 padding. Doing a constant-time swap between the actual plaintext and a random throwaway value, so that any decryption error surfaces later in the symmetric decryption step, is better than aborting immediately.
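The core primitive behind that countermeasure is a branch-free select; a minimal sketch follows. Real implementations are considerably more involved, because the padding check itself must also avoid data-dependent branches.

```java
public final class ConstantTimeSelect {
    // Returns a copy of ifValid when valid != 0, otherwise a copy of fallback,
    // without ever branching on `valid`. Both arrays must have the same length.
    public static byte[] select(int valid, byte[] ifValid, byte[] fallback) {
        int m = (valid | -valid) >> 31;  // -1 (all bits set) when valid != 0, else 0
        byte mask = (byte) m;
        byte[] out = new byte[ifValid.length];
        for (int i = 0; i < ifValid.length; i++) {
            out[i] = (byte) ((ifValid[i] & mask) | (fallback[i] & ~mask));
        }
        return out;
    }
}
```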
Conversely, if you’re depending on a component to generate randomness for you and it fails, it shouldn’t fail silently and return bad data.
Security is largely about understanding how systems fail, so there’s no one-size-fits-all answer for this. However, the exact failure mechanism for a cryptographic feature should be decided very thoughtfully.
8. Minimize Runtime Negotiation
This is more of a concern for applications than libraries, but it bears mentioning here: The algorithm you’re using shouldn’t be something an attacker can decide. It should be determined at compile-time, not at run-time.
For example: There were several vulnerabilities in JSON Web Tokens where you could swap out the alg header to none (which removed all security), or from RS256 (RSA signed) to HS256… which meant the RSA public key was being used as an HMAC symmetric key. (See tenet 4 above.)
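A hypothetical sketch (not any particular JWT library’s API) of what minimizing negotiation looks like in practice: the application hard-codes the one algorithm it expects and refuses to verify anything else, before any cryptography runs.

```java
public final class PinnedAlgorithmCheck {
    // Decided at build time by the application, never read from the token itself.
    private static final String EXPECTED_ALG = "RS256";

    public static void requireExpectedAlgorithm(String tokenHeaderAlg) {
        if (!EXPECTED_ALG.equals(tokenHeaderAlg)) {
            throw new SecurityException("Unexpected JWT algorithm; refusing to verify");
        }
    }
}
```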
Header art by Scruff Kerfluff.
https://soatok.blog/2021/02/24/cryptography-interface-design-is-a-security-concern/
#API #APIDesign #crypto #cryptography #cryptographyLibraries #SecurityGuidance
A paper was published on the IACR’s ePrint archive yesterday, titled LadderLeak: Breaking ECDSA With Less Than One Bit of Nonce Leakage. The ensuing discussion on /r/crypto led to several interesting questions that I thought would be worth capturing and answering in detail.
What’s Significant About the LadderLeak Paper?
This is best summarized by Table 1 from the paper.
The sections labeled “This work” are what’s new/significant about this research.
The paper authors were able to optimize existing attacks exploiting one-bit leakages against 192-bit and 160-bit elliptic curves. They were further able to exploit leakages of less than one bit in the same curves.
How Can You Leak Less Than One Bit?
We’re used to discrete quantities in computer science, but you can leak less than one bit of information through side-channels. Biased modular reduction can also create a vulnerable scenario: if the probability of a 0 or a 1 in a given bit position of the one-time number (e.g. the most significant bit) is not 0.5 to 0.5 but some other ratio (say 0.51 to 0.49), then across many samples that bias leaks partial information about the nonces: less than a full bit per signature.
If “less than one bit” sounds strange, that’s probably our fault for always rounding up to the nearest bit when we express costs in computer science.
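Here is a toy sketch of the modular-bias idea, with deliberately tiny numbers so the skew is visible; it is not the LadderLeak attack itself. Reducing a uniform 8-bit value modulo 251 makes the most significant bit of the result slightly less likely to be 1 than a fair coin, partly because 251 is not a power of two and partly because the naive reduction favors small values.

```java
import java.security.SecureRandom;

public final class ModularBiasToy {
    public static void main(String[] args) {
        final int prime = 251;          // largest prime below 2^8
        final int samples = 1_000_000;
        SecureRandom rng = new SecureRandom();

        int msbSet = 0;
        for (int i = 0; i < samples; i++) {
            int reduced = rng.nextInt(256) % prime;  // naive "random value mod prime" nonce
            if (reduced >= 128) {                    // most significant bit of the 8-bit result
                msbSet++;
            }
        }
        // Prints roughly 0.48 rather than 0.50: a small bias (less than one bit of
        // information per sample), but measurable once you collect enough samples.
        System.out.printf("P(msb = 1) ~ %.4f%n", msbSet / (double) samples);
    }
}
```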
What’s the Cost of the Attack?
Consult Table 3 from the paper for empirical cost data:
Table 3 from the LadderLeak paper.
How Devastating is LadderLeak?
First, it assumes a lot of things:
- That you’re using ECDSA with either sect163r1 or secp192r1 (NIST P-192). Breaking larger curves requires more bits of bias (as far as we know).
- That you’re using a cryptography library with cache-timing leaks.
- That you have a way to measure the timing leaks (and not just pilfer the ECDSA secret key outright; e.g. in a TPM setup). This threat model generally assumes some sort of physical access.
But if you can pull the attack off, you can successfully recover the device’s ECDSA secret key. Which, for protocols like TLS, allows an attacker to impersonate a certificate-bearer (typically the server)… which is pretty devastating.
Is ECDSA Broken Now?
Non-deterministic ECDSA is not significantly more broken with LadderLeak than it already was by other attacks. LadderLeak does not break the Internet.
Fundamentally, LadderLeak doesn’t really change the risk calculus. Bleichenbacher’s attack framework for solving the Hidden Number Problem using lattices was already practical, with sufficient samples.
There’s even a CryptoPals challenge about these attacks.
As an acquaintance put it, the authors made a time-memory trade-off with a leaky oracle. It’s a neat result worthy of publication, but we aren’t any minutes closer to midnight with this revelation.
Is ECDSA’s k-value Really a Nonce?
Ehhhhhhhhh, sorta. It’s complicated!
Nonce in cryptography has always meant “number that must be used only once” (typically per key). See: AES-GCM.
Nonces are often confused with initialization vectors (IVs), which, in addition to a nonce’s non-reuse requirement, must also be unpredictable. See: AES-CBC.
However, nonces and IVs can both be public, whereas ECDSA k-values MUST NOT be public! If you recover the k-value for a given signature, you can recover the secret key too.
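To see why, recall the textbook ECDSA signing equation (this is standard ECDSA, not something specific to LadderLeak); rearranging it shows that anyone who learns k for even one signature (r, s) over a message m can solve directly for the long-term secret key d:

```
s ≡ k⁻¹ · (H(m) + r·d)   (mod n)
d ≡ r⁻¹ · (s·k - H(m))   (mod n)
```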
That is to say, ECDSA k-values must be all of the above:
- Never reused
- Unpredictable
- Secret
- Unbiased
They’re really in a class of their own.
For that reason, it’s probably better to think of the k-value as a per-signature key than as a simple nonce. (N.b. many cryptography libraries actually implement it as a one-time ECDSA keypair.)
What’s the Difference Between Random and Unpredictable?
The HMAC-SHA256 output of a message under a secret key is unpredictable to anyone not in possession of said secret key. This value, though unpredictable, is not random, since authenticating the same message twice yields the same output.
A large random integer, when subjected to modular reduction by a non-Mersenne prime of the same magnitude, will be biased towards small values. This bias may be negligible, but it makes the bit string that represents the reduced integer more predictable, even though it’s random.
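To make the first point concrete, here is a minimal Java sketch: the same key and message always produce the same HMAC-SHA256 tag, yet guessing that tag without the key is infeasible. (The all-zero key here is purely a placeholder for illustration; a real key would be secret and random.)

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public final class UnpredictableNotRandom {
    public static void main(String[] args) throws Exception {
        byte[] key = new byte[32]; // all-zero toy key, for illustration only
        byte[] message = "same message".getBytes(StandardCharsets.UTF_8);

        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] first = mac.doFinal(message);
        byte[] second = mac.doFinal(message); // the Mac resets after doFinal(), so this recomputes

        System.out.println("Deterministic: " + Arrays.equals(first, second)); // prints: Deterministic: true
    }
}
```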
What Should We Do? How Should We Respond?
First, don’t panic. This is interesting research and its authors deserve to enjoy their moment, but the sky is not falling.
Second, acknowledge that none of the attacks are effective against EdDSA.
If you feel the urge to do something about this attack paper, file a support ticket with all of your third-party vendors and business partners that handle cryptographic secrets to ask them if/when they plan to support EdDSA (especially if FIPS compliance is at all relevant to your work, since EdDSA is coming to FIPS 186-5).
Reason: With increased customer demand for EdDSA, more companies will adopt this digital signature algorithm (which is much more secure against real-world attacks). Thus, we can ensure an improved attack variant that actually breaks ECDSA doesn’t cause the sky to fall and the Internet to be doomed.
(Seriously, I don’t think most companies can overcome their inertia regarding ECDSA to EdDSA migration if their customers never ask for it.)
https://soatok.blog/2020/05/26/learning-from-ladderleak-is-ecdsa-broken/
#crypto #cryptography #digitalSignatureAlgorithm #ECDSA #ellipticCurveCryptography #LadderLeak