Did you know that, in the Furry Fandom, the most popular choice in species for one’s fursona is actually a hybrid?
Source: FurScience
Of course, we’re not talking about that kind of hybrid today. I just thought it was an amusing coincidence.
Art: Lynx vs Jackalope
Nor are we talking about what comes to mind for engineers accustomed to classical cryptography when you say Hybrid.
(Such engineers typically envision some combination of asymmetric key encapsulation with symmetric encryption; because too many people encrypt with RSA directly and the sane approach is often described as a Hybrid Cryptosystem in the literature.)
Rather, Hybrid Cryptography in today’s context refers to:
Cryptography systems that use a post-quantum cryptography algorithm, combined with one of the algorithms deployed today that aren’t resistant to quantum computers.
If you need to differentiate the two, PQ-Hybrid might be a better designation.
Why Hybrid Cryptosystems?
At some point in the future (years or decades from now), humanity may build a practical quantum computer. This will be a complete disaster for all of the cryptography deployed on the Internet today.
In response to this distant existential threat, cryptographers have been hard at work designing and attacking algorithms that remain secure even when quantum computers arrive. These algorithms are classified as post-quantum cryptography (mostly to distinguish them from techniques that use quantum computers to facilitate cryptography rather than attack it, which is “quantum cryptography” and not really worth our time talking about). Post-quantum cryptography is often abbreviated as “PQ Crypto” or “PQC”.
However, a lot of the post-quantum cryptography designs are relatively new or comparatively less studied than their classical (pre-quantum) counterparts. Several of the Round 1 candidates to NIST’s post-quantum cryptography project were broken immediately (PDF). The exploit code referenced in that PDF is duplicated below:
#!/usr/bin/env python3
import binascii, struct

def recover_bit(ct, bit):
    assert bit < len(ct) // 4000
    ts = [struct.unpack('BB', ct[i:i+2]) for i in range(4000*bit, 4000*(bit+1), 2)]
    xs, ys = [a for a, b in ts if b == 1], [a for a, b in ts if b == 2]
    return sum(xs) / len(xs) >= sum(ys) / len(ys)

def decrypt(ct):
    res = sum(recover_bit(ct, b) << b for b in range(len(ct) // 4000))
    return int.to_bytes(res, len(ct) // 4000 // 8, 'little')

kat = 0
for l in open('KAT_GuessAgain/GuessAgainEncryptKAT_2000.rsp'):
    if l.startswith('msg = '):
        # only used for verifying the recovered plaintext.
        msg = binascii.unhexlify(l[len('msg = '):].strip())
    elif l.startswith('c = '):
        ct = binascii.unhexlify(l[len('c = '):].strip())
        print('{}attacking known-answer test #{}'.format('\n' * (kat > 0), kat))
        print('correct plaintext:   {}'.format(binascii.hexlify(msg).decode()))
        plain = decrypt(ct)
        print('recovered plaintext: {} ({})'.format(binascii.hexlify(plain).decode(), plain == msg))
        kat += 1
More pertinent to our discussion: Rainbow, one of the Round 3 Finalists for post-quantum digital signature algorithms, was discovered in 2020 to be much easier to attack than previously thought. Specifically, for the third round parameters, the attack cost was reduced by a dramatic margin at each of the three security levels.
That security reduction is a tad more concerning than a Round 1 candidate being totally broken: by the time the attack was discovered, NIST had already concluded that Rainbow was a good signature algorithm. Maybe there are similar attacks just waiting to be found?
Given that new cryptography is accompanied by less confidence than incumbent cryptography, hybrid designs are an excellent way to mitigate the risk of attack advancements in post-quantum cryptography:
If the security of your system requires breaking the cryptography used today AND breaking one of the new-fangled designs, you’ll always be at least as secure as the stronger algorithm.
Art: Lynx vs Jackalope
Why Is Hybrid Cryptography Controversial?
Despite the risks of greenfield cryptographic algorithms, the NSA has begun recommending a strictly-PQ approach to cryptography and has explicitly stated that it will not require hybrid designs.
Another pushback on hybrid cryptography comes from Uri Blumenthal of MIT’s Lincoln Labs on the IETF CFRG mailing list (the acronym CRQC expands to “Cryptographically-Relevant Quantum Computer”):
Here are the possibilities and their relation to the usefulness of the Hybrid approach.

1. CRQC arrived, Classic hold against classic attacks, PQ algorithms hold – Hybrid is useless.
2. CRQC arrived, Classic hold against classic attacks, PQ algorithms fail – Hybrid is useless.
3. CRQC arrived, Classic broken against classic attacks, PQ algorithms hold – Hybrid is useless.
4. CRQC arrived, Classic broken against classic attacks, PQ algorithms broken – Hybrid is useless.
5. CRQC doesn’t arrive, Classic hold against classic attacks, PQ algorithms hold – Hybrid is useless.
6. CRQC doesn’t arrive, Classic hold against classic attacks, PQ algorithms broken – Hybrid helps.
7. CRQC doesn’t arrive, Classic broken against classic attacks, PQ algorithms hold – Hybrid is useless.
8. CRQC doesn’t arrive, Classic broken against classic attacks, PQ algorithms broken – Hybrid is useless.
Uri Blumenthal, IETF CFRG mailing list, December 2021 (link)
Why Hybrid Is Actually A Damn Good Idea
Art: Scruff Kerfluff
Uri’s risk analysis is, of course, flawed. And I’m not the first to disagree with him.
First, Uri’s framing sort of implies that each of the 8 possible outcomes of these 3 boolean variables is roughly equally likely.
It’s very tempting to look at this and think, “Wow, that’s a lot of work for something that only helps in 12.5% of possible outcomes!” Uri didn’t explicitly state this assumption, and he might not even believe that, but it is a cognitive trap that emerges in the structure of his argument, so watch your step.
Second, for many candidate algorithms, we’re already in scenario 6 that Uri outlined! It’s not some hypothetical future, it’s the present state of affairs.
To wit: The advances in cryptanalysis on Rainbow don’t totally break it in a practical sense, but they do reduce the security by a devastating margin (which will require significantly larger parameter sets and performance penalties to remedy).
For many post-quantum algorithms, we’re still uncertain about which scenario is most relevant. But PQ algorithms are being successfully attacked, a cryptographically-relevant quantum computer still hasn’t arrived, and classical algorithms are still holding up fine. It’s very clear that “hybrid helps” is the world we most likely inhabit today, and likely will for many years (until the existence of quantum computers is finally settled).
Finally, even in other scenarios (which are more relevant for other post-quantum algorithms), hybrid doesn’t significantly hurt security. It does carry a minor cost to bandwidth and performance, and it does mean having a larger codebase to review when compared with jettisoning the algorithms we use today, but I’d argue that the existing code is relatively low risk compared to new code.
From what I’ve read, the NSA didn’t make as strong an argument as Uri; they said hybrid would not be required, but didn’t go so far as to attack it.
Hybrid cryptography is a long-term bet: contrasted with strictly-PQ and no-PQ approaches, it will protect the most users from cryptanalytic advancements.
Why The Hybrid Controversy Remains Unsettled
Even if we can all agree that hybrid is the way to go, there’s still significant disagreement on exactly how to do it.
Hybrid KEMs
There are two schools of thought on hybrid Key Encapsulation Mechanisms (KEMs):
- Wrap the post-quantum KEM in the encrypted channel created by the classical KEM.
- Use both the post-quantum KEM and classical KEM as inputs to a secure KDF, then use a single encrypted channel secured by both.
The first option (layered) has the benefit of making migrations smoother. You can begin with classical cryptography (e.g., ECDHE in TLS ciphersuites), which is what most systems online support today. Then you can do your post-quantum cryptography inside the existing channel to create a post-quantum-secure channel. This also lends itself to opportunistic upgrades (which might not be a good idea).
The second option (composite) has the benefit of making the security of your protocol all-or-nothing: you cannot attack the weak part now and the strong part later. Deriving the session keys requires attacking both algorithms in order to gain access to the plaintext. Additionally, you only need a single layer; the complexity lives entirely within the handshake, instead of in every packet.
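To make the composite option concrete, here’s a minimal sketch in Python of the KDF-combiner step, using the pyca/cryptography package. The `pq_shared_secret` input and the `info` label are hypothetical stand-ins; in a real protocol they’d come from whatever classical and post-quantum KEMs you actually pair together.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def combine_shared_secrets(classical_shared_secret: bytes, pq_shared_secret: bytes) -> bytes:
    """Derive a single session key that depends on BOTH KEM outputs."""
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"example-pq-hybrid-kem-v1",  # hypothetical protocol label
    ).derive(classical_shared_secret + pq_shared_secret)  # both inputs mixed in
```

Because the derived key depends on both secrets, an attacker has to break both the classical KEM and the post-quantum KEM to recover it.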
Personally, I think composite is a better option for security than layered.
Hybrid Signatures
There are, additionally, two different schools of thought on hybrid digital signature algorithms. However, the difference is more subtle than with KEMs.
- Require separate classical signatures and post-quantum signatures.
- Specify a composite mode that combines the two together and treat it as a distinct algorithm.
To better illustrate what this looks like, I outlined what a composite hybrid digital signature algorithm could look like on the CFRG mailing list:
primary_seed := randombytes_buf(64) // store this
ed25519_seed := hash_sha512256(PREFIX_CLASSICAL || primary_seed)
pq_seed := hash_sha512256(PREFIX_POSTQUANTUM || primary_seed)
ed25519_keypair := crypto_sign_seed_keypair(ed25519_seed)
pq_keypair := pqcrypto_sign_seed_keypair(pq_seed)
Your composite public key would be your Ed25519 public key, followed by your post-quantum public key. Since Ed25519 public keys are always 32 bytes, this is easy to implement securely.
Every composite signature would be an Ed25519 signature concatenated with the post-quantum signature. Since Ed25519 signatures are always 64 bytes, this leads to a predictable signature size relative to the post-quantum signature.
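As an illustration of those two properties (a sketch, not the CFRG proposal itself), here’s what composite signing and verification could look like in Python; `pq_sign` and `pq_verify` are hypothetical stand-ins for the post-quantum algorithm’s functions.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

ED25519_SIG_LEN = 64  # Ed25519 signatures are always 64 bytes

def composite_sign(ed_key: Ed25519PrivateKey, pq_sign, message: bytes) -> bytes:
    # Fixed-size Ed25519 signature first, then the post-quantum signature.
    return ed_key.sign(message) + pq_sign(message)

def composite_verify(ed_pub: Ed25519PublicKey, pq_verify, message: bytes, sig: bytes) -> bool:
    ed_sig, pq_sig = sig[:ED25519_SIG_LEN], sig[ED25519_SIG_LEN:]
    try:
        ed_pub.verify(ed_sig, message)  # raises InvalidSignature on failure
    except InvalidSignature:
        return False
    return pq_verify(message, pq_sig)  # BOTH signatures must check out
```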
The main motivation for preferring a composite hybrid signature over a detached hybrid signature is to push the hybridization of cryptography lower in the stack so developers don’t have to think about these details. They just select HYBRIDSIG1 or HYBRIDSIG2 in their ciphersuite configuration, and cryptographers get to decide what that means.
TL;DR
Hybrid designs of post-quantum crypto are good, and I think composite hybrid designs make the most sense for both KEMs and signatures.
https://soatok.blog/2022/01/27/the-controversy-surrounding-hybrid-cryptography/
#asymmetricCryptography #classicalCryptography #cryptography #digitalSignatureAlgorithm #hybridCryptography #hybridDesigns #KEM #keyEncapsulationMechanism #NISTPostQuantumCryptographyProject #NISTPQC #postQuantumCryptography
Let me state up front that, while we’re going to be talking about an open source project that was recently submitted to Hacker News’s “Show HN” section, the intent of this post is not at all to shame the developer who tried their damnedest to do the right thing. They’re the victim, not the culprit.

RSA, Ya Don’t Say
Earlier this week, an HN user shared their open source fork of a Facebook Messenger client, with added encryption. Their motivation was, as stated in the README:

It is known that Facebook scans your messages. If you need to keep using Facebook Messenger but care about privacy, Zuccnet might help.

It’s pretty simple: you and your friend have Zuccnet installed. Your friend gives you their Zuccnet public key. Then, when you send a message to your friend on Zuccnet, your message is encrypted on your machine before it is sent across Facebook to your friend. Then, your friend’s Zuccnet decrypts the message. Facebook never sees the content of your message.
I’m not a security person and there’s probably some stuff I’ve missed – any contributions are very welcome! This is very beta, don’t take it too seriously.
From Zuccnet’s very humble README.
So far, so good. Facebook is abysmal for privacy, so taking matters into your own hands and encrypting your data so that Facebook can’t see what you’re talking about is, in spirit, a wonderful idea.

(Art by Khia.)
However, there is a problem with the execution of this idea. And this isn’t a problem unique to Zuccnet. Several times per year, I come across some well-meaning software project that makes the same mistake: Encrypting messages with RSA directly is bad.
From the Zuccnet source code:
const encryptMessage = (message, recipientPublicKey) => {
  const encryptedMessage = crypto.publicEncrypt(
    {
      key: recipientPublicKey,
      padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
      oaepHash: "sha256",
    },
    Buffer.from(message),
  );
  return encryptedMessage.toString("base64");
};

/**
 *
 * @param {String} encryptedMessage - base64 encoded string
 */
const decryptMessage = encryptedMessage => {
  const encryptedMessageBuffer = Buffer.from(encryptedMessage, "base64");
  const { privateKey } = getOrCreateZuccnetKeyPair();
  const message = crypto.privateDecrypt(
    {
      key: privateKey,
      padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
      oaepHash: "sha256",
    },
    Buffer.from(encryptedMessageBuffer),
  );
};
To the Zuccnet author’s credit, they’re using OAEP padding, not PKCS#1 v1.5 padding. This means their code isn’t vulnerable to Bleichenbacher’s 1998 padding oracle attack (n.b. most of the RSA code I encounter in the wild is vulnerable to this attack).
However, there are other problems with this code:
- If you try to encrypt a message longer than 190 bytes with a 2048-bit RSA public key and OAEP-SHA256, it will fail. (The 256-byte modulus loses 66 bytes to OAEP overhead. And bytes matter here, not characters, even for English speakers–because emoji.)
- This design (encrypting with a static RSA public key per recipient) completely lacks forward secrecy. This is the same reason that PGP encryption sucks (or, at least, one of the reasons PGP sucks).
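To see the first limitation concretely, here’s a quick Python demonstration using the pyca/cryptography package (the 190-byte cap is the 256-byte modulus minus OAEP-SHA256’s 2·32 + 2 bytes of overhead):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

key.public_key().encrypt(b"A" * 190, oaep)      # fine: exactly at the limit
try:
    key.public_key().encrypt(b"A" * 191, oaep)  # one byte too many
except ValueError as err:
    print("encryption failed:", err)
```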
There are many ways to work around the first limitation.
Some cryptography libraries let you treat RSA as a block cipher in ECB mode and encrypt each chunk independently. This is an incredibly stupid API design choice: it’s slow (asymmetric cryptography operations are on the order of tens to hundreds of thousands of times slower than symmetric cryptography), and you can drop/reorder/replay blocks, since ECB mode provides no semantic security.
I have strong opinions about cryptographic library design.
(Art by Swizz.)

A much better strategy is to encrypt the data with a symmetric key, then encrypt that key with RSA. (See the end of the post for special treatment options that are especially helpful for RSA with PKCS#1 v1.5 padding.)
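Here’s a minimal Python sketch of that strategy, using the pyca/cryptography package. The function names and return layout are illustrative, not a vetted design:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

def hybrid_encrypt(message: bytes, rsa_public_key):
    dek = AESGCM.generate_key(bit_length=256)  # fresh symmetric key per message
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, message, None)
    wrapped_dek = rsa_public_key.encrypt(dek, OAEP)  # RSA only ever sees 32 bytes
    return wrapped_dek, nonce, ciphertext

def hybrid_decrypt(wrapped_dek, nonce, ciphertext, rsa_private_key) -> bytes:
    dek = rsa_private_key.decrypt(wrapped_dek, OAEP)
    return AESGCM(dek).decrypt(nonce, ciphertext, None)
```

Note that this removes the message-length limit (and the temptation to chunk), but it does nothing for the second problem: there’s still no forward secrecy.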
Working around the second problem usually requires an Authenticated Key Exchange (AKE), similar to what I covered in my Guide to End-to-End Encryption. Working around this second problem also solves the first problem, so it’s usually better to just implement a forward-secret key exchange protocol than try to make RSA secure.
(You can get forward secrecy without an AKE, by regularly rotating keys, but AKEs make forward secrecy automatic and on-by-default without forcing humans to make a decision to rotate a credential– something most people don’t do unless they have to. AKEs trade user experience complexity for protocol complexity–and this trade-off is almost universally worth taking.)
Although AKEs are extremely useful, they’re a bit complex for most software developers to pick up without prior cryptography experience. (If they were easier, after all, there wouldn’t be so much software that encrypts messages directly with RSA in the first place.)
Note: RSA itself isn’t the reason that this lacks forward secrecy. The problem is how RSA is used.
Recommendations
For Developers
First, consider not using RSA. Hell, while you’re at it, don’t write any cryptography code that you don’t have to.

Libsodium (which you should use) does most of this for you, and can easily be turned into an AKE comparable to the one Signal uses. The less cryptography code you have to write, the less that can go catastrophically wrong–especially in production systems.
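For instance, with libsodium’s Python bindings (PyNaCl), anonymous public-key encryption is a couple of lines, with nonce management and key encapsulation handled for you. (A sealed box is not an AKE and doesn’t provide forward secrecy by itself; this just shows the API shape.)

```python
from nacl.public import PrivateKey, SealedBox

recipient = PrivateKey.generate()

# Sender: encrypt to the recipient's public key. No manual nonces, no RSA.
sealed = SealedBox(recipient.public_key).encrypt(b"attack at dawn")

# Recipient: decrypt with the matching private key.
assert SealedBox(recipient).decrypt(sealed) == b"attack at dawn"
```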
If jettisoning RSA from your designs is a non-starter, you should at least consider taking the Dhole Moments Pledge for Software Developers:
I will not encrypt messages directly with RSA, or any other asymmetric primitive.

Simple enough, right?
Instead, if you find yourself needing to encrypt a message with RSA, remind yourself that RSA is for encrypting symmetric keys, not messages. Then plan your protocol design accordingly.

Also, I’m pretty sure RSA isn’t random-key robust. Ask your favorite cryptographer whether that matters for whatever you’re building.
(But seriously, you’re better off not using RSA at all.)
For Cryptography Libraries
Let’s ask ourselves, “Why are we forcing developers to know or even care about these details?”

Libsodium doesn’t encumber developers with unnecessary decisions like this. Why does the crypto module built into JavaScript? Why do the crypto modules built into most other programming languages that offer one, for that matter? (Go is a notable exception here, because their security team is awesome and forward-thinking.)
In my opinion, we should stop shipping cryptography interfaces that…
- Mix symmetric and asymmetric cryptography in the same API
- Allow developers to encrypt directly with asymmetric primitives
- Force developers to manage their own nonces/initialization vectors
- Allow public/private keys to easily get confused (e.g. lack of type safety)
For example: Dhole Crypto is close to my ideal for general-purpose encryption.
Addendum: Securing RSA with PKCS#1 v1.5
Update: Neil Madden informs me that what I wrote here is actually very similar to a standard construction called RSA-KEM. You should use RSA-KEM instead of what I’d sketched out, since that’s better studied by cryptographers.

(I’ve removed the original sketch, to prevent accidental misuse.)
https://soatok.blog/2021/01/20/please-stop-encrypting-with-rsa-directly/
#asymmetricCryptography #cryptography #RSA #SecurityGuidance #symmetricCryptography