There is, at the time of this writing, an ongoing debate in the Crypto Forum Research Group (CFRG) at the IETF about KEM combiners.
One of the participants, Deirdre Connolly, wrote a blog post titled How to Hold KEMs. The subtitle is refreshingly honest: “A living document on how to juggle these damned things.”
Deirdre also co-authored the paper describing a Hybrid KEM called X-Wing, which combines X25519 with ML-KEM-768 (which is the name for a standardized tweak of Kyber, which I happened to opine on a few years ago).
After sharing a link to Deirdre’s blog in a few places, several friendly folk expressed confusion about KEMs in general.
So very briefly, here’s an introduction to Key Encapsulation Mechanisms in general, to serve as a supplementary resource for any future discussion on KEMs.
You shouldn’t need to understand lattices or number theory, or even how to read a mathematics paper, to understand this stuff.
Building Intuition for KEMs
For the moment, let’s suspend most real-world security risks and focus on a simplified, ideal setting.
To begin with, you need some kind of Asymmetric Encryption.
Asymmetric Encryption means, “Encrypt some data with a Public Key, then Decrypt the ciphertext with a Secret Key.” You don’t need to know, or even care, about how it works at the moment.
Your mental model for asymmetric encryption and decryption should look like this:
```
interface AsymmetricEncryption {
    encrypt(publicKey: CryptoKey, plaintext: Uint8Array);
    decrypt(secretKey: CryptoKey, ciphertext: Uint8Array);
}
```
As I’ve written previously, you never want to actually use asymmetric encryption directly.
Using asymmetric encryption safely means using it to exchange a key used for symmetric data encryption, like so:
```
// Alice sends an encrypted key to Bob
symmetricKey = randomBytes(32)
sendToBob = AsymmetricEncryption.encrypt(
    bobPublicKey,
    symmetricKey
)

// Bob decrypts the encrypted key from Alice
decrypted = AsymmetricEncryption.decrypt(
    bobSecretKey,
    sendToBob
)
assert(decrypted == symmetricKey) // true
```
You can then use symmetricKey to encrypt your actual messages and, unless something else goes wrong, only the other party can read them. Hooray!
And, ideally, this is where the story would end. Asymmetric encryption is cool. Don’t look at the scroll bar.
Unfortunately
The real world isn’t as nice as the idealized setting we just imagined.
We just kind of hand-waved that asymmetric encryption is a thing that happens, without further examination. It turns out, you have to examine further in order to be secure.
The most common asymmetric encryption algorithm deployed on the Internet as of February 2024 is called RSA. It involves Number Theory. You can learn all about it from other articles if you’re curious. I’m only going to describe the essential facts here.
Keep in mind, the primary motivation for KEMs comes from post-quantum cryptography, not RSA.
From Textbook RSA to Real World RSA
RSA is what we call a “trapdoor permutation”: It’s easy to compute encryption (one way), but decrypting is only easy if you have the correct secret key (the other way).
RSA operates on large blocks, related to the size of the public key. For example: 2048-bit RSA public keys operate on 2048-bit messages.
Encrypting with RSA involves exponents. The base of these exponents is your message. The outcome of the exponent operation is reduced, using the modulus operator, by the public key.
(The correct terminology is actually slightly different, but we’re aiming for intuition, not technical precision. Your public key is both the large modulus and exponent used for encryption. Don’t worry about it for the moment.)
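In symbols, textbook RSA looks like this, where (N, e) is the public key and d is the secret exponent:

```
Encrypt: c = m^e mod N
Decrypt: m = c^d mod N
```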
If you have a very short message, and a tiny exponent (say, 3), you don’t need the secret key to decrypt it. You can just take the cube-root of the ciphertext and recover the message!
That’s obviously not very good!
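To make that failure concrete, here’s a toy demonstration in JavaScript (the numbers are illustrative, not real RSA parameters):

```javascript
// Textbook RSA with e = 3 and a message so short that m^3 never
// wraps the modulus: the "ciphertext" is just m cubed as an integer.
const m = 42n;     // a short message
const c = m ** 3n; // encryption with no padding; m^3 < N, so "mod N" changes nothing

// Integer cube root via binary search -- no secret key required.
function cubeRoot(n) {
    let lo = 0n, hi = n;
    while (lo < hi) {
        const mid = (lo + hi) / 2n;
        if (mid ** 3n < n) lo = mid + 1n;
        else hi = mid;
    }
    return lo;
}

console.log(cubeRoot(c) === m); // true: message recovered from ciphertext alone
```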
To prevent this very trivial weakness, cryptographers proposed standardized padding schemes to ensure that the output of the exponentiation is always larger than the public key. (We say, “it must wrap the modulus”.)
The most common padding mode is called PKCS#1 v1.5 padding. Almost everything that implements RSA uses this padding scheme. It’s also been publicly known to be insecure since 1998.
The other padding mode, which you should be using (if you even use RSA at all), is called OAEP. However, even OAEP isn’t foolproof: if you don’t implement it in constant-time, your application will be vulnerable to a slightly different attack.
This Padding Stinks; Can We Dispense Of It?
It turns out, yes, we can. Safely, too!
We need to change our relationship with our asymmetric encryption primitive.
Instead of encrypting the secret we actually want to use, let’s just encrypt some very large random value.
Then we can use the result with a Key Derivation Function (which you can think of, for the moment, like a hash function) to derive a symmetric encryption key.
```
class OversimplifiedKEM {
    function encaps(pk: CryptoKey) {
        let N = pk.getModulus()
        let r = randomNumber(1, N-1)
        let c = AsymmetricEncryption.encrypt(pk, r)
        return [c, kdf(r)]
    }

    function decaps(sk: CryptoKey, c: Uint8Array) {
        let r2 = AsymmetricEncryption.decrypt(sk, c)
        return kdf(r2)
    }
}
```
In the pseudocode above, the actual asymmetric encryption primitive doesn’t involve any sort of padding mode. It’s textbook RSA, or equivalent.
KEMs are generally constructed somewhat like this, but they’re all different in their own special, nuanced ways. Some will look like what I sketched out; others will look subtly different. The critical insight is that KEMs are a construction on top of asymmetric encryption.
It’s just a slight departure from asymmetric encryption as most developers intuit it.
Cool, we’re almost there.
The one thing to keep in mind: While this transition from Asymmetric Encryption (also known as “Public Key Encryption”) to a Key Encapsulation Mechanism is easy to follow, the story isn’t as simple as “it lets you skip padding”. That’s an RSA specific implementation detail for this specific path into KEMs.
The main thing you get out of KEMs is called IND-CCA security, even when the underlying Public Key Encryption mechanism doesn’t offer that property.
IND-CCA security is a formal notion that basically means “protection against an attacker that can alter ciphertexts and study the system’s response, and then learn something useful from that response”.
IND-CCA is short for “indistinguishability under chosen ciphertext attack”. There are several levels of IND-CCA security (1, 2, and 3). Most modern systems aim for IND-CCA2.
Most people reading this don’t have to know or even care what this means; it will not be on the final exam. But cryptographers and their adversaries do care about this.
What Are You Feeding That Thing?
Deirdre’s blog post touched on a bunch of formal security properties for KEMs, which have names like X-BIND-K-PK or X-BIND-CT-PK.
Most of this deals with the question, “What exactly gets hashed in the KEM construction at the KDF step?” (although some properties can hold even without being included; it gets complicated).
For example, from the pseudocode in the previous section, it’s more secure to hash not only r, but also c and pk, and any other relevant transcript data.
```
class BetterKEM {
    function encaps(pk: CryptoKey) {
        let N = pk.getModulus()
        let r = randomNumber(1, N-1)
        let c = AsymmetricEncryption.encrypt(pk, r)
        return [c, kdf(pk, c, r)]
    }

    function decaps(sk: CryptoKey, c: Uint8Array) {
        let pk = sk.getPublicKey()
        let r2 = AsymmetricEncryption.decrypt(sk, c)
        return kdf(pk, c, r2)
    }
}
```
In this example, BetterKEM is significantly more secure than OversimplifiedKEM, for reasons that have nothing to do with the underlying asymmetric primitive. What it does better is commit more of its context into the KDF step, which means there’s less pliability given to attackers while still targeting the same KDF output.
If you think about KDFs like you do hash functions (which they’re usually built with), changing any of the values in the transcript will trigger the avalanche effect: The resulting calculation, which is not directly transmitted, is practically indistinguishable from random bytes. This is annoying to try to attack–even with collision attack strategies (birthday collision, Pollard’s rho, etc.).
However, if your hash function is very slow (e.g., SHA3-256), you might be worried about the extra computation and energy expenditure, especially if you’re working with larger keys: specifically, the size of keys you get from ML-KEM or other lattice-based cryptography.
That’s where X-Wing is actually very clever: It combines X25519 and ML-KEM-768 in such a way that binds the output to both keys without requiring the additional bytes of ML-KEM-768 ciphertext or public key.
From the X-Wing paper.
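Schematically, the X-Wing combiner derives the shared secret roughly like this; the exact label value and byte ordering are specified in the paper, so treat this sketch as purely illustrative of which inputs get bound:

```
ss = SHA3-256(
    XWingLabel
    || ML-KEM-768 shared secret
    || X25519 shared secret
    || X25519 ciphertext (an ephemeral public key)
    || X25519 public key
)
```

Note what’s absent: the ML-KEM-768 ciphertext and public key don’t need to be hashed in, because of ML-KEM’s own properties.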
However, it’s only secure to use it this way because of the specific properties of ML-KEM and X25519.
Some questions may come to mind:
- Does this additional performance hit actually matter, though?
- Would a general purpose KEM combiner be better than one that’s specially tailored to the primitives it uses?
- Is it secure to simply concatenate the output of multiple asymmetric operations to feed into a single KDF, or should a more specialized KDF be defined for this purpose?
Well, that’s exactly what the CFRG is debating!
Closing Thoughts
KEMs aren’t nearly as arcane or unapproachable as you may suspect. You don’t really even need to know any of the mathematics to understand them, though it certainly does help.
I hope that others find this useful.
Header art by Harubaki and AJ.
https://soatok.blog/2024/02/26/kem-trails-understanding-key-encapsulation-mechanisms/
#asymmetricCryptography #cryptography #KDF #KEM #keyEncapsulationMechanism #postQuantumCryptography #RSA
Did you know that, in the Furry Fandom, the most popular choice in species for one’s fursona is actually a hybrid?
Source: FurScience
Of course, we’re not talking about that kind of hybrid today. I just thought it was an amusing coincidence.
Art: Lynx vs Jackalope
Nor are we talking about what comes to mind for engineers accustomed to classical cryptography when you say Hybrid.
(Such engineers typically envision some combination of asymmetric key encapsulation with symmetric encryption; because too many people encrypt with RSA directly and the sane approach is often described as a Hybrid Cryptosystem in the literature.)
Rather, Hybrid Cryptography in today’s context refers to:
Cryptography systems that use a post-quantum cryptography algorithm, combined with one of the algorithms deployed today that aren’t resistant to quantum computers.
If you need to differentiate the two, PQ-Hybrid might be a better designation.
Why Hybrid Cryptosystems?
At some point in the future (years or decades from now), humanity may build a practical quantum computer. This will be a complete disaster for all of the cryptography deployed on the Internet today.
In response to this distant existential threat, cryptographers have been hard at work designing and attacking algorithms that remain secure even when quantum computers arrive. These algorithms are classified as post-quantum cryptography (mostly to distinguish it from techniques that use quantum computers to facilitate cryptography rather than attack it, which is “quantum cryptography” and not really worth our time talking about). Post-quantum cryptography is often abbreviated as “PQ Crypto” or “PQC”.
However, a lot of the post-quantum cryptography designs are relatively new or comparatively less studied than their classical (pre-quantum) counterparts. Several of the Round 1 candidates to NIST’s post-quantum cryptography project were broken immediately (PDF). The exploit code referenced in the PDF is duplicated below:
```python
#!/usr/bin/env python3
import binascii, struct

def recover_bit(ct, bit):
    assert bit < len(ct) // 4000
    ts = [struct.unpack('BB', ct[i:i+2]) for i in range(4000*bit, 4000*(bit+1), 2)]
    xs, ys = [a for a, b in ts if b == 1], [a for a, b in ts if b == 2]
    return sum(xs) / len(xs) >= sum(ys) / len(ys)

def decrypt(ct):
    res = sum(recover_bit(ct, b) << b for b in range(len(ct) // 4000))
    return int.to_bytes(res, len(ct) // 4000 // 8, 'little')

kat = 0
for l in open('KAT_GuessAgain/GuessAgainEncryptKAT_2000.rsp'):
    if l.startswith('msg = '):
        # only used for verifying the recovered plaintext.
        msg = binascii.unhexlify(l[len('msg = '):].strip())
    elif l.startswith('c = '):
        ct = binascii.unhexlify(l[len('c = '):].strip())
        print('{}attacking known-answer test #{}'.format('\n' * (kat > 0), kat))
        print('correct plaintext:   {}'.format(binascii.hexlify(msg).decode()))
        plain = decrypt(ct)
        print('recovered plaintext: {} ({})'.format(binascii.hexlify(plain).decode(), plain == msg))
        kat += 1
```
More pertinent to our discussions: Rainbow, which was one of the Round 3 Finalists for post-quantum digital signature algorithms, was discovered in 2020 to be much easier to attack than previously thought. Specifically, for the third round parameters, the attack cost was reduced by a significant factor for each of the three parameter sets.
That security reduction is a tad more concerning than a Round 1 candidate being totally broken, since NIST had concluded, up until that attack was discovered, that Rainbow was a good signature algorithm. Maybe there are similar attacks just waiting to be found?
Given that new cryptography is accompanied by less confidence than incumbent cryptography, hybrid designs are an excellent way to mitigate the risk of attack advancements in post-quantum cryptography:
If the security of your system requires breaking the cryptography used today AND breaking one of the new-fangled designs, you’ll always be at least as secure as the stronger algorithm.
Art: Lynx vs Jackalope
Why Is Hybrid Cryptography Controversial?
Despite the risks of greenfield cryptographic algorithms, the NSA has begun recommending a strictly-PQ approach to cryptography and has explicitly stated that it will not require hybrid designs.
Another pushback on hybrid cryptography comes from Uri Blumenthal of MIT’s Lincoln Labs on the IETF CFRG mailing list (the acronym CRQC expands to “Cryptographically-Relevant Quantum Computer”):
Here are the possibilities and their relation to the usefulness of the Hybrid approach.
1. CRQC arrived, Classic hold against classic attacks, PQ algorithms hold – Hybrid is useless.
2. CRQC arrived, Classic hold against classic attacks, PQ algorithms fail – Hybrid is useless.
3. CRQC arrived, Classic broken against classic attacks, PQ algorithms hold – Hybrid is useless.
4. CRQC arrived, Classic hold against classic attacks, PQ algorithms broken – Hybrid useless.
5. CRQC doesn’t arrive, Classic hold against classic attacks, PQ algorithms hold – Hybrid is useless.
6. CRQC doesn’t arrive, Classic hold against classic attacks, PQ algorithms broken – Hybrid helps.
7. CRQC doesn’t arrive, Classic broken against classic attacks, PQ algorithms hold – Hybrid is useless.
8. CRQC doesn’t arrive, Classic broken against classic attacks, PQ algorithms broken – Hybrid is useless.
Uri Blumenthal, IETF CFRG mailing list, December 2021 (link)
Why Hybrid Is Actually A Damn Good Idea
Art: Scruff Kerfluff
Uri’s risk analysis is, of course, flawed. And I’m not the first to disagree with him.
First, Uri’s framing sort of implies that each of the 8 possible outcomes of these 3 boolean variables is roughly equally likely.
It’s very tempting to look at this and think, “Wow, that’s a lot of work for something that only helps in 12.5% of possible outcomes!” Uri didn’t explicitly state this assumption, and he might not even believe that, but it is a cognitive trap that emerges in the structure of his argument, so watch your step.
Second, for many candidate algorithms, we’re already in scenario 6 that Uri outlined! It’s not some hypothetical future, it’s the present state of affairs.
To wit: The advances in cryptanalysis on Rainbow don’t totally break it in a practical sense, but they do reduce the security by a devastating margin (which will require significantly larger parameter sets and performance penalties to remedy).
For many post-quantum algorithms, we’re still uncertain about which scenario is most relevant. But since PQ algorithms are being successfully attacked and a quantum computer still hasn’t arrived, and classical algorithms are still holding up fine, it’s very clear that “hybrid helps” is the world we most likely inhabit today, and likely will for many years (until the existence of quantum computers is finally settled).
Finally, even in other scenarios (which are more relevant for other post-quantum algorithms), hybrid doesn’t significantly hurt security. It does carry a minor cost to bandwidth and performance, and it does mean having a larger codebase to review when compared with jettisoning the algorithms we use today, but I’d argue that the existing code is relatively low risk compared to new code.
From what I’ve read, the NSA didn’t make as strong an argument as Uri; they said hybrid would not be required, but didn’t go so far as to attack it.
Hybrid cryptography is a long-term bet that will protect the most users from cryptanalytic advancements, contrasted with strictly-PQ and no-PQ approaches.
Why The Hybrid Controversy Remains Unsettled
Even if we can all agree that hybrid is the way to go, there’s still significant disagreement on exactly how to do it.
Hybrid KEMs
There are two schools of thought on hybrid Key Encapsulation Mechanisms (KEMs):
- Wrap the post-quantum KEM in the encrypted channel created by the classical KEM.
- Use both the post-quantum KEM and classical KEM as inputs to a secure KDF, then use a single encrypted channel secured by both.
The first option (layered) has the benefit of making migrations smoother. You can begin with classical cryptography (e.g. ECDHE for TLS ciphersuites), which is what most systems online support today. Then you can do your post-quantum cryptography inside the existing channel to create a post-quantum-secure channel. This also lends itself to opportunistic upgrades (which might not be a good idea).
The second option (composite) has the benefit of making the security of your protocol all-or-nothing: You cannot attack the weak part now and the strong part later. The session keys you derive require attacking both algorithms in order to get access to the plaintext. Additionally, you only need a single layer: the complexity lives entirely within the handshake, instead of every packet.
Personally, I think composite is a better option for security than layered.
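As an illustration of the composite approach, here’s a rough sketch in the same pseudocode style used later in this post (the function names are hypothetical, not taken from any standard):

```
// Run both KEMs independently
(classical_ss, classical_ct) := classical_kem_encaps(their_classical_pk)
(pq_ss, pq_ct) := pq_kem_encaps(their_pq_pk)

// One KDF binds both shared secrets (and, conservatively, both
// ciphertexts) into a single session key
session_key := hkdf_sha512(
    ikm  := classical_ss || pq_ss,
    info := "example-hybrid-v1" || classical_ct || pq_ct
)
```

An attacker who breaks only one of the two KEMs still can’t compute session_key without the other shared secret.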
Hybrid Signatures
There are, additionally, two different schools of thought on hybrid digital signature algorithms. However, the difference is more subtle than with KEMs.
- Require separate classical signatures and post-quantum signatures.
- Specify a composite mode that combines the two together and treat it as a distinct algorithm.
To better illustrate what this looks like, I outlined what a composite hybrid digital signature algorithm could look like on the CFRG mailing list:
```
primary_seed := randombytes_buf(64) // store this

ed25519_seed := hash_sha512256(PREFIX_CLASSICAL || primary_seed)
pq_seed := hash_sha512256(PREFIX_POSTQUANTUM || primary_seed)

ed25519_keypair := crypto_sign_seed_keypair(ed25519_seed)
pq_keypair := pqcrypto_sign_seed_keypair(pq_seed)
```
Your composite public key would be your Ed25519 public key, followed by your post-quantum public key. Since Ed25519 public keys are always 32 bytes, this is easy to implement securely.
Every composite signature would be an Ed25519 signature concatenated with the post-quantum signature. Since Ed25519 signatures are always 64 bytes, this leads to a predictable signature size relative to the post-quantum signature.
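Continuing that sketch, signing and verifying with the composite scheme might look like this (again, the pqcrypto_* names are hypothetical placeholders):

```
composite_pk := ed25519_keypair.pk || pq_keypair.pk   // 32 bytes, then the PQ key

// Sign: both algorithms sign the same message
composite_sig := crypto_sign_detached(message, ed25519_keypair.sk)
    || pqcrypto_sign_detached(message, pq_keypair.sk)

// Verify: split at the fixed Ed25519 sizes; BOTH checks must pass
valid := crypto_sign_verify_detached(composite_sig[0:64], message, composite_pk[0:32])
     AND pqcrypto_sign_verify_detached(composite_sig[64:], message, composite_pk[32:])
```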
The main motivation for preferring a composite hybrid signature over a detached hybrid signature is to push the hybridization of cryptography lower in the stack so developers don’t have to think about these details. They just select HYBRIDSIG1 or HYBRIDSIG2 in their ciphersuite configuration, and cryptographers get to decide what that means.
TL;DR
Hybrid designs of post-quantum crypto are good, and I think composite hybrid designs make the most sense for both KEMs and signatures.
https://soatok.blog/2022/01/27/the-controversy-surrounding-hybrid-cryptography/
#asymmetricCryptography #classicalCryptography #cryptography #digitalSignatureAlgorithm #hybridCryptography #hybridDesigns #KEM #keyEncapsulationMechanism #NISTPostQuantumCryptographyProject #NISTPQC #postQuantumCryptography
Last year I wrote a grab-bag post titled, Don’t Forget To Brush Your Fur, because I’m terrible at SEO or making content easily discoverable.
In the same vein as that previous example, this is going to be in the style of Lightning Round talks at technology conferences.
Why are we doing this again?
I maintain a running list of things to write about, and cross ideas off whenever I cover a topic.
After a few months of doing this, I realize most of what remains is kinda interesting but not quite interesting enough to warrant a dedicated entry.
It always needs more isogenies.
(Art by Lynx vs Jackalope)
Contents
- Asymmetric Key Wear-Out
- HMAC Wear-Out?
- Asymmetric Commitments
- Against “Fluffies”
- A Meditation on Furries and Cringe
- Furries and Blue State Privilege
Asymmetric Key Wear-Out
Last year, I wrote about cryptographic wear-out for symmetric encryption. That post has attracted quite a bit of feedback from folks requesting comparisons against other block cipher modes, etc. One topic that I didn’t see requested much, but is equally interesting, is how this reasoning can be applied to asymmetric cryptography (if at all).
Let’s get one thing clear: Cryptography keys don’t “wear out” in the same sense as a physical key might. What we’re talking about is an ever-increasing risk of a collision occurring in random nonces.
ECDSA Key Wear-Out
ECDSA signatures involve a one-time secret, k. The scalar multiplication of k and the base point for the curve is encoded as half of the signature (r), while its modular inverse is multiplied by the sum of the truncated message hash and the product of r and the secret key to produce the other half of the signature (s).
If your selection of k is biased, or k is ever reused for two different messages, you can leak the secret key.
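To see why reuse is fatal: given two signatures (r, s1) and (r, s2) over message hashes z1 and z2 that share the same k, anyone can solve for k, and then for the secret key d, with a little algebra (all arithmetic mod the curve order n):

```
s1 - s2 = k^-1 (z1 + r*d) - k^-1 (z2 + r*d) = k^-1 (z1 - z2)
      k = (z1 - z2) / (s1 - s2)
      d = (s1 * k - z1) / r
```

(This is exactly how the PlayStation 3’s signing key was extracted.)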
Strictly speaking, for any given ECDSA curve, there is only one k value that corresponds to a given r for all users (n.b. it’s not distinct per keypair).
This means that all users of e.g. ECDSA over NIST P-256 have to worry about a shared cryptographic wear-out: After 2^112 signatures, there is a 2^-32 chance of a single collision occurring.
Fortunately, the search space of possible k-values is enormous, and this will not impose a real-world operational risk in the near future. If you’re worried about multi-user attacks, P-384 gives you a wear-out threshold of 2^176 messages, which we’re probably never going to achieve.
RSA Key Wear-Out
In order to calculate the wear-out for an RSA message, you first have to begin with an attack model. Previously, we were looking at algorithms that would become brittle if a nonce was reused.
RSA doesn’t have nonces. You can’t attack RSA this way.
But let’s assume that such an attack did exist. What might the safety limit look like? There are two remaining possible considerations for RSA’s security against cryptographic wear-out: Key size and padding mode.
RSA private keys are two prime numbers (p, q). RSA public keys are the product of the two primes (n) and a public exponent (e) that must be coprime to (p-1)(q-1). (In practice, e is usually set to 3, 65537, or some other small prime.)
The security of RSA is subexponential in key size, based on the difficulty of integer factoring attacks and the requirement for p and q to be prime numbers.
This primeness restriction doesn’t apply to your message. The padding mode dictates your upper limit on message size; e.g., PKCS#1 v1.5 padding will take up at least 3 bytes:
- For encryption, x = 0x00 || 0x02 || r || 0x00 || m, where r is random padding bytes (minimum 8 bytes).
- For signatures, x = 0x00 || 0x01 || 0xFF..FF || 0x00 || m.
- In either case, the padding is always at least 11 bytes long.
So if you have 2048-bit RSA keys, you can encrypt or sign up to 245 bytes (1960 bits) with PKCS#1 v1.5 padding. This corresponds to a safety limit of 2^974 messages.
(Art by Lynx vs Jackalope)
HMAC Wear-Out?
To keep things simple, the security of HMAC can be reduced to the collision risk of the underlying hash function.
If you’re trying to estimate when to rotate symmetric keys used for HMAC, take the birthday bound of the underlying hash function as your starting point.
- For SHA-256, you have a 50% chance of a collision after 2^128 messages. For a 2^-32 chance, you can get 2^112 messages out of a single key.
- For SHA-384, this is 2^176 messages.
- For SHA-512, this is 2^240 messages.
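These numbers fall out of the standard birthday-bound approximation. For example, with SHA-256:

```
Pr[collision] ≈ q^2 / 2^(n+1)    for q messages and an n-bit hash

Target Pr = 2^-32 with n = 256:
    q^2 ≈ 2^257 * 2^-32 = 2^225
    q   ≈ 2^112.5, i.e., roughly 2^112 messages
```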
In any case, however, these numbers might as well be infinity.
With apologies to Filippo Valsorda.
Asymmetric Commitments
Did you know that fast MACs such as GHASH and Poly1305 aren’t random-key robust? This property can matter in surprising ways.
Did you know that ECDSA and RSA don’t qualify for this property either? This is related to the topics of malleability and exclusive ownership. You can learn more about this in the CryptoGotchas page.
Essentially, if a signature scheme is malleable or fails to provide exclusive ownership, it’s possible to construct two arbitrary (m, pk) pairs that produce the same signature.
Any nonmalleable signature scheme with exclusive ownership (i.e. Ed25519 with low-order point rejection and canonical signature checks, as provided by the latest version of libsodium) provides sufficient commitment–mostly due to how it uses a collision-resistant cryptographic hash function. (It’s also worth noting: HashEdDSA doesn’t. Isn’t cryptography fun?)
Generally, if you need random-key robustness, you want to explicitly make it part of your design.
Against “Fluffies”
In my blog post about the neverending wheel of Furry Twitter discourse, I mentioned the controversy around SFW spaces for underage furries.
Everything I said in that post is still accurate (go read it if you haven’t), but I want to emphasize something that maybe some people overlooked.
https://twitter.com/SoatokDhole/status/1426638694786682884
Underage furries calling themselves “fluffies” is a bad idea, for two reasons.
Divide and Conquer
The first reason is tactical, and not specific to what they’re calling themselves: If you label yourselves separately from the larger furry community, you make it much easier to be targeted–especially by propaganda. There’s a severely disturbed alt-right fringe to the furry fandom (dubbed alt-furry, the Furry Raiders, and so many other names) that would love nothing more than to sink their claws into younger furs.
It’ll start innocently enough (“Yay, you have your own space!”), but it will quickly accelerate (“Congrats on kicking those degenerates to the curb!”) to horrible places (“All LGBTQIA+ people are degenerates”), gliding on the wings of edgy humor.
This descent into madness is also known as the PewDiePipeline and all parents of furries should be made aware of it, lest it happen to their child:
https://www.youtube.com/watch?v=pnmRYRRDbuw
It bears emphasizing: The existence of a PewDiePipeline within the “fluffy” space is not predicated on the intentions of the proponents. They can have all the best intentions in the world and it will still happen to their microcosm.
https://twitter.com/ARCADEGUTS/status/1425687280983937027
Preventing this from happening will require an almost inhuman degree of vigilance and dedication to correcting discourse from going sour. None of us are omniscient, so I wouldn’t take that bet.
Pre-Existing Terminology
The second reason the “fluffies” label is a bad idea is more specific to the word “fluffies” in particular: It already refers to a very disturbing meme on 4chan from not-very-many years ago: Fluffy Abuse Threads.
I’m intentionally not including any videos or images of this topic. There just aren’t enough content warnings for how gross this content is.
By calling yourselves “fluffies”, the most deranged 4chan-dwellers and/or Kiwi Farms lurkers on the Internet will begin to associate you with the “fluffy abuse” memes, and may even act accordingly. In their twisted minds, they may even rationalize their conduct as if somehow you’re consenting to the abuse, by virtue of what you call yourselves.
Look, I get it: When you’re young, the over-sexualization of the media can be very uncomfortable, and it’s natural to want to avoid it. Additionally, it’s only human to want your own special club with a special name to hang out with your exclusive (n.b. same-age) community.
But please think carefully about what you’re doing, how you’re doing it, and which adults you decide to trust.
Also: maybe talk to older queers and/or furries about the history of the Furry Fandom, Pride, and kink before you make dangerous moves that make you more vulnerable to the worst humanity has to offer? Even if you don’t agree with us, we don’t want to see you get hurt.
There definitely is room in the furry fandom for people who are not comfortable with sexual content, or simply don’t want to be inundated with it all the damn time. It doesn’t need to be an exclusive thing or concept; instead, it should be normalized.
Ultimately, there’s probably a lot of work to be done to ensure kids and families have a safe and enjoyable furry con experience during daylight hours without repressing the queer and sexual identities of consenting adults at night. The best way to get from here to there is to talk, not to isolate.
Otherwise, we’ll keep seeing occurrences like this:
https://twitter.com/PrincelyKaden/status/1426192114694692866
The onus here is going to be largely on furry convention staff and chatroom moderators to actually listen to people reporting abusive behavior. They haven’t always been good about that, and it’s time for change.
https://twitter.com/MegaplexCon/status/1425966589241970693
A Meditation on Furries and Cringe
Every once in a while, I get a comment or email like this one:
https://twitter.com/SoatokDhole/status/1360835077899436033
The biggest magnet for poorly-reasoned hate comments is, surprisingly, my tear-down of the “sigma male” meme.
You’d think the exposure of TheDonald’s non-CloudFlare IP address would draw more ire than having correct opinions on masculinity, but here we are.
Art by Sophie
Let’s talk about masculinity for a moment, guys.
There is nothing manlier than being your authentic self. Even if that means liking some “girly” things. Even if that means being soft and vulnerable at times. Even if that means actually conforming to some stereotypes perpetuated by toxic masculinity when it coincides with your likes and interests. You do you.
But this isn’t just true of the male gender. Authenticity is the epitome of humanity. There’s nothing that stops women and enbies from being ruthlessly themselves.
You can’t be authentic when you’re participating in Cringe Culture, which blindly tears large swaths of people down to stoke the feelings of superiority in the people who evade its blast.
People are weird. I’m weird. I don’t expect everyone to like me, nor do I want them to. (Parasocial relationships suck!)
It’s okay to be a little obsessed about something other people look down on just because you happen to like it. Just make sure you’re not eschewing your adult responsibilities. (We all have bills to pay and promises to keep to the people that matter to us.)
If people don’t like you because you suddenly revealed your fondness for classic video games, rock-tumbling, or linear algebra? Fuck ’em. May the bridges you burn light the way to people who will appreciate you for who you truly are.
I’ve been told my blog is “weapons grade cringe” before, because I dared talk about encryption while having what, to most adults, comes across as little more than a cartoon brand or company mascot.
(Art by Lynx vs Jackalope)
Furries and Blue State Privilege
I sympathize with most queer people and/or furries for not wanting to subject themselves to the bigotry that runs rampant in Red States, but the ones who are jerks to other members of their community for living in those states, I can do without.
https://twitter.com/SarahcatFursuit/status/1413566747148435456
Being an asshole to someone because they live in, or are moving to, a state whose politics you dislike is equal parts stupid, selfish, and self-defeating:
- It’s stupid because there’s no reason for expressing prejudice or painting with broad brushes. For example: “Florida Furs are bad people” is an attack on the author of this blog.
- It’s selfish because not everyone who wants to leave these states has the resources or opportunity to do so, so all you’re doing is shining a spotlight on your own privilege. Way to show your entire ass to the community.
- It’s self-defeating because of the way the U.S. political system is architected:
If you wished for a genie to move every LGBTQIA+ person to the west coast of the United States, within a few years you’d essentially reduce support for LGBTQIA+ rights to approximately 6 out of 100 votes in the US Senate and 68 out of 435 in the House of Representatives.
When you factor in who owns the land in the big tech cities (San Francisco, Seattle, etc.) and how much political and economic power they wield, it becomes very clear that your shaming of others for not boarding the bandwagon serves the interests of the worst of humanity: landlords and venture capitalists.
Not a good move for people who claim to be progressive, and want to achieve progressive political outcomes nationwide.
The fact that some states have horrendous laws on the books, even worse bastards enforcing these laws, and somehow even more terrible politicians gatekeeping any meaningful progress from changing the system isn’t ever going to be improved from the outside.
I say all this, and I acknowledge Florida does suck in a lot of obvious ways: Our governor (Ron DeSantis) has a disposition that would actually be improved if he wore clown make-up to press appearances. We also have far too many furries that are anti-maskers, anti-vaxxers, or both.
https://twitter.com/SoatokDhole/status/1300911840000708608
But when furries go out of their way to shame someone, simply for living here? You’re not helping. Seriously stop and think about your priorities.
And maybe–just maybe–be surgically precise when you decide insults are warranted.
Now that I’ve flushed the blog post topic buffer, I’m fresh out of ideas. Let me know some topics that interest you in my Telegram group so I don’t get bored and eventually write Buzzfeed-quality crap like this:
In hindsight, ideas like this are 90% of the reason Cringe Culture refuses to die.
https://soatok.blog/2021/08/16/lightning-round/
#asymmetricCryptography #ECDSA #Florida #furries #FurryFandom #HMAC #Politics #RSA #Society #wearOut
There are a lot of random topics I’ve wanted to write about since I started Dhole Moments, and for one reason or another, haven’t actually written about. I know from past experience with other projects that if you don’t occasionally do some housekeeping, your backlog eventually collapses under its own gravity and you can never escape from it.
So, to prevent that, I’d like to periodically take some time to clean up some of those loose ends that collect over time.
Random-Access AEAD
AEAD stands for Authenticated Encryption with Associated Data. Typically, AEAD constructions involve a stream cipher (which may also be a block cipher in counter mode) and a message authentication code (which may also be an almost-universal hash function).
AEAD modes are designed for one-shot APIs: Encrypt (then authenticate) all at once; (verify then) decrypt all at once. AES-GCM, ChaPoly, etc.
AEADs are less great at providing random access to the underlying plaintext. For example: If you’re encrypting a 240 GB file with AES-GCM, but you only need a 512 KB chunk at some arbitrary point in the file, you’re forced to choose between either:
- Authenticating the rest of the AES-GCM ciphertext, then decrypting only the relevant chunk. (Performance sucks.)
- Sacrificing integrity and decrypting the desired chunk with AES-CTR.
Being forced to choose between speed and security will almost certainly result in a loss of security. The incentives of software developers (especially with fly-by-night startup engineers) all-but-guarantee this outcome.
Consequently, there have been several implementations of streaming-friendly AEAD. The most famous of which is Phil Rogaway’s STREAM construction.
Source: Rogaway’s paper
The downside to STREAM is that it requires an additional T bytes (e.g. 16 for a 128-bit authentication tag) for each chunk of the plaintext.
A similar solution, as implemented in the AWS Encryption SDK, is to carefully separate plaintexts into equal-sized frames and have special rules governing IV/nonce selection. This lets you facilitate random access while still making the security of the whole system easy to reason about.
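For intuition, here’s a rough sketch of STREAM-style chunk encryption with sodium-plus; the nonce layout and helper function are my own illustration, not Rogaway’s exact construction or the AWS SDK’s framing rules:

```javascript
// Assumes: `sodium` is a SodiumPlus instance, `key` is a CryptographyKey,
// and `noncePrefix` is 19 random bytes generated once per message.
async function encryptChunk(sodium, key, noncePrefix, counter, isFinal, chunk) {
    // The 24-byte XChaCha20 nonce encodes a per-message random prefix,
    // the chunk's position, and a "final chunk" flag.
    const nonce = Buffer.alloc(24);
    noncePrefix.copy(nonce, 0);             // fixed per message
    nonce.writeUInt32BE(counter, 19);       // chunk index: prevents reordering
    nonce.writeUInt8(isFinal ? 1 : 0, 23);  // final flag: prevents truncation
    return sodium.crypto_aead_xchacha20poly1305_ietf_encrypt(chunk, nonce, key);
}
```

Each chunk carries its own authentication tag, so you can verify and decrypt an arbitrary chunk without touching the rest of the ciphertext.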
Can we do better than STREAM and message framing?
The most straightforward idea is to use a Merkle tree on the ciphertext with a stream cipher for extracting a distinct key for each leaf node. This can be applied to existing AEAD ciphertexts, out of band, to create a sort of deep authentication tag that can be used to authenticate any random subset of the message (provided you have the correct nonce/key).
However, I haven’t found the time to develop this idea into something that can be toyed with by myself and other researchers.
More Introductory Articles
Let’s face it:
Art by Riley
I’ve previously suggested an alternative strategy for programmers to learn cryptography. I’d like to do more posts covering introductory material for the topics I’m familiar with, so anyone who wants to actually employ my proposed strategy can carry themselves across the finish line.
Dissecting Dog-Whistles
Random fact: My fursona is a dhole–also known as a whistling dog.
Soatok is a dhole, not a fox. Art by Khia.
Coincidentally, I’m deeply fascinated by language, and planned to start a series analyzing dog-whistle language (especially the kind commonly used against queer subcultures).
However, the very nature of dog-whistle language provides a veneer of plausible deniability for the whistler’s intent, which makes it very difficult to address them in a meaningful way that doesn’t undermine your own credibility.
So, for the time being, this is on the back-burner.
Reader Questions
I’ve received quite a few questions via email and social media since I started this blog in April. The most obvious thing to do with these questions would be to periodically collate a bunch of them into a Questions and Answers style post.
However! I have an open source project called FAQ Off that is way more efficient at the Q&A format than a long-form blog post. If you’d like to see it in action, start here.
Art by Kyume
General Punditry
I make a lot of dumb jokes, typically involving puns and other wordplay. Most of these live in private Telegram conversations with other furries, but a few have leaked out onto Twitter over the years.
Is automated vulnerability scanning a nessusity?
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) November 23, 2017
Nurse: "I suspect this patient attempted to shove a foreign object into their urethra for pleasure"
Doctor: "I believe your theory is sound"
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) June 25, 2018
A lot of them involve queer lingo.
People say it's lonely at the top.
No wonder there's so many bottoms in this fandom 😛
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) December 30, 2019
BitTorrent users are thirsty bottoms. Always complaining about wanting more seed.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) August 4, 2019
My RAID controller has big disk synergy
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) August 1, 2018
Some of them involve furry in-jokes.
Q) Why are foxes so prevalent in the furry fandom?
A) We're a sub-culture not a dom-culture.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) May 24, 2018
Intrusion detection systems are old hat. What we need is a protrusion detection system.
Introducing OwO
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) February 1, 2018
Some are just silly.
Using mined bitcoins to buy a pumpkin spice latte makes you an ASIC bitch, right?
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) February 6, 2018
So in gay male furry culture if you give into a booty call from your ex-boyfriend… does that mean you were craving the XD?
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) August 25, 2018
If SQL is pronounced "sequel" then PHP must be pronounced "fap".
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) July 23, 2019
What do you call a submissive dragon with a mathematics background who's already lubed up for you?
A sliding scale.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) December 26, 2018
Did you hear about the clairvoyant babyfur that broke RSA?
Turns out, all you needed was a padding oracle.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) October 1, 2017
I should look for my next partner in a nuclear chemistry lab.
I hear they're good at dating.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) December 8, 2016
In my humble opinion, there haven’t been nearly enough puns on this blog (unless the embedded tweets above count).
Normally, this is where I’d proclaim, “I shall rectify this mistake” and proceed to make an ass out of myself, but I don’t like forced and obvious puns.
A lot of furries get this wrong: “Pawesome” is not clever, unless you’re talking to someone with a marsupial fursona. Then maybe.
The best puns come in two forms: They’re either so clever that you never saw it coming, or they’re just clever enough that the punchline lands at the same time you realized a bad pun was even possible.
Only Soatok brand puns are 100% whole groan
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) January 26, 2018
Miscellaneous / Meta
The past few blog posts touched a little on political subjects (especially How and Why America Was Hit So Hard By COVID-19, but this short-term trend actually started with my Pride Month post).
At some point in the future, I may write a post dedicated to politics, but for the time being, it’s not really a subject I care enough about in and of itself to emphasize all the time.
Let me be clear: Being gay in America is inherently political. Developing technology is inherently political (although you don’t always realize it). Being a gay technologist, saying something politically significant is an inevitability.
But I’m not interested in the traditional roles and narratives that infect politics and political discourse. Labels are stupid and I’m not interested in being a Useful Idiot for anyone’s propaganda.
The most difficult thing about writing blog posts for me is coming up with a meaningful title. I’ve lost many hours due to the writer’s block that ensues.
The second most difficult thing for me is writing closing statements that aren’t totally redundant.
https://www.youtube.com/embed/l44OV2jlN7A?start=665&feature=oembed
George Carlin – “Count the Superfluous Redundant Pleonastic Tautologies” – Skip to 11:05 if WordPress breaks something
Some bloggers like to sign off like they’re writing an email. “Happy hacking!” and whatnot. To me, this feels forced and inauthentic, like a bad pun.
So instead, here’s a totally sick piece of art I got from @MrJimmyDaFloof.
Furry artists are, like the rest of the fandom, amazing.
https://soatok.blog/2020/07/07/dont-forget-to-brush-your-fur/
Let me state up front that, while we’re going to be talking about an open source project that was recently submitted to Hacker News’s “Show HN” section, the intent of this post is not at all to shame the developer who tried their damnedest to do the right thing. They’re the victim, not the culprit.
RSA, Ya Don’t Say
Earlier this week, an HN user shared their open source fork of a Facebook messenger client, with added encryption. Their motivation was, as stated in the readme:
It is known that Facebook scans your messages. If you need to keep using Facebook messenger but care about privacy, Zuccnet might help.
It’s pretty simple: you and your friend have Zuccnet installed. Your friend gives you their Zuccnet public key. Then, when you send a message to your friend on Zuccnet, your message is encrypted on your machine before it is sent across Facebook to your friend. Then, your friend’s Zuccnet decrypts the message. Facebook never sees the content of your message.
I’m not a security person and there’s probably some stuff I’ve missed – any contributions are very welcome! This is very beta, don’t take it too seriously.
From Zuccnet’s very humble README.
So far, so good. Facebook is abysmal for privacy, so trying to take matters into your own hands to encrypt data so Facebook can’t see what you’re talking about is, in spirit, a wonderful idea.
(Art by Khia.)
However, there is a problem with the execution of this idea. And this isn’t a problem unique to Zuccnet. Several times per year, I come across some well-meaning software project that makes the same mistake: Encrypting messages with RSA directly is bad.
From the Zuccnet source code:
```javascript
const encryptMessage = (message, recipientPublicKey) => {
  const encryptedMessage = crypto.publicEncrypt(
    {
      key: recipientPublicKey,
      padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
      oaepHash: "sha256",
    },
    Buffer.from(message),
  );
  return encryptedMessage.toString("base64");
};

/**
 *
 * @param {String} encryptedMessage - base64 encoded string
 */
const decryptMessage = encryptedMessage => {
  const encryptedMessageBuffer = Buffer.from(encryptedMessage, "base64");
  const { privateKey } = getOrCreateZuccnetKeyPair();
  const message = crypto.privateDecrypt(
    {
      key: privateKey,
      padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
      oaepHash: "sha256",
    },
    Buffer.from(encryptedMessageBuffer),
  );
};
```
To the Zuccnet author’s credit, they’re using OAEP padding, not PKCS#1 v1.5 padding. This means their code isn’t vulnerable to Bleichenbacher’s 1998 padding oracle attack (n.b. most of the RSA code I encounter in the wild is vulnerable to this attack).
However, there are other problems with this code:
- If you try to encrypt a message longer than 190 bytes with a 2048-bit RSA public key and OAEP with SHA-256, it will fail. (Bytes matter here, not characters, even for English speakers–because emoji.)
- This design (encrypting with a static RSA public key per recipient) completely lacks forward secrecy. This is the same reason that PGP encryption sucks (or, at least, one of the reasons PGP sucks).
There are many ways to work around the first limitation.
Some cryptography libraries let you treat RSA as a block cipher in ECB mode and encrypt each chunk independently. This is an incredibly stupid API design choice: It’s slow (asymmetric cryptography operations are on the order of tens-to-hundreds-of-thousands times slower than symmetric cryptography) and you can drop/reorder/replay blocks, since ECB mode provides no semantic security.
I have strong opinions about cryptographic library design.
(Art by Swizz.)
A much better strategy is to encrypt the data with a symmetric key, then encrypt that key with RSA. (See the end of the post for special treatment options that are especially helpful for RSA with PKCS#1 v1.5 padding.)
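As a sketch of that strategy using Node’s built-in crypto module (the function and field names here are mine, for illustration; this still lacks forward secrecy):

```javascript
const crypto = require('crypto');

// Hybrid encryption sketch: a fresh symmetric key per message,
// wrapped with the recipient's RSA public key using OAEP.
function hybridEncrypt(message, recipientPublicKey) {
    const symmetricKey = crypto.randomBytes(32);
    const nonce = crypto.randomBytes(12);

    // Encrypt the actual message symmetrically (AES-256-GCM)
    const cipher = crypto.createCipheriv('aes-256-gcm', symmetricKey, nonce);
    const ciphertext = Buffer.concat([cipher.update(message, 'utf8'), cipher.final()]);
    const tag = cipher.getAuthTag();

    // Encrypt only the 32-byte key with RSA-OAEP; it always fits
    const encryptedKey = crypto.publicEncrypt(
        {
            key: recipientPublicKey,
            padding: crypto.constants.RSA_PKCS1_OAEP_PADDING,
            oaepHash: 'sha256',
        },
        symmetricKey
    );

    // The recipient RSA-decrypts encryptedKey, then AES-GCM-decrypts the rest.
    return { encryptedKey, nonce, ciphertext, tag };
}
```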
Working around the second problem usually requires an Authenticated Key Exchange (AKE), similar to what I covered in my Guide to End-to-End Encryption. Working around this second problem also solves the first problem, so it’s usually better to just implement a forward-secret key exchange protocol than try to make RSA secure.
(You can get forward secrecy without an AKE, by regularly rotating keys, but AKEs make forward secrecy automatic and on-by-default without forcing humans to make a decision to rotate a credential– something most people don’t do unless they have to. AKEs trade user experience complexity for protocol complexity–and this trade-off is almost universally worth taking.)
Although AKEs are extremely useful, they’re a bit complex for most software developers to pick up without prior cryptography experience. (If they were easier, after all, there wouldn’t be so much software that encrypts messages directly with RSA in the first place.)
Note: RSA itself isn’t the reason that this lacks forward secrecy. The problem is how RSA is used.
Recommendations
For Developers
First, consider not using RSA. Hell, while you’re at it, don’t write any cryptography code that you don’t have to.
Libsodium (which you should use) does most of this for you, and can easily be turned into an AKE comparable to the one Signal uses. The less cryptography code you have to write, the less can go catastrophically wrong–especially in production systems.
If jettisoning RSA from your designs is a non-starter, you should at least consider taking the Dhole Moments Pledge for Software Developers:
I will not encrypt messages directly with RSA, or any other asymmetric primitive.
Simple enough, right?
Instead, if you find yourself needing to encrypt a message with RSA, remind yourself that RSA is for encrypting symmetric keys, not messages. And then plan your protocol design accordingly.
Also, I’m pretty sure RSA isn’t random-key robust. Ask your favorite cryptographer if it matters for whatever you’re building.
(But seriously, you’re better off not using RSA at all.)
For Cryptography Libraries
Let’s ask ourselves, “Why are we forcing developers to know or even care about these details?”
Libsodium doesn’t encumber developers with unnecessary decisions like this. Why does the crypto module built into JavaScript? Why does the crypto module built into most programming languages that offer one, for that matter? (Go is a notable exception here, because their security team is awesome and forward-thinking.)
In my opinion, we should stop shipping cryptography interfaces that…
- Mix symmetric and asymmetric cryptography in the same API
- Allow developers to encrypt directly with asymmetric primitives
- Force developers to manage their own nonces/initialization vectors
- Allow public/private keys to easily get confused (e.g. lack of type safety)
For example: Dhole Crypto is close to my ideal for general-purpose encryption.
Addendum: Securing RSA with PKCS#1 v1.5
Update: Neil Madden informs me that what I wrote here is actually very similar to a standard construction called RSA-KEM. You should use RSA-KEM instead of what I’ve sketched out, since that’s better studied by cryptographers.
(I’ve removed the original sketch below, to prevent accidental misuse.)
https://soatok.blog/2021/01/20/please-stop-encrypting-with-rsa-directly/
#asymmetricCryptography #cryptography #RSA #SecurityGuidance #symmetricCryptography
Governments are back on their anti-encryption bullshit again.
Between the U.S. Senate’s “EARN IT” Act, the E.U.’s slew of anti-encryption proposals, and Australia’s new anti-encryption law, it’s become clear that the authoritarians in office view online privacy as a threat to their existence.
Normally, when the governments increase their anti-privacy sabre-rattling, technologists start talking more loudly about Tor, Signal, and other privacy technologies (usually only to be drowned out by paranoid people who think Tor and Signal are government backdoors or something stupid; conspiracy theories ruin everything!).
I’m not going to do that.
Instead, I’m going to show you how to add end-to-end encryption to any communication software you’re developing. (Hopefully, I’ll avoid making any bizarre design decisions along the way.)
But first, some important disclaimers:
- Yes, you should absolutely do this. I don’t care how banal your thing is; if you expect people to use it to communicate with each other, you should make it so that you can never decrypt their communications.
- You should absolutely NOT bill the thing you’re developing as an alternative to Signal or WhatsApp.
- The goal of doing this is to increase the amount of end-to-end encryption deployed on the Internet that the service operator cannot decrypt (even if compelled by court order) and make E2EE normalized. The goal is NOT to compete with highly specialized and peer-reviewed privacy technology.
- I am not a lawyer, I’m some furry who works in cryptography. The contents of this blog post are not legal advice, nor are they endorsed by any company or organization. Ask the EFF for legal questions.
The organization of this blog post is as follows: First, I’ll explain how to encrypt and decrypt data between users, assuming you have a key. Next, I’ll explain how to build an authenticated key exchange and a ratcheting protocol to determine the keys used in the first step. Afterwards, I’ll explore techniques for binding authentication keys to identities and managing trust. Finally, I’ll discuss strategies for making it impractical to ever backdoor your software (and impossible to silently backdoor it), just to piss the creeps and tyrants of the world off even more.
You don’t have to implement the full stack of solutions to protect users, but the further you can afford to go, the safer your users will be from privacy-invasive policing.
(Art by Kyume.)
Preliminaries
Choosing a Cryptography Library
In the examples contained on this page, I will be using the Sodium cryptography library. Specifically, my example code will be written with the Sodium-Plus library for JavaScript, since it strikes a good balance between performance and being cross-platform.

```javascript
const { SodiumPlus } = require('sodium-plus');

(async function() {
    // Select a backend automatically
    const sodium = await SodiumPlus.auto();
    // Do other stuff here
})();
```
Libsodium is generally the correct choice for developing cryptography features in software, and is available in most programming languages.
If you’re prone to choose a different library, you should consult your cryptographer (and yes, you should have one on your payroll if you’re doing things differently) about your design choices.
Threat Modelling
Remember above when I said, “You don’t have to implement the full stack of solutions to protect users, but the further you can afford to go, the safer your users will be from privacy-invasive policing”?
How far you go in implementing the steps outlined on this blog post should be informed by a threat model, not an ad hoc judgment.
For example, if you’re encrypting user data and storing it in the cloud, you probably want to pass the Mud Puddle Test:
1. First, drop your device(s) in a mud puddle.
2. Next, slip in said puddle and crack yourself on the head. When you regain consciousness you’ll be perfectly fine, but won’t for the life of you be able to recall your device passwords or keys.
3. Now try to get your cloud data back.
Did you succeed? If so, you’re screwed. Or to be a bit less dramatic, I should say: your cloud provider has access to your ‘encrypted’ data, as does the government if they want it, as does any rogue employee who knows their way around your provider’s internal policy checks.
Matthew Green describes the Mud Puddle Test, which Apple products definitely don’t pass.
If you must fail the Mud Puddle Test for your users, make sure you’re clear and transparent about this in the documentation for your product or service.
(Art by Swizz.)
I. Symmetric-Key Encryption
The easiest piece of this puzzle is to encrypt data in transit between both ends (thus, satisfying the loosest definition of end-to-end encryption).
At this layer, you already have some kind of symmetric key to use for encrypting data before you send it, and for decrypting it as you receive it.
For example, the following code will encrypt/decrypt strings and return hexadecimal strings with a version prefix.
const VERSION = "v1";/** * @param {string|Uint8Array} message * @param {Uint8Array} key * @param {string|null} assocData * @returns {string} */async function encryptData(message, key, assocData = null) { const nonce = await sodium.randombytes_buf(24); const aad = JSON.stringify({ 'version': VERSION, 'nonce': await sodium.sodium_bin2hex(nonce), 'extra': assocData }); const encrypted = await sodium.crypto_aead_xchacha20poly1305_ietf_encrypt( message, nonce, key, aad ); return ( VERSION + await sodium.sodium_bin2hex(nonce) + await sodium.sodium_bin2hex(encrypted) );}/** * @param {string|Uint8Array} message * @param {Uint8Array} key * @param {string|null} assocData * @returns {string} */async function decryptData(encrypted, key, assocData = null) { const ver = encrypted.slice(0, 2); if (!await sodium.sodium_memcmp(ver, VERSION)) { throw new Error("Incorrect version: " + ver); } const nonce = await sodium.sodium_hex2bin(encrypted.slice(2, 50)); const ciphertext = await sodium.sodium_hex2bin(encrypted.slice(50)); const aad = JSON.stringify({ 'version': ver, 'nonce': encrypted.slice(2, 50), 'extra': assocData }); const plaintext = await sodium.crypto_aead_xchacha20poly1305_ietf_decrypt( ciphertext, nonce, key, aad ); return plaintext.toString('utf-8');}
Under the hood, this uses XChaCha20-Poly1305, which is less sensitive to timing leaks than AES-GCM. However, like AES-GCM, this encryption mode doesn’t provide message- or key-commitment.
If you want key commitment, you should derive two keys from $key using a KDF based on hash functions: one for actual encryption, and the other as a key commitment value.

If you want message commitment, you can use AES-CTR + HMAC-SHA256 or XChaCha20 + BLAKE2b-MAC.
If you want both, ask Taylor Campbell about his BLAKE3-based design.
A modified version of the above code with key-commitment might look like this:
const VERSION = "v2";/** * Derive an encryption key and a commitment hash. * @param {CryptographyKey} key * @param {Uint8Array} nonce * @returns {{encKey: CryptographyKey, commitment: Uint8Array}} */async function deriveKeys(key, nonce) { const encKey = new CryptographyKey(await sodium.crypto_generichash( new Uint8Array([0x01].append(nonce)), key )); const commitment = await sodium.crypto_generichash( new Uint8Array([0x02].append(nonce)), key ); return {encKey, commitment};}/** * @param {string|Uint8Array} message * @param {Uint8Array} key * @param {string|null} assocData * @returns {string} */async function encryptData(message, key, assocData = null) { const nonce = await sodium.randombytes_buf(24); const aad = JSON.stringify({ 'version': VERSION, 'nonce': await sodium.sodium_bin2hex(nonce), 'extra': assocData }); const {encKey, commitment} = await deriveKeys(key, nonce); const encrypted = await sodium.crypto_aead_xchacha20poly1305_ietf_encrypt( message, nonce, encKey, aad ); return ( VERSION + await sodium.sodium_bin2hex(nonce) + await sodium.sodium_bin2hex(commitment) + await sodium.sodium_bin2hex(encrypted) );}/** * @param {string|Uint8Array} message * @param {Uint8Array} key * @param {string|null} assocData * @returns {string} */async function decryptData(encrypted, key, assocData = null) { const ver = encrypted.slice(0, 2); if (!await sodium.sodium_memcmp(ver, VERSION)) { throw new Error("Incorrect version: " + ver); } const nonce = await sodium.sodium_hex2bin(encrypted.slice(2, 50)); const ciphertext = await sodium.sodium_hex2bin(encrypted.slice(114)); const aad = JSON.stringify({ 'version': ver, 'nonce': encrypted.slice(2, 50), 'extra': assocData }); const storedCommitment = await sodium.sodium_hex2bin(encrypted.slice(50, 114)); const {encKey, commitment} = await deriveKeys(key, nonce); if (!(await sodium.sodium_memcmp(storedCommitment, commitment))) { throw new Error("Incorrect commitment value"); } const plaintext = await sodium.crypto_aead_xchacha20poly1305_ietf_decrypt( ciphertext, nonce, encKey, aad ); return plaintext.toString('utf-8');}
Another design choice you might make is to encode ciphertext with base64 instead of hexadecimal. That doesn’t significantly alter the design here, but it does mean your decoding logic has to accommodate this.
You SHOULD version your ciphertexts, and include this in the AAD provided to your AEAD encryption mode. I used “v1” and “v2” as version strings above, but you can use your software name for that too.
II. Key Agreement
If you’re not familiar with Elliptic Curve Diffie-Hellman or Authenticated Key Exchanges, two of the earliest posts on this blog were dedicated to those topics.

Key agreement in libsodium uses Elliptic Curve Diffie-Hellman over Curve25519, or X25519 for short.
There are many schools of thought for extending ECDH into an authenticated key exchange protocol.
We’re going to implement what the Signal Protocol calls X3DH instead of doing some interactive EdDSA + ECDH hybrid, because X3DH provides cryptographic deniability (see this section of the X3DH specification for more information).
For the moment, I’m going to assume a client-server model. That may or may not be appropriate for your design. You can substitute “the server” for “the other participant” in a peer-to-peer configuration.
Heads up: This section of the blog post is code-heavy.
Update (November 23, 2020): I implemented this design in TypeScript, if you’d like something tangible to work with. I call my library Rawr X3DH.
X3DH Pre-Key Bundles
Each participant will need to upload an Ed25519 identity key once (a detail covered in a later section), which will be used to sign bundles of X25519 public keys to use for X3DH.

Your implementation will involve a fair bit of boilerplate, like so:
/**
 * Generate an X25519 keypair.
 *
 * @returns {{secretKey: X25519SecretKey, publicKey: X25519PublicKey}}
 */
async function generateKeyPair() {
    const keypair = await sodium.crypto_box_keypair();
    return {
        secretKey: await sodium.crypto_box_secretkey(keypair),
        publicKey: await sodium.crypto_box_publickey(keypair)
    };
}

/**
 * Generates some number of X25519 keypairs.
 *
 * @param {number} preKeyCount
 * @returns {{secretKey: X25519SecretKey, publicKey: X25519PublicKey}[]}
 */
async function generateBundle(preKeyCount = 100) {
    const bundle = [];
    for (let i = 0; i < preKeyCount; i++) {
        bundle.push(await generateKeyPair());
    }
    return bundle;
}

/**
 * BLAKE2b( len(PK) | PK_0, PK_1, ... PK_n )
 *
 * @param {X25519PublicKey[]} publicKeys
 * @returns {Uint8Array}
 */
async function prehashPublicKeysForSigning(publicKeys) {
    const hashState = await sodium.crypto_generichash_init();
    // First, update the state with the number of public keys
    const pkLen = new Uint8Array([
        (publicKeys.length >>> 24) & 0xff,
        (publicKeys.length >>> 16) & 0xff,
        (publicKeys.length >>> 8) & 0xff,
        publicKeys.length & 0xff
    ]);
    await sodium.crypto_generichash_update(hashState, pkLen);
    // Next, update the state with each public key
    for (let pk of publicKeys) {
        await sodium.crypto_generichash_update(
            hashState,
            pk.getBuffer()
        );
    }
    // Return the finalized BLAKE2b hash
    return await sodium.crypto_generichash_final(hashState);
}

/**
 * Signs a bundle. Returns the signature.
 *
 * @param {Ed25519SecretKey} signingKey
 * @param {X25519PublicKey[]} publicKeys
 * @returns {Uint8Array}
 */
async function signBundle(signingKey, publicKeys) {
    return sodium.crypto_sign_detached(
        await prehashPublicKeysForSigning(publicKeys),
        signingKey
    );
}

/**
 * This is just so you can see how verification looks.
 *
 * @param {Ed25519PublicKey} verificationKey
 * @param {X25519PublicKey[]} publicKeys
 * @param {Uint8Array} signature
 */
async function verifyBundle(verificationKey, publicKeys, signature) {
    return sodium.crypto_sign_verify_detached(
        await prehashPublicKeysForSigning(publicKeys),
        verificationKey,
        signature
    );
}
This boilerplate exists just so you can do something like this:
/**
 * Generate some number of X25519 keypairs.
 * Persist the bundle.
 * Sign the bundle of public keys with the Ed25519 secret key.
 * Return the signed bundle (which can be transmitted to the server.)
 *
 * @param {Ed25519SecretKey} signingKey
 * @param {number} numKeys
 * @returns {{signature: string, bundle: string[]}}
 */
async function x3dh_pre_key(signingKey, numKeys = 100) {
    const bundle = await generateBundle(numKeys);
    const publicKeys = bundle.map(x => x.publicKey);
    const signature = await signBundle(signingKey, publicKeys);
    // This is a stub; how you persist it is app-specific:
    persistBundleNotDefinedHere(signingKey, bundle);
    // Hex-encode all the public keys
    const encodedBundle = [];
    for (let pk of publicKeys) {
        encodedBundle.push(await sodium.sodium_bin2hex(pk.getBuffer()));
    }
    return {
        'signature': await sodium.sodium_bin2hex(signature),
        'bundle': encodedBundle
    };
}
And then you can drop the output of x3dh_pre_key(secretKey) into a JSON-encoded HTTP request.

In accordance with Signal’s X3DH spec, you want to use x3dh_pre_key(secretKey, 1) to generate the “signed pre-key” bundle and x3dh_pre_key(secretKey, 100) when pushing 100 one-time keys to the server.

X3DH Initiation
This section conforms to the Sending the Initial Message section of the X3DH specification.

When you initiate a conversation, the server should provide you with a bundle containing:
- Your peer’s Identity key (an Ed25519 public key)
- Your peer’s current Signed Pre-Key (an X25519 public key)
- (If any remain unburned) One of your peer’s One-Time Keys (an X25519 public key), which the server should then delete
If we assume the structure of this response looks like this:
{ "IdentityKey": "...", "SignedPreKey": { "Signature": "..." "PreKey": "..." }, "OneTimeKey": "..." // or NULL}
Then we can write the initiation step of the handshake like so:
/**
 * Get SK for initializing an X3DH handshake
 *
 * @param {object} r -- See previous code block
 * @param {Ed25519SecretKey} senderKey
 */
async function x3dh_initiate_send_get_sk(r, senderKey) {
    const identityKey = new Ed25519PublicKey(
        await sodium.sodium_hex2bin(r.IdentityKey)
    );
    const signedPreKey = new X25519PublicKey(
        await sodium.sodium_hex2bin(r.SignedPreKey.PreKey)
    );
    const signature = await sodium.sodium_hex2bin(r.SignedPreKey.Signature);
    // Check signature
    const valid = await verifyBundle(identityKey, [signedPreKey], signature);
    if (!valid) {
        throw new Error("Invalid signature");
    }
    const ephemeral = await generateKeyPair();
    const ephSecret = ephemeral.secretKey;
    const ephPublic = ephemeral.publicKey;
    // Turn the Ed25519 keys into X25519 keys for X3DH:
    const senderX = await sodium.crypto_sign_ed25519_sk_to_curve25519(senderKey);
    const recipientX = await sodium.crypto_sign_ed25519_pk_to_curve25519(identityKey);
    // See the X3DH specification to really understand this part:
    const DH1 = await sodium.crypto_scalarmult(senderX, signedPreKey);
    const DH2 = await sodium.crypto_scalarmult(ephSecret, recipientX);
    const DH3 = await sodium.crypto_scalarmult(ephSecret, signedPreKey);
    let SK;
    if (r.OneTimeKey) {
        let DH4 = await sodium.crypto_scalarmult(
            ephSecret,
            new X25519PublicKey(await sodium.sodium_hex2bin(r.OneTimeKey))
        );
        SK = kdf(new Uint8Array([
            ...DH1.getBuffer(),
            ...DH2.getBuffer(),
            ...DH3.getBuffer(),
            ...DH4.getBuffer()
        ]));
        DH4.wipe();
    } else {
        SK = kdf(new Uint8Array([
            ...DH1.getBuffer(),
            ...DH2.getBuffer(),
            ...DH3.getBuffer()
        ]));
    }
    // Wipe keys
    DH1.wipe();
    DH2.wipe();
    DH3.wipe();
    ephSecret.wipe();
    senderX.wipe();
    return {
        IK: identityKey,
        EK: ephPublic,
        SK: SK,
        OTK: r.OneTimeKey // might be NULL
    };
}

/**
 * Initialize an X3DH handshake
 *
 * @param {string} recipientIdentity - Some identifier for the user
 * @param {Ed25519SecretKey} secretKey - Sender's secret key
 * @param {Ed25519PublicKey} publicKey - Sender's public key
 * @param {string} message - The initial message to send
 * @returns {object}
 */
async function x3dh_initiate_send(recipientIdentity, secretKey, publicKey, message) {
    const r = await get_server_response(recipientIdentity);
    const {IK, EK, SK, OTK} = await x3dh_initiate_send_get_sk(r, secretKey);
    const assocData = await sodium.sodium_bin2hex(
        new Uint8Array([
            ...publicKey.getBuffer(),
            ...IK.getBuffer()
        ])
    );
    /*
     * We're going to set the session key for our recipient to SK.
     * This might invoke a ratchet.
     *
     * Either SK or the output of the ratchet derived from SK
     * will be returned by getEncryptionKey().
     */
    await setSessionKey(recipientIdentity, SK);
    const encrypted = await encryptData(
        message,
        await getEncryptionKey(recipientIdentity),
        assocData
    );
    return {
        "Sender": my_identity_string,
        "IdentityKey": await sodium.sodium_bin2hex(publicKey.getBuffer()),
        "EphemeralKey": await sodium.sodium_bin2hex(EK.getBuffer()),
        "OneTimeKey": OTK,
        "CipherText": encrypted
    };
}
We didn’t define setSessionKey() or getEncryptionKey() above. They will be covered later.

X3DH – Receiving an Initial Message
This section implements the Receiving the Initial Message section of the X3DH Specification.

We’re going to assume the structure of the request looks like this:
{ "Sender": "...", "IdentityKey": "...", "EphemeralKey": "...", "OneTimeKey": "...", "CipherText": "..."}
The code to handle this should look like this:
/**
 * Handle an X3DH initiation message as a receiver
 *
 * @param {object} r -- See previous code block
 * @param {Ed25519SecretKey} identitySecret
 * @param {Ed25519PublicKey} identityPublic
 * @param {Ed25519SecretKey} preKeySecret
 */
async function x3dh_initiate_recv_get_sk(
    r,
    identitySecret,
    identityPublic,
    preKeySecret
) {
    // Decode strings
    const senderIdentityKey = new Ed25519PublicKey(
        await sodium.sodium_hex2bin(r.IdentityKey)
    );
    const ephemeral = new X25519PublicKey(
        await sodium.sodium_hex2bin(r.EphemeralKey)
    );
    // Ed25519 -> X25519
    const senderX = await sodium.crypto_sign_ed25519_pk_to_curve25519(senderIdentityKey);
    const recipientX = await sodium.crypto_sign_ed25519_sk_to_curve25519(identitySecret);
    // See the X3DH specification to really understand this part:
    const DH1 = await sodium.crypto_scalarmult(preKeySecret, senderX);
    const DH2 = await sodium.crypto_scalarmult(recipientX, ephemeral);
    const DH3 = await sodium.crypto_scalarmult(preKeySecret, ephemeral);
    let SK;
    if (r.OneTimeKey) {
        let DH4 = await sodium.crypto_scalarmult(
            await fetchAndWipeOneTimeSecretKey(r.OneTimeKey),
            ephemeral
        );
        SK = kdf(new Uint8Array([
            ...DH1.getBuffer(),
            ...DH2.getBuffer(),
            ...DH3.getBuffer(),
            ...DH4.getBuffer()
        ]));
        DH4.wipe();
    } else {
        SK = kdf(new Uint8Array([
            ...DH1.getBuffer(),
            ...DH2.getBuffer(),
            ...DH3.getBuffer()
        ]));
    }
    // Wipe keys
    DH1.wipe();
    DH2.wipe();
    DH3.wipe();
    recipientX.wipe();
    return {
        Sender: r.Sender,
        SK: SK,
        IK: senderIdentityKey
    };
}

/**
 * Initiate an X3DH handshake as a recipient
 *
 * @param {object} req - Request object
 * @returns {string} - The initial message
 */
async function x3dh_initiate_recv(req) {
    const {identitySecret, identityPublic} = await getIdentityKeypair();
    const {preKeySecret, preKeyPublic} = await getPreKeyPair();
    const {Sender, SK, IK} = await x3dh_initiate_recv_get_sk(
        req,
        identitySecret,
        identityPublic,
        preKeySecret
    );
    const assocData = await sodium.sodium_bin2hex(
        new Uint8Array([
            ...IK.getBuffer(),
            ...identityPublic.getBuffer()
        ])
    );
    try {
        await setSessionKey(Sender, SK);
        return decryptData(
            req.CipherText,
            await getEncryptionKey(Sender),
            assocData
        );
    } catch (e) {
        await destroySessionKey(Sender);
        throw e;
    }
}
And with that, you’ve successfully implemented X3DH and symmetric encryption in JavaScript.
We abstracted some of the details away (i.e. kdf(), the transport mechanisms, the session key management mechanisms, and a few others). Some of them will be highly specific to your application, so it doesn’t make a ton of sense to flesh them out.

One thing to keep in mind: According to the X3DH specification, participants should regularly (e.g. weekly) replace their Signed Pre-Key on the server with a fresh one. They should also publish more One-Time Keys when they start to run low.
If you’d like to see a complete reference implementation of X3DH, as I mentioned before, Rawr-X3DH implements it in TypeScript.
Session Key Management
Using X3DH for every message is inefficient and unnecessary. Even the Signal Protocol doesn’t do that.

Instead, Signal specifies a Double Ratchet protocol that combines a Symmetric-Key Ratchet on subsequent messages with a Diffie-Hellman-based ratcheting protocol.
Signal even specifies integration guidelines for the Double Ratchet with X3DH.
It’s worth reading through the specification to understand their usages of Key-Derivation Functions (KDFs) and KDF Chains.
Although it is recommended to use HKDF as the Signal protocol specifies, you can strictly speaking use any secure keyed PRF to accomplish the same goal.
What follows is an example of a symmetric KDF chain that uses BLAKE2b with 512-bit digests of the current session key; the leftmost half of the BLAKE2b digest becomes the new session key, while the rightmost half becomes the encryption key.
const SESSION_KEYS = {};

/**
 * Note: In reality you'll want to have two separate sessions:
 * One for receiving data, one for sending data.
 *
 * @param {string} identity
 * @param {CryptographyKey} key
 */
async function setSessionKey(identity, key) {
    SESSION_KEYS[identity] = key;
}

async function getEncryptionKey(identity) {
    if (!SESSION_KEYS[identity]) {
        throw new Error("No session key for " + identity);
    }
    const blake2bMac = await sodium.crypto_generichash(
        SESSION_KEYS[identity],
        null,
        64
    );
    SESSION_KEYS[identity] = new CryptographyKey(blake2bMac.slice(0, 32));
    return new CryptographyKey(blake2bMac.slice(32, 64));
}
In the interest of time, a full DHRatchet implementation is left as an exercise to the reader (since it’s mostly a state machine), but using the appropriate functions provided by sodium-plus (crypto_box_keypair(), crypto_scalarmult()) should be relatively straightforward.

Make sure your KDFs use domain separation, as per the Signal Protocol specifications.
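Since kdf() was abstracted away earlier, here's a minimal sketch of what a domain-separated KDF could look like using BLAKE2b. The “Soatok-X3DH-v1” label is a made-up domain-separation constant for illustration, not part of the Signal specifications. (If your kdf() is async like this one, remember to await it.)

// Hypothetical kdf() sketch with domain separation; not a vetted design.
const KDF_DOMAIN = new TextEncoder().encode('Soatok-X3DH-v1');

async function kdf(ikm) {
    // Prefix the input keying material with a context label, so this use of
    // BLAKE2b can never collide with another use of BLAKE2b in your protocol.
    const hash = await sodium.crypto_generichash(
        new Uint8Array([...KDF_DOMAIN, ...ikm]),
        null,
        32
    );
    return new CryptographyKey(hash);
}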
Group Key Agreement
The Signal Protocol specified X3DH and the Double Ratchet for securely encrypting information between two parties.

Group conversations are trickier, because you have to be able to encrypt information that multiple recipients can decrypt, add/remove participants to the conversation, etc.
(The same complexity comes with multi-device support for end-to-end encryption.)
The best design I’ve read to date for tackling group key agreement is the IETF Messaging Layer Security RFC draft.
I am not going to implement the entire MLS RFC in this blog post. If you want to support multiple devices or group conversations, you’ll want a complete MLS implementation to work with.
Brief Recap
That was a lot of ground to cover, but we’re not done yet.

(Art by Khia.)
So far we’ve tackled encryption, initial key agreement, and session key management. However, we did not flesh out how Identity Keys (which are signing keys–Ed25519 specifically–rather than Diffie-Hellman keys) are managed. That detail was just sorta hand-waved until now.
So let’s talk about that.
III. Identity Key Management
There’s a meme among technology bloggers to write a post titled “Falsehoods Programmers Believe About _____”.

Fortunately for us, Identity is one of the topics that furries are positioned to understand better than most (due to fursonas): Identities have a many-to-many relationship with Humans.
In an end-to-end encryption protocol, each identity will consist of some identifier (phone number, email address, username and server hostname, etc.) and an Ed25519 keypair (for which the public key will be published).
But how do you know whether or not a given public key is correct for a given identity?
This is where we segue into one of the hard problems in cryptography, where the solutions available are entirely dependent on your threat model: Public Key Infrastructure (PKI).
Some common PKI designs include:
- Certificate Authorities (CAs) — TLS does this
- Web-of-Trust (WoT) — The PGP ecosystem does this
- Trust On First Use (TOFU) — SSH does this
- Key Transparency / Certificate Transparency (CT) — TLS also does this for ensuring CA-issued certificates are auditable (although it was originally meant to replace Certificate Authorities)
And you can sort of choose-your-own-adventure on this one, depending on what’s most appropriate for the type of software you’re building and who your customers are.
One design I’m particularly fond of is called Gossamer, which is a PKI design without Certificate Authorities, originally designed for making WordPress’s automatic updates more secure (i.e. so every developer can sign their theme and plugin updates).
Since we only need to maintain an up-to-date repository of Ed25519 identity keys for each participant in our end-to-end encryption protocol, this makes Gossamer a suitable starting point.
Gossamer specifies a limited grammar of Actions that can be performed: AppendKey, RevokeKey, AppendUpdate, RevokeUpdate, and AttestUpdate. These actions are signed and published to an append-only cryptographic ledger.
I would propose a sixth action: AttestKey, so you can have WoT-like assurances and key-signing parties. (If nothing else, you should be able to attest that the identity keys of other cryptographic ledgers in the network are authentic at a point in time.)
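To make that proposal concrete, a signed AttestKey action might look something like the following sketch. Every field name here is hypothetical; Gossamer's actual message format is defined by its specification, and AttestKey is my proposed extension, not an existing action:

// Hypothetical AttestKey action; the shape and field names are invented
// for illustration and do not reflect Gossamer's real wire format.
const action = {
    action: 'AttestKey',
    attester: 'soatok',              // the identity vouching for the key
    subject: 'kerberos',             // the identity whose key is attested
    subjectPublicKey: '(hex-encoded Ed25519 public key)',
    attestedAt: '2020-11-14T00:00:00Z'
};
// The attester signs the serialized action with their own identity key,
// then publishes {action, signature} to the append-only cryptographic ledger.
const signature = await sodium.crypto_sign_detached(
    JSON.stringify(action),
    attesterSecretKey // assumed: the attester's Ed25519SecretKey
);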
IV. Backdoor Resistance
In the previous section, I proposed the use of Gossamer as a PKI for Identity Keys. This would provide Ed25519 keypairs for use with X3DH and the Double Ratchet, which would in turn provide session keys to use for symmetric authenticated encryption.

If you’ve implemented everything preceding this section, you have a full-stack end-to-end encryption protocol. But let’s make intelligence agencies and surveillance capitalists even more mad by making it impractical to backdoor our software (and impossible to silently backdoor it).
How do we pull that off?
You want Binary Transparency.
For us, the implementation is simple: Use Gossamer as it was originally intended (i.e. to secure your software distribution channels).
Gossamer provides up-to-date verification keys and a commitment to a cryptographic ledger of every software update. You can learn more about its inspiration here.
It isn’t enough to merely use Gossamer to manage keys and update signatures. You need independent third parties to use the AttestUpdate action to assert one or more of the following:
- That builds are reproducible from the source code.
- That they have reviewed the source code and found no evidence of backdoors or exploitable vulnerabilities.
(And then you should let your users decide which of these independent third parties they trust to vet software updates.)
Closing Remarks
The U.S. Government cries and moans a lot about “criminals going dark” and wonders a lot about how to solve the “going dark problem”.

If more software developers implement end-to-end encryption in their communications software, then maybe one day they won’t be able to use dragnet surveillance to spy on citizens and they’ll be forced to do actual detective work to solve actual crimes.
Y’know, like their job description actually entails?
Let’s normalize end-to-end encryption. Let’s normalize backdoor-resistant software distribution.
Let’s collectively tell the intelligence community in every sophisticated nation state the one word they don’t hear often enough:

No.

Especially if you’re a furry. Because we improve everything! :3
Questions You Might Have
What About Private Contact Discovery?
That’s one of the major reasons why the thing we’re building isn’t meant to compete with Signal (and it MUST NOT be advertised as such):

Signal is a privacy tool, and their servers have no way of identifying who can contact who.
What we’ve built here isn’t a complete privacy solution, it’s only providing end-to-end encryption (and possibly making NSA employees cry at their desk).
Does This Design Work with Federation?
Yes. Each identifier string can be [username] at [hostname].

What About Network Metadata?
If you want anonymity, you want to use Tor.

Why Are You Using Ed25519 Keys for X3DH?
If you only read the key agreement section of this blog post and the fact that I’m passing around Ed25519 public keys seems weird, you might have missed the identity section of this blog post where I suggested piggybacking on another protocol called Gossamer to handle the distribution of Ed25519 public keys. (Gossamer is also beneficial for backdoor resistance in software update distribution, as described in the subsequent section.)

Furthermore, we’re actually using birationally equivalent X25519 keys derived from the Ed25519 keypair for the X3DH step. This is a deviation from what Signal does (using X25519 keys everywhere, then inventing an EdDSA variant to support their usage).
const publicKeyX = await sodium.crypto_sign_ed25519_pk_to_curve25519(foxPublicKey);
const secretKeyX = await sodium.crypto_sign_ed25519_sk_to_curve25519(wolfSecretKey);
(Using fox/wolf instead of Alice/Bob, because it’s cuter.)
This design pattern has a few advantages:
- It makes Gossamer integration seamless, which means you can use Ed25519 for identities and still have a deniable X3DH handshake for 1:1 conversations while implementing the rest of the designs proposed.
- This approach to X3DH can be implemented entirely with libsodium functions, without forcing you to write your own cryptography implementations (i.e. for XEdDSA).
The only disadvantages I’m aware of are:
- It deviates from Signal’s core design in a subtle way that means you don’t get to claim the exact same advantages Signal does when it comes to peer review.
- Some cryptographers are distrustful of the use of birationally equivalent X25519 keys from Ed25519 keys (although there isn’t a vulnerability any of them have been able to point me to that doesn’t involve torsion groups–which libsodium’s implementation already avoids).
If these concerns are valid enough to decide against my implementation above, I invite you to talk with cryptographers about your concerns and then propose alternatives.
Has Any of This Been Implemented Already?
You can find implementations for the designs discussed on this blog post below:
- Rawr-X3DH implements X3DH in TypeScript (added 2020-11-23)
I will update this section of the blog post as implementations surface.
https://soatok.blog/2020/11/14/going-bark-a-furrys-guide-to-end-to-end-encryption/
#authenticatedEncryption #authenticatedKeyExchange #crypto #cryptography #encryption #endToEndEncryption #libsodium #OnlinePrivacy #privacy #SecurityGuidance #symmetricEncryption
If you’re ever tasked with implementing a cryptography feature–whether a high-level protocol or a low-level primitive–you will have to take special care to ensure you’re not leaking secret information through side-channels.
The descriptions of algorithms you learn in a classroom or textbook are not sufficient for real-world use. (Yes, that means your toy RSA implementation based on GMP from your computer science 101 class isn’t production-ready. Don’t deploy it.)
But what are these elusive side-channels exactly, and how do you prevent them? And in cases where you cannot prevent them, how can you mitigate the risk to your users?
Art by Swizz.
Contents
- Cryptographic Side-Channels
- Side-Channel Prevention and Mitigation
- Design Patterns for Algorithmic Constant-Time Code
- Constant-Time String Comparison
- Alternative: “Double HMAC” String Comparison
- Constant-Time Conditional Select
- Constant-Time String Inequality Comparison
- Constant-Time Integer Multiplication
- Constant-Time Integer Division
- Constant-Time Modular Inversion
- Constant-Time Null-Byte Trimming
- Further Reading and Online Resources
- Errata
Cryptographic Side-Channels
The concept of a side-channel isn’t inherently cryptographic, as Taylor Hornby demonstrates, but a side-channel can be a game-over vulnerability in a system meant to maintain confidentiality (even if only for its cryptography keys).
Cryptographic side-channels allow an attacker to learn secret data from your cryptography system. To accomplish this, the attacker doesn’t necessarily study the system’s output (i.e. ciphertext); instead, they observe some other measurement, such as how much time or power was spent performing an operation, or what kind of electromagnetic radiation was emitted.
Important: While being resistant to side-channels is a prerequisite for implementations to be secure, it isn’t in and of itself sufficient for security. The underlying design of the primitives, constructions, and high-level protocols needs to be secure first, and that requires a clear and specific threat model for what you’re building.
Constant-time ECDSA doesn’t help you if you reuse k-values like it’s going out of style, but variable-time ECDSA still leaks your secret key to anyone who cares to probe your response times. Secure cryptography is very demanding.
Art by Riley.
Timing Leaks
Timing side-channels leak secrets through how much time it takes for an operation to complete.
There are many different flavors of timing leakage, including:
- Fast-failing comparison functions (memcmp() in C)
- Cache-timing vulnerabilities (e.g. software AES)
- Memory access patterns
- Conditional branches controlled by secrets
The bad news about timing leaks is that they’re almost always visible to an attacker over the network (including over the Internet (PDF)).
The good news is that most of them can be prevented or mitigated in software.
Art by Kyume.
Power Usage
Different algorithms or processor operations may require different amounts of power.
For example, squaring a large number may take less power than multiplying two different large numbers. This observation has led to the development of power analysis attacks against RSA.
Power analysis is especially relevant for embedded systems and smart cards, which are easier to extract a meaningful signal from than your desktop computer.
Some information leakage through power usage can be prevented through careful engineering (for example: BearSSL, which uses Montgomery multiplication instead of square-and-multiply).
But that’s not always an option, so generally these risks are mitigated.
My reaction when I first learned of power leaks: WATT (Art by Swizz)
Electromagnetic Emissions
Your computer is a reliable source of electromagnetic emissions (such as radio waves). Some of these emissions may reveal information about your cryptographic secrets, especially to an attacker with physical proximity to your device.
The good news is that research into EM emission side-channels isn’t as mature as side-channels through timing leaks or power usage. The bad news is that mitigations for breakthroughs will generally require hardware (e.g. electromagnetic shielding).
Aren’t computers terrifying? (Art by Swizz)
Side-Channel Prevention and Mitigation
Now that we’ve established a rough sense of some of the types of side-channels that are possible, we can begin to identify what causes them and aspire to prevent the leaks from happening–and where we can’t, to mitigate the risk to a reasonable level.
Note: To be clear, I didn’t cover all of the types of side-channels.
Prevention vs. Mitigation
Preventing a side-channel means eliminating the conditions that allow the information leak to occur in the first place. For timing leaks, this means making all algorithms constant-time.
There are entire classes of side-channel leaks that aren’t possible or practical to mitigate in software. When you encounter one, the best you can hope to do is mitigate the risk.
Ideally, you want to make the attack more expensive to pull off than the reward an attacker will gain from it.
What is Constant-Time?
https://www.youtube.com/watch?v=ZD_H1ePLylA
When an implementation is said to be constant-time, what we mean is that the execution time of the code is not a function of its secret inputs.
A vulnerable AES implementation uses table look-ups to implement the S-Box. Constant-time AES is either implemented in hardware, or is bitsliced in software.
Malicious Environments and Algorithmic Constant-Time
One of the greatest challenges with writing constant-time code is distinguishing between algorithmic constant-time and provably constant-time. The main difference between the two is that you cannot trust your compiler (especially a JIT compiler), which may attempt to optimize your code in a way that reintroduces the side-channel you aspired to remove.
“A sufficiently advanced compiler optimization is indistinguishable from an adversary.”

John Regehr, possibly with apologies to Arthur C. Clarke
For compiled languages, this is a tractable but expensive problem to solve: You simply have to formally verify everything from the source code to the compiler to the silicon chips that the code will be deployed on, and then audit your supply chain to prevent malicious tampering from going undetected.
For interpreted languages (e.g. PHP and JavaScript), this formal verification strategy isn’t really an option, unless you want to formally verify the runtime that interprets scripts and prove that the operations remain constant-time on top of all the other layers of distrust.
Is this level of paranoia really worth the effort?
For our cases, anyway! (Art by Khia.)
For that reason, we’re going to assume that algorithmic constant-time is adequate for the duration of this blog post.
If your threat model prevents you from accepting this assumption, feel free to put in the extra effort yourself and tell me how it goes. After all, as a furry who writes blog posts in my spare time for fun, I don’t exactly have the budget for massive research projects in formal verification.
Mitigation with Blinding Techniques
The best mitigation for some side-channels is called blinding: Obfuscating the inputs with some random data, then deobfuscating the outputs with the same random data, such that your keys are not revealed.
Two well-known examples include RSA decryption and Elliptic Curve Diffie-Hellman. I’ll focus on the latter, since it’s not as widely covered in the literature (although several cryptographers I’ve talked with were somehow knowledgeable about it; I suspect gatekeeping is involved).
Blinded ECDH Key Exchange
In typical ECDH implementations, you will convert a point (x, y) on a Weierstrass curve to a Jacobian coordinate system (X, Y, Z).

The exact conversion formula is (X = xZ², Y = yZ³, Z). The conversion almost makes intuitive sense.

Where does Z come from though?
Art by circuitslime
It turns out, the choice for Z is totally arbitrary. Libraries typically set it equal to 1 (for best performance), but you can also set it to a random number. (You cannot set it to 0, however, for obvious reasons.)
Choosing a random number means the calculations performed over Jacobian coordinates will be obscured by a randomly chosen factor (and thus, if Z is only used once per scalar multiplication, the bitwise signal the attackers rely on will be lost).
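Here's a rough sketch of that conversion with a random Z (illustrative only: JavaScript BigInt arithmetic is not itself constant-time, p stands in for the curve's field prime, and randomFieldElement is an assumed helper that returns a uniformly random non-zero field element):

// Affine (x, y) -> blinded Jacobian (X, Y, Z) with a random Z, so the
// intermediate values of a scalar multiplication are masked by Z.
function toBlindedJacobian(x, y, p, randomFieldElement) {
    const z = randomFieldElement(p); // assumed helper: uniform in [1, p - 1]
    const z2 = (z * z) % p;          // Z^2
    const z3 = (z2 * z) % p;         // Z^3
    return {
        X: (x * z2) % p,             // X = x * Z^2
        Y: (y * z3) % p,             // Y = y * Z^3
        Z: z
    };
}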
Blinding techniques are cool. (Art by Khia.)
I think it’s really cool how one small tweak to the runtime of an algorithm can make it significantly harder to attack.
Design Patterns for Algorithmic Constant-Time Code
Mitigation techniques are cool, but preventing side-channels is a better value-add for most software.
To that end, let’s look at some design patterns for constant-time software. Some of these are relatively common; others, not so much.
Art by Scout Pawfoot.
If you prefer TypeScript / JavaScript, check out Soatok’s constant-time-js library on Github / NPM.
Constant-Time String Comparison
Rather than using string comparison (== in most programming languages, memcmp() in C), you want to compare cryptographic secrets and/or calculated integrity checks with a secure compare algorithm, which looks like this:
- Initialize a variable (let’s call it D) to zero.
- For each byte of the two strings:
- Calculate (left[i] XOR right[i])
- Bitwise OR the current value of D with the result of the XOR, store the output in D
- When the loop has concluded, D will be equal to 0 if and only if the two strings are equal.
In code form, it looks like this:
<?php
function ct_compare(string $left, string $right): bool
{
    $d = 0;
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        return false; // Lengths differ
    }
    for ($i = 0; $i < $length; ++$i) {
        $leftCharCode = unpack('C', $left[$i])[1];
        $rightCharCode = unpack('C', $right[$i])[1];
        $d |= ($leftCharCode ^ $rightCharCode);
    }
    return $d === 0;
}
In this example, I’m using PHP’s unpack() function to avoid cache-timing leaks with ord() and chr(). Of course, you can simply use hash_equals() instead of writing it yourself (PHP 5.6.0+).
Alternative: “Double HMAC” String Comparison
If the previous algorithm won’t work (i.e. because you’re concerned your JIT compiler will optimize it away), there is a popular alternative to consider. It’s called “Double HMAC” because it was traditionally used with Encrypt-Then-HMAC schemes.
The algorithm looks like this:
- Generate a random 256-bit key, K. (This can be cached between invocations, but it should be unpredictable.)
- Calculate HMAC-SHA256(K, left).
- Calculate HMAC-SHA256(K, right).
- Return true if the outputs of step 2 and 3 are equal.
This is provably secure, so long as HMAC-SHA256 is a secure pseudo-random function and the key K is unknown to the attacker.
In code form, the Double HMAC compare function looks like this:
<?php
function hmac_compare(string $left, string $right): bool
{
    static $k = null;
    if (!$k) $k = random_bytes(32);
    return (
        hash_hmac('sha256', $left, $k)
        ===
        hash_hmac('sha256', $right, $k)
    );
}
Constant-Time Conditional Select
I like to imagine a conversation between a cryptography engineer and a Zen Buddhist, that unfolds like so:
- CE: “I want to eliminate branching side-channels from my code.”
- ZB: “Then do not have branches in your code.”
And that is precisely what we intend to do with a constant-time conditional select: Eliminate branches by conditionally returning one of two strings, without an IF statement.
Mind. Blown. (Art by Khia.)
This isn’t as tricky as it sounds. We’re going to use XOR and two’s complement to achieve this.
The algorithm looks like this:
- Convert the selection bit (TRUE/FALSE) into a mask value (-1 for TRUE, 0 for FALSE). Bitwise, -1 looks like 111111111…1111111111, while 0 looks like 00000000…00000000.
- Copy the right string into a buffer, call it tmp.
- Calculate left XOR right, call it x.
- Return (tmp XOR (x AND mask)).
Once again, in code this algorithm looks like this:
<?php
function ct_select(
    bool $returnLeft,
    string $left,
    string $right
): string {
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        throw new Exception('ct_select() expects two strings of equal length');
    }
    // Mask byte
    $mask = (-$returnLeft) & 0xff;
    // X
    $x = (string) ($left ^ $right);
    // Output = Right XOR (X AND Mask)
    $output = '';
    for ($i = 0; $i < $length; $i++) {
        $rightCharCode = unpack('C', $right[$i])[1];
        $xCharCode = unpack('C', $x[$i])[1];
        $output .= pack(
            'C',
            $rightCharCode ^ ($xCharCode & $mask)
        );
    }
    return $output;
}
You can test this code for yourself here. The function was designed to read intuitively like a ternary operator.
A Word of Caution on Cleverness
In some languages, it may seem tempting to use the bitwise trickery to swap out pointers instead of returning a new buffer. But do not fall for this Siren song.
If, instead of returning a new buffer, you just swap pointers, what you’ll end up doing is creating a timing leak through your memory access patterns. This can culminate in a timing vulnerability, but even if your data is too big to fit in a processor’s cache line (I dunno, Post-Quantum RSA keys?), there’s another risk to consider.
Virtual memory addresses are just beautiful lies. Where your data lives on the actual hardware memory is entirely up to the kernel. You can have two blobs with contiguous virtual memory addresses that live on separate memory pages, or even separate RAM chips (if you have multiple).
If you’re swapping pointers around, and they point to two different pieces of hardware, and one is slightly faster to read from than the other, you can introduce yet another timing attack through which pointer is being referenced by the processor.
It’s timing leaks all the ways down! (Art by Swizz)
If you’re swapping between X and Y before performing a calculation, where:
- X lives on RAM chip 1, which takes 3 ns to read
- Y lives on RAM chip 2, which takes 4 ns to read
…then the subsequent use of the swapped pointers reveals whether you’re operating on X or Y in the timing: It will take slightly longer to read from Y than from X.
The best way to mitigate this problem is to never design your software to have it in the first place. Don’t be clever on this one.
Constant-Time String Inequality Comparison
Sometimes you don’t just need to know if two strings are equal, you also need to know which one is larger than the other.
To accomplish this in constant-time, we need to maintain two state variables:
- gt (initialized to 0, will be set to 1 at some point if left > right)
- eq (initialized to 1, will be set to 0 at some point if left != right)
Endian-ness will dictate the direction our algorithm goes, but we’re going to perform two operations in each cycle:
- gt should be bitwise ORed with (eq AND ((right[i] − left[i]) right-shifted 8 times))
- eq should be bitwise ANDed with (((right[i] XOR left[i]) − 1) right-shifted 8 times)
If right[i] and left[i] are ever different, eq will be set to 0.

If, the first time they differ, the value of left[i] is greater than the value of right[i], then the subtraction will produce a negative number. Right-shifting a negative number 8 places and then bitwise ANDing the result with eq (which is only 1 until two bytes differ, and 0 henceforth) will OR a 1 into gt. Thus, if (right[i] − left[i]) is negative, gt will be set to 1. Otherwise, it remains 0.
At the end of this loop, return (gt + gt + eq) – 1. This will result in the following possible values:
- left < right: -1
- left == right: 0
- left > right: 1
The arithmetic based on the possible values of gt and eq should be straightforward.
- Different (eq == 0) but not greater (gt == 0) means left < right, -1.
- Different (eq == 0) and greater (gt == 1) means left > right, 1.
- If eq == 1, no bytes ever differed, so left == right, 0.
A little-endian implementation is as follows:
<?php
function str_compare(string $left, string $right): int
{
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        throw new Exception('str_compare() expects two strings of equal length');
    }
    $gt = 0;
    $eq = 1;
    $i = $length;
    while ($i > 0) {
        --$i;
        $leftCharCode = unpack('C', $left[$i])[1];
        $rightCharCode = unpack('C', $right[$i])[1];
        $gt |= (($rightCharCode - $leftCharCode) >> 8) & $eq;
        $eq &= (($rightCharCode ^ $leftCharCode) - 1) >> 8;
    }
    return ($gt + $gt + $eq) - 1;
}
Demo for this function is available here.
Constant-Time Integer Multiplication
Multiplying two integers is one of those arithmetic operations that should be constant-time. But on many older processors, it isn’t.
Of course there’s a microarchitecture timing leak! (Art by Khia.)
Fortunately, there is a workaround. It involves an algorithm called Ancient Egyptian Multiplication in some places or Peasant Multiplication in others.
Multiplying two numbers x and y this way looks like this:

- Determine the number of operations you need to perform. Generally, this is either known ahead of time or based on the number of bits in y.
- Set z to 0.
- Until the operation count reaches zero:
- If the lowest bit of y is set, add x to z.
- Left-shift x by 1.
- Right-shift y by 1.
- Return z.
The main caveat here is that you want to use bitwise operators in step 3.1 to remove the conditional branch.
Rather than bundle example code in our blog post, please refer to the implementation in sodium_compat (a pure PHP polyfill for libsodium).
For big number libraries, implementing Karatsuba on top of this integer multiplying function should be faster than attempting to multiply bignums this way.
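For intuition, here's a minimal sketch of the branchless trick from step 3.1, for unsigned 32-bit integers (my illustration, not the sodium_compat implementation; it computes x * y mod 2^32 in a fixed 32 iterations):

// Branchless "peasant multiplication" sketch for unsigned 32-bit integers.
function ct_multiply(x, y) {
    let z = 0;
    // Always run exactly 32 iterations, regardless of the inputs.
    for (let i = 0; i < 32; i++) {
        // mask is all ones when the lowest bit of y is set, 0 otherwise.
        const mask = -(y & 1);
        // Add x to z only when mask is all ones -- no conditional branch.
        z = (z + (x & mask)) >>> 0;
        x = (x << 1) >>> 0;
        y >>>= 1;
    }
    return z;
}

For example, ct_multiply(3, 5) returns 15 without ever branching on the bits of y.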
Constant-Time Integer Division
Although some cryptography algorithms call for integer division, division isn’t usually expected to be constant-time.
However, if you look up a division algorithm for unsigned integers with a remainder, you’ll likely encounter this algorithm, which is almost constant-time:
if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do  -- Where n is number of bits in N
    R := R << 1         -- Left-shift R by 1 bit
    R(0) := N(i)        -- Set the least-significant bit of R equal to bit i of the numerator
    if R ≥ D then
        R := R − D
        Q(i) := 1
    end
end
If we use the tricks we learned from implementing constant-time string inequality with constant-time conditional selection, we can implement this algorithm without timing leaks.
Our constant-time version of this algorithm looks like this:
if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do  -- Where n is number of bits in N
    R := R << 1         -- Left-shift R by 1 bit
    R(0) := N(i)        -- Set the least-significant bit of R equal to bit i of the numerator
    compared := ct_compare(R, D)  -- Use constant-time inequality
    -- if R > D then compared == 1, swap = 1
    -- if R == D then compared == 0, swap = 1
    -- if R < D then compared == -1, swap = 0
    swap := (1 - ((compared >> 31) & 1))
    -- R' = R - D
    -- Q' = Q, except the i'th bit of Q' is set to 1
    Rprime := R - D
    Qprime := Q
    Qprime(i) := 1
    -- Replace (R with R', Q with Q') if swap == 1
    R = ct_select(swap, Rprime, R)
    Q = ct_select(swap, Qprime, Q)
end
It’s approximately twice as slow as the original, but it’s constant-time.
(Art by Khia.)
Constant-Time Modular Inversion
Modular inversion is the calculation of x^−1 (mod p) for some prime p. This is used in a lot of places, but especially in elliptic curve cryptography and RSA.
Daniel J. Bernstein and Bo-Yin Yang published a paper on fast constant-time GCD and Modular Inversion in 2019. The algorithm in question is somewhat straightforward to implement (although determining whether or not that implementation is safe is left as an exercise to the rest of us).
A simpler technique is to use Fermat’s Little Theorem: x^(p−2) ≡ x^−1 (mod p) for some prime p. This only works with prime fields, and is slower than a Binary GCD (which isn’t necessarily constant-time, as OpenSSL discovered).
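As a sketch of the Fermat approach (illustrative only: JavaScript BigInt operations are not constant-time, so this shows the fixed-sequence structure rather than a deployable implementation):

// Compute x^(p-2) mod p with a square-and-multiply-always ladder, so the
// sequence of multiplications doesn't depend on the exponent's bits.
// (BigInt arithmetic itself is still variable-time; sketch only.)
function fermatInverse(x, p) {
    const exp = p - 2n;
    const bits = exp.toString(2).length;
    let result = 1n;
    const base = x % p;
    for (let i = bits - 1; i >= 0; i--) {
        result = (result * result) % p;         // always square
        const multiplied = (result * base) % p; // always multiply
        const bit = (exp >> BigInt(i)) & 1n;
        // Selecting without a secret-dependent branch would require a
        // ct_select(); the ternary here is for readability.
        result = (bit === 1n) ? multiplied : result;
    }
    return result;
}

For example, fermatInverse(3n, 7n) is 5n, since 3 × 5 ≡ 1 (mod 7).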
BearSSL provides an implementation (and accompanying documentation) for a constant-time modular inversion algorithm based on Binary GCD.
(In the future, I may update this section of this blog post with an implementation in PHP, using the GMP extension.)
Constant-Time Null-Byte Trimming
Shortly after this guide first went online, security researchers published the Raccoon Attack, which used a timing leak in the number of leading 0 bytes in the pre-master secret–combined with a lattice attack to solve the hidden number problem–to break TLS-DH(E).
To solve this, you need two components:
- A function that returns a slice of an array without timing leaks.
- A function that counts the number of significant bytes (i.e. ignores leading zero bytes, counts from the first non-zero byte).
A timing-safe array resize function needs to do a few things:
- Touch every byte of the input array once.
- Touch every byte of the output array at least once, linearly. The constant-time division algorithm is useful here (to calculate x mod n for the output array index).
- Conditionally select between input[x] and the existing output[x_mod_n], based on whether x >= target size.
I’ve implemented this in my constant-time-js library.
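As a rough illustration of the significant-byte counting step, here's a sketch (my own illustration, not the actual constant-time-js code):

// Count significant bytes (everything from the first non-zero byte onward)
// without branching on the data. Sketch only.
function countSignificantBytes(buf) {
    let count = 0;
    let foundNonZero = 0; // flips to 1 at the first non-zero byte, then sticks
    for (let i = 0; i < buf.length; i++) {
        // isNonZero is 1 if buf[i] != 0, else 0 -- branchless normalization
        const isNonZero = ((buf[i] | -buf[i]) >>> 31) & 1;
        foundNonZero |= isNonZero;
        count += foundNonZero;
    }
    return count;
}

For example, countSignificantBytes(new Uint8Array([0, 0, 7, 0, 9])) returns 3.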
Further Reading and Online Resources
If you’re at all interested in cryptographic side-channels, your hunger for knowledge probably won’t be sated by a single blog post. Here’s a collection of articles, papers, books, etc. worth reading.
- BearSSL’s Documentation on Constant-Time Code — A must-read for anyone interested in this topic
- Cryptographically Secure PHP Development — How to write secure cryptography in languages that cryptographers largely neglect
- CryptoCoding — A style guide for writing secure cryptography code in C (with example code!)
- CryptoGotchas — An overview of the common mistakes one can make when writing cryptography code (which is a much wider scope than side-channels)
- Meltdown and Spectre — Two vulnerabilities that placed side-channels in the scope of most of infosec that isn’t interested in cryptography
- Serious Cryptography — For anyone who lacks the background knowledge to fully understand what I’m talking about on this page
Errata
- 2020-08-27: The original version of this blog post incorrectly attributed Jacobian coordinate blinding to ECDSA hardening, rather than ECDH hardening. This error was brought to my attention by Thai Duong. Thanks Thai!
- 2020-08-27: Erin correctly pointed out that omitting memory access timing was a disservice to developers, who might not be aware of the risks involved. I’ve updated the post to call this risk out specifically (especially in the conditional select code, which some developers might try to implement with pointer swapping without knowing the risks involved). Thanks Erin!
I hope you find this guide to side-channels helpful.
Thanks for reading!
Follow my blog for more Defense Against the Bark Arts posts in the future.
https://soatok.blog/2020/08/27/soatoks-guide-to-side-channel-attacks/
#asymmetricCryptography #constantTime #cryptography #ECDH #ECDSA #ellipticCurveCryptography #RSA #SecurityGuidance #sideChannels #symmetricCryptography
If you’re reading this wondering if you should stop using AES-GCM in some standard protocol (TLS 1.3), the short answer is “No, you’re fine”.

I specialize in secure implementations of cryptography, and my years of experience in this field have led me to dislike AES-GCM.
This post is about why I dislike AES-GCM’s design, not “why AES-GCM is insecure and should be avoided”. AES-GCM is still miles above what most developers reach for when they want to encrypt (e.g. ECB mode or CBC mode). If you want a detailed comparison, read this.
To be clear: This is solely my opinion and not representative of any company or academic institution.
What is AES-GCM?
AES-GCM is an authenticated encryption mode that uses the AES block cipher in counter mode with a polynomial MAC based on Galois field multiplication.

In order to explain why AES-GCM sucks, I have to first explain what I dislike about the AES block cipher. Then, I can describe why I’m filled with sadness every time I see the AES-GCM construction used.
What is AES?
The Advanced Encryption Standard (AES) is a specific subset of a block cipher called Rijndael.

Rijndael’s design is based on a substitution-permutation network, which broke tradition from many block ciphers of its era (including its predecessor, DES) in not using a Feistel network.
AES only includes three flavors of Rijndael: AES-128, AES-192, and AES-256. The difference between these flavors is the size of the key and the number of rounds used, but–and this is often overlooked–not the block size.
As a block cipher, AES always operates on 128-bit (16 byte) blocks of plaintext, regardless of the key size.
This is generally considered acceptable because AES is a secure pseudorandom permutation (PRP), which means that every possible plaintext block maps directly to one ciphertext block, and thus birthday collisions are not possible. (A pseudorandom function (PRF), conversely, does have birthday bound problems.)
Why AES Sucks
Art by Khia.

Side-Channels
The biggest reason why AES sucks is that its design uses a lookup table (called an S-Box) indexed by secret data, which is inherently vulnerable to cache-timing attacks (PDF).

There are workarounds for this AES vulnerability, but they either require hardware acceleration (AES-NI) or a technique called bitslicing.
The short of it is: With AES, you’re either using hardware acceleration, or you have to choose between performance and security. You cannot get fast, constant-time AES without hardware support.
Block Size
AES-128 is considered by experts to have a security level of 128 bits.

Similarly, AES-192 gets certified at 192-bit security, and AES-256 gets 256-bit security.
However, the AES block size is only 128 bits!
That might not sound like a big deal, but it severely limits the constructions you can create out of AES.
Consider the case of AES-CBC, where the output of each block of encryption is combined with the next block of plaintext (using XOR). This is typically used with a random 128-bit block (called the initialization vector, or IV) for the first block.
This means you expect a collision (at 50% probability) after encrypting 2^64 blocks.
When you start getting collisions, you can break CBC mode, as this video demonstrates:
https://www.youtube.com/watch?v=v0IsYNDMV7A
This is significantly smaller than the 2^128 you expect from AES.
Post-Quantum Security?
With respect to the number of attempts needed to find the correct key, cryptographers estimate that AES-128 will have a post-quantum security level of 64 bits, AES-192 will have a post-quantum security level of 96 bits, and AES-256 will have a post-quantum security level of 128 bits.

This is because Grover’s quantum search algorithm can search N unsorted items in O(√N) time, which can be used to reduce the total number of possible secrets from 2^n to 2^(n/2). This effectively cuts the security level, expressed in bits, in half.
Note that this heuristic estimate is based on the number of guesses (a time factor), and doesn’t take circuit size into consideration. Grover’s algorithm also doesn’t parallelize well. The real-world security of AES may still be above 100 bits if you consider these nuances.
But remember, even AES-256 operates on 128-bit blocks.
Consequently, for AES-256, there should be approximately 2^256 (plaintext, key) pairs that produce any given ciphertext block.
Furthermore, there will be many keys that, for a constant plaintext block, will produce the same ciphertext block despite being a different key entirely. (n.b. This doesn’t mean for all plaintext/ciphertext block pairings, just some arbitrary pairing.)
Concrete example: Encrypting a plaintext block consisting of sixteen NUL bytes will yield a specific 128-bit ciphertext exactly once for each given AES-128 key. However, there are 2^128 times as many AES-256 keys as there are possible plaintext/ciphertexts. Keep this in mind for AES-GCM.
This means it’s conceivable to accidentally construct a protocol that, despite using AES-256 safely, has a post-quantum security level on par with AES-128, which is only 64 bits.
This would not be nearly as much of a problem if AES’s block size was 256 bits.
Real-World Example: Signal
The Signal messaging app is the state-of-the-art for private communications. If you were previously using PGP and email, you should use Signal instead.

Signal aims to provide private communications (text messaging, voice calls) between two mobile devices, piggybacking on your pre-existing contacts list.
Part of their operational requirements is that they must be user-friendly and secure on a wide range of Android devices, stretching all the way back to Android 4.4.
The Signal Protocol uses AES-CBC + HMAC-SHA256 for message encryption. Each message is encrypted with a different AES key (due to the Double Ratchet), which limits the practical blast radius of a cache-timing attack and makes practical exploitation difficult (since you can’t effectively replay decryption in order to leak bits about the key).
Thus, Signal’s message encryption is still secure even in the presence of vulnerable AES implementations.
Hooray for well-engineered protocols managing to actually protect users.
Art by Swizz.

However, the storage service in the Signal App uses AES-GCM, and this key has to be reused in order for the encrypted storage to operate.
This means, for older phones without dedicated hardware support for AES (i.e. low-priced phones from 2013, which Signal aims to support), the risk of cache-timing attacks is still present.
This is unacceptable!
What this means is, a malicious app that can flush the CPU cache and measure timing with sufficient precision can siphon the AES-GCM key used by Signal to encrypt your storage without ever violating the security boundaries enforced by the Android operating system.
As a result of the security boundaries never being crossed, these kinds of side-channel attacks would likely evade forensic analysis, and would therefore be of interest to the malware developers working for nation states.
Of course, if you’re on newer hardware (i.e. Qualcomm Snapdragon 835), you have hardware-accelerated AES available, so it’s probably a moot point.
Why AES-GCM Sucks Even More
AES-GCM is an authenticated encryption mode that also supports additional authenticated data. Cryptographers call these modes AEAD.

AEAD modes are more flexible than simple block ciphers. Generally, your encryption API accepts the following:
- The plaintext message.
- The encryption key.
- A nonce (short for “number used once”): A number that must only be used once for a given key.
- Optional additional data which will be authenticated but not encrypted.
The output of an AEAD function is both the ciphertext and an authentication tag, which is necessary (along with the key and nonce, and optional additional data) to decrypt the plaintext.
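In sodium-plus terms (to stay consistent with the earlier posts), that shape looks something like the following sketch, using XChaCha20-Poly1305; the same inputs and outputs apply to AES-GCM:

// The general AEAD shape: (plaintext, nonce, key, aad) -> ciphertext + tag.
// Sketch only; assumes this runs inside an async function with a SodiumPlus
// instance named `sodium`, as in the earlier examples.
const key = await sodium.crypto_aead_xchacha20poly1305_ietf_keygen();
const nonce = await sodium.randombytes_buf(24); // must never repeat per key
const aad = 'message header, v1';               // authenticated, not encrypted

const ciphertextAndTag = await sodium.crypto_aead_xchacha20poly1305_ietf_encrypt(
    'attack at dawn', nonce, key, aad
);
// Decryption requires the same key, nonce, and AAD:
const plaintext = await sodium.crypto_aead_xchacha20poly1305_ietf_decrypt(
    ciphertextAndTag, nonce, key, aad
);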
Cryptographers almost universally recommend using AEAD modes for symmetric-key data encryption.
That being said, AES-GCM is possibly my least favorite AEAD, and I’ve got good reasons to dislike it beyond simply, “It uses AES”.
The deeper you look into AES-GCM’s design, the harder you will feel this sticker.
GHASH Brittleness
The way AES-GCM is initialized is stupid: You encrypt an all-zero block with your AES key (in ECB mode) and store it in a variable called H. This value is used for authenticating all messages authenticated under that AES key, rather than for a given (key, nonce) pair.
Diagram describing Galois/Counter Mode, taken from Wikipedia.
This is often sold as an advantage: Reusing H allows for better performance. However, it makes GCM brittle: Reusing a nonce allows an attacker to recover H and then forge messages forever. This is called the “forbidden attack”, and it led to real-world practical breaks.

Let’s contrast AES-GCM with the other AEAD mode supported by TLS: ChaCha20-Poly1305, or ChaPoly for short.
ChaPoly uses one-time message authentication keys (derived from each key/nonce pair). If you manage to leak a Poly1305 key, the impact is limited to the messages encrypted under that (ChaCha20 key, nonce) pair.
While that’s still bad, it isn’t “decrypt all messages under that key forever” bad like with AES-GCM.
Note: “Message Authentication” here is symmetric, which only provides a property called message integrity, not sender authenticity. For the latter, you need asymmetric cryptography (wherein the ability to verify a message doesn’t imply the capability to generate a new signature), which is totally disparate from symmetric algorithms like AES or GHASH. You probably don’t need to care about this nuance right now, but it’s good to know in case you’re quizzed on it later.
H Reuse and Multi-User Security
If you recall, AES operates on 128-bit blocks even when 256-bit keys are used.

If we assume AES is well-behaved, we can deduce that there are approximately $2^{128}$ different 256-bit keys that will map a single plaintext block to a single ciphertext block.
This is trivial to calculate. Simply divide the number of possible keys ($2^{256}$) by the number of possible block states ($2^{128}$) to yield the number of keys that produce a given ciphertext for a single block of plaintext: $2^{256} / 2^{128} = 2^{128}$.
Each key that will map an arbitrarily specific plaintext block to a specific ciphertext block is also separated in the keyspace by approximately $2^{128}$.
This means there are approximately $2^{128}$ independent keys that will map a given all-zero plaintext block to an arbitrarily chosen value of $H$ (if we assume AES doesn’t have weird biases).
Credit: Harubaki
“Why Does This Matter?”
It means that, with keys larger than 128 bits, you can model the selection of $H$ as a 128-bit pseudorandom function, rather than a 128-bit permutation. As a result, you can expect a collision with 50% probability after only $2^{64}$ different keys are selected.

Note: Your 128-bit randomly generated AES keys already have this probability baked into their selection, but this specific analysis doesn’t really apply for 128-bit keys, since AES is a PRP, not a PRF, so there is no “collision” risk. However, you end up at the same upper limit either way.
But 50% isn’t good enough for cryptographic security.
In most real-world systems, we target a $2^{-32}$ collision risk. That means our safety limit is actually $2^{48}$ different AES keys before you have to worry about $H$ reuse.
This isn’t the same thing as symmetric wear-out (where you need to re-key after a given number of encryptions to prevent nonce reuse). Rather, it means that after your entire population has exhausted the safety limit of $2^{48}$ different AES keys, you have to either accept the risk or stop using AES-GCM.
If you have a billion users ($\approx 2^{30}$), the safety limit is breached after $2^{18}$ AES keys per user (approximately 262,000).
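As a sanity check on those numbers, the birthday-bound arithmetic fits in a few lines (my own illustration; for $n$ random 128-bit values, the collision probability is roughly $p \approx n^2 / 2^{129}$):

// log2 of the number of keys you can select before the probability of an
// H collision reaches 2^log2p, derived from p ≈ n^2 / 2^(128 + 1)
function log2KeysForRisk(log2p: number): number {
  return (128 + 1 + log2p) / 2;
}

console.log(log2KeysForRisk(-1));  // 64:   ~2^64 keys for a 50% chance
console.log(log2KeysForRisk(-32)); // 48.5: ~2^48 keys for a 2^-32 risk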
“What Good is H Reuse for Attackers if EK0 Differs?”
There are two numbers used in AES-GCM that are derived from the AES key: $H$ is used for block multiplication, and $EK_0$ (the value of AES-CTR with the counter set to 0, from the following diagram) is XORed with the final result to produce the authentication tag.

The arrow highlighted with green is $EK_0$.
It’s tempting to think that a reuse of $H$ isn’t a concern, because $EK_0$ will necessarily be randomized, which prevents an attacker from observing when $H$ collides. It’s certainly true that the single-block collision risk discussed previously for $H$ will almost certainly not also result in a collision for $EK_0$. And since $EK_0$ isn’t reused unless a nonce is reused (which also leaks $H$ directly), this might seem like a non-issue.
Art by Khia.
However, it’s straightforward to go from a condition of $H$ reuse to an adaptive chosen-ciphertext attack:
- Intercept multiple valid ciphertexts (e.g. multiple JWTs encrypted with {"alg":"A256GCM"}).
- Use your knowledge of $H$, the ciphertext, and the AAD to calculate the GCM tag up to the final XOR. This, along with the existing authentication tag, will tell you the value of $EK_0$ for a given nonce.
- Calculate a new authentication tag for a chosen ciphertext using $EK_0$ and your candidate $H$ value, then replay it into the target system.
While the blinding offered by XORing the final output with $EK_0$ is sufficient to stop $H$ from being leaked directly, the protection is one-way.
Ergo, a collision in $H$ is not sufficiently thwarted by $EK_0$.
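To see what “up to the final XOR” means in practice, here’s a sketch (my own, using Node’s crypto module, not code from any standard library for GCM internals) that recomputes an AES-GCM tag from $H$ and $EK_0$. We derive both values from the key for demonstration purposes; an attacker with a colliding $H$ instead runs the GHASH half and XORs against an intercepted tag to recover $EK_0$:

import { createCipheriv, randomBytes } from 'node:crypto';

const R = 0xe1000000000000000000000000000000n; // GCM reduction constant

function toBig(b: Buffer): bigint {
  return BigInt('0x' + b.toString('hex'));
}

// GF(2^128) multiplication, bit-by-bit, per NIST SP 800-38D Algorithm 1
function gfMul(x: bigint, y: bigint): bigint {
  let z = 0n;
  let v = x;
  for (let i = 127; i >= 0; i--) {
    if ((y >> BigInt(i)) & 1n) z ^= v;
    v = v & 1n ? (v >> 1n) ^ R : v >> 1n;
  }
  return z;
}

// GHASH over AAD || ciphertext || bit lengths, keyed by H
function ghash(h: bigint, aad: Buffer, ct: Buffer): bigint {
  let y = 0n;
  const absorb = (data: Buffer) => {
    for (let off = 0; off < data.length; off += 16) {
      const block = Buffer.alloc(16); // zero-padded final block
      data.copy(block, 0, off, Math.min(off + 16, data.length));
      y = gfMul(y ^ toBig(block), h);
    }
  };
  absorb(aad);
  absorb(ct);
  const lengths = Buffer.alloc(16);
  lengths.writeBigUInt64BE(BigInt(aad.length) * 8n, 0);
  lengths.writeBigUInt64BE(BigInt(ct.length) * 8n, 8);
  return gfMul(y ^ toBig(lengths), h);
}

// Produce a reference ciphertext and tag with the real implementation
const key = randomBytes(32);
const nonce = randomBytes(12);
const aad = Buffer.from('header');
const gcm = createCipheriv('aes-256-gcm', key, nonce);
gcm.setAAD(aad);
const ct = Buffer.concat([gcm.update(Buffer.from('attack at dawn')), gcm.final()]);
const tag = gcm.getAuthTag();

// Derive H = E_K(0^128) and EK0 = E_K(J_0), the "counter 0" block in the diagram
const ecb = createCipheriv('aes-256-ecb', key, null);
const h = toBig(ecb.update(Buffer.alloc(16)));
const ek0 = toBig(ecb.update(Buffer.concat([nonce, Buffer.from([0, 0, 0, 1])])));

// tag == GHASH(H, AAD, CT) XOR EK0: GHASH is all an attacker needs H for
const recomputed = (ghash(h, aad, ct) ^ ek0).toString(16).padStart(32, '0');
console.log(recomputed === tag.toString('hex')); // true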
“How Could the Designers Have Prevented This?”
The core issue here is the AES block size, again.

If we were analyzing a 256-bit block variant of AES, and a congruent GCM construction built atop it, none of what I wrote in this section would apply.
However, the 128-bit block size was a design constraint enforced by NIST in the AES competition. This block size was chosen during an era of 64-bit block ciphers (e.g. Triple-DES and Blowfish), so it was a significant improvement at the time.
NIST’s AES competition also inherited from the US government’s tradition of thinking in terms of “security levels”, which is why there are three different permitted key sizes (128, 192, or 256 bits).
“Why Isn’t This a Vulnerability?”
There’s always a significant gap in security, wherein something isn’t safe to recommend, but also isn’t susceptible to a known practical attack. This gap is important for keeping systems secure, even when they aren’t on the bleeding edge of security research.

Using 1024-bit RSA is a good example of this: No one has yet, to my knowledge, successfully factored a 1024-bit RSA public key. However, most systems have recommended a minimum of 2048 bits for years (and many recommend 3072 or 4096 bits today).
With AES-GCM, the expected distance between collisions in $H$ is $2^{64}$, and finding an untargeted collision requires being able to observe more than $2^{64}$ different sessions, and somehow distinguish when $H$ collides.
As a user, you know that after $2^{48}$ different keys, you’ve crossed the safety boundary for avoiding $H$ collisions. But as an attacker, you need $2^{64}$ bites at the apple, not $2^{48}$. Additionally, you need some sort of oracle or distinguisher for when this happens.
We don’t have that kind of distinguisher available to us today. And even if we had one, the amount of data you would need to search through in order to find any two users in the population whose $H$ values collide is challenging to work with. You would need the computational and data-storage capabilities of a major cloud service provider to even think about pulling the attack off.
Naturally, this isn’t a practical vulnerability. This is just another gripe I have with AES-GCM, as someone who has to work with cryptographic algorithms a lot.
Short Nonces
Although the AES block size is 16 bytes, AES-GCM nonces are only 12 bytes. The remaining 4 bytes of the block are dedicated to an internal counter, which is used with AES in Counter Mode to actually encrypt/decrypt messages.

(Yes, you can use arbitrary-length nonces with AES-GCM, but if you use nonces longer than 12 bytes, they get hashed into 12 bytes anyway, so it’s not a detail most people should concern themselves with.)
If you ask a cryptographer, “How much can I encrypt safely with AES-GCM?” you’ll get two different answers.
- Message Length Limit: AES-GCM can be used to encrypt messages up to $2^{36} - 32$ bytes long, under a given (key, nonce) pair.
- Number of Messages Limit: If you generate your nonces randomly, you have a 50% chance of a nonce collision after $2^{48}$ messages.
However, 50% isn’t conservative enough for most systems, so the safety margin is usually much lower. Cryptographers generally set the key wear-out of AES-GCM at $2^{32}$ random nonces, which represents a collision probability of about one in 4 billion.

These limits are acceptable for session keys for encryption-in-transit, but they impose serious operational limits on application-layer encryption with long-term keys.
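In practice, that means bounding how many times any single key is used with random nonces. A minimal sketch of that bookkeeping (my own illustration, using Node’s crypto module; a real system would persist the counter across restarts):

import { createCipheriv, randomBytes } from 'node:crypto';

// Hypothetical wrapper that refuses to exceed the 2^32 random-nonce wear-out
class GcmSessionKey {
  private uses = 0;
  constructor(private readonly key: Buffer) {}

  encrypt(plaintext: Buffer, aad: Buffer) {
    if (++this.uses > 2 ** 32) {
      throw new Error('Key worn out: rotate before encrypting again');
    }
    const nonce = randomBytes(12); // fresh random nonce per message
    const cipher = createCipheriv('aes-256-gcm', this.key, nonce);
    cipher.setAAD(aad);
    const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
    return { nonce, ciphertext, tag: cipher.getAuthTag() };
  }
}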
Random Key Robustness
Before the advent of AEAD modes, cryptographers used to combine block cipher modes of operation (e.g. AES-CBC, AES-CTR) with a separate message authentication code algorithm (e.g. HMAC, CBC-MAC).

You had to be careful in how you composed your protocol, lest you invite Cryptographic Doom into your life. A lot of developers screwed this up. Standardized AEAD modes promised to make life easier.
Many developers gained their intuition for authenticated encryption modes from protocols like Signal’s (which combines AES-CBC with HMAC-SHA256), and would expect AES-GCM to be a drop-in replacement.
Unfortunately, GMAC doesn’t offer the same security benefits as HMAC: Finding a different (ciphertext, HMAC key) pair that produces the same authentication tag is a hard problem, due to HMAC’s reliance on cryptographic hash functions. This makes HMAC-based constructions “message committing”, which instills Random Key Robustness.
Critically, AES-GCM doesn’t have this property. You can calculate a random (ciphertext, key) pair that collides with a given authentication tag very easily.
This fact prohibits AES-GCM from being considered for use with OPAQUE (which requires RKR), one of the upcoming password-authenticated key exchange algorithms. (Read more about them here.)
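If you’re stuck with AES-GCM but need something commitment-like, one folklore mitigation (my sketch, not something this post prescribes) is to derive the encryption key and a separate commitment string from the same input keying material, then verify the commitment before decrypting:

import { hkdfSync, timingSafeEqual } from 'node:crypto';

// Derive an AES-256-GCM key plus a 32-byte commitment from one secret
function deriveCommittedKey(ikm: Buffer) {
  const okm = Buffer.from(
    hkdfSync('sha256', ikm, Buffer.alloc(32), 'aead-key-commitment', 64)
  );
  return { encKey: okm.subarray(0, 32), commitment: okm.subarray(32) };
}

// Receiver: re-derive and compare before touching the ciphertext
function verifyCommitment(ikm: Buffer, claimed: Buffer): boolean {
  const { commitment } = deriveCommittedKey(ikm);
  return claimed.length === commitment.length && timingSafeEqual(claimed, commitment);
}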
Better-Designed Algorithms
You might be thinking, “Okay random furry, if you hate AES-GCM so much, what would you propose we use instead?”

I’m glad you asked!
XChaCha20-Poly1305
For encrypting messages under a long-term key, you can’t really beat XChaCha20-Poly1305.
- ChaCha is a stream cipher based on a 512-bit ARX hash function in counter mode. ChaCha doesn’t use S-Boxes. It’s fast and constant-time without hardware acceleration.
- ChaCha20 is ChaCha with 20 rounds.
- XChaCha nonces are 24 bytes, which allows you to generate them randomly and not worry about a birthday collision until about $2^{80}$ messages (for the same collision probability as AES-GCM).
- Poly1305 uses a different 256-bit key for each (nonce, key) pair, and is easier to implement in constant time than AES-GCM.
- XChaCha20-Poly1305 uses the first 16 bytes of the nonce and the 256-bit key to generate a distinct subkey, and then employs the standard ChaCha20-Poly1305 construction used in TLS today.
For application-layer cryptography, XChaCha20-Poly1305 contains most of the properties you’d want from an authenticated mode.
However, like AES-GCM (and all other Polynomial MACs I’ve heard of), it is not message committing.
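If you want to try it out, here’s a brief usage sketch with the libsodium-wrappers package (the library choice is my assumption; any libsodium binding exposes the same functions):

import sodium from 'libsodium-wrappers';

(async () => {
  await sodium.ready;

  const key = sodium.crypto_aead_xchacha20poly1305_ietf_keygen();
  // 24-byte nonce: safe to generate randomly
  const nonce = sodium.randombytes_buf(
    sodium.crypto_aead_xchacha20poly1305_ietf_NPUBBYTES
  );

  const ciphertext = sodium.crypto_aead_xchacha20poly1305_ietf_encrypt(
    'attack at dawn', // message
    'header',         // additional authenticated data
    null,             // secret nonce (unused, always null)
    nonce,
    key
  );

  // Decryption throws if the ciphertext, AAD, nonce, or key don't match
  const plaintext = sodium.crypto_aead_xchacha20poly1305_ietf_decrypt(
    null, ciphertext, 'header', nonce, key
  );
  console.log(sodium.to_string(plaintext)); // "attack at dawn"
})();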
The Gimli Permutation
For lightweight cryptography (n.b. important for IoT), the Gimli permutation (e.g. as employed in libhydrogen) is an attractive option.

Gimli is a Round 2 candidate in NIST’s Lightweight Cryptography project. The Gimli permutation offers a lot of applications: a hash function, message authentication, encryption, etc.
Critically, it’s possible to construct a message-committing protocol out of Gimli that will hit a lot of the performance goals important to embedded systems.
Closing Remarks
Despite my personal disdain for AES-GCM, if you’re using it as intended by cryptographers, it’s good enough.

Don’t throw AES-GCM out just because of my opinions. It’s very likely the best option you have.
Although I personally dislike AES and GCM, I’m still deeply appreciative of the brilliance and ingenuity that went into both designs.
My desire is for the industry to improve upon AES and GCM in future cipher designs so we can protect more people, from a wider range of threats, in more diverse protocols, at a cheaper CPU/memory/time cost.
We wouldn’t have a secure modern Internet without the work of Vincent Rijmen, Joan Daemen, John Viega, David A. McGrew, and the countless other cryptographers and security researchers who made AES-GCM possible.
Change Log
- 2021-10-26: Added section on H Reuse and Multi-User Security.