Last year I wrote a grab-bag post titled, Don’t Forget To Brush Your Fur, because I’m terrible at SEO or making content easily discoverable.
In the same vein as that previous example, this is going to be in the style of Lightning Round talks at technology conferences.
Why are we doing this again?
I maintain a running list of things to write about, and cross ideas off whenever I cover a topic.
After a few months of doing this, I realized most of what remains is kinda interesting, but not quite interesting enough to warrant a dedicated entry.
It always needs more isogenies.
(Art by Lynx vs Jackalope)
Contents
- Asymmetric Key Wear-Out
- HMAC Wear-Out?
- Asymmetric Commitments
- Against “Fluffies”
- A Meditation on Furries and Cringe
- Furries and Blue State Privilege
Asymmetric Key Wear-Out
Last year, I wrote about cryptographic wear-out for symmetric encryption. That post has attracted quite a bit of feedback from folks requesting comparisons against other block cipher modes, etc. One topic that I didn’t see requested much, but is equally interesting, is how this reasoning can be applied to asymmetric cryptography (if at all).
Let’s get one thing clear: Cryptography keys don’t “wear out” in the same sense as a physical key might. What we’re talking about is an ever-increasing risk of a collision occurring in random nonces.
ECDSA Key Wear-Out
ECDSA signatures involve a one-time secret, k. The x-coordinate of the scalar multiplication of k and the base point for the curve is encoded as half of the signature (r), while the modular inverse of k is multiplied by the sum of the truncated message hash and the product of r and the secret key to produce the other half of the signature (s).
If your selection of k is biased, or k is ever reused for two different messages, you can leak the secret key.
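To make that concrete, here's the standard ECDSA signing math (d is the secret key, G the base point, n the group order, H the truncated message hash), and why a reused k is fatal:

```latex
\begin{aligned}
r &= x(kG) \bmod n \\
s &= k^{-1}\left(H(m) + r \cdot d\right) \bmod n
\end{aligned}
% If the same k signs two messages m_1 and m_2, r is identical for both, so:
\begin{aligned}
k &= (H(m_1) - H(m_2)) \cdot (s_1 - s_2)^{-1} \bmod n \\
d &= (s_1 \cdot k - H(m_1)) \cdot r^{-1} \bmod n
\end{aligned}
```

Two signatures, a little modular arithmetic, and the attacker walks away with your secret key.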
Strictly speaking, for any given ECDSA curve, there is only one k value that corresponds to a given r for all users (n.b. it's not distinct per keypair).
This means that all users of e.g. ECDSA over NIST P-256 have to worry about a shared cryptographic wear-out: After 2^112 signatures, there is a 2^-32 chance of a single collision occurring.
Fortunately, the search space of possible k-values is enormous, and this will not impose a real-world operational risk in the near future. If you’re worried about multi-user attacks, P-384 gives you a wear-out threshold of 2^176 messages, which we’re probably never going to achieve.
RSA Key Wear-Out
In order to calculate wear-out for RSA, you first have to begin with an attack model. Previously, we were looking at algorithms that would become brittle if a nonce was reused.
RSA doesn’t have nonces. You can’t attack RSA this way.
But let’s assume that such an attack did exist. What might the safety limit look like? There are two remaining possible considerations for RSA’s security against cryptographic wear-out: Key size and padding mode.
RSA private keys are two prime numbers (p, q). RSA public keys are the product of the two primes (n) and a public exponent (e) that must be coprime to (p-1)(q-1). (In practice, e is usually set to 3, 65537, or some other small prime.)
The security of RSA is subexponential to key size, based on the difficulty of integer factoring attacks and the requirement for p and q to be prime numbers.
This primeness restriction doesn’t apply to your message. The padding mode dictates your upper limit on message size; e.g., PKCS#1 v1.5 padding will take up at least 3 bytes:
- For encryption, x = 0x00 || 0x02 || r || 0x00 || m, where r is random padding bytes (minimum 8 bytes).
- For signatures, x = 0x00 || 0x01 || 0xFF..FF || 0x00 || m.
- In either case, the padding is always at least 11 bytes long.
So if you have 2048-bit RSA keys, you can encrypt or sign up to 245 bytes (1960 bits) with PKCS#1 v1.5 padding. This corresponds to a safety limit of 2^974 messages.
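If you want to sanity-check the capacity arithmetic yourself, it's a one-liner (a minimal sketch; the function name is mine):

```python
def pkcs1v15_max_message_bytes(modulus_bits: int) -> int:
    # 0x00 || 0x02 || (at least 8 random bytes) || 0x00 || message
    # means at least 11 bytes of the modulus are reserved for padding.
    return modulus_bits // 8 - 11

assert pkcs1v15_max_message_bytes(2048) == 245  # 1960 bits of message
```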
(Art by Lynx vs Jackalope)
HMAC Wear-Out?
To keep things simple, the security of HMAC can be reduced to the collision risk of the underlying hash function.
If you’re trying to estimate when to rotate symmetric keys used for HMAC, take the birthday bound of the underlying hash function as your starting point.
- For SHA-256, you have a 50% chance of a collision after 2^128 messages. For a 2^-32 chance, you can get 2^112 messages out of a single key.
- For SHA-384, this is 2^176 messages.
- For SHA-512, this is 2^240 messages.
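These figures all fall out of the same birthday-bound arithmetic: for an n-bit hash, the collision probability after q messages is roughly q²/2ⁿ, so solving for a 2⁻³² risk gives:

```latex
\frac{q^2}{2^{n}} \approx 2^{-32}
\quad\Longrightarrow\quad
q \approx 2^{(n-32)/2}
% n = 256 \Rightarrow 2^{112}, \quad n = 384 \Rightarrow 2^{176}, \quad n = 512 \Rightarrow 2^{240}
```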
In any case, however, these numbers might as well be infinity.
With apologies to Filippo Valsorda.
Asymmetric Commitments
Did you know that fast MACs such as GHASH and Poly1305 aren’t random-key robust? This property can matter in surprising ways.
Did you know that ECDSA and RSA don’t qualify for this property either? This is related to the topics of malleability and exclusive ownership. You can learn more about this in the CryptoGotchas page.
Essentially, if a signature scheme is malleable or fails to provide exclusive ownership, it’s possible to construct two arbitrary (m, pk) pairs that produce the same signature.
Any nonmalleable signature scheme with exclusive ownership (i.e. Ed25519 with low-order point rejection and canonical signature checks, as provided by the latest version of libsodium) provides sufficient commitment–mostly due to how it uses a collision-resistant cryptographic hash function. (It’s also worth noting: HashEdDSA doesn’t. Isn’t cryptography fun?)
Generally, if you need random-key robustness, you want to explicitly make it part of your design.
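One way to do that (a minimal sketch, and certainly not the only construction): use HMAC over a collision-resistant hash rather than a fast one-time MAC, which gives you key commitment by construction:

```python
import hashlib
import hmac

def committing_tag(key: bytes, message: bytes) -> bytes:
    # Unlike Poly1305/GHASH, finding (key1, msg1) != (key2, msg2) that
    # produce the same HMAC-SHA-256 tag requires breaking SHA-256's
    # collision resistance, so the tag commits to the key.
    return hmac.new(key, message, hashlib.sha256).digest()
```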
Against “Fluffies”
In my blog post about the neverending wheel of Furry Twitter discourse, I mentioned the controversy around SFW spaces for underage furries.
Everything I said in that post is still accurate (go read it if you haven’t), but I want to emphasize something that maybe some people overlooked.
https://twitter.com/SoatokDhole/status/1426638694786682884
Underage furries calling themselves “fluffies” is a bad idea, for two reasons.
Divide and Conquer
The first reason is tactical, and not specific to what they’re calling themselves: If you label yourselves separately from the larger furry community, you make it much easier to be targeted–especially by propaganda. There’s a severely disturbed alt-right fringe to the furry fandom (dubbed alt-furry, the Furry Raiders, and so many other names) that would love nothing more than to sink their claws into younger furs.
It’ll start innocently enough (“Yay, you have your own space!”), but it will quickly accelerate (“Congrats on kicking those degenerates to the curb!”) to horrible places (“All LGBTQIA+ people are degenerates”), gliding on the wings of edgy humor.
This descent into madness is also known as the PewDiePipeline and all parents of furries should be made aware of it, lest it happen to their child:
https://www.youtube.com/watch?v=pnmRYRRDbuw
It bears emphasizing: The existence of a PewDiePipeline within the “fluffy” space is not predicated on the intentions of its proponents. They can have all the best intentions in the world and it will still happen to their microcosm.
https://twitter.com/ARCADEGUTS/status/1425687280983937027
Preventing this from happening will require an almost inhuman degree of vigilance and dedication to correcting discourse from going sour. None of us are omniscient, so I wouldn’t take that bet.
Pre-Existing Terminology
The second reason the “fluffies” label is a bad idea is more specific to the word “fluffies” in particular: It already refers to a very disturbing meme on 4chan from not-very-many years ago: Fluffy Abuse Threads.
I’m intentionally not including any videos or images of this topic. There just aren’t enough content warnings for how gross this content is.
If you call yourselves “fluffies”, the most deranged 4chan-dwellers and/or Kiwi Farms lurkers on the Internet will begin to associate you with the “fluffy abuse” memes, and may even act accordingly. In their twisted minds, they may even rationalize their conduct as if somehow you’re consenting to the abuse, by virtue of what you call yourselves.
Look, I get it: When you’re young, the over-sexualization of the media can be very uncomfortable, and it’s natural to want to avoid it. Additionally, it’s only human to want your own special club with a special name to hang out with your exclusive (n.b. same-age) community.
But please think carefully about what you’re doing, how you’re doing it, and which adults you decide to trust.
Also: maybe talk to older queers and/or furries about the history of the Furry Fandom, Pride, and kink before you make dangerous moves that make you more vulnerable to the worst humanity has to offer? Even if you don’t agree with us, we don’t want to see you get hurt.
There definitely is room in the furry fandom for people who are not comfortable with sexual content, or simply don’t want to be inundated with it all the damn time. It doesn’t need to be an exclusive thing or concept; instead, it should be normalized.
Ultimately, there’s probably a lot of work to be done to ensure kids and families have a safe and enjoyable furry con experience during daylight hours without repressing the queer and sexual identities of consenting adults at night. The best way to get from here to there is to talk, not to isolate.
Otherwise, we’ll keep seeing occurrences like this:
https://twitter.com/PrincelyKaden/status/1426192114694692866
The onus here is going to be largely on furry convention staff and chatroom moderators to actually listen to people reporting abusive behavior. They haven’t always been good about that, and it’s time for change.
https://twitter.com/MegaplexCon/status/1425966589241970693
A Meditation on Furries and Cringe
Every once in a while, I get a comment or email like this one:
https://twitter.com/SoatokDhole/status/1360835077899436033
The biggest magnet for poorly-reasoned hate comments is, surprisingly, my tear-down of the “sigma male” meme.
You’d think the exposure of TheDonald’s non-CloudFlare IP address would draw more ire than having correct opinions on masculinity, but here we are.
Art by Sophie
Let’s talk about masculinity for a moment, guys.
There is nothing manlier than being your authentic self. Even if that means liking some “girly” things. Even if that means being soft and vulnerable at times. Even if that means actually conforming to some stereotypes perpetuated by toxic masculinity when it coincides with your likes and interests. You do you.
But this isn’t just true of the male gender. Authenticity is the epitome of humanity. There’s nothing that stops women and enbies from being ruthlessly themselves.
You can’t be authentic when you’re participating in Cringe Culture, which blindly tears large swaths of people down to stoke the feelings of superiority in the people who evade its blast.
People are weird. I’m weird. I don’t expect everyone to like me, nor do I want them to. (Parasocial relationships suck!)
It’s okay to be a little obsessed about something other people look down on just because you happen to like it. Just make sure you’re not eschewing your adult responsibilities. (We all have bills to pay and promises to keep to the people that matter to us.)
If people don’t like you because you suddenly revealed your fondness for classic video games, rock-tumbling, or linear algebra? Fuck ’em. May the bridges you burn light the way to people who will appreciate you for who you truly are.
I’ve been told my blog is “weapons grade cringe” before, because I dared talk about encryption while having what, to most adults, comes across as little more than a cartoon brand or company mascot.
(Art by Lynx vs Jackalope)
Furries and Blue State Privilege
I sympathize with most queer people and/or furries for not wanting to subject themselves to the bigotry that runs rampant in Red States, but the ones who are jerks to other members of their community for living in those states, I can do without.
https://twitter.com/SarahcatFursuit/status/1413566747148435456
Being an asshole to someone because they live in, or are moving to, a state whose politics you dislike is equal parts stupid, selfish, and self-defeating:
- It’s stupid because there’s never a good reason to express prejudice or paint with a broad brush. For example: “Florida Furs are bad people” is an attack on the author of this blog.
- It’s selfish because not everyone who wants to leave these states has the resources or opportunity to do so, so all you’re doing is shining a spotlight on your own privilege. Way to show your entire ass to the community.
- It’s self-defeating because of the way the U.S. political system is architected:
If you wished for a genie to move every LGBTQIA+ person to the west coast of the United States, within a few years you’d essentially reduce support for LGBTQIA+ rights to approximately 6 out of 100 votes in the US Senate and 68 out of 435 in the House of Representatives.

When you factor in who owns the land in the big tech cities (San Francisco, Seattle, etc.) and how much political and economic power they wield, it becomes very clear that your shaming of others for not boarding the bandwagon serves the interests of the worst of humanity: landlords and venture capitalists.
Not a good move for people who claim to be progressive, and want to achieve progressive political outcomes nationwide.
Some states have horrendous laws on the books, even worse bastards enforcing those laws, and somehow even more terrible politicians gatekeeping any meaningful progress. None of that is ever going to be improved from the outside.
I say all this, and I acknowledge Florida does suck in a lot of obvious ways: Our governor (Ron DeSantis) has a disposition that would actually be improved if he wore clown make-up to press appearances. We also have far too many furries that are anti-maskers, anti-vaxxers, or both.
https://twitter.com/SoatokDhole/status/1300911840000708608
But when furries go out of their way to shame someone, simply for living here? You’re not helping. Seriously stop and think about your priorities.
And maybe–just maybe–be surgically precise when you decide insults are warranted.
Now that I’ve flushed the blog post topic buffer, I’m fresh out of ideas. Let me know some topics that interest you in my Telegram group so I don’t get bored and eventually write Buzzfeed-quality crap like this:
In hindsight, ideas like this are 90% of the reason Cringe Culture refuses to die.
https://soatok.blog/2021/08/16/lightning-round/
#asymmetricCryptography #ECDSA #Florida #furries #FurryFandom #HMAC #Politics #RSA #Society #wearOut
There are a lot of random topics I’ve wanted to write about since I started Dhole Moments, and for one reason or another, haven’t actually written about. I know from past experience with other projects that if you don’t occasionally do some housekeeping, your backlog eventually collapses under its own gravity and you can never escape from it.

So, to prevent that, I’d like to periodically take some time to clean up some of those loose ends that collect over time.
Random-Access AEAD
AEAD stands for Authenticated Encryption with Associated Data. Typically, AEAD constructions involve a stream cipher (which may also be a block cipher in counter mode) and a message authentication code (which may also be an almost-universal hash function).

AEAD modes are designed for one-shot APIs: encrypt (then authenticate) all at once; (verify, then) decrypt all at once. AES-GCM, ChaPoly, etc.
AEADs are less great at providing random access to the underlying plaintext. For example: If you’re encrypting a 240 GB file with AES-GCM, but you only need a 512 KB chunk at some arbitrary point in the file, you’re forced to choose between either:
- Authenticating the rest of the AES-GCM ciphertext, then decrypting only the relevant chunk. (Performance sucks.)
- Sacrificing integrity and decrypting the desired chunk with AES-CTR.
Being forced to choose between speed and security will almost certainly result in a loss of security. The incentives of software developers (especially with fly-by-night startup engineers) all-but-guarantee this outcome.
Consequently, there have been several designs for streaming-friendly AEAD, the most famous of which is Phil Rogaway’s STREAM construction.
Source: Rogaway’s paper
The downside to STREAM is that it requires an additional T bytes (e.g. 16 for a 128-bit authentication tag) for each chunk of the plaintext.
A similar solution, as implemented in the AWS Encryption SDK, is to carefully separate plaintexts into equal-sized frames and have special rules governing IV/nonce selection. This lets you facilitate random access while still making the security of the whole system easy to reason about.
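To make that concrete, here’s a minimal STREAM-flavored sketch (assuming the pyca/cryptography package; the chunk size and exact nonce layout are my illustrative choices, not a standard). Each chunk gets a counter-derived nonce, and the final chunk is flagged so truncation fails to authenticate:

```python
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

CHUNK_SIZE = 64 * 1024  # illustrative frame size

def stream_encrypt(key: bytes, plaintext: bytes) -> list[bytes]:
    aead = ChaCha20Poly1305(key)
    chunks = [plaintext[i:i + CHUNK_SIZE]
              for i in range(0, len(plaintext), CHUNK_SIZE)] or [b""]
    ciphertexts = []
    for i, chunk in enumerate(chunks):
        is_final = (i == len(chunks) - 1)
        # 96-bit nonce = 88-bit chunk counter || 8-bit final-chunk flag, so
        # reordered, truncated, or duplicated chunks fail to authenticate.
        nonce = i.to_bytes(11, "big") + bytes([int(is_final)])
        ciphertexts.append(aead.encrypt(nonce, chunk, None))
    return ciphertexts

def decrypt_chunk(key: bytes, ciphertext: bytes, index: int, is_final: bool) -> bytes:
    # Random access: decrypting chunk i only needs its index and position.
    nonce = index.to_bytes(11, "big") + bytes([int(is_final)])
    return ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)
```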
Can we do better than STREAM and message framing?
The most straightforward idea is to use a Merkle tree on the ciphertext with a stream cipher for extracting a distinct key for each leaf node. This can be applied to existing AEAD ciphertexts, out of band, to create a sort of deep authentication tag that can be used to authenticate any random subset of the message (provided you have the correct nonce/key).
However, I haven’t found the time to develop this idea into something that can be toyed with by myself and other researchers.
More Introductory Articles
Let’s face it:

Art by Riley
I’ve previously suggested an alternative strategy for programmers to learn cryptography. I’d like to do more posts covering introductory material for the topics I’m familiar with, so anyone who wants to actually employ my proposed strategy can carry themselves across the finish line.
Dissecting Dog-Whistles
Random fact: My fursona is a dhole–also known as a whistling dog.

Soatok is a dhole, not a fox. Art by Khia.
Coincidentally, I’m deeply fascinated by language, and planned to start a series analyzing dog-whistle language (especially the kind commonly used against queer subcultures).
However, the very nature of dog-whistle language provides a veneer of plausible deniability for the whistler’s intent, which makes it very difficult to address them in a meaningful way that doesn’t undermine your own credibility.
So, for the time being, this is on the back-burner.
Reader Questions
I’ve received quite a few questions via email and social media since I started this blog in April. The most obvious thing to do with these questions would be to periodically collate a bunch of them into a Questions and Answers style post.

However! I have an open source project called FAQ Off that is way more efficient at the Q&A format than a long-form blog post. If you’d like to see it in action, start here.
Art by Kyume
General Punditry
I make a lot of dumb jokes, typically involving puns and other wordplay. Most of these live in private Telegram conversations with other furries, but a few have leaked out onto Twitter over the years.

Is automated vulnerability scanning a nessusity?

— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) November 23, 2017
Nurse: "I suspect this patient attempted to shove a foreign object into their urethra for pleasure"
Doctor: "I believe your theory is sound"
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) June 25, 2018
A lot of them involve queer lingo.
People say it's lonely at the top.

No wonder there's so many bottoms in this fandom 😛
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) December 30, 2019
BitTorrent users are thirsty bottoms. Always complaining about wanting more seed.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) August 4, 2019
My RAID controller has big disk synergy
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) August 1, 2018
Some of them involve furry in-jokes.
Q) Why are foxes so prevalent in the furry fandom?

A) We're a sub-culture not a dom-culture.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) May 24, 2018
Intrusion detection systems are old hat. What we need is a protrusion detection system.
Introducing OwO
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) February 1, 2018
Some are just silly.
Using mined bitcoins to buy a pumpkin spice latte makes you an ASIC bitch, right?

— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) February 6, 2018
So in gay male furry culture if you give into a booty call from your ex-boyfriend… does that mean you were craving the XD?
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) August 25, 2018
If SQL is pronounced "sequel" then PHP must be pronounced "fap".
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) July 23, 2019
What do you call a submissive dragon with a mathematics background who's already lubed up for you?
A sliding scale.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) December 26, 2018
Did you hear about the clairvoyant babyfur that broke RSA?
Turns out, all you needed was a padding oracle.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) October 1, 2017
I should look for my next partner in a nuclear chemistry lab.
I hear they're good at dating.
— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) December 8, 2016
In my humble opinion, there haven’t been nearly enough puns on this blog (unless the embedded tweets above count).
Normally, this is where I’d proclaim, “I shall rectify this mistake” and proceed to make an ass out of myself, but I don’t like forced and obvious puns.
A lot of furries get this wrong: “Pawesome” is not clever, unless you’re talking to someone with a marsupial fursona. Then maybe.
The best puns come in two forms: They’re either so clever that you never saw it coming, or they’re just clever enough that the punchline lands at the same time you realized a bad pun was even possible.
Only Soatok brand puns are 100% whole groan

— Mastodon: soatok@furry.engineer, Cohost: soatok (@SoatokDhole) January 26, 2018
Miscellaneous / Meta
The past few blog posts touched a little on political subjects (especially How and Why America Was Hit So Hard By COVID-19, but this short-term trend actually started with my Pride Month post).

At some point in the future, I may write a post dedicated to politics, but for the time being, it’s not really a subject I care enough about in and of itself to emphasize all the time.
Let me be clear: Being gay in America is inherently political. Developing technology is inherently political (although you don’t always realize it). Being a gay technologist, saying something politically significant is an inevitability.
But I’m not interested in the traditional roles and narratives that infect politics and political discourse. Labels are stupid and I’m not interested in being a Useful Idiot for anyone’s propaganda.
The most difficult thing about writing blog posts for me is coming up with a meaningful title. I’ve lost many hours due to the writer’s block that ensues.
The second most difficult thing for me is writing closing statements that aren’t totally redundant.
https://www.youtube.com/embed/l44OV2jlN7A?start=665&feature=oembed
George Carlin – “Count the Superfluous Redundant Pleonastic Tautologies” – Skip to 11:05 if WordPress breaks something
Some bloggers like to sign off like they’re writing an email. “Happy hacking!” and whatnot. To me, this feels forced and inauthentic, like a bad pun.

So instead, here’s a totally sick piece of art I got from @MrJimmyDaFloof.
Furry artists are, like the rest of the fandom, amazing.
https://soatok.blog/2020/07/07/dont-forget-to-brush-your-fur/
Earlier this year, Cendyne published A Deep Dive into Ed25519 Signatures, which covered some of the different types of digital signature algorithms, but mostly delved into the Ed25519 algorithm. Truth in advertising.
This got me thinking, “Why isn’t there a better comparison of different elliptic curve signature algorithms available online?”
Art: LvJ
Most people just defer to SafeCurves, but it’s a little dated: We have complete addition formulas for Weierstrass curves now, but SafeCurves doesn’t reflect that.
For the purpose of simplicity, I’m not going to focus on a general treatment of Elliptic Curve Cryptography (ECC), which includes pairing-based cryptography, Elliptic-Curve Diffie-Hellman, and (arguably) isogeny cryptography.
Instead, I’m going to focus entirely on elliptic curve digital signature algorithms.
Note: The content of this post is a bit lower-level than most programmers ever need to be concerned with. If you’re a programmer and interested in learning cryptography, start here. If you’re looking for library recommendations, libsodium is a good safe default.
Compliance Rules Everything Around Me
If you have to meet some arbitrary compliance requirements (i.e. FIPS 140-3, CNSA, etc.), your decision is already made for you, and you shouldn’t waste your time reading blogs like this that will only get your hopes up about the options available to you.
Choose the option your compliance officer demands, and hope it’s good enough.
“Sure, let me check that box.”
Art: LvJ
Elliptic Curves for Signature Algorithms
Let’s start with the same curve Cendyne analyzed: Ed25519.
Ed25519 (EdDSA, Curve25519)
Ed25519 is one of the two digital signature algorithms today that use the EdDSA algorithm framework. The other is Ed448, which targets a higher security level (224-bit vs 128-bit) but is also slower and uses SHAKE256 (which is overkill and not great for performance).
Ed25519 is a safe default choice for most applications where a digital signature is appropriate, for many reasons:
- Ed25519 uses deterministic nonces, which means you’re severely unlikely to ever reproduce the Sony ECDSA k-reuse bug in your system. The deterministic nonce is calculated from the SHA-512 hash of the secret key and message. Two invocations to crypto_sign_ed25519() with the same message and secret key will produce the same signature, but the intermediate nonce value is never revealed to an attacker.
- Ed25519 includes the public key in the data hashed to produce the signature (more specifically, in computing s from the (R, s) pair). This offers a property that ECDSA lacks: Exclusive Ownership. I’ve written about this property before. Without Exclusive Ownership, it’s possible to create a single signature value that’s valid for multiple different (message, public key) pairs.
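The first bullet is easy to see for yourself with any libsodium binding (a quick sketch using PyNaCl):

```python
from nacl.signing import SigningKey  # PyNaCl, a libsodium binding

signing_key = SigningKey.generate()
message = b"attack at dawn"

sig1 = signing_key.sign(message).signature
sig2 = signing_key.sign(message).signature
assert sig1 == sig2  # deterministic: same key + message -> same signature

signing_key.verify_key.verify(message, sig1)  # raises BadSignatureError on forgery
```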
Years ago, there would have been an additional list item: Ed25519 uses Edwards curves, which have complete addition formulas and are therefore safer to implement in constant-time than Weierstrass curves (i.e. the NIST curves). However, we now have complete addition formulas for Weierstrass curves, so this has become a moot point (assuming your implementation uses complete addition formulas).
Ed25519 targets the 128-bit security level.
Why Not Use Ed25519?
There is one minor pitfall of Ed25519 that makes it unsuitable for esoteric uses (say, Ring Signature Schemes or zero-knowledge proofs): Ed25519 is not a prime-order group; it has a cofactor h = 8. This detail famously created a double-spend vulnerability in all CryptoNote-based cryptocurrencies (including Monero).
For systems that want the security of Ed25519 and its various well-studied implementations, but still need a prime-order group for their protocol, cryptographers have developed the Ristretto Group to meet your needs.
If you’re working on embedded systems, the determinism inherent to EdDSA might be undesirable due to the possibility of fault attacks. You can use a hedged variant of Ed25519 to mitigate this risk.
Additionally, Ed25519 is not approved for many government applications, although it did make the latest draft revision of FIPS 186 in 2019. If you care about compliance (see above), you cannot use Ed25519. Yet.
A niche Internet meme for cryptography engineers
Guidance for Ed25519
Unless legally prohibited, Ed25519 should be your default choice, unless you need a prime-order group. In that case, build your desired protocol atop Ristretto255.
If you’re not sure if you need a prime-order group, you probably don’t. It’s a specialized requirement for uncommon use cases (ring signatures, password authenticated key exchange protocols, zero-knowledge proofs, etc.).
Art: LvJ
The Bitcoin Curve (ECDSA, secp256k1)
Secp256k1 is a Koblitz curve, a special case of Weierstrass curves (y² = x³ + ax + b) that is more performant when used in binary fields; for secp256k1, the equation is y² = x³ + 7. This curve is almost exclusively used in cryptocurrency software.
There is no specified reason why Bitcoin chose secp256k1 over another elliptic curve at the time of its inception, but we can speculate:
The author was a pseudonymous contributor to the Metzdowd mailing list for cypherpunks, and probably didn’t trust the NIST curves. Since Ed25519 didn’t exist at the time, the only obvious choice for a hipster elliptic curve parameter selection was to rely on the SECG recommendations, which specify the NIST and Koblitz curves. If you cross the NIST curves off the list, only the Koblitz curves remained.
Therefore, the selection of secp256k1 is likely an artefact of computer history and not a compelling reason to select secp256k1 in new designs. Please look elsewhere.
Fact: Imgflip didn’t have a single secp256k1 meme until I made this one.
Secp256k1 targets the 128-bit security level.
Guidance for secp256k1
Don’t bother, there are better options. (i.e. Ed25519)
If you’re writing software for a cryptocurrency-related project, and you feel compelled to use secp256k1 for the sake of reducing your code footprint, please strongly consider the option of burning everything to the proverbial ground.
Cryptocurrency sucks!
Art: Swizz
Cryptocurrency Aside, Why Avoid Secp256k1?
As we noted above, secp256k1 isn’t widely used outside of cryptocurrency.
As a direct consequence of this (as we’ll discuss in the NIST P-256 section), most cryptography libraries don’t offer optimized, side-channel-resistant implementations of secp256k1, even if they do offer optimized implementations of NIST P-256.
(Meanwhile, Ed25519 is designed to be side-channel and misuse-resistant, partly due to its Schnorr construction and constant-time ladder for scalar multiplication, so any library that implements Ed25519 is overwhelmingly likely to be constant-time.)
Therefore, any secp256k1 library for most programming languages that isn’t an FFI wrapper for libsecp256k1 will have worse performance than the other 256-bit curves.
https://twitter.com/bascule/status/1320183684935290882
Additionally, secp256k1 implementations are often a source of exploitable side-channels that permit attackers to pilfer your secret keys.
The previously linked article was about BouncyCastle’s implementation (which covers Java and .NET), but there’s still plenty of secp256k1 implementations that don’t FFI libsecp256k1.
From a quick Google Search:
- Python (uses EEA rather than Binary GCD for modular inverse)
- Go (uses Numbers, which weren’t designed for cryptography)
- PHP (uses GMP, which isn’t constant-time)
- JavaScript (calls here, which uses bn.js, which isn’t constant-time)
If you’re using secp256k1, and you’re not basing your choice on cybercash-interop, you’re playing with fire at the implementation and ecosystem levels–even if there are no security problems with the Koblitz curve itself.
You are much better off choosing any different curve than secp256k1 if you don’t have a Bitcoin/Ethereum/etc. interoperability requirement.
“No thanks, I use Ed25519.”
Art: LvJ
NIST P-256 (ECDSA, secp256r1)
NIST P-256 is the go-to curve to use with ECDSA in the modern era. Unlike Ed25519, P-256 uses a prime-order group, and is an approved algorithm to use in FIPS-validated modules.
Most cryptography libraries offer optimized assembly implementations of NIST P-256, which makes it less likely that your signing operations will leak timing information or become a significant performance bottleneck.
P-256 targets the 128-bit security level.
Why Not Use P-256?
Once upon a time, P-256 was riskier than Ed25519 (for signatures) and X25519 (for Diffie-Hellman), due to the incomplete addition formulas that led to timing-leaky implementations.
If you’re running old software, you may still be vulnerable to timing attacks that can recover your ECDSA secret key. However, there is a good chance that you’re on a modern and secure implementation in 2022, especially if you’re outsourcing this to OpenSSL or its derivatives.
ECDSA requires a secure randomness source to sign data. If you don’t have one available, and you sign anything, you’re coughing up your secret key to any attacker capable of observing multiple signatures.
Guidance for P-256
P-256 is an acceptable choice, especially if you’re forced to cope with FIPS and/or the CNSA suite requirements when using cryptography.
Of course, if you can get away with Ed25519, use Ed25519 instead.
If you use P-256, make sure you’re using it with SHA-256. Some implementations may default to something weaker (e.g. SHA-1).
If you’re also going to be performing ECDH with P-256, make sure you use compressed points. There used to be a patent; it died in 2018.
If you can afford it, make sure you use deterministic ECDSA (RFC 6979) or hedged signatures (if fault attacks are relevant to your threat model).
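Putting that guidance together, here’s a minimal sketch with the pyca/cryptography library (which delegates to OpenSSL, so nonce generation is handled for you; check whether your library version also exposes RFC 6979 as an option):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

secret_key = ec.generate_private_key(ec.SECP256R1())  # NIST P-256
message = b"example message"

signature = secret_key.sign(message, ec.ECDSA(hashes.SHA256()))  # DER-encoded (r, s)
secret_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
```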
Art: LvJ
NIST P-384 (ECDSA, secp384r1)
NIST P-384 has a larger field than the curves we’ve previously examined, which allows P-384 to target the 192-bit security level. That’s the primary reason why anyone would choose P-384.
Naturally, elliptic curve security is more complicated than merely security against the Elliptic Curve Discrete Logarithm Problem (ECDLP).
P-384 is most often paired with SHA-384, which is the most widely used flavor of the SHA-2 family hash functions that isn’t susceptible to length-extension attacks. (Truncated SHA-512 variants, such as SHA-512/256, were also specified later; SHA-384 is itself truncated SHA-512 under the hood.)
If you’re aiming to build a “secure-by-default” tool for a system that the US government might one day become a customer of, with minimal cryptographic primitive choice, using NIST P-384 with SHA-384 makes for a reasonably minimalistic bundle.
Why Not Use P-384?
Unlike P-256, most P-384 implementations don’t use constant-time, optimized, and/or formally verified assembly code. (Notable counter-examples: AWS-LC and Go x/crypto.)
Like P-256, P-384 also requires a secure randomness source to sign data. If you aren’t providing one, expect your signing key to end up on fail0verflow one day.
Guidance for P-384
If you use P-384, make sure you’re using it with SHA-384.
The standard NIST curve advice of RFC 6979 and point compression and/or hedged signatures applies here too.
Art: Kyume
NIST P-521 (ECDSA, secp521r1)
Biggest curve is best curve! — the clueless
https://www.youtube.com/watch?v=i_APoSfCYwU
Systems that choose P-521 often have an interesting threat model, even though said threat model is rarely formally specified.
It’s overwhelmingly likely that whatever eventually breaks the 256-bit elliptic curves will also break P-521 in short order: Cryptographically Relevant Quantum Computers (CRQCs).

The only thing P-521 does against CRQCs that P-256 doesn’t is require more quantum memory. If you’re worried about CRQCs, you might want to look into hybrid post-quantum signature schemes.
If you’re choosing P-521 in your designs, you’re basically saying, “I want to have 256 bits of asymmetric cryptographic security, come hell or high water!” even though the 128-bit security level is likely just fine for your actual threats.
Aside: P-521 and 512-bit ECC Security
P-521 is not a typo, although people sometimes think it is. P-521 uses the Mersenne prime 2^521 - 1 instead of a 512-bit near-Mersenne prime.
This has led to an unfortunate trend in cryptography media of mapping ECC key sizes to symmetric security levels in a way that misleads people about the relationship between the two.

Plotting ECC key size against equivalent symmetric security isn’t how ECDLP security works: the security exponent is simply half the curve-size exponent, a totally linear relationship; it doesn’t suddenly change beyond 384-bit curves for some mysterious mathematical reason.
- 256-bit Curves target the 128-bit security level
- 384-bit Curves target the 192-bit security level
- 512-bit Curves target the 256-bit security level
- 521-bit Curves actually target the 260-bit security level, but that meets or exceeds the 256-bit security level, so that’s how the standards are interpreted
The reason for this boils down entirely to the best attack against the Elliptic Curve Discrete Logarithm Problem: Pollard’s Rho, which recovers the secret key from an n-bit public key (which has a 2^n search space) in roughly 2^(n/2) guesses.

Taking the square root of a number is the same as halving its exponent, so the security level is half the curve size: sqrt(2^n) = 2^(n/2). For P-521, that’s 2^(521/2), i.e. roughly 2^260.
Takeaway: If someone tells you that you need a 521-bit curve to meet the 256-bit security level, they are mistaken and it’s not their fault.
Art: Harubaki
Why Not Use P-521?
It’s slow. Much slower than P-256 and Ed25519. Modestly slower than P-384.
Unlike P-384, you’re less likely to find an optimized, constant-time P-521 implementation.
Guidance for P-521
First, make a concerted effort to figure out the motivation for P-521 in your designs. Chances are, someone is putting too much emphasis on the wrong things for security.
If you use P-521, make sure you’re using it with SHA-512.
The standard NIST curve advice of RFC 6979 and point compression and/or hedged signatures applies here too.
Art: LvJ
Ed448 (EdDSA, Curve448)
Ed448 is the P-521 of the Edwards curves: It mostly exists to give standards committees a psychological comfort for the unlikely event that 256-bit ECC is desperately broken but ECC larger than 384 bits is somehow still safe.
https://twitter.com/dchest/status/703017144053833728
The very concept of having multiple “security levels” for raw cryptography primitives is mostly an artefact of the historical military roots of cryptography, rather than a serious consideration in the modern world.
Unfortunately, this leads to implementations that prioritize runtime algorithm negotiation, which maximizes the risk of protocol-level vulnerabilities. See also: JWT.
Ed448 was specified to use SHAKE256, a needlessly conservative decision that creates an unnecessary performance bottleneck.
Why Not Use Ed448?
Aside from the performance hit mentioned previously, there’s no compelling reason to avoid Ed448 that isn’t also true of either Ed25519 or P-384.
Guidance for Ed448
If you want more speed, go with Ed25519. In addition to being faster, Ed25519 is also very widely supported.
If you need a prime-order group, use Decaf with Ed448, or consider P-384.
The Brainpool Curves
The main motivation for the Brainpool curves is that the NIST curves were not generated in a “verifiable pseudo-random way”.
The only reasons you’d ever want to support the Brainpool curves include:
- You think the NIST curves are somehow backdoored by the NSA
- You don’t appreciate small attack surfaces in cryptography libraries
- The German government told you to (see: compliance)
Most of the advice for the NIST Curves at each security level can be copy/pasted for the Brainpool curves, with one important caveat:
When considering real-world implementations, Brainpool curves are more likely than the NIST curves to be implemented with general-purpose big-number routines (which aren’t always constant-time) rather than optimized assembly code.
Therefore, my general guidance for the Brainpool curves is simply:
- Proceed at your own peril
- Consider hiring a cryptography engineer to study the implementation you’re relying on, especially with regard to timing attacks
Me when I hear “brainpool”
Art: LvJ
Re-Examining the SafeCurves Criteria
Here’s a 2022 refresh of the SafeCurves criteria for all of the curves considered by this blog post.
| SafeCurve Criteria | Relevance to the Curves Listed Above |
|---|---|
| Fields | All relevant curves satisfy the requirements |
| Equations | All relevant curves satisfy the requirements |
| Base Points | All relevant curves satisfy the requirements |
| Rho | All relevant curves satisfy the requirements |
| Transfers | All relevant curves satisfy the requirements |
| Discriminants | Only secp256k1 doesn’t satisfy the requirements (out of the curves listed in this blog post) |
| Rigidity | The NIST curves do not meet this requirement. If you care about whether or not the standards were manipulated to insert a backdoor, rigidity matters to you. Otherwise, it’s not a deal-breaker. |
| Ladders | While a Montgomery ladder is beneficial for speed and implementation security, it isn’t, strictly speaking, required. This is an icing-on-the-cake consideration. |
| Twists | The only curve listed above that doesn’t meet the requirement is the 256-bit Brainpool curve (brainpoolP256t1). |
| Completeness | All relevant curves satisfy the requirements, as of 2015. SafeCurves is out of date here. |
| Indistinguishability | All relevant curves satisfy the requirements, as of 2014. |
SafeCurves continues to be a useful resource, especially if you stray from the guidance on this page.
For example: You wouldn’t want to use pairing-friendly curves for general-purpose ECC digital signatures, because they’re intended for specialized problems. SafeCurves correctly recommends not using BN(2,254).

However, SafeCurves is showing its age in 2022. BN curves still end up in digital signature protocol standards even though BLS12-381 is clearly a better choice.

The Internet would benefit greatly from an updated SafeCurves that focuses on newer elliptic curve algorithms.
Art: Scruff
TL;DR
Ed25519 is great. NIST P-256 and P-384 are okay (with caveats). Anything else is questionable, and their parameter selection should come with a clear justification.
https://soatok.blog/2022/05/19/guidance-for-choosing-an-elliptic-curve-signature-algorithm-in-2022/
#asymmetricCryptography #BrainpoolCurves #cryptography #digitalSignatureAlgorithm #ECDSA #Ed25519 #Ed448 #EdDSA #ellipticCurveCryptography #P256 #P384 #P521 #secp256k1 #secp256r1 #secp384r1 #secp521r1 #SecurityGuidance
A question I get asked frequently is, “How did you learn cryptography?”

I could certainly tell everyone my history as a self-taught programmer who discovered cryptography when, after my website for my indie game projects kept getting hacked, I was introduced to cryptographic hash functions… but I suspect the question folks want answered is, “How would you recommend I learn cryptography?” rather than my cautionary tale about a poorly-implemented password hash being a gateway bug.
The Traditional Ways to Learn
There are two traditional ways to learn cryptography.

If you want a book to augment your journey in either traditional path, I recommend Serious Cryptography by Jean-Philippe Aumasson.
Academic Cryptography
The traditional academic way to learn cryptography involves a lot of self-study about number theory, linear algebra, discrete mathematics, probability, permutations, and field theory.

You’d typically start off with classical ciphers (Caesar, etc.), then work your way through the history of ciphers until you finally reach an introduction to the math underpinning RSA and Diffie-Hellman, and maybe get taught about Schneier’s Law and cautioned to only use AES and SHA-2… and then you’re left to your own devices unless you pursue a degree in cryptography.
The end result of people carelessly exploring this path is a lot of designs like Telegram’s MTProto, which do stupid things with exotic block cipher modes and misuse vanilla cryptographic hash functions as message authentication codes; often with textbook (a.k.a. unpadded) RSA; AES in ECB, CBC, or some rarely-used mode that the author had to write custom code to handle (using ECB mode under the hood); and (until recently) SHA-1.
People who decide to pursue cryptography as a serious academic discipline will not make these mistakes. They’re far too well-trained to fall for the common ones. Instead, they run the risk of spending years involved in esoteric research about homomorphic encryption, cryptographic pairings, and other cool stuff that might not see real-world deployment (outside of novel cryptocurrency hobby projects) for five or more years.
That is to say: Academia is a valid path to pursue, but it’s not for everyone.
If you want to explore this path, Cryptography I by Dan Boneh is a great starting point.
Security Industry-Driven Cryptography
The other traditional way to learn cryptography is to break existing cryptography implementations. This isn’t always as difficult as it may sound: Reverse engineering video games to defeat anti-cheat protections has led several of my friends into learning about cryptography.

For security-minded folks, the best place to start is the CryptoPals challenges. Another alternative is CryptoHack.
There are also plenty of CTF events all year around, but they’re rarely a good cryptography learning exercise above what CryptoPals offers. (Though there are notable exceptions.)
A Practical Approach to Learning Cryptography
Art by Kyume.

If you’re coming from a computer programming background and want to learn cryptography, the traditional approaches carry the risk of Reasoning By Lego.
Instead, the approach I recommend is to start gaining experience with the safest, highest-level libraries and then slowly work your way down into the details.
This approach has two benefits:
- If you have to implement something while you’re still learning, your knowledge and experience are tilted towards “use something safe and secure”, not “hack together something with Blowfish in ECB mode and MD5 because they’re familiar”.
- You can let your own curiosity guide your education rather than follow someone else’s study guide.
To illustrate what this looks like, here’s how a JavaScript developer might approach learning cryptography, starting from the most easy-mode library and drilling down into specifics.
Super Easy Mode: DholeCrypto
Disclaimer: This is my project.

Dhole Crypto is an open source library, implemented in JavaScript and PHP and powered by libsodium, that tries to make security as easy as possible.
I designed Dhole Crypto for securing my own projects without increasing the cognitive load of anyone reviewing my code.
If you’re an experienced programmer, you should be able to successfully use Dhole Crypto in a Node.js/PHP project. If it does not come easy, that is a bug that should be fixed immediately.
Easy Mode: Libsodium
Using libsodium is slightly more involved than Dhole Crypto: Now you have to know what a nonce is, and take care to manage them carefully.

Advantage: Your code will be faster than if you used Dhole Crypto.
Libsodium is still pretty easy. If you use this cheat sheet, you can implement something secure without much effort. If you deviate from the cheat sheet, pay careful attention to the documentation.
If you’re writing system software (i.e. programming in C), libsodium is an incredibly easy-to-use library.
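For a sense of what “easy mode” looks like in practice, here’s a small sketch using PyNaCl (a libsodium binding); note that even here, the library picks and prepends the nonce for you:

```python
from nacl.secret import SecretBox
from nacl.utils import random as randombytes

key = randombytes(SecretBox.KEY_SIZE)  # 32 random bytes
box = SecretBox(key)

ciphertext = box.encrypt(b"attack at dawn")  # nonce chosen and prepended for you
assert box.decrypt(ciphertext) == b"attack at dawn"
```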
Moderate Difficulty: Implementing Protocols
Let’s say you’re working on a project where libsodium is overkill, and you only need a few cryptography primitives and constructions (e.g. XChaCha20-Poly1305). A good example: In-browser JavaScript.

Instead of forcing your users to download the entire Sodium library, you might opt to implement a compatible construction using JavaScript implementations of these primitives.
Since you have trusted implementations to test your construction against, this should be a comparatively low-risk effort (assuming the primitive implementations are also secure), but it’s not one that should be undertaken without all of the prior experience.
Note: At this stage you are not implementing the primitives, just using them.
Hard Difficulty: Designing Protocols and Constructions
Repeat after me: “I will not roll my own crypto before I’m ready.” Art by AtlasInu.

To distinguish: TLS and Noise are protocols. AES-GCM and XChaCha20-Poly1305 are constructions.
Once you’ve implemented protocols and constructions, the next step in your self-education is to design new ones.
Maybe you want to combine XChaCha20 with a MAC based on the BLAKE3 hash function, with some sort of SIV to make the whole shebang nonce-misuse resistant?
You wouldn’t want to dive headfirst into cryptography protocol/construction design without all of the prior experience.
Very Hard Mode: Implementing Cryptographic Primitives
It’s not so much that cryptography primitives are hard to implement. You could fit RC4 in a tweet before they raised the character limit to 280. (Don’t use RC4 though!)

The hard part is that they’re hard to implement securely. See also: LadderLeak.
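To prove the point about RC4’s size (and only to prove that point; it’s a broken cipher, do not use it), here’s the whole thing in a handful of lines of Python:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```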
Usually when you get to this stage in your education, you will have also picked up one or both of the traditional paths to augment your understanding. If not, you really should.
Nightmare Mode: Designing Cryptography Primitives
A lot of people like to dive straight into this stage early in their education. This usually ends in tears.

If you’ve mastered every step in my prescribed outline and pursued both of the traditional paths to the point that you have a novel published attack in a peer-reviewed journal (and mirrored on ePrint), then you’re probably ready for this stage.
Bonus: If you’re a furry and you become a cryptography expert, you can call yourself a cryptografur. If you had no other reason to learn cryptography, do it just for pun!
Header art by circuitslime.
https://soatok.blog/2020/06/10/how-to-learn-cryptography-as-a-programmer/
#cryptography #education #programming #Technology
Let’s talk about digital signature algorithms.
Digital signature algorithms are one of the coolest ideas to come out of asymmetric (a.k.a. public-key) cryptography, but they’re so simple and straightforward that most cryptography nerds don’t spend a lot of time thinking about them.
Even though you are more likely to run into a digital signature as a building block (e.g. certificate signatures in TLS) than think about them in isolation (e.g. secure software releases), they’re still really cool and worth learning about.
What’s a Digital Signature?
A digital signature is some string that proves that a specific message was signed by some specific entity in possession of the secret half of an asymmetric key-pair. Digital Signature Algorithms define the process for securely signing and verifying messages with their associated signatures.
For example, if I have the following keypair:

- Secret key: 9080a2c7897faeb8526968161695da0f7b3afa2e8e7d8e8369a85547ab48ea05
- Public key: 482b8d3430445cdad6b5ce59778e09ab59d099120f32d316e881db1a6330390b

I can cryptographically sign the message “Dhole Moments: Never a dull moment!” with the above secret key, and it will generate the signature string: 63629779a31b623486145359c6f1d56602d8d9135e4b17fa2ae3667c8947397decd7ae01bfed08645a429f5dee906e87df4e18eefdfff9acb5b1488c9dec800f.
If you only have the message, signature string, and my public key, you can verify that I signed the message. But, very crucially, you cannot sign messages and convince someone else that they came from me. (With symmetric authentication schemes, such as HMAC, you can.)
A digital signature algorithm is considered secure if, in order for anyone else to pass off a different message as being signed by me, they would need my secret key to succeed. When this assumption holds true, we say the scheme is secure against existential forgery attacks.
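The 32-byte keys and 64-byte signature above are consistent with Ed25519. Here’s what that sign/verify flow looks like with PyNaCl (a libsodium binding), using a freshly generated keypair:

```python
from nacl.signing import SigningKey, VerifyKey

secret_key = SigningKey.generate()
public_key_bytes = secret_key.verify_key.encode()  # safe to publish

message = b"Dhole Moments: Never a dull moment!"
signature = secret_key.sign(message).signature

# Anyone holding only (message, signature, public key) can verify:
VerifyKey(public_key_bytes).verify(message, signature)  # raises if forged
```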
How Do Digital Signatures Work?
Simple answer: They generally combine a cryptographic hash function (e.g. SHA-256) with some asymmetric operation, and the details beyond that are all magic.
More complicated answer: That depends entirely on the algorithm in question!
Art by Swizz
For example, with RSA signatures, you actually encrypt a hash of the message with your secret key to sign the message, and then you RSA-decrypt it with your public key to verify the signature. This is backwards from RSA encryption (where you do the totally sane thing: encrypt with public key, decrypt with secret key).
In contrast, with ECDSA signatures, you’re doing point arithmetic over an elliptic curve (with a per-signature random value).
Yet another class of digital signature algorithms are hash-based signatures, such as SPHINCS+ from the NIST Post-Quantum Cryptography Standardization effort, wherein your internals consist entirely of hash functions (and trees of hash functions, and stream ciphers built with other hash functions).
In all cases, the fundamental principle stays the same: You sign a message with a secret key, and can verify it with a public key.
In the interest of time, I’m not going to dive deep into how each signature algorithm works. That can be the subject of future blog posts (one for each of the algorithms in question).
Quick aside: Cryptographers who stumble across my blog might notice that I deviate from convention a bit. They typically refer to the sensitive half of an asymmetric key pair as a “private key”, but I instead call it a “secret key”.
The main reason for this is that “secret key” can be abbreviated as “sk” and “public key” can be abbreviated as “pk”, whereas private/public doesn’t share this convenience. If you ever come across their writings and wonder about this discrepancy: I’m the one breaking away from the norm, and their way is the orthodoxy.
What Algorithms Should I Use?
What algorithm, indeed! (Art by circuitslime)
If you find yourself asking this question, you’re probably dangerously close to rolling your own crypto. If so, you’ll want to hire a cryptographer to make sure your designs aren’t insecure. (It’s extremely easy to design or implement otherwise-secure cryptography in an insecure way.)
Recommended Digital Signature Algorithms
(Update, 2022-05-19): I’ve published a more in-depth treatment of the Elliptic Curve Digital Signature Algorithms a few years after this post was created. A lot of the topics covered by EdDSA and ECDSA are focused on there.
EdDSA: Edwards Curve DSA
EdDSA comes in two variants: Ed25519 (widely supported in a lot of libraries and protocols) and Ed448 (higher security level, but not implemented or supported in as many places).
The IETF standardized EdDSA in RFC 8032, in an effort related to the standardization of RFC 7748 (titled: Elliptic Curves for Security).
Formally, EdDSA is derived from Schnorr signatures and defined over Edwards curves. EdDSA’s design was motivated by the real-world security failures of ECDSA:
- Whereas ECDSA requires a per-signature secret number (k) to protect the secret key, EdDSA derives the per-signature nonce deterministically from a hash of the secret key and message.
- ECDSA with biased nonces can also leak your secret key through lattice attacks. To side-step this, EdDSA uses a hash function twice the size of the prime (i.e. SHA-512 for Ed25519), which guarantees that the distribution of the output of the modular reduction is unbiased (assuming uniform random inputs).
- ECDSA implemented over the NIST curves is difficult to implement in constant-time: complicated point arithmetic rules, point division, etc. EdDSA only uses operations that are easy to implement in constant-time.
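Concretely, Ed25519’s deterministic flow (per RFC 8032, with B the base point, ℓ the group order, a the secret scalar, A the public key, and prefix the second half of the hashed seed) looks like:

```latex
\begin{aligned}
r &= \operatorname{SHA512}(\mathit{prefix} \parallel M) \bmod \ell \\
R &= rB \\
S &= \left(r + \operatorname{SHA512}(R \parallel A \parallel M) \cdot a\right) \bmod \ell
\end{aligned}
```

No per-signature randomness means no nonce to bias or reuse.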
For a real-world example of why EdDSA is better than ECDSA, look no further than the Minerva attacks, and the Ed25519 designer’s notes on why EdDSA stood up to the attacks.
The security benefits of EdDSA over ECDSA are so vast that FIPS 186-5 is going to include Ed25519 and Ed448.
Hooray for EdDSA adoption even in federal hellscapes.
This is kind of a big deal! The FIPS standards are notoriously slow-moving, and they’re deeply committed to a sunk cost fallacy on algorithms they previously deemed acceptable for real-world deployment.
RFC 6979: Deterministic ECDSA
Despite EdDSA being superior to ECDSA in virtually every way (performance, security, misuse-resistance), a lot of systems still require ECDSA support for the foreseeable future.
If ECDSA is here to stay, we might as well make it suck less in real-world deployments. And that’s exactly what Thomas Pornin did when he wrote RFC 6979: Deterministic Usage of DSA and ECDSA.
(Like EdDSA, Deterministic ECDSA is on its way to FIPS 186-5. Look for it in FIPS-compliant hardware 5 years from now when people actually bother to update their implementations.)
Acceptable Digital Signature Algorithms
ECDSA Signatures
The Elliptic Curve Digital Signature Algorithm (ECDSA) is the incumbent design for signatures. Unlike EdDSA, ECDSA is a more flexible design that has been applied to many different types of curves.
This is more of a curse than a blessing, as Microsoft discovered with CVE-2020-0601: You could take an existing (signature, public key) pair for a standard curve, explicitly set the generator point equal to the victim’s public key, set your secret key to 1, and Windows’s cryptography library would think, “This is fine.”
For this reason, cryptographers were generally wary of proposals to add support for Koblitz curves (including secp256k1–the Bitcoin curve) or Brainpool curves into protocols that are totally fine with NIST P-256 (and maybe NIST P-384 if you need it for compliance reasons).
So if you can’t use EdDSA or RFC 6979, your fallback option is ECDSA with one of those two curves (secp256r1, secp384r1), plus access to a reliable cryptographic random number generator.
RSA Signatures
It’s high time the world stopped using RSA.
Not just for the reasons that Trail of Bits is arguing (which I happen to agree with), but more importantly:
Replacing RSA with EdDSA (or Deterministic ECDSA) also gives teams an opportunity to practice migrating from one cryptography algorithm suite to another, which will probably be a much-needed experience when quantum computers come along and we’re all forced to migrate to post-quantum cryptography.
Encryption is at a bigger risk of being broken by quantum computers than signature schemes are: If you encrypt data today, a quantum computer 20 years down the line can decrypt it immediately. Conversely, signatures made today cannot be forged until after a quantum computer actually exists.
That being said, if you only need signatures and not encryption, RSA is still acceptable. If you also need encryption, don’t use RSA for that purpose.
If you can, use PSS padding rather than PKCS#1 v1.5 padding, with SHA-256 or SHA-384. But for signatures (i.e. not encryption), PKCS#1 v1.5 padding is fine.
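Here’s a hedged sketch of RSA-PSS signing with the Python cryptography package, which exposes PSS directly:

```python
# Sketch: RSA-PSS with SHA-256 via the `cryptography` package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

pss = padding.PSS(
    mgf=padding.MGF1(hashes.SHA256()),
    salt_length=padding.PSS.MAX_LENGTH,
)
signature = private_key.sign(b"example message", pss, hashes.SHA256())

# Raises cryptography.exceptions.InvalidSignature on failure:
private_key.public_key().verify(signature, b"example message", pss, hashes.SHA256())
```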
Dishonorable Mention
DSA Signatures
There’s really no point in using classical DSA, when ECDSA is widely supported and has more ongoing attention from cryptography experts.
If you’re designing a system in 2020 that uses DSA, my only question for you is…
WHYYYYYY?! (Art by Khia)
Upcoming Signature Algorithms
Although it is far too early to consider adopting these yet, cryptographers are working on new designs that protect against wider ranges of real-world threats.
Let’s briefly look at some of them and speculate wildly about what the future looks like. For fun. Don’t use these yet, unless you have a very good reason to do so.
Digital Signature Research Topics
Hedged Signatures
Above, we concluded that EdDSA and Deterministic ECDSA were generally the best choice (and what I’d recommend for software developers). There is one important caveat: Fault attacks.
A fault attack is when you induce a hardware fault into a computer chip, and thereby interfere with the correct functioning of a cryptography algorithm. This is especially relevant to embedded devices and IoT.
The IETF’s CFRG is investigating the use of additional randomization of messages (rather than randomizing signatures) as a safeguard against leaking secret keys through fault injection.
Of course, the Dhole Cryptography Library (my libsodium wrapper for JavaScript and PHP) already provides a form of Hedged Signatures.
If this technique is proven successful at mitigating fault injection attacks, then libsodium users will be able to follow the technique outlined in Dhole Crypto to safeguard their own protocols against fault attacks. Until then, they’re at least as safe as deterministic EdDSA today.
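To make the idea concrete, here’s a toy sketch of message randomization atop Ed25519 with PyNaCl. This illustrates the general technique only; it is not the actual Dhole Crypto API, and the hedged_sign/hedged_verify names are hypothetical:

```python
# Toy sketch of the hedged-signature idea: randomize the *message* so
# that two signing calls never operate on identical inputs, even with a
# deterministic scheme like Ed25519. (NOT the Dhole Crypto API.)
import os

from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey, VerifyKey

def hedged_sign(sk: SigningKey, message: bytes) -> bytes:
    r = os.urandom(32)                     # fresh per-signature randomness
    sig = sk.sign(r + message).signature   # deterministic over (r || m)
    return r + sig                         # the verifier needs r as well

def hedged_verify(vk: VerifyKey, message: bytes, hedged: bytes) -> bool:
    r, sig = hedged[:32], hedged[32:]
    try:
        vk.verify(r + message, sig)
        return True
    except BadSignatureError:
        return False
```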
Threshold ECDSA Signatures
Suppose you have a scenario where you want 3-or-more people to have to sign a message before it’s valid. That’s exactly what Threshold ECDSA with Fast Trustless Setup aspires to provide.
Although this is mostly being implemented in cryptocurrency projects today, the cryptography underpinnings are fascinating. At worst, this will be one good side-effect to come from blockchain mania.
Post-Quantum Digital Signatures
Hash-Based Signatures
The best hash-based signature schemes are based on the SPHINCS design for one simple reason: It’s stateless.
In earlier hash-based digital signatures, such as XMSS, you have to maintain a state of which keys you’ve already used, to prevent attacks. Google’s Adam Langley previously described this as a “huge foot-cannon” for security (although probably okay in some environments, such as an HSM).
Lattice-Based Signatures
There are a lot of post-quantum signature algorithm designs defined over lattice groups, but my favorite lattice-based design is called FALCON. FALCON stands for FAst-Fourier Lattice-based COmpact Signatures Over NTRU.
Sign Here, Please
Who knew there would be so much complexity involved with such a simple cryptographic operation? And we didn’t even dive deep on how any of them work.
That’s the problem with cryptography: It’s a fractal of complexity. The more you know about these topics, the deeper the complexity becomes.
But if you’re implementing a protocol today and need a digital signature algorithm, use (in order of preference):
- Ed25519 or Ed448
- ECDSA over NIST P-256 or P-384, with RFC 6979
- ECDSA over NIST P-256 or P-384, without RFC 6979
- RSA (as a last resort)
But most importantly: make sure you have a cryptographer audit your designs.
(Header art by Kyume.)
https://soatok.blog/2020/04/26/a-furrys-guide-to-digital-signature-algorithms/
#crypto #cryptography #DeterministicSignatures #digitalSignatureAlgorithm #ECDSA #Ed25519 #Ed448 #EdDSA #FIPS #FIPS186 #FIPSCompliance #RFC6979 #SecurityGuidance
Earlier this year, Cendyne published A Deep Dive into Ed25519 Signatures, which covered some of the different types of digital signature algorithms, but mostly delved into the Ed25519 algorithm. Truth in advertising.

This got me thinking, “Why isn’t there a better comparison of different elliptic curve signature algorithms available online?”
Art: LvJ
Most people just defer to SafeCurves, but it’s a little dated: We have complete addition formulas for Weierstrass curves now, but SafeCurves doesn’t reflect that.
For the purpose of simplicity, I’m not going to focus on a general treatment of Elliptic Curve Cryptography (ECC), which includes pairing-based cryptography, Elliptic-Curve Diffie-Hellman, and (arguably) isogeny cryptography.
Instead, I’m going to focus entirely on elliptic curve digital signature algorithms.
Note: The content of this post is a bit lower-level than most programmers ever need to be concerned with. If you’re a programmer and interested in learning cryptography, start here. If you’re looking for library recommendations, libsodium is a good safe default.
Compliance Rules Everything Around Me
If you have to meet some arbitrary compliance requirements (i.e. FIPS 140-3, CNSA, etc.), your decision is already made for you, and you shouldn’t waste your time reading blogs like this that will only get your hopes up about the options available to you.

Choose the option your compliance officer demands, and hope it’s good enough.
“Sure, let me check that box.”
Art: LvJ

Elliptic Curves for Signature Algorithms
Let’s start with the same curve Cendyne analyzed: Ed25519.

Ed25519 (EdDSA, Curve25519)
Ed25519 is one of the two digital signature algorithms today that use the EdDSA algorithm framework. The other is Ed448, which targets a higher security level (224-bit vs 128-bit) but is also slower and uses SHAKE256 (which is overkill and not great for performance).

Ed25519 is a safe default choice for most applications where a digital signature is appropriate, for many reasons:
- Ed25519 uses deterministic nonces, which means you’re severely unlikely to ever reproduce the Sony ECDSA k-reuse bug in your system.
The deterministic nonce is calculated from the SHA-512 hash of the secret key and message. Two invocations to crypto_sign_ed25519() with the same message and secret key will produce the same signature, but the intermediate nonce value is never revealed to an attacker.
- Ed25519 includes the public key in the data hashed to produce the signature (more specifically, in computing s from the (R, s) pair). This offers a property that ECDSA lacks: Exclusive Ownership. I’ve written about this property before.
Without Exclusive Ownership, it’s possible to create a single signature value that’s valid for multiple different (message, public key) pairs.

Years ago, there would have been an additional list item: Ed25519 uses Edwards curves, which have complete addition formulas and are therefore safer to implement in constant-time than Weierstrass curves (i.e. the NIST curves). However, we now have complete addition formulas for Weierstrass curves, so this has become a moot point (assuming your implementation uses complete addition formulas).
Ed25519 targets the 128-bit security level.
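You can see the deterministic-nonce property for yourself with a few lines of PyNaCl (assuming the pynacl package is installed):

```python
# Demonstrating Ed25519's deterministic nonce with PyNaCl:
from nacl.signing import SigningKey

sk = SigningKey.generate()
sig1 = sk.sign(b"attack at dawn").signature
sig2 = sk.sign(b"attack at dawn").signature
assert sig1 == sig2  # same key + same message -> same signature, every time
```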
Why Not Use Ed25519?
There is one minor pitfall of Ed25519 that makes it unsuitable for esoteric uses (say, Ring Signature Schemes or zero-knowledge proofs): Ed25519 is not a prime-order group; it has a cofactor h = 8. This detail famously created a double-spend vulnerability in all CryptoNote-based cryptocurrencies (including Monero).

For systems that want the security of Ed25519 and its various well-studied implementations, but still need a prime-order group for their protocol, cryptographers have developed the Ristretto Group to meet your needs.
If you’re working on embedded systems, the determinism inherent to EdDSA might be undesirable due to the possibility of fault attacks. You can use a hedged variant of Ed25519 to mitigate this risk.
Additionally, Ed25519 is not approved for many government applications, although it did make the latest draft revision of FIPS 186 in 2019. If you care about compliance (see above), you cannot use Ed25519. Yet.
A niche Internet meme for cryptography engineers
Guidance for Ed25519
Unless legally prohibited, Ed25519 should be your default choice, unless you need a prime-order group. In that case, build your desired protocol atop Ristretto255.

If you’re not sure if you need a prime-order group, you probably don’t. It’s a specialized requirement for uncommon use cases (ring signatures, password authenticated key exchange protocols, zero-knowledge proofs, etc.).
Art: LvJ
The Bitcoin Curve (ECDSA, secp256k1)
Secp256k1 is a Koblitz curve, a special case of Weierstrass curves (y^2 = x^3 + ax + b) that are more performant when used in binary fields. This curve is almost exclusively used in cryptocurrency software.

There is no specified reason why Bitcoin chose secp256k1 over another elliptic curve at the time of its inception, but we can speculate:
The author was a pseudonymous contributor to the Metzdowd mailing list for cypherpunks, and probably didn’t trust the NIST curves. Since Ed25519 didn’t exist at the time, the only obvious choice for a hipster elliptic curve parameter selection was to rely on the SECG recommendations, which specify the NIST and Koblitz curves. If you cross the NIST curves off the list, only the Koblitz curves remained.
Therefore, the selection of secp256k1 is likely an artefact of computer history and not a compelling reason to select secp256k1 in new designs. Please look elsewhere.
Fact: Imgflip didn’t have a single secp256k1 meme until I made this one.
Secp256k1 targets the 128-bit security level.
Guidance for secp256k1
Don’t bother, there are better options (i.e. Ed25519).

If you’re writing software for a cryptocurrency-related project, and you feel compelled to use secp256k1 for the sake of reducing your code footprint, please strongly consider the option of burning everything to the proverbial ground.
Cryptocurrency sucks!
Art: Swizz

Cryptocurrency Aside, Why Avoid Secp256k1?
As we noted above, secp256k1 isn’t widely used outside of cryptocurrency.

As a direct consequence of this (as we’ll discuss in the NIST P-256 section), most cryptography libraries don’t offer optimized, side-channel-resistant implementations of secp256k1, even if they do offer optimized implementations of NIST P-256.
(Meanwhile, Ed25519 is designed to be side-channel and misuse-resistant, partly due to its Schnorr construction and constant-time ladder for scalar multiplication, so any library that implements Ed25519 is overwhelmingly likely to be constant-time.)
Therefore, any secp256k1 library for most programming languages that isn’t an FFI wrapper for libsecp256k1 will have worse performance than the other 256-bit curves.
https://twitter.com/bascule/status/1320183684935290882
Additionally, secp256k1 implementations are often a source of exploitable side-channels that permit attackers to pilfer your secret keys.
The previously linked article was about BouncyCastle’s implementation (which covers Java and .NET), but there’s still plenty of secp256k1 implementations that don’t FFI libsecp256k1.
From a quick Google Search:
- Python (uses EEA rather than Binary GCD for modular inverse)
- Go (uses Numbers, which weren’t designed for cryptography)
- PHP (uses GMP, which isn’t constant-time)
- JavaScript (calls here, which uses bn.js, which isn’t constant-time)
If you’re using secp256k1, and you’re not basing your choice on cybercash-interop, you’re playing with fire at the implementation and ecosystem levels–even if there are no security problems with the Koblitz curve itself.
You are much better off choosing any different curve than secp256k1 if you don’t have a Bitcoin/Ethereum/etc. interoperability requirement.
“No thanks, I use Ed25519.”
Art: LvJ

NIST P-256 (ECDSA, secp256r1)
NIST P-256 is the go-to curve to use with ECDSA in the modern era. Unlike Ed25519, P-256 uses a prime-order group, and is an approved algorithm to use in FIPS-validated modules.

Most cryptography libraries offer optimized assembly implementations of NIST P-256, which makes it less likely that your signing operations will leak timing information or become a significant performance bottleneck.
P-256 targets the 128-bit security level.
Why Not Use P-256?
Once upon a time, P-256 was riskier than Ed25519 (for signatures) and X25519 (for Diffie-Hellman), due to the incomplete addition formulas that led to timing-leaky implementations.

If you’re running old software, you may still be vulnerable to timing attacks that can recover your ECDSA secret key. However, there is a good chance that you’re on a modern and secure implementation in 2022, especially if you’re outsourcing this to OpenSSL or its derivatives.
ECDSA requires a secure randomness source to sign data. If you don’t have one available, and you sign anything, you’re coughing up your secret key to any attacker capable of observing multiple signatures.
Guidance for P-256
P-256 is an acceptable choice, especially if you’re forced to cope with FIPS and/or the CNSA suite requirements when using cryptography.

Of course, if you can get away with Ed25519, use Ed25519 instead.
If you use P-256, make sure you’re using it with SHA-256. Some implementations may default to something weaker (e.g. SHA-1).
If you’re also going to be performing ECDH with P-256, make sure you use compressed points. There used to be a patent; it died in 2018.
If you can afford it, make sure you use deterministic ECDSA (RFC 6979) or hedged signatures (if fault attacks are relevant to your threat model).
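For reference, here’s a minimal randomized ECDSA P-256 + SHA-256 sketch with the Python cryptography package (note that this API consumes fresh randomness per signature, so the RFC 6979 caveat above still applies):

```python
# Sketch: randomized ECDSA over NIST P-256 with SHA-256, via the
# `cryptography` package. Each sign() call consumes fresh randomness
# for k; see the RFC 6979 discussion above.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())
signature = private_key.sign(b"example message", ec.ECDSA(hashes.SHA256()))

# Raises cryptography.exceptions.InvalidSignature on failure:
private_key.public_key().verify(signature, b"example message", ec.ECDSA(hashes.SHA256()))
```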
Art: LvJ
NIST P-384 (ECDSA, secp384r1)
NIST P-384 has a larger field than the curves we’ve previously examined, which allows P-384 to target the 192-bit security level. That’s the primary reason why anyone would choose P-384.

Naturally, elliptic curve security is more complicated than merely security against the Elliptic Curve Discrete Logarithm Problem (ECDLP).
P-384 is most often paired with SHA-384, which is the most widely used flavor of the SHA-2 family hash functions that isn’t susceptible to length-extension attacks. (There are also truncated SHA-512 variants specified later, but that’s also what SHA-384 is under-the-hood.)
If you’re aiming to build a “secure-by-default” tool for a system that the US government might one day become a customer of, with minimal cryptographic primitive choice, using NIST P-384 with SHA-384 makes for a reasonably minimalistic bundle.
Why Not Use P-384?
Unlike P-256, most P-384 implementations don’t use constant-time, optimized, and/or formally verified assembly code. (Notable counter-examples: AWS-LC and Go x/crypto.)

Like P-256, P-384 also requires a secure randomness source to sign data. If you aren’t providing one, expect your signing key to end up on fail0verflow one day.
Guidance for P-384
If you use P-384, make sure you’re using it with SHA-384.

The standard NIST curve advice of RFC 6979 and point compression and/or hedged signatures applies here too.
Art: Kyume
NIST P-521 (ECDSA, secp521r1)
Biggest curve is best curve! — the clueless

https://www.youtube.com/watch?v=i_APoSfCYwU
Systems that choose P-521 often have an interesting threat model, even though said threat model is rarely formally specified.
It’s overwhelmingly likely that whatever eventually breaks the 256-bit elliptic curves will also break P-521 in short order: Cryptographically Relevant Quantum Computers (CRQCs).

The only thing P-521 does against CRQCs that P-256 doesn’t is require more quantum memory. If you’re worried about CRQCs, you might want to look into hybrid post-quantum signature schemes.
If you’re choosing P-521 in your designs, you’re basically saying, “I want to have 256 bits of asymmetric cryptographic security, come hell or high water!” even though the 128-bit security level is likely just fine for your actual threats.
Aside: P-521 and 512-bit ECC Security
P-521 is not a typo, although people sometimes think it is. P-521 uses the Mersenne prime 2^521 − 1 instead of a 512-bit near-Mersenne prime.

This has led to an unfortunate trend in cryptography media of mapping ECC key sizes to symmetric security levels in a way that misleads people as to the relationship between the two. For example:
Regrettably, this is misleading, because plotting ECC key size against equivalent symmetric security isn’t how ECDLP security works. The ratio of the exponents involved is totally linear; it doesn’t suddenly change beyond 384-bit curves for a mysterious mathematical reason.
- 256-bit Curves target the 128-bit security level
- 384-bit Curves target the 192-bit security level
- 512-bit Curves target the 256-bit security level
- 521-bit Curves actually target the 260-bit security level, but that meets or exceeds the 256-bit security level, so that’s how the standards are interpreted
The reason for this boils down entirely to the best attack against the Elliptic Curve Discrete Logarithm Problem: Pollard’s Rho, which recovers the secret key from an n-bit public key (which has a 2^n search space) in roughly 2^(n/2) guesses.

Taking the square root of a number is the same as halving its exponent, so the security level is half: n/2.
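A quick sanity check of that arithmetic in Python:

```python
# Sanity-checking the "security level is half the curve size" rule:
for curve_bits in (256, 384, 512, 521):
    print(f"{curve_bits}-bit curve -> ~{curve_bits // 2}-bit security")
# 256 -> 128, 384 -> 192, 512 -> 256, 521 -> 260
```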
Takeaway: If someone tells you that you need a 521-bit curve to meet the 256-bit security level, they are mistaken and it’s not their fault.
Art: Harubaki
Why Not Use P-521?
It’s slow. Much slower than P-256 and Ed25519. Modestly slower than P-384.

Unlike P-384, you’re less likely to find an optimized, constant-time P-521 implementation.
Guidance for P-521
First, make a concerted effort to figure out the motivation for P-521 in your designs. Chances are, someone is putting too much emphasis on the wrong things for security.

If you use P-521, make sure you’re using it with SHA-512.
The standard NIST curve advice of RFC 6979 and point compression and/or hedged signatures applies here too.
Art: LvJ
Ed448 (EdDSA, Curve448)
Ed448 is the P-521 of the Edwards curves: It mostly exists to give standards committees a psychological comfort for the unlikely event that 256-bit ECC is desperately broken but ECC larger than 384 bits is somehow still safe.

https://twitter.com/dchest/status/703017144053833728
The very concept of having multiple “security levels” for raw cryptography primitives is mostly an artefact of the historical military roots of cryptography, rather than a serious consideration in the modern world.
Unfortunately, this leads to implementations that prioritize runtime algorithm selection negotiation, which maximizes the risk of protocol-level vulnerabilities. See also: JWT.
Ed448 was specified to use SHAKE256, a needlessly conservative decision that leads to an unnecessary performance bottleneck.
Why Not Use Ed448?
Aside from the performance hit mentioned previously, there’s no compelling reason to avoid Ed448 that isn’t also true of either Ed25519 or P-384.

Guidance for Ed448
If you want more speed, go with Ed25519. In addition to being faster, Ed25519 is also very widely supported.

If you need a prime-order group, use Decaf with Ed448 or consider P-384.
The Brainpool Curves
The main motivation for the Brainpool curves is that the NIST curves were not generated in a “verifiable pseudo-random way”.

The only reasons you’d ever want to support the Brainpool curves include:
- You think the NIST curves are somehow backdoored by the NSA
- You don’t appreciate small attack surfaces in cryptography libraries
- The German government told you to (see: compliance)
Most of the advice for the NIST Curves at each security level can be copy/pasted for the Brainpool curves, with one important caveat:
When considering real-world implementations, Brainpool curves are more likely than the NIST curves to be implemented with general-purpose big-number procedures (which aren’t always constant-time), rather than optimized assembly code.
Therefore, my general guidance for the Brainpool curves is simply:
- Proceed at your own peril
- Consider hiring a cryptography engineer to study the implementation you’re relying on, especially with regard to timing attacks
Me when I hear “brainpool”
Art: LvJRe-Examining the SafeCurves Criteria
Here’s a 2022 refresh of the SafeCurves criteria for all of the curves considered by this blog post.
- Fields: All relevant curves satisfy the requirements.
- Equations: All relevant curves satisfy the requirements.
- Base Points: All relevant curves satisfy the requirements.
- Rho: All relevant curves satisfy the requirements.
- Transfers: All relevant curves satisfy the requirements.
- Discriminants: Only secp256k1 doesn’t satisfy the requirements (out of the curves listed in this blog post).
- Rigidity: The NIST curves do not meet this requirement. If you care about whether or not the standards were manipulated to insert a backdoor, rigidity matters to you. Otherwise, it’s not a deal-breaker.
- Ladders: While a Montgomery ladder is beneficial for speed and implementation security, it isn’t strictly speaking required. This is an icing-on-the-cake consideration.
- Twists: The only curve listed above that doesn’t meet the requirement is the 256-bit Brainpool curve (brainpoolP256t1).
- Completeness: All relevant curves satisfy the requirements, as of 2015. SafeCurves is out of date here.
- Indistinguishability: All relevant curves satisfy the requirements, as of 2014.

SafeCurves continues to be a useful resource, especially if you stray from the guidance on this page.
For example: You wouldn’t want to use pairing-friendly curves for general-purpose ECC digital signatures, because they’re suited to specialized problems. SafeCurves correctly recommends not using BN(2,254).
However, SafeCurves is showing its age in 2022. BN curves still end up in digital signature protocol standards even though BLS12-381 is clearly a better choice.

The Internet would benefit greatly from an updated SafeCurves that focuses on newer elliptic curve algorithms.
Art: Scruff
TL;DR
Ed25519 is great. NIST P-256 and P-384 are okay (with caveats). Anything else is questionable, and their parameter selection should come with a clear justification.

https://soatok.blog/2022/05/19/guidance-for-choosing-an-elliptic-curve-signature-algorithm-in-2022/
#asymmetricCryptography #BrainpoolCurves #cryptography #digitalSignatureAlgorithm #ECDSA #Ed25519 #Ed448 #EdDSA #ellipticCurveCryptography #P256 #P384 #P521 #secp256k1 #secp256r1 #secp384r1 #secp521r1 #SecurityGuidance
A paper was published on the IACR’s ePrint archive yesterday, titled LadderLeak: Breaking ECDSA With Less Than One Bit of Nonce Leakage.
The ensuing discussion on /r/crypto led to several interesting questions that I thought would be worth capturing and answering in detail.
What’s Significant About the LadderLeak Paper?
This is best summarized by Table 1 from the paper.
The sections labeled “This work” are what’s new/significant about this research.
The paper authors were able to optimize existing attacks exploiting one-bit leakages against 192-bit and 160-bit elliptic curves. They were further able to exploit leakages of less than one bit in the same curves.
How Can You Leak Less Than One Bit?
We’re used to discrete quantities in computer science, but you can leak less than one bit of information in the case of side-channels.
Biased modular reduction can also create a vulnerable scenario: If the probability of a 0 or a 1 in a given position of the one-time number’s bit-string (e.g. the most significant bit) is not 0.5/0.5 but some other ratio (e.g. 0.51/0.49), then over many samples you can statistically infer the value of that bit.
If “less than one bit” sounds strange, that’s probably our fault for always rounding up to the nearest bit when we express costs in computer science.
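A toy simulation makes the idea tangible: a single biased bit carries less than one bit of information per sample, but the bias stands out plainly across many samples.

```python
# Toy simulation: a bit that's 1 with probability 0.51 leaks less than
# one bit per sample, yet the bias is obvious over many samples.
import random

samples = [1 if random.random() < 0.51 else 0 for _ in range(1_000_000)]
print(sum(samples) / len(samples))  # ~0.51, distinguishable from 0.50
```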
What’s the Cost of the Attack?
Consult Table 3 from the paper for empirical cost data:
Table 3 from the LadderLeak paper.
How Devastating is LadderLeak?
First, it assumes a lot of things:
- That you’re using ECDSA with either sect163r1 or secp192r1 (NIST P-192). Breaking larger curves requires more bits of bias (as far as we know).
- That you’re using a cryptography library with cache-timing leaks.
- That you have a way to measure the timing leaks (and not just pilfer the ECDSA secret key; i.e. in a TPM setup). This threat model generally assumes some sort of physical access.
But if you can pull the attack off, you can successfully recover the device’s ECDSA secret key. Which, for protocols like TLS, allows an attacker to impersonate a certificate-bearer (typically the server)… which is pretty devastating.
Is ECDSA Broken Now?
Non-deterministic ECDSA is not significantly more broken with LadderLeak than it already was by other attacks. LadderLeak does not break the Internet.
Fundamentally, LadderLeak doesn’t really change the risk calculus. Bleichenbacher’s attack framework for solving the Hidden Number Problem using Lattices was already practical, with sufficient samples.
There’s even a CryptoPals challenge about these attacks.
As an acquaintance put it, the authors made a time-memory trade-off with a leaky oracle. It’s a neat result worthy of publication, but we aren’t any minutes closer to midnight with this revelation.
Is ECDSA’s k-value Really a Nonce?
Ehhhhhhhhh, sorta.
It’s complicated!
Nonce in cryptography has always meant “number that must be used only once” (typically per key). See: AES-GCM.
Nonces are often confused for initialization vectors (IVs), which in addition to a nonce’s requirements for non-reuse must also be unpredictable. See: AES-CBC.
However, nonces and IVs can both be public, whereas ECDSA k-values MUST NOT be public! If you recover the k-value for a given signature, you can recover the secret key too.
That is to say, ECDSA k-values must be all of the above:
- Never reused
- Unpredictable
- Secret
- Unbiased
They’re really in a class of their own.
For that reason, it’s probably better to think of the k-value as a per-signature key than a simple nonce. (n.b. Many cryptography libraries actually implement them as a one-time ECDSA keypair.)
What’s the Difference Between Random and Unpredictable?
The HMAC-SHA256 output of a message under a secret key is unpredictable for anyone not in possession of said secret key. This value, though unpredictable, is not random, since signing the same message twice yields the same output.
A large random integer when subjected to modular reduction by a non-Mersenne prime of the same magnitude will be biased towards small values. This bias may be negligible, but it makes the bit string that represents the reduced integer more predictable, even though it’s random.
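Both halves of that distinction are easy to demonstrate; here’s the HMAC half in a few lines of Python:

```python
# Unpredictable, yet not random: HMAC output can't be guessed without
# the key, but it's fully deterministic for a fixed (key, message).
import hashlib
import hmac
import os

key = os.urandom(32)
tag1 = hmac.new(key, b"same message", hashlib.sha256).digest()
tag2 = hmac.new(key, b"same message", hashlib.sha256).digest()
assert tag1 == tag2  # deterministic, therefore not "random"
```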
What Should We Do? How Should We Respond?
First, don’t panic. This is interesting research and its authors deserve to enjoy their moment, but the sky is not falling.
Second, acknowledge that none of the attacks are effective against EdDSA.
If you feel the urge to do something about this attack paper, file a support ticket with all of your third-party vendors and business partners that handle cryptographic secrets to ask them if/when they plan to support EdDSA (especially if FIPS compliance is at all relevant to your work, since EdDSA is coming to FIPS 186-5).
Reason: With increased customer demand for EdDSA, more companies will adopt this digital signature algorithm (which is much more secure against real-world attacks). Thus, we can ensure that an improved attack variant that actually breaks ECDSA won’t cause the sky to fall and the Internet to be doomed.
(Seriously, I don’t think most companies can overcome their inertia regarding ECDSA to EdDSA migration if their customers never ask for it.)
https://soatok.blog/2020/05/26/learning-from-ladderleak-is-ecdsa-broken/
#crypto #cryptography #digitalSignatureAlgorithm #ECDSA #ellipticCurveCryptography #LadderLeak
If you’re reading this wondering if you should stop using AES-GCM in some standard protocol (TLS 1.3), the short answer is “No, you’re fine”.

I specialize in secure implementations of cryptography, and my years of experience in this field have led me to dislike AES-GCM.
This post is about why I dislike AES-GCM’s design, not “why AES-GCM is insecure and should be avoided”. AES-GCM is still miles above what most developers reach for when they want to encrypt (e.g. ECB mode or CBC mode). If you want a detailed comparison, read this.
To be clear: This is solely my opinion and not representative of any company or academic institution.
What is AES-GCM?
AES-GCM is an authenticated encryption mode that uses the AES block cipher in counter mode with a polynomial MAC based on Galois field multiplication.

In order to explain why AES-GCM sucks, I have to first explain what I dislike about the AES block cipher. Then, I can describe why I’m filled with sadness every time I see the AES-GCM construction used.
What is AES?
The Advanced Encryption Standard (AES) is a specific subset of a block cipher called Rijndael.

Rijndael’s design is based on a substitution-permutation network, which broke tradition from many block ciphers of its era (including its predecessor, DES) in not using a Feistel network.
AES only includes three flavors of Rijndael: AES-128, AES-192, and AES-256. The difference between these flavors is the size of the key and the number of rounds used, but–and this is often overlooked–not the block size.
As a block cipher, AES always operates on 128-bit (16 byte) blocks of plaintext, regardless of the key size.
This is generally considered acceptable because AES is a secure pseudorandom permutation (PRP), which means that every possible plaintext block maps directly to one ciphertext block, and thus birthday collisions are not possible. (A pseudorandom function (PRF), conversely, does have birthday bound problems.)
Why AES Sucks
Art by Khia.

Side-Channels
The biggest reason why AES sucks is that its design uses a lookup table (called an S-Box) indexed by secret data, which is inherently vulnerable to cache-timing attacks (PDF).

There are workarounds for this AES vulnerability, but they either require hardware acceleration (AES-NI) or a technique called bitslicing.
The short of it is: With AES, you’re either using hardware acceleration, or you have to choose between performance and security. You cannot get fast, constant-time AES without hardware support.
Block Size
AES-128 is considered by experts to have a security level of 128 bits.

Similarly, AES-192 gets certified at 192-bit security, and AES-256 gets 256-bit security.
However, the AES block size is only 128 bits!
That might not sound like a big deal, but it severely limits the constructions you can create out of AES.
Consider the case of AES-CBC, where the output of each block of encryption is combined with the next block of plaintext (using XOR). This is typically used with a random 128-bit block (called the initialization vector, or IV) for the first block.
This means you expect a collision (at 50% probability) after encrypting 2^64 blocks.
When you start getting collisions, you can break CBC mode, as this video demonstrates:
https://www.youtube.com/watch?v=v0IsYNDMV7A
This is significantly smaller than the 2^128 security you’d expect from AES.
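For a sense of scale, here’s the arithmetic behind that 2^64-block birthday bound:

```python
# Scale of the CBC birthday bound: a collision is expected (50% odds)
# after 2^64 blocks of 16 bytes each.
blocks = 2**64
print(blocks * 16 / 2**60, "EiB")  # 256.0 EiB encrypted under one key
```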
Post-Quantum Security?
With respect to the number of attempts needed to find the correct key, cryptographers estimate that AES-128 will have a post-quantum security level of 64 bits, AES-192 will have a post-quantum security level of 96 bits, and AES-256 will have a post-quantum security level of 128 bits.

This is because Grover’s quantum search algorithm can search N unsorted items in √N time, which can be used to reduce the total number of possible secrets from 2^256 to 2^128. This effectively cuts the security level, expressed in bits, in half.
Note that this heuristic estimate is based on the number of guesses (a time factor), and doesn’t take circuit size into consideration. Grover’s algorithm also doesn’t parallelize well. The real-world security of AES may still be above 100 bits if you consider these nuances.
But remember, even AES-256 operates on 128-bit blocks.
Consequently, for AES-256 and a fixed plaintext block, there should be approximately 2^128 keys that produce any given ciphertext block.
Furthermore, there will be many keys that, for a constant plaintext block, will produce the same ciphertext block despite being a different key entirely. (n.b. This doesn’t mean for all plaintext/ciphertext block pairings, just some arbitrary pairing.)
Concrete example: Encrypting a plaintext block consisting of sixteen NUL bytes will yield a specific 128-bit ciphertext exactly once for each given AES-128 key. However, there are 2^128 times as many AES-256 keys as there are possible plaintext/ciphertext blocks. Keep this in mind for AES-GCM.
This means it’s conceivable to accidentally construct a protocol that, despite using AES-256 safely, has a post-quantum security level on par with AES-128, which is only 64 bits.
This would not be nearly as much of a problem if AES’s block size was 256 bits.
Real-World Example: Signal
The Signal messaging app is the state-of-the-art for private communications. If you were previously using PGP and email, you should use Signal instead.

Signal aims to provide private communications (text messaging, voice calls) between two mobile devices, piggybacking on your pre-existing contacts list.
Part of their operational requirements is that they must be user-friendly and secure on a wide range of Android devices, stretching all the way back to Android 4.4.
The Signal Protocol uses AES-CBC + HMAC-SHA256 for message encryption. Each message is encrypted with a different AES key (due to the Double Ratchet), which limits the practical blast radius of a cache-timing attack and makes practical exploitation difficult (since you can’t effectively replay decryption in order to leak bits about the key).
Thus, Signal’s message encryption is still secure even in the presence of vulnerable AES implementations.
Hooray for well-engineered protocols managing to actually protect users.
Art by Swizz.

However, the storage service in the Signal App uses AES-GCM, and this key has to be reused in order for the encrypted storage to operate.
This means, for older phones without dedicated hardware support for AES (i.e. low-priced phones from 2013, which Signal aims to support), the risk of cache-timing attacks is still present.
This is unacceptable!
What this means is, a malicious app that can flush the CPU cache and measure timing with sufficient precision can siphon the AES-GCM key used by Signal to encrypt your storage without ever violating the security boundaries enforced by the Android operating system.
As a result of the security boundaries never being crossed, these kinds of side-channel attacks would likely evade forensic analysis, and would therefore be of interest to malware developers working for nation states.
Of course, if you’re on newer hardware (i.e. Qualcomm Snapdragon 835), you have hardware-accelerated AES available, so it’s probably a moot point.
Why AES-GCM Sucks Even More
AES-GCM is an authenticated encryption mode that also supports additional authenticated data. Cryptographers call these modes AEAD.

AEAD modes are more flexible than simple block ciphers. Generally, your encryption API accepts the following:
- The plaintext message.
- The encryption key.
- A nonce (N: a number that must only be used once).
- Optional additional data which will be authenticated but not encrypted.
The output of an AEAD function is both the ciphertext and an authentication tag, which is necessary (along with the key and nonce, and optional additional data) to decrypt the plaintext.
Cryptographers almost universally recommend using AEAD modes for symmetric-key data encryption.
That being said, AES-GCM is possibly my least favorite AEAD, and I’ve got good reasons to dislike it beyond simply, “It uses AES”.
The deeper you look into AES-GCM’s design, the harder you will feel this sticker.
GHASH Brittleness
The way AES-GCM is initialized is stupid: You encrypt an all-zero block with your AES key (in ECB mode) and store it in a variable called H. This value is used for authenticating all messages authenticated under that AES key, rather than for a given (key, nonce) pair.
Diagram describing Galois/Counter Mode, taken from Wikipedia.
This is often sold as an advantage: Reusing H allows for better performance. However, it makes GCM brittle: Reusing a nonce allows an attacker to recover H and then forge messages forever. This is called the “forbidden attack”, and it led to real-world practical breaks.

Let’s contrast AES-GCM with the other AEAD mode supported by TLS: ChaCha20-Poly1305, or ChaPoly for short.
ChaPoly uses one-time message authentication keys (derived from each key/nonce pair). If you manage to leak a Poly1305 key, the impact is limited to the messages encrypted under that (ChaCha20 key, nonce) pair.
While that’s still bad, it isn’t “decrypt all messages under that key forever” bad like with AES-GCM.
Note: “Message Authentication” here is symmetric, which only provides a property called message integrity, not sender authenticity. For the latter, you need asymmetric cryptography (wherein the ability to verify a message doesn’t imply the capability to generate a new signature), which is totally disparate from symmetric algorithms like AES or GHASH. You probably don’t need to care about this nuance right now, but it’s good to know in case you’re quizzed on it later.
H Reuse and Multi-User Security
If you recall, AES operates on 128-bit blocks even when 256-bit keys are used.

If we assume AES is well-behaved, we can deduce that there are approximately 2^128 different 256-bit keys that will map a single plaintext block to a single ciphertext block.

This is trivial to calculate. Simply divide the number of possible keys (2^256) by the number of possible block states (2^128) to yield the number of keys that produce a given ciphertext for a single block of plaintext: 2^128.

Each key that will map an arbitrarily specific plaintext block to a specific ciphertext block is also separated in the keyspace by approximately 2^128.

This means there are approximately 2^128 independent keys that will map a given all-zero plaintext block to an arbitrarily chosen value of H (if we assume AES doesn’t have weird biases).
Credit: Harubaki
“Why Does This Matter?”
It means that, with keys larger than 128 bits, you can model the selection of H as a 128-bit pseudorandom function, rather than a 128-bit permutation. As a result, you can expect a collision with 50% probability after only 2^64 different keys are selected.

Note: Your 128-bit randomly generated AES keys already have this probability baked into their selection, but this specific analysis doesn’t really apply for 128-bit keys since AES is a PRP, not a PRF, so there is no “collision” risk. However, you end up at the same upper limit either way.
But 50% isn’t good enough for cryptographic security.
In most real-world systems, we target a 2^-32 collision risk. So that means our safety limit is actually 2^48 different AES keys before you have to worry about H reuse.
This isn’t the same thing as symmetric wear-out (where you need to re-key after a given number of encryptions to prevent nonce reuse). Rather, it means that after your entire population has exhausted the safety limit of 2^48 different AES keys, you have to either accept the risk or stop using AES-GCM.
If you have a billion users (2^30), the safety limit is breached after 2^18 AES keys per user (approximately 262,000).
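The per-user arithmetic, spelled out:

```python
# Population-wide H-collision budget, per the analysis above:
total_keys = 2**48          # safety limit for a ~2^-32 collision risk
users = 2**30               # "a billion users"
print(total_keys // users)  # 262144 AES-GCM keys per user (~262,000)
```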
“What Good is H Reuse for Attackers if HF differs?”
There are two numbers used in AES-GCM that are derived from the AES key. H is used for block multiplication, and HF (the encryption of the Counter 0 block in the following diagram) is XORed with the final result to produce the authentication tag.

The arrow highlighted with green is HF.
It’s tempting to think that a reuse of H isn’t a concern because HF will necessarily be randomized, which prevents an attacker from observing when H collides. It’s certainly true that the single-block collision risk discussed previously for H will almost certainly not also result in a collision for HF. And since HF isn’t reused unless a nonce is reused (which also leaks H directly), this might seem like a non-issue.
Art by Khia.
However, it’s straightforward to go from a condition of H reuse to an adaptive chosen-ciphertext attack.
- Intercept multiple valid ciphertexts.
- e.g. Multiple JWTs encrypted with
{"alg":"A256GCM"}
- Use your knowledge of H, the ciphertext, and the AAD to calculate the GCM tag up to the final XOR. This, along with the existing authentication tag, will tell you the value of HF for a given nonce.
- Calculate a new authentication tag for a chosen ciphertext using HF and your candidate H value, then replay it into the target system.
While the blinding offered by XORing the final output with HF is sufficient to stop H from being leaked directly, the protection is one-way.

Ergo, a collision in H is not sufficiently thwarted by HF.
“How Could the Designers Have Prevented This?”
The core issue here is the AES block size, again.

If we were analyzing a 256-bit block variant of AES, and a congruent GCM construction built atop it, none of what I wrote in this section would apply.
However, the 128-bit block size was a design constraint enforced by NIST in the AES competition. This block size was chosen during an era of 64-bit block ciphers (e.g. Triple-DES and Blowfish), so it was a significant improvement at the time.
NIST’s AES competition also inherited from the US government’s tradition of thinking in terms of “security levels”, which is why there are three different permitted key sizes (128, 192, or 256 bits).
“Why Isn’t This a Vulnerability?”
There’s always a significant gap in security, wherein something isn’t safe to recommend, but also isn’t susceptible to a known practical attack. This gap is important for keeping systems secure, even when they aren’t on the bleeding edge of security.

Using 1024-bit RSA is a good example of this: No one has yet, to my knowledge, successfully factored a 1024-bit RSA public key. However, most systems have recommended a 2048-bit minimum for years (and many recommend 3072-bit or 4096-bit today).
With AES-GCM, the expected distance between collisions in H is 2^64, and finding an untargeted collision requires being able to observe more than 2^64 different sessions, and somehow distinguish when H collides.
As a user, you know that after 2^48 different keys, you’ve crossed the safety boundary for avoiding H collisions. But as an attacker, you need 2^64 bites at the apple, not 2^48. Additionally, you need some sort of oracle or distinguisher for when this happens.
We don’t have that kind of distinguisher available to us today. And even if we had one available, the amount of data you need to search in order for any two users in the population to reuse/collide is challenging to work with. You would need the computational and data storage capabilities of a major cloud service provider to even think about pulling the attack off.
Naturally, this isn’t a practical vulnerability. This is just another gripe I have with AES-GCM, as someone who has to work with cryptographic algorithms a lot.
Short Nonces
Although the AES block size is 16 bytes, AES-GCM nonces are only 12 bytes. The latter 4 bytes are dedicated to an internal counter, which is used with AES in Counter Mode to actually encrypt/decrypt messages.

(Yes, you can use arbitrary-length nonces with AES-GCM, but if you use nonces longer than 12 bytes, they get hashed into 12 bytes anyway, so it’s not a detail most people should concern themselves with.)
If you ask a cryptographer, “How much can I encrypt safely with AES-GCM?” you’ll get two different answers.
- Message Length Limit: AES-GCM can be used to encrypt messages up to 2^36 − 32 bytes (about 64 GiB) long, under a given (key, nonce) pair.
- Number of Messages Limit: If you generate your nonces randomly, you have a 50% chance of a nonce collision after 2^48 messages.
However, 50% isn’t conservative enough for most systems, so the safety margin is usually much lower. Cryptographers generally set the key wear-out of AES-GCM at 2^32 random nonces, which represents a collision probability of about one in 4 billion.

These limits are acceptable for session keys for encryption-in-transit, but they impose serious operational limits on application-layer encryption with long-term keys.
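If you want to play with these numbers yourself, here’s the approximate birthday-bound arithmetic as a small Python helper (messages_for_risk is a hypothetical name I picked for illustration):

```python
# Approximate birthday bound: p ~= n^2 / 2^(bits + 1) for n random nonces.
def messages_for_risk(nonce_bits: int, risk_log2: int) -> float:
    # Solve n^2 / 2^(bits + 1) = 2^risk_log2 for n, in log2 arithmetic.
    return 2 ** ((nonce_bits + 1 + risk_log2) / 2)

print(messages_for_risk(96, -32))   # ~2^32.5 messages for AES-GCM's 96-bit nonce
print(messages_for_risk(192, -32))  # ~2^80.5 messages for XChaCha's 192-bit nonce
```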
Random Key Robustness
Before the advent of AEAD modes, cryptographers used to combine block cipher modes of operation (e.g. AES-CBC, AES-CTR) with a separate message authentication code algorithm (e.g. HMAC, CBC-MAC).

You had to be careful in how you composed your protocol, lest you invite Cryptographic Doom into your life. A lot of developers screwed this up. Standardized AEAD modes promised to make life easier.
Many developers gained their intuition for authenticated encryption modes from protocols like Signal’s (which combines AES-CBC with HMAC-SHA256), and would expect AES-GCM to be a drop-in replacement.
Unfortunately, GMAC doesn’t offer the same security benefits as HMAC: Finding a different (ciphertext, HMAC key) pair that produces the same authentication tag is a hard problem, due to HMAC’s reliance on cryptographic hash functions. This makes HMAC-based constructions “message committing”, which instills Random Key Robustness.
Critically, AES-GCM doesn’t have this property. You can calculate a random (ciphertext, key) pair that collides with a given authentication tag very easily.
This fact prohibits AES-GCM from being considered for use with OPAQUE (which requires RKR), one of the upcoming password-authenticated key exchange algorithms. (Read more about them here.)
Better-Designed Algorithms
You might be thinking, “Okay random furry, if you hate AES-GCM so much, what would you propose we use instead?”

I’m glad you asked!
XChaCha20-Poly1305
For encrypting messages under a long-term key, you can’t really beat XChaCha20-Poly1305.
- ChaCha is a stream cipher based on a 512-bit ARX hash function in counter mode. ChaCha doesn’t use S-Boxes. It’s fast and constant-time without hardware acceleration.
- ChaCha20 is ChaCha with 20 rounds.
- XChaCha nonces are 24 bytes, which allows you to generate them randomly and not worry about a birthday collision until about 2^80 messages (for the same collision probability as AES-GCM).
- Poly1305 uses a different 256-bit key for each (nonce, key) pair, and is easier to implement in constant-time than AES-GCM.
- XChaCha20-Poly1305 uses the first 16 bytes of the nonce and the 256-bit key to generate a distinct subkey, and then employs the standard ChaCha20-Poly1305 construction used in TLS today.
For application-layer cryptography, XChaCha20-Poly1305 contains most of the properties you’d want from an authenticated mode.
However, like AES-GCM (and all other Polynomial MACs I’ve heard of), it is not message committing.
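Here’s a minimal XChaCha20-Poly1305 sketch using PyNaCl’s low-level libsodium bindings (assuming your PyNaCl build exposes them; the function names are the libsodium ones):

```python
# Sketch: XChaCha20-Poly1305 via PyNaCl's libsodium bindings.
import os

from nacl.bindings import (
    crypto_aead_xchacha20poly1305_ietf_decrypt as xchacha_decrypt,
    crypto_aead_xchacha20poly1305_ietf_encrypt as xchacha_encrypt,
)

key = os.urandom(32)
nonce = os.urandom(24)   # 24-byte nonce: random generation is safe here
aad = b"context"

ct = xchacha_encrypt(b"example message", aad, nonce, key)
assert xchacha_decrypt(ct, aad, nonce, key) == b"example message"
```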
The Gimli Permutation
For lightweight cryptography (n.b. important for IoT), the Gimli permutation (e.g. employed in libhydrogen) is an attractive option.

Gimli is a Round 2 candidate in NIST’s Lightweight Cryptography project. The Gimli permutation offers a lot of applications: a hash function, message authentication, encryption, etc.
Critically, it’s possible to construct a message-committing protocol out of Gimli that will hit a lot of the performance goals important to embedded systems.
Closing Remarks
Despite my personal disdain for AES-GCM, if you’re using it as intended by cryptographers, it’s good enough.

Don’t throw AES-GCM out just because of my opinions. It’s very likely the best option you have.
Although I personally dislike AES and GCM, I’m still deeply appreciative of the brilliance and ingenuity that went into both designs.
My desire is for the industry to improve upon AES and GCM in future cipher designs so we can protect more people, from a wider range of threats, in more diverse protocols, at a cheaper CPU/memory/time cost.
We wouldn’t have a secure modern Internet without the work of Vincent Rijmen, Joan Daemen, John Viega, David A. McGrew, and the countless other cryptographers and security researchers who made AES-GCM possible.
Change Log
- 2021-10-26: Added section on H Reuse and Multi-User Security.
https://soatok.blog/2020/05/13/why-aes-gcm-sucks/
#AES #AESGCM #cryptography #GaloisCounterMode #opinion #SecurityGuidance #symmetricCryptography
Previously on Dead Ends in Cryptanalysis, we talked about length-extension attacks and precisely why modern hash functions like SHA-3 and BLAKE2 aren’t susceptible.
The art and science of side-channel cryptanalysis is one of the subjects I’m deeply fascinated by, and it’s something you’ll hear me yap about a lot on this blog in the future.
Since my background before computer security was in web development, I spend a lot of time talking about timing side-channels in particular, as well as their mitigations (see also: constant-time-js).
Pictured: Me, when an interesting attack gets published on ePrint.
(Art by Khia.)
However, timing side-channels aren’t omnipotent. Even if your code isn’t constant-time, that doesn’t mean you necessarily have a vulnerability. Case in point:
Length Leaks Are Usually Nothing-Burgers
If you look closely at a constant-time string equality function, you’ll see some clause that looks like this:
if (left.length !== right.length) return false;
A common concern that crops up in bikeshedding discussions about the correct implementation of a constant-time compare is, “This will fail fast if two strings of non-equal length are provided; doesn’t this leak information about the strings being compared?”
Sure, but it won’t affect the security of the application that uses it. Consider a contrived example:
- You’re encrypting with AES-CTR then authenticating the ciphertext with HMAC-SHA256 (Encrypt then MAC).
- For added fun, let’s assume you’re using HKDF-HMAC-SHA512 with a 256-bit salt to derive separate encryption and MAC keys from the input key. This salt is prepended to the ciphertext and included as an input to the HMAC tag calculation. Now you don’t have to worry about cryptographic wear-out.
- You’re padding the plaintext to exactly 16 kilobytes prior to encryption, because the exact length of the plaintext is considered sensitive.
- You remove the padding after decryption.
- Your constant-time comparison is used to validate the HMAC tags.
Even though the length of your plaintext is sensitive, it doesn’t really matter that length mismatches leak here: The inputs to the constant-time compare are always HMAC-SHA256 outputs. They will always be 32 bytes (256 bits) long. This is public knowledge.
If you’ve somehow managed to design a protocol that depends on the secrecy of the length of a non-truncated HMAC-SHA256 output to be secure, you’ve probably fucked up something fierce.
However, if you were comparing the unpadded plaintexts with this function–or passing the unpadded plaintext to a hash function–you might have cause for concern.
“Double HMAC” is a defense against compiler/JIT optimizations, not length leaks.
(Art by Khia.)
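For completeness, here’s what the verification step from the contrived example above might look like in Python (verify_tag is a hypothetical helper name; hmac.compare_digest does the constant-time comparison for you):

```python
# Sketch of the Encrypt-then-MAC verification path described above.
import hashlib
import hmac

def verify_tag(mac_key: bytes, salt_and_ciphertext: bytes, tag: bytes) -> bool:
    expected = hmac.new(mac_key, salt_and_ciphertext, hashlib.sha256).digest()
    # Both values are 32-byte SHA-256 outputs; the length check leaks nothing.
    return hmac.compare_digest(expected, tag)
```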
When Do Timing Leaks Cause Impact?
Timing side-channels only lead to a vulnerability when they reveal some information about one of the secret inputs to a cryptographic function.
- Leaking how many leading bytes match when comparing HMACs can allow an attacker to forge a valid authentication tag for a chosen message–which often enables further attacks (e.g. padding oracles with AES-CBC + HMAC). The cryptographic secret is the correct authentication tag for a chosen message under a key known only to the defender.
- Leaking the number of leading zeroes of the shared secret introduced the risk of lattice attacks in TLS when used with Diffie-Hellman ciphersuites. See also: the Raccoon Attack. The cryptographic secret is the zero-trimmed shared secret, which is an input to a hash function.
- Leaking the secret number k in the modular inverse step when calculating an ECDSA signature gives attackers enough information to recover the secret key. This can happen if you’re using non-constant-time arithmetic.
Timing attacks can even break state-of-the-art cryptography projects, like the algorithms submitted to NIST’s Post-Quantum Cryptography standardization effort:
https://twitter.com/EllipticKiwi/status/1295670085969838080
However–and this is important–if what leaks is a public input (n.b. something the attackers already knows anyway), then who cares?
(Art by Khia.)
Why Timing Leaks Don’t Break Signature Verification
If you’re reviewing some cryptography library and discovered a timing leak in the elliptic curve signature verification function, you might feel tempted to file a vulnerability report with the maintainers of the library.
If so, you’re wasting your time and theirs, for two reasons:
- Signature verification is performed over public inputs (message, public key, signature).
- Knowing which byte the verification comparison fails on isn’t sufficient for forging a signature for a chosen message.
The first part is obvious (and discussed above), but the second might seem untrue at first: If HMAC breaks this way, why doesn’t ECDSA also suffer here?
The Anatomy of Elliptic Curve Digital Signatures
Elliptic curve signatures are usually encoded as (r, s). How these numbers are derived and verified depends on the algorithm in question.
In the case of ECDSA, you calculate two numbers (u1, u2) from the truncated hash of the message and from r, respectively, both multiplied by the modular inverse of s (mod n). You then calculate a curve point u1*G + u2*A, where A is the public key. The signature is valid if and only if the x coordinate of that curve point is equal to r from the signature (and the point isn’t the point at infinity).
Why Don’t Timing Attacks Do Anything Here?
Even with a timing leak on the string compare function in hand, you cannot easily find a valid (r, s) for a chosen message, for two reasons:
- The derivation of s is effectively an All-Or-Nothing Transform based on secret inputs.
- The curve point equation (u1*G + u2*A) multiplies the ratio r/s by the public key (because u2 = r * s^-1 mod n).

In order to calculate a valid (r, s) pair that will validate for a chosen message, you’d need to know the secret key that corresponds to the public key A.
It’s not impossible to calculate this value, but it’s computationally infeasible, and the difficulty of this problem is approximately one fourth the signature size in bits. That is to say, 512-bit signatures, derived from 256-bit keys, have a security level of 128 bits.
Thus, timing leakage won’t let you perform an existential forgery here.
Aside: Don’t confuse signatures for MACs, as iMessage famously did.
(Art by Khia.)
Under What Conditions Could Timing Side-Channels Matter to ECDSA Verification?
Suppose you have a JSON Web Token library that’s vulnerable to the type confusion attack (wherein you can swap out the "alg":"ES256"
with "alg":"HS256"
and then use the public key as if it was an HMAC symmetric key).
In this hypothetical scenario, let’s say you’re using this JWT library in an OIDC-like configuration, where the identity provider signs tokens and the application verifies them, using a public key known to the application.
Also assume, for absolutely contrived reasons, that the public key is not known to the attacker.
If you had a timing attack that leaks the public key, that would be a viable (if horrendously slow) way to make the vulnerability exploitable.
However, even in this setup, the timing leak still doesn’t qualify as a real vulnerability. It merely defeats attempts at Security Through Obscurity. The real vulnerability is any JWT library that allows this attack (or alg=none).
Additionally, you can recover the public key if you have sufficient knowledge of the curve algorithm used, message signed, etc.–which you do if the algorithm is ES256
–so you don’t really even need a timing leak for this. Consequently, timing leaks would only help you if the original algorithm was something custom and obscure to attackers.
(Aside: there are two possible public keys from each signature, so the signature alone isn’t sufficient for uniquely identifying public keys. If you’re hoping to reduce protocol bandwidth through this trick, it won’t work.)
TL;DR
In order for a timing leak to be useful for cryptanalysis, it must leak something other than a publicly-known input to the cryptographic operation.
https://soatok.blog/2021/06/07/dead-ends-in-cryptanalysis-2-timing-side-channels/
#cryptanalysis #crypto #cryptography #deadEndsInCryptanalysis #ECDSA #sideChannels #Technology #timingAttacks
This is the first entry in a (potentially infinite) series of dead end roads in the field of cryptanalysis.

Cryptography engineering is one of many specialties within the wider field of security engineering. Security engineering is a discipline that chiefly concerns itself with studying how systems fail in order to build better systems–ones that are resilient to malicious acts or even natural disasters. It sounds much simpler than it is.
If you want to develop and securely implement a cryptography feature in the application you’re developing, it isn’t enough to learn how to implement textbook descriptions of cryptography primitives during your C.S. undergrad studies (or equivalent). An active interest in studying how cryptosystems fail is the prerequisite for being a cryptography engineer.
Thus, cryptography engineering and cryptanalysis research go hand-in-hand.
Pictured: How I feel when someone tells me about a novel cryptanalysis technique relevant to the algorithm or protocol I’m implementing. (Art by Khia.)
If you are interested in exploring the field of cryptanalysis–be it to contribute on the attack side of cryptography or to learn better defense mechanisms–you will undoubtedly encounter roads that seem enticing and not well-tread, and it might not be immediately obvious why the road is a dead end. Furthermore, beyond a few comparison tables on Wikipedia or obscure Stack Exchange questions, the cryptology literature is often sparse on details about why these avenues lead nowhere.
So let’s explore where some of these dead-end roads lead, and why they stop where they do.
(Art by Kyume.)
Length Extension Attacks
It’s difficult to provide a better summary of length extension attacks than what Skull Security wrote in 2012. However, that only addresses “What are they?”, “How do you use them?”, and “Which algorithms and constructions are vulnerable?”, but leaves out a more interesting question: “Why were they even possible to begin with?”

An Extensive Tale

Tale, not tail! (Art by Swizz.)

To really understand length extension attacks, you have to understand how cryptographic hash functions used to be designed. This might sound intimidating, but we don’t need to delve too deep into the internals.
A cryptographic hash function is a keyless pseudorandom transformation from a variable length input to a fixed-length output. Hash functions are typically used as building blocks for larger constructions (both reasonable ones like HMAC-SHA-256, and unreasonable ones like my hash-crypt project).
However, hash functions like SHA-256 are designed to operate on sequential blocks of input. This is because sometimes you need to stream data into a hash function rather than load it all into memory at once. (This is why you can sha256sum a file larger than your available RAM without crashing your computer or causing performance headaches.)
A streaming hash function API might look like this:
class MyCoolHash(BaseHashClass):
    @staticmethod
    def init():
        """ Initialize the hash state. """

    def update(data):
        """ Update the hash state with additional data. """

    def digest():
        """ Finalize the hash function. """

    def compress():
        """ (Private method.) """
To use it, you’d call hash = MyCoolHash.init() and then chain together hash.update() calls with data as you load it from disk or the network, until you’ve run out of data. Then you’d call digest() and obtain the hash of the entire message.

There are two things to take away right now:
- You can call update() multiple times, and that’s valid.
- Your data might not be an even multiple of the internal block size of the hash function. (More often than not, it won’t be!)

So what happens when you call digest() and the amount of data you’ve passed to update() is not an even multiple of the block size?

For most hash functions, the answer is simple: Append some ISO/IEC 7816-4 padding until you get a full block, run that through a final iteration of the internal compression function–the same one that gets called on update()–and then output the current internal state.

Let’s take a slightly deeper look at what a typical runtime would look like for the MyCoolHash class I sketched above:
1. hash = MyCoolHash.init()
   - Initialize some variables to some constants (initialization vectors).
2. hash.update(blockOfData)
   - Start with any buffered data (currently none), count up to 32 bytes. If you’ve reached this amount, invoke compress() on that data and clear the buffer. Otherwise, just append blockOfData to the currently buffered data.
   - For every 32 bytes of data not yet touched by compress(), invoke compress() on this block (updating the internal state).
   - If you have any leftover bytes, append them to the internal buffer for the next invocation to process.
3. hash.update(moreData)
   - Same as before, except there might be some buffered data from step 2.
4. output = hash.digest()
   - If you have any data left in the buffer, append a 0x80 byte followed by a bunch of 0x00 bytes of padding until you reach the block size. If you don’t, you have an entire block of padding (0x80 followed by 0x00s).
   - Call compress() one last time.
   - Serialize the internal hash state as a byte array or hexadecimal-encoded string (depending on usage). Return that to the caller.
This is a fairly general description that will hold for most older hash functions. Some details might be slightly wrong (subtly different padding scheme, whether or not to include a block of empty padding on digest() invocations, etc.).

The details aren’t super important. Just the rhythm of the design.
init()
update()
- load buffer, compress(), compress(), compress(), …
- buffer remainder
update()
- load buffer, compress(), compress(), compress(), …
- buffer remainder
…
digest()
- load buffer, pad, compress()
- serialize internal state
- return
And thus, without having to know any of the details about what compress() even looks like, the reason why length extension attacks were ever possible should leap out at you!

Art by Khia.

If it doesn’t, look closely at the difference between update() and digest().

There are only two differences:
- update() doesn’t pad before calling compress()
- digest() returns the internal state that compress() always mutates

The reason length-extension attacks are possible is that, for some hash functions, the output of digest() is its full internal state.

This means that you can take an existing hash output and pretend it’s the internal state after an update() call instead of a digest() call, by appending the padding and then, after calling compress(), appending additional data of your choice.

The (F)Utility of Length Extension
Length-Extension Attacks are mostly used for attacking naive message authentication systems where someone attempts to authenticate a message (M) with a secret key (k), but they construct it like so:
auth_code = vulnerable_hash(k.append(M))
If this sounds like a very narrow use-case, that’s because it is. However, it still broke Flickr’s API once, and it’s a popular challenge for CTF competitions around the world.

Consequently, length-extension attacks are sometimes thought to be vulnerabilities of the construction rather than a vulnerability of the hash function. For a Message Authentication Code construction, these are classified under canonicalization attacks.
After all, even though SHA-256 is vulnerable to length-extension, you can’t actually exploit it unless someone is using it in a vulnerable fashion.

That being said, it’s common to describe hash functions like SHA-256 and SHA-512 as prone to length-extension.
Ways to Avoid Length-Extension Attacks
Use HMAC. HMAC was designed to prevent these kinds of attacks.

Alternatively, if you don’t have any cryptographic secrets, you can always do what bitcoin did: Hash your hash again.

return sha256(sha256(message))

Note: Don’t actually do that, it’s dangerous for other reasons. You also don’t want to take this to an extreme. If you iterate your hash too many times, you’ll reinvent PBKDF1 and its insecurity. Two is plenty.

Or you can do something really trivial (which ultimately became another standard option in the SHA-2 family of hash functions):
Always start with a 512-bit hash and then truncate your output so the attacker never recovers the entire internal state of your hash in order to extend it.
That’s why you’ll sometimes see SHA-512/224 and SHA-512/256 in a list of recommendations. This isn’t saying “use one or the other”, that’s the (rather confusing) notation for a standardized SHA-512 truncation.
Note: This is actually what SHA-384 has done all along, and that’s one of the reasons why you see SHA-384 used more than SHA-512.
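As a quick PHP illustration of both options (hash() has shipped the 'sha512/256' algorithm since PHP 7.1, and hash_hmac() covers the HMAC route):

<?php
$message = 'some data to authenticate';

// Option 1: HMAC, if you have a secret key.
$key = random_bytes(32);
$mac = hash_hmac('sha256', $message, $key);

// Option 2: a standardized truncated hash. The attacker never sees the
// full internal state, so there is nothing to extend.
$digest = hash('sha512/256', $message);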
If you want to be extra fancy, you can also just use a different hash function that isn’t vulnerable to length extension, such as SHA-3 or BLAKE2.
Questions and Answers
Art by Khia.

Why isn’t BLAKE2 vulnerable to length extension attacks?

Quite simply: It sets a flag in the internal hash state before compressing the final buffer.

If you try to deserialize this state then invoke update(), you’ll get a different result than BLAKE2’s compress() produced during digest().
.For a secure hash function, a single bit of difference in the internal state should result in a wildly different output. (This is called the avalanche effect.)
Why isn’t SHA-3 vulnerable to length extension attacks?
SHA-3 is a sponge construction whose internal state is much larger than the hash function output. This prevents an attacker from recovering the hash function’s internal state from a message digest (similar to the truncated hash function discussed above).

Why don’t length-extension attacks break digital signature algorithms?

Digital signature algorithms–such as RSASSA, ECDSA, and EdDSA–take a cryptographic hash of a message and then perform some asymmetric cryptographic transformation of the hash with the secret key to produce a signature that can be verified with a public key. (The exact details are particular to the signature algorithm in question.)

Length-extension attacks only allow you to take a valid H(k || m) and produce a valid H(k || m || padding || extra) hash that will validate, even if you don’t know k. They don’t magically create collisions out of thin air.
Even if you use a weak hash function like SHA-1, knowing M and H(M) is not sufficient to calculate a valid signature. (You need to be able to know these values in order to verify the signature anyway.)
The security of digital signature algorithms depends entirely on the secrecy of the signing key and the security of the asymmetric cryptographic transformation used to generate a signature. (And its resilience to side-channel attacks.)
However, a more interesting class of attack is possible for systems that expect digital signatures to have similar properties as cryptographic hash functions. This would qualify as a protocol vulnerability, not a length-extension vulnerability.
TL;DR
Art by Khia.

Length-extension attacks exploit a neat property of a few cryptographic hash functions–most of which you shouldn’t be using in 2020 anyway (SHA-2 is still fine)–but can only be exploited under a narrow set of circumstances.
If you find yourself trying to use length-extension to break anything else, you’ve probably run into a cryptographic dead end and need to backtrack onto more interesting avenues of exploitation–of which there are assuredly many (unless your cryptography is boring).
Next: Timing Side-Channels
https://soatok.blog/2020/10/06/dead-ends-in-cryptanalysis-1-length-extension-attacks/
#cryptanalysis #crypto #cryptographicHashFunction #cryptography #lengthExtensionAttacks
If you’re ever tasked with implementing a cryptography feature–whether a high-level protocol or a low-level primitive–you will have to take special care to ensure you’re not leaking secret information through side-channels.
The descriptions of algorithms you learn in a classroom or textbook are not sufficient for real-world use. (Yes, that means your toy RSA implementation based on GMP from your computer science 101 class isn’t production-ready. Don’t deploy it.)
But what are these elusive side-channels exactly, and how do you prevent them? And in cases where you cannot prevent them, how can you mitigate the risk to your users?
Art by Swizz.
Contents
- Cryptographic Side-Channels
- Side-Channel Prevention and Mitigation
- Design Patterns for Algorithmic Constant-Time Code
- Constant-Time String Comparison
- Alternative: “Double HMAC” String Comparison
- Constant-Time Conditional Select
- Constant-Time String Inequality Comparison
- Constant-Time Integer Multiplication
- Constant-Time Integer Division
- Constant-Time Modular Inversion
- Constant-Time Null-Byte Trimming
- Further Reading and Online Resources
- Errata
Cryptographic Side-Channels
The concept of a side-channel isn’t inherently cryptographic, as Taylor Hornby demonstrates, but a side-channel can be a game over vulnerability in a system meant to maintain confidentiality (even if only for its cryptography keys).
Cryptographic side-channels allow an attacker to learn secret data from your cryptography system. To accomplish this, the attacker doesn’t necessarily study the system’s output (i.e. ciphertext); instead, they observe some other measurement, such as how much time or power was spent performing an operation, or what kind of electromagnetic radiation was emitted.
Important: While being resistant to side-channels is a prerequisite for implementations to be secure, it isn’t in and of itself sufficient for security. The underlying design of the primitives, constructions, and high-level protocols needs to be secure first, and that requires a clear and specific threat model for what you’re building.
Constant-time ECDSA doesn’t help you if you reuse k-values like it’s going out of style, but variable-time ECDSA still leaks your secret key to anyone who cares to probe your response times. Secure cryptography is very demanding.
Art by Riley.
Timing Leaks
Timing side-channels leak secrets through how much time it takes for an operation to complete.
There are many different flavors of timing leakage, including:
- Fast-failing comparison functions (memcmp() in C)
- Cache-timing vulnerabilities (e.g. software AES)
- Memory access patterns
- Conditional branches controlled by secrets
The bad news about timing leaks is that they’re almost always visible to an attacker over the network (including over the Internet (PDF)).
The good news is that most of them can be prevented or mitigated in software.
Art by Kyume.
Power Usage
Different algorithms or processor operations may require different amounts of power.
For example, squaring a large number may take less power than multiplying two different large numbers. This observation has led to the development of power analysis attacks against RSA.
Power analysis is especially relevant for embedded systems and smart cards, which are easier to extract a meaningful signal from than your desktop computer.
Some information leakage through power usage can be prevented through careful engineering (for example: BearSSL, which uses Montgomery multiplication instead of square-and-multiply).
But that’s not always an option, so generally these risks are mitigated.
My reaction when I first learned of power leaks: WATT (Art by Swizz)
Electromagnetic Emissions
Your computer is a reliable source of electromagnetic emissions (such as radio waves). Some of these emissions may reveal information about your cryptographic secrets, especially to an attacker with physical proximity to your device.
The good news is that research into EM emission side-channels isn’t as mature as side-channels through timing leaks or power usage. The bad news is that mitigations for breakthroughs will generally require hardware (e.g. electromagnetic shielding).
Aren’t computers terrifying? (Art by Swizz)
Side-Channel Prevention and Mitigation
Now that we’ve established a rough sense of some of the types of side-channels that are possible, we can begin to identify what causes them and aspire to prevent the leaks from happening–and where we can’t, to mitigate the risk to a reasonable level.
Note: To be clear, I didn’t cover all of the types of side-channels.
Prevention vs. Mitigation
Preventing a side-channel means eliminating the conditions that allow the information leak to occur in the first place. For timing leaks, this means making all algorithms constant-time.
There are entire classes of side-channel leaks that aren’t possible or practical to prevent in software. When you encounter one, the best you can hope to do is mitigate the risk.
Ideally, you want to make the attack more expensive to pull off than the reward an attacker will gain from it.
What is Constant-Time?
https://www.youtube.com/watch?v=ZD_H1ePLylA
When an implementation is said to be constant-time, what we mean is that the execution time of the code is not a function of its secret inputs.
For example: AES implemented with table look-ups for its S-Box is vulnerable to cache-timing attacks; constant-time AES is either implemented in hardware, or is bitsliced.
Malicious Environments and Algorithmic Constant-Time
One of the greatest challenges with writing constant-time code is distinguishing between algorithmic constant-time and provably constant-time. The main difference between the two is that you cannot trust your compiler (especially a JIT compiler), which may attempt to optimize your code in a way that reintroduces the side-channel you aspired to remove.
A sufficiently advanced compiler optimization is indistinguishable from an adversary.
–John Regehr, possibly with apologies to Arthur C. Clarke
For compiled languages, this is a tractable but expensive problem to solve: You simply have to formally verify everything from the source code to the compiler to the silicon chips that the code will be deployed on, and then audit your supply chain to prevent malicious tampering from going undetected.
For interpreted languages (e.g. PHP and JavaScript), this formal verification strategy isn’t really an option, unless you want to formally verify the runtime that interprets scripts and prove that the operations remain constant-time on top of all the other layers of distrust.
Is this level of paranoia really worth the effort?
For our cases, anyway! (Art by Khia.)
For that reason, we’re going to assume that algorithmic constant-time is adequate for the duration of this blog post.
If your threat model prevents you from accepting this assumption, feel free to put in the extra effort yourself and tell me how it goes. After all, as a furry who writes blog posts in my spare time for fun, I don’t exactly have the budget for massive research projects in formal verification.
Mitigation with Blinding Techniques
The best mitigation for some side-channels is called blinding: Obfuscating the inputs with some random data, then deobfuscating the outputs with the same random data, such that your keys are not revealed.
Two well-known examples include RSA decryption and Elliptic Curve Diffie-Hellman. I’ll focus on the latter, since it’s not as widely covered in the literature (although several cryptographers I’ve talked with were somehow knowledgeable about it; I suspect gatekeeping is involved).
Blinded ECDH Key Exchange
In typical ECDH implementations, you will convert a point on a Weierstrass curve (x, y) to a Jacobian coordinate system (X, Y, Z).

The exact conversion formula is (x = X/Z^2, y = Y/Z^3). The conversion almost makes intuitive sense.

Where does Z come from, though?

Art by circuitslime

It turns out, the choice for Z is totally arbitrary. Libraries typically set it equal to 1 (for best performance), but you can also set it to a random number. (You cannot set it to 0, however, for obvious reasons.)

Choosing a random Z means the calculations performed over Jacobian coordinates will be obscured by a randomly chosen factor (and thus, if Z is only used once per scalar multiplication, the bitwise signal the attackers rely on will be lost).
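In symbols, the randomization maps the same affine point to a fresh Jacobian representation for any nonzero λ, without changing the point it represents:

$$
(X, Y, Z) \mapsto (\lambda^2 X,\ \lambda^3 Y,\ \lambda Z), \qquad
x = \frac{\lambda^2 X}{(\lambda Z)^2} = \frac{X}{Z^2}, \quad
y = \frac{\lambda^3 Y}{(\lambda Z)^3} = \frac{Y}{Z^3}
$$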
Blinding techniques are cool. (Art by Khia.)
I think it’s really cool how one small tweak to the runtime of an algorithm can make it significantly harder to attack.
Design Patterns for Algorithmic Constant-Time Code
Mitigation techniques are cool, but preventing side-channels is a better value-add for most software.
To that end, let’s look at some design patterns for constant-time software. Some of these are relatively common; others, not so much.
Art by Scout Pawfoot.
If you prefer TypeScript / JavaScript, check out Soatok’s constant-time-js library on Github / NPM.
Constant-Time String Comparison
Rather than using string comparison (== in most programming languages, memcmp() in C), you want to compare cryptographic secrets and/or calculated integrity checks with a secure compare algorithm, which looks like this:
- Initialize a variable (let’s call it D) to zero.
- For each byte of the two strings:
- Calculate (left[i] XOR right[i])
- Bitwise OR the current value of D with the result of the XOR, store the output in D
- When the loop has concluded, D will be equal to 0 if and only if the two strings are equal.
In code form, it looks like this:
<?php
function ct_compare(string $left, string $right): bool
{
    $d = 0;
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        return false; // Lengths differ
    }
    for ($i = 0; $i < $length; ++$i) {
        $leftCharCode = unpack('C', $left[$i])[1];
        $rightCharCode = unpack('C', $right[$i])[1];
        $d |= ($leftCharCode ^ $rightCharCode);
    }
    return $d === 0;
}
In this example, I’m using PHP’s unpack() function to avoid cache-timing leaks with ord() and chr(). Of course, you can simply use hash_equals() instead of writing it yourself (PHP 5.6.0+).
Alternative: “Double HMAC” String Comparison
If the previous algorithm won’t work (i.e. because you’re concerned your JIT compiler will optimize it away), there is a popular alternative to consider. It’s called “Double HMAC” because it was traditionally used with Encrypt-Then-HMAC schemes.
The algorithm looks like this:
- Generate a random 256-bit key, K. (This can be cached between invocations, but it should be unpredictable.)
- Calculate HMAC-SHA256(K, left).
- Calculate HMAC-SHA256(K, right).
- Return true if the outputs of step 2 and 3 are equal.
This is provably secure, so long as HMAC-SHA256 is a secure pseudo-random function and the key K is unknown to the attacker.
In code form, the Double HMAC compare function looks like this:
<?php
function hmac_compare(string $left, string $right): bool
{
    static $k = null;
    if (!$k) $k = random_bytes(32);
    return (
        hash_hmac('sha256', $left, $k)
        ===
        hash_hmac('sha256', $right, $k)
    );
}
Constant-Time Conditional Select
I like to imagine a conversation between a cryptography engineer and a Zen Buddhist, that unfolds like so:
- CE: “I want to eliminate branching side-channels from my code.”
- ZB: “Then do not have branches in your code.”
And that is precisely what we intend to do with a constant-time conditional select: Eliminate branches by conditionally selecting one of two strings, without an IF statement.
Mind. Blown. (Art by Khia.)
This isn’t as tricky as it sounds. We’re going to use XOR and two’s complement to achieve this.
The algorithm looks like this:
- Convert the selection bit (TRUE/FALSE) into a mask value (-1 for TRUE, 0 for FALSE). Bitwise, -1 looks like 111111111…1111111111, while 0 looks like 00000000…00000000.
- Copy the right string into a buffer, call it tmp.
- Calculate left XOR right, call it x.
- Return (tmp XOR (x AND mask)).
Once again, in code this algorithm looks like this:
<?php
function ct_select(
    bool $returnLeft,
    string $left,
    string $right
): string {
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        throw new Exception('ct_select() expects two strings of equal length');
    }
    // Mask byte
    $mask = (-$returnLeft) & 0xff;
    // X
    $x = (string) ($left ^ $right);
    // Output = Right XOR (X AND Mask)
    $output = '';
    for ($i = 0; $i < $length; $i++) {
        $rightCharCode = unpack('C', $right[$i])[1];
        $xCharCode = unpack('C', $x[$i])[1];
        $output .= pack(
            'C',
            $rightCharCode ^ ($xCharCode & $mask)
        );
    }
    return $output;
}
You can test this code for yourself here. The function was designed to read intuitively like a ternary operator.
A Word of Caution on Cleverness
In some languages, it may seem tempting to use the bitwise trickery to swap out pointers instead of returning a new buffer. But do not fall for this Siren song.
If, instead of returning a new buffer, you just swap pointers, what you’ll end up doing is creating a timing leak through your memory access patterns. This can culminate in a timing vulnerability, but even if your data is too big to fit in a processor’s cache line (I dunno, Post-Quantum RSA keys?), there’s another risk to consider.
Virtual memory addresses are just beautiful lies. Where your data lives on the actual hardware memory is entirely up to the kernel. You can have two blobs with contiguous virtual memory addresses that live on separate memory pages, or even separate RAM chips (if you have multiple).
If you’re swapping pointers around, and they point to two different pieces of hardware, and one is slightly faster to read from than the other, you can introduce yet another timing attack through which pointer is being referenced by the processor.
It’s timing leaks all the ways down! (Art by Swizz)
If you’re swapping between X and Y before performing a calculation, where:
- X lives on RAM chip 1, which takes 3 ns to read
- Y lives on RAM chip 2, which takes 4 ns to read
…then the subsequent use of the swapped pointers reveals whether you’re operating on X or Y in the timing: It will take slightly longer to read from Y than from X.
The best way to mitigate this problem is to never design your software to have it in the first place. Don’t be clever on this one.
Constant-Time String Inequality Comparison
Sometimes you don’t just need to know if two strings are equal, you also need to know which one is larger than the other.
To accomplish this in constant-time, we need to maintain two state variables:
- gt (initialized to 0, will be set to 1 at some point if left > right)
- eq (initialized to 1, will be set to 0 at some point if left != right)
Endian-ness will dictate the direction our algorithm goes, but we’re going to perform two operations in each cycle:
- gt should be bitwise ORed with (eq AND ((right[i] - left[i]) right-shifted 8 times))
- eq should be bitwise ANDed with (((right[i] XOR left[i]) - 1) right-shifted 8 times)

If right and left are ever different, eq will be set to 0.

If, the first time they differ, the value of left[i] is greater than the value of right[i], then the subtraction will produce a negative number. Right-shifting a negative number 8 places and then bitwise ANDing the result with eq (which is only 1 until two bytes differ, and 0 henceforth if they do) will result in a 1 being ORed into gt. Thus, if (right[i] - left[i]) is negative, gt will be set to 1. Otherwise, it remains 0.

At the end of this loop, return (gt + gt + eq) - 1. This will result in the following possible values:
- left < right: -1
- left == right: 0
- left > right: 1
The arithmetic based on the possible values of gt and eq should be straightforward.
- Different (eq == 0) but not greater (gt == 0) means left < right, -1.
- Different (eq == 0) and greater (gt == 1) means left > right, 1.
- If eq == 1, no bytes ever differed, so left == right, 0.
A little endian implementation is as follows:
<?php
function str_compare(string $left, string $right): int
{
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        throw new Exception('str_compare() expects two strings of equal length');
    }
    $gt = 0;
    $eq = 1;
    $i = $length;
    while ($i > 0) {
        --$i;
        $leftCharCode = unpack('C', $left[$i])[1];
        $rightCharCode = unpack('C', $right[$i])[1];
        $gt |= (($rightCharCode - $leftCharCode) >> 8) & $eq;
        $eq &= (($rightCharCode ^ $leftCharCode) - 1) >> 8;
    }
    return ($gt + $gt + $eq) - 1;
}
Demo for this function is available here.
Constant-Time Integer Multiplication
Multiplying two integers is one of those arithmetic operations that should be constant-time. But on many older processors, it isn’t.
Of course there’s a microarchitecture timing leak! (Art by Khia.)
Fortunately, there is a workaround. It involves an algorithm called Ancient Egyptian Multiplication in some places or Peasant Multiplication in others.
Multiplying two numbers x and y this way looks like this:
- Determine the number of operations you need to perform. Generally, this is either known ahead of time or based on the bit length of the operands.
- Set out to 0.
- Until the operation count reaches zero:
  - If the lowest bit of y is set, add x to out.
  - Left shift x by 1.
  - Right shift y by 1.
- Return out.
The main caveat here is that you want to use bitwise operators in step 3.1 to remove the conditional branch, as in the sketch below.
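Here’s a minimal sketch in PHP (the function name is mine; it assumes inputs fit in 31 bits so the product fits PHP’s native signed 64-bit integers):

<?php
function ct_multiply(int $x, int $y): int
{
    $out = 0;
    // Fixed 31 iterations: one per possible bit of $y.
    for ($i = 0; $i < 31; ++$i) {
        // $mask is -1 (all ones) when the lowest bit of $y is set, else 0.
        $mask = -($y & 1);
        // Branchless version of "if ($y & 1) { $out += $x; }"
        $out += ($x & $mask);
        $x <<= 1;
        $y >>= 1;
    }
    return $out;
}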
For a complete, battle-tested implementation, refer to sodium_compat (a pure PHP polyfill for libsodium).
For big number libraries, implementing Karatsuba on top of this integer multiplying function should be faster than attempting to multiply bignums this way.
Constant-Time Integer Division
Although some cryptography algorithms call for integer division, division isn’t usually expected to be constant-time.
However, if you look up a division algorithm for unsigned integers with a remainder, you’ll likely encounter this algorithm, which is almost constant-time:
if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do  -- Where n is number of bits in N
  R := R << 1           -- Left-shift R by 1 bit
  R(0) := N(i)          -- Set the least-significant bit of R equal to bit i of the numerator
  if R ≥ D then
    R := R − D
    Q(i) := 1
  end
end
If we use the tricks we learned from implementing constant-time string inequality with constant-time conditional selection, we can implement this algorithm without timing leaks.
Our constant-time version of this algorithm looks like this:
if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do  -- Where n is number of bits in N
  R := R << 1           -- Left-shift R by 1 bit
  R(0) := N(i)          -- Set the least-significant bit of R equal to bit i of the numerator

  compared := ct_compare(R, D)  -- Use constant-time inequality
    -- if R > D then compared == 1, swap = 1
    -- if R == D then compared == 0, swap = 1
    -- if R < D then compared == -1, swap = 0
  swap := (1 - ((compared >> 31) & 1))

  Rprime := R - D       -- R' = R - D
  Qprime := Q           -- Q' = Q ...
  Qprime(i) := 1        -- ... with the i'th bit set to 1

  -- Replace (R with R', Q with Q') if swap == 1
  R = ct_select(swap, Rprime, R)
  Q = ct_select(swap, Qprime, Q)
end
It’s approximately twice as slow as the original, but it’s constant-time.
(Art by Khia.)
Constant-Time Modular Inversion
Modular inversion is the calculation of x^-1 mod p for some prime p. This is used in a lot of places, but especially in elliptic curve cryptography and RSA.
Daniel J. Bernstein and Bo-Yin Yang published a paper on fast constant-time GCD and Modular Inversion in 2019. The algorithm in question is somewhat straightforward to implement (although determining whether or not that implementation is safe is left as an exercise to the rest of us).
A simpler technique is to use Fermat’s Little Theorem: x^(p-2) mod p = x^-1 mod p, for some prime p. This only works with prime fields, and is slower than a Binary GCD (which isn’t necessarily constant-time, as OpenSSL discovered).
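A quick sketch of the Fermat approach with PHP’s GMP extension (this illustrates the math only; gmp_powm() makes no constant-time guarantees, so don’t feed it secrets as-is):

<?php
function fermat_inverse(GMP $x, GMP $p): GMP
{
    // x^(p-2) mod p == x^-1 mod p, by Fermat's Little Theorem.
    return gmp_powm($x, gmp_sub($p, 2), $p);
}

// 3 * 5 = 15 ≡ 1 (mod 7), so the inverse of 3 mod 7 is 5.
echo gmp_strval(fermat_inverse(gmp_init(3), gmp_init(7))); // 5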
BearSSL provides an implementation (and accompanying documentation) for a constant-time modular inversion algorithm based on Binary GCD.
(In the future, I may update this section of this blog post with an implementation in PHP, using the GMP extension.)
Constant-Time Null-Byte Trimming
Shortly after this guide first went online, security researchers published the Raccoon Attack, which used a timing leak in the number of leading 0 bytes in the pre-master secret–combined with a lattice attack to solve the hidden number problem–to break TLS-DH(E).
To solve this, you need two components:
- A function that returns a slice of an array without timing leaks.
- A function that counts the number of significant bytes (i.e. ignores leading zero bytes, counts from the first non-zero byte).
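Here’s a rough branchless sketch of the second component in PHP (the function name is mine):

<?php
function ct_count_significant_bytes(string $buf): int
{
    $count = 0;
    $seen = 0; // sticks at 1 once the first non-zero byte is observed
    $length = mb_strlen($buf, '8bit');
    for ($i = 0; $i < $length; ++$i) {
        $byte = unpack('C', $buf[$i])[1];
        // ($byte | -$byte) has its sign bit set iff $byte != 0, so this
        // yields 1 for a non-zero byte and 0 for a zero byte, branchlessly.
        $isNonZero = (($byte | -$byte) >> 63) & 1;
        $seen |= $isNonZero;
        $count += $seen;
    }
    return $count;
}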
A timing-safe array resize function needs to do two things:
- Touch every byte of the input array once.
- Touch every byte of the output array at least once, linearly. The constant-time division algorithm is useful here (to calculate x mod n for the output array index).
  - Conditionally select between input[x] and the existing output[x_mod_n], based on whether x >= target size.
I’ve implemented this in my constant-time-js library.
Further Reading and Online Resources
If you’re at all interested in cryptographic side-channels, your hunger for knowledge probably won’t be sated by a single blog post. Here’s a collection of articles, papers, books, etc. worth reading.
- BearSSL’s Documentation on Constant-Time Code — A must-read for anyone interested in this topic
- Cryptographically Secure PHP Development — How to write secure cryptography in languages that cryptographers largely neglect
- CryptoCoding — A style guide for writing secure cryptography code in C (with example code!)
- CryptoGotchas — An overview of the common mistakes one can make when writing cryptography code (which is a much wider scope than side-channels)
- Meltdown and Spectre — Two vulnerabilities that placed side-channels in the scope of most of infosec that isn’t interested in cryptography
- Serious Cryptography — For anyone who lacks the background knowledge to fully understand what I’m talking about on this page
Errata
- 2020-08-27: The original version of this blog post incorrectly attributed Jacobian coordinate blinding to ECDSA hardening, rather than ECDH hardening. This error was brought to my attention by Thai Duong. Thanks Thai!
- 2020-08-27: Erin correctly pointed out that omitting memory access timing was a disservice to developers, who might not be aware of the risks involved. I’ve updated the post to call this risk out specifically (especially in the conditional select code, which some developers might try to implement with pointer swapping without knowing the risks involved). Thanks Erin!
I hope you find this guide to side-channels helpful.
Thanks for reading!
Follow my blog for more Defense Against the Bark Arts posts in the future.
https://soatok.blog/2020/08/27/soatoks-guide-to-side-channel-attacks/
#asymmetricCryptography #constantTime #cryptography #ECDH #ECDSA #ellipticCurveCryptography #RSA #SecurityGuidance #sideChannels #symmetricCryptography
If you’re reading this wondering if you should stop using AES-GCM in some standard protocol (TLS 1.3), the short answer is “No, you’re fine”.

I specialize in secure implementations of cryptography, and my years of experience in this field have led me to dislike AES-GCM.
This post is about why I dislike AES-GCM’s design, not “why AES-GCM is insecure and should be avoided”. AES-GCM is still miles above what most developers reach for when they want to encrypt (e.g. ECB mode or CBC mode). If you want a detailed comparison, read this.
To be clear: This is solely my opinion and not representative of any company or academic institution.
What is AES-GCM?
AES-GCM is an authenticated encryption mode that uses the AES block cipher in counter mode with a polynomial MAC based on Galois field multiplication.

In order to explain why AES-GCM sucks, I have to first explain what I dislike about the AES block cipher. Then, I can describe why I’m filled with sadness every time I see the AES-GCM construction used.
What is AES?
The Advanced Encryption Standard (AES) is a specific subset of a block cipher called Rijndael.

Rijndael’s design is based on a substitution-permutation network, which broke tradition from many block ciphers of its era (including its predecessor, DES) in not using a Feistel network.
AES only includes three flavors of Rijndael: AES-128, AES-192, and AES-256. The difference between these flavors is the size of the key and the number of rounds used, but–and this is often overlooked–not the block size.
As a block cipher, AES always operates on 128-bit (16 byte) blocks of plaintext, regardless of the key size.
This is generally considered acceptable because AES is a secure pseudorandom permutation (PRP), which means that every possible plaintext block maps directly to one ciphertext block, and thus birthday collisions are not possible. (A pseudorandom function (PRF), conversely, does have birthday bound problems.)
Why AES Sucks
Art by Khia.

Side-Channels

The biggest reason why AES sucks is that its design uses a lookup table (called an S-Box) indexed by secret data, which is inherently vulnerable to cache-timing attacks (PDF).

There are workarounds for this AES vulnerability, but they either require hardware acceleration (AES-NI) or a technique called bitslicing.
The short of it is: With AES, you’re either using hardware acceleration, or you have to choose between performance and security. You cannot get fast, constant-time AES without hardware support.
Block Size
AES-128 is considered by experts to have a security level of 128 bits.

Similarly, AES-192 gets certified at 192-bit security, and AES-256 gets 256-bit security.
However, the AES block size is only 128 bits!
That might not sound like a big deal, but it severely limits the constructions you can create out of AES.
Consider the case of AES-CBC, where the output of each block of encryption is combined with the next block of plaintext (using XOR). This is typically used with a random 128-bit block (called the initialization vector, or IV) for the first block.
This means you expect a collision (at 50% probability) after encrypting 2^64 blocks.
When you start getting collisions, you can break CBC mode, as this video demonstrates:
https://www.youtube.com/watch?v=v0IsYNDMV7A
This is significantly smaller than the 2^128 security you expect from AES.
Post-Quantum Security?
With respect to the number of attempts needed to find the correct key, cryptographers estimate that AES-128 will have a post-quantum security level of 64 bits, AES-192 will have a post-quantum security level of 96 bits, and AES-256 will have a post-quantum security level of 128 bits.

This is because Grover’s quantum search algorithm can search N unsorted items in O(√N) time, which can be used to reduce the total number of possible secrets from 2^n to 2^(n/2). This effectively cuts the security level, expressed in bits, in half.
Note that this heuristic estimate is based on the number of guesses (a time factor), and doesn’t take circuit size into consideration. Grover’s algorithm also doesn’t parallelize well. The real-world security of AES may still be above 100 bits if you consider these nuances.
But remember, even AES-256 operates on 128-bit blocks.
Consequently, for AES-256, there should be approximately 2^128 possible keys that will map a given plaintext block to a given ciphertext block.
Furthermore, there will be many keys that, for a constant plaintext block, will produce the same ciphertext block despite being a different key entirely. (n.b. This doesn’t mean for all plaintext/ciphertext block pairings, just some arbitrary pairing.)
Concrete example: Encrypting a plaintext block consisting of sixteen NUL bytes will yield a specific 128-bit ciphertext exactly once for each given AES-128 key. However, there are 2^128 times as many AES-256 keys as there are possible plaintext/ciphertext blocks. Keep this in mind for AES-GCM.
This means it’s conceivable to accidentally construct a protocol that, despite using AES-256 safely, has a post-quantum security level on par with AES-128, which is only 64 bits.
This would not be nearly as much of a problem if AES’s block size was 256 bits.
Real-World Example: Signal
The Signal messaging app is the state-of-the-art for private communications. If you were previously using PGP and email, you should use Signal instead.

Signal aims to provide private communications (text messaging, voice calls) between two mobile devices, piggybacking on your pre-existing contacts list.
Part of their operational requirements is that they must be user-friendly and secure on a wide range of Android devices, stretching all the way back to Android 4.4.
The Signal Protocol uses AES-CBC + HMAC-SHA256 for message encryption. Each message is encrypted with a different AES key (due to the Double Ratchet), which limits the practical blast radius of a cache-timing attack and makes practical exploitation difficult (since you can’t effectively replay decryption in order to leak bits about the key).
Thus, Signal’s message encryption is still secure even in the presence of vulnerable AES implementations.
Hooray for well-engineered protocols managing to actually protect users.
Art by Swizz.

However, the storage service in the Signal App uses AES-GCM, and this key has to be reused in order for the encrypted storage to operate.
This means, for older phones without dedicated hardware support for AES (i.e. low-priced phones from 2013, which Signal aims to support), the risk of cache-timing attacks is still present.
This is unacceptable!
What this means is, a malicious app that can flush the CPU cache and measure timing with sufficient precision can siphon the AES-GCM key used by Signal to encrypt your storage without ever violating the security boundaries enforced by the Android operating system.
As a result of the security boundaries never being crossed, these kind of side-channel attacks would likely evade forensic analysis, and would therefore be of interest to the malware developers working for nation states.
Of course, if you’re on newer hardware (i.e. Qualcomm Snapdragon 835), you have hardware-accelerated AES available, so it’s probably a moot point.
Why AES-GCM Sucks Even More
AES-GCM is an authenticated encryption mode that also supports additional authenticated data. Cryptographers call these modes AEAD.

AEAD modes are more flexible than simple block ciphers. Generally, your encryption API accepts the following:
- The plaintext message.
- The encryption key.
- A nonce (a Number used ONCE: a value that must never repeat under the same key).
- Optional additional data which will be authenticated but not encrypted.
The output of an AEAD function is both the ciphertext and an authentication tag, which is necessary (along with the key and nonce, and optional additional data) to decrypt the plaintext.
Cryptographers almost universally recommend using AEAD modes for symmetric-key data encryption.
That being said, AES-GCM is possibly my least favorite AEAD, and I’ve got good reasons to dislike it beyond simply, “It uses AES”.
The deeper you look into AES-GCM’s design, the harder you will feel this sticker.
GHASH Brittleness
The way AES-GCM is initialized is stupid: You encrypt an all-zero block with your AES key (in ECB mode) and store it in a variable called H. This value is used for authenticating all messages authenticated under that AES key, rather than for a given (key, nonce) pair.
Diagram describing Galois/Counter Mode, taken from Wikipedia.
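In the notation of NIST SP 800-38D (for a 96-bit nonce), the moving parts are:

$$
\begin{aligned}
H &= E_K(0^{128}) \\
J_0 &= \text{nonce} \,\|\, 0^{31} \,\|\, 1 \\
T &= \mathrm{GHASH}_H(A, C) \oplus E_K(J_0)
\end{aligned}
$$

Note that H depends only on the key; only the final masking value E_K(J_0) (what this post later calls HF) changes with the nonce.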
This is often sold as an advantage: Reusing H allows for better performance. However, it makes GCM brittle: Reusing a nonce allows an attacker to recover H and then forge messages forever. This is called the “forbidden attack”, and it led to real-world practical breaks.

Let’s contrast AES-GCM with the other AEAD mode supported by TLS: ChaCha20-Poly1305, or ChaPoly for short.
ChaPoly uses one-time message authentication keys (derived from each key/nonce pair). If you manage to leak a Poly1305 key, the impact is limited to the messages encrypted under that (ChaCha20 key, nonce) pair.
While that’s still bad, it isn’t “decrypt all messages under that key forever” bad like with AES-GCM.
Note: “Message Authentication” here is symmetric, which only provides a property called message integrity, not sender authenticity. For the latter, you need asymmetric cryptography (wherein the ability to verify a message doesn’t imply the capability to generate a new signature), which is totally disparate from symmetric algorithms like AES or GHASH. You probably don’t need to care about this nuance right now, but it’s good to know in case you’re quizzed on it later.
H Reuse and Multi-User Security
If you recall, AES operates on 128-bit blocks even when 256-bit keys are used.

If we assume AES is well-behaved, we can deduce that there are approximately 2^128 different 256-bit keys that will map a single plaintext block to a single ciphertext block.

This is trivial to calculate. Simply divide the number of possible keys (2^256) by the number of possible block states (2^128) to yield the number of keys that produce a given ciphertext for a single block of plaintext: 2^128.

Each key that will map an arbitrarily specific plaintext block to a specific ciphertext block is also separated in the keyspace by approximately 2^128.

This means there are approximately 2^128 independent keys that will map a given all-zero plaintext block to an arbitrarily chosen value of H (if we assume AES doesn’t have weird biases).
Credit: Harubaki
“Why Does This Matter?”
It means that, with keys larger than 128 bits, you can model the selection of H as a 128-bit pseudorandom function, rather than a 128-bit permutation. As a result, you can expect a collision with 50% probability after only 2^64 different keys are selected.

Note: Your 128-bit randomly generated AES keys already have this probability baked into their selection, but this specific analysis doesn’t really apply for 128-bit keys, since AES is a PRP, not a PRF, so there is no “collision” risk. However, you end up at the same upper limit either way.
But 50% isn’t good enough for cryptographic security.
In most real-world systems, we target a 2^-32 collision risk. So that means our safety limit is actually 2^48 different AES keys before you have to worry about H reuse.
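The 2^48 figure is just the birthday bound solved for a target probability (a back-of-the-envelope version):

$$
p \approx \frac{q^2}{2^{129}} \quad\Rightarrow\quad q \approx \sqrt{2^{129-32}} = 2^{48.5}
$$

where q is the number of keys sampled, 128 is the bit-length of H, and p = 2^-32 is the target collision risk; rounding down gives the 2^48 safety limit.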
This isn’t the same thing as symmetric wear-out (where you need to re-key after a given number of encryptions to prevent nonce reuse). Rather, it means that after your entire population has exhausted the safety limit of 2^48 different AES keys, you have to either accept the risk or stop using AES-GCM.
If you have a billion users (about 2^30), the safety limit is breached after 2^18 AES keys per user (approximately 262,000).
“What Good is H Reuse for Attackers if HF differs?”
There are two numbers used in AES-GCM that are derived from the AES key: H, which is used for block multiplication, and HF (the value of Counter 0 encrypted with the key, from the following diagram), which is XORed with the final result to produce the authentication tag.

The arrow highlighted with green is HF.
It’s tempting to think that a reuse of H isn’t a concern, because HF will necessarily be randomized, which prevents an attacker from observing when H collides. It’s certainly true that the single-block collision risk discussed previously for H will almost certainly not also result in a collision for HF. And since HF isn’t reused unless a nonce is reused (which also leaks H directly), this might seem like a non-issue.
Art by Khia.
However, it’s straightforward to go from a condition of H reuse to an adaptive chosen-ciphertext attack.
- Intercept multiple valid ciphertexts.
- e.g. Multiple JWTs encrypted with {"alg":"A256GCM"}
- Use your knowledge of H, the ciphertext, and the AAD to calculate the GCM tag up to the final XOR. This, along with the existing authentication tag, will tell you the value of HF for a given nonce.
- Calculate a new authentication tag for a chosen ciphertext using HF and your candidate H value, then replay it into the target system.
While the blinding offered by XORing the final output with HF is sufficient to stop H from being leaked directly, the protection is one-way.

Ergo, a collision in H is not sufficiently thwarted by HF.
“How Could the Designers Have Prevented This?”
The core issue here is the AES block size, again.

If we were analyzing a 256-bit block variant of AES, and a congruent GCM construction built atop it, none of what I wrote in this section would apply.
However, the 128-bit block size was a design constraint enforced by NIST in the AES competition. This block size was during an era of 64-bit block ciphers (e.g. Triple-DES and Blowfish), so it was a significant improvement at the time.
NIST’s AES competition also inherited from the US government’s tradition of thinking in terms of “security levels”, which is why there are three different permitted key sizes (128, 192, or 256 bits).
“Why Isn’t This a Vulnerability?”
There’s always a significant gap in security, wherein something isn’t safe to recommend, but also isn’t susceptible to a known practical attack. This gap is important to keep systems secure, even when they aren’t on the bleeding edge of security.

Using 1024-bit RSA is a good example of this: No one has yet, to my knowledge, successfully factored a 1024-bit RSA public key. However, most systems have recommended a minimum of 2048 bits for years (and many recommend 3072-bit or 4096-bit today).
With AES-GCM, the expected distance between collisions in H is 2^64, and finding an untargeted collision requires being able to observe more than 2^64 different sessions, and somehow distinguish when H collides.

As a user, you know that after 2^48 different keys, you’ve crossed the safety boundary for avoiding H collisions. But as an attacker, you need 2^64 bites at the apple, not 2^48. Additionally, you need some sort of oracle or distinguisher for when this happens.

We don’t have that kind of distinguisher available to us today. And even if we had one available, the amount of data you would need to search in order for any two users in the population to reuse/collide is challenging to work with. You would need the computational and data storage capabilities of a major cloud service provider to even think about pulling the attack off.
Naturally, this isn’t a practical vulnerability. This is just another gripe I have with AES-GCM, as someone who has to work with cryptographic algorithms a lot.
Short Nonces
Although the AES block size is 16 bytes, AES-GCM nonces are only 12 bytes. The latter 4 bytes are dedicated to an internal counter, which is used with AES in Counter Mode to actually encrypt/decrypt messages.

(Yes, you can use arbitrary-length nonces with AES-GCM, but if you use nonces longer than 12 bytes, they get hashed into 12 bytes anyway, so it’s not a detail most people should concern themselves with.)
If you ask a cryptographer, “How much can I encrypt safely with AES-GCM?” you’ll get two different answers.
- Message Length Limit: AES-GCM can be used to encrypt messages up to 2^36 bytes long, under a given (key, nonce) pair.
- Number of Messages Limit: If you generate your nonces randomly, you have a 50% chance of a nonce collision after 2^48 messages.

However, 50% isn’t conservative enough for most systems, so the safety margin is usually much lower. Cryptographers generally set the key wear-out of AES-GCM at 2^32 random nonces, which represents a collision probability of one in 4 billion.

These limits are acceptable for session keys for encryption-in-transit, but they impose serious operational limits on application-layer encryption with long-term keys.
Random Key Robustness
Before the advent of AEAD modes, cryptographers used to combine block cipher modes of operation (e.g. AES-CBC, AES-CTR) with a separate message authentication code algorithm (e.g. HMAC, CBC-MAC).

You had to be careful in how you composed your protocol, lest you invite Cryptographic Doom into your life. A lot of developers screwed this up. Standardized AEAD modes promised to make life easier.
Many developers gained their intuition for authenticated encryption modes from protocols like Signal’s (which combines AES-CBC with HMAC-SHA256), and would expect AES-GCM to be a drop-in replacement.
Unfortunately, GMAC doesn’t offer the same security benefits as HMAC: Finding a different (ciphertext, HMAC key) pair that produces the same authentication tag is a hard problem, due to HMAC’s reliance on cryptographic hash functions. This makes HMAC-based constructions “message committing”, which instills Random Key Robustness.
Critically, AES-GCM doesn’t have this property. You can calculate a random (ciphertext, key) pair that collides with a given authentication tag very easily.
This fact prohibits AES-GCM from being considered for use with OPAQUE (which requires RKR), one of the upcoming password-authenticated key exchange algorithms. (Read more about them here.)
Better-Designed Algorithms
You might be thinking, “Okay random furry, if you hate AES-GCM so much, what would you propose we use instead?”

I’m glad you asked!
XChaCha20-Poly1305
For encrypting messages under a long-term key, you can’t really beat XChaCha20-Poly1305.
- ChaCha is a stream cipher based on a 512-bit ARX hash function in counter mode. ChaCha doesn’t use S-Boxes. It’s fast and constant-time without hardware acceleration.
- ChaCha20 is ChaCha with 20 rounds.
- XChaCha nonces are 24 bytes, which allows you to generate them randomly and not worry about a birthday collision until about 2^80 messages (for the same collision probability as AES-GCM).
- Poly1305 uses a different 256-bit key for each (nonce, key) pair, and is easier to implement in constant-time than AES-GCM.
- XChaCha20-Poly1305 uses the first 16 bytes of the nonce and the 256-bit key to generate a distinct subkey, and then employs the standard ChaCha20-Poly1305 construction used in TLS today.
For application-layer cryptography, XChaCha20-Poly1305 contains most of the properties you’d want from an authenticated mode.
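If you want to try it out, PHP’s bundled libsodium extension (PHP 7.2+) exposes XChaCha20-Poly1305 directly:

<?php
$key   = sodium_crypto_aead_xchacha20poly1305_ietf_keygen();
$nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES); // 24 bytes

$ciphertext = sodium_crypto_aead_xchacha20poly1305_ietf_encrypt(
    'attack at dawn', // plaintext
    '',               // additional authenticated data
    $nonce,
    $key
);
$plaintext = sodium_crypto_aead_xchacha20poly1305_ietf_decrypt(
    $ciphertext, '', $nonce, $key
);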
However, like AES-GCM (and all other Polynomial MACs I’ve heard of), it is not message committing.
The Gimli Permutation
For lightweight cryptography (n.b. important for IoT), the Gimli permutation (e.g. employed in libhydrogen) is an attractive option.

Gimli is a Round 2 candidate in NIST’s Lightweight Cryptography project. The Gimli permutation offers a lot of applications: a hash function, message authentication, encryption, etc.
Critically, it’s possible to construct a message-committing protocol out of Gimli that will hit a lot of the performance goals important to embedded systems.
Closing Remarks
Despite my personal disdain for AES-GCM, if you’re using it as intended by cryptographers, it’s good enough.

Don’t throw AES-GCM out just because of my opinions. It’s very likely the best option you have.
Although I personally dislike AES and GCM, I’m still deeply appreciative of the brilliance and ingenuity that went into both designs.
My desire is for the industry to improve upon AES and GCM in future cipher designs so we can protect more people, from a wider range of threats, in more diverse protocols, at a cheaper CPU/memory/time cost.
We wouldn’t have a secure modern Internet without the work of Vincent Rijmen, Joan Daemen, John Viega, David A. McGrew, and the countless other cryptographers and security researchers who made AES-GCM possible.
Change Log
- 2021-10-26: Added section on H Reuse and Multi-User Security.
https://soatok.blog/2020/05/13/why-aes-gcm-sucks/
#AES #AESGCM #cryptography #GaloisCounterMode #opinion #SecurityGuidance #symmetricCryptography