Canonicalization Attacks occur when a protocol that feeds data into a hash function used in a Message Authentication Code (MAC) or Digital Signature calculation fails to ensure some property that’s expected of the overall protocol.
The textbook example of a canonicalization attack is the length-extension attack against hash functions such as MD5, which famously broke the security of Flickr’s API signatures.
But there’s a more interesting attack to think about, which affects the design of security token/envelope formats (PASETO, DSSE, etc.) and comes up often when folks try to extend basic notions of authenticated encryption (AE) to include additional authenticated (but unencrypted) data (thus yielding an AEAD mode).
Let’s start with a basic AE definition, then extend it to AEAD poorly, then break our extension. Afterwards, we can think about strategies for doing it better.
Turning CTR+HMAC into AEAD
Signal uses AES-CBC then HMAC-SHA2 to encrypt messages between mobile devices.
We often refer to this shape as “CBC+HMAC” (although this is a few typos away from being confused with a very different idea called CBC-MAC).
When CBC+HMAC is used with the AES block cipher with 256-bit keys and HMAC-SHA2, it becomes AES-256-CBC+HMAC-SHA256. What a mouthful!
Yuck! Who let a cryptography nerd name anything?
(Art by Lynx vs Jackalope)
In modern designs, AES-CTR is often preferable to AES-CBC, since you don’t have to deal with padding (or padding oracles).
Most systems these days prefer GCM over CBC+HMAC or CTR+HMAC. I don’t like AES-GCM (especially if your use-case is “support platforms without hardware acceleration”), but it’s hardly the worst choice for most applications. AES-GCM is a dedicated AEAD mode, while CBC+HMAC and CTR+HMAC merely provide AE.
Why Does Additional Data Matter?
Art: Harubaki
The main purpose of Additional Data (the AD in AEAD) is to bind an encrypted payload (ciphertext + authentication tag) to a given context. This is extremely helpful in mitigating Confused Deputy attacks.
Critically, this additional data is not encrypted. (At least, on this level; it’s probably wise to communicate over HTTPS!)
Naive CTR+HMAC to AEAD Extension
In a normal CTR+HMAC definition, your algorithm looks something like this:
- Generate a random nonce equal in length to the block size of your block cipher. (16 bytes for AES.)
- Encrypt your message with AES-CTR, using the given key and nonce.
- Calculate the HMAC-SHA2 output of the nonce followed by the ciphertext from step 2.
- Return nonce, ciphertext, MAC.
Decryption basically runs steps 3 and 2 in reverse: Recalculate the MAC (in constant-time!), decrypt ciphertext, return plaintext.
The most obvious way to extend this design to support additional authenticated data is to append it to the ciphertext.
This yields the following updated protocol:
- Generate a random nonce equal in length to the block size of your block cipher. (16 bytes for AES.)
- Encrypt your message with AES-CTR, using the given key and nonce.
- Calculate the HMAC-SHA2 output of the nonce, followed by the ciphertext from step 2, followed by the additional authenticated data.
- Return nonce, ciphertext, MAC.
Suffice to say, this is not a secure construction.
The Canonicalization Attack
Let’s say you built this extended protocol to encrypt a payload that looks like a URI string, but wanted to bind the token to a given browser session, so you use the HTTP User-Agent header as the AAD.
When you generate a token, you might produce the following:
const crypto = require('crypto');

function splitKey(key) {
    let hmac;
    hmac = crypto.createHmac('sha256', key);
    hmac.update('encrypt-key');
    let Ek = hmac.digest();
    hmac = crypto.createHmac('sha256', key);
    hmac.update('hmac-key');
    let Ak = hmac.digest();
    return [Ek, Ak];
}

function encryptWithContext(plaintext, aad, key) {
    let [encKey, authKey] = splitKey(key);
    let nonce = crypto.randomBytes(16);
    const aes = crypto.createCipheriv('aes-256-ctr', encKey, nonce);
    const ciphertext = Buffer.concat([aes.update(plaintext), aes.final()]);
    // Pay attention to this part:
    const hmac = crypto.createHmac('sha256', authKey);
    hmac.update(nonce);
    hmac.update(ciphertext);
    hmac.update(aad);
    return [nonce, ciphertext, hmac.digest()];
}

let plaintext = [
    'expires=1630223780',
    'access_group=1234',
    'subject=auth-v2.example.com',
    'restrictions=on'
].join('&');
// expires=1630223780&access_group=1234&subject=auth-v2.example.com&restrictions=on

const key = crypto.randomBytes(32);
// userAgent is the HTTP User-Agent header described above
let [nonce, ciphertext, tag] = encryptWithContext(plaintext, userAgent, key);
So here’s the clever attack: If you can shift bytes from the payload into the prefix of your User-Agent string, they’ll produce the same HMAC tag.
Attackers can truncate as much of the payload as they want by prepending it to the User-Agent included in their HTTP request.
You can even turn this:
expires=1630223780&access_group=1234&subject=auth-v2.example.com&restrictions=on
…into this:
expires=1630223780&access_group=1234&subject=auth-v2.example.com
…without invalidating the existing authentication tag–just by ensuring that the last 16 bytes of ciphertext are prepended to your User-Agent and removed from the payload.
More broadly, any time you have a multi-part message being fed into a hash function, if you aren’t careful with how you feed it into the hash function, you can induce trivial collisions.
See also: IOTA’s Kerl hash function.
This is obviously true, because hash functions are deterministic: The same input will always produce the same output. If you can control one or more parts of a multi-part message, you can collide the input–thereby creating a collision in the output.
This can affect any protocol that depends on hash functions, but most obviously, HMAC and Digital Signature algorithms are in scope.
But what does “being careful” look like? Let’s look at a safe example.
Pre-Authentication Encoding (PAE)
Earlier I had mentioned PASETO and DSSE. They both have this notion of a “PAE” algorithm which aims to prevent canonicalization attacks.
PASETO’s definition for PAE is as follows:
function LE64(n) {
    var str = '';
    for (var i = 0; i < 8; ++i) {
        if (i === 7) {
            // Clear the MSB for interoperability
            n &= 127;
        }
        str += String.fromCharCode(n & 255);
        n = n >>> 8;
    }
    return str;
}

function PAE(pieces) {
    if (!Array.isArray(pieces)) {
        throw TypeError('Expected an array.');
    }
    var count = pieces.length;
    var output = LE64(count);
    for (var i = 0; i < count; i++) {
        output += LE64(pieces[i].length);
        /***
         * Soatok Note: This JS pseudocode incorrectly assumes strings,
         * rather than buffers. It's only meant to illustrate the idea.
         * In real implementations, don't join Buffers with +.
         ***/
        output += pieces[i];
    }
    return output;
}
What this does (with all lengths as 64-bit unsigned integers, serialized as 8 bytes):
- Prepend the number of parts being hashed.
- For each part, first prefix its length, then its value.
This is an obvious mitigation for canonicalization attacks:
- If you feed in a different number of pieces, the count (the first 8 bytes) will differ.
- If you try to move data from one piece to another, you’ll produce different lengths for both pieces, which will not yield an input collision to the hash function.
However, it’s important that both mechanisms are in play to guarantee security:
- Without the length prefixes, we’re no different than the CTR+HMAC extension we defined above.
- Without the count prefix, it’s possible to drop pieces and then include a dummy “length” in the payload of others to create an input collision.
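Here’s a rough Buffer-based sketch of the same idea in Node.js (my own illustration, not PASETO’s reference code; pieces are assumed to be Buffers):

```javascript
// Buffer-based PAE: count prefix, then a length prefix before each piece.
function LE64(n) {
  const buf = Buffer.alloc(8);
  // Counts and lengths stay far below 2^63 in practice.
  buf.writeBigUInt64LE(BigInt(n));
  return buf;
}

function PAE(pieces) {
  if (!Array.isArray(pieces)) throw new TypeError('Expected an array.');
  const output = [LE64(pieces.length)];   // 1. count of pieces
  for (const piece of pieces) {
    output.push(LE64(piece.length));      // 2. length prefix...
    output.push(piece);                   // ...then the piece itself
  }
  return Buffer.concat(output);
}
```

Per PASETO’s documented test vectors, PAE([]) is eight 0x00 bytes, and PAE([Buffer.from('test')]) is LE64(1) || LE64(4) || "test".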
What’s an Input Collision?
First, you need to know what a collision attack is.
Consider a hash function, H(). If you can identify any two input messages (m1, m2) such that H(m1) = H(m2), you’ve found a collision in the output of the hash function.
An input collision is dumber than that.
If m1 is constructed from multiple segments with different meanings, you don’t need an m2. Just find multiple ways (the collisions) to result in the same m1 value (the input).
That’s what we did earlier when we shifted bytes from the ciphertext to the user agent.
DSSE Leaves Me Dizzy
It should come as no surprise that I find DSSE’s definition of PAE to be somewhat bizarre.
PAE(type, body) = "DSSEv1" + SP + LEN(type) + SP + type + SP + LEN(body) + SP + body

+        = concatenation
SP       = ASCII space [0x20]
"DSSEv1" = ASCII [0x44, 0x53, 0x53, 0x45, 0x76, 0x31]
LEN(s)   = ASCII decimal encoding of the byte length of s, with no leading zeros
The only thing that saves them from canonicalization attacks is that the number of pieces is constant.
If the number of pieces were variable (e.g. if the KEYID was optionally included in the signature, but they forgot to always include a hard-coded 0 length when it was absent), you could defeat their flavor of PAE by constructing two different messages that produce the same hash in the digital signature algorithm.
This is because the number of pieces isn’t included in the DSSE definition. (If they ever support a variable number of components, and fail to include the count in the signature, they’ll be vulnerable.)
Amusingly, the rationale page for DSSE using PAE states:
- Why use PAE?
- Because we need an unambiguous way of serializing two fields, payloadType and payload. PAE is already documented and good enough. No need to reinvent the wheel.
…Yet, they didn’t actually use the “already documented and good enough” definition of PAE from PASETO.
Let’s not use DSSE’s definition.
(Art by Lynx vs Jackalope)
Fixing AES-CTR+HMAC with PAE
This is a pretty straightforward patch:
function encryptWithContext(plaintext, aad, key) {
    let [encKey, authKey] = splitKey(key);
    let nonce = crypto.randomBytes(16);
    const aes = crypto.createCipheriv('aes-256-ctr', encKey, nonce);
    const ciphertext = Buffer.concat([aes.update(plaintext), aes.final()]);
    // Pay attention to this part:
    const hmac = crypto.createHmac('sha256', authKey);
-   hmac.update(nonce);
-   hmac.update(ciphertext);
-   hmac.update(aad);
+   hmac.update(PAE([nonce, ciphertext, aad]));
    return [nonce, ciphertext, hmac.digest()];
}
The only conceivable way to attack this design is to aim for an integer overflow, which will require sending at least 2^63 bytes–at which point, you’re more likely to DDoS the target than succeed.
Wrapping Up
Canonicalization Attacks broadly aren’t well-understood or widely appreciated risks with cryptography protocol design outside of specialist circles (although many application security folks are at least aware of specific instances, i.e. Length Extension).
Part of the reason for this lack of knowledge transfer is that all of the AEAD modes defend against it by design, and most artisanal authenticated encryption constructions don’t bother with additional authenticated data, and most home-made cryptography protocols don’t even authenticate their ciphertexts correctly, and …
You get the point, I hope. There’s unaddressed concerns all the way down. Expecting people who aren’t specialized experts in this specific field to get all of them right is frankly unreasonable. In practice, outside of cryptography, canonicalization either doesn’t matter or there’s something else wrong that’s far more urgent.
https://soatok.blog/2021/07/30/canonicalization-attacks-against-macs-and-signatures/
#collisionAttacks #cryptographicHashFunction #cryptography #digitalSignatureAlgorithm #DSSE #HMAC #lengthExtensionAttacks #MACs #PASETO #SecurityGuidance
This is the first entry in a (potentially infinite) series of dead end roads in the field of cryptanalysis.

Cryptography engineering is one of many specialties within the wider field of security engineering. Security engineering is a discipline that chiefly concerns itself with studying how systems fail in order to build better systems–ones that are resilient to malicious acts or even natural disasters. It sounds much simpler than it is.
If you want to develop and securely implement a cryptography feature in the application you’re developing, it isn’t enough to learn how to implement textbook descriptions of cryptography primitives during your C.S. undergrad studies (or equivalent). An active interest in studying how cryptosystems fail is the prerequisite for being a cryptography engineer.
Thus, cryptography engineering and cryptanalysis research go hand-in-hand.
Pictured: How I feel when someone tells me about a novel cryptanalysis technique relevant to the algorithm or protocol I’m implementing. (Art by Khia.)
If you are interested in exploring the field of cryptanalysis–be it to contribute on the attack side of cryptography or to learn better defense mechanisms–you will undoubtedly encounter roads that seem enticing and not well-tread, and it might not be immediately obvious why the road is a dead end. Furthermore, beyond a few comparison tables on Wikipedia or obscure Stack Exchange questions, the cryptology literature is often sparse on details about why these avenues lead nowhere.
So let’s explore where some of these dead-end roads lead, and why they stop where they do.
(Art by Kyume.)
Length Extension Attacks
It’s difficult to provide a better summary of length extension attacks than what Skull Security wrote in 2012. However, that only addresses “What are they?”, “How do you use them?”, and “Which algorithms and constructions are vulnerable?” It leaves out a more interesting question: “Why were they even possible to begin with?”

An Extensive Tale

Tale, not tail! (Art by Swizz.)

To really understand length extension attacks, you have to understand how cryptographic hash functions used to be designed. This might sound intimidating, but we don’t need to delve too deep into the internals.
A cryptographic hash function is a keyless pseudorandom transformation from a variable length input to a fixed-length output. Hash functions are typically used as building blocks for larger constructions (both reasonable ones like HMAC-SHA-256, and unreasonable ones like my hash-crypt project).
However, hash functions like SHA-256 are designed to operate on sequential blocks of input. This is because sometimes you need to stream data into a hash function rather than load it all into memory at once. (This is why you can sha256sum a file larger than your available RAM without crashing your computer or causing performance headaches.)
A streaming hash function API might look like this:
class MyCoolHash(BaseHashClass):
    @staticmethod
    def init():
        """ Initialize the hash state. """

    def update(data):
        """ Update the hash state with additional data. """

    def digest():
        """ Finalize the hash function. """

    def compress():
        """ (Private method.) """
To use it, you’d call hash = MyCoolHash.init() and then chain together hash.update() calls with data as you load it from disk or the network, until you’ve run out of data. Then you’d call digest() and obtain the hash of the entire message.

There are two things to take away right now:

- You can call update() multiple times, and that’s valid.
- Your data might not be an even multiple of the internal block size of the hash function. (More often than not, it won’t be!)

So what happens when you call digest() and the amount of data you’ve passed to update() is not an even multiple of the block size?

For most hash functions, the answer is simple: Append some ISO/IEC 7816-4 padding until you get a full block, run that through a final iteration of the internal compression function–the same one that gets called on update()–and then output the current internal state.

Let’s take a slightly deeper look at what a typical runtime would look like for the MyCoolHash class I sketched above:
hash = MyCoolHash.init()

- Initialize some variables to some constants (initialization vectors).

hash.update(blockOfData)

- Start with any buffered data (currently none) and count up to 32 bytes. If you’ve reached this amount, invoke compress() on that data and clear the buffer. Otherwise, just append blockOfData to the currently buffered data.
- For every 32 bytes of data not yet touched by compress(), invoke compress() on this block (updating the internal state).
- If you have any leftover bytes, append them to the internal buffer for the next invocation to process.

hash.update(moreData)

- Same as before, except there might be some buffered data from step 2.

output = hash.digest()

- If you have any data left in the buffer, append a 0x80 byte followed by a bunch of 0x00 bytes of padding until you reach the block size. If you don’t, you have an entire block of padding (0x80 followed by 0x00s).
- Call compress() one last time.
- Serialize the internal hash state as a byte array or hexadecimal-encoded string (depending on usage). Return that to the caller.
This is a fairly general description that will hold for most older hash functions. Some details might be slightly wrong (subtly different padding scheme, whether or not to include a block of empty padding on digest() invocations, etc.).

The details aren’t super important. Just the rhythm of the design:
init()
update()
- load buffer, compress()
- compress()
- compress()
- …
- buffer remainder
update()
- load buffer, compress()
- compress()
- compress()
- …
- buffer remainder
- …
digest()
- load buffer, pad, compress()
- serialize internal state
- return
And thus, without having to know any of the details about what compress() even looks like, the reason why length extension attacks were ever possible should leap out at you!

Art by Khia.

If it doesn’t, look closely at the difference between update() and digest(). There are only two differences:

- update() doesn’t pad before calling compress()
- digest() returns the internal state that compress() always mutates

The reason length-extension attacks are possible is that, for some hash functions, the output of digest() is its full internal state.

This means that you can take an existing hash output and pretend it’s the internal state after an update() call instead of a digest() call: append the padding, then, after calling compress(), append additional data of your choice.

The (F)Utility of Length Extension
Length-Extension Attacks are mostly used for attacking naive message authentication systems where someone attempts to authenticate a message (M) with a secret key (k), but they construct it like so:
auth_code = vulnerable_hash(k.append(M))
If this sounds like a very narrow use-case, that’s because it is. However, it still broke Flickr’s API once, and it’s a popular challenge for CTF competitions around the world.

Consequently, length-extension attacks are sometimes thought to be vulnerabilities of the construction rather than of the hash function. For a Message Authentication Code construction, these are classified under canonicalization attacks.

After all, even though SHA-256 is vulnerable to length-extension, you can’t actually exploit it unless someone is using it in a vulnerable fashion.

That being said, it’s common to say that hash functions like SHA-256 and SHA-512 are prone to length-extension.
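For the curious, the “padding” an attacker appends (often called glue padding) is easy to compute. For SHA-256 it’s a 0x80 byte, enough 0x00 bytes to leave room for an 8-byte length field in the final 64-byte block, then the original input length in bits as a big-endian 64-bit integer. A sketch (the function name is mine):

```javascript
// Computes the Merkle–Damgård glue padding SHA-256 appends to an input
// of totalLen bytes (for an attacker: key length + message length).
function sha256GluePadding(totalLen) {
  // Pad to 56 mod 64, accounting for the mandatory 0x80 byte first.
  const zeroes = (((55 - totalLen) % 64) + 64) % 64;
  const pad = Buffer.alloc(1 + zeroes + 8);
  pad[0] = 0x80;
  pad.writeBigUInt64BE(BigInt(totalLen) * 8n, 1 + zeroes); // length in bits
  return pad;
}
```

An attacker who can guess len(k) + len(M) appends this glue padding to M, then their chosen extension, and the extended hash validates without them ever learning k.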
Ways to Avoid Length-Extension Attacks
Use HMAC. HMAC was designed to prevent these kinds of attacks.

Alternatively, if you don’t have any cryptographic secrets, you can always do what bitcoin did: Hash your hash again.

return sha256(sha256(message))

Note: Don’t actually do that, it’s dangerous for other reasons. You also don’t want to take this to an extreme. If you iterate your hash too many times, you’ll reinvent PBKDF1 and its insecurity. Two is plenty.

Or you can do something really trivial (which ultimately became another standard option in the SHA-2 family of hash functions):
Always start with a 512-bit hash and then truncate your output so the attacker never recovers the entire internal state of your hash in order to extend it.
That’s why you’ll sometimes see SHA-512/224 and SHA-512/256 in a list of recommendations. This isn’t saying “use one or the other”; that’s the (rather confusing) notation for a standardized SHA-512 truncation.
Note: This is actually what SHA-384 has done all along, and that’s one of the reasons why you see SHA-384 used more than SHA-512.
If you want to be extra fancy, you can also just use a different hash function that isn’t vulnerable to length extension, such as SHA-3 or BLAKE2.
Questions and Answers
Art by Khia.

Why isn’t BLAKE2 vulnerable to length extension attacks?

Quite simply: It sets a flag in the internal hash state before compressing the final buffer.

If you try to deserialize this state then invoke update(), you’ll get a different result than BLAKE2’s compress() produced during digest().

For a secure hash function, a single bit of difference in the internal state should result in a wildly different output. (This is called the avalanche effect.)
Why isn’t SHA-3 vulnerable to length extension attacks?
SHA-3 is a sponge construction whose internal state is much larger than the hash function output. This prevents an attacker from recovering the hash function’s internal state from a message digest (similar to the truncated hash function discussed above).

Why don’t length-extension attacks break digital signature algorithms?

Digital signature algorithms–such as RSASSA, ECDSA, and EdDSA–take a cryptographic hash of a message and then perform some asymmetric cryptographic transformation of the hash with the secret key to produce a signature that can be verified with a public key. (The exact details are particular to the signature algorithm in question.)

Length-extension attacks only allow you to take a valid H(k || m) and produce a valid H(k || m || padding || extra) hash that will validate, even if you don’t know k. They don’t magically create collisions out of thin air.
Even if you use a weak hash function like SHA-1, knowing M and H(M) is not sufficient to calculate a valid signature. (You need to be able to know these values in order to verify the signature anyway.)
The security of digital signature algorithms depends entirely on the secrecy of the signing key and the security of the asymmetric cryptographic transformation used to generate a signature. (And its resilience to side-channel attacks.)
However, a more interesting class of attack is possible for systems that expect digital signatures to have similar properties as cryptographic hash functions. This would qualify as a protocol vulnerability, not a length-extension vulnerability.
TL;DR
Art by Khia.

Length-extension attacks exploit a neat property of a few cryptographic hash functions–most of which you shouldn’t be using in 2020 anyway (SHA-2 is still fine)–but can only be exploited under a narrow set of circumstances.
If you find yourself trying to use length-extension to break anything else, you’ve probably run into a cryptographic dead end and need to backtrack onto more interesting avenues of exploitation–of which there are assuredly many (unless your cryptography is boring).
Next: Timing Side-Channels
https://soatok.blog/2020/10/06/dead-ends-in-cryptanalysis-1-length-extension-attacks/
#cryptanalysis #crypto #cryptographicHashFunction #cryptography #lengthExtensionAttacks