Search
Items tagged with: ZeroDay
Spyware written for educational institutions to flex their muscles of control over students and their families when learning from their home computer is still, categorically, spyware.
Depending on your persuasion, the previous sentence sounds like either needless pedantry, or it reads like tautology. But we need to be clear on our terms.
- Educational spyware is still spyware.
- Spyware is categorized as a subset of malware.
When vulnerabilities are discovered in malware, the normal rules of coordinated disclosure are out of scope. Are we clear?
(Art by Khia.)
So let’s talk about Proctorio!
https://twitter.com/Angry_Cassie/status/1301360994044850182
I won’t go into the details of Proctorio or why it’s terrible for (especially disadvantaged) students. Read Cassie’s Twitter thread for more context on that. Seriously. I’m not gonna be one of those guys that talks over women, and neither should you.
What I am here to talk about today are these dubious claims about the security of their product:
From their Data Security page.
Zero-Knowledge Encryption? OMGWTFBBQ!
In cryptography, there is a class of algorithms called Zero-Knowledge Proofs. In a Zero-Knowledge Proof, you prove that you possess some fact without revealing any details about the fact.
It’s kind of abstract to think about (and until we’re ready to talk about Pedersen commitments, I’m just going to defer to Sarah Jamie Lewis), but the only thing you need to know about “Zero Knowledge” in Cryptography is that the output is a boolean (True, False).
You can’t use “Zero Knowledge” anything to encrypt. So “Zero-Knowledge Encryption” is a meaningless buzzword.
https://twitter.com/SchmiegSophie/status/1304407483658592256
So what are they actually describing when they say Zero Knowledge Encryption?
Also from their Data Security page.
Okay, so they’ve built their own key distribution system and are encrypting with AES-GCM… and shipped this in a Chrome extension. But before we get to that, look at this Daily Vulnerability Tests claim.
Bullshit.
Running Nessus (or equivalent) on a cron job isn’t a meaningful metric of security. At best, it creates alert fatigue when you accidentally screw up a deployment configuration or forget to update your software for 4+ years. (Y’know, like JsZip 3.2.1, which they bundle.)
A dumb vulnerability scan isn’t the same thing as a routine (usually quarterly) penetration test or a code audit. And if you’re working in cryptography, you better have both!
Timing Leaks in Proctorio’s AES-GCM Implementation
If you download version 1.4.20241.1.0 of the Proctorio Chrome Extension, run src/assets/J5HG.js through a JS beautifier, and then look at its contents, you will quickly realize this is a JavaScript cryptography library.
Since the “zero knowledge” encryption they’re so proud about uses AES-GCM, let’s focus on that.
Proctorio’s AES-GCM implementation exists in an object called dhs.mode.gcm, which is mildly obfuscated, but contains the following functions:
- encrypt() – Encrypt with AES-GCM
- decrypt() – Decrypt with AES-GCM
- aa() – GHASH block multiplication
- j() – XOR + GMAC utility function
- O() – Called by encrypt() and decrypt(); does all of the AES-CTR + GMAC fun
If you’re not familiar with AES-GCM, just know this: Timing leaks can be used to leak your GMAC key to outside applications, which completely breaks the authentication of AES-GCM and opens the door to chosen-ciphertext attacks.
So is their implementation of AES-GCM constant-time? Let’s take a look at aa():
aa: function(a, b) {
    var c, d, e, f, g, h = dhs.bitArray.ba;
    for (e = [0, 0, 0, 0], f = b.slice(0), c = 0; 128 > c; c++) {
        for (
            (d = 0 !== (a[Math.floor(c / 32)] & 1 << 31 - c % 32)) && (e = h(e, f)),
                g = 0 !== (1 & f[3]),
                d = 3;
            d > 0;
            d--
        )
            f[d] = f[d] >>> 1 | (1 & f[d - 1]) << 31;
        f[0] >>>= 1, g && (f[0] ^= -520093696)
    }
    return e
},
This is a bit obtuse, but this line captures the lowest bit of f on each iteration: g = 0 !== (1 & f[3]).

Since f gets bitwise right-shifted 128 times, this actually leaks every bit of f over the course of each block multiplication, because the execution of (f[0] ^= -520093696) depends on whether or not g is set to true.
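Here is a sketch (mine, not Proctorio’s code; the function name is made up) of how that particular step could be made constant-time: turn the secret bit into an all-ones or all-zeros mask and apply the XOR unconditionally, so the same instructions execute regardless of the bit. The other secret-dependent branch in aa() (the (e = h(e, f)) call gated on bits of a) would need the same masking treatment.

// Sketch only: a constant-time version of the "shift right and conditionally
// reduce" step from aa().
function ghashShiftStep(f: number[]): void {
  const lsb = f[3] & 1;        // secret bit, read but never branched on
  const mask = -lsb | 0;       // 0xFFFFFFFF if lsb === 1, otherwise 0
  for (let d = 3; d > 0; d--) {
    f[d] = (f[d] >>> 1) | ((f[d - 1] & 1) << 31);
  }
  f[0] >>>= 1;
  f[0] ^= mask & -520093696;   // same work performed whether or not the bit was set
}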
Timing leaks? So much for “zero-knowledge”!
Also, they claim to be FIPS 140-2 compliant, but this is how they generate randomness in their cryptography library.
https://twitter.com/SoatokDhole/status/1304623361461547009
(Although, that’s probably a lie.)
To mitigate these vulnerabilities, one need look no further than the guide to side-channel attacks I published last month.
(Also, use WebCrypto to generate entropy! What the fuck.)
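For the record, here is roughly what that looks like (a sketch using the standard WebCrypto API that is available to browser extensions; none of this is Proctorio’s code):

// Generate 32 bytes of cryptographically secure randomness:
const seed = crypto.getRandomValues(new Uint8Array(32));

// Better still, let WebCrypto do AES-GCM end-to-end instead of hand-rolling
// the block cipher and GHASH in JavaScript:
const key = await crypto.subtle.generateKey(
  { name: 'AES-GCM', length: 256 },
  false,                 // not extractable
  ['encrypt', 'decrypt']
);
const nonce = crypto.getRandomValues(new Uint8Array(12));
const ciphertext = await crypto.subtle.encrypt(
  { name: 'AES-GCM', iv: nonce },
  key,
  new TextEncoder().encode('exam session data')
);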
If Proctorio is Insecure, What Should We Use Instead?
Nothing.
Schools that demand students install spyware on their personal computers are only a step removed from domestic abusers who install stalkerware on their victims’ phones.
Proctorio isn’t the problem here, they’re only a symptom.
Schools that insist on violating the integrity and parental dominion of their students’ home computers are the problem here.
https://twitter.com/lex_about_sex/status/1304156305398140928
If you want to ensure the integrity of students’ education, try teaching them about consent and ethical computing. (Y’know, concepts that are fundamentally incompatible with the business model of Proctorio and Proctorio’s competitors.)
Disclosure Timeline
Really? Really?
This was a zero-day disclosure, because full disclosure is the responsible choice when dealing with spyware. Don’t even @ me.
Disclaimer: This security research was conducted by Soatok– some furry on the Internet–while bored on his off-time and does not reflect the opinions of any company.
Domains to Sinkhole
If you’re looking to protect your home network from this spyware, here are a list of domains to sinkhole (i.e. with Pi-Hole).
- proctorauth.com
- proctordata.com
- getproctorio.com
- proctor.io
- proctor.in
- az545770.vo.msecnd.net
Update (2020-09-16): AES in a Hole
I got curious about the security of how they were encrypting data, not just the authentication of that encrypted data.
After all, if Proctorio decided to home-bake GCM in an insecure way, it’s overwhelmingly likely that they rolled their own AES implementation too.
Interlude: Software AES
Implementing AES in software is terrifying. Even more so in JavaScript, since your code might be secure in a Node.js environment but insecure in a web browser–or vice versa! And you have no way of knowing as a JavaScript developer, because you can’t verify the interpreter.
At the risk of being slightly reductionist, Software AES can be implemented in roughly two ways:
- The insecure but fast way.
- The slow but secure way.
Option 1 means using table look-ups, which are fast, but vulnerable to cache-timing attacks.
Option 2 means using a bitsliced implementation, like BearSSL provides.
Naturally, Proctorio’s AES Implementation Went For Option 1
I’m beginning to think Proctorio’s use of “zero-knowledge” is nothing more than self-description. (Art by Khia.)
You can find their AES implementation in the dhs.cipher.aes and w functions in J5HG.js.
What Is the Risk to Students?
Cache-timing vulnerabilities are much more dangerous to students than the previously disclosed issue with GMAC.
That is to say: They cough up your encryption keys. Furthermore, such attacks have taken as little as 65 milliseconds to exploit (on 2006 hardware, to boot).
Cache-timing leaks can be easily exploited from other processes running on the same computer as Proctorio. This includes JavaScript code running in another browser window or tab.
Successfully exploiting this vulnerability would net you the keys used by Proctorio to encrypt and authenticate data, and exploiting it would not require violating any of the operating system’s security boundaries. Just running exploit code on the same hardware.
Isn’t cryptography fun?
https://soatok.blog/2020/09/12/edutech-spyware-is-still-spyware-proctorio-edition/
#AESGCM #cryptography #EduTech #JavaScript #Spyware #ZeroDay
If you’re reading this wondering if you should stop using AES-GCM in some standard protocol (TLS 1.3), the short answer is “No, you’re fine”.

I specialize in secure implementations of cryptography, and my years of experience in this field have led me to dislike AES-GCM.
This post is about why I dislike AES-GCM’s design, not “why AES-GCM is insecure and should be avoided”. AES-GCM is still miles above what most developers reach for when they want to encrypt (e.g. ECB mode or CBC mode). If you want a detailed comparison, read this.
To be clear: This is solely my opinion and not representative of any company or academic institution.
What is AES-GCM?
AES-GCM is an authenticated encryption mode that uses the AES block cipher in counter mode with a polynomial MAC based on Galois field multiplication.

In order to explain why AES-GCM sucks, I have to first explain what I dislike about the AES block cipher. Then, I can describe why I’m filled with sadness every time I see the AES-GCM construction used.
What is AES?
The Advanced Encryption Standard (AES) is a specific subset of a block cipher called Rijndael.

Rijndael’s design is based on a substitution-permutation network, which broke tradition from many block ciphers of its era (including its predecessor, DES) in not using a Feistel network.
AES only includes three flavors of Rijndael: AES-128, AES-192, and AES-256. The difference between these flavors is the size of the key and the number of rounds used, but–and this is often overlooked–not the block size.
As a block cipher, AES always operates on 128-bit (16 byte) blocks of plaintext, regardless of the key size.
This is generally considered acceptable because AES is a secure pseudorandom permutation (PRP), which means that every possible plaintext block maps directly to one ciphertext block, and thus birthday collisions are not possible. (A pseudorandom function (PRF), conversely, does have birthday bound problems.)
Why AES Sucks
Art by Khia.

Side-Channels
The biggest reason why AES sucks is that its design uses a lookup table (called an S-Box) indexed by secret data, which is inherently vulnerable to cache-timing attacks (PDF).

There are workarounds for this AES vulnerability, but they either require hardware acceleration (AES-NI) or a technique called bitslicing.
The short of it is: With AES, you’re either using hardware acceleration, or you have to choose between performance and security. You cannot get fast, constant-time AES without hardware support.
Block Size
AES-128 is considered by experts to have a security level of 128 bits.

Similarly, AES-192 gets certified at 192-bit security, and AES-256 gets 256-bit security.
However, the AES block size is only 128 bits!
That might not sound like a big deal, but it severely limits the constructions you can create out of AES.
Consider the case of AES-CBC, where the output of each block of encryption is combined with the next block of plaintext (using XOR). This is typically used with a random 128-bit block (called the initialization vector, or IV) for the first block.
This means you expect a collision (at 50% probability) after encrypting 2^64 blocks.
When you start getting collisions, you can break CBC mode, as this video demonstrates:
https://www.youtube.com/watch?v=v0IsYNDMV7A
This is significantly smaller than the 2^128 you expect from AES.
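For the curious, here is the arithmetic behind that bound (my own back-of-the-envelope, using the standard birthday approximation):

\sqrt{2^{128}} = 2^{64} \text{ blocks}
\qquad
2^{64} \text{ blocks} \times 2^{4} \text{ bytes/block} = 2^{68} \text{ bytes} \approx 256\ \text{EiB}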
Post-Quantum Security?
With respect to the number of attempts needed to find the correct key, cryptographers estimate that AES-128 will have a post-quantum security level of 64 bits, AES-192 will have a post-quantum security level of 96 bits, and AES-256 will have a post-quantum security level of 128 bits.

This is because Grover’s quantum search algorithm can search N unsorted items in roughly sqrt(N) time, which can be used to reduce the total number of possible secrets from 2^256 to 2^128 (for AES-256). This effectively cuts the security level, expressed in bits, in half.
Note that this heuristic estimate is based on the number of guesses (a time factor), and doesn’t take circuit size into consideration. Grover’s algorithm also doesn’t parallelize well. The real-world security of AES may still be above 100 bits if you consider these nuances.
But remember, even AES-256 operates on 128-bit blocks.
Consequently, for AES-256, there should be approximately 2^256 possible (plaintext, key) pairs that produce any given ciphertext block (one plaintext for each of the 2^256 keys).
Furthermore, there will be many keys that, for a constant plaintext block, will produce the same ciphertext block despite being a different key entirely. (n.b. This doesn’t mean for all plaintext/ciphertext block pairings, just some arbitrary pairing.)
Concrete example: Encrypting a plaintext block consisting of sixteen NUL bytes will yield a specific 128-bit ciphertext exactly once for each given AES-128 key. However, there are 2^128 times as many AES-256 keys as there are possible plaintext/ciphertexts. Keep this in mind for AES-GCM.
This means it’s conceivable to accidentally construct a protocol that, despite using AES-256 safely, has a post-quantum security level on par with AES-128, which is only 64 bits.
This would not be nearly as much of a problem if AES’s block size was 256 bits.
Real-World Example: Signal
The Signal messaging app is the state-of-the-art for private communications. If you were previously using PGP and email, you should use Signal instead.

Signal aims to provide private communications (text messaging, voice calls) between two mobile devices, piggybacking on your pre-existing contacts list.
Part of their operational requirements is that they must be user-friendly and secure on a wide range of Android devices, stretching all the way back to Android 4.4.
The Signal Protocol uses AES-CBC + HMAC-SHA256 for message encryption. Each message is encrypted with a different AES key (due to the Double Ratchet), which limits the practical blast radius of a cache-timing attack and makes practical exploitation difficult (since you can’t effectively replay decryption in order to leak bits about the key).
Thus, Signal’s message encryption is still secure even in the presence of vulnerable AES implementations.
Hooray for well-engineered protocols managing to actually protect users.
Art by Swizz.

However, the storage service in the Signal App uses AES-GCM, and this key has to be reused in order for the encrypted storage to operate.
This means, for older phones without dedicated hardware support for AES (i.e. low-priced phones from 2013, which Signal aims to support), the risk of cache-timing attacks is still present.
This is unacceptable!
What this means is, a malicious app that can flush the CPU cache and measure timing with sufficient precision can siphon the AES-GCM key used by Signal to encrypt your storage without ever violating the security boundaries enforced by the Android operating system.
As a result of the security boundaries never being crossed, these kind of side-channel attacks would likely evade forensic analysis, and would therefore be of interest to the malware developers working for nation states.
Of course, if you’re on newer hardware (i.e. Qualcomm Snapdragon 835), you have hardware-accelerated AES available, so it’s probably a moot point.
Why AES-GCM Sucks Even More
AES-GCM is an authenticated encryption mode that also supports additional authenticated data. Cryptographers call these modes AEAD.

AEAD modes are more flexible than simple block ciphers. Generally, your encryption API accepts the following:
- The plaintext message.
- The encryption key.
- A nonce (a number that must only be used once).
- Optional additional data which will be authenticated but not encrypted.
The output of an AEAD function is both the ciphertext and an authentication tag, which is necessary (along with the key and nonce, and optional additional data) to decrypt the plaintext.
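To make the shape of that API concrete, here is a small sketch using WebCrypto’s AES-GCM (the function names and parameters are my own illustration, not any particular library’s interface):

// Seal: (key, nonce, plaintext, additional data) -> ciphertext || tag
async function aeadSeal(
  key: CryptoKey,
  nonce: Uint8Array,
  plaintext: Uint8Array,
  additionalData: Uint8Array
): Promise<Uint8Array> {
  // WebCrypto appends the 16-byte GCM authentication tag to the ciphertext.
  const sealed = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv: nonce, additionalData },
    key,
    plaintext
  );
  return new Uint8Array(sealed);
}

// Open: rejects if the tag (or additional data) doesn't match.
async function aeadOpen(
  key: CryptoKey,
  nonce: Uint8Array,
  sealed: Uint8Array,
  additionalData: Uint8Array
): Promise<Uint8Array> {
  const plaintext = await crypto.subtle.decrypt(
    { name: 'AES-GCM', iv: nonce, additionalData },
    key,
    sealed
  );
  return new Uint8Array(plaintext);
}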
Cryptographers almost universally recommend using AEAD modes for symmetric-key data encryption.
That being said, AES-GCM is possibly my least favorite AEAD, and I’ve got good reasons to dislike it beyond simply, “It uses AES”.
The deeper you look into AES-GCM’s design, the harder you will feel this sticker.
GHASH Brittleness
The way AES-GCM is initialized is stupid: You encrypt an all-zero block with your AES key (in ECB mode) and store it in a variable called H. This value is used for authenticating all messages authenticated under that AES key, rather than for a given (key, nonce) pair.
Diagram describing Galois/Counter Mode, taken from Wikipedia.
This is often sold as an advantage: Reusing H allows for better performance. However, it makes GCM brittle: Reusing a nonce allows an attacker to recover H and then forge messages forever. This is called the “forbidden attack”, and led to real world practical breaks.

Let’s contrast AES-GCM with the other AEAD mode supported by TLS: ChaCha20-Poly1305, or ChaPoly for short.
ChaPoly uses one-time message authentication keys (derived from each key/nonce pair). If you manage to leak a Poly1305 key, the impact is limited to the messages encrypted under that (ChaCha20 key, nonce) pair.
While that’s still bad, it isn’t “decrypt all messages under that key forever” bad like with AES-GCM.
Note: “Message Authentication” here is symmetric, which only provides a property called message integrity, not sender authenticity. For the latter, you need asymmetric cryptography (wherein the ability to verify a message doesn’t imply the capability to generate a new signature), which is totally disparate from symmetric algorithms like AES or GHASH. You probably don’t need to care about this nuance right now, but it’s good to know in case you’re quizzed on it later.
H Reuse and Multi-User Security
If you recall, AES operates on 128-bit blocks even when 256-bit keys are used.

If we assume AES is well-behaved, we can deduce that there are approximately 2^128 different 256-bit keys that will map a single plaintext block to a single ciphertext block.
This is trivial to calculate. Simply divide the number of possible keys (2^256) by the number of possible block states (2^128) to yield the number of keys that produce a given ciphertext for a single block of plaintext: 2^128.
Each key that will map an arbitrarily specific plaintext block to a specific ciphertext block is also separated in the keyspace by approximately 2^128.
This means there are approximately 2^128 independent keys that will map a given all-zero plaintext block to an arbitrarily chosen value of H (if we assume AES doesn’t have weird biases).
Credit: Harubaki
“Why Does This Matter?”
It means that, with keys larger than 128 bits, you can model the selection of H as a 128-bit pseudorandom function, rather than a 128-bit permutation. As a result, you can expect a collision with 50% probability after only 2^64 different keys are selected.

Note: Your 128-bit randomly generated AES keys already have this probability baked into their selection, but this specific analysis doesn’t really apply for 128-bit keys since AES is a PRP, not a PRF, so there is no “collision” risk. However, you end up at the same upper limit either way.
But 50% isn’t good enough for cryptographic security.
In most real-world systems, we target a 2^-32 collision risk. So that means our safety limit is actually 2^48 different AES keys before you have to worry about reuse.
This isn’t the same thing as symmetric wear-out (where you need to re-key after a given number of encryptions to prevent nonce reuse). Rather, it means after your entire population has exhausted the safety limit of 2^48 different AES keys, you have to either accept the risk or stop using AES-GCM.
If you have a billion users (2^30), the safety limit is breached after 2^18 AES keys per user (approximately 262,000).
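Worked out explicitly (my arithmetic, using the figures above):

\frac{2^{48}\ \text{keys (global safety limit)}}{2^{30}\ \text{users}} = 2^{18} \approx 262{,}144\ \text{AES keys per user}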
“What Good is H Reuse for Attackers if HF differs?”
There are two numbers used in AES-GCM that are derived from the AES key. H is used for block multiplication, and HF (the output of the block cipher over the nonce with a counter of 0, from the following diagram) is XORed with the final result to produce the authentication tag.

The arrow highlighted with green is HF.
It’s tempting to think that a reuse of H isn’t a concern because HF will necessarily be randomized, which prevents an attacker from observing when H collides. It’s certainly true that the single-block collision risk discussed previously for H will almost certainly not also result in a collision for HF. And since HF isn’t reused unless a nonce is reused (which also leaks H directly), this might seem like a non-issue.
Art by Khia.
However, it’s straightforward to go from a condition of H reuse to an adaptive chosen-ciphertext attack.
- Intercept multiple valid ciphertexts.
  - e.g. Multiple JWTs encrypted with {"alg":"A256GCM"}
- Use your knowledge of H, the ciphertext, and the AAD to calculate the GCM tag up to the final XOR. This, along with the existing authentication tag, will tell you the value of HF for a given nonce.
- Calculate a new authentication tag for a chosen ciphertext using HF and your candidate H value, then replay it into the target system.
While the blinding offered by XORing the final output with HF is sufficient to stop H from being leaked directly, the protection is one-way.
Ergo, a collision in H is not sufficiently thwarted by HF.
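To see why, it helps to write out how the tag is computed. In the notation of this section (H for the GHASH key, HF for the encrypted counter-zero block), the tag over additional data A and ciphertext C is:

T = \mathrm{GHASH}_{H}(A, C) \oplus HF
\qquad\Rightarrow\qquad
HF = T \oplus \mathrm{GHASH}_{H}(A, C)

So once you know H, a single valid (A, C, T) under a given nonce hands you HF for that nonce, and with both values you can compute a valid tag for any ciphertext of your choosing.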
“How Could the Designers Have Prevented This?”
The core issue here is the AES block size, again.

If we were analyzing a 256-bit block variant of AES, and a congruent GCM construction built atop it, none of what I wrote in this section would apply.
However, the 128-bit block size was a design constraint enforced by NIST in the AES competition. This block size was chosen during an era of 64-bit block ciphers (e.g. Triple-DES and Blowfish), so it was a significant improvement at the time.
NIST’s AES competition also inherited from the US government’s tradition of thinking in terms of “security levels”, which is why there are three different permitted key sizes (128, 192, or 256 bits).
“Why Isn’t This a Vulnerability?”
There’s always a significant gap in security, wherein something isn’t safe to recommend, but also isn’t susceptible to a known practical attack. This gap is important to keep systems secure, even when they aren’t on the bleeding edge of security.

Using 1024-bit RSA is a good example of this: No one has yet, to my knowledge, successfully factored a 1024-bit RSA public key. However, most systems have recommended a minimum of 2048 bits for years (and many recommend 3072-bit or 4096-bit today).
With AES-GCM, the expected distance between collisions in H is 2^64, and finding an untargeted collision requires being able to observe more than 2^64 different sessions, and somehow distinguish when H collides.
As a user, you know that after 2^48 different keys, you’ve crossed the safety boundary for avoiding collisions. But as an attacker, you need 2^64 bites at the apple, not 2^48. Additionally, you need some sort of oracle or distinguisher for when this happens.
We don’t have that kind of distinguisher available to us today. And even if we had one, the amount of data you would need to sift through to find any two users in the population whose H values collide is challenging to work with. You would need the computational and storage resources of a major cloud service provider to even think about pulling the attack off.
Naturally, this isn’t a practical vulnerability. This is just another gripe I have with AES-GCM, as someone who has to work with cryptographic algorithms a lot.
Short Nonces
Although the AES block size is 16 bytes, AES-GCM nonces are only 12 bytes. The remaining 4 bytes of the block are dedicated to an internal counter, which is used with AES in Counter Mode to actually encrypt/decrypt messages.

(Yes, you can use arbitrary-length nonces with AES-GCM, but if you use nonces longer than 12 bytes, they just get hashed down internally anyway, so it’s not a detail most people should concern themselves with.)
If you ask a cryptographer, “How much can I encrypt safely with AES-GCM?” you’ll get two different answers.
- Message Length Limit: AES-GCM can be used to encrypt messages up to about 2^36 bytes (roughly 64 GiB) long, under a given (key, nonce) pair.
- Number of Messages Limit: If you generate your nonces randomly, you have a 50% chance of a nonce collision after 2^48 messages.
However, 50% isn’t conservative enough for most systems, so the safety margin is usually much lower. Cryptographers generally set the key wear-out of AES-GCM at 2^32 random nonces, which represents a collision probability of about one in 4 billion.

These limits are acceptable for session keys for encryption-in-transit, but they impose serious operational limits on application-layer encryption with long-term keys.
Random Key Robustness
Before the advent of AEAD modes, cryptographers used to combine block cipher modes of operation (e.g. AES-CBC, AES-CTR) with a separate message authentication code algorithm (e.g. HMAC, CBC-MAC).

You had to be careful in how you composed your protocol, lest you invite Cryptographic Doom into your life. A lot of developers screwed this up. Standardized AEAD modes promised to make life easier.
Many developers gained their intuition for authenticated encryption modes from protocols like Signal’s (which combines AES-CBC with HMAC-SHA256), and would expect AES-GCM to be a drop-in replacement.
Unfortunately, GMAC doesn’t offer the same security benefits as HMAC: Finding a different (ciphertext, HMAC key) pair that produces the same authentication tag is a hard problem, due to HMAC’s reliance on cryptographic hash functions. This makes HMAC-based constructions “message committing”, which instills Random Key Robustness.
Critically, AES-GCM doesn’t have this property. You can calculate a random (ciphertext, key) pair that collides with a given authentication tag very easily.
This fact prohibits AES-GCM from being considered for use with OPAQUE (which requires RKR), one of the upcoming password-authenticated key exchange algorithms. (Read more about them here.)
Better-Designed Algorithms
You might be thinking, “Okay random furry, if you hate AES-GCM so much, what would you propose we use instead?”

I’m glad you asked!
XChaCha20-Poly1305
For encrypting messages under a long-term key, you can’t really beat XChaCha20-Poly1305.
- ChaCha is a stream cipher based on a 512-bit ARX hash function in counter mode. ChaCha doesn’t use S-Boxes. It’s fast and constant-time without hardware acceleration.
- ChaCha20 is ChaCha with 20 rounds.
- XChaCha nonces are 24 bytes, which allows you to generate them randomly and not worry about a birthday collision until about 2^80 messages (for the same collision probability as AES-GCM).
- Poly1305 uses a different 256-bit key for each (nonce, key) pair and is easier to implement in constant-time than AES-GCM.
- XChaCha20-Poly1305 uses the first 16 bytes of the nonce and the 256-bit key to generate a distinct subkey, and then employs the standard ChaCha20-Poly1305 construction used in TLS today.
For application-layer cryptography, XChaCha20-Poly1305 contains most of the properties you’d want from an authenticated mode.
However, like AES-GCM (and all other Polynomial MACs I’ve heard of), it is not message committing.
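For completeness, here is what using it looks like with the libsodium-wrappers package (a minimal sketch; the message contents and variable names are mine):

import sodium from 'libsodium-wrappers';

await sodium.ready;

const key = sodium.crypto_aead_xchacha20poly1305_ietf_keygen();
// 24-byte nonce: large enough to generate randomly without birthday worries.
const nonce = sodium.randombytes_buf(
  sodium.crypto_aead_xchacha20poly1305_ietf_NPUBBYTES
);

const ciphertext = sodium.crypto_aead_xchacha20poly1305_ietf_encrypt(
  sodium.from_string('hello dhole'),  // message
  null,                               // additional data (optional)
  null,                               // secret nonce (unused; always null)
  nonce,
  key
);

const plaintext = sodium.crypto_aead_xchacha20poly1305_ietf_decrypt(
  null,        // secret nonce (unused)
  ciphertext,
  null,        // additional data
  nonce,
  key
);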
The Gimli Permutation
For lightweight cryptography (n.b. important for IoT), the Gimli permutation (e.g. employed in libhydrogen) is an attractive option.

Gimli is a Round 2 candidate in NIST’s Lightweight Cryptography project. The Gimli permutation offers a lot of applications: a hash function, message authentication, encryption, etc.
Critically, it’s possible to construct a message-committing protocol out of Gimli that will hit a lot of the performance goals important to embedded systems.
Closing Remarks
Despite my personal disdain for AES-GCM, if you’re using it as intended by cryptographers, it’s good enough.

Don’t throw AES-GCM out just because of my opinions. It’s very likely the best option you have.
Although I personally dislike AES and GCM, I’m still deeply appreciative of the brilliance and ingenuity that went into both designs.
My desire is for the industry to improve upon AES and GCM in future cipher designs so we can protect more people, from a wider range of threats, in more diverse protocols, at a cheaper CPU/memory/time cost.
We wouldn’t have a secure modern Internet without the work of Vincent Rijmen, Joan Daemen, John Viega, David A. McGrew, and the countless other cryptographers and security researchers who made AES-GCM possible.
Change Log
- 2021-10-26: Added section on H Reuse and Multi-User Security.
https://soatok.blog/2020/05/13/why-aes-gcm-sucks/
#AES #AESGCM #cryptography #GaloisCounterMode #opinion #SecurityGuidance #symmetricCryptography
(If you aren’t interested in the background information, feel free to skip to the meat of this post. If you’re in a hurry, there’s a summary of results at the end.)
Around this time last year, I was writing Going Bark: A Furry’s Guide to End-to-End Encryption and the accompanying TypeScript implementation of the Extended 3-Way Diffie-Hellman authenticated key exchange (Rawr X3DH). In that blog post, I had said:
The goal of [writing] this is to increase the amount of end-to-end encryption deployed on the Internet that the service operator cannot decrypt (even if compelled by court order) and make E2EE normalized. The goal is NOT to compete with highly specialized and peer-reviewed privacy technology.
This effort had come on the heels of my analysis of bizarre choices in Zoom’s end-to-end encryption, and a brief foray into the discussion into the concept of cryptographic deniability.
I’m stating all this up-front because I want to re-emphasize that end-to-end encryption is important, and I don’t want to discourage the development of E2EE. Don’t let a critical post about someone else’s product discourage you from encrypting your users’ data.
Art: Swizz
Until recently, but especially at the time I wrote all of that, Threema had not been on my radar at all, for one simple reason: Until December 2020, Threema was not open source software.
In spite of this, Threema boasts over 1 million installs on the Google Play Store.
Partly as a result of being closed source for so long, but mostly because of the Threema team’s history of over-emphasizing the legal jurisdiction they operate from (Switzerland) in their claims about privacy and security, most of the cryptography experts I know had quietly put Threema in the “clown-shoes cryptography” bucket and moved on with their lives. After all, if your end-to-end encryption is well implemented and your engineers prioritized privacy in how they handle metadata, jurisdiction shouldn’t matter.
What changed for me was, recently, a well-meaning Twitter user mentioned Threema in response to a discussion about Signal.
https://twitter.com/KeijiCase/status/1455669171618914308
In response, I had casually glanced through their source code and pointed out a few obvious WTFs in the Twitter thread. I had planned on following up by conducting a thorough analysis of their code and reporting my findings to them privately (which is called coordinated disclosure, not “responsible disclosure”).
But then I read this bit of FUD on their Messenger Comparison page.
Signal requires users to disclose personally identifiable information. Threema, on the other hand, can be used anonymously: Users don’t have to provide their phone number or email address. The fact that Signal, being a US-based IT service provider, is subject to the CLOUD Act only makes this privacy deficit worse.

Threema – Messenger Comparison
Art: LvJ
Thus, because of their deliberate misinformation (something I’ve opposed for years), Threema has been disqualified from any such courtesy. They will see this blog post, and its contents, once it’s public and not a moment sooner.
How Are Threema’s Claims FUD?
Threema’s FUD comparison against Signal.
The quoted paragraph is deceptive, and was apparently designed to make their prospective customers distrustful of Signal.
The CLOUD Act isn’t black magic; it can only force Signal to turn over the data they actually possess. Which is, as demonstrated by a consistent paper trail of court records, almost nothing.
As usual, we couldn’t provide any of that. It’s impossible to turn over data that we never had access to in the first place. Signal doesn’t have access to your messages; your chat list; your groups; your contacts; your stickers; your profile name or avatar; or even the GIFs you search for. As a result, our response to the subpoena will look familiar. It’s the same set of “Account and Subscriber Information” that we can provide: Unix timestamps for when each account was created and the date that each account last connected to the Signal service.

That’s it.
The Signal blog
Additionally, their claim that “Threema […] can be used anonymously” is, at best, a significant stretch. At worst, they’re lying by omission.
Sure, it’s possible to purchase Threema with cryptocurrency rather than using the Google Play Store. And if you assume cryptocurrency is private (n.b., the blockchain is more like tweeting all your financial transactions, unless you use something like Zcash), that probably sounds like a sweet deal.
However, even if you skip the Google Play Store, you’re constantly beaconing a device identifier (which is stored on your device) to their server whenever a license key check is initiated.
Bear in mind, over 1 million of their installed base is through the Google Play Store. This means that, in practice, almost nobody actually takes advantage of this “possibility” of anonymity.
Additionally, their own whitepaper discusses the collection of users’ phone number and email addresses. Specifically, they store hashes (really HMAC with a static, publicly known key for domain separation, but they treat it as a hash) of identifiers (mobile numbers, email addresses) on their server.
Circling back to the possibility of anonymity issue: When it comes to security (and especially cryptography), the defaults matter a lot. That’s why well-written cryptographic tools prioritize both correctness and misuse resistance over shipping new features. The default configuration is the only configuration for all but your most savvy users, because it’s the path of least resistance. Trying to take credit for the mere existence of an alternate route (bypassing the Google Play Store, paying with cryptocurrency) that few people take is dishonest, and anyone in the cryptography space should know better. It’s fine to offer that option, but not fine to emphasize it in marketing.
The only correct criticism of Signal contained within their “comparison” is that, as of this writing, they still require a phone number to bootstrap an account. The phone number requirement makes it difficult for people to have multiple independent, compartmentalized identities; i.e. preventing your professional identity from intersecting with your personal identity–which is especially not great for LGBTQIA+ folks that aren’t out to their families (usually for valid safety reasons).
This obviously sucks, but fails to justify the other claims Threema made.
With all that out of the way, let’s look at Threema’s cryptography protocol and some of their software implementations.
Art: LvJ
Threema Issues and Design Flaws
To be up front: Some of the issues and design flaws discussed below are not security vulnerabilities. Some of them are.
Where possible, I’ve included a severity rating atop each section if I believe it’s a real vulnerability, and omitted this severity rating where I believe it is not. Ultimately, a lot of what’s written here is my opinion, and you’re free to disagree with it.
List of Issues and Vulnerabilities Discovered in Threema
- Issues With Threema’s Cryptographic Protocols
- No Forward Secrecy
- Threema IDs Aren’t Scalable
- Peer Fingerprints Aren’t Collision-Resistant
- No Breadcrumb for Cryptography Migrations
- Inconsistency with Cryptographic Randomness
- Invisible Salamanders with Group Messaging
- Issues With Threema Android (Repository)
- Weak Encryption With Master Key (LocalCrypto)
- File Encryption Uses Unauthenticated CBC Mode
- Cache-Timing Leaks with Hex-Encoding (JNaCl)
- Issues With Threema Web (Repository)
Issues With Threema’s Cryptography Protocols
This first discussion is about weaknesses in Threema’s cryptography protocol, irrespective of the underlying implementation.
At its core, Threema uses similar cryptography to Tox. This means that they’re using some version of NaCl (the library that libsodium is based on) in every implementation.
This would normally be a boring (n.b. obviously secure) choice, but the security bar for private messaging apps is very high–especially for a Signal competitor.
It isn’t sufficient for a secure messenger competing with Signal to just use NaCl. You also have to build well-designed protocols atop the APIs that NaCl gives you.
No Forward Secrecy
The goal of end-to-end encryption is to protect users from the service provider. Any security property guaranteed by transport-layer cryptography (i.e. HTTPS) is therefore irrelevant, since the cryptography is terminated on the server rather than your peer’s device.
The Threema team claims to provide forward secrecy, but only on the network connection.
Forward secrecy: Threema provides forward secrecy on the network connection (not on the end-to-end layer).
This is how Threema admits a weakness in their construction. “We offer it at a different [but irrelevant] layer.”
That’s not how any of this works.
Art: LvJ
Their whitepaper acknowledges this deficit.
As I’ve demonstrated previously, it’s not difficult to implement Signal’s X3DH AKE (which offers forward secrecy) using libsodium. Most of what I’ve done there can be done with NaCl (basically use SHA512 instead of BLAKE2b and you’re golden).
The X3DH handshake is essentially multiple Curve25519 ECDH handshakes (one long-term; one short-term, e.g. rotated biweekly; one totally ephemeral), which are mixed together using a secure key derivation function (i.e. HKDF).
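To give a sense of the shape of the key-mixing step, here is a rough sketch using libsodium-wrappers (this is my own illustration, not Signal’s specification or Threema’s code; it skips signed-prekey verification and the optional one-time prekey, and the domain-separation string is made up):

import sodium from 'libsodium-wrappers';

// Mix three X25519 shared secrets (identity, prekey, ephemeral) into a single
// session secret using a keyed BLAKE2b hash as the KDF (SHA-512 also works).
// (assumes `await sodium.ready` has already resolved)
function deriveX3dhLikeSecret(
  myIdentitySecret: Uint8Array,
  myEphemeralSecret: Uint8Array,
  theirIdentityPublic: Uint8Array,
  theirPrekeyPublic: Uint8Array
): Uint8Array {
  const dh1 = sodium.crypto_scalarmult(myIdentitySecret, theirPrekeyPublic);
  const dh2 = sodium.crypto_scalarmult(myEphemeralSecret, theirIdentityPublic);
  const dh3 = sodium.crypto_scalarmult(myEphemeralSecret, theirPrekeyPublic);

  // Domain-separated KDF over the concatenated shared secrets.
  const ikm = new Uint8Array([...dh1, ...dh2, ...dh3]);
  return sodium.crypto_generichash(
    32,
    ikm,
    sodium.from_string('X3DH-like-demo-v1') // hypothetical domain separator
  );
}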
For a product claiming to be the state of the art in secure messaging, as Threema does, forward secrecy is table stakes. Threema’s end-to-end encryption completely lacks this property (and transport-layer forward secrecy doesn’t count). No amount of hand-waving on their part can make this not a weakness in Threema.
The specification for X3DH has been public for 5 years. My proof-of-concept TypeScript implementation that builds atop libsodium is nearly a year old.
If the Threema team wanted to fix this, it would not be hard for them to do so. Building a usable messaging app is much harder than building X3DH on top of a well-studied Curve25519 implementation.
Threema IDs Aren’t Scalable
Severity: Low
Impact: Denial of Service
Threema IDs are 8-character alphanumeric unique identifiers, chosen randomly, that serve as a pseudonymous mapping to an asymmetric keypair.

This means there are 36^8 possible Threema IDs (about 2.8 trillion). This is approximately 2^41, so we can say there’s about a 41-bit keyspace.
That may seem like a large number (hundreds of times the human population), but they’re chosen randomly. Which means: The birthday problem rears its ugly head.
Threema will begin to experience collisions (with 50% probability) after roughly 36^4 (about 1.7 million) Threema IDs have been reserved.
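Here is the arithmetic (mine), using the birthday approximation:

36^{8} = 2{,}821{,}109{,}907{,}456 \approx 2^{41.4}
\qquad
\sqrt{36^{8}} = 36^{4} = 1{,}679{,}616 \approx 1.7\ \text{million}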
At first, this won’t be a problem: If you collide with someone else’s Threema ID, you just have to generate another one. That’s just an extra round-trip for the Threema server to say “LOL NO try again”. In fact, given the Google Play Store estimates for the number of Threema installs, they’re probably in excess of the birthday bound today.
Quick aside: One known problem with Threema IDs is that users don’t know they’re supposed to back them up, so when they switch phones, they lose access to their old IDs and secret keys. Aside from the obvious social engineering risk that emerges from habitually tolerating new Threema IDs for all contacts (“I lost my old Threema ID, again, so blindly trust that it’s me and not a scammer”), there’s a bigger problem.
Since Threema IDs are used by each app to identify peers, it’s not possible for Threema to recycle expired IDs. In order to do so, Threema would need to be able to inspect the social graph to determine which Threema IDs can be freed, which would be a huge privacy violation and go against their marketing.
So what happens if someone maliciously reserves, then discards, billions of Threema IDs?
Due to the pigeonhole principle, this will eventually lead to address-space exhaustion and prevent more Threema users from being onboarded. However, trouble will begin before it gets this far: At a certain point, legitimate users’ attempts to register a new Threema ID will result in an unreasonable number of contentions with reserved IDs.
Neither the Threema website nor their whitepaper discusses how Threema can cope with this sort of network strain.
Art: LvJ
This problem could have been prevented if the Threema designers were more cognizant of the birthday bound.
Additionally, the fact that Threema IDs are generated server-side is not sufficient to mitigate this risk. As long as IDs are never recycled unless explicitly deleted by their owner, they will inevitably run head-first into this problem.
Peer Fingerprints Aren’t Collision-Resistant
Severity: Informational
Impact: None
From the Threema whitepaper:
Truncating a SHA-256 hash to 128 bits doesn’t give you 128 bits of security against collision attacks. It gives you 64 bits of security, which is about the same security that SHA-1 gives you against collision attacks.
This is, once again, attributable to the birthday bound problem that affects Threema IDs.
Art: Swizz
Related: I also find it interesting that they only hash the Curve25519 public key and not the combination of public key and Threema ID. The latter construction would frustrate batch attacks by committing both the public key (which is random and somewhat opaque to users) and the Threema ID (which users see and interact with) to a given fingerprint:
- Finding a collision against a 128-bit probability space, where the input is a public key, can be leveraged against any user.
- Finding a collision against a 128-bit probability space, where the input is a public key and a given Threema ID, can only be leveraged against a targeted user (i.e. that Threema ID).
Both situations still suck because of the 128-bit truncation, but Threema managed to choose the worse of two options, and opened the door to a multi-user attack.
Impact of the Peer Fingerprint Bypass
To begin with, let’s assume the attacker has compromised all of the Threema servers and is interested in attacking the security of the end-to-end encryption. (Assuming the server is evil is table stakes for the minimum threat model for any end-to-end encrypted messaging app.)
Imagine a situation with Alice, Bob, and Dave.
Alice pushes her Threema ID and public key to the server (A_id, A_pk), then chats with Bob legitimately (B_id, B_pk). Bob suggests that Alice talks to his drug dealer, Dave (D_id, D_pk).
The attacker can obtain knowledge of everyone’s public keys, and begin precomputing fingerprint collisions for any participants in the network. Their first collision will occur after about 2^64 keypairs are generated (with 50% probability), and then collisions will only become more frequent.
What happens when an attacker finds fingerprint collisions for people that are close on the social graph?
When Alice goes to talk with Dave, the attacker replaces her public key with one that collides with her fingerprint, but whose secret key is known to the attacker (M_pk1). The attacker does the same thing in the opposite direction (D_id, M_pk2).
When Alice, Bob, and Dave compare their fingerprints, they will think nothing is amiss. In reality, an attacker can silently intercept Alice’s private conversation with Dave.
The full range of key exchanges looks like this:
Alice -> Bob:  (A_id, A_pk)
Alice -> Dave: (A_id, M_pk1)
Bob -> Alice:  (B_id, B_pk)
Bob -> Dave:   (B_id, D_pk)
Dave -> Bob:   (D_id, D_pk)
Dave -> Alice: (D_id, M_pk2)

SHA256-128(A_pk) == SHA256-128(M_pk1), A_pk != M_pk1
SHA256-128(D_pk) == SHA256-128(M_pk2), D_pk != M_pk2
Alice will encrypt messages against M_pk2, rather than D_pk. But the fingerprints will match for both. Dave will respond by encrypting messages against M_pk1, instead of A_pk. The attacker can sit in the middle and re-encrypt messages for the intended recipient. Even if both Alice and Dave compare the fingerprints they see with what Bob sees, nobody will detect anything.
(Bob’s communications remain unaffected since he already downloaded everyone’s public keys. This only affects new conversations.)
This is why you only need a collision attack to violate the security of the peer fingerprint, and not a preimage attack.
To be clear, a targeted attack is much more expensive than the untargeted one described above (a 2^128 preimage search versus a 2^64 birthday collision).
This is a certificational weakness, and I’m only including it as further evidence of poor cryptographic engineering by Threema’s team rather than an overt vulnerability.
Update (2021-11-08): Remember, this is a list of issues I discovered, not specifically a list of vulnerabilities. People trying to argue in the comments, or my inbox, about whether this is “really” a vulnerability is getting a little tiring; hence, this reminder.
How to Fix Threema’s Fingerprints
First, you need to include both the Threema ID and curve25519 public key in the calculation. This will frustrate batch attacks. If they were to switch to something like X3DH, using the long-term identity key for fingerprints would also be reasonable.
Next, calculate a wider margin of truncation. Here’s a handy formula to use: truncate to log2(P) + 128 bits, where P is the expected population of users. If you set P equal to 2^40 (about a trillion), you end up with a fingerprint that truncates to 168 bits, rather than 128 bits.

This formula yields a probability space with a birthday bound of 2^84 for collisions, and a multi-target preimage cost of 2^128 (even with a trillion Threema IDs reserved), which plainly isn’t going to ever happen.
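Here is a sketch of what that could look like (my own illustrative code; the domain-separation string and exact truncation length are assumptions, not anything Threema has published):

import { createHash } from 'crypto';

// Commit to BOTH the Threema ID and the public key, and truncate to a wider
// 168-bit margin instead of 128 bits.
function peerFingerprint(threemaId: string, publicKey: Uint8Array): string {
  return createHash('sha256')
    .update('threema-fingerprint-v2')   // hypothetical domain separator
    .update(threemaId, 'utf8')
    .update(publicKey)
    .digest()
    .subarray(0, 21)                    // 21 bytes = 168 bits
    .toString('hex');
}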
No Breadcrumb for Cryptography Migrations
Severity: Informational
Impact: Frustration of Engineering Efforts
There is no concept of versioning anywhere in Threema’s protocols or designs, which means that one day migrating to better cryptography, without introducing the risk of downgrade attacks, simply isn’t possible.
The lack of any cryptography migration breadcrumb also prevents Threema from effectively mitigating security weaknesses inherent to their protocols and designs. You’ll see why this is a problem when we start looking at the implementations.
Cryptography migrations are difficult in general, because:
- Secure cryptography is not backwards compatible with insecure cryptography.
- In any distributed system, if you upgrade writers to a new cryptography protocol before your readers are upgraded too, you’ll cause availability issues.
The asynchronous nature of mobile messaging apps makes this even more challenging.
Inconsistency with Cryptographic Randomness
Severity: None
Impact: Annoyed Cryptography Auditors
Threema commits the same sin as most PGP implementations in misunderstanding the difference between /dev/random and /dev/urandom on Linux.
See also: Myths about /dev/urandom and How to Safely Generate a Random Number. From the latter:
Doesn’t the man page say to use /dev/random?

You should ignore the man page. Don’t use /dev/random. The distinction between /dev/random and /dev/urandom is a Unix design wart. The man page doesn’t want to admit that, so it invents a security concern that doesn’t really exist. Consider the cryptographic advice in random(4) an urban legend and get on with your life.
Emphasis mine.
If you use /dev/random instead of urandom, your program will unpredictably (or, if you’re an attacker, very predictably) hang when Linux gets confused about how its own RNG works. Using /dev/random will make your programs less stable, but it won’t make them any more cryptographically safe.
Emphasis not mine.
This is an easy fix (/dev/random -> /dev/urandom), but it signals that the whitepaper’s author lacks awareness of cryptographic best practices.

And it turns out, they actually use /dev/urandom in their code. So this is just an inconsistency and an annoyance rather than a flaw.
Source: UNDERTALE
Update (2021-11-08): Yes, I’m aware that the Linux RNG changed in 5.6 to make /dev/random behave the way it always should have. However, Linux 5.6 is extremely unlikely to help anyone affected by the old Android SecureRandom bug that Threema has implied is part of their threat model when they called it out in their Cryptography Whitepaper, so I didn’t originally deem it fit to mention.
Invisible Salamanders with Group Messaging
Severity: Medium
Impact: Covert channel through media messages due to multi-key attacks
Note: This was discovered after the initial blog post was published and added later the same day.
Threema doesn’t do anything special (e.g. TreeKEM) for Group Messaging. Instead, groups are handled by the client and messages are encrypted directly to all parties.
This provides no group messaging authentication whatsoever. A user with a malicious Threema client can trivially join a group then send different messages to different recipients.
Imagine a group of five stock traders (A, B, C, D, E). User A posts a message to the group such that B, C, and D see “HOLD” but user E sees “SELL”.
An attacker can have some real fun with that, and it’s probably the easiest attack to pull off that I’ll discuss in this post.
User E won’t have much recourse, either: Users B, C, and D will all see a different message than User E, and will think E is dishonest. Even if User E provides a screenshot, the rest of the users will trust their own experiences with their “private messaging app” and assume User E is spinning yarns to discredit User A. It would then be very easy for A to gaslight E. This is the sort of attack that us LGBTQIA+ folks and furries are often, sadly, all too familiar with (usually from our families).
Additionally, there’s an invisible salamanders attack with media files.
Invisible Salamanders is an attack technique in systems where traditional, fast AEAD modes are employed, but more than one key can be selected. The security models of most AEAD modes assume a single, fixed symmetric encryption key held by both parties.
To exploit the Invisible Salamanders attack:
- Generate two (or more) Xsalsa20-Poly1305 keys that will encrypt different media files to a given ciphertext + tag.
- Send a different key to a different subset of group participants.
Both subsets of the group will download the same encrypted file, but will see a different plaintext. Threema cannot detect this attack server-side to mitigate the impact, either.
Art: LvJ
Encrypting multiple plaintexts, each under a different key, that produce an identical ciphertext and authentication tag is possible with AES-GCM, AES-GCM-SIV, and even Xsalsa20-Poly1305 (NaCl secretbox, which is what Threema uses).
Preventing this kind of misuse was never a security goal of these modes, and is generally not recognized as a vulnerability in the algorithms. (It would only qualify as a vulnerability if the algorithm designers stated an assumption that this violated.) However, Invisible Salamanders absolutely is a vulnerability in the protocols that build atop the algorithms. Thus, it qualifies as a vulnerability in Threema.
Here’s a Black Hat talk by Paul Grubbs explaining how the Invisible Salamanders technique works in general:
https://www.youtube.com/watch?v=3M1jIO-jLHI
This isn’t a problem for, e.g., Signal, because the Double Ratchet algorithm keeps the key synchronized for all group members. Each ciphertext is signed by the sender, but encrypted with a Double Ratchet key. There’s no opportunity to convince one partition of the group to use a different key to decrypt a message. See also: Sesame for multi-device.
The reason the vulnerability exists is that Poly1305, GMAC, etc. are fast symmetric-key message authentication algorithms, but they are not collision-resistant hash functions (e.g. SHA-256).
When you use a collision-resistant hash function, instead of a polynomial evaluation MAC, you’re getting a property called message commitment. If you use a hash function over the encryption key (and, hopefully, some domain-separation constant)–or a key the encryption key is deterministically derived from–you obtain a property called key commitment.
In either case, you can claim your AEAD mode is also random-key robust. This turns out to be true of AES-CBC + HMAC-SHA2 (what Signal uses), due to HMAC-SHA2.
Art: Scruff
Invisible Salamanders Mitigation with NaCl
First, you’ll need to split the random per-media-file key into two keys:
- A derived encryption key, which will take the place of what is currently the only key.
- A derived authentication key, which will be used with crypto_auth and crypto_auth_verify to commit the ciphertext + tag.
It’s important that both keys are derived from the same input key, and that the key derivation relies on a strong pseudorandom function.
Pseudocode:
function encryptMediaV2(data: Buffer, fileKey: Buffer) {
  const encKey = HmacSha256('File Encryption Key', fileKey);
  const authKey = HmacSha256('Media Integrity Key', fileKey);
  const encrypted = NaCl.crypto_secretbox(data, nonce, encKey);
  const commitment = NaCl.crypto_auth(encrypted, authKey);
  return Buffer.concat([commitment, encrypted]);
}

function decryptMediaV2(downloaded: Buffer, fileKey: Buffer) {
  const tag = downloaded.slice(0, 32);
  const ciphertext = downloaded.slice(32);
  const authKey = HmacSha256('Media Integrity Key', fileKey);
  if (!NaCl.crypto_auth_verify(tag, ciphertext, authKey)) {
    throw new Exception("bad");
  }
  const encKey = HmacSha256('File Encryption Key', fileKey);
  return NaCl.crypto_secretbox_open(ciphertext, nonce, encKey);
}
This code does two things:
- It derives two keys instead of only using the one. You could also just use a SHA512 hash, and then dedicate the left half to encryption and the right half to authentication. Both are fine.
- It uses the second key (not used for encryption) to commit the ciphertext (encrypted file). This provides both message- and key-commitment.
If you didn’t care about message-commitment, and only cared about key-commitment, you could skip the crypto_auth entirely and just publish the authKey as a public commitment hash of the key.
This corresponds to Type I in the Key Committing AEADs paper (section 3.1), if you’re trying to build a security proof.
Of course, the migration story for encrypted media in Threema is going to be challenging even if they implement my suggestion.
Issues With Threema Android
Weak Encryption with Master Key (LocalCrypto)
Severity: Low/High
Impact: Weak KDF with Crib (default) / Loss of Confidentiality (no passphrase)
The on-device protection of your Master Key (which also protects your Curve25519 secret key) consists of the following:
- A hard-coded obfuscation key (950d267a88ea77109c50e73f47e06972dac4397c99ea7e67affddd32da35f70c), which is XORed with the file’s contents.
- (Optional) If the user sets a passphrase, calculate the PBKDF2-SHA1 of their passphrase (with only 10,000 iterations) and XOR the master key with this output.
If the user opts to not use a passphrase, and their phone is ever seized by a government agency, the master key might as well be stored as plaintext.
Art: LvJ
To be charitable, maybe that kind of attack is outside of their (unpublished) threat model.
Even if a user elects to store a passphrase, the low iteration count of PBKDF2 will allow for sophisticated adversaries to be able to launch offline attacks against the encrypted key file.
The 4-byte SHA1 verification checksum of the plaintext master key gives cracking code a crib for likely successful attempts (which, for weak passphrases, will almost certainly mean “you found the right key”). This is somehow worse than a typical textbook MAC-and-Encrypt design.
The checksum-as-crib is even more powerful if you’ve sent the target a photo before attempting a device seizure: Just keep trying to crack the Master Key then, after each time the checksum passes, decrypt the photo until you’ve successfully decrypted the known plaintext.
The verification checksum saves you from wasted decryption attempts; if the KDF output doesn’t produce a SHA1 hash that begins with the verification checksum, you can keep iterating until it does.
Once you’ve reproduced the file you sent in the first place, you also have their Curve25519 secret key, which means you can decrypt every message they’ve ever sent or received (especially if the Threema server operator colludes with their government).
Art: LvJ
Also, Array.equals() isn’t constant-time. Threema should know this by now thanks to their Cure53 audit finding other examples of it a year ago. It’s 2021; you can use MessageDigest.isEqual() for this.
Update: An Even Faster Attack Strategy
SHA1 can be expensive in a loop. A much faster technique is to do the XOR dance with the deobfuscated master key file, then see if you can decrypt the private_key file.
Because this file is AES-CBC encrypted using the Master Key, you can just verify that the decryption result ends in a valid padding block. Because Curve25519 secret keys are 32 bytes long, there should be a full 16-byte block of PKCS#7 padding bytes when you’ve guessed the correct key.
You can then use the 4-byte SHA-1 checksum and a scalarmult vs. the target’s public key to confirm you’ve guessed the correct password.
Thanks to @Sc00bzT for pointing this attack strategy out.
File Encryption Uses Unauthenticated CBC Mode
Severity: Low
Impact: Unauthenticated encryption (but local)
Threema’s MasterKey class has an API used elsewhere throughout the application that encrypts and decrypts files using AES/CBC/PKCS5Padding. This mode is widely known to be vulnerable to padding oracle attacks, and has a worse wear-out story than other AES modes.
Unlike the care taken with nonces for message encryption, Threema doesn’t bother keeping track of which IVs it has seen before, even though a CBC collision will happen much sooner than an XSalsa20 collision. It also just uses SecureRandom, despite the whitepaper claiming to avoid it due to weaknesses with that class on Android.
Additionally, there’s no domain separation or protection against type confusion in the methods that build atop this feature. They’re just AES-CBC-encrypted blobs that are decrypted and trusted to be the correct file format. So you can freely swap ciphertexts around and they’ll just get accepted in incorrect places.
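For contrast, here is a minimal sketch of local file encryption that is both authenticated and domain-separated. It uses libsodium-style APIs (sodium-plus shown for illustration; the real fix would be Java), and the purpose label and helper names are my own assumptions, not Threema's code. The point is that the purpose is bound into the subkey, so a blob encrypted for one context can't be decrypted in another.

// Minimal sketch, not Threema's code: authenticated encryption (XSalsa20-Poly1305
// via crypto_secretbox) with a per-purpose subkey derived from the master key.
// `masterKey` is a CryptographyKey; `purpose` is e.g. 'contact-photo' or 'backup'.
async function encryptLocalFile(plaintext, masterKey, purpose) {
  const subkey = new CryptographyKey(
    await sodium.crypto_generichash('local-file:' + purpose, masterKey, 32)
  );
  const nonce = await sodium.randombytes_buf(24);
  const ciphertext = await sodium.crypto_secretbox(plaintext, nonce, subkey);
  return Buffer.concat([nonce, ciphertext]);
}

async function decryptLocalFile(blob, masterKey, purpose) {
  const subkey = new CryptographyKey(
    await sodium.crypto_generichash('local-file:' + purpose, masterKey, 32)
  );
  return sodium.crypto_secretbox_open(blob.slice(24), blob.slice(0, 24), subkey);
}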
Tangent: The pure-Java NaCl implementation they use when JNI isn’t available also uses SecureRandom. If you’re going to include a narrative in your Cryptography Whitepaper, maybe check that you’re consistently adhering to it?
Cache-Timing Leaks with Hex-Encoding (JNaCl)
Severity: Low
Impact: Information disclosure through algorithm time
This isn’t a meaningfully practical risk, but it’s still disappointing to see in their pure-Java NaCl implementation. Briefly:
- JNaCl definition for hex-encoding and decoding
- OpenJDK definition for Character.digit() (https://github.com/openjdk/jdk/blob/739769c8fc4b496f08a92225a12d07414537b6c0/src/java.base/share/classes/java/lang/Character.java#L10660-L10662)
- OpenJDK definition for CharacterDataLatin1.digit() (https://github.com/openjdk/jdk/blob/739769c8fc4b496f08a92225a12d07414537b6c0/make/data/characterdata/CharacterDataLatin1.java.template#L196)
Because this implementation uses table lookups, whenever a secret (plaintext or key) goes through one of the JNaCl hexadecimal functions, it will leak the contents of the secret through cache-timing.
For reference, here’s how libsodium implements hex-encoding and decoding.
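To illustrate the difference, here is a minimal JavaScript sketch of the arithmetic trick libsodium uses to hex-encode without any table lookups. This isn't the JNaCl or libsodium code itself, and JavaScript string building adds its own caveats; it's only here to show the technique.

// Each nibble is mapped to '0'-'9' or 'a'-'f' with arithmetic only: no lookup table,
// so the memory access pattern doesn't depend on the secret being encoded.
function bin2hexConstantTime(bytes) {
  let hex = '';
  for (let i = 0; i < bytes.length; i++) {
    const hi = bytes[i] >>> 4;
    const lo = bytes[i] & 0xf;
    hex += String.fromCharCode(
      (87 + hi + (((hi - 10) >>> 8) & ~38)) & 0xff,
      (87 + lo + (((lo - 10) >>> 8) & ~38)) & 0xff
    );
  }
  return hex;
}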
Art: Swizz
Issues With Threema Web
I’m not going to spend a lot of time on the Threema Web project, since it’s been in maintenance-only mode since at least January.
Insecure Password-Based Key Derivation
Severity: High
Impact: Insecure cryptographic storage
While SHA512 is a good cryptographic hash function, it’s not a password hash function. Those aren’t the same thing.
Threema’s Web client derives the keystore encryption key from a password by using the leftmost 32 bytes of a SHA512 hash of the password.
/**
 * Convert a password string to a NaCl key. This is done by getting a
 * SHA512 hash and returning the first 32 bytes.
 */
private pwToKey(password: string): Uint8Array {
  const bytes = this.stringToBytes(password);
  const hash = nacl.hash(bytes);
  return hash.slice(0, nacl.secretbox.keyLength);
}
Once again, just because you’re using NaCl, doesn’t mean you’re using it well.
This code opens the door to dictionary attacks, rainbow tables, accelerated offline attacks, and all sorts of other nasty scenarios that would have been avoided if a password hashing algorithm was used instead of SHA512.
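For contrast, a minimal sketch of doing this with a real password hash (Argon2id via libsodium's crypto_pwhash; sodium-plus shown here rather than the NaCl wrapper Threema uses, and the salt handling is an assumption: you would generate it once and store it next to the keystore):

// Derives a 32-byte key from a password with Argon2id instead of bare SHA512.
// `salt` is 16 random bytes, stored alongside the keystore.
async function pwToKey(password, salt) {
  return sodium.crypto_pwhash(
    32,                                        // output length (secretbox key size)
    password,
    salt,
    sodium.CRYPTO_PWHASH_OPSLIMIT_INTERACTIVE, // tune these limits for your latency budget
    sodium.CRYPTO_PWHASH_MEMLIMIT_INTERACTIVE
  );
}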
Also, this is another cache-timing leak in most JavaScript engines, and the entire method that contains it could have been replaced by Uint8Array.from(password, 'utf-8').
Threema can’t claim they were avoiding the Uint8Array.from() method there because of compatibility concerns (e.g. with IE11), because they use it here.
Art: LvJ
Summary of Results
In the cryptography employed by Threema, I was able to quickly identify 5 (update: 6) issues, of which 2 (update: 3) directly negatively impact the security of their product (Threema IDs Aren’t Scalable can lead to address exhaustion and Denial-of-Service; Peer Fingerprints Aren’t Collision-Resistant allows moderately-funded adversaries to bypass fingerprint detection for a discount).
Both security issues in the Threema cryptography protocol were caused by a poor understanding of the birthday bound of a pseudorandom function–something that’s adequately covered by Dan Boneh’s Cryptography I course.
Additionally, the total lack of forward secrecy invalidates the Threema marketing claims of being more private or secure than Signal.
Update (3:45 PM): After initially publishing this, I realized there was a third security issue in the cryptography protocol, concerning Group Messaging: Invisible Salamanders.
In the Android app, I was able to identify 3 issues, of which 2 directly negatively impact the security of their product (Weak Encryption With Master Key (LocalCrypto) provides a very weak obfuscation or somewhat weak KDF (with a checksum) that, either way, makes leaking the key easier than it should be; File Encryption Uses Unauthenticated CBC Mode introduces all of the problems of CBC mode and unauthenticated encryption).
Finally, I only identified 1 security issue in the web client (Insecure Password-Based Key Derivation) before I saw the maintenance notice in the README on GitHub and decided it’s not worth my time to dive any deeper.
I did not study the iOS app at all. Who knows what dragons there be?
Art: LvJ
There were a few other issues that I thought existed, and later realized were false. For example: At first glance, it looked like they weren’t making sure received messages didn’t collide with an existing nonce (they only track nonces on messages being sent)–which, since the same key is used in both directions, would be catastrophic. It turns out, they do store the nonces on received messages, so a very obvious attack isn’t possible.
The fact that Threema’s developers built atop NaCl probably prevented them from implementing higher-severity issues in their product. Given that Threema Web finding, I can’t help but ponder if they would have been better served by libsodium instead of NaCl.
Threema has been publicly audited (twice!) by vendors that they hired to perform code audits, and yet so many amateur cryptography mistakes persist in their designs and implementations years later. From their most recent audit:
Cure53’s conclusion doesn’t jibe with my observations. I don’t know if that says something about them, or something about me, or even something much more meta and existential about the nature of cryptographic work.
Art: Riley
Is Threema Vulnerable to Attack?
Unfortunately, yes. In only a few hours of review, I was able to identify 3 vulnerabilities in Threema’s cryptography, as well as 3 others affecting their Android and web apps.
How Severe Are These Issues?
While there are several fundamental flaws in Threema’s overall cryptography, they mostly put the service operators at risk and signal a lack of understanding of the basics of cryptography. (Namely: discrete probability and pseudorandom functions.)
The biggest and most immediate concern for Threema users is that a malicious user can send different media messages to different members of the same group, and no one can detect the deception. This is a much easier attack to pull off than anything else discussed above, and can directly be used to sow confusion and enable gaslighting.
For Threema Enterprise users, imagine someone posting a boring document in a group chat for work purposes, while also covertly leaking confidential and proprietary documents to someone that’s not supposed to have access to said documents. Even though you all see the same encrypted file, the version you decrypt is very different from what’s being fed to the leaker. Thus, Threema’s vulnerability offers a good way for insider threats to hide their espionage in plain sight.
The remaining issues discussed do not put anyone at risk, and are just uncomfortable design warts in Threema.
Recommendations for Threema Users
Basically, I don’t recommend Threema.
Art: LvJ
Most of what I shared here isn’t a game over vulnerability, provided you aren’t using Threema for group messaging, but my findings certainly debunk the claims made by Threema’s marketing copy.
If you are using Threema for group messaging–and especially for sharing files–you should be aware of the Invisible Salamanders attack discussed above.
When in doubt, just use Signal. It’s free, open source, private, and secure.
The reason you hear less about Signal on blogs like this is that, when people like me review their code, we don’t find these sorts of problems. I’ve tried to find problems before.
If you want a federated, desktop-first experience with your end-to-end encryption without a phone number, I don’t have any immediate replacement recommendations. Alternatives exist, but there’s no clear better option that’s production-ready today.
If you want all of the above and mobile support too, with Tor support as a first-class feature enabled by default, Open Privacy is developing Cwtch. It’s still beta software, though, and doesn’t support images or video yet. You also can’t install it through the Google Play Store (although that will probably change when they’re out of beta).
Looking forward, Signal recently announced the launch of anti-spam and spam-reporting features. This could indicate that the phone number requirement could be vanishing soon. (They already have a desktop client, after all.) If that happens, I implore everyone to ditch Threema immediately.
Disclosure Timeline
This is all zero-day. I did not notify Threema ahead of time with these findings.
Threema talks a big talk–calling themselves more private/secure than Signal and spreading FUD instead of an honest comparison.
If you’re going to engage in dishonest behavior, I’m going to treat you the same way I treat other charlatans. Especially when your dishonesty will deceive users into trusting an inferior product with their most sensitive and intimate conversations.
Threema also likes to use the term “responsible disclosure” (which is a term mostly used by vendors to gaslight security researchers into thinking full disclosure is unethical) instead of the correct term (coordinated disclosure).
Additionally, in cryptography, immediate full disclosure is preferred over coordinated disclosure or non-disclosure. The responsibility of a security engineer is to protect the users, not the vendors, so in many cases, full disclosure is responsible disclosure.
https://twitter.com/ThreemaApp/status/1455960743002656776
That’s just a pet peeve of mine, though. Can we please dispense of this paleologism?
If you’re curious about the title, Threema’s three strikes were:
- Arrogance (claiming to be more private than Signal)
- Dishonesty (attempting to deceive their users about Signal’s privacy compared with Threema)
- Making amateur mistakes in their custom cryptography designs (see: everything I wrote above this section)
https://soatok.blog/2021/11/05/threema-three-strikes-youre-out/
#cryptography #OnlinePrivacy #privacy #privateMessaging #symmetricCryptography #Threema #vuln #ZeroDay
Governments are back on their anti-encryption bullshit again.
Between the U.S. Senate’s “EARN IT” Act, the E.U.’s slew of anti-encryption proposals, and Australia’s new anti-encryption law, it’s become clear that the authoritarians in office view online privacy as a threat to their existence.
Normally, when the governments increase their anti-privacy sabre-rattling, technologists start talking more loudly about Tor, Signal, and other privacy technologies (usually only to be drowned out by paranoid people who think Tor and Signal are government backdoors or something stupid; conspiracy theories ruin everything!).
I’m not going to do that.
Instead, I’m going to show you how to add end-to-end encryption to any communication software you’re developing. (Hopefully, I’ll avoid making any bizarre design decisions along the way.)
But first, some important disclaimers:
- Yes, you should absolutely do this. I don’t care how banal your thing is; if you expect people to use it to communicate with each other, you should make it so that you can never decrypt their communications.
- You should absolutely NOT bill the thing you’re developing as an alternative to Signal or WhatsApp.
- The goal of doing this is to increase the amount of end-to-end encryption deployed on the Internet that the service operator cannot decrypt (even if compelled by court order) and make E2EE normalized. The goal is NOT to compete with highly specialized and peer-reviewed privacy technology.
- I am not a lawyer, I’m some furry who works in cryptography. The contents of this blog post is not legal advice, nor is it endorsed by any company or organization. Ask the EFF for legal questions.
The organization of this blog post is as follows: First, I’ll explain how to encrypt and decrypt data between users, assuming you have a key. Next, I’ll explain how to build an authenticated key exchange and a ratcheting protocol to determine the keys used in the first step. Afterwards, I’ll explore techniques for binding authentication keys to identities and managing trust. Finally, I’ll discuss strategies for making it impractical to ever backdoor your software (and impossible to silently backdoor it), just to piss the creeps and tyrants of the world off even more.
You don’t have to implement the full stack of solutions to protect users, but the further you can afford to go, the safer your users will be from privacy-invasive policing.
(Art by Kyume.)
Preliminaries
Choosing a Cryptography Library
In the examples contained on this page, I will be using the Sodium cryptography library. Specifically, my example code will be written with the Sodium-Plus library for JavaScript, since it strikes a good balance between performance and being cross-platform.

const { SodiumPlus } = require('sodium-plus');

(async function() {
  // Select a backend automatically
  const sodium = await SodiumPlus.auto();
  // Do other stuff here
})();
Libsodium is generally the correct choice for developing cryptography features in software, and it is available in most programming languages.
If you’re inclined to choose a different library, you should consult your cryptographer (and yes, you should have one on your payroll if you’re doing things differently) about your design choices.
Threat Modelling
Remember above when I said, “You don’t have to implement the full stack of solutions to protect users, but the further you can afford to go, the safer your users will be from privacy-invasive policing”?
How far you go in implementing the steps outlined on this blog post should be informed by a threat model, not an ad hoc judgment.
For example, if you’re encrypting user data and storing it in the cloud, you probably want to pass the Mud Puddle Test:
1. First, drop your device(s) in a mud puddle.
2. Next, slip in said puddle and crack yourself on the head. When you regain consciousness you’ll be perfectly fine, but won’t for the life of you be able to recall your device passwords or keys.
3. Now try to get your cloud data back.
Did you succeed? If so, you’re screwed. Or to be a bit less dramatic, I should say: your cloud provider has access to your ‘encrypted’ data, as does the government if they want it, as does any rogue employee who knows their way around your provider’s internal policy checks.
Matthew Green describes the Mud Puddle Test, which Apple products definitely don’t pass.
If you must fail the Mud Puddle Test for your users, make sure you’re clear and transparent about this in the documentation for your product or service.
(Art by Swizz.)
I. Symmetric-Key Encryption
The easiest piece of this puzzle is to encrypt data in transit between both ends (thus, satisfying the loosest definition of end-to-end encryption).
At this layer, you already have some kind of symmetric key to use for encrypting data before you send it, and for decrypting it as you receive it.
For example, the following code will encrypt/decrypt strings and return hexadecimal strings with a version prefix.
const VERSION = "v1";

/**
 * @param {string|Uint8Array} message
 * @param {Uint8Array} key
 * @param {string|null} assocData
 * @returns {string}
 */
async function encryptData(message, key, assocData = null) {
  const nonce = await sodium.randombytes_buf(24);
  const aad = JSON.stringify({
    'version': VERSION,
    'nonce': await sodium.sodium_bin2hex(nonce),
    'extra': assocData
  });
  const encrypted = await sodium.crypto_aead_xchacha20poly1305_ietf_encrypt(
    message,
    nonce,
    key,
    aad
  );
  return (
    VERSION +
    await sodium.sodium_bin2hex(nonce) +
    await sodium.sodium_bin2hex(encrypted)
  );
}

/**
 * @param {string|Uint8Array} message
 * @param {Uint8Array} key
 * @param {string|null} assocData
 * @returns {string}
 */
async function decryptData(encrypted, key, assocData = null) {
  const ver = encrypted.slice(0, 2);
  if (!await sodium.sodium_memcmp(ver, VERSION)) {
    throw new Error("Incorrect version: " + ver);
  }
  const nonce = await sodium.sodium_hex2bin(encrypted.slice(2, 50));
  const ciphertext = await sodium.sodium_hex2bin(encrypted.slice(50));
  const aad = JSON.stringify({
    'version': ver,
    'nonce': encrypted.slice(2, 50),
    'extra': assocData
  });
  const plaintext = await sodium.crypto_aead_xchacha20poly1305_ietf_decrypt(
    ciphertext,
    nonce,
    key,
    aad
  );
  return plaintext.toString('utf-8');
}
Under-the-hood, this is using XChaCha20-Poly1305, which is less sensitive to timing leaks than AES-GCM. However, like AES-GCM, this encryption mode doesn’t provide message- or key-commitment.
If you want key commitment, you should derive two keys from your key using a KDF based on hash functions: one for actual encryption, and the other as a key commitment value.
If you want message commitment, you can use AES-CTR + HMAC-SHA256 or XChaCha20 + BLAKE2b-MAC.
If you want both, ask Taylor Campbell about his BLAKE3-based design.
A modified version of the above code with key-commitment might look like this:
const VERSION = "v2";

/**
 * Derive an encryption key and a commitment hash.
 * @param {CryptographyKey} key
 * @param {Uint8Array} nonce
 * @returns {{encKey: CryptographyKey, commitment: Uint8Array}}
 */
async function deriveKeys(key, nonce) {
  // Distinct prefix bytes (0x01, 0x02) domain-separate the two outputs
  const encKey = new CryptographyKey(await sodium.crypto_generichash(
    Buffer.concat([Buffer.from([0x01]), nonce]),
    key
  ));
  const commitment = await sodium.crypto_generichash(
    Buffer.concat([Buffer.from([0x02]), nonce]),
    key
  );
  return {encKey, commitment};
}

/**
 * @param {string|Uint8Array} message
 * @param {Uint8Array} key
 * @param {string|null} assocData
 * @returns {string}
 */
async function encryptData(message, key, assocData = null) {
  const nonce = await sodium.randombytes_buf(24);
  const aad = JSON.stringify({
    'version': VERSION,
    'nonce': await sodium.sodium_bin2hex(nonce),
    'extra': assocData
  });
  const {encKey, commitment} = await deriveKeys(key, nonce);
  const encrypted = await sodium.crypto_aead_xchacha20poly1305_ietf_encrypt(
    message,
    nonce,
    encKey,
    aad
  );
  return (
    VERSION +
    await sodium.sodium_bin2hex(nonce) +
    await sodium.sodium_bin2hex(commitment) +
    await sodium.sodium_bin2hex(encrypted)
  );
}

/**
 * @param {string|Uint8Array} message
 * @param {Uint8Array} key
 * @param {string|null} assocData
 * @returns {string}
 */
async function decryptData(encrypted, key, assocData = null) {
  const ver = encrypted.slice(0, 2);
  if (!await sodium.sodium_memcmp(ver, VERSION)) {
    throw new Error("Incorrect version: " + ver);
  }
  const nonce = await sodium.sodium_hex2bin(encrypted.slice(2, 50));
  const ciphertext = await sodium.sodium_hex2bin(encrypted.slice(114));
  const aad = JSON.stringify({
    'version': ver,
    'nonce': encrypted.slice(2, 50),
    'extra': assocData
  });
  const storedCommitment = await sodium.sodium_hex2bin(encrypted.slice(50, 114));
  const {encKey, commitment} = await deriveKeys(key, nonce);
  if (!(await sodium.sodium_memcmp(storedCommitment, commitment))) {
    throw new Error("Incorrect commitment value");
  }
  const plaintext = await sodium.crypto_aead_xchacha20poly1305_ietf_decrypt(
    ciphertext,
    nonce,
    encKey,
    aad
  );
  return plaintext.toString('utf-8');
}
Another design choice you might make is to encode ciphertext with base64 instead of hexadecimal. That doesn’t significantly alter the design here, but it does mean your decoding logic has to accommodate this.
You SHOULD version your ciphertexts, and include this in the AAD provided to your AEAD encryption mode. I used “v1” and “v2” as a version string above, but you can use your software name for that too.
II. Key Agreement
If you’re not familiar with Elliptic Curve Diffie-Hellman or Authenticated Key Exchanges, two of the earliest posts on this blog were dedicated to those topics.
Key agreement in libsodium uses Elliptic Curve Diffie-Hellman over Curve25519, or X25519 for short.
There are many schools of thought for extending ECDH into an authenticated key exchange protocol.
We’re going to implement what the Signal Protocol calls X3DH instead of doing some interactive EdDSA + ECDH hybrid, because X3DH provides cryptographic deniability (see this section of the X3DH specification for more information).
For the moment, I’m going to assume a client-server model. That may or may not be appropriate for your design. You can substitute “the server” for “the other participant” in a peer-to-peer configuration.
Head’s up: This section of the blog post is code-heavy.
Update (November 23, 2020): I implemented this design in TypeScript, if you’d like something tangible to work with. I call my library, Rawr X3DH.
X3DH Pre-Key Bundles
Each participant will need to upload an Ed25519 identity key once (which is a detail covered in another section), which will be used to sign bundles of X25519 public keys to use for X3DH.
Your implementation will involve a fair bit of boilerplate, like so:
/**
 * Generate an X25519 keypair.
 *
 * @returns {{secretKey: X25519SecretKey, publicKey: X25519PublicKey}}
 */
async function generateKeyPair() {
  const keypair = await sodium.crypto_box_keypair();
  return {
    secretKey: await sodium.crypto_box_secretkey(keypair),
    publicKey: await sodium.crypto_box_publickey(keypair)
  };
}

/**
 * Generates some number of X25519 keypairs.
 *
 * @param {number} preKeyCount
 * @returns {{secretKey: X25519SecretKey, publicKey: X25519PublicKey}[]}
 */
async function generateBundle(preKeyCount = 100) {
  const bundle = [];
  for (let i = 0; i < preKeyCount; i++) {
    bundle.push(await generateKeyPair());
  }
  return bundle;
}

/**
 * BLAKE2b( len(PK) | PK_0, PK_1, ... PK_n )
 *
 * @param {X25519PublicKey[]} publicKeys
 * @returns {Uint8Array}
 */
async function prehashPublicKeysForSigning(publicKeys) {
  const hashState = await sodium.crypto_generichash_init();
  // First, update the state with the number of public keys
  const pkLen = new Uint8Array([
    (publicKeys.length >>> 24) & 0xff,
    (publicKeys.length >>> 16) & 0xff,
    (publicKeys.length >>> 8) & 0xff,
    publicKeys.length & 0xff
  ]);
  await sodium.crypto_generichash_update(hashState, pkLen);
  // Next, update the state with each public key
  for (let pk of publicKeys) {
    await sodium.crypto_generichash_update(
      hashState,
      pk.getBuffer()
    );
  }
  // Return the finalized BLAKE2b hash
  return await sodium.crypto_generichash_final(hashState);
}

/**
 * Signs a bundle. Returns the signature.
 *
 * @param {Ed25519SecretKey} signingKey
 * @param {X25519PublicKey[]} publicKeys
 * @returns {Uint8Array}
 */
async function signBundle(signingKey, publicKeys) {
  return sodium.crypto_sign_detached(
    await prehashPublicKeysForSigning(publicKeys),
    signingKey
  );
}

/**
 * This is just so you can see how verification looks.
 *
 * @param {Ed25519PublicKey} verificationKey
 * @param {X25519PublicKey[]} publicKeys
 * @param {Uint8Array} signature
 */
async function verifyBundle(verificationKey, publicKeys, signature) {
  return sodium.crypto_sign_verify_detached(
    await prehashPublicKeysForSigning(publicKeys),
    verificationKey,
    signature
  );
}
This boilerplate exists just so you can do something like this:
/**
 * Generate some number of X25519 keypairs.
 * Persist the bundle.
 * Sign the bundle of publickeys with the Ed25519 secret key.
 * Return the signed bundle (which can be transmitted to the server.)
 *
 * @param {Ed25519SecretKey} signingKey
 * @param {number} numKeys
 * @returns {{signature: string, bundle: string[]}}
 */
async function x3dh_pre_key(signingKey, numKeys = 100) {
  const bundle = await generateBundle(numKeys);
  const publicKeys = bundle.map(x => x.publicKey);
  const signature = await signBundle(signingKey, publicKeys);
  // This is a stub; how you persist it is app-specific:
  persistBundleNotDefinedHere(signingKey, bundle);
  // Hex-encode all the public keys
  const encodedBundle = [];
  for (let pk of publicKeys) {
    encodedBundle.push(await sodium.sodium_bin2hex(pk.getBuffer()));
  }
  return {
    'signature': await sodium.sodium_bin2hex(signature),
    'bundle': encodedBundle
  };
}
And then you can drop the output of x3dh_pre_key(secretKey) into a JSON-encoded HTTP request.
In accordance with Signal’s X3DH spec, you want to use x3dh_pre_key(secretKey, 1) to generate the “signed pre-key” bundle and x3dh_pre_key(secretKey, 100) when pushing 100 one-time keys to the server.
X3DH Initiation
This section conforms to the Sending the Initial Message section of the X3DH specification.
When you initiate a conversation, the server should provide you with a bundle containing:
- Your peer’s Identity key (an Ed25519 public key)
- Your peer’s current Signed Pre-Key (an X25519 public key)
- (If any remain unburned) One of your peer’s One-Time Keys (an X25519 public key) — which the server should then delete
If we assume the structure of this response looks like this:
{
  "IdentityKey": "...",
  "SignedPreKey": {
    "Signature": "...",
    "PreKey": "..."
  },
  "OneTimeKey": "..." // or NULL
}
Then we can write the initiation step of the handshake like so:
/**
 * Get SK for initializing an X3DH handshake
 *
 * @param {object} r -- See previous code block
 * @param {Ed25519SecretKey} senderKey
 */
async function x3dh_initiate_send_get_sk(r, senderKey) {
  const identityKey = new Ed25519PublicKey(
    await sodium.sodium_hex2bin(r.IdentityKey)
  );
  const signedPreKey = new X25519PublicKey(
    await sodium.sodium_hex2bin(r.SignedPreKey.PreKey)
  );
  const signature = await sodium.sodium_hex2bin(r.SignedPreKey.Signature);

  // Check signature
  const valid = await verifyBundle(identityKey, [signedPreKey], signature);
  if (!valid) {
    throw new Error("Invalid signature");
  }
  const ephemeral = await generateKeyPair();
  const ephSecret = ephemeral.secretKey;
  const ephPublic = ephemeral.publicKey;

  // Turn the Ed25519 keys into X25519 keys for X3DH:
  const senderX = await sodium.crypto_sign_ed25519_sk_to_curve25519(senderKey);
  const recipientX = await sodium.crypto_sign_ed25519_pk_to_curve25519(identityKey);

  // See the X3DH specification to really understand this part:
  const DH1 = await sodium.crypto_scalarmult(senderX, signedPreKey);
  const DH2 = await sodium.crypto_scalarmult(ephSecret, recipientX);
  const DH3 = await sodium.crypto_scalarmult(ephSecret, signedPreKey);
  let SK;
  if (r.OneTimeKey) {
    let DH4 = await sodium.crypto_scalarmult(
      ephSecret,
      new X25519PublicKey(await sodium.sodium_hex2bin(r.OneTimeKey))
    );
    SK = kdf(new Uint8Array(
      [].concat(DH1.getBuffer())
        .concat(DH2.getBuffer())
        .concat(DH3.getBuffer())
        .concat(DH4.getBuffer())
    ));
    DH4.wipe();
  } else {
    SK = kdf(new Uint8Array(
      [].concat(DH1.getBuffer())
        .concat(DH2.getBuffer())
        .concat(DH3.getBuffer())
    ));
  }
  // Wipe keys
  DH1.wipe();
  DH2.wipe();
  DH3.wipe();
  ephSecret.wipe();
  senderX.wipe();
  return {
    IK: identityKey,
    EK: ephPublic,
    SK: SK,
    OTK: r.OneTimeKey // might be NULL
  };
}

/**
 * Initialize an X3DH handshake
 *
 * @param {string} recipientIdentity - Some identifier for the user
 * @param {Ed25519SecretKey} secretKey - Sender's secret key
 * @param {Ed25519PublicKey} publicKey - Sender's public key
 * @param {string} message - The initial message to send
 * @returns {object}
 */
async function x3dh_initiate_send(recipientIdentity, secretKey, publicKey, message) {
  const r = await get_server_response(recipientIdentity);
  const {IK, EK, SK, OTK} = await x3dh_initiate_send_get_sk(r, secretKey);
  const assocData = await sodium.sodium_bin2hex(
    new Uint8Array(
      [].concat(publicKey.getBuffer())
        .concat(IK.getBuffer())
    )
  );
  /*
   * We're going to set the session key for our recipient to SK.
   * This might invoke a ratchet.
   *
   * Either SK or the output of the ratchet derived from SK
   * will be returned by getEncryptionKey().
   */
  await setSessionKey(recipientIdentity, SK);
  const encrypted = await encryptData(
    message,
    await getEncryptionKey(recipientIdentity),
    assocData
  );
  return {
    "Sender": my_identity_string,
    "IdentityKey": await sodium.sodium_bin2hex(publicKey),
    "EphemeralKey": await sodium.sodium_bin2hex(EK),
    "OneTimeKey": OTK,
    "CipherText": encrypted
  };
}
We didn’t define setSessionKey() or getEncryptionKey() above; they will be covered later.
X3DH – Receiving an Initial Message
This section implements the Receiving the Initial Message section of the X3DH Specification.
We’re going to assume the structure of the request looks like this:
{
  "Sender": "...",
  "IdentityKey": "...",
  "EphemeralKey": "...",
  "OneTimeKey": "...",
  "CipherText": "..."
}
The code to handle this should look like this:
/**
 * Handle an X3DH initiation message as a receiver
 *
 * @param {object} r -- See previous code block
 * @param {Ed25519SecretKey} identitySecret
 * @param {Ed25519PublicKey} identityPublic
 * @param {Ed25519SecretKey} preKeySecret
 */
async function x3dh_initiate_recv_get_sk(
  r,
  identitySecret,
  identityPublic,
  preKeySecret
) {
  // Decode strings
  const senderIdentityKey = new Ed25519PublicKey(
    await sodium.sodium_hex2bin(r.IdentityKey),
  );
  const ephemeral = new X25519PublicKey(
    await sodium.sodium_hex2bin(r.EphemeralKey),
  );

  // Ed25519 -> X25519
  const senderX = await sodium.crypto_sign_ed25519_pk_to_curve25519(senderIdentityKey);
  const recipientX = await sodium.crypto_sign_ed25519_sk_to_curve25519(identitySecret);

  // See the X3DH specification to really understand this part:
  const DH1 = await sodium.crypto_scalarmult(preKeySecret, senderX);
  const DH2 = await sodium.crypto_scalarmult(recipientX, ephemeral);
  const DH3 = await sodium.crypto_scalarmult(preKeySecret, ephemeral);
  let SK;
  if (r.OneTimeKey) {
    let DH4 = await sodium.crypto_scalarmult(
      await fetchAndWipeOneTimeSecretKey(r.OneTimeKey),
      ephemeral
    );
    SK = kdf(new Uint8Array(
      [].concat(DH1.getBuffer())
        .concat(DH2.getBuffer())
        .concat(DH3.getBuffer())
        .concat(DH4.getBuffer())
    ));
    DH4.wipe();
  } else {
    SK = kdf(new Uint8Array(
      [].concat(DH1.getBuffer())
        .concat(DH2.getBuffer())
        .concat(DH3.getBuffer())
    ));
  }
  // Wipe keys
  DH1.wipe();
  DH2.wipe();
  DH3.wipe();
  recipientX.wipe();
  return {
    Sender: r.Sender,
    SK: SK,
    IK: senderIdentityKey
  };
}

/**
 * Initiate an X3DH handshake as a recipient
 *
 * @param {object} req - Request object
 * @returns {string} - The initial message
 */
async function x3dh_initiate_recv(req) {
  const {identitySecret, identityPublic} = await getIdentityKeypair();
  const {preKeySecret, preKeyPublic} = await getPreKeyPair();
  const {Sender, SK, IK} = await x3dh_initiate_recv_get_sk(
    req,
    identitySecret,
    identityPublic,
    preKeySecret,
    preKeyPublic
  );
  const assocData = await sodium.sodium_bin2hex(
    new Uint8Array(
      [].concat(IK.getBuffer())
        .concat(identityPublic.getBuffer())
    )
  );
  try {
    await setSessionKey(Sender, SK);
    return decryptData(
      req.CipherText,
      await getEncryptionKey(Sender),
      assocData
    );
  } catch (e) {
    await destroySessionKey(Sender);
    throw e;
  }
}
And with that, you’ve successfully implemented X3DH and symmetric encryption in JavaScript.
We abstracted some of the details away (i.e. kdf(), the transport mechanisms, the session key management mechanisms, and a few others). Some of them will be highly specific to your application, so it doesn’t make a ton of sense to flesh them out.
One thing to keep in mind: According to the X3DH specification, participants should regularly (e.g. weekly) replace their Signed Pre-Key in the server with a fresh one. They should also publish more One-Time Keys when they start to run low.
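Since kdf() keeps appearing in the code above without a definition, here is one possible sketch, using BLAKE2b in place of the HKDF construction from the X3DH spec (Rawr X3DH makes a similar substitution). The 32 bytes of 0xFF prepended to the input mirror the spec's F padding for X25519; treat the details as assumptions, not a definitive implementation.

// One way to fill in the kdf() stub used by the X3DH functions above.
async function kdf(inputKeyMaterial) {
  const F = new Uint8Array(32).fill(0xff); // per the X3DH spec, for curve X25519
  const input = new Uint8Array(F.length + inputKeyMaterial.length);
  input.set(F, 0);
  input.set(inputKeyMaterial, F.length);
  return new CryptographyKey(
    await sodium.crypto_generichash(Buffer.from(input), null, 32)
  );
}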
If you’d like to see a complete reference implementation of X3DH, as I mentioned before, Rawr-X3DH implements it in TypeScript.
Session Key Management
Using X3DH for every message is inefficient and unnecessary. Even the Signal Protocol doesn’t do that.
Instead, Signal specifies a Double Ratchet protocol that combines a Symmetric-Key Ratchet on subsequent messages with a Diffie-Hellman-based ratcheting protocol.
Signal even specifies integration guidelines for the Double Ratchet with X3DH.
It’s worth reading through the specification to understand their usages of Key-Derivation Functions (KDFs) and KDF Chains.
Although it is recommended to use HKDF as the Signal protocol specifies, you can strictly speaking use any secure keyed PRF to accomplish the same goal.
What follows is an example of a symmetric KDF chain that uses BLAKE2b with 512-bit digests of the current session key; the leftmost half of the BLAKE2b digest becomes the new session key, while the rightmost half becomes the encryption key.
const SESSION_KEYS = {};

/**
 * Note: In reality you'll want to have two separate sessions:
 * One for receiving data, one for sending data.
 *
 * @param {string} identity
 * @param {CryptographyKey} key
 */
async function setSessionKey(identity, key) {
  SESSION_KEYS[identity] = key;
}

async function getEncryptionKey(identity) {
  if (!SESSION_KEYS[identity]) {
    throw new Error("No session key for " + identity);
  }
  const blake2bMac = await sodium.crypto_generichash(
    SESSION_KEYS[identity],
    null,
    64
  );
  SESSION_KEYS[identity] = new CryptographyKey(blake2bMac.slice(0, 32));
  return new CryptographyKey(blake2bMac.slice(32, 64));
}
In the interest of time, a full DHRatchet implementation is left as an exercise to the reader (since it’s mostly a state machine), but using the appropriate functions provided by sodium-plus (crypto_box_keypair(), crypto_scalarmult()) should be relatively straightforward.
Make sure your KDFs use domain separation, as per the Signal Protocol specifications.
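To make the exercise slightly more concrete, here is a rough sketch of a single Diffie-Hellman ratchet step under the same assumptions as the rest of this post (sodium-plus, BLAKE2b standing in for the spec's HKDF). A real implementation should follow the Double Ratchet specification's KDFs and state handling; this only shows the shape.

// One DH ratchet step: mix a fresh X25519 shared secret into the root key,
// producing a new root key and a new sending chain key.
async function dhRatchetStep(rootKey, theirRatchetPublicKey) {
  const myRatchetKeypair = await sodium.crypto_box_keypair();
  const mySecret = await sodium.crypto_box_secretkey(myRatchetKeypair);
  const myPublic = await sodium.crypto_box_publickey(myRatchetKeypair);
  const shared = await sodium.crypto_scalarmult(mySecret, theirRatchetPublicKey);
  // KDF(root key, DH output) -> 64 bytes; left half = new root key, right half = chain key
  const okm = await sodium.crypto_generichash(shared.getBuffer(), rootKey, 64);
  return {
    rootKey: new CryptographyKey(okm.slice(0, 32)),
    chainKey: new CryptographyKey(okm.slice(32, 64)),
    ratchetPublicKey: myPublic // send this alongside the next message
  };
}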
Group Key Agreement
The Signal Protocol specified X3DH and the Double Ratchet for securely encrypting information between two parties.
Group conversations are trickier, because you have to be able to encrypt information that multiple recipients can decrypt, add/remove participants to the conversation, etc.
(The same complexity comes with multi-device support for end-to-end encryption.)
The best design I’ve read to date for tackling group key agreement is the IETF Messaging Layer Security RFC draft.
I am not going to implement the entire MLS RFC in this blog post. If you want to support multiple devices or group conversations, you’ll want a complete MLS implementation to work with.
Brief Recap
That was a lot of ground to cover, but we’re not done yet.
(Art by Khia.)
So far we’ve tackled encryption, initial key agreement, and session key management. However, we did not flesh out how Identity Keys (which are signing keys–Ed25519 specifically–rather than Diffie-Hellman keys) are managed. That detail was just sorta hand-waved until now.
So let’s talk about that.
III. Identity Key Management
There’s a meme among technology bloggers to write a post titled “Falsehoods Programmers Believe About _____”.
Fortunately for us, Identity is one of the topics that furries are positioned to understand better than most (due to fursonas): Identities have a many-to-many relationship with Humans.
In an end-to-end encryption protocol, each identity will consist of some identifier (phone number, email address, username and server hostname, etc.) and an Ed25519 keypair (for which the public key will be published).
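Generating the Ed25519 half of an identity with sodium-plus is a short sketch; the return names here mirror the getIdentityKeypair() stub used in the earlier code, which is my own naming, not a fixed API.

// Generate a long-term Ed25519 identity keypair; publish the public key,
// keep the secret key in your most protected storage.
async function generateIdentityKeypair() {
  const keypair = await sodium.crypto_sign_keypair();
  return {
    identitySecret: await sodium.crypto_sign_secretkey(keypair),
    identityPublic: await sodium.crypto_sign_publickey(keypair)
  };
}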
But how do you know whether or not a given public key is correct for a given identity?
This is where we segue into one of the hard problems in cryptography, where the solutions available are entirely dependent on your threat model: Public Key Infrastructure (PKI).
Some common PKI designs include:
- Certificate Authorities (CAs) — TLS does this
- Web-of-Trust (WoT) — The PGP ecosystem does this
- Trust On First Use (TOFU) — SSH does this
- Key Transparency / Certificate Transparency (CT) — TLS also does this for ensuring CA-issued certificates are auditable (although it was originally meant to replace Certificate Authorities)
And you can sort of choose-your-own-adventure on this one, depending on what’s most appropriate for the type of software you’re building and who your customers are.
One design I’m particularly fond of is called Gossamer, which is a PKI design without Certificate Authorities, originally designed for making WordPress’s automatic updates more secure (i.e. so every developer can sign their theme and plugin updates).
Since we only need to maintain an up-to-date repository of Ed25519 identity keys for each participant in our end-to-end encryption protocol, this makes Gossamer a suitable starting point.
Gossamer specifies a limited grammar of Actions that can be performed: AppendKey, RevokeKey, AppendUpdate, RevokeUpdate, and AttestUpdate. These actions are signed and published to an append-only cryptographic ledger.
I would propose a sixth action: AttestKey, so you can have WoT-like assurances and key-signing parties. (If nothing else, you should be able to attest that the identity keys of other cryptographic ledgers in the network are authentic at a point in time.)
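To give a feel for what publishing such an action involves, each action is just a message signed with the participant's Ed25519 key and appended to the ledger. This is NOT Gossamer's actual wire format; the field names below are made up for illustration.

// Hypothetical shape of a signed ledger action; Gossamer defines its own format.
async function signAction(action, identitySecret) {
  // action: e.g. { verb: 'AppendKey', provider: 'soatok', publicKey: '...hex...' }
  const message = JSON.stringify(action);
  const signature = await sodium.crypto_sign_detached(message, identitySecret);
  return { ...action, signature: await sodium.sodium_bin2hex(signature) };
}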
IV. Backdoor Resistance
In the previous section, I proposed the use of Gossamer as a PKI for Identity Keys. This would provide Ed25519 keypairs for use with X3DH and the Double Ratchet, which would in turn provide session keys to use for symmetric authenticated encryption.
If you’ve implemented everything preceding this section, you have a full-stack end-to-end encryption protocol. But let’s make intelligence agencies and surveillance capitalists even more mad by making it impractical to backdoor our software (and impossible to silently backdoor it).
How do we pull that off?
You want Binary Transparency.
For us, the implementation is simple: Use Gossamer as it was originally intended (i.e. to secure your software distribution channels).
Gossamer provides up-to-date verification keys and a commitment to a cryptographic ledger of every software update. You can learn more about its inspiration here.
It isn’t enough to merely use Gossamer to manage keys and update signatures. You need independent third parties to use the AttestUpdate action to assert one or more of the following:
- That builds are reproducible from the source code.
- That they have reviewed the source code and found no evidence of backdoors or exploitable vulnerabilities.
(And then you should let your users decide which of these independent third parties they trust to vet software updates.)
Closing Remarks
The U.S. Government cries and moans a lot about “criminals going dark” and wonders a lot about how to solve the “going dark problem”.
If more software developers implement end-to-end encryption in their communications software, then maybe one day they won’t be able to use dragnet surveillance to spy on citizens and they’ll be forced to do actual detective work to solve actual crimes.
Y’know, like their job description actually entails?
Let’s normalize end-to-end encryption. Let’s normalize backdoor-resistant software distribution.
Let’s collectively tell the intelligence community in every sophisticated nation state the one word they don’t hear often enough:
Especially if you’re a furry. Because we improve everything! :3
Questions You Might Have
What About Private Contact Discovery?
That’s one of the major reasons why the thing we’re building isn’t meant to compete with Signal (and it MUST NOT be advertised as such): Signal is a privacy tool, and their servers have no way of identifying who can contact who.
What we’ve built here isn’t a complete privacy solution, it’s only providing end-to-end encryption (and possibly making NSA employees cry at their desk).
Does This Design Work with Federation?
Yes. Each identifier string can be [username] at [hostname].
What About Network Metadata?
If you want anonymity, you want to use Tor.
Why Are You Using Ed25519 Keys for X3DH?
If you only read the key agreement section of this blog post and the fact that I’m passing around Ed25519 public keys seems weird, you might have missed the identity section of this blog post where I suggested piggybacking on another protocol called Gossamer to handle the distribution of Ed25519 public keys. (Gossamer is also beneficial for backdoor resistance in software update distribution, as described in the subsequent section.)
Furthermore, we’re actually using birationally equivalent X25519 keys derived from the Ed25519 keypair for the X3DH step. This is a deviation from what Signal does (using X25519 keys everywhere, then inventing an EdDSA variant to support their usage).
const publicKeyX = await sodium.crypto_sign_ed25519_pk_to_curve25519(foxPublicKey);
const secretKeyX = await sodium.crypto_sign_ed25519_sk_to_curve25519(wolfSecretKey);
(Using fox/wolf instead of Alice/Bob, because it’s cuter.)
This design pattern has a few advantages:
- It makes Gossamer integration seamless, which means you can use Ed25519 for identities and still have a deniable X3DH handshake for 1:1 conversations while implementing the rest of the designs proposed.
- This approach to X3DH can be implemented entirely with libsodium functions, without forcing you to write your own cryptography implementations (i.e. for XEdDSA).
The only disadvantages I’m aware of are:
- It deviates from Signal’s core design in a subtle way that means you don’t get to claim the exact same advantages Signal does when it comes to peer review.
- Some cryptographers are distrustful of the use of birationally equivalent X25519 keys from Ed25519 keys (although there isn’t a vulnerability any of them have been able to point me to that doesn’t involve torsion groups–which libsodium’s implementation already avoids).
If these concerns are valid enough to decide against my implementation above, I invite you to talk with cryptographers about your concerns and then propose alternatives.
Has Any of This Been Implemented Already?
You can find implementations for the designs discussed on this blog post below:
- Rawr-X3DH implements X3DH in TypeScript (added 2020-11-23)
I will update this section of the blog post as implementations surface.
https://soatok.blog/2020/11/14/going-bark-a-furrys-guide-to-end-to-end-encryption/
#authenticatedEncryption #authenticatedKeyExchange #crypto #cryptography #encryption #endToEndEncryption #libsodium #OnlinePrivacy #privacy #SecurityGuidance #symmetricEncryption
Let me say up front, I’m no stranger to negative or ridiculous feedback. It’s incredibly hard to hurt my feelings, especially if you intend to. You don’t openly participate in the furry fandom since 2010 without being accustomed to malevolence and trolling. If this were simply a story of someone being an asshole to me, I would have shrugged and moved on with my life.
It’s important that you understand this, because when you call it like you see it, sometimes people dismiss your criticism with “triggered” memes. This isn’t me being offended. I promise.
My recent blog post about crackpot cryptography received a fair bit of attention in the software community. At one point it was on the front page of Hacker News (which is something that pretty much never happens for anything I write).
Unfortunately, that also means I crossed paths with Zed A. Shaw, the author of Learn Python the Hard Way and other books often recommended to neophyte software developers.
As someone who spends a lot of time trying to help newcomers acclimate to the technology industry, there are some behaviors I’ve recognized in technologists over the years that makes it harder for newcomers to overcome anxiety, frustration, and Impostor Syndrome. (Especially if they’re LGBTQIA+, a person of color, or a woman.)
Normally, these are easily correctable behaviors exhibited by people who have good intentions but don’t realize the harm they’re causing–often not by what they’re saying, but by how they say it.
Sadly, I can’t be so generous about… whatever this is:
https://twitter.com/lzsthw/status/1359659091782733827
Having never before encountered a living example of a poorly-written villain towards the work I do to help disadvantaged people thrive in technology careers, I sought to clarify Shaw’s intent.
https://twitter.com/lzsthw/status/1359673331960733696
https://twitter.com/lzsthw/status/1359673714607013905
This is effectively a very weird hybrid of an oddly-specific purity test and a form of hazing ritual.
Let’s step back for a second. Can you even fathom the damage attitudes like this can cause? I can tell you firsthand, because it happened to me.
Interlude: Amplified Impostor Syndrome
In the beginning of my career, I was just a humble web programmer. Due to a long story I don’t want to get into now, I was acquainted with the culture of black-hat hacking that precipitates the DEF CON community.
In particular, I was exposed to the writings of a malicious group called Zero For 0wned, which made sport of hunting “skiddiez” and preached a very “shut up and stay in your lane” attitude:
Geeks don’t really come to HOPE to be lectured on the application of something simple, with very simple means, by a 15 year old. A combination of all the above could be why your room wasn’t full. Not only was it fairly empty, but it emptied at a rapid rate. I could barely take a seat through the masses pushing me to escape. Then when I thought no more people could possibly leave, they kept going. The room was almost empty when I gave in and left also. Heck, I was only there because we pwned the very resources you were talking about.
Zero For 0wned
My first security conference was B-Sides Orlando in 2013. Before the conference, I had been hanging out in the #hackucf IRC channel and had known about the event well in advance (and got along with all the organizers and most of the would-be attendees), and considered applying to their CFP.
I ultimately didn’t, solely because I was worried about a ZF0-style reception.
I had no reference frame for other folks’ understanding of cryptography (which is my chosen area of discipline in infosec), and thought things like timing side-channels were “obvious”–even to software developers outside infosec. (Such is the danger of being self-taught!)
“Geeks don’t really come to B-Sides Orlando to be lectured on the application of something simple, with very simple means,” is roughly how I imagined the vitriol would be framed.
If it can happen to me, it can happen to anyone interested in tech. It’s the responsibility of experts and mentors to spare beginners from falling into the trappings of other peoples’ grand-standing.
Pride Before Destruction
With this in mind, let’s return to Shaw. At this point, more clarifying questions came in, this time from Fredrick Brennan.
https://twitter.com/lzsthw/status/1359712275666505734
What an arrogant and bombastic thing to say!
At this point, I concluded that I can never again, in good conscience, recommend any of Shaw’s books to a fledgling programmer.
If you’ve ever published book recommendations before, I suggest auditing them to make sure you’re not inadvertently exposing beginners to his harmful attitude and problematic behavior.
But while we’re on the subject of Zed Shaw’s behavior…
https://twitter.com/lzsthw/status/1359714688972582916
If Shaw thinks of himself as a superior cryptography expert, surely he’s published cryptography code online before.
And surely, it will withstand a five-minute code review from a gay furry blogger who never went through Shaw’s prescribed hazing ritual to rediscover specifically the known problems in OpenSSL circa Heartbleed and is therefore not as much of a cryptography expert?
(Art by Khia.)
May I Offer You a Zero-Day in This Trying Time?
One of Zed A. Shaw’s Github projects is an implementation of SRP (Secure Remote Password)–an early Password-Authenticated Key Exchange algorithm often integrated with TLS (to form TLS-SRP).
Zed Shaw’s SRP implementation
Without even looking past the directory structure, we can already see that it implements an algorithm called TrueRand, about which cryptographer Matt Blaze has this to say:
https://twitter.com/mattblaze/status/438464425566412800
As noted by the README, Shaw stripped out all of the “extraneous” things and doesn’t have all of the previous versions of SRP “since those are known to be vulnerable”.
So given Shaw’s previous behavior, and the removal of vulnerable versions of SRP from his fork of Tom Wu’s libsrp code, it stands to reason that Shaw believes the cryptography code he published would be secure. Otherwise, why would he behave with such arrogance?
SRP in the Grass
Head’s up! If you aren’t cryptographically or mathematically inclined, this section might be a bit dense for your tastes. (Art by Scruff.)
When I say SRP, I’m referring to SRP-6a. Earlier versions of the protocol are out of scope; as are proposed variants (e.g. ones that employ SHA-256 instead of SHA-1).
Professor Matthew D. Green of Johns Hopkins University (who incidentally used to proverbially shit on OpenSSL in the way that Shaw expects everyone to, except productively) dislikes SRP but considered the protocol “not obviously broken”.
However, a secure protocol doesn’t mean the implementations are always secure. (Anyone who’s looked at older versions of OpenSSL’s BigNum library after reading my guide to side-channel attacks knows better.)
There are a few ways to implement SRP insecurely:
- Use an insecure random number generator (e.g. TrueRand) for salts or private keys.
- Fail to use a secure set of parameters (q, N, g).
To expand on this, SRP requires q be a Sophie-Germain prime and N be its corresponding Safe Prime. The standard Diffie-Hellman primes (MODP) are not sufficient for SRP.
This security requirement exists because SRP requires an algebraic structure called a ring, rather than a cyclic group (as per Diffie-Hellman).
- Fail to perform the critical validation steps as outlined in RFC 5054.
In one way or another, Shaw’s SRP library fails at every step of the way. The first two are trivial:
- We’ve already seen the RNG used by srpmin. TrueRand is not a cryptographically secure pseudo random number generator.
- Zed A. Shaw’s srpmin only supports unsafe primes for SRP (i.e. the ones from RFC 3526, which is for Diffie-Hellman).
The third is more interesting. Let’s talk about the RFC 5054 validation steps in more detail.
Parameter Validation in SRP-6a
Retraction (March 7, 2021): There are two errors in my original analysis.
First, I misunderstood the behavior of SRP_respond()
to involve a network transmission that an attacker could fiddle with. It turns out that this function doesn’t do what its name implies.
Additionally, I was using an analysis of SRP3 from 1997 to evaluate code that implements SRP6a. u
isn’t transmitted, so there’s no attack here.
I’ve retracted these claims (but you can find them on an earlier version of this blog post via archive.org). The other SRP security issues still stand; this erroneous analysis only affects the u
validation issue.
Vulnerability Summary and Impact
That’s a lot of detail, but I hope it’s clear to everyone that all of the following are true:
- Zed Shaw’s library’s use of TrueRand fails the requirement to use a secure random source. This weakness affects both the salt and the private keys used throughout SRP.
- The library in question ships support for unsafe parameters (particularly for the prime, N), which according to RFC 5054 can leak the client’s password.
Salts and private keys are predictable and the hard-coded parameters allow passwords to leak.
But yes, OpenSSL is the real problem, right?
(Art by Khia.)
Low-Hanging ModExp Fruit
Shaw’s SRP implementation is pluggable and supports multiple back-end implementations: OpenSSL, libgcrypt, and even the (obviously not constant-time) GMP.
Even in the OpenSSL case, Shaw doesn’t set the BN_FLG_CONSTTIME
flag on any of the inputs before calling BN_mod_exp()
(or, failing that, inside BigIntegerFromInt
).
As a consequence, this is additionally vulnerable to a local-only timing attack that leaks your private exponent (which is the SHA1 hash of your salt and password). Although the literature on timing attacks against SRP is sparse, this is one of those cases that’s obviously vulnerable.
Exploiting the timing attack against SRP requires the ability to run code on the same hardware as the SRP implementation. Consequently, it’s possible to exploit this SRP ModExp timing side-channel from separate VMs that have access to the same bare-metal hardware (i.e. L1 and L2 caches), unless other protections are employed by the hypervisor.
Leaking the private exponent is equivalent to leaking your password (in terms of user impersonation), and knowing the salt and identifier further allows an attacker to brute force your plaintext password (which is an additional risk for password reuse).
Houston, The Ego Has Landed
Earlier, when I mentioned the black hat hacker group Zero For 0wned and the negative impact of their hostile rhetoric, I omitted an important detail: some of the first words they included in their first ezine.
For those of you that look up to the people mentioned, read this zine, realize that everyone makes mistakes, but only the arrogant ones are called on it.
If Zed A. Shaw were a kinder or humbler person, you wouldn’t be reading this page right now. I have a million things I’d rather be doing than exposing the hypocrisy of an arrogant jerk who managed to bullshit his way into the privileged position of educating junior developers through his writing.
If I didn’t believe Zed Shaw was toxic and harmful to his very customer base, I certainly wouldn’t have publicly dropped zero-days in the code he published while engaging in shit-slinging at others’ work and publicly shaming others for failing to meet arbitrarily specific purity tests that don’t mean anything to anyone but him.
But as Dan Guido said about Time AI:
https://twitter.com/veorq/status/1159575230970396672
It’s high time we stopped tolerating Zed’s behavior in the technology community.
If you want to mitigate impostor syndrome and help more talented people succeed with their confidence intact, boycott Zed Shaw’s books. Stop buying them, stop stocking them, stop recommending them.
Learn Decency the Hard Way
(Updated on February 12, 2021)
One sentiment and question that came up a few times since I originally posted this is, approximately, “Who cares if he’s a jerk and a hypocrite if he’s right?”
But he isn’t. At best, Shaw almost has a point about the technology industry’s over-dependence on OpenSSL.
Shaw’s weird litmus test about whether or not my blog (which is less than a year old) had said anything about OpenSSL during the “20+ years it was obviously flawed” isn’t a salient critique of this problem. Without a time machine, there is no actionable path to improvement.
You can be an inflammatory asshole and still have a salient point. Shaw had neither while demonstrating the worst kind of conduct to expose junior developers to if we want to get ahead of the rampant Impostor Syndrome that plagues us.
This is needlessly destructive to his own audience.
Generally the only people you’ll find who outright like this kind of abusive behavior in the technology industry are the self-proclaimed “neckbeards” that live on the dregs of elitist chan culture and desire for there to be a priestly technologist class within society, and furthermore want to see themselves as part of this exclusive caste–if not at the top of it. I don’t believe these people have anyone else’s best interests at heart.
So let’s talk about OpenSSL.
OpenSSL is the Manifestation of Mediocrity
OpenSSL is everywhere, whether you realize it or not. Any programming language that provides a crypto
module (Erlang, Node.js, Python, Ruby, PHP) binds against OpenSSL libcrypto.
OpenSSL kind of sucks. It used to be a lot worse. A lot of people have spent the past 7 years of their careers trying to make it better.
A lot of OpenSSL’s suckage is because it’s written mostly in C, which isn’t memory-safe. (There’s also some Perl scripts to generate Assembly code, and probably some other crazy stuff under the hood I’m not aware of.)
A lot of OpenSSL’s suckage is because it has to be all things to all people that depend on it, because it’s ubiquitous in the technology industry.
But most of OpenSSL’s outstanding suckage is because, like most cryptography projects, its API was badly designed. Sure, it works well enough as a Swiss army knife for experts, but there’s too many sharp edges and unsafe defaults. Further, because so much of the world depends on these legacy APIs, it’s difficult (if not impossible) to improve the code quality without making upgrades a miserable task for most of the software industry.
What Can We Do About OpenSSL?
There are two paths forward.
First, you can contribute to the OpenSSL 3.0 project, which has a pretty reasonable design document that almost nobody outside of the OpenSSL team has probably ever read before. This is probably the path of least resistance for most of the world.
Second, you can migrate your code to not use OpenSSL. For example, all of the cryptography code I’ve written for the furry community to use in our projects is backed by libsodium rather than OpenSSL. This is a tougher sell for most programming languages–and, at minimum, requires a major version bump.
Both paths are valid. Improve or replace.
But what’s not valid is pointlessly and needlessly shit-slinging open source projects that you’re not willing to help. So I refuse to do that.
Anyone who thinks that makes me less of a cryptography expert should feel welcome to not just unfollow me on social media, but to block on their way out.
https://soatok.blog/2021/02/11/on-the-toxicity-of-zed-a-shaw/
#author #cryptography #ImpostorSyndrome #PAKE #SecureRemotePasswordProtocol #security #SRP #Technology #toxicity #vuln #ZedAShaw #ZeroDay
Sometimes my blog posts end up on social link-sharing websites with a technology focus, such as Lobste.rs or Hacker News.
On a good day, this presents an opportunity to share one’s writing with a larger audience and, more importantly, solicit a wider variety of feedback from one’s peers.
However, sometimes you end up with feedback like this, or this:
Apparently my fursona is ugly, and therefore I’m supposed to respect some random person’s preferences and suppress my identity online.
I’m no stranger to gatekeeping in online communities, internet trolls, or bullying in general. This isn’t my first rodeo, and it won’t be my last.
These kinds of comments exist to send a message not just to me, but to anyone else who’s furry or overtly LGBTQIA+: You’re weird and therefore not welcome here.
Of course, the moderators rarely share their views.
https://twitter.com/pushcx/status/1281207233020379137
Because of their toxic nature, there is only one appropriate response to these kinds of comments: Loud and persistent spite.
So here’s some more art I’ve commissioned or been gifted of my fursona over the years that I haven’t yet worked into a blog post:
Art by kazetheblaze
Art by leeohfox
Art by Diffuse Moose
If you hate furries so much, you will be appalled to learn that factoids about my fursona species have landed in LibreSSL’s source code (decoded).
Never underestimate furries, because we make the Internets go.
I will never let these kinds of comments discourage me from being open about my hobbies, interests, or personality. And neither should anyone else.
If you don’t like my blog posts because I’m a furry but still find the technical content interesting, know now and forever more that, when you try to push me or anyone else out for being different, I will only increase the fucking thing.
Header art created by @loviesophiee and inspired by floccinaucinihilipilification.
https://soatok.blog/2020/07/09/a-word-on-anti-furry-sentiments-in-the-tech-community/
#antiFurryBullying #cyberculture #furry #HackerNews #LobsteRs #Reddit