Items tagged with: crypto
Earlier this year, I wrote about a planned effort to design a federated Key Transparency proposal.
The end goal for this work was constrained to building end-to-end encryption into a new type of Direct Message on the Fediverse; other protocols and services are a stretch goal rather than the primary purpose.
The ideal situation is to enable developers to write code that looks as simple as this:
```javascript
async function initialize(message, recipient) {
  const bundle = await fediverse.getSignedPreKeyBundle(recipient);
  // This also checks the inclusion proof and witness cosigs:
  const pubKey = await directory.fetch(recipient, bundle.keyId);
  if (!await pubKey.verify(bundle)) {
    throw new Error('Invalid signature or bundle');
  }
  const session = await e2ee.beginSession(bundle);
  return session.send(message);
}

initialize("OwO what's this?", "soatok@furry.engineer")
  .then(async (session) => { /* ... */ });
```
And then have secure end-to-end encryption such that only a trusted public key for the intended recipient can decrypt.
Work on the specification for the Public Key Directory component has recently started. A few things have changed since my last blog post on the topic. I’ve also gotten a lot of similar questions that wouldn’t be appropriate to try to answer within the specification itself.
Original art: CMYKat, poorly edited by myself
The Big Picture
This section is written mostly for anyone who hasn’t paid attention to my other writing on this project.
This is how I believe this project will develop in the immediate future.
- Public Key Directory (PKD)
  - Specification (WIP)
  - Reference Implementation (Not Started)
  - Client-Side SDKs (Not Started)
    - Go
    - Ruby
    - PHP
    - TypeScript
- End-to-End Encryption for the Fediverse (FediE2EE)
  - Specification (WIP)
    - Client-Side Secret Key Management
    - Federated Public Key Infrastructure (See: PKD)
    - Asynchronous Forward-Secure Ratcheting Protocol + Group Key Agreement
    - Symmetric-Key Authenticated Encryption
  - Reference Implementations (Not Started)
    - Go
    - Ruby
    - PHP
    - TypeScript
- Specification (WIP)
- Fediverse Instance Patches to Support E2EE
  - Mastodon
  - ?????
- Client-Side Software
  - ?????
- PKD Extensions
  - age v1 public keys
Once the PKD is complete, there’s nothing stopping other people from defining their own PKD extensions and building on top of our design to add Key Transparency to their own protocols.
My focus, once we have a solid specification and reference implementation, is going to shift towards building FediE2EE.
I will not, however, be working on client-side software unless no one else expresses interest.
The reason for my tentative recusal is simple: I absolutely suck at user interface design, and you’ll probably hate whatever I can cobble together. I am many things, but an artist is not one of them.
You don’t want me designing UIs.
Art: CMYKat
To that end, my final deliverable in this project will be open source libraries (and accompanying guidance for using said libraries) that user experience experts can glue into their own clients.
That said, the only client-side software that should exist are browser extensions, desktop clients, and mobile apps.
I strongly discourage anyone from trying to deploy any code that touches secret keys to a traditional web application, or JavaScript running inside of a WebView.
I’m ambivalent on Electron. It’s better than redownloading the code from a server and running it blindly every page load, but it’s not highly regarded by security professionals.
Decisions Made
The most important topic to cover is design decisions I’ve made with my specification that will shape the evolution of this project.
Account Recovery
The current draft of the specification includes two Protocol Message types, BurnDown and Fireproof, which warrant further examination.
BurnDown is simple in concept: It revokes all of a particular user’s public keys and auxiliary data records. If you have no currently-trusted public keys, you are permitted to push a self-signed AddKey message.
A not-so-subtle detail of BurnDown that everyone should pay attention to is that the instance admin can issue them on behalf of other users hosted on that server.
If you aren’t comfortable with your admin being able to issue a BurnDown at any time, that’s where Fireproof comes in: It allows you to opt out of this capability entirely.
Fireproof is a double-edged sword. It protects you from malicious admins, but it prevents you from ever recovering your account if you lose access to all of your secret keys.
The most important decision I made here is: Fireproof is an opt-in protection, which (as of the current draft) has no “undo”. (I’m considering allowing an “undo” if it makes sense to ever do so. Tell me what you think!)
It’s often said that security at the expense of usability comes at the expense of security. Account recovery mechanisms are, necessarily, always some kind of a backdoor.
Conversely, defaults matter to security. Allowing BurnDown messages to be issued by default, from a specification perspective, implies most users will not issue a Fireproof message. (Client software may counteract this by prompting users with a choice when they first enroll, without a default setting, but I digress.)
I believe this choice is the best of all possible options, but you’re certainly welcome to disagree. It’s important to me that I be very loudly transparent about this decision.
No ECDSA Support
I had floated the idea of supporting NIST P-384 in my initial blog post.
Ultimately, there is no real incentive to do so, considering Ed25519 is now in FIPS 186-5 (which has been a standard for 18 months now).
And since we’re already using Ed25519, that satisfies any hypothetical FIPS use-case, should any governments choose to use my design for anything.
Thus, there will be no NIST P-384 support in the Public Key Directory project.
Art: AJ
Right To Be Forgotten
Key Transparency involves creating a global, immutable history. The Right To Be Forgotten enshrined in the EU’s GDPR law is fundamentally incompatible with the security goals of key transparency.
What this means is that, if I just shrugged and plugged Actor IDs and Public Keys into a hash function and committed that hash to a Merkle tree, then, years later, when a malicious troll demands their data be removed in accordance with the GDPR, it immediately becomes a catch-22.
Do you comply with the asserted right and break the history and provable security of your transparency ledger? Or do you risk legal peril for noncompliance?
When I first noodled over this, a few people said, “But you’re not in the EU. Why do you care?”

And, like, some of the people that will want to use this design one day are in the EU. Some of them may want to run their own Public Key Directory instances. I want them to have a good time with it. Is that so strange?
There is a way to get both properties without sacrificing the universal consistency of a single Merkle tree, but it relies on untested legal theory.
In short, what you need to do is:
- Client-side: Encrypt the sensitive fields, then send the ciphertext and the ephemeral key to the Directory.
- The Directory will commit to the ciphertext, not the plaintext, and hold onto the keys in order to decrypt records on-the-fly.
- When a data erasure request comes in by an EU citizen asserting their right to be forgotten, erase the key to render the data irrecoverable.
This constitutes a forceful BurnDown with amnesia.
Does This Introduce Any Specific Risks?
This works in principle, but a couple of things need to hold true in order to maintain the integrity of the transparency log.
- You need to use a committing authenticated encryption mode.
Without this property, it’s possible to swap one key for another (rather than simply erasing it) and get a valid plaintext for the (ciphertext, tag) committed in the ledger. This protects the Directory from a malicious user that later gets privileged access and manipulates stored keys.
- You need a plaintext commitment that is independent of the key. In addition to key-independence, it needs to be difficult to brute force, and you can’t have additional randomness (i.e., salts) added that could be changed after-the-fact to produce a “valid” commitment for another plaintext.
This protects users from a Directory that lies about which plaintext a particular ciphertext decrypts to.
This is currently specified as follows:
- Encryption is AES-256-CTR then HMAC-SHA512, Encrypt-then-MAC.
- The authentication tag covers a random value used for subkey derivation, as well as a Plaintext Commitment (Q).
- The Plaintext Commitment (Q) is derived from Argon2id and HMAC of the plaintext. There are some subtle details in how the Argon2id salt is derived, and why specific parameters were chosen the way they are, but this is covered in the specification document. (A rough sketch of the general shape follows below.)
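To make that concrete, here is a minimal TypeScript sketch, assuming Node’s built-in crypto module and the npm argon2 package. The field names, subkey derivation, and salt handling here are my own illustrative choices, not the normative construction from the spec:

```typescript
import { createCipheriv, createHmac, randomBytes } from 'node:crypto';
import * as argon2 from 'argon2';

// Sketch of a committing Encrypt-then-MAC record encryption. Illustrative
// only: the real subkey derivation, salt derivation, and parameters live
// in the specification document.
async function encryptRecordField(ikm: Buffer, plaintext: Buffer) {
  const nonce = randomBytes(32); // random value used for subkey derivation

  // Derive independent encryption and MAC subkeys from the input key material:
  const encKey = createHmac('sha512', ikm)
    .update(Buffer.concat([Buffer.from('encrypt'), nonce]))
    .digest()
    .subarray(0, 32);
  const macKey = createHmac('sha512', ikm)
    .update(Buffer.concat([Buffer.from('auth'), nonce]))
    .digest();

  // Plaintext commitment Q: Argon2id over the plaintext. The real salt
  // derivation is subtler; this is just one key-independent possibility.
  const salt = createHmac('sha512', 'Q-salt').update(plaintext).digest().subarray(0, 16);
  const Q = await argon2.hash(plaintext, { type: argon2.argon2id, raw: true, salt });

  // AES-256-CTR, then HMAC-SHA512 over everything (Encrypt-then-MAC):
  const iv = randomBytes(16);
  const cipher = createCipheriv('aes-256-ctr', encKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  const tag = createHmac('sha512', macKey)
    .update(Buffer.concat([nonce, iv, ciphertext, Q]))
    .digest();

  // Erasing `ikm` later renders `ciphertext` irrecoverable, while `Q` and
  // `tag` keep the ledger's commitment to this exact plaintext intact.
  return { nonce, iv, ciphertext, Q, tag };
}
```

The design point that matters: the tag binds the ciphertext to a key-independent commitment of the plaintext, so neither a swapped key nor a swapped plaintext can produce a record that still matches what was committed to the ledger.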
I had considered using Zero Knowledge Proofs here, but the current HMAC + Argon2 approach solves the problem securely without needing to study ZKDocs (and its supporting material) for hours.
Does This Give Us Compliance?
If we assume that “crypto shredding” is a valid technique for complying with data erasure demands, this lets us honor those requests while ensuring independent third parties can maintain a consistent view of the state of the transparency log.
It is worth repeating: This is not based on a tested legal theory. It is not legal advice. It is a best effort, good faith attempt to engineer a solution that would adhere to the “spirit of the law” as interpreted by an American furry with no academic or legal credentials from any country.
That being said, page 75 of this report about distributed ledgers and GDPR implies it’s not an entirely unfounded hypothesis.
Frequently Asked Questions
I’ve been asked a lot of similar questions since I started this project. This is as good a place as any to answer some of them.
Have you talked with ____?
Short answer: No, I haven’t.
Longer answer: My goal is simply to build a specification, then an implementation, that allows end-to-end encryption on the Fediverse.
No part of that sentence implies getting anyone else’s permission, or compromising on my security decisions in order to meet a competing concern.
For example, there’s always pressure from the open source community to support RSA keys, or to interoperate with other software (i.e., Matrix).
Those are non-goals of mine.
Should the ActivityPub authors or Mastodon developers decide differently from me, I wouldn’t want to sign off on their protocol design just because it appeases someone else.
I also don’t have any sort of requirement that what I specify and build becomes “standardized” in any meaningful way.
So, no, I haven’t talked with any of them yet. I also don’t plan to until the specifications and reference implementations are closer to maturity.
And even then, the message I have in mind for when that time comes looks something like this:
Hiya,

I’m building my own end-to-end encryption design for the Fediverse. Here’s the specification, here’s a reference implementation. (Links go here.)
[If applicable: I see you accepted a grant to build something similar.]
Please feel free to reuse whatever you deem useful (if anything) of my work in your own designs. I’m not interested in changing mine.
If you’d like to just adopt what I’ve already built, that’s fine too.
Soatok
I don’t want a deep involvement in anyone else’s political or social mess. I don’t want any of their grant money either, for that matter.
I just want to make security and privacy possible, to help queer people decide when, where, and how they selectively reveal themselves to others.
That said, if the W3C grant recipients want to look at the work I’m doing, they can consider it licensed under public domain, ISC, CC0, WTFPL, or whatever license is easiest for their lawyers to digest. I literally do not give a shit about intellectual property with this project. Go wild.
What if no one steps up to build client software?
Then, as a last resort, I will build something myself. Most likely, a browser extension.
It will probably be ugly, but lightweight, as I am deathly allergic to React Native, NextJS, and other front-end development frameworks.
How can I contribute?
The GitHub repository for the Public Key Directory spec is located here, if you’d like to read and/or suggest improvements to the specification.
As mentioned in my previous blog post on this topic, there is a Signal group for meta-discussion. If you are interested in writing code, that would be the best place to hang out.
What about money?

Although my Ko-Fi isn’t difficult to locate, nor hard to guess, I’m not soliciting any financial contributions for this project. It isn’t costing me anything to design or build, presently.
If you represent a company that focuses on cryptography development or software assurance consulting, I may be interested in talking at some point about getting the designs reviewed and implementations audited by professionals. However, we’re a long way from that right now.
Do you have a timeline in mind?
Somewhat, yeah.
I’d like to have version 0.1 of the specification tagged by the end of September 2024.
If I have the time to stick to that timeline, I intend to start working on the reference implementation and client SDKs in a few languages. This is when software developers’ contributions will begin to be the most welcomed.
I can’t really project a timeline beyond that, today.
In addition to building a reference implementation, I would like to pursue formal verification for my protocol design. This allows us to be confident in the correctness and security of the protocol as specified. I cannot provide even a rough estimate for how long that will take to complete.
Once this Public Key Directory project is in a good place, however, my focus will be shifting back towards specifying end-to-end encryption for the Fediverse. Because that’s why I’m doing all this in the first place.
https://soatok.blog/2024/08/21/federated-key-transparency-project-update/
#crypto #cryptography #OnlinePrivacy #symmetricCryptography
In late 2022, I blogged about the work needed to develop a specification for end-to-end encryption for the fediverse. I sketched out some of the key management components on GitHub, and then the public work abruptly stalled.

A few of you have wondered what’s the deal with that.
This post covers why this effort stalled and what I’m proposing we do next.
What’s The Hold Up?
The “easy” (relatively speaking) parts of the problem are as follows:
- Secret key management. (This is sketched out already, and provides multiple mechanisms for managing secret key material. Yay!)
- Bulk encryption of messages and media. (I’ve done a lot of work in this space over the years, so it’s an area I’m deeply familiar with. When we get to this part, it will be almost trivial. I’m not worried about it at all.)
- Forward-secure ratcheting / authenticated key exchange / group key agreement. (RFC 9420 is a great starting point.)
That is to say, managing secret keys, using secret keys, and deriving shared secret keys are all in the “easy” bucket.
The hard part? Public key management.
CMYKat made this
Why is Public Key Management Hard?
In a centralized service (think: Twitter, Facebook, etc.), this is actually much easier to build: Shove your public keys into a database, and design your client-side software to trust whatever public key your server gives them. Bob’s your uncle, pack it up and go home.

Unfortunately, it’s kind of stupid to build anything that way.
If you explicitly trust the server, the server could provide the wrong public key (i.e., one for which the server knows the corresponding secret key) and you’ll be none the wiser. This makes it trivial for the server to intercept and read your messages.
If your users are trusting you regardless, they’re probably just as happy if you don’t encrypt at the endpoint at all (beyond using TLS, but transport encryption is table stakes for any online service so nevermind that).
But let’s say you wanted to encrypt between peers anyway, because you’re feeling generous (or don’t want to field a bunch of questionably legal demands for user data by law enforcement; a.k.a. the Snapchat threat model).
You could improve endpoint trust by shoving all of your users’ public keys into an append-only data structure; i.e. key transparency, like WhatsApp proposed in 2023:
https://www.youtube.com/watch?v=_N4Q05z5vPE
And, to be perfectly clear, key transparency is a damn good idea.
Key transparency keeps everyone honest and makes it difficult for criminals to secretly replace a victim’s public key, because the act of doing so is unavoidably published to an append-only log.
The primary challenge is scaling a transparency feature to serve a public, federated system.
Federated Key Transparency?
Despite appearances, I haven’t been sitting on my thumbs for the past year or so. I’ve been talking with cryptography experts about their projects and papers in the same space.

Truthfully, I had been hoping to piggyback off one of those upcoming projects (which is focused more on public key discovery for SAML- and OAuth-like protocols) to build the Federated PKI piece for E2EE for the Fediverse.
Unfortunately, that project keeps getting delayed and pushed back, and I’ve just about run out of patience for it.
Additionally, there are some engineering challenges that I would need to tackle to build atop it, so it’s not as simple as “let’s just use that protocol”, either.
So let’s do something else instead:
Art: ScruffKerfluff
Fediverse Public Key Directories
Orthogonal to the overall Fediverse E2EE specification project, let’s build a Public Key Directory for the Fediverse.

This will not only be useful for building a coherent specification for E2EE (as it provides the “Federated PKI” component we’d need to build it securely), but it would also be extremely useful for software developers the whole world over.
Imagine this:
- If you want to fetch a user’s SSH public key, you can just query for their username and get a list of non-expired, non-revoked public keys to choose from.
- If you wanted public key pinning and key rotation for OAuth2 and/or OpenID Connect identity providers without having to update configurations or re-deploy any applications, you can do that.
- If you want to encrypt a message to a complete stranger, such that only they can decrypt it, without any sort of interaction (i.e., they could be offline for a holiday and still decrypt it when they get back), you could do that.
Oh, and best of all? You can get all these wins without propping up any cryptocurrency bullshit either.
From simple abstractions, great power may bloom.

Mark Miller
How Will This Work?
We need to design a specific kind of server that speaks a limited set of the ActivityPub protocol.

I say “limited” because it will not support editing or deleting messages provided by another instance. It will only append data.
To understand the full picture, let’s first look at the message types, public key types, and how the message types will be interpreted.
Message Types
Under the ActivityPub layer, we will need to specify a distinct set of Directory Message Types. An opening offer would look like this:
**AddKey** — contains an Asymmetric Public Key, a number mapped to the user and the instance that hosts it, and some other metadata (i.e., time).

**RevokeKey** — marks an existing public key as revoked.

**MoveIdentity** — moves all of the public keys from identity A to identity B. This can be used for username changes or instance migrations.

We may choose to allow more message types at the front-end if need be, but that’s enough for our purposes.
Public Key Types
We are not interested in backwards compatibility with every existing cryptosystem. We will only tolerate a limited set of public key types.

At the outset, only Ed25519 will be supported.
In the future, we will include post-quantum digital signature algorithms on this list, but not before the current designs have had time to mature.
RSA will never be included in the set.
ECDSA over NIST P-384 may be included at some point, if there’s sufficient interest in supporting e.g., US government users.
If ECDSA is ever allowed, RFC 6979 is mandatory.
Message Processing
When an instance sends a message to a Directory Server, it will need to contain a specific marker for our protocol. Otherwise, it will be rejected.

Each message will have its own processing rules.
After the processing rules are applied, the message will be stored in the Directory Server, and a hash of the message will be published to a SigSum transparency ledger. The Merkle root and inclusion proofs will be stored in an associated record, attached to the record for the new message.
Every message will have its hash published in SigSum. No exceptions.
We will also need a mechanism for witness co-signatures to be published and attached to the record.
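As a sketch of what such an associated record might hold (all field names here are mine, not the spec’s):

```typescript
// Hypothetical shape of a stored directory record. The Sigsum-related
// fields hold the transparency proof material described above.
type DirectoryRecord = {
  message: unknown;              // the Protocol Message, as received
  messageHash: string;           // hash published to the transparency ledger
  merkleRoot: string;            // Merkle root at time of inclusion
  inclusionProof: string[];      // proof linking messageHash to merkleRoot
  witnessCosignatures: string[]; // co-signatures from witnesses
};
```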
Additionally, all messages defined here are generated by the users, client-side. Servers are not trusted, generally, as part of the overall E2EE threat model.
AddKey
{ "@context": "https://example.com/ns/fedi-e2ee/v1", "action": "AddKey", "message": { "time": "2024-12-31T23:59:59Z", "identity": "foo@mastodon.example.com", "public-key": "ed25519:<key goes here>" }, "signature": "SignatureOfMessage"}The first
AddKey
for any given identity will need to be self-signed by the key being added (in addition to ActivityPub messages being signed by the instance).After an identity exists in the directory, every subsequent public key MUST be signed by a non-revoked keypair.
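As a sketch of those two rules (illustrative only; `verifyEd25519` is a hypothetical helper, and the real processing rules live in the specification):

```typescript
type AddKey = {
  action: 'AddKey';
  message: { time: string; identity: string; 'public-key': string };
  signature: string;
};

// Hypothetical Ed25519 verification helper; assumed, not a real API.
declare function verifyEd25519(publicKey: string, msg: AddKey): boolean;

function validateAddKey(msg: AddKey, trustedKeys: string[]): boolean {
  if (trustedKeys.length === 0) {
    // First AddKey for an identity: must be self-signed by the new key.
    return verifyEd25519(msg.message['public-key'], msg);
  }
  // Every subsequent AddKey: must be signed by a current, non-revoked key.
  return trustedKeys.some((pk) => verifyEd25519(pk, msg));
}
```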
RevokeKey
{ "@context": "https://example.com/ns/fedi-e2ee/v1", "action": "RevokeKey", "message": { "time": "2024-12-31T23:59:59Z", "identity": "foo@mastodon.example.com", "public-key": "ed25519:<key goes here>" }, "signature": "SignatureOfMessage"}This marks the public key as untrusted, and effectively “deletes” it from the list that users will fetch.
Important: RevokeKey will fail unless there is at least one more trusted public key for this user. Otherwise, a denial of service would be possible.
Replaying an AddKey for a previously-revoked key MUST fail.
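A sketch of those two guards (illustrative; `trustedKeys` is the identity’s current non-revoked key set, `revokedKeys` the set of previously revoked keys):

```typescript
function validateRevokeKey(trustedKeys: string[], target: string): boolean {
  // Refuse to revoke the last trusted key: that would enable a denial of
  // service. Recovery from zero keys is BurnDown's job, not RevokeKey's.
  return trustedKeys.includes(target) && trustedKeys.length > 1;
}

function validateReplayedAddKey(revokedKeys: string[], candidate: string): boolean {
  // Replaying an AddKey for a previously-revoked key MUST fail.
  return !revokedKeys.includes(candidate);
}
```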
MoveIdentity
{ "@context": "https://example.com/ns/fedi-e2ee/v1", "action": "MoveIdentity", "message": { "time": "2024-12-31T23:59:59Z", "old-identity": "foo@mastodon.example.com", "new-identity": "bar@akko.example.net" }, "signature": "SignatureOfMessage"}This exists to facilitate migrations and username changes.
Other Message Types
The above list is not exhaustive. We may need other message types depending on the exact feature set needed by the final specification.

Fetching Public Keys
A simple JSON API (and/or an ActivityStream; haven’t decided) will be exposed to query for the currently trusted public keys for a given identity.

```json
{
  "@context": "https://example.com/ns/fedi-e2ee/v1",
  "public-keys": [
    {
      "data": {
        "time": "2024-12-31T23:59:59Z",
        "identity": "foo@mastodon.example.com",
        "public-key": "ed25519:<key goes here>"
      },
      "signature": "SignatureOfData",
      "sigsum": { /* ... */ }
    },
    {
      "data": { /* ... */ },
      /* ... */
    },
    /* ... */
  ]
}
```
Simple and easy.
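For a sense of what client code against such an API might look like (the URL path is entirely made up; only the response fields mirror the example above):

```typescript
async function fetchTrustedKeys(directoryHost: string, identity: string): Promise<string[]> {
  // Hypothetical endpoint path; the spec hasn't defined one yet.
  const url = `https://${directoryHost}/fedi-e2ee/v1/keys/${encodeURIComponent(identity)}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Directory lookup failed: ${response.status}`);
  }
  const body = await response.json();
  // A real client must also verify each signature and the SigSum inclusion
  // proofs before trusting any of these keys.
  return body['public-keys'].map(
    (entry: { data: { 'public-key': string } }) => entry.data['public-key']
  );
}
```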
Gossip Between Instances
Directory Servers should be configurable to mirror records from other instances.

Additionally, they should be configurable to serve as Witnesses for the SigSum protocol.
The communication layer here between Directory Servers will also be ActivityPub.
Preventing Abuse
The capability of learning a user’s public key doesn’t imply the ability to send messages or bypass their block list.

Additionally, Fediverse account usernames are (to my knowledge) generally not private, so I don’t anticipate there being any danger in publishing public keys to an append-only ledger.
That said, I am totally open to considering use cases where the actual identity is obfuscated (e.g., HMAC with a static key known only to the instance that hosts them instead of raw usernames).
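A sketch of that idea, assuming Node’s crypto module (the key management and output encoding are illustrative):

```typescript
import { createHmac } from 'node:crypto';

// The instance HMACs each handle under a static secret key before publishing,
// so the ledger commits to an opaque token rather than the raw username.
function obfuscateIdentity(identity: string, instanceSecretKey: Buffer): string {
  return createHmac('sha256', instanceSecretKey).update(identity).digest('hex');
}
```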
What About GDPR / Right To Be Forgotten?
Others have previously suggested that usernames might be subject to the “right to be forgotten”, which would require breaking history for an append-only ledger.

After discussing a proposed workaround with a few people in the Signal group for this project, we realized that complying this way necessarily introduces security issues by giving instance admins the capability of selectively remapping the user ID to different audiences, and that detecting/mitigating this remapping is annoying.
However, we don’t need to do that in the first place.
According to this webpage about GDPR’s Right to be Forgotten:
However, an organization’s right to process someone’s data might override their right to be forgotten. Here are the reasons cited in the GDPR that trump the right to erasure:
- (…)
- The data is being used to perform a task that is being carried out in the public interest or when exercising an organization’s official authority.
- (…)
- The data represents important information that serves the public interest, scientific research, historical research, or statistical purposes and where erasure of the data would likely to impair or halt progress towards the achievement that was the goal of the processing.
Enabling private communication is in the public interest. The only information that will be stored in the ledger in relation to the username are cryptographic public keys, so it’s not like anything personal (e.g., email addresses or legal names) will be included.

However, we still need to be extremely up-front about this to ensure EU citizens are aware of the trade-off we’re making.
Account Recovery
In the event that a user loses access to all of their secret keys and wants to burn down the old account, they may want a way to start over with another fresh, self-signed `AddKey`.

However, the existing policies I wrote above would make this challenging:

- Since every subsequent `AddKey` must be signed by an incumbent key, if you don’t have access to these secret keys, you’re locked out.
- Since `RevokeKey` requires that one trusted keypair remain in the set for normal operations, you can’t just burn the set down to zero, even while you still have access to the secret keys.

There is an easy way out of this mess: Create a new verb, e.g. `BurnDown`, that an instance can issue to reset all signing keys for a given identity.

The use of `BurnDown` should be a rare, exceptional event that makes a lot of noise:
- All existing E2EE sessions must break, loudly.
- All other participants must be alerted to the change, through the client software.
- Witnesses and watchdog nodes must take note of this change.
This comes with some trade-offs. Namely: Any account recovery mechanism is a backdoor, and giving the instance operators the capability of issuing `BurnDown` messages is a risk to their users.

Therefore, users who trust their own security posture and wish to opt out of this recovery feature should also be able to issue a `Fireproof` message at any point in the process, which permanently and irrevocably prevents any `BurnDown` from being accepted on their current instance.

If users opt out of recovery and then lose their signing keys, they’re locked out and need to start over with a new Fediverse identity. On the flipside, their instance operator cannot successfully issue a `BurnDown` for them, so they have to trust that operator less.
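For illustration, a `BurnDown` message could mirror the shape of the other message types. This is my guess at the shape, not a draft from the spec:

```json
{
  "@context": "https://example.com/ns/fedi-e2ee/v1",
  "action": "BurnDown",
  "message": {
    "time": "2024-12-31T23:59:59Z",
    "identity": "foo@mastodon.example.com"
  },
  "signature": "SignatureFromInstance"
}
```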
Notice
This is just a rough sketch of my initial ideas, going into this project. It is not comprehensive, nor complete.

There are probably big gaps that need to be filled in, esp. on the ActivityPub side of things. (I’m not as worried about the cryptography side of things.)
How Will This Be Used for E2EE Direct Messaging?
I anticipate that a small pool of Directory Servers will be necessary, due to only public keys and identities being stored.

Additional changes beyond just the existence of Directory Servers will need to be made to facilitate private messaging. Some of those changes include:
- Some endpoint for users to know which Directory Servers a given ActivityPub instance federates with (if any).
- Some mechanism for users to asynchronously exchange Signed Pre-Key bundles for initiating contact. (One for users to publish new bundles, another for users to retrieve a bundle.)
  - These will be Ed25519-signed payloads containing an ephemeral X25519 public key (a rough sketch of the shape follows below).
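Purely as an illustration of that shape (every field name here is hypothetical; the spec will define the real format):

```typescript
// Hypothetical Signed Pre-Key bundle layout (not from the spec):
type SignedPreKeyBundle = {
  identity: string;        // e.g., "crashdoom@pawb.fun"
  keyId: string;           // which directory-published Ed25519 key signed this
  x25519PublicKey: string; // ephemeral key agreement public key
  expires: string;         // bundles should rotate/expire
  signature: string;       // Ed25519 signature over the fields above
};
```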
This is all outside the scope of the proposal I’m sketching out here today, but it’s worth knowing that I’m aware of the implementation complexity.
The important thing is: I (soatok@furry.engineer) should be able to query pawb.fun, find the Directory Server(s) they federate with, and then query that Directory Server for Crashdoom@pawb.fun and get his currently trusted Ed25519 public keys.

From there, I can query pawb.fun for a SignedPreKey bundle, which will have been signed by one of those public keys.
And then we can return to the “easy” pile.
Development Plan
Okay, so that was a lot of detail, and yet not enough detail, depending on who’s reading this blog post.

What I wrote here today is a very rough sketch. The devil is always in the details, especially with cryptography.
Goals and Non-Goals
We want Fediverse users to be able to publish a public key that is bound to their identity, which anyone else on the Internet can fetch and then use for various purposes.

We want to leverage the cryptography community’s existing work on key transparency.
We don’t want to focus on algorithm agility or protocol compatibility.
We don’t want to involve any government offices in the process. We don’t care about “real” identities, nor about codifying falsehoods about names.
We don’t want any X.509 or Web-of-Trust machinery involved in the process.
Tasks
The first thing we would need to do is write a formal specification for a Directory Server (whose job is only to vend Public Keys in an auditable, transparent manner).

Next, we need to actually build a reference implementation of this server, test it thoroughly, and then have security experts pound at the implementation for a while. Any security issues that can be mitigated by design will require a specification update.
We will NOT punt these down to implementors to be responsible for, unless we cannot avoid doing so.
Once these steps are done, we can start rolling the Directory Servers out. At this point, we can develop client-side libraries in various programming languages to make it easy for developers to adopt.

My continued work on the E2EE specification for the Fediverse can begin after we have an implementation of the Directory Server component ready to go.
Timeline
I have a very demanding couple of months ahead of me, professionally, so I don’t yet know when I can commit to starting the Fediverse Directory Server specification work.

Strictly speaking, it’s vaguely possible to get buy-in from work to focus on this project as part of my day-to-day responsibilities, since it has immediate and lasting value to the Internet.

However, I don’t want to propose it because that would be crossing the professional-personal streams in a way I’m not really comfortable with.
The last thing I need is angry Internet trolls harassing my coworkers to try to get under my fur, y’know?
If there is enough interest from the broader Fediverse community, I’m also happy to delegate this work to anyone interested.

Once the work can begin, I don’t anticipate it will take more than a week for me to write a specification that other crypto nerds will take seriously.
I am confident in this because most of the cryptography will be constrained to hash functions and signatures, plus care to prevent canonicalization and cross-protocol attacks.
Y’know, the sort of thing I write about on my furry blog for fun!
Building a reference implementation will likely take a bit longer, if for no other reason than that I believe it would be best to write it in Go (which has the strongest SigSum support, as of this writing).
This is a lot of words to say that, as far as timelines go, I don’t have a firm one to offer.
How to Get Involved
Regardless of whether my overall E2EE proposal gets adopted, the Directory Server component is something that should be universally useful to the Fediverse and to software developers around the world.

If you are interested in participating in any technical capacity, I have just created a Signal Group for discussing and coordinating efforts.
All of these efforts will also be coordinated on the fedi-e2ee GitHub organization.
The public key directory server’s specification will eventually exist in this GitHub repository.
Can I Contribute Non-Technically?
Yes, absolutely. In the immediate future, once it kicks off, the work is going to be technology-oriented.

However, we may need people with non-technical skills at some point, so feel free to dive in whenever you feel comfortable.
What About Financially?
If you really have money burning a hole in your pocket and want to toss a coin my way, I do have a Ko-Fi. Do not feel pressured at all to do so, however.

Because I only use Ko-Fi as a tip jar, rather than as a business, I’m not specifically tracking which transaction is tied to which project, so I can’t make any specific promises about how any of the money sent my way will be allocated.
What I will promise, however, is that any icons/logos/etc. created for this work will be done by an artist and they will be adequately compensated for their work. I will not use large-scale computing (a.k.a., “Generative AI”) for anything.
Closing Thoughts
What I’ve sketched here is much simpler (and more ActivityPub-centric) than the collaboration I was originally planning.

Thanks for being patient while I tried, in vain, to make that work.
As of today, I no longer think we need to wait for them. We can build this ourselves, for each other.
https://soatok.blog/2024/06/06/towards-federated-key-transparency/
#cryptography #endToEndEncryption #fediverse #KeyTransparency #Mastodon #MerkleTrees #PublicKeys
I don’t consider myself exceptional in any regard, but I stumbled upon a few cryptography vulnerabilities in Matrix’s Olm library with so little effort that it was nearly accidental.
It should not be this easy to find these kinds of issues in any product people purportedly rely on for private messaging, which many people incorrectly evangelize as a Signal alternative.
Later, I thought I identified an additional vulnerability that would have been much worse, but I was wrong about that one. For the sake of transparency and humility, I’ll also describe that in detail.
This post is organized as follows:
- Disclosure Timeline
- Vulnerabilities in Olm (Technical Details)
- Recommendations
- Background Information
- An Interesting Non-Issue That Looked Critical
I’ve opted to front-load the timeline and vulnerability details to respect the time of busy security professionals.
Please keep in mind that this website is a furry blog, first and foremost, that sometimes happens to cover security and cryptography topics.

Many people have, over the years, assumed the opposite and commented accordingly. The ensuing message board threads are usually a waste of time and energy for everyone involved. So please adjust your expectations.
Art by Harubaki
If you’re curious, you can learn more here.
Disclosure Timeline
- 2024-05-15: I took a quick look at the Matrix source code. I identified two issues and emailed them to their `security@` email address. In my email, I specify that I plan to disclose my findings publicly in 90 days (i.e., on August 14), in adherence with industry best practices for coordinated disclosure, unless they request an extension in writing.
- 2024-05-16: I checked something else on a whim and found a third issue, which I also emailed to their `security@` email address.
- 2024-05-17: The Matrix security team confirms receipt of my reports.
- 2024-05-17: I follow up with a suspected fourth finding, the most critical of them all. They point out that it is not actually an issue, because I overlooked an important detail in how the code is architected. Mea culpa!
- 2024-05-18: A friend discloses a separate finding with Matrix: “Media can be decrypted to multiple valid plaintexts using different keys” and “Malicious homeservers can trick Element/Schildichat into revealing links in E2EE rooms”. They instructed the Matrix developers to consult with me if they needed cryptography guidance. I never heard from them on this externally reported issue.
- 2024-07-12: I shared this blog post draft with the Matrix security team while reminding them of the public disclosure date.
- 2024-07-31: Matrix pushes a commit that announces that libolm is deprecated.
- 2024-07-31: I email the Matrix security team asking if they plan to fix the reported issues (and if not, if there’s any other reason I should withhold publication).
- 2024-07-31: Matrix confirms they will not fix these issues (due to its now deprecated status), but ask that I withhold publication until the 14th as originally discussed.
- 2024-08-14: This blog post is publicly disclosed to the Internet.
- 2024-08-14: The lead Matrix dev claims they already knew about these issues, and, in fact, knowingly shipped cryptography code that was vulnerable to side-channel attacks for years. See Addendum.
- 2024-08-23: MITRE has assigned CVE IDs to these three findings.
Vulnerabilities in Olm
I identified the following issues with Olm through a quick skim of their source code on Gitlab:
- AES implementation is vulnerable to cache-timing attacks
- Ed25519 signatures are malleable
- Timing leakage in base64 decoding of private key material
This is sorted by the order in which they were discovered, rather than severity.
AES implementation is vulnerable to cache-timing attacks
a.k.a. CVE-2024-45191
Olm ships a pure-software implementation of AES, rather than leveraging hardware acceleration.
```c
// Substitutes a word using the AES S-Box.
WORD SubWord(WORD word)
{
    unsigned int result;

    result = (int)aes_sbox[(word >> 4) & 0x0000000F][word & 0x0000000F];
    result += (int)aes_sbox[(word >> 12) & 0x0000000F][(word >> 8) & 0x0000000F] << 8;
    result += (int)aes_sbox[(word >> 20) & 0x0000000F][(word >> 16) & 0x0000000F] << 16;
    result += (int)aes_sbox[(word >> 28) & 0x0000000F][(word >> 24) & 0x0000000F] << 24;
    return(result);
}
```
The code in question is called from this code, which is in turn used to actually encrypt messages.
Software implementations of AES that use a look-up table for the SubWord step of the algorithm are famously susceptible to cache-timing attacks.
This kind of vulnerability in software AES was previously used to extract a secret key from OpenSSL and dm-crypt in about 65 milliseconds. Both papers were published in 2005.
A general rule in cryptography is, “attacks only get better; they never get worse”.
As of 2009, you could remotely detect a timing difference of about 15 microseconds over the Internet with under 50,000 samples. Side-channel exploits are generally statistical in nature, so such a sample size is generally not a significant mitigation.
How is this code actually vulnerable?
In the above code snippet, the vulnerability occurs in `aes_sbox[/* ... */][/* ... */]`.

Due to the details of how the AES block cipher works, the input variable (`word`) is a sensitive value.

Software written this way allows attackers to detect whether or not a specific value was present in one of the processor’s caches.

To state the obvious: Cache hits are faster than cache misses. This creates an observable timing difference.

Such a timing leak allows the attacker to learn the value that was actually stored in said cache. You can directly learn this from other processes on the same hardware, but it’s also observable over the Internet (with some jitter) through the normal operation of vulnerable software.
See also: cryptocoding’s description for table look-ups indexed by secret data.
How to mitigate this cryptographic side-channel
The correct way to solve this problem is to use hardware accelerated AES, which uses distinct processor features to implement the AES round function and side-steps any cache-timing shenanigans with the S-box.
Not only is this more secure, but it’s faster and uses less energy too!
If you’re also targeting devices that don’t have hardware acceleration available, you should first use hardware acceleration where possible, but then fall back to a bitsliced implementation such as the one in Thomas Pornin’s BearSSL.
See also: the BearSSL documentation for constant-time AES.
Art by AJ
Ed25519 signatures are malleable
a.k.a. CVE-2024-45193
Ed25519 libraries come in various levels of quality regarding signature validation criteria; much to the chagrin of cryptography engineers everywhere. One of those validation criteria involves signature malleability.
Signature malleability usually isn’t a big deal for most protocols, until suddenly you discover a use case where it is. If it matters, that usually means you’re doing something with cryptocurrency.
Briefly, if your signatures are malleable, that means you can take an existing valid signature for a given message and public key, and generate a second valid signature for the same message. The utility of this flexibility is limited, and the impact depends a lot on how you’re using signatures and what properties you hope to get out of them.
For ECDSA, this means that for a given signature (r, s), a second signature (r, n − s) is also possible (where n is the order of the elliptic curve group you’re working with).
Matrix uses Ed25519, whose malleability is demonstrated between (R, S) and (R, S + L).
This is trivially possible because S is implicitly reduced modulo the order of the curve, L, which is a 253-bit number (0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed), and S is encoded as a 256-bit number.
The Ed25519 library used within Olm does not ensure that S < L, thus signatures are malleable. You can verify this yourself by looking at the Ed25519 verification code.
```c
int ed25519_verify(const unsigned char *signature,
                   const unsigned char *message, size_t message_len,
                   const unsigned char *public_key) {
    unsigned char h[64];
    unsigned char checker[32];
    sha512_context hash;
    ge_p3 A;
    ge_p2 R;

    if (signature[63] & 224) {
        return 0;
    }

    if (ge_frombytes_negate_vartime(&A, public_key) != 0) {
        return 0;
    }

    sha512_init(&hash);
    sha512_update(&hash, signature, 32);
    sha512_update(&hash, public_key, 32);
    sha512_update(&hash, message, message_len);
    sha512_final(&hash, h);
    sc_reduce(h);

    ge_double_scalarmult_vartime(&R, h, &A, signature + 32);
    ge_tobytes(checker, &R);

    if (!consttime_equal(checker, signature)) {
        return 0;
    }

    return 1;
}
```
This is almost certainly a no-impact finding (or low-impact at worst), but still an annoying one to see in 2024.
If you’d like to learn more, this page is a fun demo of Ed25519 malleability.
To mitigate this, I recommend implementing these checks from libsodium.
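As a sketch of the core range check (assuming the usual R || S signature layout with S as a little-endian 32-byte integer; libsodium’s full checks also reject other degenerate cases):

```typescript
// L = the order of the Ed25519 group (a 253-bit prime).
const L = BigInt('0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed');

function sIsCanonical(signature: Uint8Array): boolean {
  if (signature.length !== 64) return false;
  // S occupies bytes 32..63, little-endian: build the integer from the
  // most significant byte downward.
  let s = 0n;
  for (let i = 63; i >= 32; i--) {
    s = (s << 8n) | BigInt(signature[i]);
  }
  return s < L; // reject S >= L: (R, S + L) would otherwise also verify
}
```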
Art: CMYKat
Timing leakage in base64 decoding of private key material
a.k.a. CVE-2024-45192
If you weren’t already tired of cache-timing attacks based on table look-ups from AES, the Matrix base64 code is also susceptible to the same implementation flaw.
```c
while (pos != end) {
    unsigned value = DECODE_BASE64[pos[0] & 0x7F];
    value <<= 6;
    value |= DECODE_BASE64[pos[1] & 0x7F];
    value <<= 6;
    value |= DECODE_BASE64[pos[2] & 0x7F];
    value <<= 6;
    value |= DECODE_BASE64[pos[3] & 0x7F];
    pos += 4;

    output[2] = value;
    value >>= 8;
    output[1] = value;
    value >>= 8;
    output[0] = value;
    output += 3;
}
```
The base64 decoding function in question is used to load the group session key, which means the attack published in this paper almost certainly applies.
How would you mitigate this leakage?
Steve Thomas (one of the judges of the Password Hashing Competition, among other noteworthy contributions) wrote some open source code a while back that implements base64 encoding routines in constant-time.
The really interesting part is how it avoids a table look-up by using arithmetic (from this file):
```c
// Base64 character set:
// [A-Z]      [a-z]      [0-9]      +     /
// 0x41-0x5a, 0x61-0x7a, 0x30-0x39, 0x2b, 0x2f
inline int base64Decode6Bits(char src)
{
    int ch  = (unsigned char) src;
    int ret = -1;

    // if (ch > 0x40 && ch < 0x5b) ret += ch - 0x41 + 1; // -64
    ret += (((0x40 - ch) & (ch - 0x5b)) >> 8) & (ch - 64);

    // if (ch > 0x60 && ch < 0x7b) ret += ch - 0x61 + 26 + 1; // -70
    ret += (((0x60 - ch) & (ch - 0x7b)) >> 8) & (ch - 70);

    // if (ch > 0x2f && ch < 0x3a) ret += ch - 0x30 + 52 + 1; // 5
    ret += (((0x2f - ch) & (ch - 0x3a)) >> 8) & (ch + 5);

    // if (ch == 0x2b) ret += 62 + 1;
    ret += (((0x2a - ch) & (ch - 0x2c)) >> 8) & 63;

    // if (ch == 0x2f) ret += 63 + 1;
    ret += (((0x2e - ch) & (ch - 0x30)) >> 8) & 64;

    return ret;
}
```
Any C library that handles base64 codecs for private key material should use a similar implementation. It’s fine to have a faster base64 implementation for non-secret data.
Worth noting: Libsodium also provides a reasonable Base64 codec.
Recommendations
These issues are not fixed in libolm.
Instead of fixing libolm, the Matrix team recommends all Matrix clients adopt vodozemac.
I can’t speak to the security of vodozemac.
Art: CMYKat
But I can speak against the security of libolm, so moving to vodozemac is probably a good idea. It was audited by Least Authority at one point, so it’s probably fine.
Most Matrix clients that still depend on libolm should treat this blog as public 0day, unless the Matrix security team already notified you about these issues.
Background Information
If you’re curious about the backstory and context of these findings, read on.

Otherwise, feel free to skip this section. It’s not pertinent to most audiences. The people that need to read it already know who they are.
End-to-end encryption is one of the topics within cryptography that I find myself often writing about.
In 2020, I wrote a blog post covering end-to-end encryption for application developers. This was published several months after another blog I wrote covering gripes with AES-GCM, which included a shallow analysis of how Signal uses the algorithm for local storage.
In 2021, I published weaknesses in another so-called private messaging app called Threema.
In 2022, after Elon Musk took over Twitter, I joined the Fediverse and sought to build end-to-end encryption support for direct messages into ActivityPub, starting with a specification. Work on this effort was stalled while trying to solve Public Key distribution in a federated environment (which I hope to pick up soon, but I digress).
Earlier this year, the Telegram CEO started fearmongering about Signal with assistance from Elon Musk, so I wrote a blog post urging the furry fandom to move away from Telegram and start using Signal more. As I had demonstrated years prior, I was familiar with Signal’s code and felt it was a good recommendation for security purposes (even if its user experience needs significant work).
I thought that would be a nice, self-contained blog post. Some might listen, most would ignore it, but I could move on with my life.
I was mistaken about that last point.
Art by AJ
An overwhelming number of people took it upon themselves to recommend or inquire about Matrix, which prompted me to hastily scribble down my opinion on Matrix so that I might copy/paste a link around and save myself a lot of headache.
Just when I thought the firehose was manageable and I could move onto other topics, one of the Matrix developers responded to my opinion post.
Thus, I decided to briefly look at their source code and see if any major or obvious cryptography issues would fall out of a shallow visual scan.
Since you’re reading this post, you already know how that ended.
Credit: CMYKat
Since the first draft of this blog post was penned, I also outlined what I mean when I say an encrypted messaging app is a Signal competitor or not, and published my opinion on XMPP+OMEMO (which people also recommend for private messaging).
Why mention all this?
Because it’s important to know that I have not audited the Olm or Megolm codebases, nor even glanced at their new Rust codebase.
The fact is, I never intended to study Matrix. I was annoyed into looking at it in the first place.
My opinion of their project was already calcified by the previously discovered practically-exploitable cryptographic vulnerabilities in Matrix in 2022.
The bugs described above are the sort of thing I mentally scan for when I first look at a project just to get a feel for the maturity of the codebase. I do this with the expectation (hope, really) of not finding anything at all.
(If you want two specific projects that I’ve subjected to a similar treatment, and failed to discover anything interesting in: Signal and WireGuard. These two set the bar for cryptographic designs.)
It’s absolutely bonkers that an AES cache timing vulnerability was present in their code in 2024.
It’s even worse when you remember that I was inundated with Matrix evangelism in response to recommending furries use Signal.

I’m a little outraged because of how irresponsible this is, in context.
It’s so bad that I didn’t even need to clone their git repository, let alone run basic static analysis tools locally.
So if you take nothing else away from this blog post, let it be this:
There is roughly a 0% chance that I got extremely lucky in my mental `grep` and found the only cryptography implementation flaws in their source code. I barely tried at all and found these issues.
I would bet money on there being more bugs or design flaws that I didn’t find, because this discovery was the result of an extremely half-assed effort to blow off steam.
Wasn’t libolm deprecated in May 2022?
The Matrix developers like to insist that their new Rust hotness “vodozemac” is what people should be using today.
I haven’t looked at vodozemac at all, but let’s pretend, for the sake of argument, that its cryptography is actually secure.
(This is very likely if they turn out to be using RustCrypto for their primitives, but I don’t have the time or energy for that nerd snipe, so I’m not going to look. Least Authority did audit their Rust library, for what it’s worth, and Least Authority isn’t clownshoes.)
It’s been more than 2 years since they released vodozemac. What does the ecosystem penetration for this new library look like, in practice?
A quick survey of the various Matrix clients on GitHub says that libolm is still the most widely used cryptography implementation in the Matrix ecosystem (as of this writing):
| Matrix Client | Cryptography Backend |
|---|---|
| https://github.com/tulir/gomuks | libolm (1, 2) |
| https://github.com/niochat/nio | libolm (1, 2) |
| https://github.com/ulyssa/iamb | vodozemac (1, 2) |
| https://github.com/mirukana/mirage | libolm (1) |
| https://github.com/Pony-House/Client | libolm (1) |
| https://github.com/MTRNord/cetirizine | vodozemac (1) |
| https://github.com/nadams/go-matrixcli | none |
| https://github.com/mustang-im/mustang | libolm (1) |
| https://github.com/marekvospel/libretrix | libolm (1) |
| https://github.com/yusdacra/icy_matrix | none |
| https://github.com/ierho/element | libolm (through the python SDK) |
| https://github.com/mtorials/cordless | none |
| https://github.com/hwipl/nuqql-matrixd | libolm (through the python SDK) |
| https://github.com/maxkratz/element-web | vodozemac (1, 2, 3, 4) |
| https://github.com/asozialesnetzwerk/riot | libolm (wasm file) |
| https://github.com/NotAlexNoyle/Versi | libolm (1, 2) |
3 of the 16 clients surveyed use the new vodozemac library. 10 still use libolm, and 3 don’t appear to implement end-to-end encryption at all.
If we only focus on clients that support E2EE, vodozemac has successfully been adopted by 19% of the open source Matrix clients on GitHub.
I deliberately excluded any repositories that were archived or clearly marked as “old” or “legacy” software, because including those would artificially inflate the representation of libolm. It would make for a more compelling narrative to do so, but I’m not trying to be persuasive here.
Deprecation policies are a beautiful lie. The impact of a vulnerability in Olm or Megolm is still far-reaching, and should be taken seriously by the Matrix community.
Worth calling out: this quick survey, which is based on a GitHub Topic, certainly misses other implementations. Both FluffyChat and Cinny, which were not tagged with this GitHub Topic, depend on a language-specific Olm binding.

These bindings in turn wrap libolm rather than the Rust replacement, vodozemac.
But the official clients…
I thought the whole point of choosing Matrix over something like Signal is to be federated, and run your own third-party clients?
If we’re going to insist that everyone should be using Element if they want to be secure, that defeats the entire marketing point about third-party clients that Matrix evangelists cite when they decry Signal’s centralization.
So I really don’t want to hear it.
An Interesting Non-Issue That Looked Critical
As I mentioned in the timeline at the top, I thought I found a fourth issue with Matrix’s codebase. Had I been correct, this would have been a critical severity finding that the entire Matrix ecosystem would need to melt down to remediate.
Fortunately for everyone, I made a mistake, and there is no fourth vulnerability after all.
However, I thought it would be interesting to write about what I thought I found, the impact it would have had if it were real, and why I believed it to be an issue.
Let’s start with the code in question:
```c
void ed25519_sign(unsigned char *signature,
                  const unsigned char *message, size_t message_len,
                  const unsigned char *public_key,
                  const unsigned char *private_key) {
    sha512_context hash;
    unsigned char hram[64];
    unsigned char r[64];
    ge_p3 R;

    sha512_init(&hash);
    sha512_update(&hash, private_key + 32, 32); /* <-- the line in question */
    sha512_update(&hash, message, message_len);
    sha512_final(&hash, r);

    sc_reduce(r);
    ge_scalarmult_base(&R, r);
    ge_p3_tobytes(signature, &R);

    sha512_init(&hash);
    sha512_update(&hash, signature, 32);
    sha512_update(&hash, public_key, 32);
    sha512_update(&hash, message, message_len);
    sha512_final(&hash, hram);

    sc_reduce(hram);
    sc_muladd(signature + 32, hram, private_key, r);
}
```
The marked line is doing pointer arithmetic: it reads 32 bytes, starting from the 32nd byte in `private_key`.
What’s actually happening here is: `private_key` is the SHA512 hash of a 256-bit seed. If you look at the function prototype, you’ll notice that `public_key` is a separate input.
Virtually every other Ed25519 implementation I’ve ever looked at before expected users to provide a 32 byte seed followed by the public key as a single input.
This led me to believe that this `private_key + 32` pointer arithmetic was actually using the public key for calculating `r`.
The variable `r` (not to be confused with big `R`), generated via the first SHA512, is the nonce for a given signature; it must remain secret for Ed25519 to remain secure.

If `r` is known to an attacker, you can do some arithmetic to recover the secret key from a single signature.
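To spell out that arithmetic (this is standard Ed25519, not anything specific to Olm): the second half of a signature is computed as $S = r + H(R \parallel A \parallel M) \cdot a \pmod{L}$, where $a$ is the secret scalar, $A$ is the public key, and $H$ is SHA-512 reduced mod $L$. If $r$ is ever known, a single signature gives up the secret scalar:

$$a \equiv (S - r) \cdot H(R \parallel A \parallel M)^{-1} \pmod{L}$$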
Because I had mistakenly believed that `r` was calculated from the SHA512 of only public inputs (the public key and message), which I must emphasize isn’t correct, I had falsely concluded that any previously intercepted signature could be used to steal users’ private keys.
Credit: CMYKat
But because `private_key` was actually the full SHA512 hash of the seed, rather than the seed concatenated with the public key, this pointer arithmetic did NOT use the public key for the calculation of `r`, so this vulnerability does not exist.
If the code did what I thought it did, however, this would have been a complete fucking disaster for the Matrix ecosystem. Any previously intercepted message would have allowed an attacker to recover a user’s secret key and impersonate them. It wouldn’t be enough to fix the code; every key in the ecosystem would need to be revoked and rotated.
Whew!
I’m happy to be wrong about this one, because that outcome is a headache nobody wants.
So no action is needed, right?
Well, maybe.
Matrix’s library was not vulnerable, but I honestly wouldn’t put it past software developers at large to somehow, somewhere, use the public key (rather than a secret value) to calculate the EdDSA signature nonces as described in the previous section.
To that end, I would like to propose a test vector be added to the Wycheproof test suite to catch any EdDSA implementation that misuses the public key in this way.
Then, if someone else screws up their Ed25519 implementation in the exact way I thought Matrix was, the Wycheproof tests will catch it.
For example, here’s a vulnerable test input for Ed25519:
{ "should-fail": true, "secret-key": "d1d0ef849f9ec88b4713878442aeebca5c7a43e18883265f7f864a8eaaa56c1ef3dbb3b71132206b81f0f3782c8df417524463d2daa8a7c458775c9af725b3fd", "public-key": "f3dbb3b71132206b81f0f3782c8df417524463d2daa8a7c458775c9af725b3fd", "message": "Test message", "signature": "ffc39da0ce356efb49eb0c08ed0d48a1cadddf17e34f921a8d2732a33b980f4ae32d6f5937a5ed25e03a998e4c4f5910c931b31416e143965e6ce85b0ea93c09"}
A similar test vector would also be worth creating for Ed448, but the only real users of Ed448 were the authors of the xz backdoor, so I didn’t bother with that.
(None of the Project Wycheproof maintainers knew this suggestion was coming, by the way, because I was respecting the terms of the coordinated disclosure.)
Closing Thoughts
Despite finding cryptography implementation flaws in Matrix's Olm library, my personal opinion on Matrix remains largely unchanged from 2022. I had already assumed it would not meet my bar for security.
Cryptography engineering is difficult because the vulnerabilities you’re usually dealing with are extremely subtle. (Here’s an unrelated example if you’re not convinced of this general observation.) As SwiftOnSecurity once wrote:
https://twitter.com/SwiftOnSecurity/status/832058185049579524
The people that developed Olm and Megolm have not proven themselves ready to build a Signal competitor. On balance, most teams are not qualified to do so.
I really wish the Matrix evangelists would accept this and stop trying to cram Matrix down other people’s throats when they’re talking about problems with other platforms entirely.
More importantly, for the communities of messaging apps: You don't need to be a Signal competitor. Having E2EE is a good thing on its own merits, and really should be table stakes for any social application in 2024.
It’s only when people try to advertise their apps as a Signal alternative (or try to recommend it instead of Signal), and offer less security, that I take offense.
Just be your own thing.
My work-in-progress proposal to bring end-to-end encryption to the Fediverse doesn’t aim to compete with Signal. It’s just meant to improve privacy, which is a good thing to do on its own merits.
If I never hear Matrix evangelism again after today, it would be far too soon.
If anyone feels like I’m picking on Matrix, don’t worry: I have far worse things to say about Telegram, Threema, XMPP+OMEMO, Tox, and a myriad other projects that are hungry for Signal’s market share but don’t measure up from a cryptographic security perspective.
If Signal fucked up as bad as these projects, my criticism of Signal would be equally harsh. (And remember, I have looked at Signal before.)
Addendum (2024-08-14)
One of the lead Matrix devs posted a comment on Hacker News after this blog post went live that I will duplicate here:
the author literally picked random projects from github tagged as matrix, without considering their prevalence or whether they are actually maintained etc.

if you actually look at % of impacted clients, it's tiny.

meanwhile, it is very unclear that any sidechannel attack on a libolm based client is practical over the network (which is why we didn't fix this years ago). After all, the limited primitives are commented on in the readme and https://github.com/matrix-org/olm/issues/3 since day 1.
So the Matrix developers already knew about these vulnerabilities, but deliberately didn’t fix them, for years.
Congratulations, you’ve changed my stance. It used to be “I don’t consider Matrix a Signal alternative and they’ve had some embarrassing and impactful crypto bugs but otherwise I don’t care”. Now it’s a stronger stance:
Don’t use Matrix.
I had incorrectly assumed ignorance, when it was in fact negligence.
There's no reasonable world in which anyone should trust the developers of cryptographic software (i.e., libolm) who deliberately ship with side-channels for years, knowing they're present, and never bother to fix them.
This is fucking clownshoes.
https://soatok.blog/2024/08/14/security-issues-in-matrixs-olm-library/
#crypto #cryptography #endToEndEncryption #Matrix #sideChannels #vuln
A lot of recent (and upcoming) blog posts I've written, and Fediverse discussions I've participated in, have been about the security of communication products.

My criticism of these products is simply that, from a cryptography and security perspective, they're not real competitors to Signal.
For all its other faults, Signal sets the bar for secure private messaging. It's a solid security tool, even if its user experience and feature set leave a lot of people disappointed. I highly recommend it over, say, Telegram.
In response to my post about jettisoning Telegram, quite a few people have tried to evangelize other products. For example:
Edit: Oh yeah, DON’T USE SIGNAL. Use Matrix instead, offers the benefits of signal without the drawbacks of lack of sync and phone number requirements and is decentralized. The fact that everyone is going gaga for signal as “the BEST messaging app” should be a big red flag in and of itself, because hype trains like this aren’t organic, just saying.
So, let me explain what it means for a communication product to qualify as a Signal competitor from the perspective of someone whose job involves auditing cryptography implementations.

The Minimum Bar to Clear
Open Source
Every private messaging app must be open source in order to qualify as a Signal competitor.

If it's not open source, it's not even worth talking about.
End-to-End Encryption
Messages MUST be end-to-end encrypted. This means that you encrypt on one participant's device, decrypt on another's, and nobody in the middle can observe plaintext.

When I say MUST, I mean the RFC 2119 keyword.
There must never be a “transmit plaintext” option. No excuses. Secure cryptography is not interoperable with insecure cryptography. If you allow a “transmit plaintext” mode for any reason whatsoever, you have failed to build an encryption product that meets the bar.
This disqualifies Matrix.
This disqualifies Telegram.
This disqualifies XMPP + OMEMO.
This alone disqualifies a lot of so-called private messaging apps.
This doesn’t mean your product is insecure, or that I’m aware of any specific ways to break it.
It just doesn’t occupy the same mindshare as Signal, which only transmits encrypted data and doesn’t have a plaintext protocol to downgrade to.
Therefore, it’s not a goddamn Signal alternative.
How You Encrypt Matters
Signal normalized the use of AES-256-CBC with HMAC-SHA256.

Facebook's "Secret Conversations" feature deviated from this and preferred AES-GCM for attachments, but this bit them when the Invisible Salamanders attack was discovered.
The way Signal uses AES+HMAC is fine for their use case, but building a secure committing AEAD mode (rather than merely AE) out of these primitives is nontrivial.
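To make the shape concrete, here's a deliberately simplified Encrypt-then-MAC sketch in PHP. This is illustrative only, not a vetted construction; it exists to show how many details (independent keys, MACing the IV, constant-time tag comparison) you have to get right before you even think about commitment:

```php
<?php
// Illustrative AES-256-CBC + HMAC-SHA256 in the Encrypt-then-MAC shape.
// $encKey and $macKey MUST be independent 32-byte keys.
function etmEncrypt(string $plaintext, string $encKey, string $macKey): string
{
    $iv = random_bytes(16);
    $ciphertext = openssl_encrypt($plaintext, 'aes-256-cbc', $encKey, OPENSSL_RAW_DATA, $iv);
    // The MAC must cover the IV as well as the ciphertext:
    $tag = hash_hmac('sha256', $iv . $ciphertext, $macKey, true);
    return $iv . $ciphertext . $tag;
}

function etmDecrypt(string $message, string $encKey, string $macKey): string
{
    $iv = substr($message, 0, 16);
    $tag = substr($message, -32);
    $ciphertext = substr($message, 16, -32);
    // Always verify the tag (in constant time) before touching the ciphertext:
    $recalculated = hash_hmac('sha256', $iv . $ciphertext, $macKey, true);
    if (!hash_equals($recalculated, $tag)) {
        throw new RuntimeException('Invalid authentication tag');
    }
    $plaintext = openssl_decrypt($ciphertext, 'aes-256-cbc', $encKey, OPENSSL_RAW_DATA, $iv);
    if ($plaintext === false) {
        throw new RuntimeException('Decryption failed');
    }
    return $plaintext;
}
```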
If you’re aiming to compete with Signal on security, you should, at minimum, expect to engage with a cryptography auditing firm at least once a year to review and re-review your protocol designs and implementations.
I Will Heavily Scrutinize Your Group Messaging Protocols
Group messaging is one of those topics that might sound easy if you can do peer-to-peer messaging securely, but is catastrophically difficult once you get into the details.

See also: My blog post about Threema.
If you want a starting point, look at RFC 9420 (Messaging Layer Security, which is a group key agreement protocol for messaging apps).
How You Manage Keys Matters
Tox attempted to build atop NaCl's crypto_box interface, but this is not suitable for general-purpose secure messaging due to a lack of KCI security.

Key management (which is the focus of an upcoming blog post) is a problem that almost everyone underestimates. It's also the most user-facing aspect of these messaging applications.
WhatsApp uses Key Transparency to scale user trust. I’m proposing something similar for E2EE for the Fediverse.
This is a much better strategy than expecting users to manually verify “fingerprints”.
Don’t look at OpenPGP as a role model when it comes to user experience. Johnny still cannot fucking encrypt.
Your Feature Should Not Bypass Privacy
Want to add all sorts of frills, like video chat or some dumb bullshit with AI and/or blockchain to secure the attention of venture capitalist investors?

You'd better not implement them in such a way that leaks users' messages or search queries to your service.
The main reason Signal is “missing” features is because they are thoughtful about how these features are designed and implemented.
Guess what happens if you prioritize shipping features over privacy and cryptography engineering?
That’s right: You stop being a contender for a Signal alternative.
So What?
If your fave isn't a viable alternative to Signal, don't fucking recommend it to people in response to me recommending Signal.

That's all I ask.
Art: Scruff
But what about…?
I'm not here to discuss your use cases, or usability, or anything else. I'm also not saying that Signal is perfect!

Signal is a private messaging app that I would feel safe recommending that whistleblowers use. It meets all these requirements.
In order to be a Signal competitor, no matter how much you like your app, it needs to meet them too, otherwise it isn’t a Signal competitor. Them’s the rules!
There may be other requirements that are more important to you, that Signal doesn’t meet. That’s fine! You can like other things.
But unless your favorite widget also meets all of the things on this page, it’s not a valid competitor from a security and privacy perspective, and therefore I don’t want to fucking hear about it in response to me saying “use Signal until something better comes along”.
Capiche?
Addendum (2024-08-01)
Since I originally posted this, there have been a lot of opinions expressed and questions asked about messaging apps that have nothing to do with cryptographic security.

Those are good discussions to have, elsewhere. Hell, most of this discussion would be a better response to my other blog post than this one.
The goal of this post was to specify what the minimum bar is for a different app to qualify as a Signal competitor. It’s written from the perspective of someone whose career is in applied cryptography.
If you have thoughts, feelings, opinions, questions, or concerns about messaging apps (including but not limited to Signal), that’s wonderful.
But this specific blog post is not the correct place to voice them!
Especially if the first line of your response is “you’re too focused on [technology, security, cryptography] (select appropriate)”.
Because… no shit? That’s the entire point of this particular post. It’s narrowly scoped for a reason. Please respect that.
My upcoming vulnerability disclosure in Matrix will make the same point, but I wanted a separate, less distracting blog post to link people to when someone tries to evangelize another chat app instead of Signal, especially if they make security claims while doing so.
https://soatok.blog/2024/07/31/what-does-it-mean-to-be-a-signal-competitor/
#cryptography #endToEndEncryption #privateMessengers #Signal
Cryptographers and cryptography engineers love to talk about the latest attacks and how to mitigate them. LadderLeak breaks ECDSA with less than 1 bit of nonce leakage? Raccoon attack brings the Hidden Number attack to finite field Diffie-Hellman in TLS?
And while this sort of research is important and fun, most software developers have much bigger problems to contend with, when it comes to the cryptographic security of their products and services.
So let’s start by talking about Java cryptography.
Art by Khia.
Cryptography in Java
In Java, the way you're supposed to encrypt data using symmetric cryptography is with the javax.crypto.Cipher class. So to encrypt with AES-GCM, you'd call Cipher.getInstance("AES/GCM/NoPadding") and use the resulting object to process your data. javax.crypto.Cipher can be used for a lot of ill-advised modes (including ECB mode) and ciphers (including DES).
Can you guess what class you’d use to encrypt data with RSA in Java?
https://www.youtube.com/watch?v=9nSQs0Gr9FA
That's right! RSA goes in the Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding") hole.

(Or, more likely, Cipher.getInstance("RSA/ECB/PKCS1Padding"), which is not great.)
Art by Khia.
Also, as a reminder: You don’t want to encrypt data with RSA directly. You want to encrypt symmetric keys with RSA, and then encrypt your actual data with those symmetric keys. Preferably using a KEM+DEM paradigm.
Fun anecdote: The naming of RSA/ECB/$padding is misleading, because it doesn't actually implement any sort of ECB mode. However, a few projects over the years missed that memo and decided to implement RSA-ECB so they could be "compatible" with what they thought Java did. That is: They broke long messages into equal-sized chunks (237 bytes for 2048-bit RSA), encrypted them independently, and then concatenated the ciphertexts together.
But it doesn't end there. AES-GCM explodes brilliantly if you ever reuse a nonce. Naturally, the Cipher class shifts all of this burden onto the unwitting developer that calls it, which results in a regurgitated mess that looks like this (from the Java documentation):
```java
GCMParameterSpec s = ...;
cipher.init(..., s);

// If the GCM parameters were generated by the provider, it can
// be retrieved by:
// cipher.getParameters().getParameterSpec(GCMParameterSpec.class);

cipher.updateAAD(...);  // AAD
cipher.update(...);     // Multi-part update
cipher.doFinal(...);    // conclusion of operation

// Use a different IV value for every encryption
byte[] newIv = ...;
s = new GCMParameterSpec(s.getTLen(), newIv);
cipher.init(..., s);
...
```
If you fail to perform this kata perfectly, you’ll introduce a nonce reuse vulnerability into your application.
And if you look a little deeper, you'll also learn that their software implementation of AES (which is used on any platform without hardware AES available, such as Android on older hardware) isn't hardened against cache-timing attacks… although their GHASH implementation is (which implies cache-timing attacks are within their threat model). But that's an implementation problem, not a design problem.
Kludgey, hard-to-use, easy-to-misuse. It doesn’t have to be this way.
Learning From PHP
In 2015, when PHP 7 was being discussed on their mailing list, someone had the brilliant idea of creating a simple, cross-platform, extension-independent interface for getting random bytes and integers.
This effort would become random_bytes() and random_int() in PHP 7. (If you want to see how messy things were before PHP 7, take a look at the appropriate polyfill library.)
However, the initial design for this feature sucked really badly. Imagine the following code snippet:
```php
function makePassword(int $length = 20): string
{
    $password = '';
    for ($i = 0; $i < $length; ++$i) {
        $password .= chr(random_int(33, 124));
    }
    return $password;
}
```
If your operating system's random number generator failed (e.g. you're in a chroot and cannot access /dev/urandom), then the random_int() call would have returned false.

Because of type shenanigans in earlier versions of PHP, chr(false) returns a NUL byte. (This is fixed since 7.4 when strict_types is enabled, but the latest version at the time was PHP 5.6.)
After a heated debate on both the GitHub issue tracker for PHP and the internal mailing lists, the PHP project did the right thing: It will throw an Exception if it cannot safely generate random data.
Exceptions are developer-friendly: If you do not catch the exception, it kills the script immediately. If you decide to catch them, you can handle them in whatever graceful way you prefer. (Pretty error pages are often better than a white page and HTTP 500 status code, after all.)
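For example, a caller of the makePassword() function above can decide its own failure policy:

```php
<?php
// Sketch: if the kernel's CSPRNG is unavailable, random_int() throws
// instead of silently returning weak output. The caller decides what happens.
try {
    $password = makePassword();
} catch (\Exception $ex) {
    // Log it, show a friendly error page, etc.; never proceed with a
    // predictable "random" value.
    http_response_code(500);
    exit('Unable to generate secure random data');
}
```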
Art by Khia.
In version 7.2 of the PHP programming language, they also made another win: Libsodium was promoted as part of the PHP standard library.
However, this feature isn't as good as it could be: It's easy to mix up inputs to the libsodium API, since it expects string arguments instead of dedicated types (X25519SecretKey vs X25519PublicKey). To address this, the open source community has provided PHP libraries that avoid this mistake.
(I bet you weren’t expecting to hear that PHP is doing better with cryptography than Java in 2021, but here we are!)
Art by Khia.
Towards Usable Cryptography Interfaces
How can we do better? At a minimum, we need to internalize Avi Douglen’s rule of usable security.
Security at the expense of usability comes at the expense of security.
I’d like to propose a set of tenets that cryptography libraries can use to self-evaluate the usability of their own designs and implementations. Keep in mind that these are tenets, not a checklist, so the spirit of the law means more than the literal narrowly-scoped interpretation.
1. Follow the Principle of Least Astonishment
Cryptography code should be boring, never astonishing.
For example: If you’re comparing cryptographic outputs, it should always be done in constant-time–even if timing leaks do not help attackers (i.e. in password hashing validation).
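In PHP, for instance, this is a one-function fix; hash_equals() exists precisely for this. (A minimal sketch; $message, $key, and $providedTag are assumed context.)

```php
<?php
// Recompute the expected authentication tag:
$expected = hash_hmac('sha256', $message, $key, true);

// Bad: `===` short-circuits on the first mismatched byte, leaking timing info.
// if ($expected === $providedTag) { ... }

// Good: hash_equals() takes the same time regardless of where the strings differ.
if (hash_equals($expected, $providedTag)) {
    // Tag is valid; proceed.
}
```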
2. Provide High-Level Abstractions
For example: Sealed boxes.
Most developers don’t need to fiddle with RSA and AES to construct their own hybrid public-key encryption designs (as you’d need to with Java). What they really need is a simple way to say, “Encrypt this so that only the recipient can decrypt it, but not the sender.”
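Libsodium ships exactly this abstraction. Here's what it looks like from PHP:

```php
<?php
// Sealed boxes: anonymous public-key encryption where even the sender
// cannot decrypt the result afterwards.
$recipientKeypair = sodium_crypto_box_keypair();
$recipientPublic = sodium_crypto_box_publickey($recipientKeypair);

// The sender only needs the recipient's public key:
$ciphertext = sodium_crypto_box_seal('attorney-client privileged information', $recipientPublic);

// The recipient decrypts with their full keypair:
$plaintext = sodium_crypto_box_seal_open($ciphertext, $recipientKeypair);
var_dump($plaintext !== false); // bool(true)
```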
This requires talking to your users and figuring out what their needs are instead of just assuming you know best.
3. Logically Separate Algorithm Categories
Put simply: Asymmetric cryptography belongs in a different API than symmetric cryptography. A similar separation should probably exist for specialized algorithms (e.g. multi-party computation and Shamir Secret Sharing).
Java may have got this one horribly wrong, but they’re not alone. The JWE specification (RFC 7518) also allows you to encrypt keys with:
- RSA with PKCS#1 v1.5 padding (asymmetric encryption)
- RSA with OAEP padding (asymmetric encryption)
- ECDH (asymmetric key agreement)
- AES-GCM (symmetric encryption)
If your users aren't using a high-level abstraction, at least give them separate APIs for different algorithm types. If nothing else, it saves your users from having to ever type RSA/ECB again.
N.b. this means we should also collectively stop using the simple sign and verify verbs for Symmetric Authentication (i.e. HMAC) when these verbs imply digital signature algorithms (which are inherently asymmetric). Qualified verbs (verify -> sign_verify, mac_verify) are okay here.
4. Use Type-Safe Interfaces
If you allow any arbitrary string or byte array to be passed as the key, IV, etc. in your cryptography library, someone will misuse it.
Instead, you should have dedicated {structs, classes, types} (select appropriate) for each different kind of cryptography key your library expects. These keys should also provide guarantees about their contents (i.e. an Aes256GcmKey is always 32 bytes).
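A minimal PHP sketch of what such a dedicated type might look like (the class name is made up for illustration):

```php
<?php
// The constructor enforces the length invariant exactly once, so every
// function that accepts an Aes256GcmKey can rely on it being 32 bytes.
final class Aes256GcmKey
{
    private string $bytes;

    public function __construct(string $bytes)
    {
        if (strlen($bytes) !== 32) {
            throw new InvalidArgumentException('AES-256-GCM keys must be exactly 32 bytes');
        }
        $this->bytes = $bytes;
    }

    public static function generate(): self
    {
        return new self(random_bytes(32));
    }

    public function getRawBytes(): string
    {
        return $this->bytes;
    }
}
```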
5. Defaults Matter
If you instantiate a new SymmetricCipher class, its default state should be an authenticated mode; never ECB.
The default settings are the ones that 80% of real world users should be using, if not a higher percentage.
6. Reduce Cognitive Load
If you’re encrypting data, why should your users even need to know what a nonce is to use it safely?
You MAY allow an explicit nonce if it makes sense for their application, but if they don’t provide one, generate a random nonce and handle it for them.
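Here's a minimal sketch of what that looks like in PHP with libsodium's XChaCha20-Poly1305 (the encrypt/decrypt function names are mine, for illustration):

```php
<?php
// The nonce is generated internally and prepended to the ciphertext,
// so callers never have to know it exists.
function encrypt(string $plaintext, string $key): string
{
    $nonce = random_bytes(SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
    return $nonce . sodium_crypto_aead_xchacha20poly1305_ietf_encrypt(
        $plaintext,
        '',     // no additional authenticated data in this sketch
        $nonce,
        $key
    );
}

function decrypt(string $ciphertext, string $key): string
{
    $nonce = substr($ciphertext, 0, SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
    $encrypted = substr($ciphertext, SODIUM_CRYPTO_AEAD_XCHACHA20POLY1305_IETF_NPUBBYTES);
    $plaintext = sodium_crypto_aead_xchacha20poly1305_ietf_decrypt($encrypted, '', $nonce, $key);
    if ($plaintext === false) {
        throw new RuntimeException('Decryption failed');
    }
    return $plaintext;
}
```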
Aside: There's an argument that we should have a standard for committing, deterministic authenticated encryption. If you need non-determinism, stick a random 256-bit nonce in the AAD and you get that property too. I like to call this combination AHEAD (Authenticated Hedged Encryption with Associated Data), in the same vein as Hedged Signatures.
The less choices a user has to make to get their code working correctly, the less likely they’ll accidentally introduce a subtle flaw that makes their application hideously insecure.
7. Don’t Fail at Failure
If what you’re doing is sensitive to error oracles (e.g. padding), you have to be very careful about how you fail.
For example: RSA decryption with PKCS#1 v1.5 padding. Doing a constant-time swap between the actual plaintext and some random throwaway value, so that any decryption error surfaces in the symmetric decryption step instead, is better than aborting.
Conversely, if you’re depending on a component to generate randomness for you and it fails, it shouldn’t fail silently and return bad data.
Security is largely about understanding how systems fail, so there’s no one-size-fits-all answer for this. However, the exact failure mechanism for a cryptographic feature should be decided very thoughtfully.
8. Minimize Runtime Negotiation
This is more of a concern for applications than libraries, but it bears mentioning here: The algorithm you’re using shouldn’t be something an attacker can decide. It should be determined at compile-time, not at run-time.
For example: There were several vulnerabilities in JSON Web Tokens where you could swap out the alg header to none (which removed all security) or from RS256 (RSA signed) to HS256… which meant the RSA public key was being used as an HMAC symmetric key. (See tenet 4 above.)
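One defensive pattern is to pin the expected algorithm in your code and reject anything else before any verification happens. A hypothetical PHP sketch (the helper name is mine):

```php
<?php
// Reject tokens whose header doesn't match the algorithm WE chose at
// build time, before any cryptographic verification is attempted.
function assertExpectedAlg(string $jwt, string $expected = 'EdDSA'): void
{
    $parts = explode('.', $jwt);
    if (count($parts) !== 3) {
        throw new InvalidArgumentException('Malformed JWT');
    }
    $header = json_decode(
        sodium_base642bin($parts[0], SODIUM_BASE64_VARIANT_URLSAFE_NO_PADDING),
        true,
        2,
        JSON_THROW_ON_ERROR
    );
    // The attacker controls the header; we only honor our own choice:
    if (!is_array($header) || !hash_equals($expected, (string) ($header['alg'] ?? ''))) {
        throw new RuntimeException('Unexpected JWT algorithm');
    }
}
```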
Header art by Scruff Kerfluff.
https://soatok.blog/2021/02/24/cryptography-interface-design-is-a-security-concern/
#API #APIDesign #crypto #cryptography #cryptographyLibraries #SecurityGuidance
A paper was published on the IACR's ePrint archive yesterday, titled LadderLeak: Breaking ECDSA With Less Than One Bit of Nonce Leakage.

The ensuing discussion on /r/crypto led to several interesting questions that I thought would be worth capturing and answering in detail.
What’s Significant About the LadderLeak Paper?
This is best summarized by Table 1 from the paper.
The sections labeled “This work” are what’s new/significant about this research.
The paper authors were able to optimize existing attacks exploiting one-bit leakages against 192-bit and 160-bit elliptic curves. They were further able to exploit leakages of less than one bit in the same curves.

How Can You Leak Less Than One Bit?
We're used to discrete quantities in computer science, but you can leak less than one bit of information in the case of side-channels.

Biased modular reduction can also create a vulnerable scenario: If you know the probability of a 0 or a 1 in a given position in the bit-string of the one-time number (i.e. the most significant bit) is not 0.5 to 0.5, but some other ratio (e.g. 0.51 to 0.49), then over many samples you can infer the value of specific bits with growing confidence.
If “less than one bit” sounds strange, that’s probably our fault for always rounding up to the nearest bit when we express costs in computer science.
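One way to make "less than one bit" concrete is Shannon entropy. Using the 0.51/0.49 split from above, each observed bit carries slightly less than one full bit of information, and the shortfall is the leakage per sample:

```latex
H = -0.51 \log_2 0.51 - 0.49 \log_2 0.49 \approx 0.99971 \text{ bits}
\qquad
\text{leakage} \approx 1 - H \approx 0.00029 \text{ bits per sample}
```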
What’s the Cost of the Attack?
Consult Table 3 from the paper for empirical cost data:
Table 3 from the LadderLeak paper.

How Devastating is LadderLeak?
First, it assumes a lot of things:
- That you’re using ECDSA with either sect163r1 or secp192r1 (NIST P-192). Breaking larger curves requires more bits of bias (as far as we know).
- That you’re using a cryptography library with cache-timing leaks.
- That you have a way to measure the timing leaks (and not just pilfer the ECDSA secret key; i.e. in a TPM setup). This threat model generally assumes some sort of physical access.
But if you can pull the attack off, you can successfully recover the device's ECDSA secret key, which, for protocols like TLS, allows an attacker to impersonate a certificate-bearer (typically the server)… which is pretty devastating.
Is ECDSA Broken Now?
Non-deterministic ECDSA is not significantly more broken with LadderLeak than it already was by other attacks. LadderLeak does not break the Internet.

Fundamentally, LadderLeak doesn't really change the risk calculus. Bleichenbacher's attack framework for solving the Hidden Number Problem using lattices was already practical, with sufficient samples.
There’s even a CryptoPals challenge about these attacks.
As an acquaintance put it, the authors made a time-memory trade-off with a leaky oracle. It’s a neat result worthy of publication, but we aren’t any minutes closer to midnight with this revelation.
Is ECDSA’s k-value Really a Nonce?
Ehhhhhhhhh, sorta.

It's complicated!
Nonce in cryptography has always meant “number that must be used only once” (typically per key). See: AES-GCM.
Nonces are often confused for initialization vectors (IVs), which in addition to a nonce’s requirements for non-reuse must also be unpredictable. See: AES-CBC.
However, nonces and IVs can both be public, whereas ECDSA k-values MUST NOT be public! If you recover the k-value for a given signature, you can recover the secret key too.
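For the curious, here's that arithmetic in standard ECDSA notation (d is the secret key, z the truncated message hash, n the group order, and (r, s) the signature):

```latex
s \equiv k^{-1}(z + r d) \pmod{n}
\quad\Longrightarrow\quad
d \equiv r^{-1}(s k - z) \pmod{n}
```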
That is to say, ECDSA k-values must be all of the above:
- Never reused
- Unpredictable
- Secret
- Unbiased
They’re really in a class of their own.
For that reason, it’s probably better to think of the k-value as a per-signature key than a simple nonce. (n.b. Many cryptography libraries actually implement them as a one-time ECDSA keypair.)
What’s the Difference Between Random and Unpredictable?
The HMAC-SHA256 output of a message under a secret key is unpredictable for anyone not in possession of said secret key. This value, though unpredictable, is not random, since signing the same message twice yields the same output.

A large random integer when subjected to modular reduction by a non-Mersenne prime of the same magnitude will be biased towards small values. This bias may be negligible, but it makes the bit string that represents the reduced integer more predictable, even though it's random.
What Should We Do? How Should We Respond?
First, don't panic. This is interesting research and its authors deserve to enjoy their moment, but the sky is not falling.

Second, acknowledge that none of the attacks are effective against EdDSA.
If you feel the urge to do something about this attack paper, file a support ticket with all of your third-party vendors and business partners that handle cryptographic secrets to ask them if/when they plan to support EdDSA (especially if FIPS compliance is at all relevant to your work, since EdDSA is coming to FIPS 186-5).
Reason: With increased customer demand for EdDSA, more companies will adopt this digital signature algorithm (which is much more secure against real-world attacks). Thus, we can ensure an improved attack variant that actually breaks ECDSA doesn’t cause the sky to fall and the Internet to be doomed.
(Seriously, I don’t think most companies can overcome their inertia regarding ECDSA to EdDSA migration if their customers never ask for it.)
https://soatok.blog/2020/05/26/learning-from-ladderleak-is-ecdsa-broken/
#crypto #cryptography #digitalSignatureAlgorithm #ECDSA #ellipticCurveCryptography #LadderLeak
In 2017, cryptography researchers from Kudelski Security demonstrated practical fault attacks against EdDSA (specifically Ed25519; RFC 8032).
Their techniques are also applicable to Deterministic ECDSA (RFC 6979), and potentially work against any deterministic signature scheme (n.b. the Fiat-Shamir or Schnorr distinction isn’t meaningful in this context).
Oh no, not fault attacks! (Art by Swizz)
Although that might seem alarming, fault attacks aren’t especially useful for software applications running on general-purpose computers. They’re mostly in the threat models for smartcards and embedded devices.
A recent paper discusses a technique called “hedged” signatures, which I’ve mentioned in A Furry’s Guide to Digital Signature Algorithms.
What is a Hedged Signature?
Let’s just consult the formal definition given by Aranha, et al. in the paper I linked above!
Totally human-readable, right? (Dark mode edit made by me.)
Okay, if you’re not a cryptographer, this is probably clear as mud.
Let’s try a different approach (one that most software engineers will find more intuitive). We’ll start with non-hedged signatures, and then tweak them to become hedged.
Libsodium: Non-Hedged Signatures
Libsodium's crypto_sign_detached() (which implements Ed25519) accepts two arguments:
- The message being signed.
- The secret key held by the signer.
Libsodium's corresponding crypto_sign_verify_detached() accepts three arguments:
- The detached signature.
- The message.
- The public key (corresponds to the secret key from the other function).
Since libsodium uses Ed25519 under-the-hood, the signature algorithm is deterministic: If you sign the same message with the same secret key, it will always produce the same signature.
Don’t believe me? Try it yourself: https://3v4l.org/lKrJb
Dhole Crypto: Hedged Signatures
Last year when I wrote Dhole Cryptography (in PHP and JavaScript), I decided to implement what would later come to be called “hedged signatures” by cryptographers.
Instead of just signing a message, Dhole Crypto first generates a 256-bit per-message nonce and then signs the nonce and the message together. Then, it appends the nonce to the generated signature (and encodes this as one binary-safe string).
That is to say, the major hack is to change a procedure from this:
```php
function sign(string $message, string $secretKey): string
{
    $signature = sodium_crypto_sign_detached(
        $message,
        $secretKey
    );
    return base64_encode($signature);
}
```
…into a procedure that looks like this:
```php
function hsign(string $message, string $secretKey): string
{
    $nonce = random_bytes(32);
    $signature = sodium_crypto_sign_detached(
        $nonce . $message,
        $secretKey
    );
    return base64_encode($signature . $nonce);
}
```
If you pay careful attention to the placement of the nonce in this updated procedure, you'll notice that it's backwards compatible with the original libsodium API for Ed25519: crypto_sign() and crypto_sign_open().
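For completeness, the matching verification procedure could look like this (hverify is a name I'm making up for this sketch; Dhole Crypto's actual internals may differ):

```php
<?php
function hverify(string $message, string $publicKey, string $encoded): bool
{
    $decoded = base64_decode($encoded);
    // First 64 bytes: the Ed25519 signature; next 32 bytes: the hedging nonce.
    $signature = substr($decoded, 0, SODIUM_CRYPTO_SIGN_BYTES);
    $nonce = substr($decoded, SODIUM_CRYPTO_SIGN_BYTES, 32);
    return sodium_crypto_sign_verify_detached(
        $signature,
        $nonce . $message,
        $publicKey
    );
}
```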
Of course, these details are totally abstracted away from the user. Instead, the API looks like this (PHP):
```php
<?php
use Soatok\DholeCrypto\Asymmetric;
use Soatok\DholeCrypto\Key\AsymmetricSecretKey;

// Key generation
$secret = AsymmetricSecretKey::generate();
$public = $secret->getPublicKey();

// Signing a message
$message = "I certify that you have paid your $350 awoo fine";
$sig = Asymmetric::sign($message, $secret);

// Verifying a message and signature
if (!Asymmetric::verify($message, $public, $sig)) {
    die('AWOO FINE UNPAID');
}
```
For JavaScript developers, this may be more intuitive to read:
```javascript
const {
    Asymmetric,
    AsymmetricSecretKey
} = require('dhole-crypto');

(async function () {
    let wolfSecret = await AsymmetricSecretKey.generate();
    let wolfPublic = wolfSecret.getPublicKey();

    let message = "Your $350 awoo fine has been paid UwU";

    let signature = await Asymmetric.sign(message, wolfSecret);
    if (!await Asymmetric.verify(message, wolfPublic, signature)) {
        console.log("Invalid signature. Awoo not authorized.");
    }
})();
```
Do Hedged Signatures Protect Against Fault Attacks?
Sort of. It really depends on the kind of fault attack the attacker uses.
See Section 5 of the paper for a detailed break-down of the provable security of hedged signatures against XEdDSA (a variant of EdDSA used by the Signal protocol; the EdDSA variants specified in RFC 8032 were not studied in that paper).
However, the exploit explored by Kudelski using simple voltage glitches to break EdDSA in an Arduino device does become significantly more difficult with hedged signatures versus classical EdDSA.
Additionally, if you combine the existing techniques for mitigating fault attacks in embedded software with a protocol that uses hedged signatures, you may push the cost of a successful fault attack to become economically impractical for attackers.
However, it’s demonstrable that Hedged Signatures are at least as secure as Deterministic Signatures:
Even if your hedging suffers from a catastrophic randomness failure and generates the same nonce twice, the actual nonce used within Ed25519 will be derived from the SHA-512 hash of this value, the message, and a secret key.
Consequently, the only way for the internal nonce to repeat is for the message to be the same–which is the same scenario as a Deterministic Signature, which doesn’t let attackers steal your secret key.
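Spelled out (where prefix is the secret half of the expanded Ed25519 key, n is our hedging nonce, and m is the actual message):

```latex
r = \operatorname{SHA512}(\mathit{prefix} \parallel n \parallel m)
```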
Safe-by-default cryptography makes my heart shine. Art by Kerijiano.
What Does This All Mean?
Hedged signatures are at least as safe as Deterministic Signatures, and in some scenarios, offer a greater degree of protection than Deterministic Signatures.
Additionally, it’s very easy to convert a Deterministic Signature scheme into a Hedged Signature Scheme: Just add additional randomness that gets signed as part of the message, then append this randomness to the signature (so the signature can be successfully verified later).
Or, if you’re using a programming language that I publish open source software in, you can just use Dhole Cryptography and not worry about these details.
(Header art by Kyume.)
https://soatok.blog/2020/05/03/hedged-signatures-with-libsodium-using-dhole/
#crypto #cryptography #DholeCryptography #digitalSignatureAlgorithm #Ed25519 #EdDSA #hedgedSignatures #JavaScript #libsodium #openSource #PHP
Let’s talk about digital signature algorithms.
Digital signature algorithms are one of the coolest ideas to come out of asymmetric (a.k.a. public-key) cryptography, but they’re so simple and straightforward that most cryptography nerds don’t spend a lot of time thinking about them.
Even though you are more likely to run into a digital signature as a building block (e.g. certificate signatures in TLS) than think about them in isolation (e.g. secure software releases), they’re still really cool and worth learning about.
What’s a Digital Signature?
A digital signature is some string that proves that a specific message was signed by some specific entity in possession of the secret half of an asymmetric key-pair. Digital Signature Algorithms define the process for securely signing and verifying messages with their associated signatures.
For example, if I have the following keypair:
- Secret key: 9080a2c7897faeb8526968161695da0f7b3afa2e8e7d8e8369a85547ab48ea05
- Public key: 482b8d3430445cdad6b5ce59778e09ab59d099120f32d316e881db1a6330390b
I can cryptographically sign the message “Dhole Moments: Never a dull moment!” with the above secret key, and it will generate the signature string: 63629779a31b623486145359c6f1d56602d8d9135e4b17fa2ae3667c8947397decd7ae01bfed08645a429f5dee906e87df4e18eefdfff9acb5b1488c9dec800f.
If you only have the message, signature string, and my public key, you can verify that I signed the message. But, very crucially, you cannot sign messages and convince someone else that they came from me. (With symmetric authentication schemes, such as HMAC, you can.)
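If you want to check that claim yourself, libsodium makes verification a one-liner (assuming the example values above form a genuine Ed25519 triple):

```php
<?php
$publicKey = hex2bin('482b8d3430445cdad6b5ce59778e09ab59d099120f32d316e881db1a6330390b');
$message = 'Dhole Moments: Never a dull moment!';
$signature = hex2bin(
    '63629779a31b623486145359c6f1d56602d8d9135e4b17fa2ae3667c8947397d'
  . 'ecd7ae01bfed08645a429f5dee906e87df4e18eefdfff9acb5b1488c9dec800f'
);
var_dump(sodium_crypto_sign_verify_detached($signature, $message, $publicKey));
// Expect bool(true) if the example triple is consistent
```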
A digital signature algorithm is considered secure if, in order for anyone else to pass off a different message as being signed by me, they would need my secret key to succeed. When this assumption holds true, we say the scheme is secure against existential forgery attacks.
How Do Digital Signatures Work?
Simple answer: They generally combine a cryptographic hash function (e.g. SHA-256) with some asymmetric operation, and the details beyond that are all magic.
More complicated answer: That depends entirely on the algorithm in question!
Art by Swizz
For example, with RSA signatures, you actually encrypt a hash of the message with your secret key to sign the message, and then you RSA-decrypt it with your public key to verify the signature. This is backwards from RSA encryption (where you do the totally sane thing: encrypt with public key, decrypt with secret key).
In contrast, with ECDSA signatures, you’re doing point arithmetic over an elliptic curve (with a per-signature random value).
Yet another class of digital signature algorithms are hash-based signatures, such as SPHINCS+ from the NIST Post-Quantum Cryptography Standardization effort, wherein your internals consist entirely of hash functions (and trees of hash functions, and stream ciphers built with other hash functions).
In all cases, the fundamental principle stays the same: You sign a message with a secret key, and can verify it with a public key.
In the interest of time, I’m not going to dive deep into how each signature algorithm works. That can be the subject of future blog posts (one for each of the algorithms in question).
Quick aside: Cryptographers who stumble across my blog might notice that I deviate from convention a bit. They typically refer to the sensitive half of an asymmetric key pair as a “private key”, but I instead call it a “secret key”.
The main reason for this is that “secret key” can be abbreviated as “sk” and public key can be abbreviated as “pk”, whereas private/public doesn’t share this convenience. If you ever come across their writings and wonder about this discrepancy, I’m breaking away from the norm and their way is more in line with the orthodoxy.
What Algorithms Should I Use?
What algorithm, indeed! (Art by circuitslime)
If you find yourself asking this question, you’re probably dangerously close to rolling your own crypto. If so, you’ll want to hire a cryptographer to make sure your designs aren’t insecure. (It’s extremely easy to design or implement otherwise-secure cryptography in an insecure way.)
Recommended Digital Signature Algorithms
(Update, 2022-05-19): I’ve published a more in-depth treatment of the Elliptic Curve Digital Signature Algorithms a few years after this post was created. A lot of the topics covered by EdDSA and ECDSA are focused on there.
EdDSA: Edwards Curve DSA
EdDSA comes in two variants: Ed25519 (widely supported in a lot of libraries and protocols) and Ed448 (higher security level, but not implemented or supported in as many places).
The IETF standardized EdDSA in RFC 8032, in an effort related to the standardization of RFC 7748 (titled: Elliptic Curves for Security).
Formally, EdDSA is derived from Schnorr signatures and defined over Edwards curves. EdDSA’s design was motivated by the real-world security failures of ECDSA:
- Whereas ECDSA requires a per-signature secret number (k) to protect the secret key, EdDSA derives the per-signature nonce deterministically from a hash of the secret key and message.
- ECDSA with biased nonces can also leak your secret key through lattice attacks. To side-step this, EdDSA uses a hash function twice the size as the prime (i.e. SHA-512 for Ed25519), which guarantees that the distribution of the output of the modular reduction is unbiased (assuming uniform random inputs).
- ECDSA implemented over the NIST Curves is difficult to implement in constant-time: Complicated point arithmetic rules, point division, etc. EdDSA only uses operations that are easy to implement in constant-time.
For a real-world example of why EdDSA is better than ECDSA, look no further than the Minerva attacks, and the Ed25519 designer’s notes on why EdDSA stood up to the attacks.
The security benefits of EdDSA over ECDSA are so vast that FIPS 186-5 is going to include Ed25519 and Ed448.
Hooray for EdDSA adoption even in federal hellscapes.
This is kind of a big deal! The FIPS standards are notoriously slow-moving, and they’re deeply committed to a sunk cost fallacy on algorithms they previously deemed acceptable for real-world deployment.
RFC 6979: Deterministic ECDSA
Despite EdDSA being superior to ECDSA in virtually every way (performance, security, misuse-resistance), a lot of systems still require ECDSA support for the foreseeable future.
If ECDSA is here to stay, we might as well make it suck less in real-world deployments. And that’s exactly what Thomas Pornin did when he wrote RFC 6979: Deterministic Usage of DSA and ECDSA.
(Like EdDSA, Deterministic ECDSA is on its way to FIPS 186-5. Look for it in FIPS-compliant hardware 5 years from now when people actually bother to update their implementations.)
Acceptable Digital Signature Algorithms
ECDSA Signatures
The Elliptic Curve Digital Signature Algorithm (ECDSA) is the incumbent design for signatures. Unlike EdDSA, ECDSA is a more flexible design that has been applied to many different types of curves.
This is more of a curse than a blessing, as Microsoft discovered with CVE-2020-0601: You could take an existing (signature, public key) pair with standard curve, explicitly set the generator point equal to the victim’s public key, and set your secret key to 1, and Windows’s cryptography library would think, “This is fine.”
For this reason, cryptographers were generally wary of proposals to add support for Koblitz curves (including secp256k1–the Bitcoin curve) or Brainpool curves into protocols that are totally fine with NIST P-256 (and maybe NIST P-384 if you need it for compliance reasons).
For that reason, if you can’t use EdDSA or RFC 6979, your fallback option is ECDSA with one of those two curves (secp256r1, secp384r1), and making sure that you have access to a reliable cryptographic random number generator.
RSA Signatures
It’s high time the world stopped using RSA.
Not just for the reasons that Trail of Bits is arguing (which I happen to agree with), but more importantly:
Replacing RSA with EdDSA (or Deterministic ECDSA) also gives teams an opportunity to practice migrating from one cryptography algorithm suite to another, which will probably be a much-needed experience when quantum computers come along and we’re all forced to migrate to post-quantum cryptography.
Encryption is at a bigger risk of being broken by quantum computers than signature schemes: If you encrypt data today, a quantum computer 20 years down the line can decrypt it immediately. Conversely, messages that are signed today cannot be broken until after a quantum computer exists.
That being said, if you only need signatures and not encryption, RSA is still acceptable. If you also need encryption, don’t use RSA for that purpose.
If you can, use PSS padding rather than PKCS#1 v1.5 padding, with SHA-256 or SHA-384. But for signatures (i.e. not encryption), PKCS#1 v1.5 padding is fine.
Dishonorable Mention
DSA Signatures
There’s really no point in using classical DSA, when ECDSA is widely supported and has more ongoing attention from cryptography experts.
If you’re designing a system in 2020 that uses DSA, my only question for you is…
WHYYYYYY?! (Art by Khia)
Upcoming Signature Algorithms
Although it is far too early to consider adopting these yet, cryptographers are working on new designs that protect against wider ranges of real-world threats.
Let’s briefly look at some of them and speculate wildly about what the future looks like. For fun. Don’t use these yet, unless you have a very good reason to do so.
Digital Signature Research Topics
Hedged Signatures
Above, we concluded that EdDSA and Deterministic ECDSA were generally the best choice (and what I’d recommend for software developers). There is one important caveat: Fault attacks.
A fault attack is when you induce a hardware fault into a computer chip, and thereby interfere with the correct functioning of a cryptography algorithm. This is especially relevant to embedded devices and IoT.
The IETF’s CFRG is investigating the use of additional randomization of messages (rather than randomizing signatures) as a safeguard against leaking secret keys through fault injection.
Of course, the Dhole Cryptography Library (my libsodium wrapper for JavaScript and PHP) already provides a form of Hedged Signatures.
If this technique is proven successful at mitigating fault injection attacks, then libsodium users will be able to follow the technique outlined in Dhole Crypto to safeguard their own protocols against fault attacks. Until then, they’re at least as safe as deterministic EdDSA today.
Threshold ECDSA Signatures
Suppose you have a scenario where you want 3-or-more people to have to sign a message before it’s valid. That’s exactly what Threshold ECDSA with Fast Trustless Setup aspires to provide.
Although this is mostly being implemented in cryptocurrency projects today, the cryptography underpinnings are fascinating. At worst, this will be one good side-effect to come from blockchain mania.
Post-Quantum Digital Signatures
Hash-Based Signatures
The best hash-based signature schemes are based on the SPHINCS design for one simple reason: It’s stateless.
In earlier hash-based digital signatures, such as XMSS, you have to maintain a state of which keys you’ve already used, to prevent attacks. Google’s Adam Langley previously described this as a “huge foot-cannon” for security (although probably okay in some environments, such as an HSM).
Lattice-Based Signatures
There are a lot of post-quantum signature algorithm designs defined over lattice groups, but my favorite lattice-based design is called FALCON. FALCON stands for FAst-Fourier Lattice-based COmpact Signatures Over NTRU.
Sign Here, Please
Who knew there would be so much complexity involved with such a simple cryptographic operation? And we didn’t even dive deep on how any of them work.
That’s the problem with cryptography: It’s a fractal of complexity. The more you know about these topics, the deeper the complexity becomes.
But if you’re implementing a protocol today and need a digital signature algorithm, use (in order of preference):
- Ed25519 or Ed448
- ECDSA over NIST P-256 or P-384, with RFC 6979
- ECDSA over NIST P-256 or P-384, without RFC 6979
- RSA (as a last resort)
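If you go with the first option, the happy path is tiny. Here’s a quick libsodium-wrappers sketch (the function names are libsodium’s; the message is just an example):

```typescript
import sodium from 'libsodium-wrappers';

async function main(): Promise<void> {
  await sodium.ready;

  // Generate an Ed25519 keypair.
  const { publicKey, privateKey } = sodium.crypto_sign_keypair();

  // Produce a 64-byte detached signature over the message.
  const message = sodium.from_string('example message');
  const signature = sodium.crypto_sign_detached(message, privateKey);

  // Verify it against the message and public key.
  console.log(
    sodium.crypto_sign_verify_detached(signature, message, publicKey)
  ); // true
}

main();
```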
But most importantly: make sure you have a cryptographer audit your designs.
(Header art by Kyume.)
https://soatok.blog/2020/04/26/a-furrys-guide-to-digital-signature-algorithms/
#crypto #cryptography #DeterministicSignatures #digitalSignatureAlgorithm #ECDSA #Ed25519 #Ed448 #EdDSA #FIPS #FIPS186 #FIPSCompliance #RFC6979 #SecurityGuidance
Earlier this year, Cendyne published A Deep Dive into Ed25519 Signatures, which covered some of the different types of digital signature algorithms, but mostly delved into the Ed25519 algorithm. Truth in advertising.
This got me thinking, “Why isn’t there a better comparison of different elliptic curve signature algorithms available online?”
Art: LvJ
Most people just defer to SafeCurves, but it’s a little dated: We have complete addition formulas for Weierstrass curves now, but SafeCurves doesn’t reflect that.
For the purpose of simplicity, I’m not going to focus on a general treatment of Elliptic Curve Cryptography (ECC), which includes pairing-based cryptography, Elliptic-Curve Diffie-Hellman, and (arguably) isogeny cryptography.
Instead, I’m going to focus entirely on elliptic curve digital signature algorithms.
Note: The content of this post is a bit lower-level than most programmers ever need to be concerned with. If you’re a programmer and interested in learning cryptography, start here. If you’re looking for library recommendations, libsodium is a good safe default.
Compliance Rules Everything Around Me
If you have to meet some arbitrary compliance requirements (i.e. FIPS 140-3, CNSA, etc.), your decision is already made for you, and you shouldn’t waste your time reading blogs like this that will only get your hopes up about the options available to you.
Choose the option your compliance officer demands, and hope it’s good enough.
“Sure, let me check that box.”
Art: LvJ
Elliptic Curves for Signature Algorithms
Let’s start with the same curve Cendyne analyzed: Ed25519.
Ed25519 (EdDSA, Curve25519)
Ed25519 is one of the two digital signature algorithms today that use the EdDSA algorithm framework. The other is Ed448, which targets a higher security level (224-bit vs 128-bit) but is also slower and uses SHAKE256 (which is overkill and not great for performance).
Ed25519 is a safe default choice for most applications where a digital signature is appropriate, for many reasons:
- Ed25519 uses deterministic nonces, which means you’re severely unlikely to ever reproduce the Sony ECDSA k-reuse bug in your system.
The deterministic nonce is calculated from the SHA-512 hash of the secret key and message. Two invocations to crypto_sign_ed25519() with the same message and secret key will produce the same signature, but the intermediate nonce value is never revealed to an attacker.
- Ed25519 includes the public key in the data hashed to produce the signature (more precisely, in the hash that produces s in the (R, s) pair). This offers a property that ECDSA lacks: Exclusive Ownership. I’ve written about this property before.
Without Exclusive Ownership, it’s possible to create a single signature value that’s valid for multiple different (message, public key) pairs.
Years ago, there would have been an additional list item: Ed25519 uses Edwards curves, which have complete addition formulas and are therefore safer to implement in constant-time than Weierstrass curves (i.e. the NIST curves). However, we now have complete addition formulas for Weierstrass curves, so this has become a moot point (assuming your implementation uses complete addition formulas).
Ed25519 targets the 128-bit security level.
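You can observe the deterministic-nonce property directly. A quick sketch with libsodium-wrappers (illustrative only):

```typescript
import sodium from 'libsodium-wrappers';

await sodium.ready;
const { publicKey, privateKey } = sodium.crypto_sign_keypair();
const message = sodium.from_string('same message, same key');

// Ed25519 derives its nonce from a hash of the secret key and message,
// so signing the same message twice yields byte-identical signatures.
const sig1 = sodium.crypto_sign_detached(message, privateKey);
const sig2 = sodium.crypto_sign_detached(message, privateKey);
console.log(sodium.to_hex(sig1) === sodium.to_hex(sig2)); // true
console.log(sodium.crypto_sign_verify_detached(sig1, message, publicKey)); // true
```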
Why Not Use Ed25519?
There is one minor pitfall of Ed25519 that makes it unsuitable for esoteric uses (say, Ring Signature Schemes or zero-knowledge proofs): Ed25519 is not a prime-order group; it has a cofactor h = 8. This detail famously created a double-spend vulnerability in all CryptoNote-based cryptocurrencies (including Monero).
For systems that want the security of Ed25519 and its various well-studied implementations, but still need a prime-order group for their protocol, cryptographers have developed the Ristretto Group to meet your needs.
If you’re working on embedded systems, the determinism inherent to EdDSA might be undesirable due to the possibility of fault attacks. You can use a hedged variant of Ed25519 to mitigate this risk.
Additionally, Ed25519 is not approved for many government applications, although it did make the latest draft revision of FIPS 186 in 2019. If you care about compliance (see above), you cannot use Ed25519. Yet.
A niche Internet meme for cryptography engineers
Guidance for Ed25519
Unless legally prohibited, Ed25519 should be your default choice, unless you need a prime-order group. In that case, build your desired protocol atop Ristretto255.
If you’re not sure if you need a prime-order group, you probably don’t. It’s a specialized requirement for uncommon use cases (ring signatures, password authenticated key exchange protocols, zero-knowledge proofs, etc.).
Art: LvJ
The Bitcoin Curve (ECDSA, secp256k1)
Secp256k1 is a Koblitz curve, a special case of the Weierstrass curve form y^2 = x^3 + ax + b; for secp256k1, a = 0 and b = 7, so the curve equation is y^2 = x^3 + 7. Koblitz curves have extra structure that enables faster arithmetic. This curve is almost exclusively used in cryptocurrency software.
There is no specified reason why Bitcoin chose secp256k1 over another elliptic curve at the time of its inception, but we can speculate:
The author was a pseudonymous contributor to the Metzdowd mailing list for cypherpunks, and probably didn’t trust the NIST curves. Since Ed25519 didn’t exist at the time, the only obvious choice for a hipster elliptic curve parameter selection was to rely on the SECG recommendations, which specify the NIST and Koblitz curves. If you cross the NIST curves off the list, only the Koblitz curves remain.
Therefore, the selection of secp256k1 is likely an artefact of computer history and not a compelling reason to select secp256k1 in new designs. Please look elsewhere.
Fact: Imgflip didn’t have a single secp256k1 meme until I made this one.
Secp256k1 targets the 128-bit security level.
Guidance for secp256k1
Don’t bother; there are better options (i.e. Ed25519).
If you’re writing software for a cryptocurrency-related project, and you feel compelled to use secp256k1 for the sake of reducing your code footprint, please strongly consider the option of burning everything to the proverbial ground.
Cryptocurrency sucks!
Art: Swizz
Cryptocurrency Aside, Why Avoid Secp256k1?
As we noted above, secp256k1 isn’t widely used outside of cryptocurrency.
As a direct consequence of this (as we’ll discuss in the NIST P-256 section), most cryptography libraries don’t offer optimized, side-channel-resistant implementations of secp256k1, even when they do offer optimized implementations of NIST P-256.
(Meanwhile, Ed25519 is designed to be side-channel and misuse-resistant, partly due to its Schnorr construction and constant-time ladder for scalar multiplication, so any library that implements Ed25519 is overwhelmingly likely to be constant-time.)
Therefore, any secp256k1 library for most programming languages that isn’t an FFI wrapper for libsecp256k1 will have worse performance than the other 256-bit curves.
https://twitter.com/bascule/status/1320183684935290882
Additionally, secp256k1 implementations are often a source of exploitable side-channels that permit attackers to pilfer your secret keys.
The previously linked article was about BouncyCastle’s implementation (which covers Java and .NET), but there’s still plenty of secp256k1 implementations that don’t FFI libsecp256k1.
From a quick Google Search:
- Python (uses EEA rather than Binary GCD for modular inverse)
- Go (uses Numbers, which weren’t designed for cryptography)
- PHP (uses GMP, which isn’t constant-time)
- JavaScript (calls here, which uses bn.js, which isn’t constant-time)
If you’re using secp256k1, and you’re not basing your choice on cybercash-interop, you’re playing with fire at the implementation and ecosystem levels, even if there are no security problems with the Koblitz curve itself.
You are much better off choosing any different curve than secp256k1 if you don’t have a Bitcoin/Ethereum/etc. interoperability requirement.
“No thanks, I use Ed25519.”
Art: LvJ
NIST P-256 (ECDSA, secp256r1)
NIST P-256 is the go-to curve to use with ECDSA in the modern era. Unlike Ed25519, P-256 uses a prime-order group, and is an approved algorithm to use in FIPS-validated modules.
Most cryptography libraries offer optimized assembly implementations of NIST P-256, which makes it less likely that your signing operations will leak timing information or become a significant performance bottleneck.
P-256 targets the 128-bit security level.
Why Not Use P-256?
Once upon a time, P-256 was riskier than Ed25519 (for signatures) and X25519 (for Diffie-Hellman), due to the incomplete addition formulas that led to timing-leaky implementations.
If you’re running old software, you may still be vulnerable to timing attacks that can recover your ECDSA secret key. However, there is a good chance that you’re on a modern and secure implementation in 2022, especially if you’re outsourcing this to OpenSSL or its derivatives.
ECDSA requires a secure randomness source to sign data. If you don’t have one available, and you sign anything, you’re coughing up your secret key to any attacker capable of observing multiple signatures.
Guidance for P-256
P-256 is an acceptable choice, especially if you’re forced to cope with FIPS and/or the CNSA suite requirements when using cryptography.
Of course, if you can get away with Ed25519, use Ed25519 instead.
If you use P-256, make sure you’re using it with SHA-256. Some implementations may default to something weaker (e.g. SHA-1).
If you’re also going to be performing ECDH with P-256, make sure you use compressed points. There used to be a patent; it died in 2018.
If you can afford it, make sure you use deterministic ECDSA (RFC 6979) or hedged signatures (if fault attacks are relevant to your threat model).
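As a sketch of what that advice looks like in code, here’s deterministic and hedged ECDSA over P-256 with the @noble/curves package. Treat the API details as my best understanding rather than gospel, and note that extraEntropy is that library’s hedging knob, not part of RFC 6979 itself:

```typescript
import { p256 } from '@noble/curves/p256';
import { sha256 } from '@noble/hashes/sha256';

const privateKey = p256.utils.randomPrivateKey();
const publicKey = p256.getPublicKey(privateKey);
const msgHash = sha256(new TextEncoder().encode('example message'));

// RFC 6979: the nonce is derived from the key and message hash,
// so no RNG is needed at signing time and nonce reuse can't happen.
const deterministic = p256.sign(msgHash, privateKey);

// Hedged: fold fresh randomness into the nonce derivation as a
// defense-in-depth measure against fault attacks.
const hedged = p256.sign(msgHash, privateKey, { extraEntropy: true });

console.log(p256.verify(deterministic, msgHash, publicKey)); // true
console.log(p256.verify(hedged, msgHash, publicKey)); // true
```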
Art: LvJ
NIST P-384 (ECDSA, secp384r1)
NIST P-384 has a larger field than the curves we’ve previously examined, which allows P-384 to target the 192-bit security level. That’s the primary reason why anyone would choose P-384.
Naturally, elliptic curve security is more complicated than merely security against the Elliptic Curve Discrete Logarithm Problem (ECDLP).
P-384 is most often paired with SHA-384, the most widely used member of the SHA-2 family that isn’t susceptible to length-extension attacks. (SHA-384 is itself SHA-512 truncated to 384 bits, computed with a different initialization vector; the explicitly named truncated variants, SHA-512/224 and SHA-512/256, were specified later.)
If you’re aiming to build a “secure-by-default” tool for a system that the US government might one day become a customer of, using NIST P-384 with SHA-384 makes for a reasonably minimalistic bundle of cryptographic primitives.
Why Not Use P-384?
Unlike P-256, most P-384 implementations don’t use constant-time, optimized, and/or formally verified assembly code. (Notable counter-examples: AWS-LC and Go x/crypto.)
Like P-256, P-384 also requires a secure randomness source to sign data. If you aren’t providing one, expect your signing key to end up on fail0verflow one day.
Guidance for P-384
If you use P-384, make sure you’re using it with SHA-384.
The standard NIST curve advice of RFC 6979 and point compression and/or hedged signatures applies here too.
Art: Kyume
NIST P-521 (ECDSA, secp521r1)
Biggest curve is best curve! — the clueless
https://www.youtube.com/watch?v=i_APoSfCYwU
Systems that choose P-521 often have an interesting threat model, even though said threat model is rarely formally specified.
It’s overwhelmingly likely that what eventually breaks the 256-bit elliptic curves will also break P-521 in short order: Cryptography Relevant Quantum Computers.
The only thing P-521 does against CRQCs that P-256 doesn’t is require more quantum memory. If you’re worried about CRQCs, you might want to look into hybrid post-quantum signature schemes.
If you’re choosing P-521 in your designs, you’re basically saying, “I want to have 256 bits of asymmetric cryptographic security, come hell or high water!” even though the 128-bit security level is likely just fine for your actual threats.
Aside: P-521 and 512-bit ECC Security
P-521 is not a typo, although people sometimes think it is. P-521 uses the Mersenne prime 2^521 - 1 instead of a 512-bit near-Mersenne prime.
This has led to an unfortunate trend in cryptography media of mapping ECC key sizes to symmetric security levels in a way that misleads people as to the relationship between the two: for example, plots of ECC key size against “equivalent symmetric security”.
Regrettably, such plots are misleading, because that isn’t how ECDLP security works. The relationship between the exponents involved is strictly linear; it doesn’t suddenly change beyond 384-bit curves for some mysterious mathematical reason.
- 256-bit Curves target the 128-bit security level
- 384-bit Curves target the 192-bit security level
- 512-bit Curves target the 256-bit security level
- 521-bit Curves actually target the 260-bit security level, but that meets or exceeds the 256-bit security level, so that’s how the standards are interpreted
The reason for this boils down entirely to the best known attack against the Elliptic Curve Discrete Logarithm Problem: Pollard’s rho, which recovers the secret key from an n-bit public key (a search space of 2^n) in roughly 2^(n/2) guesses.
Taking the square root of a number is the same as halving its exponent, so the security level is half the curve’s bit length: sqrt(2^n) = 2^(n/2).
Takeaway: If someone tells you that you need a 521-bit curve to meet the 256-bit security level, they are mistaken and it’s not their fault.
Art: Harubaki
Why Not Use P-521?
It’s slow. Much slower than P-256 and Ed25519. Modestly slower than P-384.
Unlike P-384, you’re less likely to find an optimized, constant-time P-521 implementation.
Guidance for P-521
First, make a concerted effort to figure out the motivation for P-521 in your designs. Chances are, someone is putting too much emphasis on the wrong things for security.
If you use P-521, make sure you’re using it with SHA-512.
The standard NIST curve advice of RFC 6979 and point compression and/or hedged signatures applies here too.
Art: LvJ
Ed448 (EdDSA, Curve448)
Ed448 is the P-521 of the Edwards curves: It mostly exists to give standards committees psychological comfort for the unlikely event that 256-bit ECC is desperately broken but ECC larger than 384 bits is somehow still safe.
https://twitter.com/dchest/status/703017144053833728
The very concept of having multiple “security levels” for raw cryptography primitives is mostly an artefact of the historical military roots of cryptography, rather than a serious consideration in the modern world.
Unfortunately, this leads to implementations that prioritize runtime algorithm negotiation, which maximizes the risk of protocol-level vulnerabilities. See also: JWT.
Ed448 was specified to use SHAKE256, a needlessly conservative decision that creates an unnecessary performance bottleneck.
Why Not Use Ed448?
Aside from the performance hit mentioned previously, there’s no compelling reason to avoid Ed448 that isn’t also true of either Ed25519 or P-384.
Guidance for Ed448
If you want more speed, go with Ed25519. In addition to being faster, Ed25519 is also very widely supported.
If you need a prime-order group, use Decaf with Ed448 or consider P-384.
The Brainpool Curves
The main motivation for the Brainpool curves is that the NIST curves were not generated in a “verifiable pseudo-random way”.
The only reasons you’d ever want to support the Brainpool curves include:
- You think the NIST curves are somehow backdoored by the NSA
- You don’t appreciate small attack surfaces in cryptography libraries
- The German government told you to (see: compliance)
Most of the advice for the NIST Curves at each security level can be copy/pasted for the Brainpool curves, with one important caveat:
When considering real-world implementations, Brainpool curves are more likely than the NIST curves to be implemented with general-purpose big-number routines (which aren’t always constant-time) rather than optimized assembly code.
Therefore, my general guidance for the Brainpool curves is simply:
- Proceed at your own peril
- Consider hiring a cryptography engineer to study the implementation you’re relying on, especially with regard to timing attacks
Me when I hear “brainpool”
Art: LvJ
Re-Examining the SafeCurves Criteria
Here’s a 2022 refresh of the SafeCurves criteria for all of the curves considered by this blog post.
- Fields: All relevant curves satisfy the requirements.
- Equations: All relevant curves satisfy the requirements.
- Base Points: All relevant curves satisfy the requirements.
- Rho: All relevant curves satisfy the requirements.
- Transfers: All relevant curves satisfy the requirements.
- Discriminants: Only secp256k1 fails to satisfy the requirements (out of the curves listed in this blog post).
- Rigidity: The NIST curves do not meet this requirement. If you care about whether or not the standards were manipulated to insert a backdoor, rigidity matters to you. Otherwise, it’s not a deal-breaker.
- Ladders: While a Montgomery ladder is beneficial for speed and implementation security, it isn’t strictly speaking required. This is an icing-on-the-cake consideration.
- Twists: The only curve listed above that doesn’t meet the requirement is the 256-bit Brainpool curve (brainpoolp256t1).
- Completeness: All relevant curves satisfy the requirements, as of 2015. SafeCurves is out of date here.
- Indistinguishability: All relevant curves satisfy the requirements, as of 2014.
SafeCurves continues to be a useful resource, especially if you stray from the guidance on this page.
For example: You wouldn’t want to use pairing-friendly curves for general purpose ECC digital signatures, because they’re designed for specialized problems. SafeCurves correctly recommends not using BN(2,254).
However, SafeCurves is showing its age in 2022. BN curves still end up in digital signature protocol standards even though BLS12-381 is clearly a better choice.
The Internet would benefit greatly from an updated SafeCurves that focuses on newer elliptic curve algorithms.
Art: Scruff
TL;DR
Ed25519 is great. NIST P-256 and P-384 are okay (with caveats). Anything else is questionable, and their parameter selection should come with a clear justification.
https://soatok.blog/2022/05/19/guidance-for-choosing-an-elliptic-curve-signature-algorithm-in-2022/
#asymmetricCryptography #BrainpoolCurves #cryptography #digitalSignatureAlgorithm #ECDSA #Ed25519 #Ed448 #EdDSA #ellipticCurveCryptography #P256 #P384 #P521 #secp256k1 #secp256r1 #secp384r1 #secp521r1 #SecurityGuidance
Earlier this year I discussed some noteworthy examples of crackpot cryptography and snake-oil security on this blog.
In this post, I’m going to analyze the claims made by CEW Systems Canada about “Post-Quantum Encryption” and argue that their so-called “bi-symmetric encryption” is another example of the same genre of crackpot cryptography.
https://twitter.com/veorq/status/1159575230970396672
Let’s get the disclaimers out of the way: This post was written solely by some security engineer with a fursona who happens to have a lot of opinions about cryptography. This post is solely the opinion of said author, who also claims to be a blue anthropomorphic dhole, and not the opinion of any employer (especially his).
Credit: Lynx vs Jackalope.
It’s entirely up to you whether or not you want to take me seriously, knowing all of that.
Additionally, by “fraud” I am not speaking in a legal sense, but a colloquial sense.
What Is “Bi-Symmetric Encryption”?
CEW Systems, a Canadian company incorporated in December 2020 by Chad Edward Wanless, claims to have developed a technique called “bi-symmetric encryption”, which they describe as follows:
What exactly is Bi-Symmetric encryption?
Bi-symmetric encryption is an internet communications handshake protocol that uses public/private keys similar to typical asymmetric encryption, but instead uses an underlying symmetric encryption system as the encryption backbone.
(source)
Their FAQ page goes on to claim:
Why is it called Bi-Symmetric?
We chose bi-symmetric encryption as the name because the encryption handshake is a hybrid of both asymmetric and symmetric encryption. It uses public/private keys just as asymmetric encryption does, while using symmetric encryption called CEW encryption as the underlying encryption software routine.
(source)
Ah, what a contradiction! According to this page, bi-symmetric encryption is a handshake protocol that simultaneously:
- Uses public/private keys just as asymmetric encryption does, but
- Uses an underlying symmetric encryption system
But if your underlying encryption for the handshake is entirely symmetric, where do the asymmetric keypairs come in?
Asymmetric cryptography has public/private keypairs because their security is based on a hard computational problem (large integer factorization, the elliptic curve discrete logarithm problem, etc.). You can generally take a private key (or some secret seed that generates both keys) and easily derive its public key, but doing the opposite is prohibitively expensive.
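For instance, here’s what that one-way derivation looks like with libsodium (X25519 and Ed25519 respectively); going from the public key back to the secret key would require solving the discrete logarithm problem:

```typescript
import sodium from 'libsodium-wrappers';

await sodium.ready;

// X25519: the public key is the base point multiplied by the secret
// scalar. Cheap in one direction; infeasible to reverse (ECDLP).
const secret = sodium.randombytes_buf(sodium.crypto_scalarmult_SCALARBYTES);
const xPublicKey = sodium.crypto_scalarmult_base(secret);

// Ed25519: both halves of the keypair derive from a 32-byte seed.
const seed = sodium.randombytes_buf(sodium.crypto_sign_SEEDBYTES);
const { publicKey } = sodium.crypto_sign_seed_keypair(seed);
```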
If you’re only using symmetric cryptography, you don’t have this hard computational problem in the mix, so where do the keypairs come in?
The FAQ goes on to imply that bi-symmetric encryption is resistant to brute-force attack, and then vaguely describes One-Time Passwords (a.k.a. two-factor authentication codes).
Brute force attacks on an ordinary computer work by incrementally testing possible values until the desired output response is found. For example, if a vehicle was locked and a smart device is used to hack it, the brute force attack would start at 0,000,000 and say, the unlock code was 1,234,678, the device would resend the code incrementally advancing the value by 1. The signals would repeat until the correct value was eventually found and the vehicle unlocked. Bi-symmetric software works by using a challenge code and test data combination that changes the unlock code for each attempt. Staring at 0,000,000 and incrementing to 9,999,999 would not unlock the vehicle as the unlock could would always be changing with every attempt.(source)
Even if you’re not a cryptography expert, I hope it’s clear that “synchronized random numbers” (one-time passwords) and “mangling a message so no one else can understand its contents without the key” (symmetric encryption) are totally disparate operations, and not at all interchangeable.
But here’s where things get really funny. After all this confusing and contradictory bullshit, they say this:
Another reason is that asymmetric encryption relies upon a math formula to determine what the private key is by factoring the public key. Bi-symmetric encryption does not mathematically correlate the two, instead one is encrypted by the other.(source)
Yeah, that concept already exists. It’s called envelope encryption, my dudes. There’s nothing magically post-quantum about envelope encryption, and it doesn’t obviate the need for asymmetric cryptography.
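For reference, here’s a bare-bones envelope encryption sketch with libsodium’s secretbox. The key names are illustrative; the point is that it’s ordinary symmetric cryptography layered twice, with nothing post-quantum-magical about it:

```typescript
import sodium from 'libsodium-wrappers';

await sodium.ready;

// Key-encryption key (KEK): long-lived, ideally kept in a KMS or HSM.
const kek = sodium.crypto_secretbox_keygen();

// Data-encryption key (DEK): generated fresh for this message.
const dek = sodium.crypto_secretbox_keygen();

// Encrypt the message under the DEK...
const msgNonce = sodium.randombytes_buf(sodium.crypto_secretbox_NONCEBYTES);
const ciphertext = sodium.crypto_secretbox_easy('attack at dawn', msgNonce, dek);

// ...then "wrap" the DEK under the KEK.
const keyNonce = sodium.randombytes_buf(sodium.crypto_secretbox_NONCEBYTES);
const wrappedDek = sodium.crypto_secretbox_easy(dek, keyNonce, kek);

// Store/transmit: msgNonce, ciphertext, keyNonce, wrappedDek.
```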
And if both keys are symmetric, and you’re communicating them in the clear, then what’s to stop someone from using the algorithm the same way a legitimate user does?
Of course, now that we’ve gotten to the meaty center, the remainder of the FAQ entry is the other half of the bullshit sandwich.
The largest reason asymmetric encryption is vulnerable is that the entire plain text being encrypted is mathematically modified using a math formula.(source)
What are they even talking about?
Credit: Harubaki.
There are a lot of problems with asymmetric encryption. For example: Developers encrypting directly with RSA. This is an antipattern that I’ve complained about before.
But asymmetric encryption isn’t, as a whole, “vulnerable” to anything.
The reason NIST and other standards organizations are focused on post-quantum cryptography is that the currently-deployed asymmetric cryptographic algorithms (RSA, ECC) are broken by a quantum computer (if it ever exists). The solution is to study and standardize better asymmetric algorithms, not throw out the entire class of algorithms, forever.
The fact that quantum computers break RSA and ECC has nothing to do with “the entire plain text being encrypted”, as CEW Systems claims, because that’s generally not what’s actually happening.
If you use TLS 1.2 or 1.3 to connect to a website, one of the following things is happening:
- You have an existing session, no handshake needed.
- Your browser and the webserver use Elliptic Curve Diffie-Hellman to establish a session key. The server’s ephemeral public key is signed by the ECDSA or RSA key in their Certificate, which has been signed by a Certificate Authority independently trusted by the browser and/or operating system you use.
- Your browser encrypts a random value, called the pre-master secret, using the RSA Public Key on the Certificate. The pre-master secret is used by the server to derive the session key for subsequent connections. This doesn’t have forward secrecy like option 2 does, but it’s otherwise secure.
At no point is “the plain text” ever encrypted directly. The ECC option doesn’t even do asymmetric encryption the same way RSA does. ECC is used for key agreement, exclusively.
Understanding the basics of “how cryptography is used” is table stakes for even thinking about inventing your own cryptography, and CEW Systems cannot even clear that bar.
With the under lying encryption of bi-symmetric, each individual letter is modified separately, there is no mathematical link to the entire plain text being encrypted.(source)
https://www.youtube.com/watch?v=QC1WeLyOjj0
The charitable interpretation is that they’re describing a stream cipher, or a block cipher used in Counter Mode.
A more likely interpretation is that they’re encrypting each letter independently in something like ECB mode, which offers no semantic security.
Credit: XKCD
The less charitable interpretation is reinforced by this image included in their FAQ page that archive.org did not successfully capture:
Image taken from this page but with colors inverted to better suit my blog’s theme.
This obsession over big key sizes is oddly reminiscent of the DataGateKeeper scam on KickStarter in 2016.
The about page further cements the insanity of their proposal:
This encryption method is a hybridization of asymmetric public/private keys combined with symmetric encryption to modify each character individually and not the data packets.(source)
Credit: Lynx vs Jackalope.
Moving on…
A great example that demonstrates how bi-symmetric encryption works: If one were to encrypt, for example, a credit card number, a brute force attack would produce every possible credit card number between 0000 0000 0000 0000 and 9999 9999 9999 9999 with no means to determine which output would be the correct value.(source)
This just in! Crackpot company that claims to have solved post-quantum cryptography using only symmetric cryptography also hasn’t heard of authenticated encryption. Film at 11.
Credit: Lynx vs Jackalope
It’s frustrating to read bold claims from someone who flunks the fundamentals.
Credit Card Numbers adhere to the Luhn Algorithm, so an attacker isn’t going to even bother with 90% of the possible card numbers in their brute force range.
(Because of the Luhn Algorithm, there is only one valid checksum value, stored as the last digit, for any given combination of the first 15 digits, which is a 10x speedup in guessing. This mostly exists to detect typos before sending a payment request to the bank. Also not every credit card number is a full 16 digits; they can be as short as 13 or as long as 19.)
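For the curious, the Luhn check is trivial to implement, which is exactly why a brute-forcer would filter candidates with it first. A quick sketch:

```typescript
// Luhn checksum: starting from the rightmost digit, double every second
// digit (subtracting 9 when the result exceeds 9), then sum everything.
// A card number is well-formed only if the total is divisible by 10.
function luhnValid(cardNumber: string): boolean {
  const digits = cardNumber.replace(/\s/g, '').split('').map(Number);
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = digits[digits.length - 1 - i];
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}

console.log(luhnValid('4242 4242 4242 4242')); // true (a well-known test number)
console.log(luhnValid('4242 4242 4242 4241')); // false
```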
Also, for posterity, here’s my actual credit card number, encrypted with a 256-bit random key with a scheme that exists and is widely deployed today (n.b. NOT their snake-oil). You ain’t gonna brute force it.
42a0d7af9ace893289ae4bd86d62c604ab1fa708f1063172777be69511fa01d4af5027ad55a15166b49f6861c825fd026fba00f4eecc1a67
TL;DR
In short, bi-symmetric encryption is the term CEW Systems uses to describe their crackpot cryptographic algorithm that is, allegedly, simultaneously a one-time password algorithm and an envelope encryption algorithm, which involves public/private keys but doesn’t involve an asymmetric mathematical problem that calls for mathematically related keys.
This contradictory and convoluted definition is almost certainly intended to confuse people who don’t understand advanced mathematics while sounding convincing and confident. It’s bullshit, plain and simple.
More Crackpot Claims
If you feel like you haven’t suffered enough, the team behind “bi-symmetric encryption” goes on to make claims about password protection.
Because of course they do.
Password Protection
CEW systems has given great thought to how to protect users’ passwords. As noted in the man-in-the-middle attack, passwords are combined with unique identifying data from users’ computers or smart devices, such as serial numbers, before being modified into encryption keys.(source)
Wrong. So wrong.
Credit: Swizz
Password hashing and password-authenticated key exchanges are an entire discipline that I don’t want to delve into in this post, but passwords are salted and stretched with a computationally difficult symmetric algorithm (usually a password hashing function), especially when they’re being used to derive encryption keys.
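For contrast with CEW’s “serial number” scheme, here’s what deriving an encryption key from a password actually looks like, using libsodium’s Argon2id bindings (the parameters shown are just libsodium’s moderate presets):

```typescript
import sodium from 'libsodium-wrappers';

await sodium.ready;

// A random, per-user salt: stored alongside the account, never reused.
const salt = sodium.randombytes_buf(sodium.crypto_pwhash_SALTBYTES);

// Stretch the password into a 256-bit key with Argon2id.
// The memory/ops limits are what make brute force expensive.
const key = sodium.crypto_pwhash(
  32,
  'correct horse battery staple',
  salt,
  sodium.crypto_pwhash_OPSLIMIT_MODERATE,
  sodium.crypto_pwhash_MEMLIMIT_MODERATE,
  sodium.crypto_pwhash_ALG_ARGON2ID13
);
```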
There are schemes that use TPMs or secure enclaves to produce key material from a given password, but that doesn’t rely on a “serial number” the way they’re implying.
Additionally, CEW systems created a patent pending and copyrighted custom user interface password edit box. This new user interface tool displays a dropdown window that contains “Forgot Password”, “Change Password” buttons and a phishing email warning tip window that informs and reminds users that the only means by which to change the password is through the software they are currently using.(source)
That is a lot of, well, something. Good luck getting a patent awarded on something that almost every corporate intranet has implemented since Hackers came out in 1995.
Patent Pending on this? Prior art, yo.
I’m also deeply curious how they propose to implement account recovery in their systems for when a user forgets their password.
If anyone reading this ever finds themselves making security decisions for a company, warning labels like this are not effective at all. A much better solution to phishing (and weak passwords, while we’re talking about it) is WebAuthn with hardware security keys (i.e. Solo V2).
Establishing Fraud
Hanlon’s Razor is an adage that states, “Never attribute to malice that which is adequately explained by stupidity“.
To call something fraudulent, it’s not sufficient to merely conclude that they have crackpot ideas (which would be stupid), you also have to demonstrate deception (which is a form of malice).
(To be clear: Me calling something fraudulent, in my opinion, isn’t the same bar that the law uses. Do not misconstrue my statements as claims about the legal system. I’m not a lawyer, and I am not making a legal case.)
In the previous section, we’ve looked at enough evidence to justify calling bi-symmetric encryption an instance of crackpot cryptography. But when does it stop being overconfidence and start becoming a grift?
I submit to you that the line of demarcation is when a crackpot attempts to gain money, fame, notoriety, or a reputational lift for their crackpot idea.
To begin, let’s look at some red flags on the CEW Systems website. Then let’s dive a little bit deeper and see what we can dredge up from the Internet.
Credit: Lynx vs Jackalope
Red Flags
CTO Report
The front page of the CEW Systems website claims to have a “Third Party Academic Independent Review” from Dr. Cyril Coupal from Saskatchewan Polytechnic’s Digital Integration Centre of Excellence.
Immediately after this claim, the website states:
Dr. Cyril Coupal’s CTO report currently can be made available to those who have signed a Non-Disclosure Agreement.(source)
Let me tell ya, as a security engineer, I’m used to dealing with Non-Disclosure Agreements. NDAs are almost always a prerequisite for getting contracted to review a company’s source code or conduct a penetration test on their networks.
Almost nobody working in cryptography today would ever sign an NDA in order to read a third-party academic review of any cryptographic system. That’s bonkers.
In fact, you don’t need to sign anything: Simply navigate to Software Tools, then click Papers at the bottom, and you can download it directly from their website.
Here’s a mirrored copy of this “CTO Report” (PDF).
The “How It Works” Page
A common tactic of scammers and frauds is to sponsor a talk at a prestigious conference and then use the film of your talk at said conference to artificially inflate the credibility of your claims.
This is what we saw with TimeAI at Black Hat.
CEW Systems took a different route than Crown Sterling:
They joined with two other companies (Uzado, Terranova Defense) to form the so-called TCU Alliance in February 2020 (source), then invited a Constable from the Toronto Police Department’s E3 Cyber Security Division to deliver a talk and give legitimacy to their accompanying talk (archived).
Interestingly, their page about this TCU Alliance also states:
This alliance came together during 2020; while bidding on government proposals being issued by the Innovation for Defense Excellence and Security (IDEaS) proposals.(source)
This detail alone is sufficient in establishing the financial incentives needed to claim “fraud”. They’re out to win government contracts.
Will This Save America From Cyber-War?
Speaking of Terranova Defense, their Chairperson James Castle wrote an opinion piece (in response to a The Hill article) that claims:
I would be thinking that collaboratively with our quantum proof encryption software, We “COULD” Survive a Cyber War!!(source)
I wish I was making this up.
Time What Is Time?
CEW Systems was incorporated in December 2020. However, their FAQ page states December 2019 and Chad Wanless’s LinkedIn page (PDF) claims 2017. The copyright year on their website states 2023.
If you cannot reasonably establish the history and timeline of the company you’re talking to, they’re probably up to no good.
Is It Really Fraud?
Aside from the abundant red flags, and the establishment of financial incentives, and the convoluted claims about the company’s timeline, the other significant modicum of evidence for fraud isn’t found on the CEW Systems website.
Rather, it’s kind of meta.
The entire reason that I’m writing about this at all is because CEW Systems pitched their crackpot cryptography to a current Dhole Moments reader, which led to me being alerted to the existence of CEW Systems and their so-called “bi-symmetric encryption” in the first place.
Crackpot ideas are stupid; trying to sell your crackpot ideas to others is fraud.
I don’t know if it was desperation or greed, but they tried to sell their crackpot product to an audience with someone attending that was just clueful enough to realize that something’s amiss. If they hadn’t been trying to sell their crackpot ideas, I would never have even heard of them.
When you add those facts together, I can only conclude that bi-symmetric encryption is a fraud being perpetuated by Chad Wanless of CEW Systems in Canada.
What Did Dr. Coupal Reveal?
If you recall, CEW Systems erroneously leaked the same “CTO Report” that they claimed would only be made available to parties that agreed to their Non-Disclosure Agreement.
I’d like to take this opportunity to annotate some of the interesting revelations from Dr. Cyril Coupal’s report. Feel free to skip this section if you aren’t interested.
The Analysis Was “Short”
The introduction of the CTO report states:
CEW Systems Canada Inc. has asked the Saskatchewan Polytechnic Digital Integration Centre of Excellence (DICE) group to perform a short CTO-funded analysis on their Bi-Symmetric Hybrid Encryption System.
I don’t know what “short CTO-funded analysis” (a unique phrase that doesn’t exist online outside the context of CEW Systems) even means, but any short analysis is unlikely to be a careful one.
How the “Encryption” is Achieved
The bottom of page 1 (Overview of the Approach) states:
The encryption itself is achieved by randomly generating keys and interweaving them with portions of unencrypted data to be transmitted, applied to single bytes of data rather than long byte collections.
This is basically how the broken stream cipher, RC4, was designed. There’s not much novel to say about this. RC4 sucked.
Misleading Scale
The top of page 4 contains this gem of deception:
Two things:
- This should really use a logarithmic scale.
- The powers of 2 being discussed are small potatoes. If you’re trying to factor 2048-bit numbers, your X axis needs to extend way past 30.
I’m honestly not sure if this is because the author was in a rush, or if they’re in on the scam. I sent an email and will update this post when I have any further light to shed on this matter.
Trusted Setup Required
Towards the bottom of page 8 (under the heading: What about initial secret exchange and account setup?) states:
Common secrets, known to both server and client, must be exchanged when initial set up of accounts is made. Various methods exist to do this, but most involve the human factor, which is dangerous.
Way to bury the lede! I can already devise and deploy a purely symmetric system that requires pre-shared keys today. That doesn’t make such a system practical or reliable.
Revenge of the Immortal Security Questions
At the top of page 10, Dr. Coupal was kind enough to include a screenshot titled “Forgot Password Example” which shows the breathtaking cluelessness of CEW Systems.
Security questions are an anti-pattern. There are better mechanisms available. Why would anyone intentionally design a new system that uses password-equivalents that users don’t realize are as sensitive as their actual passwords?
It doesn’t matter how you’re encrypting the answers if they can be leaked from third party apps, and are rarely (if ever) revoked.
Cursed User Interface
Just look at this monstrosity.
This is absolutely cursed.
The Smoking Gun
The entire discipline of Cryptography has a tenet called Kerckhoffs’s Principle: a cryptosystem should be secure, even if everything about the system, except the key, is public knowledge.
At the bottom of page 11 of the CTO Report, Dr. Coupal states:
The implementation algorithms cannot be open source.
Knowing the procedures would aid in hacking the keys, therefore, the actual implementation of the algorithms, as well as the algorithms themselves, must be kept secret. The interweaving protocol is not mathematically based, but procedurally based. Of course, the data secrets for each client-server interchange must also be known, which is highly unlikely. CEW has many protocols in place to keep their application code secure. However, this may cause difficulty in obtaining certification by security agencies if they cannot inspect the code for security issues and thoroughness. Finally, it is not currently known how easy it would be to reverse engineer a copy of the executable code.
(Emphasis mine.)
Credit: Lynx vs Jackalope
In Conclusion
While cryptographic snake-oil salesmen aren’t peddling sex lube, they’ll be quick to fuck you just the same.
In this author’s opinion, “Bi-Symmetric Encryption” is a crackpot fraud, just like MyDataAngel, TimeAI, Crown Sterling, and so many other bullshit products and services before them. Don’t give them any money.
This story has a silver lining: Someone who felt something was amiss spoke up and the fraud was thus detected.
As @SwiftOnSecurity is quick to remind us when discussing their history as a Help Desk worker, your users are your first line of defense against security threats. They’ll be exposed to bullshit that you never will. (Chad Wanless never posted a paper on IACR’s ePrint, after all.)
Addendum (2021-10-20)
A now-deleted InfoQ article (which was preserved by Google’s search cache (alternative archive)), written by Cyril M. Coupal, echoes a lot of the same claims as the CEW Systems website.
Credit: Lynx vs Jackalope
I think it’s safe to conclude that Dr. Coupal isn’t a disinterested third party. I have not updated the relevant section to reflect this new evidence.
Sophie Schmieg points out that the factorization graph clearly used trial division rather than an optimized factoring algorithm.
If you look at the graph from Cyril Coupal’s “CTO Report”, it’s pretty clear that he didn’t terminate the algorithm after reaching the square root of the number they were attempting to factor:
When factoring a large integer, if you don’t encounter a factor after reaching the square root of that integer, you know the number is prime. After all, the only way for there to be a factor of N that’s larger than sqrt(N) is if there’s also one smaller than sqrt(N). If you reach sqrt(N), inclusively, and don’t find a factor, your number is prime. You can stop searching.
(Before a mathematician objects: Yes, I know I’m completely ignoring Gaussian Integers which have factors in the complex number space, but no factors in the real space. We’re not talking about those.)
This observation has the effect of doubling the X axis for the curve plotted. Factoring a 32-bit integer should require no more than 65,536 trial divisions.
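For completeness, here’s what trial division with the square-root cutoff looks like; a sketch, not an efficient factoring algorithm:

```typescript
// Trial division that stops at sqrt(n): if no factor is found by then,
// n is prime. For a 32-bit n, that's at most ~65,536 candidate divisors.
function smallestFactor(n: bigint): bigint {
  if (n % 2n === 0n) return 2n;
  for (let d = 3n; d * d <= n; d += 2n) {
    if (n % d === 0n) return d;
  }
  return n; // no factor up to sqrt(n), so n is prime
}

console.log(smallestFactor(4294967291n)); // 4294967291 (2^32 - 5, a prime)
```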
I’m astonished that someone who claims to have a Ph.D in Computer Science doesn’t know this. Makes you wonder about his credentials a bit, doesn’t it?
https://soatok.blog/2021/09/28/the-bi-symmetric-encryption-fraud/
#biSymmetricEncryption #CEWSystems #crackpots #crypto #cryptography #Cybercrime #fraud #security #snakeOil
A few years ago, when the IETF’s Crypto Forum Research Group was deeply entrenched in debates about elliptic curves for security (which eventually culminated in RFC 7748 and RFC 8032), an IT consultant showed up on the mailing list with their homemade cipher, Crystalline.
Mike Hamburg politely informed the consultant that the CFRG isn’t the right forum for proposing new symmetric ciphers, or even new modes for symmetric ciphers, and invited them to email them off-list.
If you’re not familiar with the CFRG, let me just say, this was one of the more patient and measured responses I’ve ever read.
Naturally, the author of Crystalline responded with this:
I’m somewhat disappointed in your reply, as I presumed that someone with a stated interest in ciphers would be eager to investigate anything new to pop up that didn’t have obvious holes in it. It almost sounds like you have had your soul crushed by bureaucracy over the years and have lost all passion for this field.
Full quote available here. It doesn’t get much better.
Really dude? (Art by Khia.)
The discussion continued until Tony Arcieri dropped one of the most brutal takedowns of a cryptographic design in CFRG history.
I think the biggest problem though is all of this has already been pointed out to you repeatedly in other forums and you completely refuse to acknowledge that your cipher fails to meet the absolute most minimum criteria for a secure cipher.
Tony Arcieri, landing a cryptographic 360 no-scope on Crystalline.
In spite of this mic drop moment, the author of Crystalline continued to double down and insist that a symmetric cipher doesn’t need to be indistinguishable from randomness to be secure (which, to severely understate matters, is simply not true).
Normally, when a cipher fails the indistinguishability test, it’s subtle. This is what Crystalline ciphertexts look like.
Data encrypted with Crystalline, provided in the CFRG mailing list.
Modern ciphers produce something that will look like white noise, like an old TV without the cable plugged in. There should be no discernible pattern.
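You can catch a failure this flagrant with something as crude as a byte histogram. Real analysis uses proper statistical test suites, and this sketch assumes a reasonably large ciphertext sample, but it suffices for eyeballing:

```typescript
// Count how often each byte value appears in a ciphertext. For a modern
// cipher's output, every value should occur with roughly equal frequency.
function byteHistogram(ciphertext: Uint8Array): number[] {
  const counts = new Array<number>(256).fill(0);
  for (const b of ciphertext) counts[b]++;
  return counts;
}

// Crude red-flag check: is any byte value wildly over/under-represented?
function looksSuspicious(ciphertext: Uint8Array): boolean {
  const expected = ciphertext.length / 256;
  return byteHistogram(ciphertext).some(
    (count) => count > expected * 2 || count < expected / 2
  );
}
```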
Crystalline’s author remained convinced that Crystalline’s “131072-bit keys” and claims of “information-theoretic security” were compelling enough to warrant consideration by the standards body that keeps the Internet running.
This was in 2015. In the year 2021, I can safely say that Crystalline adoption never really took off.
Against Crackpot Crypto
Instances of Crackpot Cryptography don’t always look like Crystalline. Sometimes the authors are more charismatic, or have more financial resources to bedazzle would-be suckers, er, investors. Other times, they’re less brazen and keep their designs far away from the watchful gaze of expert opinions, lest their mistakes be exposed for all to see.
Crackpot cryptography is considered dangerous, not because we want people to avoid encryption entirely, but because crackpot cryptography offers a false sense of security. This leads to users acting in ways they wouldn’t if they knew there was little-to-no security. Due to the strictly performative nature of these security measures, I also like to call them Security Theater (although that term is more broadly applicable in other contexts).
The Cryptology community has a few defense mechanisms in place to prevent the real-world adoption of crackpot cryptography. More specifically, we have pithy mottos that distill best practices in a way that usually gets the intent across. (Hey, it’s something!) Unfortunately, the rest of the security industry outside of cryptology often weaponizes these mottos to promote useless and harmful gatekeeping.
The best example of this is the, “Don’t roll your own crypto!” motto.
They See Me Rollin’ [My Own Crypto]
Crackpots never adhere to this rule, so anyone who violates it immediately or often, with wild abandon, can be safely dismissed for kooky behavior.
But if taken to its literal, logical extreme, this rule mandates that nobody would ever write cryptographic code and we wouldn’t have cryptography libraries to begin with. So, clearly, it’s a rule meant to be sometimes broken.
This is why some cryptography engineers soften the message a bit and encourage tinkering for the sake of education. The world needs more software engineers qualified to write cryptography.
After all, you wouldn’t expect to hear “Don’t roll your own crypto” being levied against Jason Donenfeld (WireGuard) or Frank Denis (libsodium), despite the fact that both of those people did just that.
But what about a high-level library that defers to libsodium for its actual crypto implementations?
In a twist that surprises no one, lazy heuristics have a high false positive rate. In this case, the lazy heuristics are both “What qualifies as rolling one’s own crypto?” and “When is it safe to break this rule?”
More broadly, though, is that these knee-jerk responses are a misfiring defense mechanism intended to stop quacks from taking all the air out of the room.
It doesn’t always work, though. There have been a few downright absurd instances of crackpot cryptography in the past few years.
Modern Examples of Crypto Crackpottery
Craig Wright’s Sartre Signature Scam
Satoshi Nakamoto is the alias of the anonymous cryptographer that invented Bitcoin. In the years since Satoshi has gone quiet, a few cranks have come out of the woodwork to claim to be the real Satoshi.
Craig Wright is one of the more famous Satoshi impersonators due to his Sartre Signature Scam.
Satoshi’s earliest Bitcoin transactions are public. If you can lift the public key and signature from the transaction and then replay them in a different context as “proof” that you’re Satoshi, you can produce a proof of identity that validates without having to possess Satoshi’s private key. Then you can just wildly claim it’s a signature that validates the text of some philosopher’s prose and a lot of people will believe you.
With a little bit of showmanship added on, you too can convince Gavin Andresen by employing this tactic. (Or maybe not; I imagine he’s learned his lesson by now.)
Time AI
Crown Sterling’s sponsored talk at Black Hat USA 2019 is the most vivid example of crackpot cryptography in most people’s minds.
Even the name “Time AI” just screams buzzword soup, so it should come as no surprise that their talk covered a lot of nonsense: “quasi-prime numbers”, “infinite wave conjugations”, “nano-scale of time”, “speed of AI oscillations”, “unified physics cosmology”, and “multi-dimensional encryption technology”.
Naturally, this pissed a lot of cryptographers off, and the normally even-keeled Dan Guido of Trail of Bits actually called them out on their bullshit during their presentation’s Q&A section.
https://twitter.com/dguido/status/1159579063540805632?lang=en
For most people, the story ended with a bunch of facepalms. But Crown Sterling doubled down and published a press release claiming the ability to break 256-bit RSA keys.
Amusingly, their attack took 50 seconds, which is a lot slower than the standard RSA factoring attacks for small key sizes.
(For those who are missing context: In order to be secure, RSA requires public key sizes in excess of 2048 bits. Breaking 256-bit RSA should take less than a minute on any modern PC.)
Terra Quantum
Earlier this week, Bloomberg news ran a story titled, A Swiss Company Says It Found Weakness That Imperils Encryption. If you only read the first few paragraphs, it’s really clear that the story basically boils down to, “Swiss Company realizes there’s an entire discipline of computer science dedicated to quantum computers and the risks they pose to cryptography.”
Here’s a quick primer on quantum computers and cryptography:
If a practical quantum computer is ever built, it can immediately break all of the asymmetric cryptography used on the Internet today: RSA, DSA, Diffie-Hellman, Elliptic Curve Cryptography, etc. The attack costs vary by algorithm, but Shor’s algorithm runs in polynomial time, which puts every one of them within reach of a working quantum computer.
The jury is still out on whether or not quantum computers will ever be practical. Just in case, a lot of cryptographers are working on post-quantum cryptography (algorithms that are secure even against quantum computers).
Symmetric cryptography fares a lot better: the attack costs are roughly square-rooted by Grover’s algorithm, from 2^n down to 2^(n/2). This makes a 128-bit secure cipher have only a 64-bit security level, which is pretty terrible, but a 256-bit secure cipher remains at the 128-bit security level even with practical quantum computers.
So it’s a little strange that they open with:
The company said that its research found vulnerabilities that affect symmetric encryption ciphers, including the Advanced Encryption Standard, or AES, which is widely used to secure data transmitted over the internet and to encrypt files. Using a method known as quantum annealing, the company said its research found that even the strongest versions of AES encryption may be decipherable by quantum computers that could be available in a few years from now.From the Bloomberg article.
Uh, no.
Let’s do some math: we know roughly how many calculations per second modern classical computers can perform. If we assume that practical quantum computers will be about as fast as classical computers, it’s safe to assume those throughput figures will hold for them as well.
You can break 128-bit ciphers in 2^64 time, using Grover’s algorithm. You can’t break 256-bit ciphers in any practical time, even with the quantum computer speed-up. Most software prefers 256-bit AES over 128-bit AES for this reason.
What does 2^64 time look like?
https://www.youtube.com/watch?v=vWXP3DvH8OQ
In 2012, we could break DES (which has 56-bit keys) in 24 hours with FPGAs dedicated to the task. Since each extra bit of security doubles the search space, we can extrapolate that 64 bits would require 2^(64-56) = 256 times as long, or 256 days.
So even with a quantum computer in hand, you would need to spend several months trying to break a single 128-bit AES key.
(Art by Scruff Kerfluff.)
If this were just one poorly written Bloomberg article put together by someone who vastly misunderstands post-quantum cryptography, Terra Quantum AG wouldn’t require much mention.
But, as with other crackpots before them, Terra Quantum doubled down with yet another press release published on Business Wire. (Archived.)
Terra Quantum realised that the AES is fairly secure against already identified algorithms but may appear fenceless against upcoming threats. To build the defence, Terra Quantum set out to look for a weakness by testing the AES against new algorithms. They Terra Quantum discovered a weakness on the message-digest algorithm MD5.
Okay, so in the time that elapsed between the two press releases, they realized they couldn’t realistically break AES with a quantum computer, but…MD5? MD-fucking-FIVE?! This is a joke right?
“Let’s hype up a hypothetical attack leveraged by quantum computers and then direct it at the most widely broken hash function on Earth.” – Shorter Terra Quantum
(Art by Khia.)
The press release goes on to almost have a moment of self-awareness, but ultimately fails to do so:
The Terra Quantum team found that one can crack an algorithm using a quantum annealer containing about 20,000 qubits. No such annealer exists today, and while it is impossible to predict when it may be created, it is conceivable that such an annealer could become available to hackers in the future.(Emphasis mine.)
Yikes. There’s a lot of bullshit in that sentence, but it only gets zanier from there.
https://twitter.com/boazbaraktcs/status/1359283973789278208
Here’s an actual quote from Terra Quantum’s CTOs, Gordey Lesovik and Valerii Vinokur, about the “solution” to their imaginary problem:
“A new protocol derives from the notion that Quantum Demon is a small beast. The standard approach utilises the concept that the Demon hired by an eavesdropper (Eva) is a King Kong-like hundred kilometres big monster who can successfully use all the transmission line losses to decipher the communication. But since real Quantum Demons are small, Eva has to recruit an army of a billion to successfully collect all the scattered waves leaking from the optical fibre that she needs for efficient deciphering. Terra Quantum proposes an innovative technique utilizing the fact that such an army cannot exist – in accord with the second law of thermodynamics.”
I seriously cannot fucking make this shit up. My fiction writing skills are simply not good enough.
I don’t partake in recreational drugs, but if I did, I’d probably want whatever they’re on.
It’s important to note, at no point does Terra Quantum show their work. No source code or technical papers are published; just a lot of press releases that make exaggerated claims about quantum computers and totally misunderstand post-quantum cryptography.
Takeaways
If you see a press release on Business Wire about cryptography, it’s probably a scam. Real cryptographers publish on ePrint and then in peer-reviewed journals, present their talks at conferences (but not sponsored talks), and exercise radical transparency with all facets of their work.
Publish the source code, Luke!
Cryptography has little patience for swindlers, liars, and egomaniacs. (Although cryptocurrency seems more amenable to those personalities.) That doesn’t stop them from trying, of course.
If you’re reading this blog post and feel like learning about cryptography and cryptanalysis and feel put off by the “don’t roll your own crypto” mantra, and its implied gatekeeping, I hope it’s clear by now who that phrase was mostly intended for and why.
https://soatok.blog/2021/02/09/crackpot-cryptography-and-security-theater/
#asymmetricCryptography #crackpots #CrownSterling #cryptography #kooks #postQuantumCryptography #quantumComputers #scamArtists #scammers #scams #symmetricCryptography #TerraQuantum #TimeAI
A paper was published on the IACR’s ePrint archive yesterday, titled LadderLeak: Breaking ECDSA With Less Than One Bit of Nonce Leakage.
The ensuing discussion on /r/crypto led to several interesting questions that I thought would be worth capturing and answering in detail.
What’s Significant About the LadderLeak Paper?
This is best summarized by Table 1 from the paper.
The sections labeled “This work” are what’s new/significant about this research.
The paper authors were able to optimize existing attacks exploiting one-bit leakages against 192-bit and 160-bit elliptic curves. They were further able to exploit leakages of less than one bit in the same curves.
How Can You Leak Less Than One Bit?
We’re used to discrete quantities in computer science, but you can leak less than one bit of information in the case of side-channels.
Biased modular reduction can also create a vulnerable scenario: if you know that the probability of a 0 or a 1 in a given position of the one-time number’s bit string (e.g. the most significant bit) is not 0.5/0.5 but some other ratio (e.g. 0.51/0.49), then over many samples you can infer the value of that bit with increasing confidence.
If “less than one bit” sounds strange, that’s probably our fault for always rounding up to the nearest bit when we express costs in computer science.
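To make this concrete, here’s a minimal simulation of modular bias. The modulus below is a made-up 160-bit value chosen so the bias shows up quickly; real curve orders sit much closer to a power of two, which is exactly why the leakage is “less than one bit”:

import secrets

# Hypothetical 160-bit group order, deliberately far from 2^160 so the
# bias is visible in few samples. (Illustrative value, not a real curve.)
n = (1 << 160) - (1 << 155)

samples = 200_000
msb_zero = 0
for _ in range(samples):
    k = secrets.randbelow(n)            # one-time number, uniform mod n
    msb_zero += ((k >> 159) & 1) == 0   # top bit of the 160-bit string

# Prints roughly 0.516 instead of 0.500: each sample leaks a fraction
# of a bit about k, which an attacker can accumulate across signatures.
print(msb_zero / samples)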
What’s the Cost of the Attack?
Consult Table 3 from the paper for empirical cost data:
Table 3 from the LadderLeak paper.
How Devastating is LadderLeak?
First, it assumes a lot of things:
- That you’re using ECDSA with either sect163r1 or secp192r1 (NIST P-192). Breaking larger curves requires more bits of bias (as far as we know).
- That you’re using a cryptography library with cache-timing leaks.
- That you have a way to measure the timing leaks (and not just pilfer the ECDSA secret key; i.e. in a TPM setup). This threat model generally assumes some sort of physical access.
But if you can pull the attack off, you can successfully recover the device’s ECDSA secret key. Which, for protocols like TLS, allows an attacker to impersonate a certificate-bearer (typically the server)… which is pretty devastating.
Is ECDSA Broken Now?
Non-deterministic ECDSA is not significantly more broken with LadderLeak than it already was by other attacks. LadderLeak does not break the Internet.
Fundamentally, LadderLeak doesn’t really change the risk calculus. Bleichenbacher’s attack framework for solving the Hidden Number Problem using Lattices was already practical, with sufficient samples.
There’s even a CryptoPals challenge about these attacks.
As an acquaintance put it, the authors made a time-memory trade-off with a leaky oracle. It’s a neat result worthy of publication, but we aren’t any minutes closer to midnight with this revelation.
Is ECDSA’s k-value Really a Nonce?
Ehhhhhhhhh, sorta.
It’s complicated!
Nonce in cryptography has always meant “number that must be used only once” (typically per key). See: AES-GCM.
Nonces are often confused for initialization vectors (IVs), which in addition to a nonce’s requirements for non-reuse must also be unpredictable. See: AES-CBC.
However, nonces and IVs can both be public, whereas ECDSA k-values MUST NOT be public! If you recover the k-value for a given signature, you can recover the secret key too.
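This is easy to demonstrate from the signing equation itself. Since s = k⁻¹(z + r·d) mod n (where z is the message hash, d is the secret key, and n is the curve order), a leaked k hands you d with a little modular arithmetic. A sketch in plain integer math (not a full ECDSA implementation):

def recover_secret_key(r: int, s: int, z: int, k: int, n: int) -> int:
    # Rearranging s = k^-1 * (z + r*d) mod n for the secret key d:
    # d = (s*k - z) * r^-1 mod n  (pow(r, -1, n) needs Python 3.8+)
    return ((s * k - z) * pow(r, -1, n)) % n

The same algebra is why reusing k across two messages is fatal: subtract the two s equations to solve for k, then apply the formula above.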
That is to say, ECDSA k-values must be all of the above:
- Never reused
- Unpredictable
- Secret
- Unbiased
They’re really in a class of their own.
For that reason, it’s probably better to think of the k-value as a per-signature key than a simple nonce. (n.b. Many cryptography libraries actually implement them as a one-time ECDSA keypair.)
What’s the Difference Between Random and Unpredictable?
The HMAC-SHA256 output of a message under a secret key is unpredictable for anyone not in possession of said secret key. This value, though unpredictable, is not random, since signing the same message twice yields the same output.
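You can see this distinction in a few lines of Python:

import hashlib, hmac, secrets

key = secrets.token_bytes(32)
tag1 = hmac.new(key, b"same message", hashlib.sha256).digest()
tag2 = hmac.new(key, b"same message", hashlib.sha256).digest()

assert tag1 == tag2  # deterministic, therefore not random...
# ...yet unpredictable to anyone who doesn't possess `key`.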
A large random integer when subjected to modular reduction by a non-Mersenne prime of the same magnitude will be biased towards small values. This bias may be negligible, but it makes the bit string that represents the reduced integer more predictable, even though it’s random.
What Should We Do? How Should We Respond?
First, don’t panic. This is interesting research and its authors deserve to enjoy their moment, but the sky is not falling.
Second, acknowledge that none of the attacks are effective against EdDSA.
If you feel the urge to do something about this attack paper, file a support ticket with all of your third-party vendors and business partners that handle cryptographic secrets to ask them if/when they plan to support EdDSA (especially if FIPS compliance is at all relevant to your work, since EdDSA is coming to FIPS 186-5).
Reason: With increased customer demand for EdDSA, more companies will adopt this digital signature algorithm (which is much more secure against real-world attacks). Thus, we can ensure an improved attack variant that actually breaks ECDSA doesn’t cause the sky to fall and the Internet to be doomed.
(Seriously, I don’t think most companies can overcome their inertia regarding ECDSA to EdDSA migration if their customers never ask for it.)
https://soatok.blog/2020/05/26/learning-from-ladderleak-is-ecdsa-broken/
#crypto #cryptography #digitalSignatureAlgorithm #ECDSA #ellipticCurveCryptography #LadderLeak
If you’re reading this wondering if you should stop using AES-GCM in some standard protocol (TLS 1.3), the short answer is “No, you’re fine”.
I specialize in secure implementations of cryptography, and my years of experience in this field have led me to dislike AES-GCM.
This post is about why I dislike AES-GCM’s design, not “why AES-GCM is insecure and should be avoided”. AES-GCM is still miles above what most developers reach for when they want to encrypt (e.g. ECB mode or CBC mode). If you want a detailed comparison, read this.
To be clear: This is solely my opinion and not representative of any company or academic institution.
What is AES-GCM?
AES-GCM is an authenticated encryption mode that uses the AES block cipher in counter mode with a polynomial MAC based on Galois field multiplication.
In order to explain why AES-GCM sucks, I have to first explain what I dislike about the AES block cipher. Then, I can describe why I’m filled with sadness every time I see the AES-GCM construction used.
What is AES?
The Advanced Encryption Standard (AES) is a specific subset of a block cipher called Rijndael.
Rijndael’s design is based on a substitution-permutation network, which broke tradition from many block ciphers of its era (including its predecessor, DES) in not using a Feistel network.
AES only includes three flavors of Rijndael: AES-128, AES-192, and AES-256. The difference between these flavors is the size of the key and the number of rounds used, but–and this is often overlooked–not the block size.
As a block cipher, AES always operates on 128-bit (16 byte) blocks of plaintext, regardless of the key size.
This is generally considered acceptable because AES is a secure pseudorandom permutation (PRP), which means that every possible plaintext block maps directly to one ciphertext block, and thus birthday collisions are not possible. (A pseudorandom function (PRF), conversely, does have birthday bound problems.)
Why AES Sucks
Art by Khia.
Side-Channels
The biggest reason why AES sucks is that its design uses a lookup table (called an S-Box) indexed by secret data, which is inherently vulnerable to cache-timing attacks (PDF).
There are workarounds for this AES vulnerability, but they either require hardware acceleration (AES-NI) or a technique called bitslicing.
The short of it is: With AES, you’re either using hardware acceleration, or you have to choose between performance and security. You cannot get fast, constant-time AES without hardware support.
Block Size
AES-128 is considered by experts to have a security level of 128 bits.
Similarly, AES-192 gets certified at 192-bit security, and AES-256 gets 256-bit security.
However, the AES block size is only 128 bits!
That might not sound like a big deal, but it severely limits the constructions you can create out of AES.
Consider the case of AES-CBC, where the output of each block of encryption is combined with the next block of plaintext (using XOR). This is typically used with a random 128-bit block (called the initialization vector, or IV) for the first block.
This means you expect a collision (at 50% probability) after encrypting 2^64 blocks.
When you start getting collisions, you can break CBC mode, as this video demonstrates:
https://www.youtube.com/watch?v=v0IsYNDMV7A
This is significantly smaller than the 2^128 security level you expect from AES.
Post-Quantum Security?
With respect to the number of attempts needed to find the correct key, cryptographers estimate that AES-128 will have a post-quantum security level of 64 bits, AES-192 will have a post-quantum security level of 96 bits, and AES-256 will have a post-quantum security level of 128 bits.
This is because Grover’s quantum search algorithm can search N unsorted items in O(√N) time, which can be used to reduce the total number of possible secrets from 2^n to 2^(n/2). This effectively cuts the security level, expressed in bits, in half.
Note that this heuristic estimate is based on the number of guesses (a time factor), and doesn’t take circuit size into consideration. Grover’s algorithm also doesn’t parallelize well. The real-world security of AES may still be above 100 bits if you consider these nuances.
But remember, even AES-256 operates on 128-bit blocks.
Consequently, for AES-256, there should be approximately 2^128 keys that map any given plaintext block to any given ciphertext block.
Furthermore, there will be many pairs of entirely distinct keys that, for a constant plaintext block, produce the same ciphertext block. (n.b. This doesn’t mean for all plaintext/ciphertext block pairings, just some arbitrary pairing.)
Concrete example: Encrypting a plaintext block consisting of sixteen NUL bytes will yield a specific 128-bit ciphertext exactly once for each given AES-128 key. However, there are 2^128 times as many AES-256 keys as there are possible plaintext/ciphertext blocks. Keep this in mind for AES-GCM.
This means it’s conceivable to accidentally construct a protocol that, despite using AES-256 safely, has a post-quantum security level on par with AES-128, which is only 64 bits.
This would not be nearly as much of a problem if AES’s block size was 256 bits.
Real-World Example: Signal
The Signal messaging app is the state-of-the-art for private communications. If you were previously using PGP and email, you should use Signal instead.
Signal aims to provide private communications (text messaging, voice calls) between two mobile devices, piggybacking on your pre-existing contacts list.
Part of their operational requirements is that they must be user-friendly and secure on a wide range of Android devices, stretching all the way back to Android 4.4.
The Signal Protocol uses AES-CBC + HMAC-SHA256 for message encryption. Each message is encrypted with a different AES key (due to the Double Ratchet), which limits the practical blast radius of a cache-timing attack and makes practical exploitation difficult (since you can’t effectively replay decryption in order to leak bits about the key).
Thus, Signal’s message encryption is still secure even in the presence of vulnerable AES implementations.
Hooray for well-engineered protocols managing to actually protect users.
Art by Swizz.
However, the storage service in the Signal App uses AES-GCM, and this key has to be reused in order for the encrypted storage to operate.
This means, for older phones without dedicated hardware support for AES (i.e. low-priced phones from 2013, which Signal aims to support), the risk of cache-timing attacks is still present.
This is unacceptable!
What this means is, a malicious app that can flush the CPU cache and measure timing with sufficient precision can siphon the AES-GCM key used by Signal to encrypt your storage without ever violating the security boundaries enforced by the Android operating system.
As a result of the security boundaries never being crossed, these kind of side-channel attacks would likely evade forensic analysis, and would therefore be of interest to the malware developers working for nation states.
Of course, if you’re on newer hardware (i.e. Qualcomm Snapdragon 835), you have hardware-accelerated AES available, so it’s probably a moot point.
Why AES-GCM Sucks Even More
AES-GCM is an authenticated encryption mode that also supports additional authenticated data. Cryptographers call these modes AEAD.
AEAD modes are more flexible than simple block ciphers. Generally, your encryption API accepts the following:
- The plaintext message.
- The encryption key.
- A nonce (a number that must only be used once).
- Optional additional data which will be authenticated but not encrypted.
The output of an AEAD function is both the ciphertext and an authentication tag, which is necessary (along with the key and nonce, and optional additional data) to decrypt the plaintext.
Cryptographers almost universally recommend using AEAD modes for symmetric-key data encryption.
That being said, AES-GCM is possibly my least favorite AEAD, and I’ve got good reasons to dislike it beyond simply, “It uses AES”.
The deeper you look into AES-GCM’s design, the harder you will feel this sticker.
GHASH Brittleness
The way AES-GCM is initialized is stupid: You encrypt an all-zero block with your AES key (in ECB mode) and store it in a variable called H. This value is used for authenticating all messages under that AES key, rather than for a given (key, nonce) pair.
Diagram describing Galois/Counter Mode, taken from Wikipedia.
This is often sold as an advantage: Reusing H allows for better performance. However, it makes GCM brittle: Reusing a nonce allows an attacker to recover H and then forge messages forever. This is called the “forbidden attack”, and it has led to real-world practical breaks.
Let’s contrast AES-GCM with the other AEAD mode supported by TLS: ChaCha20-Poly1305, or ChaPoly for short.
ChaPoly uses one-time message authentication keys (derived from each key/nonce pair). If you manage to leak a Poly1305 key, the impact is limited to the messages encrypted under that (ChaCha20 key, nonce) pair.
While that’s still bad, it isn’t “decrypt all messages under that key forever” bad like with AES-GCM.
Note: “Message Authentication” here is symmetric, which only provides a property called message integrity, not sender authenticity. For the latter, you need asymmetric cryptography (wherein the ability to verify a message doesn’t imply the capability to generate a new signature), which is totally disparate from symmetric algorithms like AES or GHASH. You probably don’t need to care about this nuance right now, but it’s good to know in case you’re quizzed on it later.
H Reuse and Multi-User Security
If you recall, AES operates on 128-bit blocks even when 256-bit keys are used.
If we assume AES is well-behaved, we can deduce that there are approximately 2^128 different 256-bit keys that will map a single plaintext block to a single ciphertext block.
This is trivial to calculate. Simply divide the number of possible keys (2^256) by the number of possible block states (2^128) to yield the number of keys that produce a given ciphertext for a single block of plaintext: 2^128.
Each key that will map an arbitrarily specific plaintext block to a specific ciphertext block is also separated in the keyspace by approximately 2^128.
This means there are approximately 2^128 independent keys that will map a given all-zero plaintext block to an arbitrarily chosen value of H (if we assume AES doesn’t have weird biases).
Credit: Harubaki
“Why Does This Matter?”
It means that, with keys larger than 128 bits, you can model the selection of H as a 128-bit pseudorandom function, rather than a 128-bit permutation. As a result, you can expect a collision with 50% probability after only 2^64 different keys are selected.
Note: Your 128-bit randomly generated AES keys already have this probability baked into their selection, but this specific analysis doesn’t really apply for 128-bit keys since AES is a PRP, not a PRF, so there is no “collision” risk. However, you end up at the same upper limit either way.
But 50% isn’t good enough for cryptographic security.
In most real-world systems, we target a 2^-32 collision risk. So that means our safety limit is actually 2^48 different AES keys before you have to worry about H reuse.
This isn’t the same thing as symmetric wear-out (where you need to re-key after a given number of encryptions to prevent nonce reuse). Rather, it means after your entire population has exhausted the safety limit of 2^48 different AES keys, you have to either accept the risk or stop using AES-GCM.
If you have a billion users (2^30), the safety limit is breached after 2^18 AES keys per user (approximately 262,000).
“What Good is H Reuse for Attackers if HF differs?”
There are two numbers used in AES-GCM that are derived from the AES key. H is used for block multiplication, and HF (the encrypted value of the nonce with a counter of 0, from the following diagram) is XORed with the final result to produce the authentication tag.
The arrow highlighted with green is HF.
It’s tempting to think that a reuse of H isn’t a concern because HF will necessarily be randomized, which prevents an attacker from observing when H collides. It’s certainly true that the single-block collision risk discussed previously for H will almost certainly not also result in a collision for HF. And since HF isn’t reused unless a nonce is reused (which also leaks H directly), this might seem like a non-issue.
Art by Khia.
However, it’s straightforward to go from a condition of H reuse to an adaptive chosen-ciphertext attack.
- Intercept multiple valid ciphertexts.
- e.g. Multiple JWTs encrypted with
{"alg":"A256GCM"}
- Use your knowledge of H, the ciphertext, and the AAD to calculate the GCM tag up to the final XOR. This, along with the existing authentication tag, will tell you the value of HF for a given nonce.
- Calculate a new authentication tag for a chosen ciphertext using HF and your candidate H value, then replay it into the target system.
While the blinding offered by XORing the final output with HF is sufficient to stop H from being leaked directly, the protection is one-way.
Ergo, a collision in H is not sufficiently thwarted by HF.
“How Could the Designers Have Prevented This?”
The core issue here is the AES block size, again.
If we were analyzing a 256-bit block variant of AES, and a congruent GCM construction built atop it, none of what I wrote in this section would apply.
However, the 128-bit block size was a design constraint enforced by NIST in the AES competition. This block size was chosen during an era of 64-bit block ciphers (e.g. Triple-DES and Blowfish), so it was a significant improvement at the time.
NIST’s AES competition also inherited from the US government’s tradition of thinking in terms of “security levels”, which is why there are three different permitted key sizes (128, 192, or 256 bits).
“Why Isn’t This a Vulnerability?”
There’s always a significant gap in security, wherein something isn’t safe to recommend, but also isn’t susceptible to a known practical attack. This gap is important to keep systems secure, even when they aren’t on the bleeding edge of security.
Using 1024-bit RSA is a good example of this: No one has yet, to my knowledge, successfully factored a 1024-bit RSA public key. However, most systems have recommended a minimum of 2048 bits for years (and many recommend 3072-bit or 4096-bit today).
With AES-GCM, the expected distance between collisions in H is 2^64, and finding an untargeted collision requires being able to observe more than 2^64 different sessions, and somehow distinguish when H collides.
As a user, you know that after 2^48 different keys, you’ve crossed the safety boundary for avoiding H collisions. But as an attacker, you need 2^64 bites at the apple, not 2^48. Additionally, you need some sort of oracle or distinguisher for when this happens.
We don’t have that kind of distinguisher available to us today. And even if we had one available, the amount of data you would need to sift through in order to find any two users in the population whose H values collide is challenging to work with. You would need the computational and data storage capabilities of a major cloud service provider to even think about pulling the attack off.
Naturally, this isn’t a practical vulnerability. This is just another gripe I have with AES-GCM, as someone who has to work with cryptographic algorithms a lot.
Short Nonces
Although the AES block size is 16 bytes, AES-GCM nonces are only 12 bytes. The latter 4 bytes are dedicated to an internal counter, which is used with AES in Counter Mode to actually encrypt/decrypt messages.
(Yes, you can use arbitrary length nonces with AES-GCM, but if you use nonces longer than 12 bytes, they get hashed into 12 bytes anyway, so it’s not a detail most people should concern themselves with.)
If you ask a cryptographer, “How much can I encrypt safely with AES-GCM?” you’ll get two different answers.
- Message Length Limit: AES-GCM can be used to encrypt messages up to nearly 2^36 bytes (64 GiB) long, under a given (key, nonce) pair.
- Number of Messages Limit: If you generate your nonces randomly, you have a 50% chance of a nonce collision after 2^48 messages.
However, 50% isn’t conservative enough for most systems, so the safety margin is usually much lower. Cryptographers generally set the key wear-out of AES-GCM at 2^32 random nonces, which represents a collision probability of about one in 4 billion.
These limits are acceptable for session keys for encryption-in-transit, but they impose serious operational limits on application-layer encryption with long-term keys.
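If you want to sanity-check these numbers yourself, the birthday-bound approximation p ≈ m² / 2^(d+1) for m random d-bit nonces is enough (it’s only meaningful while p is small; near the 50% mark it’s a rough indicator, and different approximations land within a factor of two of the “one in 4 billion” figure above):

def nonce_collision_probability(messages: int, nonce_bits: int) -> float:
    # Birthday-bound approximation: p ~= m^2 / 2^(d+1)
    return messages ** 2 / 2 ** (nonce_bits + 1)

print(nonce_collision_probability(2 ** 48, 96))  # ~0.5: the 50% bound
print(nonce_collision_probability(2 ** 32, 96))  # ~2^-33: the wear-out margin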
Random Key Robustness
Before the advent of AEAD modes, cryptographers used to combine block cipher modes of operation (e.g. AES-CBC, AES-CTR) with a separate message authentication code algorithm (e.g. HMAC, CBC-MAC).
You had to be careful in how you composed your protocol, lest you invite Cryptographic Doom into your life. A lot of developers screwed this up. Standardized AEAD modes promised to make life easier.
Many developers gained their intuition for authenticated encryption modes from protocols like Signal’s (which combines AES-CBC with HMAC-SHA256), and would expect AES-GCM to be a drop-in replacement.
Unfortunately, GMAC doesn’t offer the same security benefits as HMAC: Finding a different (ciphertext, HMAC key) pair that produces the same authentication tag is a hard problem, due to HMAC’s reliance on cryptographic hash functions. This makes HMAC-based constructions “message committing”, which instills Random Key Robustness.
Critically, AES-GCM doesn’t have this property. You can calculate a random (ciphertext, key) pair that collides with a given authentication tag very easily.
This fact prohibits AES-GCM from being considered for use with OPAQUE (which requires RKR), one of the upcoming password-authenticated key exchange algorithms. (Read more about them here.)
Better-Designed Algorithms
You might be thinking, “Okay random furry, if you hate AES-GCM so much, what would you propose we use instead?”
I’m glad you asked!
XChaCha20-Poly1305
For encrypting messages under a long-term key, you can’t really beat XChaCha20-Poly1305.
- ChaCha is a stream cipher based on a 512-bit ARX hash function in counter mode. ChaCha doesn’t use S-Boxes. It’s fast and constant-time without hardware acceleration.
- ChaCha20 is ChaCha with 20 rounds.
- XChaCha nonces are 24 bytes, which allows you to generate them randomly and not worry about a birthday collision until about 2^80 messages (for the same collision probability as AES-GCM).
- Poly1305 uses a different 256-bit key for each (nonce, key) pair and is easier to implement in constant-time than AES-GCM.
- XChaCha20-Poly1305 uses the first 16 bytes of the nonce and the 256-bit key to generate a distinct subkey, and then employs the standard ChaCha20-Poly1305 construction used in TLS today.
For application-layer cryptography, XChaCha20-Poly1305 contains most of the properties you’d want from an authenticated mode.
However, like AES-GCM (and all other Polynomial MACs I’ve heard of), it is not message committing.
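For illustration, here’s roughly what using it looks like through libsodium’s XChaCha20-Poly1305 AEAD, via PyNaCl’s low-level bindings (the exact module path and function signatures are an assumption; check your binding’s documentation):

import secrets
from nacl.bindings import (
    crypto_aead_xchacha20poly1305_ietf_encrypt,
    crypto_aead_xchacha20poly1305_ietf_decrypt,
)

key = secrets.token_bytes(32)    # long-term 256-bit key
nonce = secrets.token_bytes(24)  # 24 bytes: random generation is safe here
aad = b"message header"          # authenticated but not encrypted

ct = crypto_aead_xchacha20poly1305_ietf_encrypt(b"hello", aad, nonce, key)
pt = crypto_aead_xchacha20poly1305_ietf_decrypt(ct, aad, nonce, key)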
The Gimli Permutation
For lightweight cryptography (n.b. important for IoT), the Gimli permutation (e.g. employed in libhydrogen) is an attractive option.
Gimli is a Round 2 candidate in NIST’s Lightweight Cryptography project. The Gimli permutation offers a lot of applications: a hash function, message authentication, encryption, etc.
Critically, it’s possible to construct a message-committing protocol out of Gimli that will hit a lot of the performance goals important to embedded systems.
Closing Remarks
Despite my personal disdain for AES-GCM, if you’re using it as intended by cryptographers, it’s good enough.
Don’t throw AES-GCM out just because of my opinions. It’s very likely the best option you have.
Although I personally dislike AES and GCM, I’m still deeply appreciative of the brilliance and ingenuity that went into both designs.
My desire is for the industry to improve upon AES and GCM in future cipher designs so we can protect more people, from a wider range of threats, in more diverse protocols, at a cheaper CPU/memory/time cost.
We wouldn’t have a secure modern Internet without the work of Vincent Rijmen, Joan Daemen, John Viega, David A. McGrew, and the countless other cryptographers and security researchers who made AES-GCM possible.
Change Log
- 2021-10-26: Added section on H Reuse and Multi-User Security.
https://soatok.blog/2020/05/13/why-aes-gcm-sucks/
#AES #AESGCM #cryptography #GaloisCounterMode #opinion #SecurityGuidance #symmetricCryptography
Previously on Dead Ends in Cryptanalysis, we talked about length-extension attacks and precisely why modern hash functions like SHA-3 and BLAKE2 aren’t susceptible.
The art and science of side-channel cryptanalysis is one of the subjects I’m deeply fascinated by, and it’s something you’ll hear me yap about a lot on this blog in the future.
Since my background before computer security was in web development, I spend a lot of time talking about timing side-channels in particular, as well as their mitigations (see also: constant-time-js).
Pictured: Me, when an interesting attack gets published on ePrint.
(Art by Khia.)
However, timing side-channels aren’t omnipotent. Even if your code isn’t constant-time, that doesn’t mean you necessarily have a vulnerability. Case in point:
Length Leaks Are Usually Nothing-Burgers
If you look closely at a constant-time string equality function, you’ll see some clause that looks like this:
if (left.length !== right.length) return false;
A common concern that crops up in bikeshedding discussions about the correct implementation of a constant-time compare is, “This will fail fast if two strings of non-equal length are provided; doesn’t this leak information about the strings being compared?”
Sure, but it won’t affect the security of the application that uses it. Consider a contrived example:
- You’re encrypting with AES-CTR then authenticating the ciphertext with HMAC-SHA256 (Encrypt then MAC).
- For added fun, let’s assume you’re using HKDF-HMAC-SHA512 with a 256-bit salt to derive separate encryption and MAC keys from the input key. This salt is prepended to the ciphertext and included as an input to the HMAC tag calculation. Now you don’t have to worry about cryptographic wear-out.
- You’re padding the plaintext to exactly 16 kilobytes prior to encryption, because the exact length of the plaintext is considered sensitive.
- You remove the padding after decryption.
- Your constant-time comparison is used to validate the HMAC tags.
Even though the length of your plaintext is sensitive, it doesn’t really matter that length mismatches leak here: The inputs to the constant-time compare are always HMAC-SHA256 outputs. They will always be 32 bytes (256 bits) long. This is public knowledge.
If you’ve somehow managed to design a protocol that depends on the secrecy of the length of a non-truncated HMAC-SHA256 output to be secure, you’ve probably fucked up something fierce.
However, if you were comparing the unpadded plaintexts with this function–or passing the unpadded plaintext to a hash function–you might have cause for concern.
“Double HMAC” is a defense against compiler/JIT optimizations, not length leaks.
(Art by Khia.)
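Since the pattern comes up in these bikeshedding discussions anyway, here’s a sketch of what Double HMAC looks like (the random per-process key is my own illustrative choice):

import hashlib, hmac, secrets

# Random key generated at process start; it never has to leave memory.
_COMPARE_KEY = secrets.token_bytes(32)

def double_hmac_equal(left: bytes, right: bytes) -> bool:
    # Re-MAC both inputs so the bytes actually being compared are
    # unpredictable to an attacker, blunting any optimizer shenanigans.
    return hmac.compare_digest(
        hmac.new(_COMPARE_KEY, left, hashlib.sha256).digest(),
        hmac.new(_COMPARE_KEY, right, hashlib.sha256).digest(),
    )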
When Do Timing Leaks Cause Impact?
Timing side-channels only lead to a vulnerability when they reveal some information about one of the secret inputs to a cryptographic function.
- Leaking how many leading bytes match when comparing HMACs can allow an attacker to forge a valid authentication tag for a chosen message–which often enables further attacks (e.g. padding oracles with AES-CBC + HMAC). The cryptographic secret is the correct authentication tag for a chosen message under a key known only to the defender.
- Leaking the number of leading zeroes in the shared secret introduced the risk of lattice attacks in TLS when used with Diffie-Hellman ciphersuites. See also: the Raccoon Attack. The cryptographic secret is the zero-trimmed shared secret, which is an input to a hash function.
- Leaking the secret number in the modular inverse step when calculating an ECDSA signature gives attackers enough information to recover the secret key. This can happen if you’re using non-constant-time arithmetic.
Timing attacks can even break state-of-the-art cryptography projects, like the algorithms submitted to NIST’s Post-Quantum Cryptography standardization effort:
https://twitter.com/EllipticKiwi/status/1295670085969838080
However–and this is important–if what leaks is a public input (n.b. something the attackers already knows anyway), then who cares?
(Art by Khia.)
Why Timing Leaks Don’t Break Signature Verification
If you’re reviewing some cryptography library and discovered a timing leak in the elliptic curve signature verification function, you might feel tempted to file a vulnerability report with the maintainers of the library.
If so, you’re wasting your time and theirs, for two reasons:
- Signature verification is performed over public inputs (message, public key, signature).
- Knowing which byte the verification comparison fails on isn’t sufficient for forging a signature for a chosen message.
The first part is obvious (and discussed above), but the second might seem untrue at first: If HMAC breaks this way, why doesn’t ECDSA also suffer here?
The Anatomy of Elliptic Curve Digital Signatures
Elliptic curve signatures are usually encoded as (r, s). How these numbers are derived and verified depends on the algorithm in question.
In the case of ECDSA, you calculate two numbers (u1, u2) based on the hash of the plaintext and r, both multiplied by the modular inverse of s (mod n). You then calculate a curve point based on the public key (u1·G + u2·Q). The signature is valid if and only if the x coordinate of that curve point is equal to r from the signature (and the point isn’t the point at infinity).
Why Don’t Timing Attacks Do Anything Here?
Even with a timing leak on the string compare function in hand, you cannot easily find a valid (r, s) for a chosen message, for two reasons:
- The derivation of r is effectively an All-Or-Nothing Transform based on secret inputs.
- The curve point equation (u1·G + u2·Q) multiplies the ratio r/s by the public key (because u2 = r·s⁻¹ mod n); see the algebra below.
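Spelled out (writing z for the message hash, d for the secret key, and G for the base point, so that Q = dG and s = k⁻¹(z + r·d) mod n), the verification algebra is:

$$u_1 = z s^{-1} \bmod n, \qquad u_2 = r s^{-1} \bmod n$$

$$u_1 G + u_2 Q = z s^{-1} G + r s^{-1} (dG) = \frac{z + rd}{s}\,G = kG$$

The x coordinate of kG is r by construction.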
In order to calculate a valid (r, s) pair that will validate for a chosen message, you’d need to know the secret key that corresponds to the public key Q.
It’s not impossible to calculate this value, but it’s computationally infeasible, and the difficulty of this problem (measured in bits of security) is approximately one fourth the signature size. That is to say, 512-bit signatures, derived from 256-bit keys, have a security level of 128 bits.
Thus, timing leakage won’t let you perform an existential forgery here.
Aside: Don’t confuse signatures for MACs, as iMessage famously did.
(Art by Khia.)
Under What Conditions Could Timing Side-Channels Matter to ECDSA Verification?
Suppose you have a JSON Web Token library that’s vulnerable to the type confusion attack (wherein you can swap out the "alg":"ES256"
with "alg":"HS256"
and then use the public key as if it was an HMAC symmetric key).
In this hypothetical scenario, let’s say you’re using this JWT library in an OIDC-like configuration, where the identity provider signs tokens and the application verifies them, using a public key known to the application.
Also assume, for absolutely contrived reasons, that the public key is not known to the attacker.
If you had a timing attack that leaks the public key, that would be a viable (if horrendously slow) way to make the vulnerability exploitable.
However, even in this setup, the timing leak still doesn’t qualify as a real vulnerability. It merely defeats attempts at Security Through Obscurity. The real vulnerability is any JWT library that allows this attack (or alg=none).
Additionally, you can recover the public key if you have sufficient knowledge of the curve algorithm used, message signed, etc.–which you do if the algorithm is ES256
–so you don’t really even need a timing leak for this. Consequently, timing leaks would only help you if the original algorithm was something custom and obscure to attackers.
(Aside: there are two possible public keys from each signature, so the signature alone isn’t sufficient for uniquely identifying public keys. If you’re hoping to reduce protocol bandwidth through this trick, it won’t work.)
TL;DR
In order for a timing leak to be useful for cryptanalysis, it cannot leak a publicly-known input to the cryptographic operation.
https://soatok.blog/2021/06/07/dead-ends-in-cryptanalysis-2-timing-side-channels/
#cryptanalysis #crypto #cryptography #deadEndsInCryptanalysis #ECDSA #sideChannels #Technology #timingAttacks
This is the first entry in a (potentially infinite) series of dead end roads in the field of cryptanalysis.
Cryptography engineering is one of many specialties within the wider field of security engineering. Security engineering is a discipline that chiefly concerns itself with studying how systems fail in order to build better systems–ones that are resilient to malicious acts or even natural disasters. It sounds much simpler than it is.
If you want to develop and securely implement a cryptography feature in the application you’re developing, it isn’t enough to learn how to implement textbook descriptions of cryptography primitives during your C.S. undergrad studies (or equivalent). An active interest in studying how cryptosystems fail is the prerequisite for being a cryptography engineer.
Thus, cryptography engineering and cryptanalysis research go hand-in-hand.
Pictured: How I feel when someone tells me about a novel cryptanalysis technique relevant to the algorithm or protocol I’m implementing. (Art by Khia.)
If you are interested in exploring the field of cryptanalysis–be it to contribute on the attack side of cryptography or to learn better defense mechanisms–you will undoubtedly encounter roads that seem enticing and not well-tread, and it might not be immediately obvious why the road is a dead end. Furthermore, beyond a few comparison tables on Wikipedia or obscure Stack Exchange questions, the cryptology literature is often sparse on details about why these avenues lead nowhere.
So let’s explore where some of these dead-end roads lead, and why they stop where they do.
(Art by Kyume.)
Length Extension Attacks
It’s difficult to provide a better summary of length extension attacks than what Skull Security wrote in 2012. However, that only addresses “What are they?”, “How do you use them?”, and “Which algorithms and constructions are vulnerable?”, but leaves out a more interesting question: “Why were they even possible to begin with?”
An Extensive Tale
Tale, not tail! (Art by Swizz.)
To really understand length extension attacks, you have to understand how cryptographic hash functions used to be designed. This might sound intimidating, but we don’t need to delve too deep into the internals.
A cryptographic hash function is a keyless pseudorandom transformation from a variable length input to a fixed-length output. Hash functions are typically used as building blocks for larger constructions (both reasonable ones like HMAC-SHA-256, and unreasonable ones like my hash-crypt project).
However, hash functions like SHA-256 are designed to operate on sequential blocks of input. This is because sometimes you need to stream data into a hash function rather than load it all into memory at once. (This is why you can sha256sum a file larger than your available RAM without crashing your computer or causing performance headaches.)
A streaming hash function API might look like this:
class MyCoolHash(BaseHashClass):
    @staticmethod
    def init():
        """ Initialize the hash state. """

    def update(data):
        """ Update the hash state with additional data. """

    def digest():
        """ Finalize the hash function. """

    def compress():
        """ (Private method.) """
To use it, you’d call hash = MyCoolHash.init()
and then chain together hash.update()
calls with data as you load it from disk or the network, until you’ve run out of data. Then you’d call digest()
and obtain the hash of the entire message.
There are two things to take away right now:
- You can call
update()
multiple times, and that’s valid. - Your data might not be an even multiple of the internal block size of the hash function. (More often than not, it won’t be!)
So what happens when you call digest()
and the amount of data you’ve passed to update()
is not an even multiple of the hash size?
For most hash functions, the answer is simple: Append some ISO/IEC 7816-4 padding until you get a full block, run that through a final iteration of the internal compression function–the same one that gets called on update()
–and then output the current internal state.
Let’s take a slightly deeper look at what a typical runtime would look like for the MyCoolHash class I sketched above:
hash = MyCoolHash.init()
- Initialize some variables to some constants (initialization vectors).
hash.update(blockOfData)
- Start with any buffered data (currently none), count up to 32 bytes. If you’ve reached this amount, invoke
compress()
on that data and clear the buffer. Otherwise, just append blockOfData to the currently buffered data. - For every 32 byte of data not yet touched by
compress()
, invokecompress()
on this block (updating the internal state). - If you have any leftover bytes, append to the internal buffer for the next invocation to process.
- Start with any buffered data (currently none), count up to 32 bytes. If you’ve reached this amount, invoke
hash.update(moreData)
- Same as before, except there might be some buffered data from step 2.
output = hash.digest()
- If you have any data left in the buffer, append a 0x80 byte followed by a bunch of 0x00 bytes of padding until you reach the block size. If you don’t, you have an entire block of padding (0x80 followed by 0x00s).
- Call
compress()
one last time. - Serialize the internal hash state as a byte array or hexadecimal-encoded string (depending on usage). Return that to the caller.
This is a fairly general description that will hold for most older hash functions. Some details might be slightly wrong (subtly different padding scheme, whether or not to include a block of empty padding on digest()
invocations, etc.).
The details aren’t super important. Just the rhythm of the design.
init()
update()
- load buffer,
compress()
compress()
compress()
- …
- buffer remainder
- load buffer,
update()
- load buffer,
compress()
compress()
compress()
- …
- buffer remainder
- load buffer,
- …
digest()
- load buffer, pad,
compress()
- serialize internal state
- return
- load buffer, pad,
And thus, without having to know any of the details about what compress()
even looks like, the reason why length extension attacks were ever possible should leap out at you!
Art by Khia.
If it doesn’t, look closely at the difference between update()
and digest()
.
There are only two differences:
update()
doesn’t pad before callingcompress()
digest()
returns the internal state thatcompress()
always mutates
The reason length-extension attacks are possible is that, for some hash functions, the output of digest()
is its full internal state.
This means that you can take an existing hash function output and pretend it’s the internal state after an update()
call instead of a digest()
call by appending the padding and then, after calling compress()
, appending additional data of your choice.
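To see the whole mechanism end-to-end, here’s a toy Merkle–Damgård hash and a forgery against the hash(key ‖ message) construction discussed below. (The compression function is just SHA-256 used as a black box, and unlike real designs the padding doesn’t encode the message length; both simplifications are mine, for brevity.)

import hashlib

BLOCK = 32  # toy block size in bytes

def compress(state: bytes, block: bytes) -> bytes:
    # Toy compression function -- NOT a real design, just enough to
    # exhibit the structure described above.
    return hashlib.sha256(state + block).digest()

def pad(length: int) -> bytes:
    # ISO/IEC 7816-4 style: 0x80, then zeroes until the block is full.
    padlen = BLOCK - (length % BLOCK)
    return b"\x80" + b"\x00" * (padlen - 1)

def toy_hash(message: bytes, iv: bytes = b"\x00" * 32) -> bytes:
    data = message + pad(len(message))
    state = iv
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state  # the digest IS the full internal state: the flaw

key, msg, extra = b"secret-key", b"user=soatok", b"&admin=true"
mac = toy_hash(key + msg)  # all the attacker sees (plus len(key + msg))

glue = pad(len(key + msg))            # padding the victim will absorb
forged_msg = msg + glue + extra       # what the victim believes was MAC'd
forged_mac = toy_hash(extra, iv=mac)  # resume hashing from the leaked state

assert forged_mac == toy_hash(key + forged_msg)  # forged, key unknown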
The (F)Utility of Length Extension
Length-Extension Attacks are mostly used for attacking naive message authentication systems where someone attempts to authenticate a message (M) with a secret key (k), but they construct it like so:
auth_code = vulnerable_hash(k.append(M))
If this sounds like a very narrow use-case, that’s because it is. However, it still broke Flickr’s API once, and it’s a popular challenge for CTF competitions around the world.
Consequently, length-extension attacks are sometimes thought to be vulnerabilities of the construction rather than a vulnerability of the hash function. For a Message Authentication Code construction, these are classified under canonicalization attacks.
After all, even though SHA-256 is vulnerable to length-extension, you can’t actually exploit it unless someone is using it in a vulnerable fashion.
That being said, it’s common to say that hash functions like SHA-256 and SHA-512 are prone to length-extension.
Ways to Avoid Length-Extension Attacks
Use HMAC. HMAC was designed to prevent these kinds of attacks.
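In Python, that’s the standard library’s hmac module:

import hashlib, hmac

def auth_code(key: bytes, message: bytes) -> bytes:
    # HMAC's nested keyed hashing is why leaking a tag doesn't let
    # anyone extend the message: the outer hash re-keys the result.
    return hmac.new(key, message, hashlib.sha256).digest()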
Alternatively, if you don’t have any cryptographic secrets, you can always do what bitcoin did: Hash your hash again.
return sha256(sha256(message))
Note: Don’t actually do that, it’s dangerous for other reasons. You also don’t want to take this to an extreme. If you iterate your hash too many times, you’ll reinvent PBKDF1 and its insecurity. Two is plenty.
Or you can do something really trivial (which ultimately became another standard option in the SHA-2 family of hash functions):
Always start with a 512-bit hash and then truncate your output so the attacker never recovers the entire internal state of your hash in order to extend it.
That’s why you’ll sometimes see SHA-512/224 and SHA-512/256 in a list of recommendations. This isn’t saying “use one or the other”, that’s the (rather confusing) notation for a standardized SHA-512 truncation.
Note: This is actually what SHA-384 has done all along, and that’s one of the reasons why you see SHA-384 used more than SHA-512.
If you want to be extra fancy, you can also just use a different hash function that isn’t vulnerable to length extension, such as SHA-3 or BLAKE2.
Questions and Answers
Art by Khia.
Why isn’t BLAKE2 vulnerable to length extension attacks?
Quite simply: It sets a flag in the internal hash state before compressing the final buffer.
If you try to deserialize this state then invoke update()
, you’ll get a different result than BLAKE2’s compress()
produced during digest()
.
For a secure hash function, a single bit of difference in the internal state should result in a wildly different output. (This is called the avalanche effect.)
Why isn’t SHA-3 vulnerable to length extension attacks?
SHA-3 is a sponge construction whose internal state is much larger than the hash function output. This prevents an attacker from recovering the hash function’s internal state from a message digest (similar to the truncated hash function discussed above).
Why don’t length-extension attacks break digital signature algorithms?
Digital signature algorithms–such as RSASSA, ECDSA, and EdDSA–take a cryptographic hash of a message and then perform some asymmetric cryptographic transformation of the hash with the secret key to produce a signature that can be verified with a public key. (The exact details are particular to the signature algorithm in question.)
Length-extension attacks only allow you to take a valid H(k || m) and produce a valid H(k || m || padding || extra) hash that will validate, even if you don’t know k. They don’t magically create collisions out of thin air.
Even if you use a weak hash function like SHA-1, knowing M and H(M) is not sufficient to calculate a valid signature. (You need to be able to know these values in order to verify the signature anyway.)
The security of digital signature algorithms depends entirely on the secrecy of the signing key and the security of the asymmetric cryptographic transformation used to generate a signature. (And its resilience to side-channel attacks.)
However, a more interesting class of attack is possible for systems that expect digital signatures to have similar properties as cryptographic hash functions. This would qualify as a protocol vulnerability, not a length-extension vulnerability.
TL;DR
Art by Khia.
Length-extension attacks exploit a neat property of a few cryptographic hash functions–most of which you shouldn’t be using in 2020 anyway (SHA-2 is still fine)–but can only be exploited by a narrow set of circumstances.
If you find yourself trying to use length-extension to break anything else, you’ve probably run into a cryptographic dead end and need to backtrack onto more interesting avenues of exploitation–of which there are assuredly many (unless your cryptography is boring).
Next: Timing Side-Channels
https://soatok.blog/2020/10/06/dead-ends-in-cryptanalysis-1-length-extension-attacks/
#cryptanalysis #crypto #cryptographicHashFunction #cryptography #lengthExtensionAttacks
A question I get asked frequently is, “How did you learn cryptography?”
I could certainly tell everyone my history as a self-taught programmer who discovered cryptography when, after my website for my indie game projects kept getting hacked, I was introduced to cryptographic hash functions… but I suspect the question folks want answered is, “How would you recommend I learn cryptography?” rather than my cautionary tale about a poorly-implemented password hash being a gateway bug.
The Traditional Ways to Learn
There are two traditional ways to learn cryptography.
If you want a book to augment your journey in either traditional path, I recommend Serious Cryptography by Jean-Philippe Aumasson.
Academic Cryptography
The traditional academic way to learn cryptography involves a lot of self-study about number theory, linear algebra, discrete mathematics, probability, permutations, and field theory.
You’d typically start off with classical ciphers (Caesar, etc.) then work your way through the history of ciphers until you finally reach an introduction to the math underpinning RSA and Diffie-Hellman, and maybe get taught about Schneier’s Law and cautioned to only use AES and SHA-2… and then you’re left to your own devices unless you pursue a degree in cryptography.
The end result of people carelessly exploring this path is a lot of designs like Telegram’s MTProto that do stupid things with exotic block cipher modes and misuse vanilla cryptographic hash functions as message authentication codes; often with textbook (i.e. unpadded) RSA, AES in ECB, CBC, or some rarely-used mode that the author had to write custom code to handle (using ECB mode under the hood), and (until recently) SHA-1.
People who decide to pursue cryptography as a serious academic discipline will not make these mistakes; they’re too well-trained to fall into the common traps. Instead, they run the risk of spending years involved in esoteric research about homomorphic encryption, cryptographic pairings, and other cool stuff that might not see real-world deployment (outside of novel cryptocurrency hobby projects) for five or more years.
That is to say: Academia is a valid path to pursue, but it’s not for everyone.
If you want to explore this path, Cryptography I by Dan Boneh is a great starting point.
Security Industry-Driven Cryptography
The other traditional way to learn cryptography is to break existing cryptography implementations. This isn’t always as difficult as it may sound: Reverse engineering video games to defeat anti-cheat protections has led several of my friends into learning about cryptography.

For security-minded folks, the best place to start is the CryptoPals challenges. Another alternative is CryptoHack.
There are also plenty of CTF events all year around, but they’re rarely a good cryptography learning exercise above what CryptoPals offers. (Though there are notable exceptions.)
A Practical Approach to Learning Cryptography
Art by Kyume.

If you’re coming from a computer programming background and want to learn cryptography, the traditional approaches carry the risk of Reasoning By Lego.
Instead, the approach I recommend is to start gaining experience with the safest, highest-level libraries and then slowly working your way down into the details.
This approach has two benefits:
- If you have to implement something while you’re still learning, your knowledge and experience will be slanted towards “use something safe and secure”, not “hack together something with Blowfish in ECB mode and MD5 because they’re familiar”.
- You can let your own curiosity guide your education rather than follow someone else’s study guide.
To illustrate what this looks like, here’s how a JavaScript developer might approach learning cryptography, starting from the most easy-mode library and drilling down into specifics.
Super Easy Mode: DholeCrypto
Disclaimer: This is my project.

Dhole Crypto is an open source library, implemented in JavaScript and PHP and powered by libsodium, that tries to make security as easy as possible.
I designed Dhole Crypto for securing my own projects without increasing the cognitive load of anyone reviewing my code.
If you’re an experienced programmer, you should be able to successfully use Dhole Crypto in a Node.js/PHP project. If it does not come easy, that is a bug that should be fixed immediately.
Easy Mode: Libsodium
Using libsodium is slightly more involved than Dhole Crypto: Now you have to know what a nonce is, and take care to manage nonces carefully.

Advantage: Your code will be faster than if you used Dhole Crypto.
Libsodium is still pretty easy. If you use this cheat sheet, you can implement something secure without much effort. If you deviate from the cheat sheet, pay careful attention to the documentation.
If you’re writing system software (i.e. programming in C), libsodium is an incredibly easy-to-use library.
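To give a sense of what “easy” means here, below is a minimal sketch of authenticated secret-key encryption with sodium-plus, assuming its crypto_secretbox API (which mirrors libsodium’s secretbox):

const { SodiumPlus } = require('sodium-plus');

(async () => {
    const sodium = await SodiumPlus.auto();
    // One secret key, and one random 24-byte nonce per message
    const key = await sodium.crypto_secretbox_keygen();
    const nonce = await sodium.randombytes_buf(24);
    const ciphertext = await sodium.crypto_secretbox('hello dhole', nonce, key);
    // Decryption fails loudly if the ciphertext was tampered with
    const plaintext = await sodium.crypto_secretbox_open(ciphertext, nonce, key);
    console.log(plaintext.toString('utf-8'));
})();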
Moderate Difficulty: Implementing Protocols
Let’s say you’re working on a project where libsodium is overkill, and you only need a few cryptography primitives and constructions (e.g. XChaCha20-Poly1305). A good example: In-browser JavaScript.

Instead of forcing your users to download the entire Sodium library, you might opt to implement a compatible construction using JavaScript implementations of these primitives.
Since you have trusted implementations to test your construction against, this should be a comparatively low-risk effort (assuming the primitive implementations are also secure), but it’s not one that should be undertaken without all of the prior experience.
Note: At this stage you are not implementing the primitives, just using them.
Hard Difficulty: Designing Protocols and Constructions
Repeat after me: “I will not roll my own crypto before I’m ready.” Art by AtlasInu.

To distinguish: TLS and Noise are protocols. AES-GCM and XChaCha20-Poly1305 are constructions.
Once you’ve implemented protocols and constructions, the next step in your self-education is to design new ones.
Maybe you want to combine XChaCha20 with a MAC based on the BLAKE3 hash function, with some sort of SIV to make the whole shebang nonce-misuse resistant?
You wouldn’t want to dive headfirst into cryptography protocol/construction design without all of the prior experience.
Very Hard Mode: Implementing Cryptographic Primitives
It’s not so much that cryptography primitives are hard to implement. You could fit RC4 in a tweet before they raised the character limit to 280. (Don’t use RC4 though!)

The hard part is that they’re hard to implement securely. See also: LadderLeak.
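To make the size point concrete, here’s RC4 from memory in JavaScript, as a historical curiosity only; it really is this short, and you still should not use it:

// RC4, for illustration only -- do NOT use this for anything real.
function rc4(key, data) {
    const S = new Uint8Array(256);
    for (let i = 0; i < 256; i++) S[i] = i;
    // Key scheduling algorithm (KSA)
    let j = 0;
    for (let i = 0; i < 256; i++) {
        j = (j + S[i] + key[i % key.length]) & 0xff;
        [S[i], S[j]] = [S[j], S[i]];
    }
    // Pseudo-random generation algorithm (PRGA): XOR keystream into data
    const out = new Uint8Array(data.length);
    let i = 0;
    j = 0;
    for (let k = 0; k < data.length; k++) {
        i = (i + 1) & 0xff;
        j = (j + S[i]) & 0xff;
        [S[i], S[j]] = [S[j], S[i]];
        out[k] = data[k] ^ S[(S[i] + S[j]) & 0xff];
    }
    return out;
}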
Usually when you get to this stage in your education, you will have also picked up one or both of the traditional paths to augment your understanding. If not, you really should.
Nightmare Mode: Designing Cryptography Primitives
A lot of people like to dive straight into this stage early in their education. This usually ends in tears.

If you’ve mastered every step in my prescribed outline and pursued both of the traditional paths to the point that you have a novel published attack in a peer-reviewed journal (and mirrored on ePrint), then you’re probably ready for this stage.
Bonus: If you’re a furry and you become a cryptography expert, you can call yourself a cryptografur. If you had no other reason to learn cryptography, do it just for pun!
Header art by circuitslime.
https://soatok.blog/2020/06/10/how-to-learn-cryptography-as-a-programmer/
#cryptography #education #programming #Technology
Governments are back on their anti-encryption bullshit again.
Between the U.S. Senate’s “EARN IT” Act, the E.U.’s slew of anti-encryption proposals, and Australia’s new anti-encryption law, it’s become clear that the authoritarians in office view online privacy as a threat to their existence.
Normally, when the governments increase their anti-privacy sabre-rattling, technologists start talking more loudly about Tor, Signal, and other privacy technologies (usually only to be drowned out by paranoid people who think Tor and Signal are government backdoors or something stupid; conspiracy theories ruin everything!).
I’m not going to do that.
Instead, I’m going to show you how to add end-to-end encryption to any communication software you’re developing. (Hopefully, I’ll avoid making any bizarre design decisions along the way.)
But first, some important disclaimers:
- Yes, you should absolutely do this. I don’t care how banal your thing is; if you expect people to use it to communicate with each other, you should make it so that you can never decrypt their communications.
- You should absolutely NOT bill the thing you’re developing as an alternative to Signal or WhatsApp.
- The goal of doing this is to increase the amount of end-to-end encryption deployed on the Internet that the service operator cannot decrypt (even if compelled by court order) and make E2EE normalized. The goal is NOT to compete with highly specialized and peer-reviewed privacy technology.
- I am not a lawyer, I’m some furry who works in cryptography. The contents of this blog post are not legal advice, nor are they endorsed by any company or organization. Ask the EFF for legal questions.
The organization of this blog post is as follows: First, I’ll explain how to encrypt and decrypt data between users, assuming you have a key. Next, I’ll explain how to build an authenticated key exchange and a ratcheting protocol to determine the keys used in the first step. Afterwards, I’ll explore techniques for binding authentication keys to identities and managing trust. Finally, I’ll discuss strategies for making it impractical to ever backdoor your software (and impossible to silently backdoor it), just to piss the creeps and tyrants of the world off even more.
You don’t have to implement the full stack of solutions to protect users, but the further you can afford to go, the safer your users will be from privacy-invasive policing.
(Art by Kyume.)
Preliminaries
Choosing a Cryptography Library
In the examples contained on this page, I will be using the Sodium cryptography library. Specifically, my example code will be written with the Sodium-Plus library for JavaScript, since it strikes a good balance between performance and being cross-platform.
const { SodiumPlus } = require('sodium-plus');

(async function() {
    // Select a backend automatically
    const sodium = await SodiumPlus.auto();
    // Do other stuff here
})();
Libsodium is generally the correct choice for developing cryptography features in software, and is available in most programming languages.

If you’re prone to choose a different library, you should consult your cryptographer (and yes, you should have one on your payroll if you’re doing things differently) about your design choices.
Threat Modelling
Remember above when I said, “You don’t have to implement the full stack of solutions to protect users, but the further you can afford to go, the safer your users will be from privacy-invasive policing”?
How far you go in implementing the steps outlined on this blog post should be informed by a threat model, not an ad hoc judgment.
For example, if you’re encrypting user data and storing it in the cloud, you probably want to pass the Mud Puddle Test:
1. First, drop your device(s) in a mud puddle.
2. Next, slip in said puddle and crack yourself on the head. When you regain consciousness you’ll be perfectly fine, but won’t for the life of you be able to recall your device passwords or keys.
3. Now try to get your cloud data back.

Did you succeed? If so, you’re screwed. Or to be a bit less dramatic, I should say: your cloud provider has access to your ‘encrypted’ data, as does the government if they want it, as does any rogue employee who knows their way around your provider’s internal policy checks.
Matthew Green describes the Mud Puddle Test, which Apple products definitely don’t pass.
If you must fail the Mud Puddle Test for your users, make sure you’re clear and transparent about this in the documentation for your product or service.
(Art by Swizz.)
I. Symmetric-Key Encryption
The easiest piece of this puzzle is to encrypt data in transit between both ends (thus, satisfying the loosest definition of end-to-end encryption).
At this layer, you already have some kind of symmetric key to use for encrypting data before you send it, and for decrypting it as you receive it.
For example, the following code will encrypt/decrypt strings and return hexadecimal strings with a version prefix.
const VERSION = "v1";/** * @param {string|Uint8Array} message * @param {Uint8Array} key * @param {string|null} assocData * @returns {string} */async function encryptData(message, key, assocData = null) { const nonce = await sodium.randombytes_buf(24); const aad = JSON.stringify({ 'version': VERSION, 'nonce': await sodium.sodium_bin2hex(nonce), 'extra': assocData }); const encrypted = await sodium.crypto_aead_xchacha20poly1305_ietf_encrypt( message, nonce, key, aad ); return ( VERSION + await sodium.sodium_bin2hex(nonce) + await sodium.sodium_bin2hex(encrypted) );}/** * @param {string|Uint8Array} message * @param {Uint8Array} key * @param {string|null} assocData * @returns {string} */async function decryptData(encrypted, key, assocData = null) { const ver = encrypted.slice(0, 2); if (!await sodium.sodium_memcmp(ver, VERSION)) { throw new Error("Incorrect version: " + ver); } const nonce = await sodium.sodium_hex2bin(encrypted.slice(2, 50)); const ciphertext = await sodium.sodium_hex2bin(encrypted.slice(50)); const aad = JSON.stringify({ 'version': ver, 'nonce': encrypted.slice(2, 50), 'extra': assocData }); const plaintext = await sodium.crypto_aead_xchacha20poly1305_ietf_decrypt( ciphertext, nonce, key, aad ); return plaintext.toString('utf-8');}
Under-the-hood, this is using XChaCha20-Poly1305, which is less sensitive to timing leaks than AES-GCM. However, like AES-GCM, this encryption mode doesn’t provide message- or key-commitment.
If you want key commitment, you should derive two keys from $key using a KDF based on hash functions: one for actual encryption, and the other as a key commitment value.
If you want message commitment, you can use AES-CTR + HMAC-SHA256 or XChaCha20 + BLAKE2b-MAC.
If you want both, ask Taylor Campbell about his BLAKE3-based design.
A modified version of the above code with key-commitment might look like this:
const VERSION = "v2";/** * Derive an encryption key and a commitment hash. * @param {CryptographyKey} key * @param {Uint8Array} nonce * @returns {{encKey: CryptographyKey, commitment: Uint8Array}} */async function deriveKeys(key, nonce) { const encKey = new CryptographyKey(await sodium.crypto_generichash( new Uint8Array([0x01].append(nonce)), key )); const commitment = await sodium.crypto_generichash( new Uint8Array([0x02].append(nonce)), key ); return {encKey, commitment};}/** * @param {string|Uint8Array} message * @param {Uint8Array} key * @param {string|null} assocData * @returns {string} */async function encryptData(message, key, assocData = null) { const nonce = await sodium.randombytes_buf(24); const aad = JSON.stringify({ 'version': VERSION, 'nonce': await sodium.sodium_bin2hex(nonce), 'extra': assocData }); const {encKey, commitment} = await deriveKeys(key, nonce); const encrypted = await sodium.crypto_aead_xchacha20poly1305_ietf_encrypt( message, nonce, encKey, aad ); return ( VERSION + await sodium.sodium_bin2hex(nonce) + await sodium.sodium_bin2hex(commitment) + await sodium.sodium_bin2hex(encrypted) );}/** * @param {string|Uint8Array} message * @param {Uint8Array} key * @param {string|null} assocData * @returns {string} */async function decryptData(encrypted, key, assocData = null) { const ver = encrypted.slice(0, 2); if (!await sodium.sodium_memcmp(ver, VERSION)) { throw new Error("Incorrect version: " + ver); } const nonce = await sodium.sodium_hex2bin(encrypted.slice(2, 50)); const ciphertext = await sodium.sodium_hex2bin(encrypted.slice(114)); const aad = JSON.stringify({ 'version': ver, 'nonce': encrypted.slice(2, 50), 'extra': assocData }); const storedCommitment = await sodium.sodium_hex2bin(encrypted.slice(50, 114)); const {encKey, commitment} = await deriveKeys(key, nonce); if (!(await sodium.sodium_memcmp(storedCommitment, commitment))) { throw new Error("Incorrect commitment value"); } const plaintext = await sodium.crypto_aead_xchacha20poly1305_ietf_decrypt( ciphertext, nonce, encKey, aad ); return plaintext.toString('utf-8');}
Another design choice you might make is to encode ciphertext with base64 instead of hexadecimal. That doesn’t significantly alter the design here, but it does mean your decoding logic has to accommodate this.
You SHOULD version your ciphertexts, and include this in the AAD provided to your AEAD encryption mode. I used “v1” and “v2” as a version string above, but you can use your software name for that too.
II. Key Agreement
If you’re not familiar with Elliptic Curve Diffie-Hellman or Authenticated Key Exchanges, two of the earliest posts on this blog were dedicated to those topics.
Key agreement in libsodium uses Elliptic Curve Diffie-Hellman over Curve25519, or X25519 for short.
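As a warm-up for what follows, a bare X25519 exchange in sodium-plus looks like this (a minimal sketch; in practice the shared secret should always be run through a KDF rather than used directly):

const { SodiumPlus } = require('sodium-plus');

(async () => {
    const sodium = await SodiumPlus.auto();
    const foxKeypair = await sodium.crypto_box_keypair();
    const wolfKeypair = await sodium.crypto_box_keypair();
    const foxSecret = await sodium.crypto_box_secretkey(foxKeypair);
    const foxPublic = await sodium.crypto_box_publickey(foxKeypair);
    const wolfSecret = await sodium.crypto_box_secretkey(wolfKeypair);
    const wolfPublic = await sodium.crypto_box_publickey(wolfKeypair);
    // Each side combines its own secret key with the other's public key:
    const foxShared = await sodium.crypto_scalarmult(foxSecret, wolfPublic);
    const wolfShared = await sodium.crypto_scalarmult(wolfSecret, foxPublic);
    // foxShared and wolfShared now hold the same X25519 shared secret.
})();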
There are many schools of thought for extending ECDH into an authenticated key exchange protocol.
We’re going to implement what the Signal Protocol calls X3DH instead of doing some interactive EdDSA + ECDH hybrid, because X3DH provides cryptographic deniability (see this section of the X3DH specification for more information).
For the moment, I’m going to assume a client-server model. That may or may not be appropriate for your design. You can substitute “the server” for “the other participant” in a peer-to-peer configuration.
Heads up: This section of the blog post is code-heavy.
Update (November 23, 2020): I implemented this design in TypeScript, if you’d like something tangible to work with. I call my library Rawr X3DH.
X3DH Pre-Key Bundles
Each participant will need to upload an Ed25519 identity key once (which is a detail covered in another section), which will be used to sign bundles of X25519 public keys to use for X3DH.
Your implementation will involve a fair bit of boilerplate, like so:
/**
 * Generate an X25519 keypair.
 *
 * @returns {{secretKey: X25519SecretKey, publicKey: X25519PublicKey}}
 */
async function generateKeyPair() {
    const keypair = await sodium.crypto_box_keypair();
    return {
        secretKey: await sodium.crypto_box_secretkey(keypair),
        publicKey: await sodium.crypto_box_publickey(keypair)
    };
}

/**
 * Generates some number of X25519 keypairs.
 *
 * @param {number} preKeyCount
 * @returns {{secretKey: X25519SecretKey, publicKey: X25519PublicKey}[]}
 */
async function generateBundle(preKeyCount = 100) {
    const bundle = [];
    for (let i = 0; i < preKeyCount; i++) {
        bundle.push(await generateKeyPair());
    }
    return bundle;
}

/**
 * BLAKE2b( len(PK) | PK_0, PK_1, ... PK_n )
 *
 * @param {X25519PublicKey[]} publicKeys
 * @returns {Uint8Array}
 */
async function prehashPublicKeysForSigning(publicKeys) {
    const hashState = await sodium.crypto_generichash_init();
    // First, update the state with the number of public keys
    const pkLen = new Uint8Array([
        (publicKeys.length >>> 24) & 0xff,
        (publicKeys.length >>> 16) & 0xff,
        (publicKeys.length >>> 8) & 0xff,
        publicKeys.length & 0xff
    ]);
    await sodium.crypto_generichash_update(hashState, pkLen);
    // Next, update the state with each public key
    for (let pk of publicKeys) {
        await sodium.crypto_generichash_update(
            hashState,
            pk.getBuffer()
        );
    }
    // Return the finalized BLAKE2b hash
    return await sodium.crypto_generichash_final(hashState);
}

/**
 * Signs a bundle. Returns the signature.
 *
 * @param {Ed25519SecretKey} signingKey
 * @param {X25519PublicKey[]} publicKeys
 * @returns {Uint8Array}
 */
async function signBundle(signingKey, publicKeys) {
    return sodium.crypto_sign_detached(
        await prehashPublicKeysForSigning(publicKeys),
        signingKey
    );
}

/**
 * This is just so you can see how verification looks.
 *
 * @param {Ed25519PublicKey} verificationKey
 * @param {X25519PublicKey[]} publicKeys
 * @param {Uint8Array} signature
 */
async function verifyBundle(verificationKey, publicKeys, signature) {
    return sodium.crypto_sign_verify_detached(
        await prehashPublicKeysForSigning(publicKeys),
        verificationKey,
        signature
    );
}
This boilerplate exists just so you can do something like this:
/**
 * Generate some number of X25519 keypairs.
 * Persist the bundle.
 * Sign the bundle of public keys with the Ed25519 secret key.
 * Return the signed bundle (which can be transmitted to the server.)
 *
 * @param {Ed25519SecretKey} signingKey
 * @param {number} numKeys
 * @returns {{signature: string, bundle: string[]}}
 */
async function x3dh_pre_key(signingKey, numKeys = 100) {
    const bundle = await generateBundle(numKeys);
    const publicKeys = bundle.map(x => x.publicKey);
    const signature = await signBundle(signingKey, publicKeys);
    // This is a stub; how you persist it is app-specific:
    persistBundleNotDefinedHere(signingKey, bundle);
    // Hex-encode all the public keys
    const encodedBundle = [];
    for (let pk of publicKeys) {
        encodedBundle.push(await sodium.sodium_bin2hex(pk.getBuffer()));
    }
    return {
        'signature': await sodium.sodium_bin2hex(signature),
        'bundle': encodedBundle
    };
}
And then you can drop the output of x3dh_pre_key(secretKey) into a JSON-encoded HTTP request.
In accordance with Signal’s X3DH spec, you want to use x3dh_pre_key(secretKey, 1) to generate the “signed pre-key” bundle and x3dh_pre_key(secretKey, 100) when pushing 100 one-time keys to the server.
X3DH Initiation
This section conforms to the Sending the Initial Message section of the X3DH specification.
When you initiate a conversation, the server should provide you with a bundle containing:
- Your peer’s Identity key (an Ed25519 public key)
- Your peer’s current Signed Pre-Key (an X25519 public key)
- (If any remain unburned) One of your peer’s One-Time Keys (an X25519 public key), which the server should then delete
If we assume the structure of this response looks like this:
{ "IdentityKey": "...", "SignedPreKey": { "Signature": "..." "PreKey": "..." }, "OneTimeKey": "..." // or NULL}
Then we can write the initiation step of the handshake like so:
/**
 * Get SK for initializing an X3DH handshake
 *
 * @param {object} r -- See previous code block
 * @param {Ed25519SecretKey} senderKey
 */
async function x3dh_initiate_send_get_sk(r, senderKey) {
    const identityKey = new Ed25519PublicKey(
        await sodium.sodium_hex2bin(r.IdentityKey)
    );
    const signedPreKey = new X25519PublicKey(
        await sodium.sodium_hex2bin(r.SignedPreKey.PreKey)
    );
    const signature = await sodium.sodium_hex2bin(r.SignedPreKey.Signature);

    // Check signature
    const valid = await verifyBundle(identityKey, [signedPreKey], signature);
    if (!valid) {
        throw new Error("Invalid signature");
    }

    const ephemeral = await generateKeyPair();
    const ephSecret = ephemeral.secretKey;
    const ephPublic = ephemeral.publicKey;

    // Turn the Ed25519 keys into X25519 keys for X3DH:
    const senderX = await sodium.crypto_sign_ed25519_sk_to_curve25519(senderKey);
    const recipientX = await sodium.crypto_sign_ed25519_pk_to_curve25519(identityKey);

    // See the X3DH specification to really understand this part:
    const DH1 = await sodium.crypto_scalarmult(senderX, signedPreKey);
    const DH2 = await sodium.crypto_scalarmult(ephSecret, recipientX);
    const DH3 = await sodium.crypto_scalarmult(ephSecret, signedPreKey);

    let SK;
    if (r.OneTimeKey) {
        let DH4 = await sodium.crypto_scalarmult(
            ephSecret,
            new X25519PublicKey(await sodium.sodium_hex2bin(r.OneTimeKey))
        );
        SK = kdf(new Uint8Array(
            [].concat(DH1.getBuffer())
              .concat(DH2.getBuffer())
              .concat(DH3.getBuffer())
              .concat(DH4.getBuffer())
        ));
        DH4.wipe();
    } else {
        SK = kdf(new Uint8Array(
            [].concat(DH1.getBuffer())
              .concat(DH2.getBuffer())
              .concat(DH3.getBuffer())
        ));
    }

    // Wipe keys
    DH1.wipe();
    DH2.wipe();
    DH3.wipe();
    ephSecret.wipe();
    senderX.wipe();

    return {
        IK: identityKey,
        EK: ephPublic,
        SK: SK,
        OTK: r.OneTimeKey // might be NULL
    };
}

/**
 * Initialize an X3DH handshake
 *
 * @param {string} recipientIdentity - Some identifier for the user
 * @param {Ed25519SecretKey} secretKey - Sender's secret key
 * @param {Ed25519PublicKey} publicKey - Sender's public key
 * @param {string} message - The initial message to send
 * @returns {object}
 */
async function x3dh_initiate_send(recipientIdentity, secretKey, publicKey, message) {
    const r = await get_server_response(recipientIdentity);
    const {IK, EK, SK, OTK} = await x3dh_initiate_send_get_sk(r, secretKey);
    const assocData = await sodium.sodium_bin2hex(
        new Uint8Array(
            [].concat(publicKey.getBuffer())
              .concat(IK.getBuffer())
        )
    );

    /*
     * We're going to set the session key for our recipient to SK.
     * This might invoke a ratchet.
     *
     * Either SK or the output of the ratchet derived from SK
     * will be returned by getEncryptionKey().
     */
    await setSessionKey(recipientIdentity, SK);
    const encrypted = await encryptData(
        message,
        await getEncryptionKey(recipientIdentity),
        assocData
    );

    return {
        "Sender": my_identity_string, // app-specific
        "IdentityKey": await sodium.sodium_bin2hex(publicKey.getBuffer()),
        "EphemeralKey": await sodium.sodium_bin2hex(EK.getBuffer()),
        "OneTimeKey": OTK,
        "CipherText": encrypted
    };
}
We didn’t define setSessionKey() or getEncryptionKey() above. They will be covered later.
X3DH – Receiving an Initial Message
This section implements the Receiving the Initial Message section of the X3DH Specification.
We’re going to assume the structure of the request looks like this:
{ "Sender": "...", "IdentityKey": "...", "EphemeralKey": "...", "OneTimeKey": "...", "CipherText": "..."}
The code to handle this should look like this:
/**
 * Handle an X3DH initiation message as a receiver
 *
 * @param {object} r -- See previous code block
 * @param {Ed25519SecretKey} identitySecret
 * @param {Ed25519PublicKey} identityPublic
 * @param {Ed25519SecretKey} preKeySecret
 */
async function x3dh_initiate_recv_get_sk(
    r,
    identitySecret,
    identityPublic,
    preKeySecret
) {
    // Decode strings
    const senderIdentityKey = new Ed25519PublicKey(
        await sodium.sodium_hex2bin(r.IdentityKey)
    );
    const ephemeral = new X25519PublicKey(
        await sodium.sodium_hex2bin(r.EphemeralKey)
    );

    // Ed25519 -> X25519
    const senderX = await sodium.crypto_sign_ed25519_pk_to_curve25519(senderIdentityKey);
    const recipientX = await sodium.crypto_sign_ed25519_sk_to_curve25519(identitySecret);

    // See the X3DH specification to really understand this part:
    const DH1 = await sodium.crypto_scalarmult(preKeySecret, senderX);
    const DH2 = await sodium.crypto_scalarmult(recipientX, ephemeral);
    const DH3 = await sodium.crypto_scalarmult(preKeySecret, ephemeral);

    let SK;
    if (r.OneTimeKey) {
        let DH4 = await sodium.crypto_scalarmult(
            await fetchAndWipeOneTimeSecretKey(r.OneTimeKey), // app-specific stub
            ephemeral
        );
        SK = kdf(new Uint8Array(
            [].concat(DH1.getBuffer())
              .concat(DH2.getBuffer())
              .concat(DH3.getBuffer())
              .concat(DH4.getBuffer())
        ));
        DH4.wipe();
    } else {
        SK = kdf(new Uint8Array(
            [].concat(DH1.getBuffer())
              .concat(DH2.getBuffer())
              .concat(DH3.getBuffer())
        ));
    }

    // Wipe keys
    DH1.wipe();
    DH2.wipe();
    DH3.wipe();
    recipientX.wipe();

    return {
        Sender: r.Sender,
        SK: SK,
        IK: senderIdentityKey
    };
}

/**
 * Initiate an X3DH handshake as a recipient
 *
 * @param {object} req - Request object
 * @returns {string} - The initial message
 */
async function x3dh_initiate_recv(req) {
    // getIdentityKeypair() and getPreKeyPair() are app-specific stubs:
    const {identitySecret, identityPublic} = await getIdentityKeypair();
    const {preKeySecret, preKeyPublic} = await getPreKeyPair();
    const {Sender, SK, IK} = await x3dh_initiate_recv_get_sk(
        req,
        identitySecret,
        identityPublic,
        preKeySecret
    );
    const assocData = await sodium.sodium_bin2hex(
        new Uint8Array(
            [].concat(IK.getBuffer())
              .concat(identityPublic.getBuffer())
        )
    );
    try {
        // Note: the original referenced an undefined senderIdentity here;
        // it should be the Sender destructured above.
        await setSessionKey(Sender, SK);
        return decryptData(
            req.CipherText,
            await getEncryptionKey(Sender),
            assocData
        );
    } catch (e) {
        await destroySessionKey(Sender);
        throw e;
    }
}
And with that, you’ve successfully implemented X3DH and symmetric encryption in JavaScript.
We abstracted some of the details away (i.e. kdf(), the transport mechanisms, the session key management mechanisms, and a few others). Some of them will be highly specific to your application, so it doesn’t make a ton of sense to flesh them out.
One thing to keep in mind: According to the X3DH specification, participants should regularly (e.g. weekly) replace their Signed Pre-Key in the server with a fresh one. They should also publish more One-Time Keys when they start to run low.
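A hypothetical maintenance routine for this might look like the following sketch; every function here other than x3dh_pre_key is an illustrative stub, not part of any library:

// Rotate the signed pre-key weekly and top up one-time keys when low.
// uploadSignedPreKey, countRemainingOneTimeKeys, and uploadOneTimeKeys
// are app-specific transport stubs.
async function weeklyKeyMaintenance(signingKey) {
    const signedPreKey = await x3dh_pre_key(signingKey, 1);
    await uploadSignedPreKey(signedPreKey);
    if (await countRemainingOneTimeKeys() < 10) {
        const oneTimeKeys = await x3dh_pre_key(signingKey, 100);
        await uploadOneTimeKeys(oneTimeKeys);
    }
}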
If you’d like to see a complete reference implementation of X3DH, as I mentioned before, Rawr-X3DH implements it in TypeScript.
Session Key Management
Using X3DH for every message is inefficient and unnecessary. Even the Signal Protocol doesn’t do that.
Instead, Signal specifies a Double Ratchet protocol that combines a Symmetric-Key Ratchet on subsequent messages, and a Diffie-Hellman-based ratcheting protocol.
Signal even specifies integration guidelines for the Double Ratchet with X3DH.
It’s worth reading through the specification to understand their usages of Key-Derivation Functions (KDFs) and KDF Chains.
Although it is recommended to use HKDF as the Signal protocol specifies, you can strictly speaking use any secure keyed PRF to accomplish the same goal.
What follows is an example of a symmetric KDF chain that uses BLAKE2b with 512-bit digests of the current session key; the leftmost half of the BLAKE2b digest becomes the new session key, while the rightmost half becomes the encryption key.
const SESSION_KEYS = {};

/**
 * Note: In reality you'll want to have two separate sessions:
 * One for receiving data, one for sending data.
 *
 * @param {string} identity
 * @param {CryptographyKey} key
 */
async function setSessionKey(identity, key) {
    SESSION_KEYS[identity] = key;
}

async function getEncryptionKey(identity) {
    if (!SESSION_KEYS[identity]) {
        throw new Error("No session key for " + identity);
    }
    // BLAKE2b with a 512-bit digest: the left half ratchets the session
    // key forward; the right half becomes the one-time encryption key.
    const blake2bMac = await sodium.crypto_generichash(
        SESSION_KEYS[identity],
        null,
        64
    );
    SESSION_KEYS[identity] = new CryptographyKey(blake2bMac.slice(0, 32));
    return new CryptographyKey(blake2bMac.slice(32, 64));
}
In the interest of time, a full DHRatchet implementation is left as an exercise to the reader (since it’s mostly a state machine), but using the appropriate functions provided by sodium-plus (crypto_box_keypair(), crypto_scalarmult()) should be relatively straightforward.
Make sure your KDFs use domain separation, as per the Signal Protocol specifications.
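One cheap way to get domain separation with the primitives already in play is to mix a distinct, hard-coded context string into each BLAKE2b call. A minimal sketch; the helper name and context strings are my own placeholders, not from the Signal spec:

const { SodiumPlus, CryptographyKey } = require('sodium-plus');

/**
 * Derive a subkey bound to a specific purpose.
 * @param {SodiumPlus} sodium
 * @param {CryptographyKey} key
 * @param {string} context - a unique, hard-coded label per use case
 */
async function deriveSubkey(sodium, key, context) {
    return new CryptographyKey(
        await sodium.crypto_generichash(context, key, 32)
    );
}

// e.g. distinct chains for each direction of the conversation:
// const sendKey = await deriveSubkey(sodium, rootKey, 'ExampleApp-v1-send');
// const recvKey = await deriveSubkey(sodium, rootKey, 'ExampleApp-v1-recv');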
Group Key Agreement
The Signal Protocol specified X3DH and the Double Ratchet for securely encrypting information between two parties.
Group conversations are trickier, because you have to be able to encrypt information that multiple recipients can decrypt, add/remove participants to the conversation, etc.
(The same complexity comes with multi-device support for end-to-end encryption.)
The best design I’ve read to date for tackling group key agreement is the IETF Messaging Layer Security RFC draft.
I am not going to implement the entire MLS RFC in this blog post. If you want to support multiple devices or group conversations, you’ll want a complete MLS implementation to work with.
Brief Recap
That was a lot of ground to cover, but we’re not done yet.
(Art by Khia.)
So far we’ve tackled encryption, initial key agreement, and session key management. However, we did not flesh out how Identity Keys (which are signing keys–Ed25519 specifically–rather than Diffie-Hellman keys) are managed. That detail was just sorta hand-waved until now.
So let’s talk about that.
III. Identity Key Management
There’s a meme among technology bloggers to write a post titled “Falsehoods Programmers Believe About _____”.
Fortunately for us, Identity is one of the topics that furries are positioned to understand better than most (due to fursonas): Identities have a many-to-many relationship with Humans.
In an end-to-end encryption protocol, each identity will consist of some identifier (phone number, email address, username and server hostname, etc.) and an Ed25519 keypair (for which the public key will be published).
But how do you know whether or not a given public key is correct for a given identity?
This is where we segue into one of the hard problems in cryptography, where the solutions available are entirely dependent on your threat model: Public Key Infrastructure (PKI).
Some common PKI designs include:
- Certificate Authorities (CAs) — TLS does this
- Web-of-Trust (WoT) — The PGP ecosystem does this
- Trust On First Use (TOFU) — SSH does this
- Key Transparency / Certificate Transparency (CT) — TLS also does this for ensuring CA-issued certificates are auditable (although it was originally meant to replace Certificate Authorities)
And you can sort of choose-your-own-adventure on this one, depending on what’s most appropriate for the type of software you’re building and who your customers are.
One design I’m particularly fond of is called Gossamer, which is a PKI design without Certificate Authorities, originally designed for making WordPress’s automatic updates more secure (i.e. so every developer can sign their theme and plugin updates).
Since we only need to maintain an up-to-date repository of Ed25519 identity keys for each participant in our end-to-end encryption protocol, this makes Gossamer a suitable starting point.
Gossamer specifies a limited grammar of Actions that can be performed: AppendKey, RevokeKey, AppendUpdate, RevokeUpdate, and AttestUpdate. These actions are signed and published to an append-only cryptographic ledger.
I would propose a sixth action: AttestKey, so you can have WoT-like assurances and key-signing parties. (If nothing else, you should be able to attest that the identity keys of other cryptographic ledgers in the network are authentic at a point in time.)
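To make that concrete, a hypothetical AttestKey record might look something like this; the field names are my own illustration, not part of the Gossamer spec:

{
    "action": "AttestKey",
    "attester": "soatok@example.social",
    "subject": "fox@example.social",
    "subject-key": "<hex-encoded Ed25519 public key>",
    "attested-at": "2020-11-14T00:00:00Z",
    "signature": "<Ed25519 signature over the fields above, by the attester's identity key>"
}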
IV. Backdoor Resistance
In the previous section, I proposed the use of Gossamer as a PKI for Identity Keys. This would provide Ed25519 keypairs for use with X3DH and the Double Ratchet, which would in turn provide session keys to use for symmetric authenticated encryption.
If you’ve implemented everything preceding this section, you have a full-stack end-to-end encryption protocol. But let’s make intelligence agencies and surveillance capitalists even more mad by making it impractical to backdoor our software (and impossible to silently backdoor it).
How do we pull that off?
You want Binary Transparency.
For us, the implementation is simple: Use Gossamer as it was originally intended (i.e. to secure your software distribution channels).
Gossamer provides up-to-date verification keys and a commitment to a cryptographic ledger of every software update. You can learn more about its inspiration here.
It isn’t enough to merely use Gossamer to manage keys and update signatures. You need independent third parties to use the AttestUpdate action to assert one or more of the following:
- That builds are reproducible from the source code.
- That they have reviewed the source code and found no evidence of backdoors or exploitable vulnerabilities.
(And then you should let your users decide which of these independent third parties they trust to vet software updates.)
Closing Remarks
The U.S. Government cries and moans a lot about “criminals going dark” and wonders a lot about how to solve the “going dark problem”.
If more software developers implement end-to-end encryption in their communications software, then maybe one day they won’t be able to use dragnet surveillance to spy on citizens and they’ll be forced to do actual detective work to solve actual crimes.
Y’know, like their job description actually entails?
Let’s normalize end-to-end encryption. Let’s normalize backdoor-resistant software distribution.
Let’s collectively tell the intelligence community in every sophisticated nation state the one word they don’t hear often enough: No.

Especially if you’re a furry. Because we improve everything! :3
Questions You Might Have
What About Private Contact Discovery?
That’s one of the major reasons why the thing we’re building isn’t meant to compete with Signal (and it MUST NOT be advertised as such):
Signal is a privacy tool, and their servers have no way of identifying who can contact who.
What we’ve built here isn’t a complete privacy solution, it’s only providing end-to-end encryption (and possibly making NSA employees cry at their desk).
Does This Design Work with Federation?
Yes. Each identifier string can be [username] at [hostname].
What About Network Metadata?
If you want anonymity, you want to use Tor.
Why Are You Using Ed25519 Keys for X3DH?
If you only read the key agreement section of this blog post and the fact that I’m passing around Ed25519 public keys seems weird, you might have missed the identity section of this blog post where I suggested piggybacking on another protocol called Gossamer to handle the distribution of Ed25519 public keys. (Gossamer is also beneficial for backdoor resistance in software update distribution, as described in the subsequent section.)
Furthermore, we’re actually using birationally equivalent X25519 keys derived from the Ed25519 keypair for the X3DH step. This is a deviation from what Signal does (using X25519 keys everywhere, then inventing an EdDSA variant to support their usage).
const publicKeyX = await sodium.crypto_sign_ed25519_pk_to_curve25519(foxPublicKey);
const secretKeyX = await sodium.crypto_sign_ed25519_sk_to_curve25519(wolfSecretKey);
(Using fox/wolf instead of Alice/Bob, because it’s cuter.)
This design pattern has a few advantages:
- It makes Gossamer integration seamless, which means you can use Ed25519 for identities and still have a deniable X3DH handshake for 1:1 conversations while implementing the rest of the designs proposed.
- This approach to X3DH can be implemented entirely with libsodium functions, without forcing you to write your own cryptography implementations (i.e. for XEdDSA).
The only disadvantages I’m aware of are:
- It deviates from Signal’s core design in a subtle way that means you don’t get to claim the exact same advantages Signal does when it comes to peer review.
- Some cryptographers are distrustful of the use of birationally equivalent X25519 keys from Ed25519 keys (although there isn’t a vulnerability any of them have been able to point me to that doesn’t involve torsion groups–which libsodium’s implementation already avoids).
If these concerns are valid enough to decide against my implementation above, I invite you to talk with cryptographers about your concerns and then propose alternatives.
Has Any of This Been Implemented Already?
You can find implementations for the designs discussed on this blog post below:
- Rawr-X3DH implements X3DH in TypeScript (added 2020-11-23)
I will update this section of the blog post as implementations surface.
https://soatok.blog/2020/11/14/going-bark-a-furrys-guide-to-end-to-end-encryption/
#authenticatedEncryption #authenticatedKeyExchange #crypto #cryptography #encryption #endToEndEncryption #libsodium #OnlinePrivacy #privacy #SecurityGuidance #symmetricEncryption
Zoom recently announced that they were going to make end-to-end encryption available to all of their users–not just customers.

https://twitter.com/zoom_us/status/1320760108343652352
This is a good move, especially for people living in countries with inept leadership that failed to address the COVID-19 pandemic and therefore need to conduct their work and schooling remotely through software like Zoom. I enthusiastically applaud them for making this change.
End-to-end encryption, on by default, is a huge win for everyone who uses Zoom. (Art by Khia.)
The end-to-end encryption capability arrives on the heels of their acquisition of Keybase earlier this year. Hiring a team of security experts and cryptography engineers seems like a good move overall.
Upon hearing this news, I decided to be a good neighbor and take a look at their source code, with the reasoning, “If so many people’s privacy is going to be dependent on Zoom’s security, I might as well make sure they’re not doing something ridiculously bad.”
Except I couldn’t find their source code anywhere online. But they did publish a white paper on Github…
(Art by Khia.)
Disclaimers
What follows is the opinion of some guy on the Internet with a fursona–so whether or not you choose to take it seriously should be informed by this context. It is not the opinion of anyone’s employer, nor is it endorsed by Zoom, etc. Tell your lawyers to calm their nips.

More importantly, I’m not here to hate on Zoom for doing a good thing, nor on the security experts that worked hard on making Zoom better for their users. The responsibility of security professionals is to the users, after all.
Also, these aren’t zero-days, so don’t try to lecture me about “responsible” disclosure. (That term is also problematic, by the way.)
Got it? Good. Let’s move on.
(Art by Swizz.)
Bizarre Design Choices in Version 2.3 of Zoom’s E2E White Paper
Note: I’ve altered the screenshots to be white text on a black background, since my blog’s color scheme is darker than a typical academic PDF. You can find the source here.

Cryptographic Algorithms
It’s a little weird that they’re calculating a signature over SHA256(Context) || SHA256(M), considering Ed25519 uses SHA512 internally.
It would make just as much sense to sign Context || M directly–or, if pre-hashing large streams is needed, SHA512(Context || M).
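For the pre-hashed variant, a sketch of what that could look like with sodium-plus and Node's built-in hashing follows; signWithContext is a hypothetical name, not Zoom's API:

const crypto = require('crypto');

/**
 * Sign SHA512(Context || M) instead of SHA256(Context) || SHA256(M).
 * @param {SodiumPlus} sodium
 * @param {Ed25519SecretKey} secretKey
 * @param {Buffer} context
 * @param {Buffer} message
 */
async function signWithContext(sodium, secretKey, context, message) {
    const preHash = crypto.createHash('sha512')
        .update(Buffer.concat([context, message]))
        .digest();
    return sodium.crypto_sign_detached(preHash, secretKey);
}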
At the top of this section, it says it uses libsodium’s crypto_box interface. But then they go on to… not actually use it.

Instead, they wrote their own protocol using HKDF, two SHA256 hashes, and XChaCha20-Poly1305.

While secure, this isn’t really using the crypto_box interface. The only part of the libsodium interface that’s being used is crypto_box_beforenm() (https://github.com/jedisct1/libsodium/blob/927dfe8e2eaa86160d3ba12a7e3258fbc322909c/src/libsodium/crypto_box/curve25519xsalsa20poly1305/box_curve25519xsalsa20poly1305.c#L35-L46), which could easily have been a call to crypto_scalarmult() instead (since they’re passing the output of the scalar multiplication to HKDF anyway).

(Art by Riley.)
Also, the SHA256(a) || SHA256(b) pattern returns. Zoom’s engineers must love SHA256 for some reason.
This time, it’s in the additional associated data for the XChaCha20-Poly1305.
Binding the ciphertext and the signature to the same context string is a sensible thing to do, it’s just the concatenation of SHA256 hashes is a bit weird when SHA512 exists.
Meeting Leader Security Code
Here we see Zoom using a SHA256 hash of a constant string (“Zoombase-1-ClientOnly-MAC-SecurityCode“) in a construction that tries, but fails, to be HMAC.

And then they concatenate it with the SHA256 hash of the public key (which is already a 256-bit value), and then they hash the whole thing again.
It’s redundant SHA256 all the way down. The redundancy of “MAC” and “SecurityCode” in their constant string is, at least, consistent with the rest of their design philosophy.
It would be a real shame if double-hashing carried the risk of invalidating security proofs, or if the security proof for HMAC required a high Hamming distance of padding constants and this design decision also later saved HMAC from related-key attacks.
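For contrast, using actual HMAC-SHA256 is a one-liner in most environments. A minimal Node.js sketch; securityCodeMac is an illustrative name:

const crypto = require('crypto');

// HMAC-SHA256 handles key padding and the inner/outer hash constants
// for you; no hand-rolled SHA256-of-a-constant-string needed.
function securityCodeMac(key, publicKeyBytes) {
    return crypto.createHmac('sha256', key)
        .update(publicKeyBytes)
        .digest('hex');
}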
Hiding Personal Details
Wait, you’re telling me Zoom was aware of HMAC’s existence this whole time?

I give up!
Enough Pointless Dunking, What’s the Takeaway?
None of the design decisions Zoom made that I’ve criticized here are security vulnerabilities, but they do demonstrate an early lack of cryptography expertise in their product design.

After all, the weirdness is almost entirely contained in section 3 of their white paper, which describes the “Phase I” of their rollout. So what I’ve pointed out here appears to be mostly legacy cruft that wasn’t risky enough to bother changing in their final design.
The rest of their paper is pretty straightforward and pleasant to read. Their design makes sense in general, and each phase includes an “Areas to Improve” section.
All in all, if you’re worried about the security of Zoom’s E2EE feature, the only thing they can really do better is to publish the source code (and link to it from the whitepaper repository for ease-of-discovery) for this feature so independent experts can publicly review it.
However, they seem to be getting a lot of mileage out of the experts on their payroll, so I wouldn’t count on that happening.
https://soatok.blog/2020/10/28/bizarre-design-choices-in-zooms-end-to-end-encryption/
#encryption #endToEndEncryption #zoom
There are several different methods for securely hashing a password server-side for storage and future authentication. The most common one (a.k.a. the one that FIPS allows you to use, if compliance matters for you) is called PBKDF2. It stands for Password-Based Key Derivation Function #2.
Why #2? It’s got nothing to do with pencils. There was, in fact, a PBKDF1! But PBKDF1 was fatally insecure in a way I find very interesting. This StackOverflow answer is a great explainer on the difference between the two.
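For reference, here's what invoking PBKDF2 looks like with Node's standard library; the salt handling, iteration count, and output length below are illustrative choices, not a recommendation:

const crypto = require('crypto');

// PBKDF2-HMAC-SHA256: derive a 32-byte key from a password and a
// per-user random salt, with many iterations to slow down guessing.
const salt = crypto.randomBytes(16);
const derived = crypto.pbkdf2Sync('correct horse battery staple', salt, 600000, 32, 'sha256');
console.log(derived.toString('hex'));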
Very Hand-Wavy Description of a Hash Function
Let’s define a hash function as any one-way transformation of some arbitrary-length string ($m$) to a fixed-length, deterministic, pseudo-random output.
Note: When in doubt, I err on the side of being easily understood by non-experts over pedantic precision. (Art by Swizz)
For example, this is a dumb hash function (uses SipHash-2-4 with a constant key):
function dumb_hash(string $arbitrary, bool $raw_binary = false): string
{
    $h = sodium_crypto_shorthash($arbitrary, 'SoatokDreamseekr');
    if ($raw_binary) {
        return $h;
    }
    return sodium_bin2hex($h);
}
You can see the output of this function with some sample inputs here.
Properties of Hash Functions
A hash function is considered secure if it has the following properties:
- Pre-image resistance. Given $h = H(m)$, it should be difficult to find $m$.
- Second pre-image resistance. Given $m_1$, it should be difficult to find $m_2 \ne m_1$ such that $H(m_1) = H(m_2)$.
- Collision resistance. It should be difficult to find any arbitrary pair of messages ($m_1$, $m_2$) such that $H(m_1) = H(m_2)$.
That last property, collision resistance, is guaranteed up to the Birthday Bound of the hash function. For a hash function with a 256-bit output, you will expect to need on average $2^{128}$ trial messages to find a collision.
If you’re confused about the difference between collision resistance and second pre-image resistance:
Collision resistance is about finding any two messages that produce the same hash, but you don’t care what the hash is as long as two distinct but known messages produce it.
On the other paw, second pre-image resistance is about finding a second message that produces a given hash.
Exploring PBKDF1’s Insecurity
If you recall, hash functions map an arbitrary-length string to a fixed-length string. If your input size is larger than your output size, collisions are inevitable (albeit computationally infeasible for hash functions such as SHA-256).
But what if your input size is equal to your output size, because you’re taking the output of a hash function and feeding it directly back into the same hash function?
Then, as explained here, you get a depletion of the possible outputs with each successive iteration.
But what does that look like?
Without running the experiments on a given hash function, there are two possibilities that come to mind:
- Convergence. This is when iterated hashing, $H(\ldots H(m) \ldots)$, will, for two arbitrary messages and a sufficient number of iterations, converge on a single hash output.
- Cycles. This is when $H^{n+k}(m) = H^{n}(m)$ for some integer $k$.

The most interesting result would be a quine, which is a cycle of length 1 where $H(m) = m$ (that is to say, a message equal to its own hash).

The least interesting result would be for random inputs to converge into large cycles, e.g. cycles of size $2^{64}$ for a 256-bit hash function.

I calculated this lazily as the birthday bound of the birthday bound (so basically the 4th root, which for $2^{256}$ is $2^{64}$).

Update: According to this 1960 paper, the average time to cycle is $\sqrt{\pi N / 2}$, and cycle length should be $\sqrt{\pi N / 8}$, which means for a 256-bit hash ($N = 2^{256}$) you should expect a cycle after about $2^{128}$ iterations, or about 128 bits, and the average cycle length will be about $2^{127}$. Thanks Riastradh for the corrections.
Conjecture: I would expect secure cryptographic hash functions in use today (e.g. SHA-256) to lean towards the least interesting output.
An Experiment Design
Since I don’t have an immense amount of cheap cloud computing at my disposal to run these experiments on a real hash function, I’m going to cheat a little and use my constant-key SipHash code from earlier in this post. In future work, cryptographers may find studying real hash functions (e.g. SHA-256) worthwhile.
Given that SipHash is a keyed pseudo-random function with an output size of 64 bits, my dumb hash function can be treated as a 64-bit hash function.
This means that you should expect your first collision (with 50% probability) after only $2^{32}$ trial hashes. This is cheap enough to try on a typical laptop.
Here’s a simple experiment for convergence:
- Generate two random strings, $x$ and $y$.
- Set $x := H(x)$ and $y := H(y)$.
- Iterate until $x = y$.
You can get the source code to run this trivial experiment here.
Clone the git repository, run composer install, and then php bin/experiment.php. Note that this may need to run for a very long time before you get a result.
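If you'd rather follow along in JavaScript, here's a rough sodium-plus translation of the same loop (assuming crypto_shorthash accepts a 16-byte CryptographyKey, mirroring the PHP above):

const { SodiumPlus, CryptographyKey } = require('sodium-plus');

(async () => {
    const sodium = await SodiumPlus.auto();
    // Same 16-byte constant SipHash key as the PHP example above
    const key = new CryptographyKey(Buffer.from('SoatokDreamseekr'));
    let x = await sodium.randombytes_buf(8);
    let y = await sodium.randombytes_buf(8);
    let i = 0;
    do {
        // Feed each 64-bit output straight back into the hash function
        x = await sodium.crypto_shorthash(x, key);
        y = await sodium.crypto_shorthash(y, key);
        i++;
    } while (!(await sodium.sodium_memcmp(x, y)));
    console.log(`Converged after ${i} iterations`);
})();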
If you get a result, you’ve found a convergence.
If the loop doesn’t terminate even after 2^64 iterations, you’ve definitely found a cycle. (Actually detecting a cycle and analyzing its length would require a lot of memory, and my simple PHP script isn’t suitable for this effort.)
“What do you mean I don’t have a petabyte of RAM at my disposal?”
What Does This Actually Tell Us?
The obvious lesson: Don’t design key derivation functions like PBKDF1.
But beyond that, unless you can find a hash function that reliably converges or produces short cycles (much shorter than $2^{n/4}$, for an $n$-bit hash function), not much. (This is just for fun, after all!)
Definitely fun to think about though! (Art by circuitslime)
If, however, a hash function is discovered to produce interesting results, this may indicate that the chosen hash function’s internal design is exploitable in some subtle ways that, upon further study, may lead to better cryptanalysis techniques. Especially if a hash quine is discovered.
(Header art by Khia)
https://soatok.blog/2020/05/05/putting-the-fun-in-hash-function/
#crypto #cryptography #hashFunction #SipHash
The crypto industry isn’t dead yet. In fact, it’s one of the biggest funders in this US election cycle.
On #TechWontSaveUs, I spoke to @molly0xfff to discuss what crypto boosters want from their nearly $200 million in campaign spending.
Listen to the full episode: https://techwontsave.us/episode/235_crypto_is_spending_millions_to_sway_the_us_election_w_molly_white
#tech #crypto #politics #bitcoin
Crypto Is Spending Millions to Sway the US Election w/ Molly White - Tech Won’t Save Us
A left-wing podcast for better technology and a better world. (Tech Won’t Save Us)
It’s finally time to release my newest project: https://www.followthecrypto.org/
This website provides a real-time lens into the cryptocurrency industry’s efforts to influence 2024 elections in the United States.
#crypto #cryptocurrency #elections #USpol #lobbying
Follow the Crypto
Follow the cryptocurrency industry’s influence on 2024 elections in the United States. (Follow the Crypto)
8Petros [Signal: Petros.63]
#crypto #donation #anonymity #LifeSupport #donationsneeded #MutualAid
I am still waiting for my application for caretaker's social support to be accepted. I take full-time care of my wife (wheelchair, cerebral palsy) and I hope that - after several appeals and re-appeals (due to unfriendly legislation and even less friendly praxis) - I will get it approved. ETA: 3-6 months.
Meanwhile, I have no income and cannot even work (for money) remotely, as the authorities routinely use odd jobs as an argument (you can work, you do not need social support) to reject such applications.
So, I kindly ask you to support me with some crypto, so I can survive until - hopefully - a happy ending.
Here is the regular link to AnonPay service. And here is the Onion link as well.
Bumps, boosts and shares will be appreciated.
🇬🇧 An international consortium of journalists analyzed close to 300 Estonian-registered crypto companies and discovered dozens of crimes: massive-scale fraud, money laundering, sanctions evasion, and illicit financing of organizations such as Wagner Group.
- https://kyivindependent.com/investigation-baltic-registered-crypto-firms-service-payments-for-russian-private-army-allow-sanctions-evasion/
- https://vsquare.org/tales-from-the-crypto-money-laundering-fraud-sanctions-estonia-lithuania-russia/
#crypto #fraud #investigation #wagnergroup
Tales from the Crypto: How the Baltic states became the hub of money laundering and fraud
International criminals used Estonia’s deficient cryptocurrency regulation to turn the small country into a hub of financial crime, where over a billion euros have been laundered or defrauded from victims. (VSquare)
I need answers to three questions that are critical for the development of free enterprise (the relevant tools will be released publicly under an open license).
I am prepared to pay fairly for a professional opinion that will hold up against potential tax audits and other inspections.
1. Under the regulations on taxation of income from cryptocurrency transactions, does #GNUTaler count as a cryptocurrency?
2. Does barter (selling a good/service in exchange for cryptocurrency) have tax consequences; in particular, does it count as unregistered business activity with respect to the revenue limit and the material scope (sale of services)?
3. Can the costs of producing/obtaining the item sold in the above-mentioned barter be counted as the acquisition costs of the corresponding amount of cryptocurrency?
When you are a #Linux user and interested in #crypto
https://flathub.org/apps/com.github.miguelmota.Cointop
My reply?
“Thanks but no thanks.”
Why?
Because I don’t need to see the app. I know the people behind it and what they stand for. The app can only ever be a manifestation of (or an initial misdirection from) that.
Expecting the wolf that took a bite out of you last time to act in your interests this time is how you end up as its dinner.
#BlueSky #JackDorsey #billionaires #SiliconValley #decentralisation #lipService #crypto #bullshit
People have to make a decision. You can only choose one.
Do you want freedom / human autonomy?
Or do you want pretend safety?
Ben Franklin quote:
"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
Thomas Jefferson: "I prefer dangerous freedom over peaceful slavery"
#privacy #humanRights #autonomy #massSurveillance #crypto #e2ee #Signal #encryption
https://www.nytimes.com/2022/12/28/opinion/jack-dorseys-twitter-signal-privacy.html
Who needs “money” like that, right? People invented banking, payment systems, and other financial institutions for a reason. I swore I would never touch crypto.
The year 2022 came and brought War, as if the one Horseman¹ of the Apocalypse already hanging around wasn’t enough.
- The most popular global payment systems blocked Russia, which was bad enough by itself; worse, it became almost impossible to pay for services that remained operational but accepted no other form of payment.
- SWIFT bank transfers mostly stopped working, and what remained became very expensive.
- Some foreign investment accounts were frozen. So much for investing in US shares. Poof.
- The EU now forbids exporting euros to Russia, and some countries (like Finland) can actually confiscate² cash at the border.
- Local accounts in foreign currency are restricted by the government, which means you can’t withdraw cash from them in any substantial amount.
- At the same time, those accounts now carry negative interest rates, so if you just leave them alone they get depleted anyway. The only way out is to convert them to rubles at shamelessly low rates.
- The ruble, the local currency, has about four very different exchange rates against other currencies (depending on where, how, and in which direction you convert) and suffers from high inflation (which, again, varies a lot depending on how you calculate it, but is significant no matter what).
- The local government, along with just about everyone else, tightened all kinds of financial regulations, restrictions, and tracking.
- It looks like everyone has declared a cold war on cash. I have some money in 500 and 200 euro notes, which are apparently next to impossible to use in the EU now; unless you’re a drug dealer, gun runner, or child trafficker, no one has any use for those notes. Locally it’s a bit better, but clearly not for long.
So here I am, standing before you, admitting I broke my oath. I still think crypto is crap. So is the current state of the world, and I hate to say it, but the future doesn’t look much brighter. I thought those cryptoanarchists had a distorted perspective; apparently not. It’s not that their vision is dystopian; it’s that our world is becoming an ugly mess that fits their expectations.
#BraveNewWorld #crypto #cryptocurrency #Russia #nowar
¹ Pestilence, of course. Although if you prefer the other interpretation, where it is Conquest, that fits even better.
² From what I hear, they don’t formally confiscate euro cash at the Finnish border; they “take it to be returned later, on your next visit,” and they don’t really search for it, but it does happen.