Items tagged with: jwt
If you follow me on Twitter, you probably already knew that I attended DEFCON 30 in Las Vegas.
If you were there in person, you probably also saw this particular nerd walking around:
That’s me, wearing a partial fursuit.
The collar reads “Warning: PUNS” in a marquee.
In addition to being shamelessly furry and meeting a lot of really cool hackers (including several readers of this blog!), I attended several village talks (and at least one Sky Talk).
For some of these talks, I attended in fursuit. For others, I opted instead to bring my laptop and take notes. (Doing both things is ill-advised because it’s hard to type with paws, and the risk of laptop damage from excess sweat gives me pause, so it was one or the other.)
To be terse, my DEFCON experience was overwhelmingly positive. However, one talk at the Quantum Village had an extremely sus abstract (which was pointed out to me by Cendyne), so I decided to attend in person and see for myself.
Screenshot of the talk abstract.
Unfortunately, my intuition as a security engineer did not let me down, as we’ll explore below. But first, some background information.
What is DEFCON, Exactly?
(Feel free to skip this section if you already know.)
DEFCON is a hacking and security conference that takes place every year in Las Vegas, NV.
Unlike the stuffy Industry- and vendor-oriented events, DEFCON is more Community focused. Consequently, DEFCON has an inextricable counterculture element to it, and that makes it unique among security conferences.
(Some of the Security BSides events may also retain the same counterculture attitudes. However, that depends heavily on the attitudes of the local security/hacking scene. Give them a chance if you live near one, or consider starting your own.)
Put another way: DEFCON is to punk rock what USENIX is to classical. You’ll find good practitioners in both genres, and a surprising amount of overlap, but they can be appreciated independently too.
Which is why very few people balked at a group of furries running through Closing Ceremonies.
https://twitter.com/DCFurs/status/1558947697469448192
The weirdness and acceptance of DEFCON is a reflection of the diversity of the infosec community, and that’s a damn good thing.
In addition to DEFCON’s main track, there are a lot of so-called Villages that focus on different aspects of hacking, security, and the community.
For example, the Crypto & Privacy Village.
Photo: Cendyne
One of the many aspects of DEFCON is that you find yourself surrounded by people with deep expertise in many different areas (not just technological) who aren’t afraid to call you on any bullshit you spew. (Some of this fearlessness may be due to how much alcohol some people consume at the conference; I generally don’t drink alcohol, especially when fursuiting.)
Vikram Sharma’s Talk at the Quantum Village at DEFCON 30
The M.C. of the Quantum Village introduced Vikram by saying, “You should know who he is,” and saying that he’s deeply involved in policy discussions with government agencies. He was also described as a TED speaker.
https://www.youtube.com/watch?v=SQB9zNWw9MA
I came to the talk intentionally unfamiliar with Vikram’s previous presentations, and with an open mind to the material he was going to share, so as to avoid preconceived notions or unconscious bias as much as humanly possible.
The M.C.’s introduction primed me to expect a level of technical competence with cryptography and cryptanalysis on par with, or better than, a typical engineering manager that oversees cryptographers.
Vikram’s talk did not deliver on this built-up expectation.
Vikram Sharma’s Arguments
Vikram’s talk can be loosely distilled into a few important points:
- Cryptography-Relevant Quantum Computers are coming, and we don’t know when, but most experts believe within the next 30 years (at most).
- Although NIST has been working since 2015 on post-quantum cryptography, the advancements in cryptanalysis against Rainbow and SIKE highlight how nascent these algorithms are, and how brittle they may prove to be against mathematical advancements.
- In order to be ready to defend against Quantum Adversaries, we must adopt a mindset of “crypto agility” when engineering our applications.
- For enterprise customers, a Key Management Service that uses Quantum Key Distribution between endpoints in addition to post-quantum cryptography is likely necessary to hedge against advancements to post-quantum cryptanalysis.
The first two points are totally uncontroversial.
The third is interesting, and the fourth is a thinly-veiled sales pitch for Vikram’s company, QuintessenceLabs, rather than a serious technical argument.
Let’s explore the latter two points in more detail.
Art: LvJ
“Crypto Agility” From an Applied Cryptography Perspective
Protocols that utilize “crypto agility” are not new. SSL/TLS, SSH, and JWT all permit some degree of flexibility in the ciphersuite selection, rather than hard-coding the parameters.
For earlier versions of SSL/TLS, you could encrypt with RC4 with SHA1, and that was totally permitted. Stupid, but permitted. Nowadays, we use AES-GCM or ChaCha20-Poly1305 with TLS 1.3.
The story with SSH is similar, and there’s a lot of overlap in the user experience of configuring SSL/TLS for webservers and configuring OpenSSH. Even PGP supported different ciphers and modes. Crypto agility was the norm for many years.
JSON Web Tokens (JWT) is the perfect example of what happens when you take “crypto agility” too far:
- An attacker can change the “alg” header’s value to “none” to allow for trivial existential forgery. This surprises users.
- An attacker can take an existing token that uses an asymmetric mode (RS256, ES256, etc.), change the “alg” to a symmetric mode (i.e. HS256), then use the public key (or a hash of the public key) as a symmetric key, and the verifier would just blindly accept it.
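To make that second failure mode concrete, here is a minimal sketch (in Python, not modeled on any particular JWT library; the helper names are illustrative assumptions) of a verifier that lets the attacker-controlled header decide how the token is verified:

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(data: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def naive_verify(token: str, rsa_public_key_pem: bytes) -> dict:
    """A deliberately broken verifier: the token decides how it gets verified."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))

    if header["alg"] == "none":
        # Failure mode 1: unsigned tokens are accepted as-is.
        return json.loads(b64url_decode(payload_b64))

    if header["alg"] == "HS256":
        # Failure mode 2: the "secret" is the RSA public key, which the
        # attacker also has, so they can forge valid-looking signatures.
        expected = hmac.new(rsa_public_key_pem,
                            f"{header_b64}.{payload_b64}".encode(),
                            hashlib.sha256).digest()
        if hmac.compare_digest(expected, b64url_decode(sig_b64)):
            return json.loads(b64url_decode(payload_b64))

    raise ValueError("invalid token")
```

A sane verifier does the opposite: the server pins the expected algorithm and key type up front and rejects anything else, no matter what the header claims.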
When you maximize “crypto agility”, you introduce dangerous levels of in-band protocol negotiation.
Crypto Agility tends to become a security problem: when an attacker can decide how the target system behaves, they can often bypass your security controls entirely.
Therefore, anyone who speaks in favor of crypto agility should be very careful about what, precisely, they’re advocating for.
Vikram’s talk wasn’t clear about these nuances. At the opening of the Q&A session, I asked the following question:
How do you balance your proposal for crypto agility with the risks of in-band protocol negotiation?
He declined to answer this question directly. Instead, he directed me to email his CTO, John Leiseboer, adding, “He would be better able to speak to that.”
Art: LvJ
So I returned to my DEFCON hotel room and sent an email to the email address that was displayed on Vikram’s slideshow.
Soatok’s First Email to QuintessenceLabs
This was mostly an attempt to be diplomatic so as to not scare them off from responding:
Hi,

I attended your CEO’s talk at the Quantum Village at DEFCON 30. I was very interested in the subject, and Vikram’s delivery of the material was appropriate for the general audience. I did have one question, but he suggested that would be better answered by your CTO.
For a bit of background: I work in applied cryptography.
My question is, “How do you balance your proposal for crypto agility with the risks of in-band protocol negotiation?”
To add color to this question, consider the widely exploited standard, JSON Web Tokens. To change the cryptographic properties of a JWT, you only need to change the “alg” header. This is maximally agile.
Unfortunately, you can specify “none” as an algorithm. This surprises users: https://www.howmanydayssinceajwtalgnonevuln.com/
Additionally, an attacker can take a token signed with an asymmetric key (e.g. RSA), change the alg header to HMAC, and then use [a hash of] the asymmetric public key as if it were a symmetric HMAC key, and achieve trivial existential forgery.
The same concerns, I feel, will crop up when migrating to a hybrid post-quantum design. Cross-protocol interactions of the incorrect secret keys would, from by understanding of Vikram’s talk, be made possible if an attacker can mutate the metadata table.
There is a way out of this mess: Versioned Protocols. Using JWT as an example, a contrary proposal that can support cipher agility but only for ciphers (and constructions of ciphers) that cryptographers have approved is called PASETO. https://paseto.io
If a vulnerability in the current supported protocols (v3, v4) is discovered, the cryptographers behind PASETO will specify new modes (e.g. v5 and v6). Currently the only reason they’re considering any such migration is for hybrid post-quantum signatures (P384 + Dilithium, Ed25519 + Dilithium).
I think the general idea of crypto agility is the right direction for addressing the risk of cryptography relevant quantum computers, but I would caution against recreating the protocol foot-guns made possible by the wrong sort of agility.
Vikram suggested you would be able to speak more to how your designs take these threats into consideration. I’m very interested in this space and would love to learn more about how your designs work at a deeper level.
Thank you for your time,
Soatok
(Typos included in original email)
I didn’t hear back from them for several days, so I hunted down the CTO’s email address directly (thanks, old mailing list discussions!).
He quickly responded with this:
John Leiseboer’s First Response Email
Hi Soatok,

My apologies for not responding sooner. I’ve been travelling and in meetings across three time zones. It’s been difficult to find time to absorb your questions and put together a response.
Please give me a few more days to find some time to get back to you.
Regards,
John
I want to pause here and make something very clear: My actual question is the most basic and obvious question any credible security engineer that works with cryptography would ask in response to Vikram’s presentation.
Art: Swizz
If you were to claim, “Everyone should use crypto agility,” and don’t qualify that statement with nuance or context, then a follow-up question about the risks of crypto agility is inevitable.
If this question somehow doesn’t come up, you’re in the wrong room of the wrong security engineers when you rehearse your pitch.
Why, then, would the speaker not be prepared for such a question? Isn’t he routinely offering similar talks to policy experts for government agencies?
Why, also, would his company’s CTO not be able to readily answer the question without needing additional time to “absorb” this question and put together a response?
It’s not like I asked them to provide a machine-verifiable, formal proof of correctness for their proposals.
I only asked the most basic Cryptography Protocol Design 101 question in response to their assertion that “crypto agility” is something that “companies” should adopt.
The proof, the proof, the proof is invalid!
We don’t need no lemma–
Proof by contradiction fail!
Fail, contradiction, fail!
With apologies to Rock Master Scott & The Dynamic Three
Was this company’s leadership blind-sided by the most basic question of the field of study they’re hoping to, at least in part, replace?
Did they fail to consider an attacker that alters the metadata associated with their encrypted data in order to convince the “agile” cryptosystems to misbehave?
I will update this post when I have an answer to these very interesting questions.
(Update) QuintessenceLabs Responds
This is one email, but I’m going to add my commentary between paragraph breaks.
Hello again Soatok,

I’ve just read your blog where you criticised Vikram’s DEFCON talk. I was disappointed that you couldn’t wait for me to get back to you with the more detailed response that I promised to provide you last night.
Mea culpa. If I don’t press “publish” when the iron is hot, these posts stay in rough draft hell for 12-15 months, and I wanted to get my feedback (and the necessary context for said feedback) to the Quantum Village.
The short answer to your question, “How do you balance your proposal for crypto agility with the risks of in-band protocol negotiation?”, is simply that our proposal does not permit in-band negotiation. I suppose I could have provided you with that simple answer, but I expected that you’d want more than that. And I certainly wanted to provide you more. So, with apologies to Winston Churchill, if I had more time I would have written a shorter response.

The problem with the term “crypto agility” is that it means different things to different people. In your case, you talk about the “crypto agility” of SSL/TLS, SSH and JWT. The problem, as you are aware, and point out in your email and blog, is that crypto agility, as implemented in these protocols, introduces security issues. At QuintessenceLabs we are well aware of the issues arising from the sort of crypto agility that provides too much flexibility to users (and attackers are a class of user as far as I’m concerned), and fails to account for imaginative use of the agile features.
Although I do not yet have access to Vikram’s slides, I will point out that the specific example used during the talk included encrypted credit card numbers and a separate metadata table that stored the algorithms used to encrypt said credit card numbers.
(He specifically spoke to upgrading from AES-128 to Kyber, which I thought was an odd proposition considering Kyber is a KEM, not a symmetric cipher, but I dismissed that as misspeaking.)
Vikram’s metadata table proposal necessarily implies some kind of in-band negotiation, which is the bad type of crypto agility.
The following is taken from the introduction of an internal QuintessenceLabs Technical Note:

“The term “cryptographic agility” is defined within QuintessenceLabs as a key management capability that permits the coding of software applications without the application code needing to specify or refer to specific cryptographic algorithms, key lengths, modes of operation, or similar cryptographic details. This capability enables applications to migrate between key types and cryptographic algorithms without a need to update the application software.
Specification of key types, strengths, modes, and cryptographic algorithms, etc. is the responsibility of a key management administrator. The key management platform delivers key material and meta data to applications in such a way that the cryptographic service or library used by the application can perform cryptographic operations in accordance with the attributes attached to the key material.”
This would have been very useful to include in the DEFCON talk. (It would have also been helpful if the speaker was prepared to answer such questions about the proposal!)
I generally think developers shouldn’t even need to care about the ciphers and algorithms being used; only that they’re secure, fast, and appropriate for their security goals. i.e. While AES-256-GCM might be okay for many use cases, it’s not how you should be storing users’ passwords.
In this regard, it sounds like the actual proposal is closer to NaCl/libsodium’s crypto_box() API, except the underlying library can tolerate algorithm upgrades (e.g. from X25519 + XSalsa20-Poly1305 to something like X25519/Kyber768 + XChaCha20 + BLAKE2b-MAC). Which is totally reasonable.
This is very similar to what I did with my open source cryptography library, after all. It wraps libsodium, and if I decided to upgrade it to be post-quantum secure, I could do so without my users noticing (except maybe some bandwidth and performance differences).
The simplest way to tolerate such upgrades, by the by, is to use some sort of versioning scheme (which is the non-agile crypto design), which is what I’ve been proposing all along.
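For illustration, here is a minimal sketch of that versioning idea, using PyNaCl’s SecretBox as a stand-in for whatever construction a given version maps to (this is my own sketch of the shape, not QuintessenceLabs’ actual design): the leading version byte, not a negotiable header, decides the algorithm.

```python
import nacl.secret
import nacl.utils

def seal(version: int, key: bytes, plaintext: bytes) -> bytes:
    # Each version maps to exactly one fully-specified construction; adding a
    # hypothetical post-quantum-hybrid "v3" later wouldn't change this API.
    if version == 2:
        box = nacl.secret.SecretBox(key)  # XSalsa20-Poly1305 under the hood
        return bytes([2]) + bytes(box.encrypt(plaintext))
    raise ValueError("unknown version")

def unseal(keyring: dict, envelope: bytes) -> bytes:
    version, body = envelope[0], envelope[1:]
    if version == 2:
        return nacl.secret.SecretBox(keyring[2]).decrypt(body)
    # An unrecognized version is an error, never a negotiation.
    raise ValueError("unknown version")

key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
envelope = seal(2, key, b"attack at dawn")
assert unseal({2: key}, envelope) == b"attack at dawn"
```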
Some key points to note about our implementation:
- The application developer has absolutely no control over which algorithms, key lengths, modes of operation or other crypto-related attributes are used.
- The cryptographic key is actually a “managed object”. It is a protected object that encapsulates the key values and attributes that define how the key is to be used, including attributes such as cryptographic algorithm, key length, mode of operation, permitted uses (e.g. encrypt, decrypt, sign, signature verify, MAC, wrap, unwrap, etc.), links to related managed objects (e.g. a private key has an attribute that is it’s associated public key identifier), not-before and not-after time stamps, and maximum usage limits after which the key must be retired for use for protect operations.
- The cryptographic subsystem (whether that is an HSM, smart card, library, external service, …) presents a very simple API. Again from our internal Technical Note (was this in Vikram’s presentation?):
This is actually reasonable! Per Sophie Schmieg:
https://twitter.com/SchmiegSophie/status/1264567198091079681
“The application code is responsible for the business logic of the application. The application code makes no low level and direct cryptographic calls. Instead, it makes calls that abstract away the cryptographic details.

An example of how this may work is as follows:
protected_data = protect(sensitive_data, policy);
In this example the sensitive data is the data to be protected, the policy indicates what type of data, or type of protection is required, and the result returned is the protected data. More specifically, the policy could be a template name such as “CCN-Policy”, which might be used for protecting credit card numbers.”
I’m not sure I like this API (protect is way too vague), but the intent is much clearer than some vague notion of “crypto agility”.
The application is responsible for understanding the type of information it is working with. It needs to know whether it’s handling sensitive data or not. It needs to know whether sensitive data needs to be protected. It needs to know the classification of, or how to classify, the data. I think that’s reasonable, after all the application is implementing the business logic.
Why not simply encrypt everything by default and let developers specifically opt out for data classified as not sensitive?
Then if a developer adds a field down the road and somehow forgets to classify it, the database administrator only ever sees ciphertext.
So as the pseudo code above illustrates, the application passes the sensitive data and the classification (encoded in the ‘policy’ attribute) to the cryptographic subsystem to figure out what to do cryptographically.
If you can avoid developers having to even care about these details, while still getting the details right, that’s a win.
- The policy that defines the cryptographic attributes is separately managed. In the case of QuintessenceLabs’ products, this is the responsibility of a key management administrator, or a security officer, who has a cryptographic policy management role.
- The returned, protected data is an object, or structure that encapsulates not only the result of the cryptographic operation, but attributes that allow the inverse operation (“unprotect”) to be performed by the crypto subsystem.
This is actually not clownshoes. Thankfully.
- The policy can be changed at any time. This the agile part of the implementation. Today the policy says use AES-128-CBC, tomorrow is says use AES-256-GCM.
So, a couple of things about this specific example.
- AES-128-CBC is vulnerable to padding oracle attacks.
- If you support both AES-CBC and AES-GCM in the same code path, you can use the legacy CBC mode support to trigger a padding oracle attack to decrypt GCM ciphertexts.
Isn’t cryptography fun?
Art: Swizz
Adaptive attacks aside, cryptography migrations are way more challenging than your technical description implies. You’re going to have to do the following, in this specific order (a sketch of what this looks like in code follows below).
- Make sure your entire fleet can decrypt AES-GCM ciphertexts before any machine writes any data using AES-GCM.
- Only after step 1 is complete can you upgrade your fleet to start writing records with AES-GCM.
- Re-encrypt all old records to use AES-GCM instead of AES-CBC.
- Turn off support for the legacy CBC mode.
If you attempt to move onto step 2 before step 1 is finished, you will find your network in a split-brain scenario where freshly-encrypted data cannot be read by some machines.
If you attempt to move onto step 3 before step 2 is finished, you will find some machines are still writing the legacy format.
If you attempt to move onto step 4 before step 3 is finished, you will lose access to some subset of old encrypted data.
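Here is a rough sketch of what the read-both/write-new window looks like in application code. The AES-GCM path uses the pyca/cryptography library; the legacy CBC path is a hypothetical helper standing in for whatever the old code did, and keeping it alive at all is exactly the downgrade window described above.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

LEGACY_CBC, MODERN_GCM = 1, 2

def decrypt_record(record: dict, keys: dict) -> bytes:
    # Step 1: every node must be able to read BOTH formats before any node
    # starts writing the new one.
    if record["format"] == MODERN_GCM:
        return AESGCM(keys["gcm"]).decrypt(record["nonce"], record["ct"], None)
    if record["format"] == LEGACY_CBC:
        return legacy_cbc_decrypt(keys["cbc"], record)  # hypothetical legacy helper
    raise ValueError("unknown record format")

def encrypt_record(plaintext: bytes, keys: dict) -> dict:
    # Step 2: writers flip to AES-GCM only once step 1 has reached the whole fleet.
    nonce = os.urandom(12)
    return {"format": MODERN_GCM, "nonce": nonce,
            "ct": AESGCM(keys["gcm"]).encrypt(nonce, plaintext, None)}

def reencrypt_legacy(records, keys):
    # Step 3: a background job rewrites old rows so that step 4 can finally
    # delete the CBC code path (and, with it, the downgrade window).
    for record in records:
        if record["format"] == LEGACY_CBC:
            yield encrypt_record(decrypt_record(record, keys), keys)
        else:
            yield record
```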
Art: LvJ
It’s simply not possible to design a “crypto agile” application at any significant scale that totally avoids any window of downgrade susceptibility without also losing access to old encrypted data.
Until your entire network has reached step 4, you’re necessarily going to be vulnerable to in-band negotiation.
Also, if you’re using the same key for multiple records (or the same wrapping key for multiple data keys), you’re going to want to read up on symmetric wear-out and the birthday paradox.
HKDF helps, but it needs to be used in a specific way for its security proof to apply.
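As a sketch of one common mitigation (using the pyca/cryptography library; the labels and layout are my own illustration, not anyone’s product): derive a fresh subkey per record from a long-lived master key, so no single AES-GCM key ever encrypts enough messages to approach its wear-out limits.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def record_subkey(master_key: bytes, record_id: bytes) -> bytes:
    # A distinct "info" string per record yields an independent subkey, so the
    # birthday bound on (key, nonce) collisions applies per record, not globally.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"record-encryption-v1:" + record_id).derive(master_key)

master = os.urandom(32)
subkey = record_subkey(master, b"customer:12345")
nonce = os.urandom(12)
ciphertext = AESGCM(subkey).encrypt(nonce, b"4111 1111 1111 1111", None)
```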
Obviously as new algorithms are introduced, the crypto subsystem will need to be upgraded. But the application should not have to be changed.

Please feel free to ask more questions. But please be patient. Right now I’m sitting at the check-in gate at Dulles Airport using a laptop with a battery rapidly running down. For the next 30 or so hours I’ll be flying back home to Australia. I’m always happy to receive questions. I would ask though that you have some respect for my time and give me a chance to catch my wind before you publish an article without allowing me the time to respond properly.
The technical details that were absent from the DEFCON talk I watched, which only cryptographers and security engineers will really care about, were essential and should have been included.
I still disagree about the properties of their proposed “crypto agility” in practice (especially at cloud scale, which seems to be where QuintessenceLabs’ CEO wants to operate), but at least this is a much more reasonable proposal.
(The remainder of this blog post remains unchanged from what was written before John’s reply was edited in.)
Against Quantum Key Distribution
While the response to my question about “crypto agility” was disappointing, their argument for Quantum Key Distribution is just weird.
Quantum Key Distribution (QKD) doesn’t scale well enough to protect consumers.
Suppose you’re accessing a brand new computer and want to login to your bank’s website to check your account balance. In this scenario, QKD is useless for you: You haven’t established a secure channel via quantum entanglement yet. You probably don’t even have the hardware or optic channel necessary to do so.
QuintessenceLabs doesn’t acknowledge this scaling issue explicitly in their Quantum Village slides. Instead, they pitch QKD as a solution for businesses (rather than users) that want to add a layer of quantum security to their Key Management Services (described vaguely as “interacting with HSMs”).
I’m not sure who exactly their intended audience is with this sales pitch: Are they trying to sell this to, or compete with, large cloud providers?
Art: LvJ
Either way, this isn’t the sort of thing that resonates well with most hackers (who are more aligned with protecting users than protecting companies), so DEFCON isn’t really the best venue for such a sales pitch.
To their credit, they don’t entirely handwave post-quantum cryptography as a solution to cryptography relevant quantum computers. Instead, they claim that QKD is an “optional” way to protect communications when post-quantum cryptography is broken. Vikram stops short of describing these algorithmic breaks as “inevitable” in his presentation.
Here’s the rub: QKD isn’t magic.
Just as the correct response to, “What’s your threat model?” isn’t, “Well why don’t you try to disprove Heisenberg? Checkmate atheists!” the solution to quantum computing isn’t to throw all of discrete mathematics out with the bathwater in favor of relying only on quantum entanglement.
Even if you assume that the physical properties being leveraged to secure QKD cannot be directly defeated, you still have to deal with side-channel attacks. Et cetera.
That’s not to say that QKD research isn’t valuable or interesting. But I would strongly caution anyone who’s considering deploying it in production to talk to cryptographers and other security experts first.
We’re here to help!
Art: Harubaki
Burning Trust at DEFCON 30
DEFCON isn’t at all a vendor expo like Black Hat USA where you send sales drones to hawk blinkenboxen that claim to secure networks.
DEFCON is where serious researchers and eccentric geeks exchange 0days for cocktails and solve interesting technical challenges.
https://twitter.com/telechallenge/status/1558891761610723330
My point is: If you present bullshit at DEFCON, you will be called out on it.
When I sat down at the Quantum Village to watch Vikram’s presentation, I honestly wanted it to not be bullshit.
After all, a lot of talented hackers lack refinement in public speaking, especially with talk abstracts. Conversely, a lot of business people (e.g. the CEO of a company) are accustomed to writing abstracts for a very different audience, even if they ultimately deliver a hacker-focused talk in the end.
Therefore, in my mind, it’s possible that the content of the talk would be surprisingly excellent. Unfortunately, I left disappointed.
I can’t help but wonder if Vikram’s company being one of the four sponsors for the nascent Quantum Village prevented the necessary bullshit filter from being utilized to screen his talk.
He, and his employees, might rightfully claim to be experts on quantum technology, but they certainly don’t understand cryptography as well as they like to believe.
The thought of a sponsor bypassing the bullshit filter certainly makes me a little distrustful of the Quantum Village going into DEFCON 31.
Recommendation for DEFCON 31 and Beyond
The Quantum Village should reach out to the Crypto & Privacy Village to help review any talks that involve cryptography, moving forward.
This will help in two ways:
- Cryptography and privacy experts will be involved in the process.
- Someone who doesn’t have a perverse incentive will likely be involved in the process, which makes it easier to tell a sponsor, “Hell no.”
A particularly motivated bullshit artist with sufficient capital might simply sponsor both villages, so it’s not 100% foolproof, but you can always count on weirdos in fursuits like me to be the last line of defense, should all else fail.
Update:
https://twitter.com/FiloSottile/status/1560165504312135680
I stand corrected. This attack vector won’t work. Thanks, Filippo!
Closing Thoughts
I was originally going to delay publishing this until QuintessenceLabs had a chance to respond to my question, or at least until all of the slide decks were published (so I could cite them carefully in my critique), but I honestly wanted to get my recommendation off my chest and in the hands of the Quantum Village organizers before I get distracted by the work that piled up while I was at DEFCON.
I might blog for free, but that doesn’t mean I don’t have a job to do, or bills to pay.
If you (or someone you work for) were sold on “crypto agility”, consider versioned protocols instead. And if you’re going to invest in custom hardware, take a page out of Nintendo’s book: simply burn fuses with each new protocol version to prevent downgrade attacks.
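As a toy illustration of the fuse idea (a software stand-in for what hardware does by physically burning fuses; the class and names here are mine, purely for illustration): persist the highest protocol version you’ve ever shipped and refuse anything older.

```python
class VersionFuse:
    """A monotonic minimum-version ratchet; real hardware burns actual fuses."""

    def __init__(self, minimum_version: int = 1):
        self._minimum = minimum_version

    def burn(self, new_version: int) -> None:
        # One-way operation: the accepted minimum only ever moves upward.
        self._minimum = max(self._minimum, new_version)

    def accepts(self, message_version: int) -> bool:
        return message_version >= self._minimum

fuse = VersionFuse()
fuse.burn(3)                # shipping protocol v3 "burns" the fuse
assert fuse.accepts(3)
assert not fuse.accepts(2)  # older versions are rejected, blocking downgrades
```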
You probably don’t need quantum encryption to solve any of the problems you face in the immediate future.
https://soatok.blog/2022/08/18/burning-trust-at-the-quantum-village-at-defcon-30/
#cryptographicProtocols #cryptography #DEFCON #inBandNegotiation #JWT #postQuantumCryptography #QKD #QuantumEncryption #QuantumKeyDistribution #QuantumVillage #QuintessenceLabs #securityEngineering #VikramSharma
Sometimes my blog posts end up on social link-sharing websites with a technology focus, such as Lobste.rs or Hacker News.

On a good day, this presents an opportunity to share one’s writing with a larger audience and, more importantly, solicit a wider variety of feedback from one’s peers.
However, sometimes you end up with feedback like this, or this:
Apparently my fursona is ugly, and therefore I’m supposed to respect some random person’s preferences and suppress my identity online.
I’m no stranger to gatekeeping in online communities, internet trolls, or bullying in general. This isn’t my first rodeo, and it won’t be my last.
These kinds of comments exist to send a message not just to me, but to anyone else who’s furry or overtly LGBTQIA+: You’re weird and therefore not welcome here.
Of course, the moderators rarely share their views.
https://twitter.com/pushcx/status/1281207233020379137
Because of their toxic nature, there is only one appropriate response to these kinds of comments: Loud and persistent spite.
So here’s some more art I’ve commissioned or been gifted of my fursona over the years that I haven’t yet worked into a blog post:
Art by kazetheblaze
Art by leeohfox
Art by Diffuse Moose

If you hate furries so much, you will be appalled to learn that factoids about my fursona species have landed in LibreSSL’s source code (decoded).
Never underestimate furries, because we make the Internets go.
I will never let these kinds of comments discourage me from being open about my hobbies, interests, or personality. And neither should anyone else.
If you don’t like my blog posts because I’m a furry but still find the technical content interesting, know now and forever more that, when you try to push me or anyone else out for being different, I will only increase the fucking thing.
Header art created by @loviesophiee and inspired by floccinaucinihilipilification.
https://soatok.blog/2020/07/09/a-word-on-anti-furry-sentiments-in-the-tech-community/
#antiFurryBullying #cyberculture #furry #HackerNews #LobsteRs #Reddit
One of the criticisms of the JOSE/JWT standards is that they give an attacker too much flexibility, by allowing them to specify how a message should be processed. In particular, the standard “alg” header tells the recipient what cryptographic algorithm was used to sign or encrypt the contents. Letting the attacker choose the algorithm can lead to disastrous security vulnerabilities.
In response to this, some people have suggested alternatives such as removing the alg header and instead only allowing a version and a type from a very limited set. While this seems better, it is actually just a variation on the same mistake. If an attacker can still pick the version then they can still potentially cause mischief.
My opinion is that it is better to remove any indicators from the message about how it should be processed – no “alg”, no purpose indicator, no version numbers. Instead the message should (optionally) allow specifying just one hint: a key id (“kid” header in JOSE). The metadata associated with the key should then tell you how to process messages with that key. For instance, the related JWK spec already allows specifying the algorithm that the key is to be used for. The recipient looks up the key and processes the message according to the metadata.
If you want to change the algorithm then you simply publish a new key with the new algorithm. It is best practice to use a new key when switching algorithms anyway. Even in the case where the old key would be compatible with the new algorithm, security proofs often assume that a key is only used for one algorithm. If you really must use the same key with more than one algorithm, then you could just publish more than one JWK, referencing the same key but with different metadata.
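A minimal sketch of this key-driven approach (HMAC-only for brevity; the key-store layout and names are illustrative, not any particular JOSE library’s API): the verifier looks the key up by kid and takes the algorithm from the key’s metadata, never from the message.

```python
import base64
import hashlib
import hmac
import json

# Server-side key store: the key's own metadata pins the algorithm.
KEYS = {
    "2018-09-key": {"alg": "HS256", "secret": b"an-unguessable-random-secret"},
}

def b64url(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def verify(token: str) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    kid = json.loads(b64url(header_b64))["kid"]
    key = KEYS[kid]                   # unknown kid -> KeyError -> rejected
    if key["alg"] != "HS256":         # algorithm comes from key metadata only
        raise ValueError("key not usable here")
    expected = hmac.new(key["secret"],
                        f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url(sig_b64)):
        raise ValueError("bad signature")
    return json.loads(b64url(payload_b64))
```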
(Update added 26th April 2021): In the context of a protocol like TLS, where parties do not necessarily know each others’ key identifiers ahead of time you can imagine baking the algorithm details into the certificate. For example, suppose that a TLS ciphersuite identifier was baked directly into each certificate. The client says what ciphersuites it supports, the server then responds with a cert that matches one of those ciphersuites and they use the exact ciphersuite in the cert. CAs can then perform useful functions like ensuring deprecated ciphersuites aren’t used in new certificates, or that the same key is not used for multiple ciphersuites in potentially dangerous combinations.
A digression on capability security
I have always found an object-capability (ocap) security analysis to be illuminating, even when not designing an ocap system. For instance, consider these two ways of copying a file on a UNIX system (I can’t remember which paper I read this example in, but it is not my own):
cp from to
cat <from >to
The first (cp) takes an input filename and an output filename and opens the two files for you. The second (cat) instead takes an already opened pair of file descriptors and reads from one and writes to the other. If we think about the permissions that these programs need, the first needs to be able to read and write to any files that you might ask it to, while the second only needs to read and write to the specific file descriptors that you passed it as arguments.
The first approach is much more susceptible to a class of security vulnerabilities known as confused deputy attacks. The original example was a compiler service that accepted programs to compile and an output file to write the compiled object to. By passing in a carefully chosen output file (say /etc/passwd) you can trick the compiler into performing malicious actions that you would not be able to perform by yourself (overwriting sensitive files). Confused deputy attacks are widespread. Both path traversal and CSRF attacks are examples.
By bundling up the permissions to perform an operation with an unforgeable reference to an object to perform those actions on, a capability-based system makes it impossible to even express confused deputy attacks: you cannot forge a writeable reference to a file that you do not yourself have write access to. The same principle is used in CSRF countermeasures (which are in fact largely capability-based) – the anti-CSRF token prevents an attacker forging a request. The ocap literature shows how in this model, secure design closely follows normal best practice software design principles: encapsulation, dependency injection, eliminating global variables and static methods with side-effects, and so on.
So how does this relate to JOSE? By allowing the sender of a message to indicate a key to be used separately from specifying the algorithm to use with that key, JOSE is making exactly the same mistake that UNIX does and opening up implementations to confused deputy attacks. Here the JOSE/JWT library is the deputy and it can become confused about what algorithm to use with what key, just as a compiler can become confused about what files it should be writing to. If we remove the ability for an attacker to specify a key independently of specifying the algorithm, then we eliminate this whole class of attacks.
If we further only allow specifying the key by id, then the key id (“kid” in JOSE) becomes a capability. That capability can either be public (a simple name) or private (an unguessable random string). In either case, an attacker must use one of our set of valid keys and cannot confuse our deputy by creating new keys or specifying incompatible algorithms.
https://neilmadden.blog/2018/09/30/key-driven-cryptographic-agility/
#cryptography #jose #jwt #objectCapabilitySecurity
In the wake of some more recent attacks against popular JSON Web Token (JWT)/JSON Object Signing and Encryption (JOSE) libraries, there has been some renewed criticism of the JWT/JOSE standards themselves (see also discussion on lobste.rs with an excellent comment from Thomas Ptacek summarising some of the problems with the standard). Given these criticisms, should you use JOSE at all? Are articles like my recent “best practices” one just encouraging adoption of bad standards that should be left to die a death?

Certainly, there are lots of potential gotchas in the specs, and it is easy for somebody without experience to shoot themselves in the foot using these standards. I agree with pretty much all of the criticisms levelled against the standards. They are too complicated with too many potentially insecure options. It is far too easy to select insecure combinations or misconfigure them. Indeed, much of the advice in my earlier article can be boiled down to limiting which options you use, understanding what security properties those options do and do not provide, and completely ignoring some of the more troublesome aspects of the spec. If you followed my advice of using “headless” JWTs and direct authenticated encryption with a symmetric key, you’d end up not far off from the advice of just encrypting a JSON object with libsodium or using Fernet.
So in that sense, I am already advocating for not really using the specs as-is, at least not without significant work to understand them and how they fit with your requirements. But there are some cases where using JWTs still makes sense:
- If you need to implement a standard that mandates their use, such as OpenID Connect. In this case you do not have much of a choice.
- If you need to interoperate with third-party software that is already using JWTs. Again, in this case you also do not have a choice.
- You have complex requirements mandating particular algorithms/parameters (e.g. NIST/FIPS-approved algorithms) and don’t want to hand-roll a message format or are required to use something with a “standard”. In this case, JWT/JOSE is not a terrible choice, so long as you know what you are doing (and I hope you do if you are in this position).
If you do have a choice, then you should think hard about whether you need the complexity of JWTs or can use a simpler approach that takes care of most of the choices for you or store state on the server and use opaque cookies. In addition to the options mentioned in the referenced posts, I would also like to mention Macaroons, which can be a good alternative for some authorization token use-cases and the existing libraries tend to build on solid foundations (libsodium/NaCl).
So, should you use JWT/JOSE at all? In many cases the answer is no, and you should use a less error-prone alternative. If you do need to use them, then make sure you know what you are doing.
https://neilmadden.blog/2017/03/15/should-you-use-jwt-jose/
#cryptography #jose #jwt