Items tagged with: Technology
Twitter’s Birdwatch is Fundamentally Flawed
Earlier this week, Twitter announced an initiative to combat misinformation on their platform that they call Birdwatch.
How Birdwatch works: Volunteers sign up (assuming they meet all the requirements) and can add notes to fill in context on misleading tweets. Other users can rate these contextual tweets as helpful or not helpful. All of these “notes” and ratings of notes are completely transparent.
Credit: the Birdwatch website
At its face, Birdwatch is an attempt to scale up the existing fact-checking capability used during the 2020 U.S. Elections while also crowdsourcing this decision-making.
I will give Twitter credit for two things, and only two things, before I get into the problems with their design.
- They’re distributing the power to fact-check bad tweets to their users rather than hoarding it for themselves.
- They correctly emphasized transparency as a goal for this tool.
But it’s not all sunshine and rainbows.
The Fatal Flaw of Birdwatch’s Design
There’s an old essay titled The Six Dumbest Ideas in Computer Security that immediately identifies two problems with Birdwatch’s design. They also happen to be the first two items on the essay’s list!
- Default Permit
- Enumerating Badness
This is best illustrated by way of example.
Let’s assume there are two pathological liars hellbent on spreading misinformation on Twitter. They each tweet unsubstantiated claims about some facet of government or civil service. Birdwatch users catch only one of them, and correctly fact-check their tweet.
What happens to the other liar?
What happens if Birdwatch users can only identify one out of ten liars? One out of a hundred? One out of a thousand?! Et cetera.
(Art by Khia.)
To be clear: The biggest flaw in their product design is simply that their “notes” and “fact-checks” are negative indicators on known-bad tweets.
This will create a dark pattern: If a tweet slips past the Birdwatch users’ radars, it won’t be fact-checked. In turn, users won’t realize it’s misinformation. A popular term for the resulting conduct is coordinated inauthentic behavior.
This already happens to YouTube.
Hell, this is already happening to Twitter:
https://www.youtube.com/watch?v=V-1RhQ1uuQ4
How To Fix Birdwatch
I wrote an entire essay on Defeating Coordinated Inauthentic Behavior at Scale in 2019. I highly recommend that anyone at Twitter interested in actually solving the misinformation problem give it careful consideration.
(Art by Swizz.)
But in a nutshell, the most important fix is to change the state machine underlying Birdwatch from:
- No notes -> trustworthy
- Notes -> misinformation
…to something subtly different:
- No notes -> unvetted / be cautious
- Notes ->
- Negative notes -> misinformation
- Positive notes -> verified by experts
This effectively creates a traffic light signal for users: Tweets start as yellow (exercise caution, which is the default) and may become green (affirmed by notes) or red (experts disagree).
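To make the difference concrete, here’s a minimal sketch of both state machines in Python (the names and thresholds are mine, purely for illustration):

```python
from enum import Enum

class TweetStatus(Enum):
    YELLOW = "exercise caution"  # the default for anything unvetted
    GREEN = "affirmed by notes"
    RED = "experts disagree"

def birdwatch_as_shipped(has_notes: bool) -> str:
    # Current design: the absence of notes is indistinguishable from "trustworthy".
    return "flagged as misinformation" if has_notes else "looks trustworthy"

def birdwatch_as_proposed(positive_notes: int, negative_notes: int) -> TweetStatus:
    # Proposed design: no notes means "unvetted", not "fine".
    if positive_notes == 0 and negative_notes == 0:
        return TweetStatus.YELLOW
    return TweetStatus.GREEN if positive_notes > negative_notes else TweetStatus.RED
```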
What Would This Change Accomplish?
Malicious actors that evade Birdwatch will only manage to wrap their message in caution tape. (Metaphorically speaking, anyway.)
If their goal is to spread misinformation while convincing the recipients of their message that they’re speaking the truth, they’ll have to get a green light–which is ideally more expensive to accomplish.
Bonus Round
I would also recommend some kind of “this smells fishy” button to signal to Birdwatch contributors that a tweet needs fact-checking. Users might self-select into filter bubbles that Birdwatch contributors are totally absent from, and in turn come across claims that are completely unvetted and possibly ambiguous.
While I have your attention, here’s a quality of life suggestion, on the house:
Being able to link claims together (e.g., reposted images with a false claim, like the minion memes on Facebook) to deduplicate claims about reality would save a lot of unnecessary headache.
(Anyone who has used Stack Overflow will appreciate the utility of being able to say “this is a duplicate of $otherThing”.)
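Here’s a rough sketch of what claim deduplication could look like, assuming some way to fingerprint a claim (the exact matching scheme below is a placeholder; a real system would need fuzzy matching and perceptual hashes for reposted images):

```python
import hashlib
from typing import Dict, Optional

class ClaimIndex:
    """Maps a normalized claim to one canonical fact-check note."""

    def __init__(self) -> None:
        self._notes: Dict[str, str] = {}

    @staticmethod
    def fingerprint(claim_text: str) -> str:
        # Placeholder normalization; real deduplication is much harder than this.
        normalized = " ".join(claim_text.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def attach_note(self, claim_text: str, note: str) -> None:
        self._notes[self.fingerprint(claim_text)] = note

    def lookup(self, claim_text: str) -> Optional[str]:
        # Any tweet repeating a known claim inherits the existing note.
        return self._notes.get(self.fingerprint(claim_text))
```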
What If These Fundamental Flaws Remain Unfixed?
Although Birdwatch will probably meet the immediate goal of scaling up the fact-checking efforts beyond what Twitter can provide (and satisfy the public relations requirements of tangibly doing something to combat this problem), propagandists and conspiracy theorists will simply become incentivized to evade Birdwatch contributors’ detection while spreading their lies.
As I said above, coordinated inauthentic behavior is already happening. This isn’t some abstract threat that only academics care about.
To the benefit of malicious actors, most users will confuse tweets that evaded detection with tweets that didn’t warrant correction. This might even lead to users trusting misinformation more than they would have before Birdwatch. That would be a total self-own for the entire Birdwatch project.
#bullshit #computerSecurity #enumeratingBadness #misinformation #SocialMedia #Technology #Twitter
If you’re new to reading this blog, you might not already be aware of my efforts to develop end-to-end encryption for ActivityPub-based software. It’s worth being aware of before you continue to read this blog post.
To be very, very clear, this is work I’m doing independent of the W3C or any other standards organization and/or funding source (and they have their own ideas about how to approach it).
Really, I’m doing my own thing and releasing my designs under a public domain-equivalent license so anyone (including the W3C grant awardees) can pick it up and use it, if they see fit.
But the work I’m doing has no official standing and is not representative of anyone (except maybe a lot of other furries interested in technology). They have, emphatically, never endorsed anything I’m doing. I have not talked with any of them about my ideas, nor has my name come up in any of their meeting notes.
My background is in applied cryptography and software security assessments, so I have strong opinions about how such software should be developed.
I’m being very up-front about this because I don’t want anyone to mistake my ideas for anything “official”.
Why spend your time on that?
My end goal is pretty straightforward.
Before Musk took it over, Twitter was wonderful for queer people. I’ve even heard it described as the most successful dating platform for the LGBTQIA+ community.
These days, it’s full of Nazis and people who think the ideal version of “free speech” means not being allowed to say the word “cisgender.” But I repeat myself.
The typical threat model for Twitter was: You have to trust the person you’re talking with, and the Twitter corporation, to keep your conversations (or nudes, if we’re being frank about it) private.
With the Fediverse, things are a little more complicated. Instance operators also have access to the plaintext versions of any Direct Messages between you and other participants.
And maybe you trust your instance operator… but do you trust your friends’? And do they trust yours?
If implemented securely, end-to-end encryption saves you from having to care about this injection of additional threat actors to consider.
If not implemented securely, it’s little more than security theater and should be ridiculed loudly.
So it’s natural and obvious for a person with my particular interests and skills to want to solve this problem.
Technological Decisions
When I started this project, I separated the end goal into 4 separate components:
- Client-side secret key management.
- Federated public-key infrastructure.
- Shared key agreement for group messaging.
- The actual bulk encryption techniques.
A lot of hobbyist projects over-index on the fourth component, rather than the actual hard problems. This is why so many doomed projects start with PGP, or implement weird “cipher cascades” to hedge against AES getting broken.
In reality, every component matters for the security of the whole system, but the bulk encryption is boring. It’s the well-trodden path of any cryptosystem. The significantly harder parts are the key management pieces.
Political Decisions
Let’s not mince words: How you implement key management is inherently a political decision.
If that sounds counter-intuitive, meditate on this bit of wisdom for a while:
Repeat after me: all technical problems of sufficient scope or impact are actually political problems first.
Many projects, when confronted with the complexity of key management, are perfectly happy with “just write private keys to disk” or “put blind trust in AWS KMS.”
Or, more directly: “YOLO.”
With my Fediverse E2EE project, I wanted to minimize the amount of trust you have to place in others. (Especially, minimize the trust needed in Soatok!)
How Decisions Flow
Client-side secrets are the most visible area of risk to end users: backing up and managing their own credentials, recovering from failure modes, passing the Mud Puddle test, and so on.
Once each participant has secret keys managed (1), they can provide public keys to each other.
Public-key infrastructure (2) is how you decide trust relationships between parties. We’re operating in a federated environment, and want to minimize the amount of unchecked “authority” anyone has, so that complicates matters. But, if it wasn’t challenging, it would already be solved.
Once you’ve figured out a trust mechanism to tie a public key to an identity, you can try to agree on a shared symmetric key securely, even over an untrusted channel.
Key agreement for group messaging (3) is how you decide which shared key to use, and when, and who has access to this key and for how long.
And from there, you can actually encrypt shit (4).
It doesn’t really matter how much you boil the ocean on mitigating hypothetical weaknesses in AES if an adversary can muck with your key management.
Thus, it should hopefully be reasonable to divide the work up in this fashion.
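As a rough illustration of that division of labor, here’s how the four components might map to interfaces. These names are hypothetical and not an API from my actual design; the point is only that each layer depends on the one before it:

```python
from abc import ABC, abstractmethod
from typing import List

class SecretKeyManager(ABC):
    """(1) Client-side secret key management: storage, backup, recovery."""
    @abstractmethod
    def identity_secret_key(self) -> bytes: ...

class PublicKeyDirectory(ABC):
    """(2) Federated PKI: binds public keys to Fediverse identities."""
    @abstractmethod
    def lookup(self, actor_id: str) -> List[bytes]: ...

class GroupKeyAgreement(ABC):
    """(3) Decides which shared symmetric key is in use, for whom, and for how long."""
    @abstractmethod
    def current_group_key(self, group_id: str) -> bytes: ...

class BulkEncryption(ABC):
    """(4) The boring part: authenticated encryption under the agreed-upon key."""
    @abstractmethod
    def seal(self, key: bytes, plaintext: bytes, aad: bytes) -> bytes: ...

    @abstractmethod
    def open(self, key: bytes, ciphertext: bytes, aad: bytes) -> bytes: ...
```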
But there is a fifth component; one that I am not qualified to comment on:
User experience.
The final deliverable for my participation in this project will be software libraries (and any necessary patches to server software) to facilitate secure end-to-end encryption between Fediverse users.
As for what that experience looks like? How it’s presented visually? What accessibility features are used, and how? How elements are organized and in what order they are displayed? Any quality-of-life design decisions that delight users and avoid dark patterns?
Yeah, sorry, I’m totally out of my depth here. That’s not my domain.
I will do my damnedest to not make security decisions that are inherently onerous towards making usable software.
(After all, security at the cost of usability comes at the cost of security.)
But I can’t promise that the experience will be totally seamless for everyone, all the time.
Lacking Ambition?
One of the things that’s been bothering me, as I work out the finer details of this end-to-end encryption project, is that it seems to lack ambition.
Sure, I can talk your ear off for hours about the ins and outs of implementing end-to-end encryption securely, but we already have end-to-end encryption apps. So many private messengers.
How does “you can now have encrypted DMs in Mastodon” help people who can already use Signal or WhatsApp? Why should the people who aren’t computer nerds care about it at all?
What’s actually new or exciting about this work?
And, honestly, the best answer I can come up with is that it’s the first step.
Tech Freedom and You
Before the Big Data and cloud computing crazes took the technology industry by storm (or any of the messes that followed), most software was designed to work offline. That is, without Internet access.
With the growing ubiquity of Internet access (and mobile networks), the Overton window shifted towards always-on devices, edge computing, and no longer owning anything. Instead, consumers rent licenses to software that a third party can revoke on a whim.
The Free Software movement, for all of the very pronounced personality quirks associated with it today, foresaw this problem long before the modern Internet existed. Technologists, lawyers, and activists spent thousands of person-years of effort on trying to protect end users’ rights from greedy monopolies.
(Art by Kyume.)
(I couldn’t not include this meme in this section.)
This isn’t a modern problem, by any stretch of the imagination.
Every year, our rights and digital freedoms are eroded by court decisions from corrupt judges, terrible legislation, and questionable leadership.
But the Electronic Frontier Foundation and its friends in other nations have been talking about this and fighting court battles since the 1990s.
Even if I somehow made some small innovation that benefited end users by allowing Fediverse users to message each other privately, that’s not really ambitious either.
From Sparks to Embers
As I was noodling over this, a friend of mine linked me to an article titled Rust Needs a Web Framework for Lazy Developers the other day.
It made me realize how much I miss the era when software was offline-first, even if it had online components. The past several years of Live Service Games have exhausted my tolerance more than anything else, but they’re not alone.
When I initially delineated my proposal into 4 components, my goal was to simplify the security analysis and make the threat models digestible.
But it occurred to me, recently, that by abstracting these components (especially the Federated Public Key Infrastructure design), a new era of cypherpunks and pirates could breathe new ambition into software projects that build atop the boring infrastructure I’m building.
Let’s Turn the Ambition Up To 11
Imagine peer-to-peer software that uses the Fediverse and/or onion routing technologies (similar to Tor) to establish peer-to-peer encrypted data tunnels between devices, with the Federated PKI as the source of truth for identity public keys so you always know you’re talking to the correct entity.
Now combine that with developer tools that make it easy for people to self-publish software (even if only through Tor Hidden Services), with an optional way to create a public portal (e.g., for a public-facing website).
You could even create a protocol for people with rack space and spare bandwidth to host said public portals, without biasing for a particular one.
This would allow technologists to build the tools for normal people to create an anti-corporate, decentralized network.
And you could do it without ever mentioning the word “blockchain” (though you may need to tolerate it if you want to prevent anti-porn groups like Exodus Cry from having any say in what we compute).
Finally, imagine that we build all of this in memory-safe languages.
Are you building this today?
In short: No, I’m not.
Ambitious ideas and cryptography should only intersect rarely. I’m focused on the cryptography.
Instead, I wanted to lay this rough sketch out there as a possibility that someone else–presumably more ambitious, charismatic, and/or resourceful–could easily pick up if they so choose.
More importantly, all of the hard parts of this would be solved problems by the time I finish with the end-to-end encryption project. (Most of them already exist, in fact!)
That’s what I meant above by “it’s the first step”.
Along the way to achieving my own goals, I’m building at least one useful building block. What the rest of the technology industry decides to do with it is up to the rest of us.
I can’t, and will not try, to do it alone.
There is a lot of potential for tech freedom that could benefit users beyond what they can get from the Fediverse today. I wanted to examine how some of these ideas could be useful for–
Rejected! What else you got?
Oh.
…
Okay, so y’know how a lot of video games (Undertale/Deltarune, Doki Doki Literature Club) try to make a highly immersive experience with many diegetic elements?
Let’s build an operating system, based on some flavor of Linux, that is in and of itself a game. People can write their own DLC by developing packages for that OS. The end deliverable will be a virtual machine, and in order to get it to work on Steam, we would install Docker or Kubernetes, but users will also be able to install it via VirtualBox.
Inevitably, someone will decide this OS is their new daily driver. Imagine the impact this would have on corporate IT the whole world over.
This is the worst idea in the history of bad ideas!
Oh, I can do worse. I can do so much worse.
I don’t know if I can top the various attempts to build a Message Authentication Code out of the insecure RC4 stream cipher, of course.
If you want ambition, you sacrifice wisdom.
If you want freedom, you sacrifice convenience.
If you want security, you sacrifice usability.
…
Or do you?
They Can’t All Be Winners
I have a lot of bad ideas, all the time. That’s the only reason I ever occasionally have moderately good ones.
My process of eliminating bad ideas is ruthless, and may cull some interesting or fun ones along the way. This is an unfortunate side-effect of being an effective security engineer.
I don’t actually think the ideas I’ve written above are that bad. I wrote them this way for comedic effect.
Rather, I’m just not sure they’re actually good, or worthwhile to invest time into.
Whether someone could build atop the work I’m doing to reclaim our Internet from the grip of massive technology corporations is, at best, difficult to classify.
I do not have the time, energy, or motivation to do the work already on my own plate and then explore these ideas fully.
Maybe someone reading this does?
If not, that’s cool. Ideas are allowed to just exist as idle curiosities. Not everything has to matter all the time.
The “ship a whole god damn OS as an indie game” idea could be fun though.
https://soatok.blog/2024/10/12/ambition-the-fediverse-and-technology-freedom/
#endToEndEncryption #fediverse #FreeSoftware #OnlinePrivacy #Society #SoftwareFreedom #TechFreedom #Technology
In 2022, I wrote about my plan to build end-to-end encryption for the Fediverse. The goals were simple:
- Provide secure encryption of message content and media attachments between Fediverse users, as a new type of Direct Message which is encrypted between participants.
- Do not pretend to be a Signal competitor.
The primary concern at the time was “honest but curious” Fediverse instance admins who might snoop on another user’s private conversations.
After I was finally happy with the client-side secret key management piece, I moved on to figuring out how to exchange public keys. And that’s where things got complicated, and work stalled for two years.
Art: AJ
I wrote a series of blog posts on this complication, what I’m doing about it, and some other cool stuff in the draft specification.
- Towards Federated Key Transparency introduced the Public Key Directory project
- Federated Key Transparency Project Update talked about some of the trade-offs I made in this design
- Not supporting ECDSA at all, since FIPS 186-5 supports Ed25519
- Adding an account recovery feature, which power users can opt out of, that allows instance admins to help a user recover from losing all their keys
- Building a Key Transparency system that can tolerate GDPR Right To Be Forgotten takedown requests without invalidating history
- Introducing Alacrity to Federated Cryptography discussed how I plan to ensure that independent third-party clients stay up-to-date or lose the ability to decrypt messages
Recently, NIST published the new Federal Information Processing Standards documents for three post-quantum cryptography algorithms:
- FIPS-203 (ML-KEM, formerly known as CRYSTALS-Kyber)
- FIPS-204 (ML-DSA, formerly known as CRYSTALS-Dilithium)
- FIPS-205 (SLH-DSA, formerly known as SPHINCS+)
The race is now on to implement and begin migrating the Internet to use post-quantum KEMs. (Post-quantum signatures are less urgent.) If you’re curious why, this CloudFlare blog post explains the situation quite well.
Since I’m proposing a new protocol and implementation at the dawn of the era of post-quantum cryptography, I’ve decided to migrate the asymmetric primitives used in my proposals towards post-quantum algorithms where it makes sense to do so.
Art: AJ
The rest of this blog post is going to talk about technical specifics and the decisions I intend to make in both projects, as well as some other topics I’ve been thinking about related to this work.
Which Algorithms, Where?
I’ll discuss these choices in detail, but for the impatient:
- Public Key Directory
- Still just Ed25519 for now
- End-to-End Encryption
- KEMs: X-Wing (Hybrid X25519 and ML-KEM-768)
- Signatures: Still just Ed25519 for now
Virtually all other uses of cryptography are symmetric-key or keyless (i.e., hash functions), so this isn’t a significant change to the design I have in mind.
Post-Quantum Algorithm Selection Criteria
While I am personally skeptical that we will see a practical, cryptography-relevant quantum computer in the next 30 years, due to various engineering challenges and a glacial pace of progress on solving them, post-quantum cryptography is still a damn good idea even if a quantum computer doesn’t emerge.
Post-Quantum Cryptography comes in two flavors:
- Key Encapsulation Mechanisms (KEMs), which I wrote about previously.
- Digital Signature Algorithms (DSAs).
Originally, my proposals were going to use Elliptic Curve Diffie-Hellman (ECDH) in order to establish a symmetric key over an untrusted channel. Unfortunately, ECDH falls apart in the wake of a crypto-relevant quantum computer. ECDH is the component that will be replaced by post-quantum KEMs.
Additionally, my proposals make heavy use of Edwards Curve Digital Signatures (EdDSA) over the edwards25519 elliptic curve group (thus, Ed25519). This could be replaced with a post-quantum DSA (e.g., ML-DSA) and function just the same, albeit with bandwidth and/or performance trade-offs.
But isn’t post-quantum cryptography somewhat new?
Lattice-based cryptography has been around almost as long as elliptic curve cryptography. One of the first designs, NTRU, was developed in 1996.
Meanwhile, ECDSA was published in 1992 by Dr. Scott Vanstone (although it was not made a standard until 1999). Lattice cryptography is pretty well-understood by experts.
However, before the post-quantum cryptography project, there hasn’t been a lot of incentive for attackers to study lattices (unless they wanted to muck with homomorphic encryption).
So, naturally, there is some risk of a cryptanalysis renaissance after the first post-quantum cryptography algorithms are widely deployed to the Internet.
However, this risk is mostly a concern for KEMs, due to the output of a KEM being the key used to encrypt sensitive data. Thus, when selecting KEMs for post-quantum security, I will choose a Hybrid construction.
Hybrid what?
We’re not talking folfs, sonny!
Hybrid isn’t just a thing that furries do with their fursonas. It’s also a term that comes up a lot in cryptography.
Unfortunately, it comes up a little too much.
I made this dumb meme with imgflip
When I say we use Hybrid constructions, what I really mean is we use a post-quantum KEM and a classical KEM (such as HPKE’s DHKEM), then combine them securely using a KDF.
Post-quantum KEMs
For the post-quantum KEM, we only really have one choice: ML-KEM. But this choice is actually three choices: ML-KEM-512, ML-KEM-768, or ML-KEM-1024.
The security margin on ML-KEM-512 is a little tight, so most cryptographers I’ve talked with recommend ML-KEM-768 instead.
Meanwhile, the NSA wants the US government to use ML-KEM-1024 for everything.
How will you hybridize your post-quantum KEM?
Originally, I was looking to use DHKEM with X25519, as part of the HPKE specification. After switching to post-quantum cryptography, I would need to combine it with ML-KEM-768 in such a way that the whole shebang is secure if either component is secure.
But then, why reinvent the wheel here? X-Wing already does that, and has some nice binding properties that a naive combination might not.
So let’s use X-Wing for our KEM.
Notably, OpenMLS is already doing this in their next release.
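To illustrate what “combine them securely using a KDF” means, here’s a naive hybrid encapsulation sketch in Python. To be clear: this is not the actual X-Wing construction (X-Wing defines its own SHA3-256-based combiner with a specific label and binding rules), and mlkem_encaps is a stand-in for whichever ML-KEM-768 library you trust.

```python
import hashlib
from typing import Callable, Tuple

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)

def hybrid_encaps(
    recipient_x25519_pk: X25519PublicKey,
    recipient_mlkem_pk: bytes,
    mlkem_encaps: Callable[[bytes], Tuple[bytes, bytes]],  # stand-in for your ML-KEM-768 library
) -> Tuple[bytes, Tuple[bytes, bytes]]:
    # Classical component: ephemeral X25519 key agreement.
    eph = X25519PrivateKey.generate()
    ss_classical = eph.exchange(recipient_x25519_pk)
    ct_classical = eph.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

    # Post-quantum component: ML-KEM-768 encapsulation (via the injected library).
    ss_pq, ct_pq = mlkem_encaps(recipient_mlkem_pk)

    # Combine both shared secrets, plus the ciphertexts for binding, with a KDF.
    # The derived key stays safe as long as EITHER component remains unbroken.
    shared_key = hashlib.sha3_256(
        b"example-hybrid-kem-v1" + ss_pq + ss_classical + ct_classical + ct_pq
    ).digest()
    return shared_key, (ct_classical, ct_pq)
```

In practice, you’d reach for an off-the-shelf X-Wing implementation rather than rolling a combiner like this yourself.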
Art: CMYKat
Post-quantum signatures
So our KEM choice seems pretty straightforward. What about post-quantum signatures?
Do we even need post-quantum signatures?
Well, the situation here is not nearly as straightforward as KEMs.
For starters, NIST chose to standardize two post-quantum digital signature algorithms (with a third coming later this year). They are as follows:
- ML-DSA (formerly CRYSTALS-Dilithium), that comes in three flavors:
- ML-DSA-44
- ML-DSA-65
- ML-DSA-87
- SLH-DSA (formerly SPHINCS+), that comes in 24 flavors
- FN-DSA (formerly FALCON), that comes in two flavors but may be excruciating to implement in constant-time (this one isn’t standardized yet)
Since we’re working at the application layer, we’re less worried about a few kilobytes of bandwidth than the networking or X.509 folks are. Relatively speaking, we care about security first, performance second, and message size last.
After all, people ship Electron, React Native, and NextJS apps that load megabytes of JavaScript code to print, “hello world,” and no one bats an eye. A few kilobytes in this context is easily digestible for us.
(As I said, this isn’t true for all layers of the stack. WebPKI in particular feels a lot of pain with large public keys and/or signatures.)
Eliminating post-quantum signature candidates
Performance considerations would eliminate SLH-DSA, which is the most conservative choice. Even with the fastest parameter set (SLH-DSA-128f), this family of algorithms is about 550x slower than Ed25519. (If we prioritize bandwidth, it becomes 8000x slower.)
Adapted from CloudFlare’s blog post on post-quantum cryptography.
Between the other two, FN-DSA is a tempting option. Although it’s difficult to implement in constant-time, it offers smaller public key and signature sizes.
However, FN-DSA is not standardized yet, and it’s only known to be safe on specific hardware architectures. (It might be safe on others, but that’s not proven yet.)
In order to allow Fediverse users to be secure on a wider range of hardware, this uncertainty would limit our choice of post-quantum signature algorithms to some flavor of ML-DSA–whether stand-alone or in a hybrid construction.
Unlike KEMs, hybrid signature constructions may be problematic in subtle ways that I don’t want to deal with. So if we were to do anything, we would probably choose a pure post-quantum signature algorithm.
Against the Early Adoption of Post-Quantum Signatures
There isn’t an immediate benefit to adopting a post-quantum signature algorithm, as David Adrian explains:
The migration to post-quantum cryptography will be a long and difficult road, which is all the more reason to make sure we learn from past efforts, and take advantage of the fact the risk is not imminent. Specifically, we should avoid:
- Standardizing without real-world experimentation
- Standardizing solutions that match how things work currently, but have significant negative externalities (increased bandwidth usage and latency), instead of designing new things to mitigate the externalities
- Deploying algorithms pre-standardization in ways that can’t be easily rolled back
- Adding algorithms that are pre-standardization or have severe shortcomings to compliance frameworks
We are not in the middle of a post-quantum emergency, and nothing points to a surprise “Q-Day” within the next decade. We have time to do this right, and we have time for an iterative feedback loop between implementors, cryptographers, standards bodies, and policymakers.
The situation may change. It may become clear that quantum computers are coming in the next few years. If that happens, the risk calculus changes and we can try to shove post-quantum cryptography into our existing protocols as quickly as possible. Thankfully, that’s not where we are.
David Adrian, Lack of post-quantum security is not plaintext.
Furthermore, there isn’t currently any commitment from the Sigsum developers to adopt a post-quantum signature scheme in the immediate future. They hard-code Ed25519 for the current iteration of the specification.
The verdict on digital signature algorithms?
Given all of the above, I’m going to opt to simply not adopt post-quantum signatures until a later date.
Version 1 of our design will continue to use Ed25519 despite it not being secure after quantum computers emerge (“Q-Day”).
When the security industry begins to see warning signs of Q-Day being realistically within a decade, we will prioritize migrating to use post-quantum signature algorithms in a new version of our design.
Should something drastic happen that would force us to decide on a post-quantum algorithm today, we would choose ML-DSA-44. However, that’s unlikely for at least several years.
Remember, Store Now, Decrypt Later doesn’t really break signatures the way it would break public-key encryption.
Art: Harubaki
Miscellaneous Technical Matters
Okay, that’s enough about post-quantum for now. I worry that if I keep talking about key encapsulation, some of my regular readers will start a shitty garage band called My KEMical Romance before the end of the year.
Let’s talk about some other technical topics related to end-to-end encryption for the Fediverse!
Federated MLS
MLS was implicitly designed with the idea of having one central service for passing messages around. This makes sense if you’re building a product like Signal, WhatsApp, or Facebook Messenger.
It’s not so great for federated environments where your Delivery Service may be, in fact, more than one service (i.e., the Fediverse). An expired Internet Draft for Federated MLS talks about these challenges.
If we wanted to build atop MLS for group key agreement (like has been suggested before), we’d need to tackle this in a way that doesn’t cede control of MLS epochs to any server that gets compromised.
How to Make MLS Tolerate Federation
First, the Authentication Service component can be replaced by client-side protocols, where public keys are sourced from the Public Key Directory (PKD) services.
That is to say, from the PKD, you can fetch a valid list of Ed25519 public keys for each participant in the group.
When a group is created, the creator’s Ed25519 public key is known. For everyone they invite, the inviter’s software necessarily has to know the invitee’s Ed25519 public key in order to invite them.
In order for a group action to be performed, it must be signed by one of the public keys enrolled into the group list. Additionally, some actions may be limited by permissions attached at the time of the invite (or elevated by a more privileged user; which necessitates another group action).
By requiring a valid signature from an existing group member, we remove the capability of the Fediverse instance that’s hosting the discussion group to meddle with it in any way (unless, for some reason, the server is somehow also a participant that was invited).
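A minimal sketch of that authorization check, assuming the enrolled member list is just the set of Ed25519 public keys fetched from the PKD (the framing is mine; a real design would also identify which member signed rather than trying every key):

```python
from typing import List

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_authorized_group_action(
    action_bytes: bytes,
    signature: bytes,
    enrolled_member_keys: List[bytes],  # raw Ed25519 public keys from the PKD
) -> bool:
    """Accept a group action only if it verifies under an enrolled member's key."""
    for raw_pk in enrolled_member_keys:
        public_key = Ed25519PublicKey.from_public_bytes(raw_pk)
        try:
            public_key.verify(signature, action_bytes)
            return True
        except InvalidSignature:
            continue
    # The hosting instance holds none of these secret keys, so it cannot forge actions.
    return False
```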
But therein lies the other change we need to make: In many cases, groups will span multiple Fediverse servers, so groups shouldn’t be dependent on a single instance.
Spreading The Load Across Instances
Put simply, we need a consensus algorithm to determine which instance hosts messages. We could look to Raft as a starting point, but whatever we land on should be fair, fault-tolerant, and deterministic to all participants who can agree on the same symmetric keying material at some point in time.
To that end, I propose using an additional HKDF output from the Group Key Agreement protocol to select a “leader” for all instances involved in the group, weighted by the number of participants on each instance.
Then, every N messages (where N >= 1), a new leader is elected by the same deterministic protocol. This will be performed entirely client-side, and clients will choose N. I will refer to this as a sub-epoch, since it doesn’t coincide with a new MLS epoch.
Since the agreed-upon group key always ratchets forward when a group action occurs (i.e., whenever there’s a new epoch), getting another KDF output to elect the next leader is straightforward.
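Here’s a rough sketch of what deterministic, weighted leader selection from shared key material could look like. This is illustrative only (the labels and the HMAC-based derivation are mine), not the consensus protocol itself:

```python
import hashlib
import hmac
from typing import Dict

def elect_leader(
    epoch_secret: bytes,                        # ratcheted key material every member agrees on
    sub_epoch: int,                             # increments every N messages, chosen client-side
    participants_per_instance: Dict[str, int],  # e.g. {"example.social": 3, "other.town": 1}
) -> str:
    """Every participant runs this locally and arrives at the same leader."""
    # Derive a pseudorandom value bound to this sub-epoch (a real design would
    # use a proper HKDF expansion with a dedicated label).
    prk = hmac.new(epoch_secret, b"leader-election", hashlib.sha256).digest()
    seed = hmac.new(prk, sub_epoch.to_bytes(8, "big"), hashlib.sha256).digest()
    ticket = int.from_bytes(seed, "big")

    # Weight each instance by how many group members it hosts.
    instances = sorted(participants_per_instance)  # fixed iteration order for determinism
    point = ticket % sum(participants_per_instance.values())
    for instance in instances:
        point -= participants_per_instance[instance]
        if point < 0:
            return instance
    return instances[-1]  # unreachable when all weights are positive
```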
This isn’t a fully fleshed-out idea. Building consensus protocols that can handle real-world operational issues is heavily specialized work, and there’s a high risk of falling for the illusion of safety until it’s too late. I will probably need help with this component.
That said, we aren’t building an anonymity network, so the cost of getting a detail wrong isn’t measurable in blood.
We aren’t really concerned with Sybil attacks. Winning the election just means you’re responsible for being a dumb pipe for ciphertext. Client software should trust the instance software as little as possible.
We also probably don’t need to worry about availability too much. Since we’re building atop ActivityPub, when a server goes down, the other instances can hold encrypted messages in the outbox for the host instance to pick up when it’s back online.
If that’s not satisfactory, we could also select both a primary and secondary leader for each epoch (and sub-epoch), to have built-in fail-over when more than one instance is involved in a group conversation.
If messages aren’t being delivered for an unacceptable period of time, client software can forcefully initiate a new leader election by expiring the current MLS epoch (i.e. by rotating their own public key and sending the relevant bundle to all other participants).
Art: Kyume
Those are just some thoughts. I plan to talk it over with people who have more expertise in the relevant systems.
And, as with the rest of this project, I will write a formal specification for this feature before I write a single line of production code.
Abuse Reporting
I could’ve sworn I talked about this already, but I can’t find it in any of my previous ramblings, so here’s as good a place as any.
The intent for end-to-end encryption is privacy, not secrecy.
What does this mean exactly? From the opening of Eric Hughes’ A Cypherpunk’s Manifesto:
Privacy is necessary for an open society in the electronic age. Privacy is not secrecy.
A private matter is something one doesn’t want the whole world to know, but a secret matter is something one doesn’t want anybody to know.
Privacy is the power to selectively reveal oneself to the world.
Eric Hughes (with whitespace and emphasis added)
Unrelated: This is one reason why I use “secret key” when discussing asymmetric cryptography, rather than “private key”. It also lends towards sk and pk as abbreviations, whereas “private” and “public” both start with the letter P, which is annoying.
With this distinction in mind, abuse reporting is not inherently incompatible with end-to-end encryption or any other privacy technology.
In fact, it’s impossible to create useful social technology without the ability for people to mitigate abuse.
So, content warning: This is going to necessarily discuss some gross topics, albeit not in any significant detail. If you’d rather not read about them at all, feel free to skip this section.
Art: CMYKat
When thinking about the sorts of problems that call for an abuse reporting mechanism, you really need to consider the most extreme cases, such as someone joining group chats to spam unsuspecting users with unsolicited child sexual abuse material (CSAM), flashing imagery designed to trigger seizures, or graphic depictions of violence.
That’s gross and unfortunate, but the reality of the Internet.
However, end-to-end encryption also needs to prioritize privacy over appeasing lazy cops who would rather everyone’s devices include a mandatory little cop that watches all your conversations and snitches on you if you do anything that might be illegal, or against the interest of your government and/or corporate masters. You know the type of cop. They find privacy and encryption to be rather inconvenient. After all, why bother doing their jobs (i.e., actual detective work) when you can just criminalize end-to-end encryption and use dragnet surveillance instead?
Whatever we do, we will need to strike a balance that protects users’ privacy, without any backdoors or privileged access for lazy cops, with community safety.
Thus, the following mechanisms must be in place:
- Groups must have the concept of an “admin” role, who can delete messages on behalf of all users and remove users from the group. (Signal currently doesn’t have this.)
- Users must be able to delete messages on their own device and block users that send abusive content. (The Fediverse already has this sort of mechanism, so we don’t need to be inventive here.)
- Users should have the ability to report individual messages to the instance moderators.
I’m going to focus on item 3, because that’s where the technically and legally thorny issues arise.
Keep in mind, this is just a core-dump of thoughts about this topic, and I’m not committing to anything right now.
Technical Issues With Abuse Reporting
First, the end-to-end encryption must be immune to Invisible Salamanders attacks. If it’s not, go back to the drawing board.
Every instance will need to have a moderator account, who can receive abuse reports from users. This can be a shared account for moderators or a list of moderators maintained by the server.
When an abuse report is sent to the moderation team, what needs to happen is that the encryption keys for those specific messages are re-wrapped and sent to the moderators.
So long as you’re using a forward-secure ratcheting protocol, this doesn’t imply access to the encryption keys for other messages, so the information disclosed is limited to the messages that a participant in the group consents to disclosing. This preserves privacy for the rest of the group chat.
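To make “re-wrapped” concrete, here’s one way a reporting client could wrap a single message’s content key to a moderator’s public key: an ephemeral-static X25519 construction in the style of a sealed box. The labels and framing are mine for illustration, not this project’s actual specification.

```python
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def wrap_message_key_for_moderator(
    message_key: bytes,             # the per-message content key being disclosed
    moderator_pk: X25519PublicKey,  # the moderator's public key, fetched from the PKD
) -> dict:
    # Ephemeral-static Diffie-Hellman against the moderator's key.
    eph = X25519PrivateKey.generate()
    shared_secret = eph.exchange(moderator_pk)

    wrap_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"example-abuse-report-key-wrap",
    ).derive(shared_secret)

    nonce = os.urandom(12)
    wrapped = ChaCha20Poly1305(wrap_key).encrypt(nonce, message_key, None)

    # Only this message's key is disclosed; with a forward-secure ratchet,
    # this grants no access to earlier or later messages.
    return {
        "ephemeral_pk": eph.public_key().public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw,
        ),
        "nonce": nonce,
        "wrapped_key": wrapped,
    }
```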
When receiving a message, moderators should not only be able to see the reported message’s contents (in the order that they were sent), but also how many messages were omitted in the transcript, to prevent a type of attack I colloquially refer to as “trolling through omission”. This old meme illustrates the concept nicely:
Trolling through omission.
And this all seems pretty straightforward, right? Let users protect themselves and report abuse in such a way that doesn’t invalidate the privacy of unrelated messages or give unfettered access to the group chats. “Did Captain Obvious write this section?”
But things aren’t so clean when you consider the legal ramifications.
Potential Legal Issues With Abuse Reporting
Suppose Alice, Bob, and Troy start an encrypted group conversation. Alice is the group admin and can delete messages or boot people from the chat.
One day, Troy decides to send illegal imagery (e.g., CSAM) to the group chat.
Bob, disgusted, immediately reports it to his instance moderator (Dave) as well as Troy’s instance moderator (Evelyn). Alice then deletes the messages for her and Bob and kicks Troy from the chat.
Here’s where the legal questions come in.
If Dave and Evelyn are able to confirm that Troy did send CSAM to Alice and Bob, did Bob’s act of reporting the material to them count as an act of distribution (i.e., to Dave and/or Evelyn, who would not be able to decrypt the media otherwise)?
If they aren’t able to confirm the reports, does Alice’s erasure count as destruction of evidence (i.e., because they cannot be forwarded to law enforcement)?
Are Bob and Alice legally culpable for possession? What about Dave and Evelyn, whose servers are hosting the (albeit encrypted) material?
It’s not abundantly clear how the law will intersect with technology here, nor what specific technical mechanisms would need to be in place to protect Alice, Bob, Dave, and Evelyn from a particularly malicious user like Troy.
Obviously, I am not a lawyer. I have an understanding with my lawyer friends that I will not try to interpret law or write my own contracts if they don’t roll their own crypto.
That said, I do have some vague ideas for mitigating the risk.
Ideas For Risk Mitigation
To contend with this issue, one thing we could do is separate the abuse reporting feature from the “fetch and decrypt the attached media” feature, so that while instance moderators will be capable of fetching the reported abuse material, it doesn’t happen automatically.
When the “reason” attached to an abuse report signals CSAM in any capacity, the client software used by moderators could also wholesale block the download of said media.
Whether that would be sufficient to mitigate the legal matters raised previously, I can’t say.
And there’s still a lot of other legal uncertainty to figure out here.
- Do instance moderators actually have a duty to forward CSAM reports to law enforcement?
- If so, how should abuse forwarding be implemented?
- How do we train law enforcement personnel to receive and investigate these reports WITHOUT frivolously arresting the wrong people or seizing innocent Fediverse servers?
- How do we ensure instance admins are broadly trained to handle this?
- How do we deal with international law?
- How do we prevent scope creep?
- While there is public interest in minimizing the spread of CSAM, which is basically legally radioactive, I’m not interested in ever building a “snitch on women seeking reproductive health care in a state where abortion is illegal” capability.
- Does Section 230 matter for any of these questions?
We may not know the answers to these questions until the courts make specific decisions that establish relevant case law, or our governments pass legislation that clarifies everyone’s rights and responsibilities for such cases.
Until then, the best answer may simply be to do nothing.
That is to say, let admins delete messages for the whole group, let users delete messages they don’t want on their own hardware, and let admins receive abuse reports from their users… but don’t do anything further.
Okay, we should definitely require an explicit separate action to download and decrypt the media attached to a reported message, rather than have it be automatic, but that’s it.
What’s Next?
For the immediate future, I plan on continuing to develop the Federated Public Key Directory component until I’m happy with its design. Then, I will begin developing the reference implementations for both client and server software.
Once that’s in a good state, I will move on to finishing the E2EE specification. Then, I will begin building the client software and relevant server patches for Mastodon, and spinning up a testing instance for folks to play with.
Timeline-wise, I would expect most of this to happen in 2025.
I wish I could promise something sooner, but I’m not fond of moving fast and breaking things, and I do have a full time job unrelated to this project.
Hopefully, by the next time I pen an update for this project, we’ll be closer to launching. (And maybe I’ll have answers to some of the legal concerns surrounding abuse reporting, if we’re lucky.)
https://soatok.blog/2024/09/13/e2ee-for-the-fediverse-update-were-going-post-quantum/
#E2EE #endToEndEncryption #fediverse #FIPS #Mastodon #postQuantumCryptography
Every hype cycle in the technology industry continues a steady march towards a shitty future that nobody wants.
Note: I know this isn’t unique to the tech industry, but I can’t write about industries I don’t work in, so this is what’s being covered.
The Road to Hell
Once upon a time, everyone was all hot and bothered about Big Data: Having lots of information–far too much to process with commodity software–was supposed to magically transform business.
How do you build technology that can process that much information at scale? Well, obviously, you just need to invest in The Cloud! (If you’re using the Cloud to Butt Plus Chrome extension, this entire blog post may be confusing to you.)
But don’t scrutinize the Cloud too long, you might miss your chance to invest in blockchain.
meme via Tony Arcieri
Blockchainiacs practically invented an entire constructed language of buzzwords. Things like “DeFi”, “Web3”, and so on. To anyone not accustomed to their in-signaling, it’s potent enough cringe to repel even the weirdest of furries.
But the only thing you need to know about blockchain is that its proponents like it when the line goes up, and every “innovation” in that sector was in service of the line going up.
Blockchain, of course, refers to cryptocurrency. The security of these digital currencies is based on expensive consensus mechanisms (e.g., Proof of Work). The incentives baked into the design of these consensus mechanisms led users to buy lots of GPUs in order to compete to solve numeric puzzles (a.k.a. “mining”).
For a while, many technologists observed that whenever the line actually goes down or a popular cryptocurrency decides to adopt a less wasteful consensus mechanism, the secondhand market gets flooded with used GPUs.
That all changed with the release of ChatGPT and other Large Language Models.
https://www.youtube.com/watch?v=AaU6tI2pb3M
Now GPUs are a hot commodity even when the price of Bitcoin goes down because tech company leaders are either malicious or stupid, and are always trying to appease investors that have more money than sense. It’s not just tech companies either.
“Our vision of [quick-service restaurants] is that an AI-first mentality works every step of the way.”
Joe Park, Chief Digital and Technology Officer of Yum Brands (Taco Bell, Pizza Hut, KFC)
Of all these hype cycles, I suspect that the “AI” hype has more staying power than the rest, if for no other reason than it provides a hedge against the downside of previous hype cycles.
- Not sure what to do with the exabytes of Big Data you’re sitting on? Have LLMs parse it all, then convincingly lie to you about what it means.
- Expensive cloud bill? Attract more investor dollars by selling them on trying to build an Artificial General Intelligence out of hallucinating chatbots.
- Got a bunch of GPUs lying around from a failed crypto-mining idea? Use them to flagrantly violate intellectual property law to steal from artists with legal impunity!
This “AI” trend is the Human Centipede of technology.
(Yes, there are some valid use cases for the technology that underpins this hype. I’m focusing on Generative AI exclusively for this blog post, since that’s what a lot of the hype is centered around.)
Art: CMYKat
So you can imagine how I felt when I went to add an image to a blog post draft one day and saw this:
Generate with AI? Fuck you.
There is no way to opt out of, or disable, this feature.
WordPress is not alone in its overt participation in this consumption of binary excrement.
Tech Industry Idiocy is Ubiquitous
Behold, Oracle’s AI innovation. Source
EA’s CEO called generative AI the “very core of our business”, which an astute listener will find reminiscent of the time they claimed NFTs and blockchain were the future of the games industry at an earnings call.
Nevermind the fact that they’re actually in the business of publishing video games!
Mozilla Firefox 128.0 released a feature (enabled by default of course) to help advertisers collect data on you.
Per 404 Media, Snapchat reserves the right to use AI-generated images of your face in ads (also on by default).
At this point, even Rip Van fucking Winkle can spot the pattern.
Investors (read: fools with more money than sense) are dead set on a generative AI future, blockchain bullshit in everything, etc. Furthermore, there are a lot of gullible idiots that drank the Kool-Aid and feel like they’re part of the build-up to the next World Wide Web, so there’s no shortage of willing new CS grads to throw at these problems to keep the money flowing.
So we’re clearly well past the point that ridiculing the people involved will have any significant deterrence. The enshittification has spread too far to quarantine, and there are too many True Believers in the mix. Throw in a little bit of Roko’s Basilisk (read: Pascal’s wager for arrogant so-called “rationalists” who think they’re too smart to be Christian) and you’ve got a full-blown cargo cult on your hands.
What can we do about it? Beats me.
Sanity Check
I’m going to set aside the (extremely cathartic) attempts at shame and ridicule as a solution. Fun as they are, they fail to penetrate filter bubbles and reach the people they need to.
What’s your Bullshit Tech Score?
One way we could push back against this steady march towards a future where everything is enshittified, and the devices you paid for (with your hard-earned money) don’t respect your consent at all, is to turn the first of the buzz words we examined (Big Data) against these companies.
I’m proposing we could gather data about companies’ actual practices and build score-cards and leaderboards based on the following metrics:
- Does the company strategy involve generative AI?
- Does the company strategy involve selling NFTs?
- Does the company strategy involve stitching other unnecessary blockchain bullshit where it doesn’t belong?
- Does the company make questionable claims about quantum computers?
- Does the company choose default settings that hurt the user in the interest of increasing revenue (i.e., assuming consent without explicitly receiving it)?
- Does the company own any software patents?
- This includes purely “defensive” patents, in industries where their competitors abuse intellectual property law to stifle competition.
While these circumstances are understandable, we should be objective in our measurements.
- Is the company completely bankrupt on innovation tokens?
- Does the company suffer from premature optimization (e.g., choosing MongoDB because they fear a relational database isn’t web-scale, rather than because it’s the right tool for the job)?
- Have any of the company’s leaders been credibly accused of sexual misconduct or violence?
- Sorry not sorry, Blizzard!
- Does the company routinely have crunch time (i.e., more than one week per quarter where employees are expected to work more than 40 hours)?
- Does the company enforce draconian return-to-office policies?
- Has the company threatened a security researcher with lawsuits in the past 10 years?
- Does the company roll its own cryptography without having at least one cryptographer on the payroll?
- (Okay, this one is purely for my own sanity, and probably not broadly applicable.)
A passing score is “No” to each of the above questions.
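If anyone wanted to actually build such a leaderboard, the data model is trivial. Something like this (field names are mine, question wording abbreviated):

```python
from dataclasses import dataclass, fields

@dataclass
class BullshitTechScorecard:
    # Each field answers one question above; True means "yes, they do this".
    generative_ai_strategy: bool
    sells_nfts: bool
    gratuitous_blockchain: bool
    questionable_quantum_claims: bool
    consent_hostile_defaults: bool
    owns_software_patents: bool
    out_of_innovation_tokens: bool
    premature_optimization: bool
    credibly_accused_leadership: bool
    routine_crunch_time: bool
    draconian_return_to_office: bool
    threatened_security_researchers: bool
    rolls_own_crypto_without_cryptographers: bool

    def passes(self) -> bool:
        # A passing score is "No" (False) to every question.
        return not any(getattr(self, field.name) for field in fields(self))
```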
This proposal is basically the opposite of the SSO Tax list. Rather than shaming the losers (of which there will assuredly be many), the goal would be to highlight companies that are reasonably sane to work for.
I’m aware that there are already companies like Forrester that try to do this, but with a much wider scope than the avoidance of bullshit.
Furthermore, they’re incentivized to not piss off wealthy businessmen, so that they can keep their research business alive, whereas I don’t particularly care if tech CEOs get mad at being called a hypocritical hype-huffer.
I mean, what are they gonna do? Downvote me on Hacker News? I don’t work for them anyway.
In Over Our Heads
There may be other solutions available that will improve things somewhat. I’m not immune to failures of imagination.
Some solutions are incredibly contentious, though, and I don’t really want the headache.
For example: I’m sure that, if this blog post ever gets posted on a message board, someone in the peanut gallery will bring up unions as a mechanism, and others will fiercely shoot that idea down.
It’s possible that we, as an industry, are completely in over our heads. There’s so much bullshit, and so many perverse incentives creating ever-increasing amounts of bullshit, that escape is simply impossible.
Perhaps we’ve already crossed the excrement horizon.
Maybe Kurzweil was right about a Singularity after all?
Closing Thoughts
The main thing I wanted to convey today was, “No, you’re not alone, things are getting stupider,” to anyone who wondered if there was a spark of sanity left in the tech sector.
Art: AJ
It’s not just the smarmy tech CEOs that are the problem. The rot has spread all the way to the foundations of many organizations. Hacker News, Lobsters, etc. are full of clueless AI maximalists that cannot see the harms they are inflicting.
It is difficult to get a [person] to understand something, when [their] salary depends on [their] not understanding it.
Original quote by Upton Sinclair.
Though I am at a loss for how to tackle this problem as a community, acknowledging it exists is still important to me.
On WordPress and Generative AI
Years ago, I wrote on Medium, but got tired of the constant pressure to monetize my blog, so I decided to pay for a WordPress.com account. I write for myself, after all, and don’t expect any compensation for it.
Many of you will notice the “adblocker not detected” popup. That sums up how I feel about the adtech industry.
It’s disheartening that WordPress is pushing Generative AI bullshit to paying customers with no way to opt out of the feature. (Nevermind that it should be off-by-default and opted into.)
For now, I just refuse to use the feature and hope a lower adoption rate causes a project manager somewhere in Automattic to sweat. They’re somewhat notorious for being led by stubborn assholes who don’t listen to critics (even on security matters).
I’ll also continue to credit the artists that made the furry art I include in my blog posts, because supporting artists is the exact opposite of supporting generative AI.
If you’re looking for a furry artist to commission, first read this, and then maybe consider the artists whose work I’ve featured over the years.
New Avenues of Bullshit
If I may be so bold as to make a prediction: In the distant future, I expect to see more Quantum Computing related bullshit.
Though such bullshit is currently constrained to the realm of grifters, NIST’s recent standardization of post-quantum cryptography is likely to ignite a lot of questionable technology companies.
Whether any of this quantum bullshit catches on at the same scale as tech industry hype remains to be seen.
If any does, I promise to handle each instance with the same derision as the bullshit I discovered in DEFCON’s Quantum Village.
https://soatok.blog/2024/09/18/the-continued-trajectory-of-idiocy-in-the-tech-industry/
#Cryptocurrency #Society #Technology
Normally, when you see an article about cryptocurrency come across your timeline, you can safely sort it into one of two camps: For and Against. If you’re like me, you might even make a game out of trying to classify it into one bucket or the other from the first paragraph–sort of like how people treat biological sex–and then reading to see if you were right or not. Most of the time, you don’t even have to read past the headline to know where the author stands.
Unfortunately, the topic of cryptocurrency is complicated in ways only nerds could envision. And I’m not even talking about the cryptography involved when I say that.
(Art by Khia.)
Cryptocurrency is one of those cans I keep kicking down the road, lest all of its worms escape. I’m neither an enthusiast who wants to pump dogecoin to the moon, nor a detractor who thinks that the idea of digital cash is inherently stupid.
https://twitter.com/FiloSottile/status/1380576100888281094
The “crypto means cryptography” trope exists because, after Bitcoin’s first price hike, a shitload of speculative investors flooded cryptography forums and drowned out the usual participants’ discussions. I’ve previously said that some gatekeeping is necessary for the maintenance of group identity, and that the excess beyond this minimum amount is what creates toxicity. Unfortunately, this trope has far exceeded the LD50 for healthy discourse.
Some of my friends make their living working on cryptocurrency projects–as researchers, mathematicians, programmers, security engineers, and so on. A lot of the interesting cryptography breakthroughs we’ll see in the next 10-15 years will be, at least in part, the result of cryptographers working in the cryptocurrency space. It’s difficult to talk about zero-knowledge proofs without acknowledging some of the kick-ass research the Electric Coin Company has done in order to launch their privacy-preserving cryptocurrency, and that’s only one example.
Here’s cryptographer Jean-Phillipe Aumasson, whose employer is launching a regulated cryptocurrency marketplace:
https://twitter.com/veorq/status/1384045994413678598
If you’re not familiar with JP’s work, he wrote several cryptography books (including Serious Cryptography), contributed to several hash functions (SipHash, BLAKE2, and BLAKE3), and initiated the Password Hashing Competition that resulted in Argon2.
However, there’s also a lot of bullshit in the cryptocurrency space.
- Years of securities fraud enabled by “Initial Coin Offerings” (ICOs) on the Ethereum blockchain. Most famously: Bitcoiin (yes, with two i’s), whose spokesman was bad-movie star Steven Seagal.
- The plague of hacked Twitter accounts pretending to be Elon Musk, perpetuating a “give me some $ and I’ll give you more back” scam that’s sadly effective.
- The whole cryptoart / NFT debacle.
- Litanies of startups trying to “use blockchain to solve X problem” without ever asking if the problem warrants a blockchain in the first place.
- Every microgram of drama related to John McAfee.
And those are just the items I can list off the top of my head. The awfulness surrounding cryptocurrency is like a fractal: the deeper you look, the more shit you see.
Cryptocurrency Subculture: A Tale of Too Shitty
The world’s most successful cryptocurrency to date, Bitcoin, was created in 2008 by an anonymous cryptographer who went by the name Satoshi Nakamoto, and distributed on metzdowd.com, a mailing list created by a group of cryptoanarchists that called themselves “cypherpunks”.
At the risk of being overly reductive, cryptoanarchists are people who believe strongly in a right to privacy, and therefore in the right to use cryptography to protect communications from others–be it governments, corporations, or jealous ex-lovers. The cypherpunks were a group of cryptoanarchists who also wrote code. (The name is a wordplay on “cyberpunk”.)
It’s difficult to speculate about the intentions or politics of Satoshi Nakamoto, considering they said very little of substance about their private beliefs, and no longer answer emails from random strangers. However, given their presence on metzdowd, it’s reasonable to propose they were at least sympathetic to the cypherpunks’ cause.
Most outspoken cryptocurrency enthusiasts today are not like Satoshi Nakamoto. They don’t understand or frankly give a shit about complex, nuanced points about privacy and the government machinations underpinning public safety–let alone how that intersects with the racist history of the institutions charged with keeping the public safe. They’re largely anarcho-capitalists who want to make as much money as they can and, in turn, pay as little as possible in taxes.
How do you make money in cryptocurrency?
By obtaining some amount of a coin, convincing other people to buy it to drive up the demand (and therefore the price), and then selling at a later date. If it works, you sell your coins for more than you paid for them (either directly, or through the energy costs of “mining”) and pocket the difference.
Don’t let the name fool you: anarcho-capitalists (a.k.a. ancaps) aren’t anarchists (and furthermore, cryptocurrency-manic ancaps aren’t cryptoanarchists). Here’s a helpful video to disambiguate the terms involved:
https://www.youtube.com/watch?v=OOTlxsn8tWc
If I said that large swaths of the cryptocurrency community were generally shitty, I would not be the first to make this observation. The earliest Bitcoin events were caricatures of the kind of toxic, sexist excess that dominates chauvinistic power fantasies. (“When lambo?”)
It’s not just the bad politics or the stark contrast between cryptocurrency in practice and cryptocurrency as envisioned by the earliest architects on the metzdowd cryptography mailing list.
Last year I wrote about a dumb attack against the second hash function used by the cryptocurrency, IOTA. After I wrote this story, my Twitter mentions and DMs were flooded with astroturfing attempts by IOTA enthusiasts. Nearly a year later, most of those have been deleted–presumably because of an account suspension.
https://twitter.com/HapaRekk/status/1283485380004597760
Before IOTA, Monero enthusiasts used to engage in bad faith with anyone that dared criticize their favorite cryptocurrency project on Reddit or Hacker News.
To be clear: I don’t think that cryptocurrency projects or their developers are ever necessarily responsible for the behavior of their users. Sometimes you find toxic assholes like Sergey Ivancheglo (the IOTA developer that threatened security researchers) at the helm; when that happens, the sensible move is to jettison the project until they leave (to the great fanfare of the non-toxic part of their community).
I don’t want to overstate my case here. A lot of blockchainiacs are just downright awful people. The absolute worst. But I’ve found over the years that, the less a person talks about cryptocurrency as a financial endeavor (e.g. speculative trading), the less likely they are to be shitty. It’s not a law of the universe, but it’s a useful measuring stick.
But with all that in mind, an obvious question emerges.
If there’s so much awful shit surrounding cryptocurrency, why would furries (a subculture that constantly receives endless helpings of flak from society at large) ever venture near cryptocurrency?
The Politics Inherent to Furry Identity
Art by Swizz.
A lot of Americans like to think of themselves as “Free Speech” proponents. Some of them get all sweaty over whether or not they should be allowed to broadcast, and profit from, bigoted or hateful content laden with slurs.
And yet, the most censored people in American society are, without a doubt, sex workers. And you rarely hear any so-called “Free Speech” proponents give an iota of shit about the plight of sex workers. They can’t even freely engage in commerce here.
Sex work is explicitly banned by most financial service providers, such as PayPal. It’s exceedingly difficult for sex workers to make ends meet without constantly having to worry about their accounts being frozen and funds inaccessible.
There are a lot of reasons why the plight of sex workers is so bad in America. At the top of the list is the intersection of conservative politics and evangelical Christianity, which overall condemns healthy and consensual expressions of human sexuality. (Ever noticed how the only people who think they have a “sex addiction” are religious or right-wing? Not a coincidence.)
Do you know who else is a target of evangelicals and conservatives?
Furries, as you might know, are widely considered an LGBTQIA+ subculture (although not all of us are LGBTQIA+; only about 80%). But we’re more than just an LGBTQIA+ subculture. We’re also a vibrant community filled with skilled artists. Some of this art is pornographic in nature. It turns out, when queer people aren’t forced into the closet, they tend to embrace shameless authenticity and celebrate their romantic and sexual attractions with pride.
https://twitter.com/Pinboard/status/992819169593716737
A few years ago, the Death Eaters in Congress passed two bills (FOSTA and SESTA) that were advertised as an attempt to crack down on “sex trafficking”.
In practice, these laws killed Pounced.org–the only furry “dating” site at the time that wasn’t a sketchy cash grab (FurryMate, FurFling, etc.). Pounced.org died because the cost to avoid being criminally prosecuted under these laws was so exorbitant that they couldn’t sustain the website anymore, and it probably wasn’t the only small dating site to be killed by poor legislation. Only the big players could really have front-loaded these costs.
Which leads to the meat of this issue…
Why Furries Might Be Interested in Cryptocurrency
Cryptocurrency can be very attractive to members of the furry fandom because of the bullshit baked into the societies and cultures we exist in.
Cryptocurrency promises to be permissionless and decentralized; to bank the unbanked. If you make your living filling up someone else’s spank bank, the idea of creepy rich white men not being able to exercise targeted censorship against you or your family is, frankly, irresistible.
“Can’t use PayPal for your trade? Just set up a cryptocurrency wallet, give a different address to each of your clients, and include instructions on how to access some vaguely reputable cryptocurrency exchange.”
Granted, most furries aren’t sex workers or porn artists, but some of our friends are, and we want to see them protected. But there’s another threat that cryptocurrency promises to alleviate: Chargeback fraud.
The prevalence of chargeback fraud is why I always tip artists. It helps to offset some of the harm caused by shitty behavior.
(Art by Khia)
This is the usual story I’ve heard from my artist friends (although exceptions do exist):
Someone under 18 decides they want to commission an artist they cannot personally afford, so they steal their parent’s credit card and use it to pay for a commission. Later–often after the work has been completed and delivered to the client–their parent notices the unauthorized charge on their credit card, and issues a chargeback.
Not only does this steal from the artist, but it incurs a $35 fee and increases the risk of their account being permanently suspended by their payment provider–thereby preventing them from accessing the funds paid to them by legitimate customers.
“Thanks for the free art! Now you’re at least $35 poorer and maybe lost your only lifeline out of perpetual poverty.”— Assholes
And thus, the Siren Song repeats once again!
Cryptocurrency doesn’t prevent chargeback fraud, but it does shift the risk away from independent artists, who have no capital or political power, and onto billion-dollar financial institutions like Coinbase.
Once the cryptocurrency has been transferred from the Coinbase wallet to the furry artist, it cannot be unspent. Bad faith behavior might still happen, but the artist doesn’t risk their livelihood because of it.
And that’s why, when furry auction site The Dealer’s Den announced a plan to rebuild with “Blockchain Technology”, I didn’t even bat an eye. It seems like an obvious solution to a pervasive unsolved problem to me.
Sure, it’d be great if we could solve this problem with sensible civil policy. But when is that going to finally happen? After all, we’re talking about the same governments that bungled COVID-19 last year, and the AIDS crisis last century, and so on…
https://www.youtube.com/watch?v=aJtvKSUPICA
However, and this bears emphasizing, the CryptoArt / NFT trend is not a valid reason to get involved in cryptocurrency! As I said on Twitter:
https://twitter.com/SoatokDhole/status/1370045499122843654
https://twitter.com/SoatokDhole/status/1370046285798064128
https://twitter.com/SoatokDhole/status/1370047071949033472
https://twitter.com/SoatokDhole/status/1370047509314297862
So, super long preamble aside, what I thought I’d do today is talk a bit about cryptocurrency and how to engage with the topic responsibly, especially if you’re trying to mitigate the damage of the systems we inherited.
Cryptocurrency For Furries
I’m going to be very light on technical jargon, in the interests of accessibility, but at the risk of being imprecise.
No two cryptocurrencies are created equal. If you’re hoping to use one to mitigate systemic harms to our community, I implore you to learn the technical details in depth.
Decentralized Consensus
Cryptocurrencies can be classified by something called their consensus mechanism, which is how they maintain a consistent ledger without being centralized. It doesn’t really matter, for the purpose of this article, how any of them work. I’m happy to dive into that in a future blog post, should anyone want it.
What you need to know is that Proof-of-Work (PoW) consensus algorithms are designed to maximize energy waste across the entire cryptocurrency network. That’s how the network maintains its security against different kinds of esoteric-sounding attacks.
When you “mine” a Proof-of-Work cryptocurrency, what you’re doing is solving a computationally hard puzzle: for example, finding a number that, when hashed together with the previous block’s hash and your address, produces a digest with a certain number of leading 0 bits (a difficulty target the network adjusts to keep the average time between blocks constant). Solving the puzzle results in the entire network agreeing that your address gets the “block reward” (a fixed amount of whatever currency) plus transaction fees.
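To make the puzzle concrete, here’s a minimal toy sketch in TypeScript. Everything about it is made up for illustration (the string encoding, the names, the difficulty value); real networks hash structured block headers and retarget the difficulty automatically.

```typescript
// Toy Proof-of-Work: find a nonce such that SHA-256(prevHash:address:nonce)
// has at least `difficulty` leading zero bits. Illustration only.
import { createHash } from "crypto";

function leadingZeroBits(digest: Buffer): number {
  let bits = 0;
  for (const byte of digest) {
    if (byte === 0) {
      bits += 8;
      continue;
    }
    bits += Math.clz32(byte) - 24; // leading zeros within this non-zero byte
    break;
  }
  return bits;
}

function mine(prevHash: string, address: string, difficulty: number): number {
  // Grind nonces until the digest clears the difficulty target.
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256")
      .update(`${prevHash}:${address}:${nonce}`)
      .digest();
    if (leadingZeroBits(digest) >= difficulty) {
      return nonce;
    }
  }
}

// Each additional bit of difficulty doubles the expected work (and energy).
console.log(mine("previous-block-hash", "my-toy-address", 16));
```

None of the discarded hash attempts accomplish anything useful on their own; that deliberate waste is where the energy cost comes from.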
Cryptocurrency discussions frequently invite conversations about the environmental impact of mining. Proof-of-Work is the cause of this excess energy use, which certainly contributes to global climate change.
So, if you’re going to get involved with cryptocurrency without contributing to global climate disaster, you’re going to want to avoid Proof-of-Work cryptocurrencies. There are several other options to choose from.
Proof-of-Stake is popular among my cryptocurrency nerd friends, although it receives a fair bit of criticism from experts (especially the “nothing at stake” problem). Ask your cryptographer. It’s probably not me.
On-Chain Privacy
The vaunted “blockchain” is a public, transparent record of all transactions.
When you use a cryptocurrency like Bitcoin, it’s sort of like tweeting your financial activities for the world to see.
“But nobody knows who owns this address,” Bitcoin maximalists might argue. To which I point out: Nobody is supposed to know your sockpuppet Twitter accounts either, but when you use them to harass someone right after they block your main account, we know it’s you.
The people whom this applies to know who they are, and should stop.
(Art by Khia)
Some cryptocurrencies, like Zcash, try to provide something like TLS for your transactions. When you use shielded Zcash addresses, the transaction amounts and recipients are encrypted, and this ciphertext is accompanied by a zero-knowledge proof to ensure the total amount in the shielded and unshielded pools remains consistent.
I strongly implore you to choose a cryptocurrency that has on-chain privacy, especially if your target audience includes queer people and/or sex workers.
Mainstream Appeal
Finding a privacy-preserving cryptocurrency that doesn’t equate to Global Warming Bucks is a tall order, but if you want people to actually use a cryptocurrency, it needs to be accessible.
By accessible, I mean available on all the mainstream cryptocurrency exchange platforms (Coinbase, Binance, Bitfinex, etc.).
This might sound like pointless gatekeeping, but remember: They have the money and lawyers to negotiate with the economic powerhouses of the world, while sex workers and furry artists do not.
Cryptographic Security
Any regular reader of Dhole Moments probably saw this section coming a mile away, but an important consideration for a cryptocurrency to build upon is whether or not it’s actually secure.
This is where things get tricky. Weird or poor choices of cryptographic algorithms don’t seem to matter much in practice.
Bitcoin uses ECDSA over Koblitz curves. IOTA shipped two broken hash functions, threatened researchers, and then tried to claim the first broken hash function was backdoored for “copy protection”. The CryptoNote currencies (notably Monero) tried to build on EdDSA but introduced a double-spend vulnerability.
I’m certainly not qualified to audit an entire cryptocurrency and say “yes/no” on its security. But any cryptocurrency you consider should at least pass a smoke test from your cryptographer.
Which Cryptocurrency Should I Choose?
If you’re looking for a cryptocurrency that’s secure, accessible, privacy-preserving, and doesn’t waste a fuck ton of energy all the time, the short answer is that there is none. You’re going to have to make a trade-off.
Shocking, I know.
(Art by Khia)
I’m sure there are cryptocurrency projects that use privacy-preserving technologies without a Proof-of-Work algorithm, and their design and implementation might even be secure! But, to date, I’m not aware of any such projects that also have mainstream accessibility on large exchange platforms.
You’ll notice that I didn’t mention price volatility in my list above. There are two reasons for that:
- I’m not a financial expert. For all I know, price volatility might be something you want out of your cryptocurrency, especially if you’re LARPing a day trader.
- It’s hard enough to make this choice without adding more complications to the formula.
If Zcash ever adopted a consensus algorithm that wasn’t Proof-of-Work, it’d be a shoo-in for me to recommend. It checks all the other boxes neatly and is one of the most interesting cryptography projects on the Internet, after all.
In the meantime, maybe some other project will fill this niche and become widely accessible for everyone. There’s a lot of exciting and/or scary things happening with cryptocurrency research.
If you’re stuck with a hard decision, honestly, just do the best you can and be very transparent about the trade-offs you’re making and why you’re making them. Then ask a friend or expert to check your reasoning before you commit to it. “Do nothing” also needs to be publicly considered, no matter how absurd it might seem.
Disclaimers and Other Remarks
I do not work with cryptocurrency in my dayjob. I’d like to say that, consequently, I don’t have a conflict of interest, but all humans have subconscious biases, and a lot of my favorite people in cryptography do work in or with cryptocurrency. I want my friends to be able to continue to do awesome work without feeling ashamed.
https://twitter.com/cryptolexicon/status/1331712883403722752
Thus, I don’t care if you invest in Bitcoin or Dogecoin or whatever. Shoot for the moon while you awoo at the moon. Just be careful; for every winner, there’s at least one loser.
Fact: Dholes are also known as “Whistling Dogs”
(Art by Khia)
I’m a fan of transparency logs–which are often compared to blockchains, but without the currency aspect. If you’re not familiar, read up on Trillian and Chronicle. Notably, Trillian is the backbone of Certificate Transparency, which helps keep the CA infrastructure honest and consequently makes HTTPS safer for everyone.
https://soatok.blog/2021/04/19/a-furrys-guide-to-cryptocurrency/
#Cryptocurrency #furries #furry #furryArtists #FurryFandom #Politics #Society
I quit my job towards the end of last month.
When I started this blog, I told myself, “Don’t talk about work.” Since my employment is in the rear view mirror, I’m going to bend that rule for once. And most likely, only this one time.
Why? Since I wrote a whole series about how to get into tech for as close to $0 as possible without prior experience, I feel that omitting my feelings would be, on some level, dishonest.
Refusing Forced Relocation
I had been hired in 2019 for the cryptography team at a large tech company. I was hired as a 100% remote employee, with the understanding that I would work from my home in Florida.
Then a pandemic started to happen (which continues to be a mass-disabling event despite what many politicians proclaim).
The COVID-19 pandemic forced a lot of people who preferred to work in an office setting to sink-or-swim in a remote work environment.
In early 2020, you could be forgiven for imagining that this new arrangement was a temporary safety measure that we would adopt for a time, and then one day return to normal. By mid 2022, only people that cannot let go of their habits and traditions continued to believe that we’d ever return to the “normal” they knew in 2019.
I had already been working remotely since 2014, so as soon as the shift happened, many of my peers reached out to me for advice on how to be productive at home. This was an uncomfortable experience for many of them, and as someone who was comfortable in a fully virtual environment, I was happy to help.
By early 2021, I was considered to not only be a top performer, but also a critical expert for the cryptography organization. My time ended up split across three different teams, and I was still knocking my projects out of the park. But more importantly, junior employees felt comfortable approaching me with questions and our most distinguished engineers sought my insight on security and cryptography topics.
It became an inside joke of the cryptography organization, not to let me ever look at someone else’s source code on a Friday, because I would inevitably find at least one security issue, which would inevitably ruin someone’s weekend. I suppose the reasoning was that, if the source code in question belonged to a foundational software package, it carried the risk of paging the entire company as we tried to figure out how to mitigate the issue and upstream the fix.
(I never once got earnestly reprimanded for finding security bugs, of course.)
I can’t really go into detail about the sort of work I did. I don’t really want to name names, either. But I will say that I woke up every day excited and motivated. The problems were interesting, the people were wonderful, and there was an atmosphere of respect and collaboration.
Despite the sudden change in working environment for most of the cryptography organization in response to COVID-19, we were doing great work and cultivating the same healthy and productive work environment that everyone fondly remembered pre-pandemic.
Art: CMYKat
And then the company’s CEO decided to make an unceremonious, unilateral, top-down decision (based entirely on vibes from talking to other CEOs, rather than anything resembling facts, data, or logic):
Everyone must return to the office, and virtual employees must relocate. Exceptions would be few, far between, and required a C-level to sign off on it. Good luck getting an exception before your relocation decision deadline.
Hey, tech workers, stop me if you’ve heard this one before.
To the credit of my former managers, they sprang this dilemma on me literally the day before I went to a hacker conference–a venue full of hiring managers and technical founders.
On Ultimatums
If I had to give only one bit of advice to anyone ever faced with an ultimatum from someone with power over them (be it an employer or abusive romantic partner), it would be:
Ultimately, never choose the one giving you an ultimatum.
Art: AJ_LovesDinos
If your employer tells you, “Move to an expensive city or resign,” your best move will be, in the end, to quit. Notice that I said, in the end.
It’s perfectly okay to pretend to comply to buy time while you line up a new gig somewhere else.
That’s what I did. Just don’t start selling your family home or looking at real estate listings, and definitely don’t accept any relocation assistance (since you’ll have to return it when you split).
Conversely, if you let these assholes exert their power over you, you dehumanize yourself in submission.
(Yes, you did just read those words on a blog written by a furry.)
If you take nothing else away from this post, always keep this in mind.
Art: MarleyTanuki
From Whence Was This Idiocy Inspired?
Nothing happens in a vacuum.
When more tech workers opted to earn their tech company salaries while living somewhere with a cheaper cost of living, less tech worker money circulated through big-city businesses.
This outflow of money does hurt the local economies of said cities, including the ones that big tech companies are headquartered in. In some cases, this pain has jeopardized a lot of the tax incentives that said companies enjoy.
That’s why we keep hearing about politicians praising the draconian way that the return-to-office policies are being enforced.
At the end of the day, incentives rule everything around us.
Companies have to kowtow to the government in order to reduce their tax bill (and continue pocketing record profits–which drive inflation–while their workers’ wages stagnate).
This outcome was incredibly obvious to everyone that was paying attention; it was just a matter of when, not if.
Signs of Things to Come
Do you know who was really paying attention? The top talent at most tech companies.
After I turned in my resignation, I received a much larger outpouring of support from other very senior tech workers than I ever imagined.
Many of them admitted that they were actively looking for new roles; some of them for the first time in over a decade.
Many of them already have new gigs lined up, and were preparing to resign too. Some of those already have.
Others are preparing to refuse to comply with either demand, countering the companies’ ultimatums with one of their own: Shut up or fire me.
What I took from these messages is this: What tech companies are doing is complete bullshit, and everyone knows it, and nobody is happy about it.
With all this in mind, I’d like to issue a prediction for how this return-to-office with forced relocation will play out, should companies’ leaders double down on their draconian nature.
My Prediction
Every company that issued forced relocation ultimatums to their pre-pandemic remote workers will not only lose most (if not all) of their top talent in the next year, but will also struggle to hire for at least the coming decade.
The bridge has been burnt, and the well has been poisoned.
“Trust arrives on foot, but leaves on horseback.” (Dutch proverb)
The companies that issued these ultimatums are not stupid. They had to know that some percentage of their core staff would leave over their forced relocation mandates. Many described it as a “soft layoff” tactic.
But I don’t think they appreciate the breadth or depth of the burn they’ve inflicted. Even if they can keep their ships from sinking, the wound will fester and their culture will not easily recover. This will lead to even more brain drain.
Who could blame anyone for leaving when that happens?
Unfortunately, there is a class of people that work in tech that will bear the brunt of the ensuing corporate abuse: H-1B visa employees, whose immigration status is predicated on their ongoing employment. Their ability to hop from abusive companies onto lifeboats is, on the best of days, limited.
And that? Well, that’s going to get ugly.
There’s still time for these companies to slam the brakes on their unmitigated disaster of failed leadership before it collapses the whole enterprise.
If I were a betting dhole, I wouldn’t bet money on most of them doing that.
Their incentives aren’t aligned that way yet, and when they finally are, it will be far too late.
Toward New Opportunities
As for me, I’m enjoying some well-earned downtime before I start my new remote job.
I wasn’t foolish enough to uproot my life and everyone I love at some distant corporate asshole’s whims, but I also wasn’t impulsive enough to jump ship without a plan.
That’s as much as I feel comfortable saying about myself on here.
If you’re facing a similar dilemma, just know that you’re not alone. Savvy companies will be taking advantage of your current employer’s weakness to pan for gold, so to speak.
You are not trapped. Your life is your own to live. Choose wisely.
Addendum
After I posted this, it made the front page of Hacker News and was subsequently posted in quite a few places. After reading some of the comments, I realize a few subtleties in my word choice didn’t come across, so I’d like to clarify them.
When I say “RTO is bullshit”, I don’t mean “office work is bullshit” or anything negative about people that prefer in-person office work. I mean “the forced relocation implementation of transitioning a whole company to never-remote (a.k.a. RTO) is bullshit”.
If working in an office is better for you, rock on. I don’t have any issue with that. The bullshit is the actions taken by companies’ leadership teams in the absence of (or often in spite of) hard data on remote work versus in-person work. The bullshit is changing remote workers’ employment agreements without their consent and threatening “voluntary resignation” as the only alternative (even though that’s pretty obviously constructive dismissal).
When I discussed ultimatums above, I’m specifically referring to actual ultimatums, not colloquial understandings of the word. If you can talk with the person and negotiate with them, it’s not a goddamn ultimatum. What I was faced with was an actual ultimatum: Comply or suffer. I chose freedom.
Hope that helps.
CMYKat made this, I edited the text
Regarding some of the other comments, I come from the “I work to live” mindset, not the “I live to work” mindset. My opinions won’t resonate with everyone. That’s okay!
Update: I wrote a follow-up to this post to address a lot of bad comments I saw on HN and Reddit.
https://soatok.blog/2023/10/02/return-to-office-is-bullshit-and-everyone-knows-it/
#business #businessEthics #forcedRelocation #returnToOffice #Society #techIndustry #Technology #ultimatums #work
I probably don’t need to remind anyone reading this while it’s fresh about the current state of affairs in the world, but for the future readers looking back on this time, let me set the stage a bit.
The Situation Today
(By “Today”, I mean early May 2020, when I started writing this series.)
In the past two months, over 26 million Americans have filed for unemployment, and an additional 14 million have been unable to file.
Federal Reserve chairman Jerome Powell says we’re in the worst economy ever.
In a desperate bid of economic necromancy, many government officials want to put millions more Americans at risk of COVID-19 before we can develop a vaccine and effective treatment. And we still don’t even know the long-term effects of the virus.
I’m not interested in discussing the politics of this pandemic or who to blame; I’ll leave that to everyone else with an opinion. Instead, I want to acknowledge two facts that most people probably already know:
- This was mostly avoidable with competent leadership and responsible preparation
- Most of us have rough times ahead of us
I can’t do anything about the first point (although most people are focused on it), but I want to try to alleviate the second point.
What This Series is About
Whether you lost your job and need an income to survive, or you’re one of the essential workers wanting to avoid being sacrificed by politicians for the sake of economic necromancy, I wrote this guide to help you transition into a technology career with little-to-no tech experience.
This is not a magic bullet! It will require time, focus, and effort.
But if you follow the advice on the subsequent posts in this series, you will at least have another option available to you. The value of choice, especially when you otherwise have none, is difficult to overstate.
I am not selling anything, nor are there ads on these pages.
This entire series is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Why Work in Tech?
Technology careers aren’t everyone’s cup of tea, and they might be far from your first choice, but there are a couple of advantages that you should be aware of especially during this pandemic and lockdown:
- Most technology careers can be performed remotely.
- Most technology careers pay well.
The first point is especially important for folks living in rural areas hit hard by a lack of local employment opportunities.
A lot of the information and suggestions contained in this series may be applicable to other domains. However, my entire career has been in tech, so I cannot in good conscience speak to the requirements to gain employment in those industries.
Why Should We Trust You?
You shouldn’t. I encourage you to take everything I say with a grain of salt and fact-check any claims I make. Seriously.
My Background
I’m currently employed as a security engineer on the cryptography team of a larger company, although I don’t even have a Bachelor’s degree. I’ve worked with teams of all sizes on countless technology stacks.
I have been programming, in one form or another, since I was in middle school (about 18 years ago), although I didn’t start my professional career until 2011. I’ve been on both sides of bug bounty programs, including as my fursona. A nontrivial percentage of the websites on the Internet run security code I wrote under my professional name.
Art by Khia
My Motivation
Over the past few years, I’ve helped a handful of friends (some of them furries) transition into technology careers. I am writing this series, and distributing it for free, because I want to scale up the effort I used to put into mentoring.
I’m writing this series under my furry persona, and drenching the articles with queer and furry art, to make it less palatable to bigots.
Art by Kerijiano
Series Contents
- Building Your Support Network and/or Team
- Mapping the Technology Landscape
- Learning the Fundamental Skills
- Choosing Your Path
- Starting and Growing an Open Source Project
- Building Your C.V.
- Getting Your First Tech Job
- Starting a Technology Company
- Career Growth and Paying It Forward
The first three entries are the most important.
The header art for this entire series was created by ScruffKerfluff.
https://soatok.blog/2020/06/08/furward-momentum-introduction/
In 2015, a subreddit called /r/The_Donald was created. This has made a lot of people very angry and widely been regarded as a bad move.
Roughly 5 years after its inception, the Reddit staff banned /r/The_Donald because it was a cesspool of hateful content and harmful conspiracy theories. You can learn more about it here.
Why are we talking about this in 2021?
Well, a lot has happened in the first week of the new year. A lot of words have been written about the fascist insurrection that attempted a coup on the U.S. legislature, so I won’t belabor the point more than I have to.
But as it turns out: The shitty people who ran /r/The_Donald didn’t leave well enough alone when they got shit-canned.
Remember: You can’t recycle fash.
(Art by Khia.)
Instead, they spun up a Reddit clone under the domain thedonald.win and hid it behind CloudFlare.
Even worse: Without Reddit rules to keep them in check, they’ve gone all in on political violence and terrorism.
(Content Warning: Fascism, political violence, and a myriad of other nastiness in the Twitter thread below.)
https://twitter.com/Viking_Sec/status/1347758893976457217
If you remember last year, I published a blog post about identifying the real server IP address from email headers. This is far from a sophisticated technique, but if simple solutions work, why not use them?
(Related, I wrote a post in 2020 about more effectively deplatforming hate and harassment. This knowledge will come in handy if you find yourself needing to stop the spread of political violence, but is strictly speaking not relevant to the techniques discussed on this page.)
Unmasking TheDonald.win
The technique I outlined in my previous post doesn’t work on their Reddit clone software: Although it asks you for an (optional) email address at the time of account registration, it never actually emails you, and there is no account recovery feature (a.k.a. “I forgot my password”).
Foiled immediately! What’s a furry to do?
(Art by Khia.)
However, their software is still a Reddit clone!
Reddit has this feature where you can submit links and it will helpfully fetch the page title for you. It looks like this:
When I paste a URL into this form, it automatically fetches the title.
How this feature works is simple: They initiate an HTTP request server-side to fetch the web page, parse out the title tag, and return it.
So what happens if you control the server that their request is being routed to, and provide a unique URL?
Leaking TheDonald.win’s true IP address from behind CloudFlare.
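For the curious, here’s roughly what my side of that trick looks like: a minimal, hypothetical “canary” server. You run it on a box you control, paste a unique URL pointing at it into the target’s link-submission form, and log whichever address comes knocking. The port and token below are placeholders.

```typescript
// Minimal "canary" server: serves a page title to whoever fetches the unique
// URL and logs the address that made the request. If that address isn't one
// of CloudFlare's published ranges, you're probably looking at the origin.
import { createServer } from "http";

const UNIQUE_PATH = "/canary-1f3a9c2e"; // made-up token; use a fresh one per test

createServer((req, res) => {
  if (req.url === UNIQUE_PATH) {
    // No proxy sits in front of *this* server, so remoteAddress is the
    // address that actually initiated the title fetch.
    console.log(`Title fetch came from: ${req.socket.remoteAddress}`);
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<html><head><title>Hello, title scraper</title></head></html>");
}).listen(8080);
```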
Well, that was easy! To eliminate false positives, I performed all of this sampling with Tor Browser and manually rebuilt the Tor Circuit multiple times, and always got the same IP address: 167.114.145.140.
An Even Lazier Technique
Just use Shodan, lol
https://twitter.com/_rarecoil/status/1347768188017143808
Apparently chuds are really bad at OpSec, and their IP was exposed on Shodan this whole time.
You can’t help but laugh at their incompetence.
(Art by Khia.)
The Road to Accountability
Okay, so we have their real IP address. What can we do with it?
The easiest thing to do is find out who’s hosting their servers, with a simple WHOIS lookup on their IP address.
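If you want to script the lookup rather than use the whois command or a web form, the protocol itself is trivial: open TCP port 43, send the query, read the reply. Here’s a minimal sketch; picking whois.arin.net is an assumption on my part (a real whois client follows referrals between the regional registries for you).

```typescript
// Minimal WHOIS client: send the query over TCP port 43 and read the reply.
// The standard `whois` CLI does the same thing, plus referral-chasing.
import { connect } from "net";

function whois(query: string, server = "whois.arin.net"): Promise<string> {
  return new Promise((resolve, reject) => {
    const socket = connect(43, server, () => socket.write(`${query}\r\n`));
    let response = "";
    socket.on("data", (chunk) => (response += chunk.toString()));
    socket.on("end", () => resolve(response));
    socket.on("error", reject);
  });
}

// The IP leaked by the title-fetch trick above; the reply names the netblock owner.
whois("167.114.145.140").then((result) => console.log(result));
```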
Hosted by OVH Canada, eh? After all, nothing screams “Proud American” like hosting your website with a French company in a Canadian datacenter.
Dunking on these fools for the inconsistencies in their worldview is self-care and I recommend it, even though I know they don’t care one iota about hypocrisy.
I immediately wondered if their ISP was aware they were hosting right-wing terrorists, so I filed an innocent abuse report with details about how I obtained their IP address and the kind of behavior they’re engaging in. Canada’s laws about hate speech and inciting violence are comparatively strict, after all.
I’ll update this post later if OVH decides to take action.
Lessons to Learn
First, don’t tolerate violent political extremists, or you’ll end up with political violence on your hands. Deplatforming works.
https://twitter.com/witchiebunny/status/1347624481318166528
Second, and most important: Online privacy is hard. Hard enough that bigots, terrorists, and seditious insurrectionists can’t do it right.
This bears emphasizing: None of the techniques I’ve shared over the history of my blog are particularly clever or novel. But they work extremely well, and they’re useful for exposing shitty people.
Remember: Sunlight is the best disinfectant.
Conversely: Basic OSINT isn’t hard; merely tedious.
Other Techniques (from Twitter)
Subdomain leaks (via @z3dster):
https://twitter.com/z3dster/status/1347807318478639106
Exploiting CloudFlare workers (via @4dwins):
https://twitter.com/4dwins/status/1347809701291937792
DNS enumeration (via @JoshFarwell):
https://twitter.com/JoshFarwell/status/1347840751720304641
If the site in question is running WordPress, you can use Pingbacks to get WordPress to cough up the server IP address. If you aren’t sure whether something runs WordPress, here’s the lazy way to detect it: view any page’s source code and see if the string /wp-content shows up in any URLs (especially for CSS). If it’s found, you’re probably dealing with WordPress.
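Here’s that lazy check as a minimal sketch (the URL is a placeholder, and a missing /wp-content string doesn’t prove the absence of WordPress; it’s just a quick smell test):

```typescript
// Lazy WordPress check: fetch the page and look for "/wp-content" in the HTML.
// Requires Node 18+ for the global fetch; the URL below is a placeholder.
async function looksLikeWordPress(url: string): Promise<boolean> {
  const response = await fetch(url);
  const html = await response.text();
  return html.includes("/wp-content");
}

looksLikeWordPress("https://example.com/").then((wordpress) =>
  console.log(wordpress ? "Probably WordPress" : "Probably not WordPress")
);
```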
Gab’s (another platform favored by right-wing extremists) IP address was discovered, through their Image Proxy feature, to be 216.66.0.222 (via @kubeworm):
https://twitter.com/kubeworm/status/1348162193523675136
The Alt-Right Notices this Blog Post
Shortly after I posted this online, some users from thedonald.win noticed this blog post and hilarity ensued.
https://twitter.com/SoatokDhole/status/1348204577154326528
I want to make something clear in case anyone (especially members of toxic Trump-supporting communities) is confused:
What’s published on this page isn’t doxing, nor do I have any interest in doxing people. That’s the job of law enforcement, not furry bloggers who sometimes write about computer topics. And law enforcement definitely doesn’t need my help: When you create an account, you must solve a ReCAPTCHA challenge, which sends an HTTP request directly to Google servers–which means law enforcement could just subpoena Google for the IP address of the server, even if the above leaks were all patched.
This also isn’t the sort of thing I’d ever brag about, since the entire point I’ve been making is what I’ve done here isn’t technically challenging. If I wanted to /flex, I’d just talk more about my work on constant-time algorithm implementations.
If, in response to my abuse report, OVH Canada determines that their website isn’t violating OVH’s terms of service, then y’all have nothing to worry about.
But given the amount of rampant hate speech being hosted in Canadian jurisdiction, I wouldn’t make that bet.
Addendum (2021-01-19)
Additionally, this wasn’t as simple as running a WHOIS search on thedonald.win either, since that only coughs up the CloudFlare IP addresses. I went a step further and got the real IP address of the server behind CloudFlare, not just CloudFlare’s IP.
This isn’t rocket science, folks.
According to CBC Canada, they moved off OVH Canada the same day this blog post went live. I’m willing to bet a simple WHOIS query won’t yield their current, non-CloudFlare IP address. (To wit: If you think the steps taken in this blog post are so unimpressive to warrant mockery, why not discover the non-CloudFlare IP for yourselves? I’ll bet you can’t.)
There are a lot of ways to deflect criticism for your system administrators’ mistakes, but being overly reductionist and claiming I “just” ran a WHOIS query (which, as stated above, wouldn’t work because of CloudFlare) is only hurting your users by instilling in them a false sense of security.
Just admit it: You fucked up, and got outfoxed by a random furry blogger, and then moved hosting providers after patching the IP leak. How hard is that?
Also, if anyone from CloudFlare is reading this: You should really dump your violent extremist customers before they hurt more people. I’m a strong proponent of freedom of speech–especially for sex workers, the most censored group online–but they’re actively spreading hate and planning violent attacks like the Capitol Hill Riot of January 6, 2021. Pull the damn plug, man.
Finally, I highly recommend Innuendo Studios’ series, The Alt-Right Playbook, for anyone who’s trying to make sense of the surge in right-wing violence we’ve been seeing in America for the past few years.
How Do You Know This IP Wasn’t Bait?
After I published this article, the developers of their software hobbled the Get Suggested Title feature of their software, and the system administrators cancelled their OVH hosting account and moved to another ISP. (Source.)
You can independently verify that their software is hobbled: Try to fetch the page title for a random news website, or Wikipedia article, with the developer console open. It will stall for a while then return an empty string instead of the page title.
They also changed their domain name to patriots.win.
If the IP address I’d found was bait, why would they break a core piece of their software’s functionality and then hurriedly migrate their server elsewhere?
The very notion doesn’t stand up to common sense, let alone greater scrutiny. The whole point of bait is to catch people making a mistake–presumably so you can mock them while remaining totally unaffected–not so you can do these things in a hurry.
A much more likely story: Anyone who makes this claim is trying to downplay a mistake and save face.
Header art by Kyume
https://soatok.blog/2021/01/09/masks-off-for-thedonald-win/
#cloudflare #deanonymize #hateSpeech #OnlinePrivacy #Technology
Update (2021-01-09): There’s a newer blog post that covers different CloudFlare deanonymization techniques (with a real world case study).
Furry Twitter is currently abuzz about a new site selling knock-off fursuits and illegally using photos from the owners of the actual fursuits without permission.
Understandably, the photographers and fursuiters whose work was ripped off by this website are upset and would like to exercise their legal recourse (i.e. DMCA takedown emails) against the scam site, but there’s a wrinkle:
Their contact info isn’t in DNS and their website is hosted behind CloudFlare.
CloudFlare.
Private DNS registration.
You might think this is a show-stopper, but I’m going to show you how to get their server’s real IP address in one easy step.
Ordering the Server’s IP Address by Mail
Most knock-off site operators will choose open source eCommerce platforms like Magento, WooCommerce, and OpenCart, which usually have a mechanism for customers to register for an account and log in.
Usually this mechanism sends you an email when you authenticate.
(If it doesn’t, logout and use the “reset password” feature, which will almost certainly send you an email.)
Once you have an email from the scam site, you’re going to need to view the email headers.
With Gmail, you can click the three dots on the right of an email, then click “Show original”.
Account registration email.
Full email headers after clicking “Show original”.
And there you have it. The IP address of the server behind CloudFlare delivered piping hot to your inbox in 30 minutes or less, or your money back.
That’s a fairer deal than any of these knock-off fursuit sites will give you.
Black magic and piss-poor opsec.
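If you’d rather grep the headers than eyeball them, here’s a minimal sketch that pulls candidate IPv4 addresses out of the Received: headers. The headers.txt filename is a placeholder (save the “Show original” output there first), and since Received headers can be forged or rewritten, treat the output as leads to verify rather than gospel.

```typescript
// Pull candidate IPv4 addresses out of the Received: headers of a raw email.
// The last Received header is usually the one added closest to the sender.
import { readFileSync } from "fs";

function candidateIps(rawMessage: string): string[] {
  const received = rawMessage
    .split(/\r?\n(?=\S)/) // unfold continuation lines onto their header
    .filter((header) => /^Received:/i.test(header));
  const ipPattern = /\b\d{1,3}(?:\.\d{1,3}){3}\b/g;
  return received.flatMap((header) => header.match(ipPattern) ?? []);
}

console.log(candidateIps(readFileSync("headers.txt", "utf8")));
```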
What Can We Do With The Server IP?
You can identify who hosts their website. (In this case, it’s a company called Net Minders.)
With this knowledge in mind, you can send an email to their web hosting provider, citing the Digital Millennium Copyright Act.
One or two emails might get ignored, but discarding hundreds of distinct complaint emails from different people is bad for business. This (along with similar abuse complaints to the domain registrar, which isn’t obscured by DNS Privacy) should be enough to shut down these illicit websites.
The more you know!
Epilogue
https://twitter.com/Mochiroo/status/1259289385876373504
The technique is simple, effective, and portable. Use it whenever someone tries to prop up another website to peddle knock-off goods and tries to hide behind CloudFlare.
https://soatok.blog/2020/05/09/how-to-de-anonymize-scam-knock-off-sites-hiding-behind-cloudflare/
#cloudflare #deanonymize #DNS #fursuitScamSites #informationSecurity #OnlinePrivacy #opsec
#Microsoft pays $10 million to kill human #journalism
#ai #technology #news #future #finance #economy #jobs #work #Problem #Press #information #money #humanity #change
OpenAI and Microsoft are funding $10 million in grants for AI-powered journalism
OpenAI and Microsoft are funding projects to bring more AI tools into the newsroom.
Anna Washenko (Engadget)
Last week, Floridians were startled by an emergency alert sent to all of our cell phones. Typically when this sort of alert happens, it’s an Amber Alert, which means a child was abducted. In Florida, we sometimes also receive Silver Alerts, which indicates that an Alzheimer’s or dementia patient has gone missing. (Florida has a lot of old and retired people.)
To my surprise, it was neither of those things. Instead, it was a Blue Alert–a type of alert I had never seen before. Apparently nobody else had seen it either, because a local news site published a story explaining what Blue Alerts even are for their confused readers.
What’s a Blue Alert?
A Blue Alert is an involuntary message, communicated over the emergency alert infrastructure, to perform the equivalent of a Twitter call-out thread on a suspected cop-killer or cop-abductor.
Blue Alerts are opt-out, not opt-in, and you cannot turn them off without also disabling other types of emergency alerts. Even on newer phones which offer greater granularity with the types of emergency alerts to receive, there is no specific flag to disable Blue Alerts and leave all the other types turned on.
Blue Alerts Are Security Theater
Blue Alerts do not provide any meaningful benefit towards public safety, and actually make us less safe.
If someone just killed a cop, do you really expect random untrained citizens to get involved? We already know how that worked out for the armed and trained professionals.
https://twitter.com/cel_decicco/status/1408127188671647748
If law enforcement wants an uncritical platform to broadcast their lies and omissions with no questions (or only softball questions that presuppose the frame that they’re telling the truth), they already have every major media outlet in their locale. They don’t need the Blue Alerts to get the word out, or to advertise a cash reward for information leading to an arrest. They already have channels for that.
Why are Blue Alerts a thing? The best reason I’ve been able to discern is: Because the surviving families of deceased law enforcement officers want to feel like their loss is taken seriously. The need to “do something”–even when that something is meaningless, or even harmful, but still looks like a solution–is the essence of Security Theater.
But Blue Alerts aren’t as harmless as a mere expression of sheer self-entitlement over the rest of us unimportant proles.
Blue Alerts actually serve to make our society less safe by increasing Alarm Fatigue, which negatively impacts public safety by making people less focused when an alert comes in.
Alternatively, some people will actively disable Blue Alerts to prevent alarm fatigue. But, as stated above, there’s no way to disable them in isolation without also disabling other emergency alerts, which puts them at risk of being uninformed of an actual severe or extreme emergency.
Making the public less safe goes against the very predicate for why police forces exist in most states.
Just say NO to Security Theater!
(Art by Khia.)
Blue Alerts Are Copaganda (in Practice)
This one needs a bit of explaining. I’m going to focus on Florida, because it’s familiar to me.
Blue Alerts were created in Florida in 2011 via an executive order by then-governor Rick Scott. According to Spectrum News 9, only three alerts have been issued since the system was created.
(Anecdote: I’ve had my own mobile phone since 2008 and never once received one until last week.)
The Florida Department of Law Enforcement identifies four criteria for a Blue Alert to be issued:
- A law enforcement officer must have been: seriously injured; killed by a subject(s); or become missing while in the line of duty under circumstances causing concern for the law enforcement officer’s safety.
- The investigating agency must determine that the offender(s) poses a serious risk to the public or to other law enforcement officers, and the alert may help avert further harm or assist in the apprehension of the suspect.
- A detailed description of the offender’s vehicle or other means of escape (vehicle tag or partial tag) must be available for broadcast to the public.
- The local law enforcement agency of jurisdiction must recommend issuing the Blue Alert.
That fourth requirement gives law enforcement a lot of discretion in deciding whether or not to issue a Blue Alert.
That power to arbitrarily decide whether or not to send one might explain why, despite having 2 cops killed in 2020 and 4 cops killed in 2018 due to shooting incidents (both in Florida alone, and I do not have access to data earlier than 2018), a Blue Alert wasn’t emitted for any of those incidents.
Gee, I wonder if something else could have happened last week to prompt law enforcement to exercise a rarely-used tool in their toolbelt?
What Happened Before June 2021’s Blue Alert
I’m not particularly clued into the specific events of the shooting that prompted the Blue Alert, but there was a particularly embarrassing incident for law enforcement in Florida the day before that was starting to gain a lot of attention.
Florida Highway Patrol tased a teenage boy in his girlfriend’s yard. And it was starting to get national media coverage.
Content Warning: Do not watch this video if violence–especially police violence–might cause you severe discomfort or trigger an involuntary psychological response to past trauma:
https://www.youtube.com/watch?v=n4wSkqQlA9o
I do not have, nor will I claim to have, any specific evidence that proves that the cops used the shooting in Volusia County, Florida as an excuse to trigger the surprising Blue Alert to confuse and distract the populace.
However, all cops are bastards, so I certainly suspect them of doing such a thing to cover for their buddies.
And since their mere suspicion is generally sufficient justification for cops to violate the Fourth Amendment with wild abandon, it’s only fair that my suspicion be sufficient to launch an investigation into their motives.
Just kidding!
We know the system is tilted in cops’ favor, which is why there’s a Blue Alert when a cop gets killed, but not a Stasi Alert when cops decide to murder an American citizen.
(Art by Khia.)
In Conclusion
Blue Alerts are not actionable for their recipients, and make the public less safe. Additionally, they provide the police yet another propaganda tool that I suspect they already used once to distract the public from an embarrassing news story.
Here’s what needs to happen:
- Mobile Operating System developers need to create a dedicated toggle to disable Blue Alerts without disabling other emergency alerts.
- These toggles need to be easier to find and configure.
These aren’t political solutions, merely technological ones, but as a security engineer, that’s all I can offer.
https://soatok.blog/2021/07/02/blue-alerts-security-theater-and-copaganda/
#ACAB #BlueAlerts #Florida #police #policeState #Politics #publicSafety #SecurityTheater #Society #Technology
Last week, Floridians were startled by an emergency alert sent to all of our cell phones. Typically when this sort of alert happens, it’s an Amber Alert, which means a child was abducted. In Florida, we sometimes also receive Silver Alerts, which indicates that an Alzheimer’s or dementia patient has gone missing. (Florida has a lot of old and retired people.)To my surprise, it was neither of those things. Instead, it was a Blue Alert–a type of alert I had never seen before. Apparently nobody else had seen it either, because a local news site published a story explaining what Blue Alerts even are for their confused readers.
What’s a Blue Alert?
A Blue Alert is an involuntary message, communicated over the emergency alert infrastructure, to perform the equivalent of a Twitter call-out thread on a suspected cop-killer or cop-abductor.Blue Alerts are opt-out, not opt-in, and you cannot turn them off without also disabling other types of emergency alerts. Even on newer phones which offer greater granularity with the types of emergency alerts to receive, there is no specific flag to disable Blue Alerts and leave all the other types turned on.
Blue Alerts Are Security Theater
Blue Alerts do not provide any meaningful benefit towards public safety, and actually make us less safe.If someone just killed a cop, do you really expect random untrained citizens to get involved? We already know how that worked out for the armed and trained professionals.
https://twitter.com/cel_decicco/status/1408127188671647748
If law enforcement wants an uncritical platform to broadcast their lies and omissions with no questions (or only softball questions that presuppose the frame that they’re telling the truth), they already have every major media outlet in their locale. They don’t need the Blue Alerts to get the word out, or to advertise a cash reward for information leading to an arrest. They already have channels for that.
Why are Blue Alerts a thing? The best reason I’ve been able to discern is: Because the surviving families of deceased law enforcement officers want to feel like their loss is taken seriously. The need to “do something”–even when that something is meaningless, or even harmful, but still looks like a solution–is the essence of Security Theater.
But Blue Alerts aren’t as harmless as a mere expression of sheer self-entitlement over the rest of us unimportant proles.
Blue Alerts actually serve to make our society less safe by increasing Alarm Fatigue, which negatively impacts public safety by making people less focused when an alert comes in.
Alternatively, some people will actively disable Blue Alerts to prevent alarm fatigue. But, as stated above, there’s no way to disable them in isolation without also disabling other emergency alerts, which puts them at risk of being uninformed of an actual severe or extreme emergency.
Making the public less safe goes against the very predicate for why police forces exist in most states.
Just say NO to Security Theater!
(Art by Khia.)Blue Alerts Are Copaganda (in Practice)
This one needs a bit of explaining. I’m going to focus on Florida, because it’s familiar to me.Blue Alerts were created in Florida in 2011 via an executive order by then-governor Rick Scott. According to Spectrum News 9, only three alerts have been issued since the system was created.
(Anecdote: I’ve had my own mobile phone since 2008 and never once received one until last week.)
The Florida Department of Law Enforcement identifies four criteria for a Blue Alert to be issued:
- A law enforcement officer must have been: seriously injured; killed by a subject(s); or become missing while in the line of duty under circumstances causing concern for the law enforcement officer’s safety.
- The investigating agency must determine that the offender(s) poses a serious risk to the public or to other law enforcement officers, and the alert may help avert further harm or assist in the apprehension of the suspect.
- A detailed description of the offender’s vehicle or other means of escape (vehicle tag or partial tag) must be available for broadcast to the public.
- The local law enforcement agency of jurisdiction must recommend issuing the Blue Alert.
That fourth requirement gives law enforcement a lot of discretion in deciding whether or not to issue a Blue Alert.
That power to arbitrarily decide whether or not to send one might explain why, despite having 2 cops killed in 2020 and 4 cops killed in 2018 due to shooting incidents (both in Florida alone, and I do not have access to data earlier than 2018), a Blue Alert wasn’t emitted for any of those incidents.
Gee, I wonder if something else could have happened last week to prompt law enforcement to exercise a rarely-used tool in their toolbelt?
What Happened Before June 2021’s Blue Alert
I'm not particularly clued into the specific events of the shooting that prompted the Blue Alert, but there was a particularly embarrassing incident for law enforcement in Florida the day before that was starting to gain a lot of attention.
Florida Highway Patrol tased a teenage boy in his girlfriend's yard. And it was starting to get national media coverage.
Content Warning: Do not watch this video if violence–especially police violence–might cause you severe discomfort or trigger an involuntary psychological response to past trauma:
https://www.youtube.com/watch?v=n4wSkqQlA9o
I do not have, nor will I claim to have, any specific evidence that proves that the cops used the shooting in Volusia County, Florida as an excuse to trigger the surprising Blue Alert to confuse and distract the populace.
However, all cops are bastards, so I certainly suspect them of doing such a thing to cover for their buddies.
And since their mere suspicion is generally sufficient justification for cops to violate the Fourth Amendment with wild abandon, it’s only fair that my suspicion be sufficient to launch an investigation into their motives.
Just kidding!
We know the system is stilted in cops’ favor, which is why there’s a Blue Alert when a cop gets killed, but not a Stasi Alert when cops decide to murder an American citizen.
(Art by Khia.)
In Conclusion
Blue Alerts are not actionable for their recipients, and make the public less safe. Additionally, they provide the police yet another propaganda tool that I suspect they already used once to distract the public from an embarrassing news story.
Here's what needs to happen:
- Mobile Operating System developers need to create a dedicated toggle to disable Blue Alerts without disabling other emergency alerts.
- These toggles need to be easier to find and configure.
These aren’t political solutions, merely technological ones, but as a security engineer, that’s all I can offer.
https://soatok.blog/2021/07/02/blue-alerts-security-theater-and-copaganda/
#ACAB #BlueAlerts #Florida #police #policeState #Politics #publicSafety #SecurityTheater #Society #Technology
Previously on Dead Ends in Cryptanalysis, we talked about length-extension attacks and precisely why modern hash functions like SHA-3 and BLAKE2 aren’t susceptible.
The art and science of side-channel cryptanalysis is one of the subjects I’m deeply fascinated by, and it’s something you’ll hear me yap about a lot on this blog in the future.
Since my background before computer security was in web development, I spend a lot of time talking about timing side-channels in particular, as well as their mitigations (see also: constant-time-js).
Pictured: Me, when an interesting attack gets published on ePrint.
(Art by Khia.)
However, timing side-channels aren’t omnipotent. Even if your code isn’t constant-time, that doesn’t mean you necessarily have a vulnerability. Case in point:
Length Leaks Are Usually Nothing-Burgers
If you look closely at a constant-time string equality function, you’ll see some clause that looks like this:
if (left.length !== right.length) return false;
A common concern that crops up in bikeshedding discussions about the correct implementation of a constant-time compare is, “This will fail fast if two strings of non-equal length are provided; doesn’t this leak information about the strings being compared?”
Sure, but it won’t affect the security of the application that uses it. Consider a contrived example:
- You’re encrypting with AES-CTR then authenticating the ciphertext with HMAC-SHA256 (Encrypt then MAC).
- For added fun, let's assume you're using HKDF-HMAC-SHA512 with a 256-bit salt to derive separate encryption and MAC keys from the input key. This salt is prepended to the ciphertext and included as an input to the HMAC tag calculation. Now you don't have to worry about cryptographic wear-out.
- You’re padding the plaintext to exactly 16 kilobytes prior to encryption, because the exact length of the plaintext is considered sensitive.
- You remove the padding after decryption.
- Your constant-time comparison is used to validate the HMAC tags.
Even though the length of your plaintext is sensitive, it doesn’t really matter that length mismatches leak here: The inputs to the constant-time compare are always HMAC-SHA256 outputs. They will always be 32 bytes (256 bits) long. This is public knowledge.
If you’ve somehow managed to design a protocol that depends on the secrecy of the length of a non-truncated HMAC-SHA256 output to be secure, you’ve probably fucked up something fierce.
However, if you were comparing the unpadded plaintexts with this function–or passing the unpadded plaintext to a hash function–you might have cause for concern.
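To make the contrived example concrete, here's a minimal sketch of the tag-verification step in Python (the function name and argument layout are mine, not any particular library's API):

import hashlib
import hmac

def verify_tag(mac_key: bytes, salt_and_ciphertext: bytes, tag: bytes) -> bool:
    # Recompute the HMAC-SHA256 tag over salt || ciphertext.
    expected = hmac.new(mac_key, salt_and_ciphertext, hashlib.sha256).digest()
    # Both arguments are always 32-byte HMAC outputs, so the internal
    # length check can't leak anything an attacker doesn't already know.
    return hmac.compare_digest(expected, tag)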
“Double HMAC” is a defense against compiler/JIT optimizations, not length leaks.
(Art by Khia.)
When Do Timing Leaks Cause Impact?
Timing side-channels only lead to a vulnerability when they reveal some information about one of the secret inputs to a cryptographic function.
- Leaking how many leading bytes match when comparing HMACs can allow an attacker to forge a valid authentication tag for a chosen message–which often enables further attacks (e.g. padding oracles with AES-CBC + HMAC). The cryptographic secret is the correct authentication tag for a chosen message under a key known only to the defender. (See the sketch after this list.)
- Leaking the number of leading zeroes introduced the risk of lattice attacks in TLS when used with Diffie-Hellman ciphersuites. See also: the Raccoon Attack. The cryptographic secret is the zero-trimmed shared secret, which is an input to a hash function.
- Leaking the secret number in the modular inverse step when calculating an ECDSA signature gives attackers enough information to recover the secret key. This can happen if you’re using non-constant-time arithmetic.
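For contrast with the constant-time comparison shown earlier, here's a sketch of the kind of early-exit comparison that produces the first leak above (the function name is mine):

def naive_equals(known_tag: bytes, provided_tag: bytes) -> bool:
    if len(known_tag) != len(provided_tag):
        return False
    for a, b in zip(known_tag, provided_tag):
        if a != b:
            # Returning here makes the runtime proportional to the number of
            # leading bytes that match, which an attacker can measure to guess
            # a valid tag one byte at a time.
            return False
    return True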
Timing attacks can even break state-of-the-art cryptography projects, like the algorithms submitted to NIST’s Post-Quantum Cryptography standardization effort:
https://twitter.com/EllipticKiwi/status/1295670085969838080
However–and this is important–if what leaks is a public input (n.b. something the attacker already knows anyway), then who cares?
(Art by Khia.)
Why Timing Leaks Don’t Break Signature Verification
If you’re reviewing some cryptography library and discovered a timing leak in the elliptic curve signature verification function, you might feel tempted to file a vulnerability report with the maintainers of the library.
If so, you’re wasting your time and theirs, for two reasons:
- Signature verification is performed over public inputs (message, public key, signature).
- Knowing which byte the comparison fails on isn't sufficient for forging a signature for a chosen message.
The first part is obvious (and discussed above), but the second might seem untrue at first: If HMAC breaks this way, why doesn’t ECDSA also suffer here?
The Anatomy of Elliptic Curve Digital Signatures
Elliptic curve signatures are usually encoded as a pair of numbers, (r, s). How these numbers are derived and verified depends on the algorithm in question.
In the case of ECDSA, you calculate two numbers, u1 and u2, based on the hash of the plaintext and r, both multiplied by the modular inverse of s (mod n). You then calculate a curve point, u1 * G + u2 * Q, based on the public key Q. The signature is valid if and only if the x coordinate of that curve point is equal to r from the signature (and the curve point isn't equal to the point at infinity).
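As a rough sketch of that verification in Python (hash_to_int, scalar_mult, and point_add are hypothetical placeholder helpers, not a real library's API; the modular inverse uses Python 3.8+'s three-argument pow):

def ecdsa_verify(message: bytes, r: int, s: int, Q, G, n: int) -> bool:
    # n is the curve order, G the base point, Q the signer's public key.
    if not (0 < r < n and 0 < s < n):
        return False
    e = hash_to_int(message) % n                # hash the message to an integer
    w = pow(s, -1, n)                           # modular inverse of s (mod n)
    u1 = (e * w) % n
    u2 = (r * w) % n
    point = point_add(scalar_mult(u1, G), scalar_mult(u2, Q))
    if point is None:                           # the point at infinity
        return False
    return point.x % n == r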
Why Don’t Timing Attacks Do Anything Here?
Even with a timing leak on the string compare function in hand, you cannot easily find a valid (r, s) for a chosen message, for two reasons:
- The derivation of s is effectively an All-Or-Nothing Transform based on secret inputs.
- The curve point equation (u1 * G + u2 * Q) multiplies the ratio r/s by the public key (because u2 = r * s^-1 mod n).
In order to calculate a valid (r, s) pair that will pass validation for a chosen message, you'd need to know the secret key that corresponds to the public key Q.
It’s not impossible to calculate this value, but it’s computationally infeasible, and the difficulty of this problem is approximately one fourth the signature size. That is to say, 512-bit signatures, derived from 256-bit keys, have a security level of 128 bits.
Thus, timing leakage won’t let you perform an existential forgery here.
Aside: Don’t confuse signatures for MACs, as iMessage famously did.
(Art by Khia.)
Under What Conditions Could Timing Side-Channels Matter to ECDSA Verification?
Suppose you have a JSON Web Token library that's vulnerable to the type confusion attack (wherein you can swap out the "alg":"ES256" header with "alg":"HS256" and then use the public key as if it were an HMAC symmetric key).
In this hypothetical scenario, let’s say you’re using this JWT library in an OIDC-like configuration, where the identity provider signs tokens and the application verifies them, using a public key known to the application.
Also assume, for absolutely contrived reasons, that the public key is not known to the attacker.
If you had a timing attack that leaks the public key, that would be a viable (if horrendously slow) way to make the vulnerability exploitable.
However, even in this setup, the timing leak still doesn’t qualify as a real vulnerability. It merely defeats attempts at Security Through Obscurity. The real vulnerability is any JWT library that allows this attack (or alg=none).
Additionally, you can recover the public key if you have sufficient knowledge of the curve algorithm used, the message signed, etc.–which you do if the algorithm is ES256–so you don't really even need a timing leak for this. Consequently, timing leaks would only help you if the original algorithm was something custom and obscure to attackers.
(Aside: there are two possible public keys from each signature, so the signature alone isn’t sufficient for uniquely identifying public keys. If you’re hoping to reduce protocol bandwidth through this trick, it won’t work.)
TL;DR
In order for a timing leak to be useful for cryptanalysis, it cannot leak a publicly-known input to the cryptographic operation.
https://soatok.blog/2021/06/07/dead-ends-in-cryptanalysis-2-timing-side-channels/
#cryptanalysis #crypto #cryptography #deadEndsInCryptanalysis #ECDSA #sideChannels #Technology #timingAttacks
This is the first entry in a (potentially infinite) series of dead end roads in the field of cryptanalysis.
Cryptography engineering is one of many specialties within the wider field of security engineering. Security engineering is a discipline that chiefly concerns itself with studying how systems fail in order to build better systems–ones that are resilient to malicious acts or even natural disasters. It sounds much simpler than it is.
If you want to develop and securely implement a cryptography feature in the application you’re developing, it isn’t enough to learn how to implement textbook descriptions of cryptography primitives during your C.S. undergrad studies (or equivalent). An active interest in studying how cryptosystems fail is the prerequisite for being a cryptography engineer.
Thus, cryptography engineering and cryptanalysis research go hand-in-hand.
Pictured: How I feel when someone tells me about a novel cryptanalysis technique relevant to the algorithm or protocol I’m implementing. (Art by Khia.)
If you are interested in exploring the field of cryptanalysis–be it to contribute on the attack side of cryptography or to learn better defense mechanisms–you will undoubtedly encounter roads that seem enticing and not well-tread, and it might not be immediately obvious why the road is a dead end. Furthermore, beyond a few comparison tables on Wikipedia or obscure Stack Exchange questions, the cryptology literature is often sparse on details about why these avenues lead nowhere.
So let’s explore where some of these dead-end roads lead, and why they stop where they do.
(Art by Kyume.)
Length Extension Attacks
It's difficult to provide a better summary of length extension attacks than what Skull Security wrote in 2012. However, that only addresses "What are they?", "How do you use them?", and "Which algorithms and constructions are vulnerable?"; it leaves out a more interesting question: "Why were they even possible to begin with?"
An Extensive Tale
Tale, not tail! (Art by Swizz.)
To really understand length extension attacks, you have to understand how cryptographic hash functions used to be designed. This might sound intimidating, but we don't need to delve too deep into the internals.
A cryptographic hash function is a keyless pseudorandom transformation from a variable length input to a fixed-length output. Hash functions are typically used as building blocks for larger constructions (both reasonable ones like HMAC-SHA-256, and unreasonable ones like my hash-crypt project).
However, hash functions like SHA-256 are designed to operate on sequential blocks of input. This is because sometimes you need to stream data into a hash function rather than load it all into memory at once. (This is why you can sha256sum a file larger than your available RAM without crashing your computer or causing performance headaches.)
A streaming hash function API might look like this:
class MyCoolHash(BaseHashClass):
    @staticmethod
    def init():
        """ Initialize the hash state. """

    def update(data):
        """ Update the hash state with additional data. """

    def digest():
        """ Finalize the hash function. """

    def compress():
        """ (Private method.) """
To use it, you'd call hash = MyCoolHash.init() and then chain together hash.update() calls with data as you load it from disk or the network, until you've run out of data. Then you'd call digest() and obtain the hash of the entire message.
There are two things to take away right now:
- You can call update() multiple times, and that's valid.
- Your data might not be an even multiple of the internal block size of the hash function. (More often than not, it won't be!)
So what happens when you call digest() and the amount of data you've passed to update() is not an even multiple of the block size?
For most hash functions, the answer is simple: Append some ISO/IEC 7816-4 padding until you get a full block, run that through a final iteration of the internal compression function–the same one that gets called on update()–and then output the current internal state.
Let's take a slightly deeper look at what a typical runtime would look like for the MyCoolHash class I sketched above:
hash = MyCoolHash.init()
- Initialize some variables to some constants (initialization vectors).
hash.update(blockOfData)
- Start with any buffered data (currently none), count up to 32 bytes. If you've reached this amount, invoke compress() on that data and clear the buffer. Otherwise, just append blockOfData to the currently buffered data.
- For every 32 bytes of data not yet touched by compress(), invoke compress() on this block (updating the internal state).
- If you have any leftover bytes, append them to the internal buffer for the next invocation to process.
hash.update(moreData)
- Same as before, except there might be some buffered data from step 2.
output = hash.digest()
- If you have any data left in the buffer, append a 0x80 byte followed by a bunch of 0x00 bytes of padding until you reach the block size. If you don't, you have an entire block of padding (0x80 followed by 0x00s).
- Call compress() one last time.
- Serialize the internal hash state as a byte array or hexadecimal-encoded string (depending on usage). Return that to the caller.
This is a fairly general description that will hold for most older hash functions. Some details might be slightly wrong (subtly different padding scheme, whether or not to include a block of empty padding on digest() invocations, etc.).
The details aren't super important. Just the rhythm of the design.
- init()
- update()
  - load buffer, compress()
  - compress()
  - compress()
  - …
  - buffer remainder
- update()
  - load buffer, compress()
  - compress()
  - compress()
  - …
  - buffer remainder
- …
- digest()
  - load buffer, pad, compress()
  - serialize internal state
  - return
And thus, without having to know any of the details about what compress() even looks like, the reason why length extension attacks were ever possible should leap out at you!
Art by Khia.
If it doesn't, look closely at the difference between update() and digest().
There are only two differences:
- update() doesn't pad before calling compress()
- digest() returns the internal state that compress() always mutates
The reason length-extension attacks are possible is that, for some hash functions, the output of digest() is its full internal state.
This means that you can take an existing hash output and pretend it's the internal state after an update() call instead of a digest() call: append the padding, then, after calling compress(), append additional data of your choice.
The (F)Utility of Length Extension
Length-Extension Attacks are mostly used for attacking naive message authentication systems where someone attempts to authenticate a message (M) with a secret key (k), but they construct it like so:
auth_code = vulnerable_hash(k.append(M))
If this sounds like a very narrow use-case, that's because it is. However, it still broke Flickr's API once, and it's a popular challenge for CTF competitions around the world.
Consequently, length-extension attacks are sometimes thought to be a vulnerability of the construction rather than of the hash function. For a Message Authentication Code construction, these are classified under canonicalization attacks.
After all, even though SHA-256 is vulnerable to length-extension, you can't actually exploit it unless someone is using it in a vulnerable fashion.
That said, it's common to say that hash functions like SHA-256 and SHA-512 are prone to length-extension.
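To illustrate the shape of the attack against the naive construction above (not a drop-in exploit): Python's hashlib doesn't let you load an arbitrary internal state, so md_padding() and md_hash_from_state() below are hypothetical helpers, standing in for what purpose-built tools implement.

def forge(original_mac: bytes, original_msg: bytes, key_length: int, extra: bytes):
    # The defender computed: original_mac = H(key || original_msg).
    # We don't know the key, only its length.
    glue = md_padding(key_length + len(original_msg))  # the padding H appended internally
    # Treat the leaked digest as the internal state and keep hashing...
    h = md_hash_from_state(original_mac, total_len=key_length + len(original_msg) + len(glue))
    h.update(extra)
    forged_mac = h.digest()
    # ...which now authenticates: key || original_msg || glue || extra.
    return original_msg + glue + extra, forged_mac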
Ways to Avoid Length-Extension Attacks
Use HMAC. HMAC was designed to prevent these kinds of attacks.
Alternatively, if you don't have any cryptographic secrets, you can always do what bitcoin did: Hash your hash again.
return sha256(sha256(message))
Note: Don't actually do that, it's dangerous for other reasons. You also don't want to take this to an extreme. If you iterate your hash too many times, you'll reinvent PBKDF1 and its insecurity. Two is plenty.
Or you can do something really trivial (which ultimately became another standard option in the SHA-2 family of hash functions):
Always start with a 512-bit hash and then truncate your output so the attacker never recovers the entire internal state of your hash in order to extend it.
That’s why you’ll sometimes see SHA-512/224 and SHA-512/256 in a list of recommendations. This isn’t saying “use one or the other”, that’s the (rather confusing) notation for a standardized SHA-512 truncation.
Note: This is actually what SHA-384 has done all along, and that’s one of the reasons why you see SHA-384 used more than SHA-512.
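As a quick illustration of the truncation idea (plain truncation of SHA-512 output is shown here for simplicity; the standardized SHA-512/256 additionally uses distinct initialization vectors):

import hashlib

def truncated_sha512(data: bytes) -> bytes:
    # Only half of the 64-byte internal state is ever revealed, so an
    # attacker can't resume the compression function from the output.
    return hashlib.sha512(data).digest()[:32]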
If you want to be extra fancy, you can also just use a different hash function that isn’t vulnerable to length extension, such as SHA-3 or BLAKE2.
Questions and Answers
Art by Khia.
Why isn't BLAKE2 vulnerable to length extension attacks?
Quite simply: It sets a flag in the internal hash state before compressing the final buffer.
If you try to deserialize this state then invoke update(), you'll get a different result than BLAKE2's compress() produced during digest().
For a secure hash function, a single bit of difference in the internal state should result in a wildly different output. (This is called the avalanche effect.)
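You can see the avalanche effect for yourself with a few lines of Python (flipping a bit of the input rather than the internal state, but the principle is the same):

import hashlib

msg = bytearray(b"example message")
print(hashlib.sha256(bytes(msg)).hexdigest())
msg[0] ^= 0x01  # flip a single bit of the input
print(hashlib.sha256(bytes(msg)).hexdigest())
# The two digests look completely unrelated despite a one-bit difference.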
Why isn’t SHA-3 vulnerable to length extension attacks?
SHA-3 is a sponge construction whose internal state is much larger than the hash function output. This prevents an attacker from recovering the hash function's internal state from a message digest (similar to the truncated hash function discussed above).
Why don't length-extension attacks break digital signature algorithms?
Digital signature algorithms–such as RSASSA, ECDSA, and EdDSA–take a cryptographic hash of a message and then perform some asymmetric cryptographic transformation of the hash with the secret key to produce a signature that can be verified with a public key. (The exact details are particular to the signature algorithm in question.)
Length-extension attacks only allow you to take a valid H(k || m) and produce a valid H(k || m || padding || extra) hash that will validate, even if you don't know k. They don't magically create collisions out of thin air.
Even if you use a weak hash function like SHA-1, knowing M and H(M) is not sufficient to calculate a valid signature. (You need to be able to know these values in order to verify the signature anyway.)
The security of digital signature algorithms depends entirely on the secrecy of the signing key and the security of the asymmetric cryptographic transformation used to generate a signature. (And its resilience to side-channel attacks.)
However, a more interesting class of attack is possible for systems that expect digital signatures to have similar properties as cryptographic hash functions. This would qualify as a protocol vulnerability, not a length-extension vulnerability.
TL;DR
Art by Khia.
Length-extension attacks exploit a neat property of a few cryptographic hash functions–most of which you shouldn't be using in 2020 anyway (SHA-2 is still fine)–but can only be exploited in a narrow set of circumstances.
If you find yourself trying to use length-extension to break anything else, you’ve probably run into a cryptographic dead end and need to backtrack onto more interesting avenues of exploitation–of which there are assuredly many (unless your cryptography is boring).
Next: Timing Side-Channels
https://soatok.blog/2020/10/06/dead-ends-in-cryptanalysis-1-length-extension-attacks/
#cryptanalysis #crypto #cryptographicHashFunction #cryptography #lengthExtensionAttacks
Earlier this week, security researcher Ryan Castellucci published a blog post with a somewhat provocative title: DKIM: Show Your Privates.
After reading the ensuing discussions on Hacker News and Reddit about their DKIM post, it seems clear that the importance of deniability in online communications has been broadly overlooked.
Security Goals, Summarized
(Art by Swizz.)
When you design or implement any communications protocol, you typically have most or all of the following security goals:
- Confidentiality: Only the intended recipients can understand the contents of a message (almost always achieved through encryption).
- Integrity: The message will be delivered without alterations; and if it is, the recipient will know to reject it.
- Availability: Authorized users will have access to the resources they need (i.e. a medium they can communicate through).
However, you may also have one or more of the following security goals:
- Authenticity: In a group communication protocol, you want to ensure you can validate which participant sent each message. This is loosely related to, yet independent from, integrity.
- Non-Repudiation: An extension of authenticity, wherein you cannot deny that you sent a message after you sent it; it’s provable that you sent it.
- Deniability: The complement to non-repudiation, wherein you can prove that you sent a message to your recipient, and then at a future time make it possible for other participants to have forged the message.
It’s tempting to think of deniability as the opposite of non-repudiation, but in practice, you want messages to have authenticity for at least a brief period of time for both.
However, you cannot simultaneously have deniability and non-repudiation in a communication. They’re mutually exclusive concepts, even if they both build off authenticity. Hence, I call it a complement.
Off-The-Record messaging achieved deniability through publishing the signing key of the previous message with each additional message.
Security Properties of DKIM
Ryan Castellucci’s blog post correctly observed that the anti-spam protocol DKIM, as used by most mail providers in 2020, incidentally also offers non-repudiation…even if that’s not supposed to be a primary goal of DKIM.
Non-repudiation can be bolted onto any protocol with long-term asymmetric cryptographic keys used to generate digital signatures of messages–which is exactly what DKIM does.
Real World Case Study
A while ago, the New York Post published a DKIM-signed email from someone claiming to be named Vadym Pozharskyi to Hunter Biden–son of the presidential candidate and former Vice President Joe Biden.
Because the DKIM public keys used by Gmail during that time period are known–but not the private keys–it's possible to verify that the email came from Gmail and is authentic. And someone did exactly this.
In a similar vein, if someone wanted to embarrass an executive at a large company, accessing their colleagues’ email and leaking messages would be sufficient, since DKIM could be used to verify that the emails are authentic.
Deniability in DKIM
Ryan’s proposal for introducing deniability in DKIM was to routinely rotate signing keys and publish fragments of their old DKIM private keys (which are RSA keys) so that anyone can reconstruct the private key after-the-fact.
This kind of deniability is mostly to mitigate against the harm of data leaks–such as your friend’s laptop getting stolen and someone trying to lambaste you on social media for an email you sent 10+ years ago–rather than provide a legal form of deniability. (We’re cryptography nerds, not lawyers.)
If the laptop theft scenario took place, with DKIM, someone can cryptographically prove you sent the email at a specific time to your friend with a specific body, because it’s signed by (presumably Gmail’s) DKIM keys.
Conversely, if you had used an email provider that practiced what Ryan proposed (rotating/publishing the private key at a regular interval), they couldn’t cryptographically prove anything. If the past private keys are public, anyone could have come along and forged the DKIM signature.
On Post-Compromise Security
The concept of Post-Compromise Security is somewhat related to deniability (but affects confidentiality rather than integrity or authenticity):
If someone successfully compromises one participant in a private discussion group, and their access is discovered, can the rest of the participants recover from this breach and continue to have privacy for future conversations?
It’s easy to see how the concepts are related.
- Deniability offers short-term authenticity followed by a long-term break in authenticity.
- Post-Compromise Security offers long-term confidentiality even if there’s a short-term break in confidentiality.
Robust private messaging protocols–such as what the IETF is trying to provide with Message Layer Security–would ideally offer both properties to their users.
Past attempts to build non-repudiation (through “message franking”) on top of cipher constructions like AES-GCM led to a class of attacks known affectionately as Invisible Salamanders, based on the title of the relevant research paper.
In Conclusion
https://twitter.com/matthew_d_green/status/1323011619069321216
It might seem really weird for cryptographers to want large-scale email providers to publish their expired DKIM secret keys, but when you understand the importance of deniability in past private communications, it’s a straightforward thing to want.
It’s worth noting: Some security experts will push back on this, because they work in computer forensics, and making DKIM deniable would theoretically make their job slightly more difficult.
Keep their self-interest in mind when they’re complaining about this notion, since the proposal is not to publish non-expired DKIM secret keys, and therefore it would not make spam more challenging to combat.
https://soatok.blog/2020/11/04/a-brief-introduction-to-deniability/
#cryptography #deniability #OnlinePrivacy #securityGoals #Technology
If you're reading this wondering if you should stop using AES-GCM in some standard protocol (TLS 1.3), the short answer is "No, you're fine".
I specialize in secure implementations of cryptography, and my years of experience in this field have led me to dislike AES-GCM.
This post is about why I dislike AES-GCM’s design, not “why AES-GCM is insecure and should be avoided”. AES-GCM is still miles above what most developers reach for when they want to encrypt (e.g. ECB mode or CBC mode). If you want a detailed comparison, read this.
To be clear: This is solely my opinion and not representative of any company or academic institution.
What is AES-GCM?
AES-GCM is an authenticated encryption mode that uses the AES block cipher in counter mode with a polynomial MAC based on Galois field multiplication.
In order to explain why AES-GCM sucks, I have to first explain what I dislike about the AES block cipher. Then, I can describe why I'm filled with sadness every time I see the AES-GCM construction used.
What is AES?
The Advanced Encryption Standard (AES) is a specific subset of a block cipher called Rijndael.
Rijndael's design is based on a substitution-permutation network, which broke tradition from many block ciphers of its era (including its predecessor, DES) in not using a Feistel network.
AES only includes three flavors of Rijndael: AES-128, AES-192, and AES-256. The difference between these flavors is the size of the key and the number of rounds used, but–and this is often overlooked–not the block size.
As a block cipher, AES always operates on 128-bit (16 byte) blocks of plaintext, regardless of the key size.
This is generally considered acceptable because AES is a secure pseudorandom permutation (PRP), which means that every possible plaintext block maps directly to one ciphertext block, and thus birthday collisions are not possible. (A pseudorandom function (PRF), conversely, does have birthday bound problems.)
Why AES Sucks
Art by Khia.
Side-Channels
The biggest reason why AES sucks is that its design uses a lookup table (called an S-Box) indexed by secret data, which is inherently vulnerable to cache-timing attacks (PDF).
There are workarounds for this AES vulnerability, but they either require hardware acceleration (AES-NI) or a technique called bitslicing.
The short of it is: With AES, you’re either using hardware acceleration, or you have to choose between performance and security. You cannot get fast, constant-time AES without hardware support.
Block Size
AES-128 is considered by experts to have a security level of 128 bits.
Similarly, AES-192 gets certified at 192-bit security, and AES-256 gets 256-bit security.
However, the AES block size is only 128 bits!
That might not sound like a big deal, but it severely limits the constructions you can create out of AES.
Consider the case of AES-CBC, where the output of each block of encryption is combined with the next block of plaintext (using XOR). This is typically used with a random 128-bit block (called the initialization vector, or IV) for the first block.
This means you expect a collision (at 50% probability) after encrypting 2^64 blocks.
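For a sense of scale, here's the back-of-the-envelope arithmetic behind that figure (a rough sketch that ignores constant factors in the birthday bound):

BLOCK_SIZE = 16                      # AES block size in bytes
blocks_until_collision = 2 ** 64     # ~50% collision probability for 128-bit blocks
data_volume = blocks_until_collision * BLOCK_SIZE
print(data_volume // 2 ** 60)        # 256, i.e. roughly 256 exbibytes of ciphertext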
When you start getting collisions, you can break CBC mode, as this video demonstrates:
https://www.youtube.com/watch?v=v0IsYNDMV7A
This is significantly smaller than the 2^128 security you expect from AES.
Post-Quantum Security?
With respect to the number of attempts needed to find the correct key, cryptographers estimate that AES-128 will have a post-quantum security level of 64 bits, AES-192 will have a post-quantum security level of 96 bits, and AES-256 will have a post-quantum security level of 128 bits.
This is because Grover's quantum search algorithm can search 2^n unsorted items in roughly 2^(n/2) time, which can be used to reduce the total number of possible secrets from 2^n to 2^(n/2). This effectively cuts the security level, expressed in bits, in half.
Note that this heuristic estimate is based on the number of guesses (a time factor), and doesn’t take circuit size into consideration. Grover’s algorithm also doesn’t parallelize well. The real-world security of AES may still be above 100 bits if you consider these nuances.
But remember, even AES-256 operates on 128-bit blocks.
Consequently, for AES-256, there should be approximately 2^128 different keys that will map any given plaintext block to any given ciphertext block.
Furthermore, there will be many keys that, for a constant plaintext block, will produce the same ciphertext block despite being a different key entirely. (n.b. This doesn’t mean for all plaintext/ciphertext block pairings, just some arbitrary pairing.)
Concrete example: Encrypting a plaintext block consisting of sixteen NUL bytes will yield a specific 128-bit ciphertext exactly once for each given AES-128 key. However, there are 2^128 times as many AES-256 keys as there are possible plaintexts/ciphertexts. Keep this in mind for AES-GCM.
This means it’s conceivable to accidentally construct a protocol that, despite using AES-256 safely, has a post-quantum security level on par with AES-128, which is only 64 bits.
This would not be nearly as much of a problem if AES’s block size was 256 bits.
Real-World Example: Signal
The Signal messaging app is the state-of-the-art for private communications. If you were previously using PGP and email, you should use Signal instead.
Signal aims to provide private communications (text messaging, voice calls) between two mobile devices, piggybacking on your pre-existing contacts list.
Part of their operational requirements is that they must be user-friendly and secure on a wide range of Android devices, stretching all the way back to Android 4.4.
The Signal Protocol uses AES-CBC + HMAC-SHA256 for message encryption. Each message is encrypted with a different AES key (due to the Double Ratchet), which limits the practical blast radius of a cache-timing attack and makes practical exploitation difficult (since you can’t effectively replay decryption in order to leak bits about the key).
Thus, Signal’s message encryption is still secure even in the presence of vulnerable AES implementations.
Hooray for well-engineered protocols managing to actually protect users.
Art by Swizz.
However, the storage service in the Signal App uses AES-GCM, and this key has to be reused in order for the encrypted storage to operate.
This means, for older phones without dedicated hardware support for AES (i.e. low-priced phones from 2013, which Signal aims to support), the risk of cache-timing attacks is still present.
This is unacceptable!
What this means is, a malicious app that can flush the CPU cache and measure timing with sufficient precision can siphon the AES-GCM key used by Signal to encrypt your storage without ever violating the security boundaries enforced by the Android operating system.
As a result of the security boundaries never being crossed, these kinds of side-channel attacks would likely evade forensic analysis, and would therefore be of interest to the malware developers working for nation states.
Of course, if you’re on newer hardware (i.e. Qualcomm Snapdragon 835), you have hardware-accelerated AES available, so it’s probably a moot point.
Why AES-GCM Sucks Even More
AES-GCM is an authenticated encryption mode that also supports additional authenticated data. Cryptographers call these modes AEAD.
AEAD modes are more flexible than simple block ciphers. Generally, your encryption API accepts the following:
- The plaintext message.
- The encryption key.
- A nonce (N: a number that must only be used once).
- Optional additional data which will be authenticated but not encrypted.
The output of an AEAD function is both the ciphertext and an authentication tag, which is necessary (along with the key and nonce, and optional additional data) to decrypt the plaintext.
Cryptographers almost universally recommend using AEAD modes for symmetric-key data encryption.
That being said, AES-GCM is possibly my least favorite AEAD, and I’ve got good reasons to dislike it beyond simply, “It uses AES”.
The deeper you look into AES-GCM’s design, the harder you will feel this sticker.
GHASH Brittleness
The way AES-GCM is initialized is stupid: You encrypt an all-zero block with your AES key (in ECB mode) and store it in a variable called H. This value of H is used for authenticating all messages authenticated under that AES key, rather than being distinct for a given (key, nonce) pair.
Diagram describing Galois/Counter Mode, taken from Wikipedia.
This is often sold as an advantage: Reusing H allows for better performance. However, it makes GCM brittle: Reusing a nonce allows an attacker to recover H and then forge messages forever. This is called the "forbidden attack", and led to real world practical breaks.
Let's contrast AES-GCM with the other AEAD mode supported by TLS: ChaCha20-Poly1305, or ChaPoly for short.
ChaPoly uses one-time message authentication keys (derived from each key/nonce pair). If you manage to leak a Poly1305 key, the impact is limited to the messages encrypted under that (ChaCha20 key, nonce) pair.
While that’s still bad, it isn’t “decrypt all messages under that key forever” bad like with AES-GCM.
Note: “Message Authentication” here is symmetric, which only provides a property called message integrity, not sender authenticity. For the latter, you need asymmetric cryptography (wherein the ability to verify a message doesn’t imply the capability to generate a new signature), which is totally disparate from symmetric algorithms like AES or GHASH. You probably don’t need to care about this nuance right now, but it’s good to know in case you’re quizzed on it later.
H Reuse and Multi-User Security
If you recall, AES operates on 128-bit blocks even when 256-bit keys are used.
If we assume AES is well-behaved, we can deduce that there are approximately 2^128 different 256-bit keys that will map a single plaintext block to a single ciphertext block.
This is trivial to calculate. Simply divide the number of possible keys (2^256) by the number of possible block states (2^128) to yield the number of keys that produce a given ciphertext for a single block of plaintext: 2^256 / 2^128 = 2^128.
Each key that will map an arbitrarily specific plaintext block to a specific ciphertext block is also separated in the keyspace by approximately 2^128.
This means there are approximately 2^128 independent keys that will map a given all-zero plaintext block to an arbitrarily chosen value of H (if we assume AES doesn't have weird biases).
Credit: Harubaki
“Why Does This Matter?”
It means that, with keys larger than 128 bits, you can model the selection of H as a 128-bit pseudorandom function, rather than a 128-bit permutation. As a result, you can expect a collision with 50% probability after only 2^64 different keys are selected.
Note: Your 128-bit randomly generated AES keys already have this probability baked into their selection, but this specific analysis doesn't really apply for 128-bit keys since AES is a PRP, not a PRF, so there is no "collision" risk. However, you end up at the same upper limit either way.
But 50% isn’t good enough for cryptographic security.
In most real-world systems, we target a 2^-32 collision risk. So that means our safety limit is actually 2^48 different AES keys before you have to worry about H reuse.
This isn't the same thing as symmetric wear-out (where you need to re-key after a given number of encryptions to prevent nonce reuse). Rather, it means after your entire population has exhausted the safety limit of 2^48 different AES keys, you have to either accept the risk or stop using AES-GCM.
If you have a billion users (2^30), the safety limit is breached after 2^18 AES keys per user (approximately 262,000).
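The arithmetic behind that last figure, using the numbers reconstructed above (a rough sketch, not an exact bound):

SAFETY_LIMIT = 2 ** 48   # total AES keys before the targeted 2^-32 H-collision risk
users = 2 ** 30          # roughly a billion users
keys_per_user = SAFETY_LIMIT // users
print(keys_per_user)     # 262144, i.e. approximately 262,000 keys per user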
“What Good is H Reuse for Attackers if HF differs?”
There are two numbers used in AES-GCM that are derived from the AES key. H is used for block multiplication, and HF (the encryption of the nonce with a counter of 0, from the following diagram) is XORed with the final result to produce the authentication tag.
The arrow highlighted with green is HF.
It's tempting to think that a reuse of H isn't a concern because HF will necessarily be randomized, which prevents an attacker from observing when H collides. It's certainly true that the single-block collision risk discussed previously for H will almost certainly not also result in a collision for HF. And since HF isn't reused unless a nonce is reused (which also leaks H directly), this might seem like a non-issue.
Art by Khia.
However, it's straightforward to go from a condition of H reuse to an adaptive chosen-ciphertext attack.
- Intercept multiple valid ciphertexts.
  - e.g. Multiple JWTs encrypted with {"alg":"A256GCM"}
- Use your knowledge of H, the ciphertext, and the AAD to calculate the GCM tag up to the final XOR. This, along with the existing authentication tag, will tell you the value of HF for a given nonce.
- Calculate a new authentication tag for a chosen ciphertext using HF and your candidate H value, then replay it into the target system. (See the sketch after this list.)
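Sketched out in Python-flavored pseudocode (ghash() here is a hypothetical stand-in for the Galois-field polynomial evaluation over the AAD, the ciphertext, and their lengths; it is not a real library function):

def recover_hf(h: int, aad: bytes, ciphertext: bytes, tag: int) -> int:
    # GCM computes: tag = GHASH(H, AAD, CT) XOR HF, where HF is the encrypted
    # (nonce || counter 0) block. With a candidate H, the XOR unravels.
    return tag ^ ghash(h, aad, ciphertext)

def forge_tag(h: int, hf: int, aad: bytes, new_ciphertext: bytes) -> int:
    # With the recovered HF, any chosen ciphertext under the same nonce can be
    # given a tag that the target system will accept.
    return ghash(h, aad, new_ciphertext) ^ hf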
While the blinding offered by XORing the final output with HF is sufficient to stop H from being leaked directly, the protection is one-way.
Ergo, a collision in H is not sufficiently thwarted by HF.
“How Could the Designers Have Prevented This?”
The core issue here is the AES block size, again.
If we were analyzing a 256-bit block variant of AES, and a congruent GCM construction built atop it, none of what I wrote in this section would apply.
However, the 128-bit block size was a design constraint enforced by NIST in the AES competition. This block size was chosen during an era of 64-bit block ciphers (e.g. Triple-DES and Blowfish), so it was a significant improvement at the time.
NIST’s AES competition also inherited from the US government’s tradition of thinking in terms of “security levels”, which is why there are three different permitted key sizes (128, 192, or 256 bits).
“Why Isn’t This a Vulnerability?”
There's always a significant gap in security, wherein something isn't safe to recommend, but also isn't susceptible to a known practical attack. This gap is important to keep systems secure, even when they aren't on the bleeding edge of security.
Using 1024-bit RSA is a good example of this: No one has yet, to my knowledge, successfully factored a 1024-bit RSA public key. However, most systems have recommended a minimum of 2048-bit keys for years (and many recommend 3072-bit or 4096-bit today).
With AES-GCM, the expected distance between collisions in H is 2^64 keys, and finding an untargeted collision requires being able to observe more than 2^64 different sessions, and somehow distinguish when H collides.
As a user, you know that after 2^48 different keys, you've crossed the safety boundary for avoiding H collisions. But as an attacker, you need 2^64 bites at the apple, not 2^48. Additionally, you need some sort of oracle or distinguisher for when this happens.
We don't have that kind of distinguisher available to us today. And even if we had one available, the amount of data you need to search in order for any two users in the population to reuse/collide is challenging to work with. You would need the computational and data storage resources of a major cloud service provider to even think about pulling the attack off.
Naturally, this isn’t a practical vulnerability. This is just another gripe I have with AES-GCM, as someone who has to work with cryptographic algorithms a lot.
Short Nonces
Although the AES block size is 16 bytes, AES-GCM nonces are only 12 bytes. The latter 4 bytes are dedicated to an internal counter, which is used with AES in Counter Mode to actually encrypt/decrypt messages.
(Yes, you can use arbitrary length nonces with AES-GCM, but if you use nonces longer than 12 bytes, they get hashed into 12 bytes anyway, so it's not a detail most people should concern themselves with.)
If you ask a cryptographer, “How much can I encrypt safely with AES-GCM?” you’ll get two different answers.
- Message Length Limit: AES-GCM can be used to encrypt messages up to about 2^36 bytes long, under a given (key, nonce) pair.
- Number of Messages Limit: If you generate your nonces randomly, you have a 50% chance of a nonce collision after 2^48 messages.
However, 50% isn't conservative enough for most systems, so the safety margin is usually much lower. Cryptographers generally set the key wear-out of AES-GCM at 2^32 random nonces, which represents a collision probability of about one in 4 billion.
These limits are acceptable for session keys for encryption-in-transit, but they impose serious operational limits on application-layer encryption with long-term keys.
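That wear-out figure follows from a simple birthday-bound estimate (a rough upper bound, not an exact probability):

NONCE_BITS = 96
messages = 2 ** 32
# Loose birthday upper bound: p <= n^2 / 2^96
p = messages ** 2 / 2 ** NONCE_BITS
print(p)  # about 2.3e-10, i.e. roughly one in 4 billion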
Random Key Robustness
Before the advent of AEAD modes, cryptographers used to combine block cipher modes of operation (e.g. AES-CBC, AES-CTR) with a separate message authentication code algorithm (e.g. HMAC, CBC-MAC).
You had to be careful in how you composed your protocol, lest you invite Cryptographic Doom into your life. A lot of developers screwed this up. Standardized AEAD modes promised to make life easier.
Many developers gained their intuition for authenticated encryption modes from protocols like Signal’s (which combines AES-CBC with HMAC-SHA256), and would expect AES-GCM to be a drop-in replacement.
Unfortunately, GMAC doesn’t offer the same security benefits as HMAC: Finding a different (ciphertext, HMAC key) pair that produces the same authentication tag is a hard problem, due to HMAC’s reliance on cryptographic hash functions. This makes HMAC-based constructions “message committing”, which instills Random Key Robustness.
Critically, AES-GCM doesn’t have this property. You can calculate a random (ciphertext, key) pair that collides with a given authentication tag very easily.
This fact prohibits AES-GCM from being considered for use with OPAQUE (which requires RKR), one of the upcoming password-authenticated key exchange algorithms. (Read more about them here.)
Better-Designed Algorithms
You might be thinking, "Okay random furry, if you hate AES-GCM so much, what would you propose we use instead?"
I'm glad you asked!
XChaCha20-Poly1305
For encrypting messages under a long-term key, you can’t really beat XChaCha20-Poly1305.
- ChaCha is a stream cipher based on a 512-bit ARX hash function in counter mode. ChaCha doesn’t use S-Boxes. It’s fast and constant-time without hardware acceleration.
- ChaCha20 is ChaCha with 20 rounds.
- XChaCha nonces are 24 bytes, which allows you to generate them randomly and not worry about a birthday collision until about messages (for the same collision probability as AES-GCM).
- Poly1305 uses a different 256-bit key for each (nonce, key) pair and is easier to implement in constant-time than AES-GCM.
- XChaCha20-Poly1305 uses the first 16 bytes of the nonce and the 256-bit key to generate a distinct subkey, and then employs the standard ChaCha20-Poly1305 construction used in TLS today. (See the sketch after this list.)
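A sketch of that nonce-extension trick (hchacha20 and chacha20poly1305_encrypt are hypothetical helper names here, not a specific library's API):

def xchacha20poly1305_encrypt(key: bytes, nonce24: bytes, aad: bytes, plaintext: bytes):
    assert len(key) == 32 and len(nonce24) == 24
    # Derive a one-time subkey from the key and the first 16 bytes of the nonce.
    subkey = hchacha20(key, nonce24[:16])
    # The remaining 8 nonce bytes become the tail of a standard 12-byte ChaCha20 nonce.
    short_nonce = b"\x00\x00\x00\x00" + nonce24[16:]
    return chacha20poly1305_encrypt(subkey, short_nonce, aad, plaintext)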
For application-layer cryptography, XChaCha20-Poly1305 contains most of the properties you’d want from an authenticated mode.
However, like AES-GCM (and all other Polynomial MACs I’ve heard of), it is not message committing.
The Gimli Permutation
For lightweight cryptography (n.b. important for IoT), the Gimli permutation (e.g. employed in libhydrogen) is an attractive option.
Gimli is a Round 2 candidate in NIST's Lightweight Cryptography project. The Gimli permutation offers a lot of applications: a hash function, message authentication, encryption, etc.
Critically, it’s possible to construct a message-committing protocol out of Gimli that will hit a lot of the performance goals important to embedded systems.
Closing Remarks
Despite my personal disdain for AES-GCM, if you're using it as intended by cryptographers, it's good enough.
Don't throw AES-GCM out just because of my opinions. It's very likely the best option you have.
Although I personally dislike AES and GCM, I’m still deeply appreciative of the brilliance and ingenuity that went into both designs.
My desire is for the industry to improve upon AES and GCM in future cipher designs so we can protect more people, from a wider range of threats, in more diverse protocols, at a cheaper CPU/memory/time cost.
We wouldn’t have a secure modern Internet without the work of Vincent Rijmen, Joan Daemen, John Viega, David A. McGrew, and the countless other cryptographers and security researchers who made AES-GCM possible.
Change Log
- 2021-10-26: Added section on H Reuse and Multi-User Security.
https://soatok.blog/2020/05/13/why-aes-gcm-sucks/
#AES #AESGCM #cryptography #GaloisCounterMode #opinion #SecurityGuidance #symmetricCryptography
Let me say up front, I’m no stranger to negative or ridiculous feedback. It’s incredibly hard to hurt my feelings, especially if you intend to. You don’t openly participate in the furry fandom since 2010 without being accustomed to malevolence and trolling. If this were simply a story of someone being an asshole to me, I would have shrugged and moved on with my life.
It’s important that you understand this, because when you call it like you see it, sometimes people dismiss your criticism with “triggered” memes. This isn’t me being offended. I promise.
My recent blog post about crackpot cryptography received a fair bit of attention in the software community. At one point it was on the front page of Hacker News (which is something that pretty much never happens for anything I write).
Unfortunately, that also means I crossed paths with Zed A. Shaw, the author of Learn Python the Hard Way and other books often recommended to neophyte software developers.
As someone who spends a lot of time trying to help newcomers acclimate to the technology industry, there are some behaviors I’ve recognized in technologists over the years that makes it harder for newcomers to overcome anxiety, frustration, and Impostor Syndrome. (Especially if they’re LGBTQIA+, a person of color, or a woman.)
Normally, these are easily correctable behaviors exhibited by people who have good intentions but don’t realize the harm they’re causing–often not by what they’re saying, but by how they say it.
Sadly, I can’t be so generous about… whatever this is:
https://twitter.com/lzsthw/status/1359659091782733827
Having never before encountered a living example of a poorly-written villain so hostile towards the work I do to help disadvantaged people thrive in technology careers, I sought to clarify Shaw's intent.
https://twitter.com/lzsthw/status/1359673331960733696
https://twitter.com/lzsthw/status/1359673714607013905
This is effectively a very weird hybrid of an oddly-specific purity test and a form of hazing ritual.
Let’s step back for a second. Can you even fathom the damage attitudes like this can cause? I can tell you firsthand, because it happened to me.
Interlude: Amplified Impostor Syndrome
In the beginning of my career, I was just a humble web programmer. Due to a long story I don’t want to get into now, I was acquainted with the culture of black-hat hacking that precipitates the DEF CON community.
In particular, I was exposed to the writings of a malicious group called Zero For 0wned, which made sport of hunting "skiddiez" and preached a very "shut up and stay in your lane" attitude:
Geeks don't really come to HOPE to be lectured on the application of something simple, with very simple means, by a 15 year old. A combination of all the above could be why your room wasn't full. Not only was it fairly empty, but it emptied at a rapid rate. I could barely take a seat through the masses pushing me to escape. Then when I thought no more people could possibly leave, they kept going. The room was almost empty when I gave in and left also. Heck, I was only there because we pwned the very resources you were talking about.
Zero For 0wned
My first security conference was B-Sides Orlando in 2013. Before the conference, I had been hanging out in the #hackucf IRC channel and had known about the event well in advance (and got along with all the organizers and most of the would-be attendees), and considered applying to their CFP.
I ultimately didn’t, solely because I was worried about a ZF0-style reception.
I had no reference frame for other folks’ understanding of cryptography (which is my chosen area of discipline in infosec), and thought things like timing side-channels were “obvious”–even to software developers outside infosec. (Such is the danger of being self-taught!)
“Geeks don’t really come to B-Sides Orlando to be lectured on the application of something simple, with very simple means,” is roughly how I imagined the vitriol would be framed.
If it can happen to me, it can happen to anyone interested in tech. It’s the responsibility of experts and mentors to spare beginners from falling into the trappings of other peoples’ grand-standing.
Pride Before Destruction
With this in mind, let’s return to Shaw. At this point, more clarifying questions came in, this time from Fredrick Brennan.
https://twitter.com/lzsthw/status/1359712275666505734
What an arrogant and bombastic thing to say!
At this point, I concluded that I can never again, in good conscience, recommend any of Shaw’s books to a fledgling programmer.
If you’ve ever published book recommendations before, I suggest auditing them to make sure you’re not inadvertently exposing beginners to his harmful attitude and problematic behavior.
But while we’re on the subject of Zed Shaw’s behavior…
https://twitter.com/lzsthw/status/1359714688972582916
If Shaw thinks of himself as a superior cryptography expert, surely he’s published cryptography code online before.
And surely, it will withstand a five-minute code review from a gay furry blogger who never went through Shaw’s prescribed hazing ritual to rediscover specifically the known problems in OpenSSL circa Heartbleed and is therefore not as much of a cryptography expert?
(Art by Khia.)
May I Offer You a Zero-Day in This Trying Time?
One of Zed A. Shaw’s Github projects is an implementation of SRP (Secure Remote Password)–an early Password-Authenticated Key Exchange algorithm often integrated with TLS (to form TLS-SRP).
Zed Shaw’s SRP implementation
Without even looking past the directory structure, we can already see that it implements an algorithm called TrueRand, about which cryptographer Matt Blaze has this to say:
https://twitter.com/mattblaze/status/438464425566412800
As noted by the README, Shaw stripped out all of the “extraneous” things and doesn’t have all of the previous versions of SRP “since those are known to be vulnerable”.
So given Shaw’s previous behavior, and the removal of vulnerable versions of SRP from his fork of Tom Wu’s libsrp code, it stands to reason that Shaw believes the cryptography code he published would be secure. Otherwise, why would he behave with such arrogance?
SRP in the Grass
Heads up! If you aren't cryptographically or mathematically inclined, this section might be a bit dense for your tastes. (Art by Scruff.)
When I say SRP, I’m referring to SRP-6a. Earlier versions of the protocol are out of scope; as are proposed variants (e.g. ones that employ SHA-256 instead of SHA-1).
Professor Matthew D. Green of Johns Hopkins University (who incidentally used to proverbially shit on OpenSSL in the way that Shaw expects everyone to, except productively) dislikes SRP but considered the protocol “not obviously broken”.
However, a secure protocol doesn’t mean the implementations are always secure. (Anyone who’s looked at older versions of OpenSSL’s BigNum library after reading my guide to side-channel attacks knows better.)
There are a few ways to implement SRP insecurely:
- Use an insecure random number generator (e.g. TrueRand) for salts or private keys.
- Fail to use a secure set of parameters (q, N, g).
To expand on this, SRP requires q be a Sophie-Germain prime and N be its corresponding Safe Prime (N = 2q + 1). The standard Diffie-Hellman primes (MODP) are not sufficient for SRP; see the sketch after this list.
This security requirement exists because SRP requires an algebraic structure called a ring, rather than a cyclic group (as per Diffie-Hellman).
- Fail to perform the critical validation steps as outlined in RFC 5054.
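As an illustration of the second requirement, here's a rough sketch of a runtime sanity check (using sympy's primality test purely for demonstration; real implementations should rely on a vetted SRP group from RFC 5054 rather than checking arbitrary parameters at runtime):

from sympy import isprime

def is_safe_srp_group(N: int, g: int) -> bool:
    # N must be a safe prime: N = 2q + 1 where q is also prime
    # (i.e. q is a Sophie Germain prime).
    q = (N - 1) // 2
    if not (isprime(N) and isprime(q)):
        return False
    # A minimal sanity check on the generator; a real implementation
    # would also verify g generates a large subgroup mod N.
    return g not in (0, 1, N - 1)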
In one way or another, Shaw’s SRP library fails at every step of the way. The first two are trivial:
- We’ve already seen the RNG used by srpmin. TrueRand is not a cryptographically secure pseudo random number generator.
- Zed A. Shaw’s srpmin only supports unsafe primes for SRP (i.e. the ones from RFC 3526, which is for Diffie-Hellman).
The third is more interesting. Let’s talk about the RFC 5054 validation steps in more detail.
Parameter Validation in SRP-6a
Retraction (March 7, 2021): There are two errors in my original analysis.
First, I misunderstood the behavior of SRP_respond() to involve a network transmission that an attacker could fiddle with. It turns out that this function doesn't do what its name implies.
Additionally, I was using an analysis of SRP3 from 1997 to evaluate code that implements SRP6a. u isn't transmitted, so there's no attack here.
I've retracted these claims (but you can find them on an earlier version of this blog post via archive.org). The other SRP security issues still stand; this erroneous analysis only affects the u validation issue.
Vulnerability Summary and Impact
That’s a lot of detail, but I hope it’s clear to everyone that all of the following are true:
- Zed Shaw’s library’s use of TrueRand fails the requirement to use a secure random source. This weakness affects both the salt and the private keys used throughout SRP.
- The library in question ships support for unsafe parameters (particularly for the prime, N), which according to RFC 5054 can leak the client’s password.
Salts and private keys are predictable and the hard-coded parameters allow passwords to leak.
But yes, OpenSSL is the real problem, right?
(Art by Khia.)
Low-Hanging ModExp Fruit
Shaw’s SRP implementation is pluggable and supports multiple back-end implementations: OpenSSL, libgcrypt, and even the (obviously not constant-time) GMP.
Even in the OpenSSL case, Shaw doesn't set the BN_FLG_CONSTTIME flag on any of the inputs before calling BN_mod_exp() (or, failing that, inside BigIntegerFromInt).
As a consequence, this is additionally vulnerable to a local-only timing attack that leaks your private exponent (which is the SHA1 hash of your salt and password). Although the literature on timing attacks against SRP is sparse, this is one of those cases that’s obviously vulnerable.
Exploiting the timing attack against SRP requires the ability to run code on the same hardware as the SRP implementation. Consequently, it’s possible to exploit this SRP ModExp timing side-channel from separate VMs that have access to the same bare-metal hardware (i.e. L1 and L2 caches), unless other protections are employed by the hypervisor.
Leaking the private exponent is equivalent to leaking your password (in terms of user impersonation), and knowing the salt and identifier further allows an attacker to brute force your plaintext password (which is an additional risk for password reuse).
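To see why a leaked private exponent is so damaging, here is a minimal sketch of the relationship, using the x = SHA1(salt || SHA1(username || ":" || password)) formula from RFC 5054. The salt, username, and wordlist below are hypothetical values for illustration.

```python
import hashlib


def srp_x(salt: bytes, username: bytes, password: bytes) -> int:
    # RFC 5054: x = SHA1(salt || SHA1(username || b":" || password))
    inner = hashlib.sha1(username + b":" + password).digest()
    return int.from_bytes(hashlib.sha1(salt + inner).digest(), "big")


def dictionary_attack(leaked_x: int, salt: bytes, username: bytes, wordlist):
    """With x (from the timing leak) and the salt, an offline guess-and-check
    recovers the plaintext password; no further interaction is needed."""
    for guess in wordlist:
        if srp_x(salt, username, guess) == leaked_x:
            return guess
    return None


salt = bytes.fromhex("beefcafe" * 4)
x = srp_x(salt, b"soatok", b"hunter2")

# x alone already suffices to impersonate the client; the dictionary attack
# additionally recovers the password itself (hello, password reuse).
assert dictionary_attack(x, salt, b"soatok", [b"password", b"hunter2"]) == b"hunter2"
```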
Houston, The Ego Has Landed
Earlier, when I mentioned the black hat hacker group Zero For 0wned and the negative impact of their hostile rhetoric, I omitted an important detail: some of the first words they included in their first ezine.
For those of you that look up to the people mentioned, read this zine, realize that everyone makes mistakes, but only the arrogant ones are called on it.
If Zed A. Shaw were a kinder or humbler person, you wouldn’t be reading this page right now. I have a million things I’d rather be doing than exposing the hypocrisy of an arrogant jerk who managed to bullshit his way into the privileged position of educating junior developers through his writing.
If I didn’t believe Zed Shaw was toxic and harmful to his very customer base, I certainly wouldn’t have publicly dropped zero-days in the code he published while engaging in shit-slinging at others’ work and publicly shaming others for failing to meet arbitrarily specific purity tests that don’t mean anything to anyone but him.
But as Dan Guido said about Time AI:
https://twitter.com/veorq/status/1159575230970396672
It’s high time we stopped tolerating Zed’s behavior in the technology community.
If you want to mitigate impostor syndrome and help more talented people succeed with their confidence intact, boycott Zed Shaw’s books. Stop buying them, stop stocking them, stop recommending them.
Learn Decency the Hard Way
(Updated on February 12, 2021)
One sentiment and question that came up a few times since I originally posted this is, approximately, “Who cares if he’s a jerk and a hypocrite if he’s right?”
But he isn’t. At best, Shaw almost has a point about the technology industry’s over-dependence on OpenSSL.
Shaw’s weird litmus test about whether or not my blog (which is less than a year old) had said anything about OpenSSL during the “20+ years it was obviously flawed” isn’t a salient critique of this problem. Without a time machine, there is no actionable path to improvement.
You can be an inflammatory asshole and still have a salient point. Shaw had neither the point nor the tact, all while demonstrating the worst kind of conduct to expose junior developers to if we want to get ahead of the rampant Impostor Syndrome that plagues us.
This is needlessly destructive to his own audience.
Generally the only people you’ll find who outright like this kind of abusive behavior in the technology industry are the self-proclaimed “neckbeards” that live on the dregs of elitist chan culture and desire for there to be a priestly technologist class within society, and furthermore want to see themselves as part of this exclusive caste–if not at the top of it. I don’t believe these people have anyone else’s best interests at heart.
So let’s talk about OpenSSL.
OpenSSL is the Manifestation of Mediocrity
OpenSSL is everywhere, whether you realize it or not. Any programming language that provides a crypto module (Erlang, Node.js, Python, Ruby, PHP) binds against OpenSSL libcrypto.
OpenSSL kind of sucks. It used to be a lot worse. A lot of people have spent the past 7 years of their careers trying to make it better.
A lot of OpenSSL's suckage is because it's written mostly in C, which isn't memory-safe. (There are also some Perl scripts to generate Assembly code, and probably some other crazy stuff under the hood I'm not aware of.)
A lot of OpenSSL’s suckage is because it has to be all things to all people that depend on it, because it’s ubiquitous in the technology industry.
But most of OpenSSL's outstanding suckage is because, like most cryptography projects, its API was badly designed. Sure, it works well enough as a Swiss army knife for experts, but there are too many sharp edges and unsafe defaults. Further, because so much of the world depends on these legacy APIs, it's difficult (if not impossible) to improve the code quality without making upgrades a miserable task for most of the software industry.
What Can We Do About OpenSSL?
There are two paths forward.
First, you can contribute to the OpenSSL 3.0 project, which has a pretty reasonable design document that almost nobody outside of the OpenSSL team has probably ever read before. This is probably the path of least resistance for most of the world.
Second, you can migrate your code to not use OpenSSL. For example, all of the cryptography code I’ve written for the furry community to use in our projects is backed by libsodium rather than OpenSSL. This is a tougher sell for most programming languages–and, at minimum, requires a major version bump.
Both paths are valid. Improve or replace.
But what’s not valid is pointlessly and needlessly shit-slinging open source projects that you’re not willing to help. So I refuse to do that.
Anyone who thinks that makes me less of a cryptography expert should feel welcome to not just unfollow me on social media, but to block on their way out.
https://soatok.blog/2021/02/11/on-the-toxicity-of-zed-a-shaw/
#author #cryptography #ImpostorSyndrome #PAKE #SecureRemotePasswordProtocol #security #SRP #Technology #toxicity #vuln #ZedAShaw #ZeroDay
Sometimes my blog posts end up on social link-sharing websites with a technology focus, such as Lobste.rs or Hacker News. On a good day, this presents an opportunity to share one's writing with a larger audience and, more importantly, solicit a wider variety of feedback from one's peers.
However, sometimes you end up with feedback like this, or this:
Apparently my fursona is ugly, and therefore I’m supposed to respect some random person’s preferences and suppress my identity online.
I’m no stranger to gatekeeping in online communities, internet trolls, or bullying in general. This isn’t my first rodeo, and it won’t be my last.
These kinds of comments exist to send a message not just to me, but to anyone else who’s furry or overtly LGBTQIA+: You’re weird and therefore not welcome here.
Of course, the moderators rarely share their views.
https://twitter.com/pushcx/status/1281207233020379137
Because of their toxic nature, there is only one appropriate response to these kinds of comments: Loud and persistent spite.
So here’s some more art I’ve commissioned or been gifted of my fursona over the years that I haven’t yet worked into a blog post:
Art by kazetheblaze
Art by leeohfox
Art by Diffuse Moose
If you hate furries so much, you will be appalled to learn that factoids about my fursona species have landed in LibreSSL's source code (decoded).
Never underestimate furries, because we make the Internets go.
I will never let these kinds of comments discourage me from being open about my hobbies, interests, or personality. And neither should anyone else.
If you don’t like my blog posts because I’m a furry but still find the technical content interesting, know now and forever more that, when you try to push me or anyone else out for being different, I will only increase the fucking thing.
Header art created by @loviesophiee and inspired by floccinaucinihilipilification.
https://soatok.blog/2020/07/09/a-word-on-anti-furry-sentiments-in-the-tech-community/
#antiFurryBullying #cyberculture #furry #HackerNews #LobsteRs #Reddit
I probably don’t need to remind anyone reading this while it’s fresh about the current state of affairs in the world, but for the future readers looking back on this time, let me set the stage a bit.
The Situation Today
(By “Today”, I mean early May 2020, when I started writing this series.)
In the past two months, over 26 million Americans have filed for unemployment, and an additional 14 million have been unable to file.
Federal Reserve Chairman Jerome Powell says we're in the worst economy ever.
In a desperate bid of economic necromancy, many government officials want to put millions more Americans at risk of COVID-19 before we can develop a vaccine and effective treatment. And we still don’t even know the long-term effects of the virus.
I’m not interested in discussing the politics of this pandemic or who to blame; I’ll leave that to everyone else with an opinion. Instead, I want to acknowledge two facts that most people probably already know:
- This was mostly avoidable with competent leadership and responsible preparation
- Most of us have rough times ahead of us
I can’t do anything about the first point (although most people are focused on it), but I want to try to alleviate the second point.
What This Series is About
Whether you lost your job and need an income to survive, or you’re one of the essential workers wanting to avoid being sacrificed by politicians for the sake of economic necromancy, I wrote this guide to help you transition into a technology career with little-to-no tech experience.
This is not a magic bullet! It will require time, focus, and effort.
But if you follow the advice on the subsequent posts in this series, you will at least have another option available to you. The value of choice, especially when you otherwise have none, is difficult to overstate.
I am not selling anything, nor are there ads on these pages.
This entire series is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Why Work in Tech?
Technology careers aren't everyone's cup of tea, and they might be far from your first choice, but there are a couple of advantages that you should be aware of, especially during this pandemic and lockdown:
- Most technology careers can be performed remotely.
- Most technology careers pay well.
The first point is especially important for folks living in rural areas hit hard by a lack of local employment opportunities.
A lot of the information and suggestions contained in this series may be applicable to other domains. However, my entire career has been in tech, so I cannot in good conscience speak to the requirements to gain employment in those industries.
Why Should We Trust You?
You shouldn’t. I encourage you to take everything I say with a grain of salt and fact-check any claims I make. Seriously.
My Background
I'm currently employed as a security engineer on a cryptography team at a larger company, although I don't even have a Bachelor's degree. I've worked with teams of all sizes on countless technology stacks.
I have been programming, in one form or another, since I was in middle school (about 18 years ago), although I didn’t start my professional career until 2011. I’ve been on both sides of bug bounty programs, including as my fursona. A nontrivial percentage of the websites on the Internet run security code I wrote under my professional name.
Art by Khia
My Motivation
Over the past few years, I’ve helped a handful of friends (some of them furries) transition into technology careers. I am writing this series, and distributing it for free because I want to scale up the effort I used to put into mentoring.
I’m writing this series under my furry persona, and drenching the articles with queer and furry art, to make it less palatable to bigots.
Art by Kerijiano
Series Contents
- Building Your Support Network and/or Team
- Mapping the Technology Landscape
- Learning the Fundamental Skills
- Choosing Your Path
- Starting and Growing an Open Source Project
- Building Your C.V.
- Getting Your First Tech Job
- Starting a Technology Company
- Career Growth and Paying It Forward
The first three entries are the most important.
The header art for this entire series was created by ScruffKerfluff.
https://soatok.blog/2020/06/08/furward-momentum-introduction/
Furward Momentum (Introduction)
- Building Your Support Network and/or Team
- Mapping the Technology Landscape
- Learning the Fundamental Skills
- Choosing Your Path
- Starting and Growing an Open Source Project
- Building Your C.V.
- Getting Your First Tech Job
- Starting a Technology Company
- Career Growth and Paying It Forward
If you’re reading this, I presume you want to pursue a career in technology, but you have little-to-no work experience to cite on your resume.
https://twitter.com/JibKodi/status/907290992289751046
This is ambitious; your success is not guaranteed. But with a little bit of care and a lot of dedication, you can pull it off!
But–and I cannot emphasize this enough–you won’t do it alone.
https://twitter.com/JibKodi/status/908742571014463491
Falsehoods People Believe About Success
It's difficult to talk about careers without stumbling into a lot of falsehoods and cognitive distortions that many of us have picked up from society over the years. Let's take a moment to acknowledge some of these myths and delusions so we can deprogram ourselves of them.
The myth of "the self-made man" needs to be retired. It's the trifecta of widespread, harmful, and incorrect. A lot of books have been sold to people in desperate pursuit of this myth. This genre is called "self help", which displays an amusing lack of self-awareness: If you could help yourself, what do you need the book for?
People who claim to be “self-made” are somewhere on a spectrum that ranges from breathtaking ignorance all the way to self-entitled narcissism that borders on solipsism.
I’m not here to stroke anyone’s ego or sell you anything.
Humans (and, I suppose, animals of the anthropomorphic variety) are a social species. Our career success cannot exist in a vacuum of our own ego. Anyone who wants to be self-made has doomed themselves from the outset.
No matter what you specialize in, nor how talented you are, you will always sink or swim largely based on how well you work with others. Being able to work effectively on your own is a smaller (but still important) component.
However, working well with others doesn’t mean being an ass-kisser or a push-over. You can and SHOULD say “No” to unreasonable demands. Value your time and assert your personal boundaries. If you don’t, nobody else will.
Some people seem to think (especially regarding women and queer folks) that others can get promoted through an organization by being attractive and/or sexually promiscuous. I like to ask them if they’ve ever tested their theory in their own life, and if not, why they believe it to be true.
Envy is just the shadow of narcissism, and it’s a bad look for everyone. Rather than disparaging others for career success you feel is unwarranted, wouldn’t your time be better spent on improving yourself (and helping the other people you feel are more deserving do the same, thereby cancelling out the relative trajectory of those you think are “unworthy”)?
The Simple Truth
If you want to change careers, you're going to need the support of your friends and family (chosen or otherwise).
If you don't have any friends, or your friends are unable or unwilling to support you in your ambitions, stop what you're doing and find people who will.
“But I don’t know anyone!”
I run a group on Telegram called Furry Technologists for furries interested in science and technology. Start there; I promise we don't bite.
Furry Technologists Group Logo
“What if I’m not a furry?”
No problem. There are plenty of venues for folks of every background to meet each other on the Internet.
For example, there's a Slack channel called LGBTQ in Technology that I recommend to LGBTQIA+ people outside the furry fandom interested in tech work.
If you can’t locate a venue that you feel comfortable making friends in, ask around.
“What’s stopping us from pursuing our dreams? Nothing!” Art by ScruffKerfluff.
Form a Coalition or Task Force
For the remainder of this series, I'm going to assume that you've managed to gather a small group of 3-6 people (including yourself) to pursue this journey together.
You can think of this as a coalition or task force, since your scope is a little bit wider and more ambitious than "study buddies": You will be helping each other and holding each other accountable throughout this whole process. (Some people prefer to call such a team a "think tank". That's fine too.)
So choose your team carefully.
On Romance and Technology Careers
Open and/or polyamorous relationships are valid, but if you're in one and considering pursuing a career change with only your sexual/romantic partners to support you, you may want to reconsider.
Pursuing a career change is somewhat destabilizing (unless you're already on shaky ground because the economy is totally fucked). If your only support network consists of people you're intimate with, one of two things can go wrong:
- Relationship troubles can throw a wrench in your plans to pursue your career change, especially if the tension prevents you from studying together.
- Challenges and setbacks toward your new career can amplify minor conflicts with some or all of your partners.
The easiest way to mitigate these risks is to avoid the situation to begin with, and study with people outside of your polycule.
However, if all of you are negatively affected by the economy, you may have no option other than to work together.
Effective Communication Skills
The first thing you'll want to work on when you're starting this journey is effective communication. This may come naturally to a lot of people, so feel free to skip it if everyone's up to snuff. There's plenty to do and no time to waste, after all.
Be direct. If you naturally adopt an indirect communication style, this is even more important to deliberately focus on.
Avoid blame. Even if something is their fault, the responsibility is shared among everyone. The only thing we have to blame is blame itself.
Practice echoing questions before you answer them. This shows that you understand the question, and in emotionally intense discussions, it tells the other person that you’re listening.
If you’re a book person, Thinking, Fast and Slow and Nonviolent Communication are both worth reading.
If you’re more of a video person, you can get a lot of mileage out of these TED Talks by Julian Treasure:
https://www.ted.com/talks/julian_treasure_how_to_speak_so_that_people_want_to_listen?language=en
https://www.ted.com/talks/julian_treasure_5_ways_to_listen_better/transcript?language=en
You'll never be quite finished polishing up your soft skills, but that's okay! You only need to be better than 50% of the people you'll encounter to make a positive impression, and a lot of people downright suck at it. Their weakness is your opportunity.
Keeping Yourself Honest
Good intentions never work, you need good mechanisms to make anything happen.
Jeff Bezos, the richest man in modern history (source)
As you work through the remainder of this series, whether you're actively working together or focusing on different things independently and then meeting up routinely to compare notes, you'll want to go a step beyond verbally agreeing to meet at a designated time.
Write it down. Send calendar invites if you have to.
Don’t make it convenient for things to slip your mind, or else they will.
A huge part of changing careers is developing new habits, and the two that will serve you well universally in technology are taking notes and putting things on calendars.
And when (not if, when) something does slip for someone, have a conversation with them. Maybe you need to slow down so they can catch up. Maybe something happened in their personal life that disrupted their flow and they need to collect themselves. Be empathetic.
What You Will Be Doing Together
So you've built a coalition/team. What do you do with it?
In the beginning, you will be looking at the different roles in the technology industry and trying to ascertain which jobs you'll most likely succeed at. We'll cover this in the next section, Mapping the Technology Landscape.
Once you have even a vague idea of where you're headed, your coalition will be studying together to Learn the Fundamental Skills. Even if you're all specializing in vastly different areas with no overlapping curricula, you'll be pleasantly surprised by how effective the question "Can I explain this concept to someone outside of this specialized domain?" is for measuring your own comprehension of the subject.
After you’ve got the hang of the fundamentals of your chosen discipline and had a chance to work together, the next step is Choosing Your Path: You must collectively decide whether you want to work for an existing company or start your own together. There are advantages and disadvantages to both choices, but in the beginning you may want to at least prepare to get hired rather than start out from scratch.
Regardless of your destination, the next step will be to gain some experience. For this step, I recommend Starting and Growing an Open Source Project. If you’re choosing the path of an employee, this will give you a chance to gain experience. If you’re choosing the path of an entrepreneur, this will give you a chance to develop a prototype solution for the kinds of problems you want to solve professionally. In either case, an open source project is the best opportunity to check your own understanding with other teams/coalitions going through a similar journey and learn from each other.
With a bit of experience under your belt from the previous step, the next step is Building Your C.V. Notice I didn’t call it a résumé. A résumé is just a document; a C.V. is also about your reputation and accomplishments within an industry.
Naturally, the step that follows is either Getting Your First Tech Job or Starting a Technology Company, depending on which path you chose after learning the fundamental skills.
The very first job of your technology career may not be what you hope for. Mine definitely wasn't, but your mileage may vary. Thus, the final entry in this series is dedicated to Career Growth and Paying It Forward. If you're not yet happy with where you landed, this section should help you grow your skills further and attain the role you were seeking from the outset of your journey. When you finally get to where you want to be, you'll very likely be adept at helping your friends and community members escape from unfavorable life circumstances.
Next: Mapping the Technology Landscape
https://soatok.blog/furward-momentum-building-your-support-network-and-or-team/
Tonight on InfoSec Twitter, this gem was making the rounds:
Hello cybersecurity and election security people,
I sometimes embed your tweets in the Cybersecurity 202 newsletter. Some of you have a habit of swearing right in the middle of an otherwise deeply insightful tweet that I’d like to use. Please consider not doing this.
Best,
Joe
Identity redacted.
As tempting as it is to just senselessly dunk on the guy, in the spirit of fairness, let’s list the things he did right:
- His tweet was politely worded.
It’s something? He could’ve been another Karen, after all!
What Joe got wrong with this tweet is just the latest example of a widespread issue in and around the security community–especially on social media and content aggregator websites.
The structure of the problem goes like this:
- Someone: “Here’s some content I made and decided to share for free.”
- Person: “Your use of {profanity, cringe-inducing puns, work-safe furry art} (select one) prohibits me from using your content to further my own career goals. You should change what you’re doing.”
It’s a problem I’ve personally been on the receiving end of. A lot. I even wrote a post about this before, although that focused specifically on the anti-furry sentiment. Unfortunately, this problem is bigger than being repulsed by cute depictions of anthropomorphic animals (which, when sincerely held, are often thinly-veiled dog-whistles for homophobia).
Superficial Professionalism Can Fuck Right Off!
(Art by Khia.)
I totally sympathize with information security professionals who desire to be taken seriously by their business colleagues. That’s why sometimes you’ll see them don a three-piece suit, style their hair like every other corporate drone, and adopt meaningless corporate jargon as if any of it makes sense. You’re doing what you have to do to put food on your table and pay your bills. You’re not a problem.
The problem happens when this desire to appear professional leaks outside of the self and gets projected onto one’s peers.
“Knock it off, guys! You’re making it harder for me to blend in with these soulless wretches–I mean, the finance department!”
How about “No”?
Information Security Is More Than Just a Vocation
I’ve lost count of the hackers I’ve met over the years–white hat hackers, to be clear–who hack for the sheer fun and joy of it, rather than out of obligation to their corporate masters.
Information security–and all of its sub-disciplines, including cryptography–can simultaneously be a very serious and respectable professional discipline, and a hobby for nerds to enjoy.
The sheer entitlement of expecting people who are just having fun with their own skills and experience to change what they’re doing because you stand to benefit from them changing their behavior is similar to another egocentric demand we hear a lot: The cry for “responsible” disclosure.
Weirdness Yields Greatness
The strength of the information security community (read: not the industry, the community) is our diversity.
Pop quiz! What do a gothic enby (and the Bay Area’s only hacker), the woman who leads cryptography at a FAANG company, the man who discovered the BEAST and CRIME attacks against TLS, several of the most brilliant trans folks you’ll ever meet, an Italian immigrant, the co-inventor of the Whirlpool hash function, the Egyptian “father of SSL” mathematician, and some gay dude with a fursona who writes blog posts about software security for fun all have in common?
Sure, we all work in cryptography, but our demographics are all over the place.
This is a feature, not a bug.
https://twitter.com/BoozyBadger/status/1314383740999737344
If people who are sharing great content–be it on Twitter or on their personal blog–do something that prevents you from sharing their content with your coworkers, the problem isn’t us.
No, the real problem is your coworkers and bosses, and the unquestioned culture of anal-retentive diversity-choking bullshit that pervades business everywhere.
https://twitter.com/DrDeeGlaze/status/1308149586100322304
Remember, security industry:
Homogeneity leads to blind spots
If I find a zero-day in your product and want to share it alongside a dancing GIF of my fursona, that’s my prerogative. If you choose to ignore it because of the artistic expression, that’s entirely your choice to make, and your problem to deal with.
In closing, I'd like to offer a simple solution to the mess many technologists, managers, journalists, and even senior vice presidents find themselves in, wherein they can't readily be more accepting of profanity or quirky interests that are prone to superficial, knee-jerk judgments:
Question it.
Ask yourself “Why?” Ask your team “Why?” Ask your boss “Why?” and keep asking until everyone runs out of canned responses to your questions.
Aversion stems from one of two places:
- Fear of negative consequences
- Severe reverence towards tradition, even at the expense of innovation
But it’s very easy to confuse these two. You might think you’re avoiding a negative consequence when in reality you’re acting in service of the altar of tradition. Knock that shit out!
Tradition is what humans do when they’re out of ideas. “We don’t know how to be better, and we’ve always done it this way, so we’ll just keep doing what works.” Fuck tradition.
Art by @loviesophiee
Honorable Mentions
If you’re worried about looking bad, here are some notable entities that have shared my work since I started this blog in April 2020:
https://twitter.com/EFF/status/1307037184780832769
A Google RFC for AES-GCM in OpenTitan cites one of my blog posts.
There are probably others, but it’s late and I need sleep.
https://soatok.blog/2020/10/08/vanity-vendors-and-vulnerabilities/
#professionalism #Technology #Twitter #vanity
A question I get asked frequently is, “How did you learn cryptography?”
I could certainly tell everyone my history as a self-taught programmer who discovered cryptography when, after my website for my indie game projects kept getting hacked, I was introduced to cryptographic hash functions… but I suspect the question folks want answered is, "How would you recommend I learn cryptography?" rather than my cautionary tale about a poorly-implemented password hash being a gateway bug.
The Traditional Ways to Learn
There are two traditional ways to learn cryptography.
If you want a book to augment your journey in either traditional path, I recommend Serious Cryptography by Jean-Philippe Aumasson.
Academic Cryptography
The traditional academic way to learn cryptography involves a lot of self-study about number theory, linear algebra, discrete mathematics, probability, permutations, and field theory.
You'd typically start off with classical ciphers (Caesar, etc.) then work your way through the history of ciphers until you finally reach an introduction to the math underpinning RSA and Diffie-Hellman, and maybe get taught about Schneier's Law and cautioned to only use AES and SHA-2… and then you're left to your own devices unless you pursue a degree in cryptography.
The end result of people carelessly exploring this path is a lot of designs like Telegram's MTProto that do stupid things with exotic block cipher modes and misuse vanilla cryptographic hash functions as message authentication codes; often with textbook (a.k.a. unpadded) RSA, AES in ECB, CBC, or some rarely-used mode that the author had to write custom code to handle (using ECB mode under the hood), and (until recently) SHA-1.
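To make the ECB complaint concrete, here is a minimal illustration using the pyca/cryptography package: identical plaintext blocks encrypt to identical ciphertext blocks, so the structure of your message leaks straight through the cipher.

```python
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

# Two identical 16-byte plaintext blocks...
plaintext = b"ATTACK AT DAWN!!" * 2
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# ...become two identical ciphertext blocks, visible to any eavesdropper.
assert ciphertext[0:16] == ciphertext[16:32]
```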
People who decide to pursue cryptography as a serious academic discipline will not make these mistakes. They're far too adept to fall for the common ones. Instead, they run the risk of spending years involved in esoteric research about homomorphic encryption, cryptographic pairings, and other cool stuff that might not see real world deployment (outside of novel cryptocurrency hobby projects) for five or more years.
That is to say: Academia is a valid path to pursue, but it’s not for everyone.
If you want to explore this path, Cryptography I by Dan Boneh is a great starting point.
Security Industry-Driven Cryptography
The other traditional way to learn cryptography is to break existing cryptography implementations. This isn’t always as difficult as it may sound: Reverse engineering video games to defeat anti-cheat protections has led several of my friends into learning about cryptography.
For security-minded folks, the best place to start is the CryptoPals challenges. Another alternative is CryptoHack.
There are also plenty of CTF events all year round, but they're rarely a good cryptography learning exercise beyond what CryptoPals offers. (Though there are notable exceptions.)
A Practical Approach to Learning Cryptography
Art by Kyume.
If you’re coming from a computer programming background and want to learn cryptography, the traditional approaches carry the risk of Reasoning By Lego.
Instead, the approach I recommend is to start gaining experience with the safest, highest-level libraries and then slowly working your way down into the details.
This approach has two benefits:
- If you have to implement something while you're still learning, your knowledge and experience are slanted towards "use something safe and secure", not "hack together something with Blowfish in ECB mode and MD5 because they're familiar".
- You can let your own curiosity guide your education rather than follow someone else’s study guide.
To illustrate what this looks like, here’s how a JavaScript developer might approach learning cryptography, starting from the most easy-mode library and drilling down into specifics.
Super Easy Mode: DholeCrypto
Disclaimer: This is my project.
Dhole Crypto is an open source library, implemented in JavaScript and PHP and powered by libsodium, that tries to make security as easy as possible.
I designed Dhole Crypto for securing my own projects without increasing the cognitive load of anyone reviewing my code.
If you’re an experienced programmer, you should be able to successfully use Dhole Crypto in a Node.js/PHP project. If it does not come easy, that is a bug that should be fixed immediately.
Easy Mode: Libsodium
Using libsodium is slightly more involved than Dhole Crypto: Now you have to know what a nonce is, and take care to manage them carefully.
Advantage: Your code will be faster than if you used Dhole Crypto.
Libsodium is still pretty easy. If you use this cheat sheet, you can implement something secure without much effort. If you deviate from the cheat sheet, pay careful attention to the documentation.
If you’re writing system software (i.e. programming in C), libsodium is an incredibly easy-to-use library.
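For a sense of what "slightly more involved" means in practice, here is a minimal sketch using PyNaCl (one of libsodium's Python bindings). The only new responsibility compared to the easy-mode layer above is generating a fresh random nonce for every message and never reusing one under the same key.

```python
from nacl.secret import SecretBox
from nacl.utils import random as random_bytes

key = random_bytes(SecretBox.KEY_SIZE)        # 32-byte secret key
box = SecretBox(key)

# XSalsa20-Poly1305 uses a 24-byte nonce: random generation is fine,
# but the same (key, nonce) pair must never encrypt two different messages.
nonce = random_bytes(SecretBox.NONCE_SIZE)
ciphertext = box.encrypt(b"awoo", nonce)      # the nonce is prepended to the output

assert box.decrypt(ciphertext) == b"awoo"
```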
Moderate Difficulty: Implementing Protocols
Let’s say you’re working on a project where libsodium is overkill, and you only need a few cryptography primitives and constructions (e.g. XChaCha20-Poly1305). A good example: In-browser JavaScript.
Instead of forcing your users to download the entire Sodium library, you might opt to implement a compatible construction using JavaScript implementations of these primitives.
Since you have trusted implementations to test your construction against, this should be a comparatively low-risk effort (assuming the primitive implementations are also secure), but it’s not one that should be undertaken without all of the prior experience.
Note: At this stage you are not implementing the primitives, just using them.
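Here is a minimal sketch of what that cross-checking can look like: two independent ChaCha20-Poly1305 implementations (pyca/cryptography and PyNaCl's libsodium bindings) compared on random inputs. In practice, one side would be your own construction and the other the trusted reference.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from nacl.bindings import crypto_aead_chacha20poly1305_ietf_encrypt

for _ in range(100):
    key = os.urandom(32)
    nonce = os.urandom(12)      # the IETF variant uses a 96-bit nonce
    message = os.urandom(64)
    aad = os.urandom(16)

    reference = ChaCha20Poly1305(key).encrypt(nonce, message, aad)
    candidate = crypto_aead_chacha20poly1305_ietf_encrypt(message, aad, nonce, key)

    # Both implement RFC 8439, so ciphertext || tag must match byte-for-byte.
    assert reference == candidate
```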
Hard Difficulty: Designing Protocols and Constructions
Repeat after me: “I will not roll my own crypto before I’m ready.” Art by AtlasInu.
To distinguish: TLS and Noise are protocols. AES-GCM and XChaCha20-Poly1305 are constructions.
Once you’ve implemented protocols and constructions, the next step in your self-education is to design new ones.
Maybe you want to combine XChaCha20 with a MAC based on the BLAKE3 hash function, with some sort of SIV to make the whole shebang nonce-misuse resistant?
You wouldn’t want to dive headfirst into cryptography protocol/construction design without all of the prior experience.
Very Hard Mode: Implementing Cryptographic Primitives
It’s not so much that cryptography primitives are hard to implement. You could fit RC4 in a tweet before they raised the character limit to 280. (Don’t use RC4 though!)
The hard part is that they’re hard to implement securely. See also: LadderLeak.
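To illustrate just how small a primitive can be, here is all of RC4 in (deliberately readable, non-golfed) Python. Seriously, don't use it: the keystream is biased, and this naive implementation makes no attempt at constant-time behavior.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Toy RC4, for illustration only."""
    # Key scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # Pseudo-random generation algorithm (PRGA), XORed into the data
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)


# Stream ciphers are their own inverse: encrypting twice round-trips the data.
assert rc4(b"Key", rc4(b"Key", b"Plaintext")) == b"Plaintext"
```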
Usually when you get to this stage in your education, you will have also picked up one or both of the traditional paths to augment your understanding. If not, you really should.
Nightmare Mode: Designing Cryptography Primitives
A lot of people like to dive straight into this stage early in their education. This usually ends in tears.
If you’ve mastered every step in my prescribed outline and pursued both of the traditional paths to the point that you have a novel published attack in a peer-reviewed journal (and mirrored on ePrint), then you’re probably ready for this stage.
Bonus: If you’re a furry and you become a cryptography expert, you can call yourself a cryptografur. If you had no other reason to learn cryptography, do it just for pun!
Header art by circuitslime.
https://soatok.blog/2020/06/10/how-to-learn-cryptography-as-a-programmer/
#cryptography #education #programming #Technology
A paper was published on the IACR's ePrint archive yesterday, titled LadderLeak: Breaking ECDSA With Less Than One Bit of Nonce Leakage.
The ensuing discussion on /r/crypto led to several interesting questions that I thought would be worth capturing and answering in detail.
What’s Significant About the LadderLeak Paper?
This is best summarized by Table 1 from the paper.
The sections labeled “This work” are what’s new/significant about this research.
The paper authors were able to optimize existing attacks exploiting one-bit leakages against 192-bit and 160-bit elliptic curves. They were further able to exploit leakages of less than one bit in the same curves.
How Can You Leak Less Than One Bit?
We're used to discrete quantities in computer science, but you can leak less than one bit of information in the case of side-channels. Biased modular reduction can also create a vulnerable scenario: if you know that the probability of a 0 or a 1 in a given position of the one-time number's bit-string (i.e. the most significant bit) is not 0.5/0.5 but some other ratio (e.g. 0.51/0.49), then over many samples you can assign a probability to a specific bit in your dataset.
If “less than one bit” sounds strange, that’s probably our fault for always rounding up to the nearest bit when we express costs in computer science.
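One common way to quantify "less than one bit": treat the side-channel as a noisy oracle that reports the secret bit correctly with some probability, and measure the mutual information per observation. This is a sketch of that framing, not the paper's actual methodology.

```python
from math import log2


def binary_entropy(p: float) -> float:
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))


def bits_leaked_per_observation(accuracy: float) -> float:
    """Mutual information between a uniform secret bit and a noisy observation
    that reports it correctly with the given probability (a binary symmetric channel)."""
    return 1.0 - binary_entropy(accuracy)


print(bits_leaked_per_observation(0.5))   # 0.0     -- a coin flip tells you nothing
print(bits_leaked_per_observation(0.51))  # ~0.0003 -- "less than one bit" per sample
print(bits_leaked_per_observation(1.0))   # 1.0     -- a classic one-bit leak
```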
What’s the Cost of the Attack?
Consult Table 3 from the paper for empirical cost data:
Table 3 from the LadderLeak paper.
How Devastating is LadderLeak?
First, it assumes a lot of things:
- That you’re using ECDSA with either sect163r1 or secp192r1 (NIST P-192). Breaking larger curves requires more bits of bias (as far as we know).
- That you’re using a cryptography library with cache-timing leaks.
- That you have a way to measure the timing leaks (and not just pilfer the ECDSA secret key; i.e. in a TPM setup). This threat model generally assumes some sort of physical access.
But if you can pull the attack off, you can successfully recover the device's ECDSA secret key. Which, for protocols like TLS, allows an attacker to impersonate a certificate-bearer (typically the server)… which is pretty devastating.
Is ECDSA Broken Now?
Non-deterministic ECDSA is not significantly more broken with LadderLeak than it already was by other attacks. LadderLeak does not break the Internet.
Fundamentally, LadderLeak doesn't really change the risk calculus. Bleichenbacher's attack framework for solving the Hidden Number Problem using Lattices was already practical, with sufficient samples.
There’s even a CryptoPals challenge about these attacks.
As an acquaintance put it, the authors made a time-memory trade-off with a leaky oracle. It’s a neat result worthy of publication, but we aren’t any minutes closer to midnight with this revelation.
Is ECDSA’s k-value Really a Nonce?
Ehhhhhhhhh, sorta. It's complicated!
Nonce in cryptography has always meant “number that must be used only once” (typically per key). See: AES-GCM.
Nonces are often confused for initialization vectors (IVs), which in addition to a nonce’s requirements for non-reuse must also be unpredictable. See: AES-CBC.
However, nonces and IVs can both be public, whereas ECDSA k-values MUST NOT be public! If you recover the k-value for a given signature, you can recover the secret key too.
That is to say, ECDSA k-values must be all of the above:
- Never reused
- Unpredictable
- Secret
- Unbiased
They’re really in a class of their own.
For that reason, it’s probably better to think of the k-value as a per-signature key than a simple nonce. (n.b. Many cryptography libraries actually implement them as a one-time ECDSA keypair.)
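To see why the k-value deserves that treatment, here is the recovery algebra in Python: given a single signature (r, s) over message hash z and the k that produced it, the private key d falls right out. The values below are toy stand-ins that only exercise the modular arithmetic (n is secp256k1's group order, and r would normally be the x-coordinate of [k]G); Python 3.8+ is assumed for pow(x, -1, n).

```python
import secrets

# Group order of secp256k1 (any prime group order works for the algebra).
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

d = secrets.randbelow(n - 1) + 1   # long-term private key
k = secrets.randbelow(n - 1) + 1   # the per-signature "nonce"
z = secrets.randbelow(n)           # message hash, reduced mod n
r = secrets.randbelow(n - 1) + 1   # stand-in for the x-coordinate of [k]G

# ECDSA signing equation: s = k^-1 * (z + r*d) mod n
s = (pow(k, -1, n) * (z + r * d)) % n

# Anyone who learns k for this one signature recovers the long-term key:
recovered = ((s * k - z) * pow(r, -1, n)) % n
assert recovered == d
```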
What’s the Difference Between Random and Unpredictable?
The HMAC-SHA256 output of a message under a secret key is unpredictable for anyone not in possession of said secret key. This value, though unpredictable, is not random, since signing the same message twice yields the same output.
A large random integer, when subjected to modular reduction by a non-Mersenne prime of the same magnitude, will be biased towards small values. This bias may be negligible, but it makes the bit string that represents the reduced integer more predictable, even though it's random.
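Both halves of that distinction are easy to demonstrate in a few lines of Python. The tiny modulus in the second half is just to make the bias visible without millions of samples.

```python
import hashlib
import hmac
import secrets
from collections import Counter

# Unpredictable but not random: the same key and message always yield the same tag.
key = secrets.token_bytes(32)
tag1 = hmac.new(key, b"example message", hashlib.sha256).digest()
tag2 = hmac.new(key, b"example message", hashlib.sha256).digest()
assert tag1 == tag2

# Random but biased: reducing a uniform value modulo a smaller, non-power-of-two
# modulus favors the low end of the output range.
counts = Counter(secrets.randbelow(256) % 200 for _ in range(100_000))
print(counts[10], counts[150])  # outputs below 56 show up roughly twice as often
```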
What Should We Do? How Should We Respond?
First, don't panic. This is interesting research and its authors deserve to enjoy their moment, but the sky is not falling.
Second, acknowledge that none of the attacks are effective against EdDSA.
If you feel the urge to do something about this attack paper, file a support ticket with all of your third-party vendors and business partners that handle cryptographic secrets to ask them if/when they plan to support EdDSA (especially if FIPS compliance is at all relevant to your work, since EdDSA is coming to FIPS 186-5).
Reason: With increased customer demand for EdDSA, more companies will adopt this digital signature algorithm (which is much more secure against real-world attacks). Thus, we can ensure an improved attack variant that actually breaks ECDSA doesn’t cause the sky to fall and the Internet to be doomed.
(Seriously, I don’t think most companies can overcome their inertia regarding ECDSA to EdDSA migration if their customers never ask for it.)
https://soatok.blog/2020/05/26/learning-from-ladderleak-is-ecdsa-broken/
#crypto #cryptography #digitalSignatureAlgorithm #ECDSA #ellipticCurveCryptography #LadderLeak
Update (2024-05-14): It’s time for furries to move away from Telegram.
A question I often get–especially from cryptography experts:
What is it with furries and Telegram?
https://twitter.com/Monochromemutt/status/1407005415099883527
No, they’re almost certainly not talking about that.
Most furries use Telegram to keep in touch with other members of our community. This leads many to wonder, “Why Telegram of all platforms?”
The answer is simple: Stickers.
(Art by Khia.)
Telegram was the first major chat platform that allowed custom sticker packs to be uploaded and used by its users. This led to the creation of a fuckton of sticker packs for peoples’ fursonas.
How many furry sticker packs are there? Well, my friend Nican started a project to collect and categorize them all. You can find their project online at bunnypa.ws.
https://twitter.com/Nican/status/1200229213627801600
As of this writing, there are over 230,000 stickers across over 7,300 sticker packs (including mine). It also supports inline search!
https://twitter.com/BunnyPawsBot/status/1345902008339898371
Additionally, there’s a very strong network effect at play: Furries are going to gravitate to platforms with a strong furry presence.
With that mystery out of the way, I’d like to share a few of my thoughts about Telegram as a platform and how to make it manageable.
Don’t Use Telegram As a Secure Messenger
Despite at least one practical attack against MTProto caused by its poor authentication, Telegram refuses to implement encryption that’s half as secure as the stuff I publish under my furry identity.
Instead, they ran a vapid “contest” and point to that as evidence of their protocol’s security.
If you’re a cryptography nerd, then you probably already understand that IND-CCA2 security is necessary for confidential messaging. You’re probably cautious enough to not depend on Telegram’s MTProto for privacy.
If you’re not a cryptography nerd, then you probably don’t care about any of this jargon or what it means.
It doesn’t help that they had another vulnerability that a renowned cryptography expert described as “the most backdoor-looking bug I’ve ever seen”.
(Art by Khia.)
So let’s be clear:
Telegram is best treated as a message board or a mailing list.
Use it for public communications, knowing full well that the world can read what you have to say. So long as that’s your threat model, you aren’t likely to ever get burned by the Durov family’s ego.
For anything that you’re not comfortable with being broadcast all over the Internet, you should use something more secure. Signal is the current recommended choice until something better comes along.
(Cwtch looks very good, but it’s not ready yet.)
Enable Folders to Make Notifications Reasonable
Last year, Telegram rolled out the ability to collect conversations, groups, and chats into folders. Most furries don’t know about this feature, because it doesn’t enable itself by default.
First, open the hamburger menu (on desktop) or click on your icon (on mobile), then click Settings.
Next, you’ll see an option for Folders.
You should see a button that says “Create New Folder”.
From here, you can include Chats or general types of Chats (All Groups, All Channels, All Personal Conversations) and then exclude specific entries.
Give it a name and press “Create”. After a bit of organizing, you might end up with a setup like this.
Now, here's the cool thing (but it sadly doesn't exist on all clients–use Telegram Desktop on Windows and Linux if you want it).
Once you’re done setting up your folders, back out to the main interface on Desktop and right click one of the folders, then press “Mark As Read”.
Finally, an easy button to zero out your notifications. Serenity at last!
Inbox Zero on Telegram? Hell yes!
(Art by Khia.)
Note: Doing this to the special Unread folder is equivalent to pressing Shift + ESC on Slack. You're welcome, Internet!
Make Yourself Undiscoverable
In the default configuration, if anyone has your phone number in their address book (n.b. queerphobic relatives) and they install Telegram, you’ll get a notification about them joining.
As you can imagine, that’s a bit of a terrifying prospect for a lot of people. Fortunately, you can turn this off.
Under Settings > Privacy and Security > Phone Number, you can limit the discovery to your contacts (n.b. in your phone’s address book).
Turn Off Notifications for Pinned Messages
Under Settings > Notifications, you will find the appropriate checkbox under the Events heading.
A lot of furry Telegram groups like to notify all users whenever they pin a message. These notifications will even override your normal preferences if you disabled notifications for that group.
Also, you’re probably going to want to disable notifications for every channel / group / rando with very few exceptions, or else Telegram will quickly get super annoying.
Increase the Interface Scale
The default font size for Telegram is tiny. This is bad for accessibility.
Fortunately, you can make the font bigger. Open the Settings menu and scroll down past the first set of options.
Set the interface scale to at least 150%. It will require Telegram to re-launch itself to take effect.
Don’t Rely on Persistent Message History
This is just a cautionary footnote, especially if you’re dealing with someone with a reputation for gaslighting: The other participant in a conversation can, at any point in time, completely or selectively erase messages from your conversation history.
However, this doesn’t delete any messages you’ve already forwarded–be it to your Saved Messages or to a private Channel.
Aside: This is why, when someone gets outed for being a terrible human being, the evidence is usually preserved as forwarded messages to a channel.
Although Telegram isn’t in the same league as Signal and WhatsApp, its user experience is good–especially if you’re a furry.
I hope with the tips I shared above, as well as resources like bunnypa.ws, the Furry Telegram experience will be greatly improved for everyone that reads my blog.
Addendum: Beware the Furry Telegram Group List
A few people have asked me, “Why don’t you tell folks about furry-telegram-groups.net and/or @furlistbot?”
The main reason is that a lot of the most popular groups on that listing are either openly or secretly run by a toxic personality cult called Furry Valley that I implore everyone to avoid.
https://soatok.blog/2021/06/22/a-furrys-guide-to-telegram/
#chat #communication #furries #furry #FurryFandom #privacySettings #stickers #Technology
I have been a begrudging user of Telegram for years simply because that's what all the other furries use, despite their cryptography being legendarily bad.
When I signed up, I held my nose and expressed my discontent at Telegram by selecting a username that's a dig at MTProto's inherent insecurity against chosen ciphertext attacks: IND_CCA3_Insecure.
Art: CMYKat
I wrote about Furries and Telegram before, and included some basic privacy recommendations. As I said there: Telegram is not a private messenger. You shouldn’t think of it as one.
Recent Developments
Telegram and Elon Musk have recently begun attacking Signal and trying to paint it as insecure.
Matthew Green has a Twitter thread (lol) about it, but you can also read a copy here (archive 1, archive 2, PDF).
https://twitter.com/matthew_d_green/status/1789688236933062767
https://twitter.com/matthew_d_green/status/1789689315624169716
https://twitter.com/matthew_d_green/status/1789690652399170013
https://twitter.com/matthew_d_green/status/1789691417721282958
Et cetera.
This is shitty, and exacerbates a growing problem on Telegram: The prevalence of crypto-bros and fascist groups using it to organize.
Why Signal is Better for Furries
First, Signal has sticker packs now. If you want to use mine, here you go.
For years, the main draw for furries to Telegram over Signal was sticker packs. This is a solved problem.
Second, you can set up a username and keep your phone number private. You don't need to give your phone number to strangers anymore!
(This used to be everyone’s criticism of Signal, but the introduction of usernames made it moot.)
Finally, it's trivial for Americans to set up a second Signal account using Twilio or Google Voice, so you can compartmentalize your furry posting from the phone number your coworkers or family are likely to know.
(Note: I cannot speak to how to deal with technology outside of America, because I have never lived outside America for any significant length of time and do not know your laws. If this is relevant to you, ask someone in your country to help figure out how to navigate technological and political issues pertinent to your country; I am not local to you and have no fucking clue.)
The last two considerations were really what stopped furries (or queer people in general, really) from using Signal.
Why Signal?
There are two broadly-known private messaging apps that use state-of-the-art cryptography to ensure your messages are private, and one of them is owned by Meta (a.k.a., Facebook, which owns WhatsApp). So Signal is the only real option in my book.
That being said, Cwtch certainly looks like it may be promising in the near future. However, I have not studied its cryptography in depth yet. Neither has it been independently audited to my knowledge.
It's worth pointing out that the lead developer of Cwtch wrote a book titled Queer Privacy, so she's overwhelmingly more likely to be receptive to the threat models faced by the furry community (which is overwhelmingly LGBTQ+).
For the sake of expedience, today, Signal is a “yes” and Cwtch is a hopeful “maybe”.
How I Setup a Second Signal Account
I own a Samsung S23, which means I can't just use the vanilla Android tutorials for setting up a second profile on my device. Instead, I had to use the "Secure Folder" feature. The Freedom of the Press Foundation has more guidance worth considering.
If you don't own a Samsung phone, you don't need to bother with this "Secure Folder" feature (as the links above will tell you). You can just set up a work profile and get the same result! You probably also can't access the same feature, since it's a Samsung exclusive. Don't sweat it.
I don’t know anything about Apple products, so I can’t help you there, but there’s probably a way to set it up for yourself too. (If not, maybe consider this a good reason to stop giving abusive corporations like Apple money?)
The other piece of the puzzle you need is a second phone number. Google Voice is one way to acquire one; the other is to set up a Twilio account. There are plenty of guides online for doing that.
(Luckily, I’ve had one of these for several years, so I just used that.)
Why does Signal require a phone number?
The historical reason is that Signal was a replacement for text messaging (a.k.a., SMS). That's probably still the official reason (though they don't support SMS anymore).
From what I understand, the Signal development team has always been much more concerned about privacy for people that own mobile phones, but not computers, than they were concerned about the privacy of people that own computers, but not mobile phones.
After all, if you pick a random less privileged person, especially homeless or from a poor country, they’re overwhelmingly more likely to have a mobile phone than a computer. This doesn’t scratch the itch of people who would prefer to use PGP, but it does prioritize the least privileged people’s use case.
Their workflow, therefore, optimized for people that own a phone number. And so, needing a phone number to sign up wasn’t ever a problem they worried about for the people they were most interested in protecting.
Fortunately, using Signal doesn’t immediately reveal your phone number to anyone you want to chat with, ever since they introduced usernames. You still need one to register.
Tell Your Friends
I understand that the network effect is real. But it's high time furries jettisoned Telegram as a community.
Lazy edit of the "Friendship Ended" meme
Finally, Signal is developed and operated by a non-profit. You should consider donating to them so that we can bring private messaging to the masses.
Addendum (2024-05-15)
I've been asked by several people about my opinions on other platforms and protocols.
Specifically, Matrix. I do not trust the Matrix developers to develop or implement a secure protocol for private messaging.
I don’t have an informed opinion about Signal forks (Session, Molly, etc.). Generally, I don’t review cryptography software for FOSS maximalists with skewed threat models unless I’m being paid to do so, and that hasn’t happened yet.
https://soatok.blog/2024/05/14/its-time-for-furries-to-stop-using-telegram/
#endToEndEncryption #furries #FurryFandom #privacy #Signal #Telegram
Pegasus Spyware Maker Said to Flout Federal Court as It Lobbies to Get Off U.S. Blacklist
#politics #technology #theintercept
posted by pod_feeder_v2
Pegasus Spyware Maker Said to Flout Federal Court as It Lobbies to Get Off U.S. Blacklist
On the same day lobbyists for NSO met with Rep. Pete Sessions, a lawyer from the lobbying firm gave $1,000 to “Pete Sessions for Congress.” Georgia Gee (The Intercept)
The 30-year-old internet backdoor law that came back to bite
News broke this weekend that China-backed hackers have compromised the wiretap systems of several U.S. telecom and internet providers, likely in an effort to gather intelligence on Americans.
The wiretap systems, as mandated under a 30-year-old U.S. federal law, are some of the most sensitive in a telecom or internet provider’s network, typically granting a select few employees nearly unfettered access to information about their customers, including their internet traffic and browsing histories.
But for the technologists who have for years sounded the alarm about the security risks of legally required backdoors, news of the compromises is the “told you so” moment they hoped would never come but knew one day would.
“I think it absolutely was inevitable,” Matt Blaze, a professor at Georgetown Law and expert on secure systems, told TechCrunch regarding the latest compromises of telecom and internet providers.
Fact is, any intentional backdoor is not going to be secure. Secrets don’t remain secret. That is just the way things are, and more so if more than one person knows about it.
“There’s no way to build a backdoor that only the ‘good guys’ can use,” said Signal president Meredith Whittaker, writing on Mastodon.
The theory around backdoors comes from the same era as changing your password every 30 days. Times have changed, and we should know better in 2024.
See techcrunch.com/2024/10/07/the-…
#Blog, #backdoors, #security, #technology
Cloudflare beats patent troll so badly it basically gives up: Patents will go Public
“Sable is a patent troll. It doesn’t make, develop, innovate, or sell anything. Sable IP is merely a shell entity formed to monetize (make money from) an ancient patent portfolio acquired by Sable Networks from Caspian Networks in 2006.”
Lately, these patent profiteers have targeted the open source community. The Cloud Native Computing Foundation and Linux Foundation last month strengthened ties with Unified Patents, a company focused on defending against predatory patent claims.
“In the end, Sable agreed to pay Cloudflare $225,000, grant Cloudflare a royalty-free license to its entire patent portfolio, and to dedicate its patents to the public, ensuring that Sable can never again assert them against another company,” said Terrell and Nemeroff.
Well, this is a big win for the small guys and open source projects, as patent trolls can put smaller players out of business and stifle innovation.

Unfortunately, it takes a Big Tech company with deep pockets to fight such patent trolls. Obviously the win benefits Cloudflare, but the benefits will flow far wider than Cloudflare itself.
See theregister.com/2024/10/03/pat…
#Blog, #patentrolls, #patents, #technology
12 Best Free and Open Source Steganography Tools
Steganography is the art and science of concealing messages in other messages in such a way that no one, apart from the sender and intended recipient, suspects the existence of the message. It’s a form of security through obscurity. Steganography is often used with cryptography. Plainly visible encrypted messages, no matter how unbreakable they are, arouse interest. This weakness is avoided with steganography.
In most cases, no one would even know there was a hidden message, so such messages are not usually subjected to attempts to crack them.
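To make the idea concrete, here is a minimal sketch of the least-significant-bit (LSB) technique that many of these tools build on: hiding a text message in the low bits of an image’s pixels. This is my own illustration in Python (assuming the Pillow library is installed), not code from any of the listed tools, and it omits the encryption, length headers, and error handling a real tool would add.

```python
# Minimal LSB steganography sketch (assumes Pillow: pip install pillow).
# Illustrative only -- real tools add encryption and robust framing.
from PIL import Image

def embed(cover_path: str, message: str, out_path: str) -> None:
    """Hide `message` in the least significant bits of an RGB image."""
    img = Image.open(cover_path).convert("RGB")
    data = message.encode("utf-8") + b"\x00"          # NUL terminator marks the end
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    pixels = list(img.getdata())
    if len(bits) > len(pixels) * 3:
        raise ValueError("message too long for this cover image")
    flat = [channel for px in pixels for channel in px]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit                # overwrite the lowest bit only
    new_pixels = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    out = Image.new("RGB", img.size)
    out.putdata(new_pixels)
    out.save(out_path, "PNG")                         # lossless format preserves the bits

def extract(stego_path: str) -> str:
    """Recover a NUL-terminated message from the image's least significant bits."""
    img = Image.open(stego_path).convert("RGB")
    flat = [channel for px in img.getdata() for channel in px]
    recovered = bytearray()
    for i in range(0, len(flat) - 7, 8):
        byte = sum((flat[i + j] & 1) << j for j in range(8))
        if byte == 0:
            break
        recovered.append(byte)
    return recovered.decode("utf-8", errors="replace")
```

Because each pixel value changes by at most 1, the output image looks identical to the original, which is the whole point of the technique.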
See linuxlinks.com/best-free-open-…
#Blog, #opensource, #privacy, #Steganography, #technology
It's been interesting reading this #reddit thread. People are understandably upset about the prospect of #youtube showing ads during the pause screen. Lots of people threatening to leave YouTube, but at the end of the day, most won't.
There are alternatives. I've been running #tilvids for over 4 years now. We have great content creators like @thelinuxEXP sharing content. All people have to do is actually start voting with their eyeballs...
https://www.reddit.com/r/technology/comments/1fkbjtk/youtube_confirms_your_pause_screen_is_now_fair/
#tech #Technology #google
SSDs have a secret way to protect your data when they fail
Many SSDs will use SMART to keep track of how close they are to failure, and when they cross a threshold that indicates failure is imminent, they will lock down and enter a read-only state. This means that you can’t write anything to them, but it’s also a clear sign to the user to get everything off of the drive while it still works. You can tell if your SSD has entered that state if you can’t unlock it to write to it.
This will be reassuring to many who think that if an SSD fails, it is basically unusable and the data is gone. So, if you’ve used an SSD for quite a while (a good many years) and it suddenly no longer boots, check it on another computer (its SMART stats should show whether it has failed). You should be able to clone it to a new SSD and carry on working with your data intact.
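If you want to check this yourself from another machine, something like the rough sketch below will do. It shells out to the real smartctl utility (from the smartmontools package) for the overall health verdict and to blockdev for the kernel’s read-only flag; the device path is just an example, and both commands typically need root.

```python
# Rough sketch: report a drive's SMART health verdict and the kernel's
# read-only flag. Device path is an example; usually requires root.
import json
import subprocess

def drive_status(device: str = "/dev/sda") -> None:
    # `smartctl -H --json` prints the overall SMART health assessment as JSON.
    result = subprocess.run(
        ["smartctl", "-H", "--json", device],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    passed = report.get("smart_status", {}).get("passed")
    print(f"{device}: SMART overall health passed = {passed}")

    # `blockdev --getro` prints 1 if the kernel treats the device as read-only.
    ro = subprocess.run(["blockdev", "--getro", device],
                        capture_output=True, text=True)
    print(f"{device}: kernel read-only flag = {ro.stdout.strip()}")

if __name__ == "__main__":
    drive_status()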
careful...
#careful #aliens #mobilehome #killyourtelevision #corporatemedia #technology #stupid #comic
"We want to bring organisations and content creators into the Fediverse, step by step."
Our Foundation co-founder has just published an interesting piece on how we're working to help organisations and content creators find their way to the Fediverse!
For more information on what we plan to do as a charity and how Patchwork, a new service we'll be launching soon, can help 👇
https://www.blog-pat.ch/enter-the-fediverse/
#Fediverse #SocialMedia #FediDev #FediAdmin #MastoDev #MastoAdmin #Technology
Organisations and content creators enter the Fediverse
In a Fedicentric world, everything revolves around Mastodon. Mastodon has around 80% of the monthly active users on the Fediverse. Mastodon sets the standard, and is where the action is. Michael Foster (Patchwork Blog)
A brief tour of The End, an accessible fiction podcast directory.
The End | Completed Audio Fiction
The End is a directory of complete audio fiction—either at the series or season level—ready for you to listen to and enjoy at your pace. If you love fiction podcasts, audio dramas, radio plays, or audiobooks, this is what you’ve been waiting for! The End | Completed Audio Fiction
TROM II: Technology Won't Save Us - https://videos.trom.tf/w/weWXUVT1zXW5v8QRZUG5Lr
#technology #renewables #renewable-energy #electric-cars #capitalism #trade
TROM II: Technology Won't Save Us
Watch the entire documentary here - https://www.tromsite.com/documentaries/trom2/videos.trom.tf
The case for frugal computing: https://limited.systems/articles/frugal-computing/
Frugal computing • Wim Vanderbauwhede
On the need for low-carbon and sustainable computing and the path towards zero-carbon computing. limited.systems
Serbian inventor, electrical engineer, mechanical engineer, and futurist Nikola Tesla was born #OTD in 1856.
Some of Tesla's inventions and innovations: the alternating current (AC) system; the induction motor; the Tesla coil; wireless transmission of electricity; radio technology; remote control; neon and fluorescent lighting; X-ray technology; the Tesla turbine; and oscillators and frequency generators.
https://en.wikipedia.org/wiki/Nikola_Tesla
Books by Nikola Tesla at PG:
https://www.gutenberg.org/ebooks/author/5067
Books by Tesla, Nikola (sorted by popularity)
Project Gutenberg offers 73,924 free eBooks for Kindle, iPad, Nook, Android, and iPhone. Project Gutenberg
The Fairbuds noise-canceling earbuds have an easily swappable battery
Fairphone, the makers of the ultra-repairable Fairphone 5, have launched a new pair of easy-to-repair wireless earbuds. Instead of tossing away your earbuds when the batteries eventually die, Fairphone’s new Fairbuds let you replace the batteries inside the buds themselves and their charging case.
In addition to replacing the batteries, you can repair or exchange the left or right earbud, the silicone ring, earbud tips, the charging case outer shell, and the charging case core. The new buds also come with a standard two-year warranty, but you can add one extra year if you register them online.
Certainly, these objectives should be embraced by all manufacturers. I will never forget my first (and only) Apple AirPods, whose batteries failed just a month after the warranty period ended. They were super expensive, and I vowed to never again pay so much money for a disposable product.

The downside with Fairphone products, though, is that they are not the cheapest around, so many people are still going to buy cheap disposable earbuds. They are very likely not as good as the top-end earbuds either, but I’d be interested to see some reviews of the sound quality.

One would have to assume these could last at least two or three times longer than any earbuds with non-replaceable batteries (batteries are usually the component that fails). But the cost of replacement batteries also needs to be factored in.

I’d hope, though, that with enough support and sales, these prices could actually come down over time too.
See https://www.theverge.com/2024/4/9/24125089/fairbuds-fairphone-noise-canceling-earbuds-battery-replace-repairability
#Blog, #earbuds, #environment, #technology
Modder made an IRC client that runs entirely inside the motherboard’s BIOS chip
Phillip Tennen, developer of the open-source axleOS, has recently decided to use what he learned from that project to create an IRC client that runs entirely within the UEFI pre-boot environment, with no operating system required. This “UEFIRC” is nearly fully functional, with a graphical interface and a TrueType font renderer, and it’s all written in the Rust programming language.
Wow! It does suggest two things to me:
- IRC is really the lightest weight social chatting app of them all…
- IRC users are a bit different…
Technically, I suppose any text-based micro-blogging-type service could work. Twitter or Mastodon without videos and photos might also work. But the nature of how IRC still works today means you can get a pretty close experience to the real thing even in the BIOS.
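To give a sense of how little the protocol demands, here is a minimal sketch of an IRC client in Python. The server, nick, and channel names are placeholders I’ve made up, and a real client would use TLS (port 6697) and proper error handling; UEFIRC itself is written in Rust, so this is only a conceptual illustration.

```python
# Minimal IRC client sketch: the whole protocol is plain text lines over one
# TCP connection, which is why it can fit in a pre-boot environment.
# Server, nick, and channel are example placeholders.
import socket

SERVER, PORT = "irc.libera.chat", 6667
NICK, CHANNEL = "bios_curious", "#test"

sock = socket.create_connection((SERVER, PORT))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())

buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break                                  # server closed the connection
    buffer += data
    while b"\r\n" in buffer:
        line, buffer = buffer.split(b"\r\n", 1)
        text = line.decode(errors="replace")
        print(text)
        if text.startswith("PING"):
            # Servers drop clients that don't answer keepalives.
            sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
        elif " 001 " in text:
            # Numeric 001 = welcome: registration succeeded, so join and say hello.
            sock.sendall(f"JOIN {CHANNEL}\r\n"
                         f"PRIVMSG {CHANNEL} :hello from a ~30-line client\r\n".encode())
```

Everything above is just line-oriented text over a socket, so the hard parts of a pre-boot client are the networking stack and font rendering, not the protocol itself.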
See https://www.tomshardware.com/software/someone-made-a-functioning-irc-client-that-runs-entirely-inside-the-motherboards-uefi
#Blog, #BIOS, #IRC, #technology
TROM II: We can automate almost anything
Elon Musk Fought Government Surveillance — While Profiting Off Government Surveillance
#nationalsecurity #technology #theintercept
posted by pod_feeder_v2
Elon Musk Fought Government Surveillance — While Profiting Off Government Surveillance
Elon Musk and X postured as defenders against government surveillance but sold user data to Dataminr, which facilitates such surveillance. Sam Biddle (The Intercept)
UPT: Universal Package Management Tool for Linux: One command to rule them all!
I’ve not tested this yet, but it looks interesting. One of the big differentiators between the different branches of Linux is that each branch has its own package manager, which one has to get to know.
There are Pacman for Arch Linux and derivatives, Alpine Package Keeper (a.k.a. APK), Advanced Package Tool (a.k.a. APT) for Debian GNU/Linux and derivatives, Aptitude, a front-end for APT, Snapcraft for Ubuntu and derivatives, Yellowdog Updater Modified (a.k.a. Yum) for RPM-based systems, Slackpkg for Slackware, Emerge for Gentoo, the guix command on Guix, and nix-env on NixOS, among others, not to mention pkg on FreeBSD, Homebrew for macOS, and Scoop for Windows. Every one of those has its own way of management, forcing you to learn different ways to do the same thing.
A developer called sigoden has created a universal tool, the Universal Package-management Tool (UPT for short), to bring some order to this jungle. Once you have it installed, you won’t need to learn another package manager’s way of doing things again.
UPT is written in Rust, so you need to install Rust and Cargo on Ubuntu or whatever Linux distribution you are using.
I don’t see any YouTube videos about this yet. It started out 5 years ago, but it seems that usable releases only started from about Dec 2023. It works for Linux, Windows, macOS and BSD; the GitHub project page lists all supported OSes.
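I haven’t studied UPT’s actual command mappings, so treat the following as a conceptual sketch (in Python rather than UPT’s Rust) of what any unified front-end has to do: detect which native package manager is present and translate a single verb into that manager’s syntax. The command table below is my own illustration, not UPT’s real one.

```python
# Conceptual sketch of a unified package front-end: detect the native package
# manager and translate one verb ("install") into its syntax.
# NOT UPT's real command table -- just an illustration of the idea.
# System package managers usually need to be run as root (e.g. via sudo).
import shutil
import subprocess
import sys

INSTALL_COMMANDS = {
    "apt":    ["apt", "install", "-y"],
    "dnf":    ["dnf", "install", "-y"],
    "pacman": ["pacman", "-S", "--noconfirm"],
    "apk":    ["apk", "add"],
    "zypper": ["zypper", "install", "-y"],
    "brew":   ["brew", "install"],
}

def install(package: str) -> None:
    for manager, command in INSTALL_COMMANDS.items():
        if shutil.which(manager):               # first native manager found wins
            subprocess.run(command + [package], check=True)
            return
    sys.exit("no supported package manager found")

if __name__ == "__main__":
    install(sys.argv[1] if len(sys.argv) > 1 else "htop")
```

The real value of a tool like UPT is simply that this translation table lives in one place, so the same front-end command works wherever you happen to be.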
See https://itsfoss.com/upt/
#Blog, #linux, #opensource, #technology
How far have you ever been from a power socket?
#energy #technology #freedom #smartphone #entertainment #travel #fantasy
Millions Of #google #whatsapp #Facebook #2FA #Security Codes #Leak Online
Security experts advise against using SMS messages for two-factor authentication codes due to their vulnerability to interception or compromise. Recently, a security researcher discovered an unsecured database on the internet containing millions of such codes, which could be easily accessed by anyone.
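The safer alternative experts point to is app-based codes, which are computed locally from a shared secret instead of being texted to you, so there is nothing to intercept in transit or leak from an SMS gateway. As a rough illustration (not tied to any of the services named above), here is the standard TOTP calculation from RFC 6238 in Python, using a made-up example secret:

```python
# Rough sketch of the TOTP calculation (RFC 6238) used by authenticator apps.
# The code is derived locally from a shared secret, so nothing is sent over SMS.
import base64
import hashlib
import hmac
import struct
import time

def totp(base32_secret: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // period                  # 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))   # example secret, not a real account
```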
#news #tech #technews #technology #privacy
Millions Of Google, WhatsApp, Facebook 2FA Security Codes Leak Online
A security researcher has discovered an unsecured database on the internet containing millions of two-factor authentication security codes. Here's what you need to know. Davey Winder (Forbes)
'Facial recognition' #error message on vending #machine sparks concern at #University of #Waterloo
Earlier this month, a student noticed an error message on one of the machines in the Modern Languages building. It appeared to indicate there was a problem with a facial recognition application.
#privacy #fail #FacialRecognition #face #technology #news #economy #problem #Canada
'Facial recognition' error message on vending machine sparks concern at University of Waterloo
A set of smart vending machines at the University of Waterloo is expected to be removed from campus after students raised privacy concerns about their software. Kitchener