Items tagged with: fediverse
How did we move from forums to Reddit, Facebook groups, and Discord?
From the moment I first went online in 1996, forums were the main place to hang out. In fact, the very first thing I did was join an online forum run by the Greek magazine “PC Master” so I could talk directly to my favourite game reviewers (for me it was Tsourinakis, for those old enough to remember).
For whoever didn’t like the real-time nature of IRC chat, forums were all the rage, and I admit they held a wonderful charm for the budding teenager who wanted to express themselves with fancy signatures and earn some name recognition for their antics. Each forum was a wonderful microcosm, a little community of people with a similar hobby and/or mind-frame.
BBcode-style forums took the Web 1.0 internet by storm, and I remember having to juggle dozens of accounts, one for each forum I was interacting with. Basically, one for each video game (or video game publisher) I was playing, plus some Linux distros, hobbies, politics and the like. It was a wonderful mess.
But a mess it was, and if the dozens of accounts and constant context switching were barely manageable for a PC nerd like myself, I can only imagine how impenetrable it all was for the less tech-savvy. Of course, for people like me this was an added benefit, since it kept the “normies” out and staved off the “Eternal September” in our little communities.
However, the demand for discussion spaces accessible to everyone was not missing; it was just unfulfilled. So as soon as Web 2.0 took over with the massive walled gardens of MySpace, Facebook, Twitter and so on, that demand manifested, and the ability for anyone to create and run a forum within those spaces, regardless of technical competency or BBcode knowledge, spawned thousands of little communities.
Soon after, Digg and then Reddit came out, and after the self-inflicted implosion of Digg, Reddit along with Facebook became the de facto spot to create and nurture new async-discussion communities, once they added the functionality for anyone to create one and run it as they wanted.
But the previously existing BBcode forums still existed and were very well established. Places like Something Awful had such strong communities that they resisted the pull of these corporate walled gardens for a long time. But eventually, they all more or less succumbed to the pressure and their members had an exodus. What happened?
I’m not a researcher, but I was there from the start and I saw the same process play out multiple times in the old forums I used to be in. Accessibility and convenience won.
There are a few things I attribute this to.
- The executive cost of creating a new forum account is very high. Every time you want to join one, you need to go through choosing a username (often hunting for one that’s not taken, so now you have to juggle multiple usernames as well), a new password, captchas, email verifications, application forms, review periods, lurker wait times and so on. It’s a whole thing, and it’s frustrating to do every time. Even someone like me, who has gone through this process many times, would internally groan at having to do it all over again.
- Keeping up to date was a lot of work. Every time I wanted to catch up on all my topics, I had to open a tab for each of my forums and see what was going on. The fact that most forums didn’t have threaded discussions and simply floated old discussions with new replies to the top didn’t help at all (“thread necromancy” was a big netiquette faux pas). Eventually most forums added RSS feeds, but not only were most people not technical enough to use RSS efficiently (even I struggled), the feeds were often not implemented in a way that was efficient to use.
- Discoverability was too onerous. Because of (1), many people preferred to just hang out in one massive forum and beg or demand that new topics be added for their interests, so they wouldn’t have to register on, or learn, other forum software and interact with foreign communities. This is how massive “anything goes” forums like Something Awful started, and it also affected other big forums like RPGnet, which slowly but surely expanded into many more topics. Hell, almost every forum I remember had politics and/or “off-topic” sections so people could talk without disrupting the main topics, because people couldn’t stop themselves.
And where forum admins didn’t open new subject areas, the bottom-up pressure demanded that solutions be invented within the current paradigm. This is how you ended up with immortal threads thousands of pages deep on one subject, regular mega-threads and so on. Internet life found a way.
- Forum admins and staff were the same petty dictators they always were and always will be. Personality cults and good ole boys’ clubs abounded. People were established, and woe to anyone who didn’t know enough to respect that, goddammit! I ran into such situations more than once, and even blogged about it back in the day. But it was an expected part of the setup, so people tolerated it because, well, what else would you do? Run your own forum? Who has the time and knowledge for that? And even if you did, would anyone even join you?
And so, this was the paradigm we all lived in. People just declared this was how it had to be, and never considered any proper interactivity between forums worth the effort. In fact, one would be heavily ridiculed and shunned for even suggesting such blasphemous concepts.
That is, until Facebook and Reddit made it possible for everyone to run their own little fief and upended everything we knew. By adding forum functionality into a central location, and then allowing everyone to create one for any topic, they immediately solved so many of these issues.
- The executive cost of joining a new topic is very low. One already has an account on Reddit and/or Facebook; all they have to do is press a button on the subreddit or group they want to join. At worst they might need to pass an approval, but they get to keep the same account, password and so on. Sure, you might need to juggle one to three accounts for your main spaces (Reddit, Facebook, Discord), but that’s so much easier than twelve or more.
- Keeping up to date is built in. Reddit subscriptions give you a personalized homepage, Facebook just gives you your own feed, Discord shows you where there’s activity, and so on. Of course, the corporate enshittification of those services means you’re getting more and more ads masquerading as actual content, while invisible algorithms feed you ragebait and fearbait to keep you interacting at the cost of your mental and social health, but that is invisible to most users, so it doesn’t turn them off.
- Discoverability is easy. Facebook might randomly show you content from groups you’re not in, shared by others. Reddit’s /r/all feed shows posts from topics you might not even have known existed, and people are quick to link to relevant subreddits. Every project has its own Discord server link, and so on.
The fourth forum problem, of course, never was and never can be solved. There will always be sad little kings of sad little hills. However, solving 1-3 meant that the leverage of abusive moderators was massively diminished, as one could just set up a new forum in a couple of minutes, and given enough power abuse, whole communities would abandon the old space and move to the new one. This wasn’t perfect, of course; on Reddit, only one person could squat a specific subreddit name, but as seen with the successful transition from /r/marijuana to /r/trees, given enough blow-back it could certainly be achieved.
And the final cherry on top is that places like Reddit and Discord are just… easier to use. Ain’t nobody who likes learning or using BBcode on 20-year-old software. Markdown became the norm for a reason: it’s natural to use. Add fewer restrictions on uploads (file size, image size etc.) and fancier interfaces with threaded discussions, emoji reactions and so on, and you get a lot of people using the service instead of fighting the service. There is of course newer and better forum software, like the excellent Discourse, but sadly it came a bit too late to change the momentum.
So while forums never went away, people just stopped using them, slowly at first but accelerating as time passed. Banned people just wouldn’t bother creating new accounts all over again when they already had a Facebook account. People who wanted to discuss a new topic wouldn’t bother with immortal mega-threads when they could just join or make a subreddit instead. It was a slow burn that was impossible to stop once started.
10-15 years after Reddit started, it was all but over for forums. Now when someone wants to discuss a new topic, they don’t bother to even google for an appropriate forum (not that terminally enshittified search engines would find one anyway). They just search Reddit or Facebook, or ask in their discord servers for a link.
I admit I was an immediate convert once Reddit added custom communities. I created and/or ran some big ones back in the day, because I was naive about the corporate nature of Reddit and thought it was “one of the good ones”, even though I had already abandoned Facebook much earlier. It was just so much easier to use one Reddit account and have it as my internet homepage, especially once gReader was killed by Google.
But of course, as these things go, the big corporate gardens couldn’t avoid their nature and eventually once the old web forums were abandoned for good and people had no real alternatives, they started squeezing. What are you gonna do? Set up your own Reddit? Who has the time and knowledge for that? And even if you did, would anyone even join you?
Nowadays, I hear a lot of people say that the alternative to these massive services is to go back to old-school forums. My peeps, that is absurd. Nobody wants to go back to that clusterfuck I just described. The grognards who suggest this are either some of the lucky ones who used to be in the “in-crowd” in some big forums and miss the community and power they had, or they are so scarred by having to work in that paradigm, that they practically feel more comfortable in it.
No, the answer is no longer an archipelago of little fiefdoms. 1-3 forbid it! If we want to escape the greedy little fingers of u/spez and Zuckerberg, the only reasonable way forward is ActivityPub-federated software.
We already have Lemmy, PieFed, and Mbin, which fulfill the role of forums where everyone can run their own community, while at the same time solving 1-3 above! Even Discourse understood this and started adding ActivityPub integration (although I think they should be focusing on threadiverse interoperability rather than microblogging).
Imagine a massive old-school forum like RPGnet migrating to federated software, immediately allowing its massive community access to the rest of the threadiverse without having to go through new accounts and so on, while everyone else gets access to the treasure trove of discussions and reviews they host. It’s a win-win for everyone, and a loss for the profiteers of our social media presence.
Not only do federated forums solve the pain points I described above, they add a lot of other advantages as well. For example, there are far fewer single points of failure: when a federated instance is abandoned, its content isn’t lost, but continues living in the caches of the other instances that knew about it. The common software and import/export functionality also make it much easier for people to migrate from one Lemmy instance to another. There are plenty of other benefits too, like shared sysadmin support channels, support services like Fediseer, and so on.
These days I see federated forums as the only way forward, and I’m optimistic about that path. I think Reddit is a dead site walking and the only way it has to go is down. I know we have our own challenges to face, but I place far more trust in the FOSS commons than I do in corporate overlords.
ToyTown: How an online community built around mutual aid is becoming a social wasteland because of hierarchy.
Today I wish to talk about ToyTown which is an online community, mainly a number of fora, where…English speakers can share news, ask questions, post answers, make advertisements, organise sports and social events, discuss current affairs, make friends, and generally engage with each other.
Now, as some of you (particularly those following me on Twitter or Facebook) might have heard, I’ve been the victim of a real-life con (I will post details about this soon), as a result of which I started my own investigation to locate the perpetrator. On the advice of a colleague, I decided to ask for help in the ToyTown fora, something which would also raise awareness of this type of scam among people living in my area.

The reaction was a stunning display of hostility and mistrust, even after I went out of my way to substantiate my case. For a place which prides itself on its helpfulness, this just didn’t make sense. While I can understand people being snarky to someone who asks where to buy milk without even attempting to use the search function, surely this would not apply to my relatively unique thread, right? Wrong.
Nevertheless, it quickly dawned on me that the overwhelmingly negative response I perceived came from a small number of vocal people who have, in fact, a very heavy presence in the fora. If one were to take the distinct people who posted in the thread and look at their responses, the reception was, if not positive, at least neutral. The positive replies, however, were drowned in a sea of abuse from the same few antisocials: trolling, deliberate insults, or simply stunning xenophobia, all under the approving eye of the mods, who silently condoned obviously trollish behaviour as long as it came from the “great old ones”.
To make this fact abundantly clear, let me show you one of the comments that was posted in the second page:
Now you see, this is why Greece is in the shit. And us German taxpayers are expected to sort your shit out for you. Bloody charming. And what’s more, we are led to believe you got scammed by a Greenlander…or was it by any chance an Icelander?
My reply to this borderline racist comment was to call out the poster for the troll he is. The result? My post got pulled by the mods, because the rules of the community forbid you from calling others trolls, something which obviously facilitates their behaviour.

Surprised as I was by the results of asking for help in ToyTown, I asked my colleague, as well as another, former colleague, for their impressions. The former, while not as surprised as I was, still did not expect hostility of this magnitude and admitted that he had feared this would happen. The latter said this, among other things:
yeah…trouble is..it’s the worst kind of forum, internet clique at its very worst mate – If you are new, more often than not you are ridiculed…if you have been there a while you should know better… Basically, be one of the normal 10 or 15 or forget about it.
Now, both of these are expat Brits, mind you, much like the people who claim that this reaction happens because people are expats. Bullshit. Going to live in another country does not make people assholes. No, what was at play here was nothing other than a community gone astray after morphing into an “old boys’ club”. Unfortunately, it seems that the residents outside this little clique have reached the point where they either passively accept it or feel helpless to do anything about it.

Soon afterwards, a reaction post appeared in the forum where, I believe, everything bad about the community was put forth plainly. Unfortunately, the result was not the good discussion the OP would have liked, but a pathetic attempt by the good ol’ boys’ club and the moderators to skirt the issue with accusations of conspiracy and petty flamewars. The points raised were barely touched, even though there is obvious support from the silent majority, as can be seen from the positive ranking of the OP (which, you must imagine, persists despite the downvote brigade by those who like the community being dysfunctional).
So how come this situation persists even though it’s obviously unwanted by a lot of the community members? The reason seems to be the same as to why any class society persists even though change is wanted by the majority of people living in it. Inertia and Alienation.
You see, by now ToyTown has grown huge and is the stop for the English-speaking crowd in Germany. As this happened naturally, simply because there was demand for it, the one who happened to start it first became the de-facto leader, and a hierarchy formed below him: first the mods, and then the good ol’ boys, AKA the vocal minority. Since ToyTown has always been the property of the admin, this situation has simply not been challenged, even though the value of the community lies in its numbers, not in its owners. The site, much like a nation, will keep on growing regardless of the actions and abuses of the admins, due to the existing demand for an English-speaking site in Germany. This leads to the biggest challenge any new site will face when trying to set up a healthy community around the same goal: obscurity. The ToyTown administration and old boys’ club know this, and therefore have no reason to control their behaviour. And this attitude only worsens as a community grows larger.
This is the curse of all hierarchy. Benevolent or not, it is corrupted by the sheer control that is centralized as it naturally grows. Those at the top see themselves as increasingly benevolent even while their actions become more and more intolerant and authoritarian. Those with social power, such as that coming from seniority or friends in high places, get more and more vain and expect that their social status grants them immunity from the same things that “lesser mortals” AKA newbies get punished for.
Those not in the upper strata of the community quickly learn their place and take one of two actions: they either leave, or they keep their heads down, find a niche and try to work within it. As long as they do not draw the ire of the mods or the old boys’ club, they can function without many issues. To challenge and stop abusive behaviour coming from those higher up, however, is impossible. The will of the mods will always trump the will of the old boys, which will always trump the will of the unwashed masses. As a poster called Jimbo said:
however, I think the quote above is quite wrong – it belongs to Ed Bob, and ultimately, the site is therefore created in his image. Or at least he allows it to be organic and grow in its own way.
Which is simply nonsensical. A community without Ed Bob is still a community. An Ed Bob without it isn’t. To quote another user:

Uh, no it doesn’t and that is one of the huge problems around here. The site belongs to the users and without the users E.Bob would not be in a position to make a chunk of change by selling out to The Local. Our help and our comments made this site, he just gave us the vehicle. The real life E.Bob is a pretty cool guy, but can we stop kissing the virtual E.Bob persona for once and for all?
This is why hierarchy needs to be nipped in the bud. There’s no such thing as “too little” or “just enough” hierarchy. Just look at how it can corrupt even children’s relations in the same destructive manner. It is simply disruptive to healthy human relationships, making good people authoritarian and allowing bad people to be cruel. We need to learn to recognise this and start building our communities with it in mind from the start. Even when hierarchy is structurally necessary, as is the case for websites, which require at least an admin, a community built around them will benefit immensely the more such privilege is consciously removed.

ToyTown may be too far gone to fix, and like many online communities before it, it may eventually implode. Just look at how quickly the immensely popular Richard Dawkins community self-immolated through the actions of the few at the top who were completely disconnected from those at the bottom. Such events are not uncommon, and more importantly, I’ve not heard of one that was not the result of hierarchical power gone bad.
Could it be salvaged somehow? That depends on how alienated the community is. For those at the top, things will always look good, of course; they’re at the top. This is why you see the vocal minority dismissing and trivializing the concerns others raise. Unfortunately, from what I saw at ToyTown, those who do not like how things are going are not convinced or confident that they can make a difference, which is not exactly true. I’ve seen a few fora and communities which managed to change things via dedicated non-conformity and persistent objection (think of almost everyone starting new threads to complain). If something like this cannot work, the only solution is an exodus, which, unless it is made to a system built around avoiding the same issues, will only be a temporary solution.
Whatever happens, at the end of the day the power to change things is in the hands of those interested in it. The community itself, not the old boys club or Editor Bob. As long as people are too scared or apathetic to act, nothing will change obviously. For my part, I wash my hands of ToyTown. I do not care to wade into sewers just to take a shortcut.
UPDATE: It seems this blog entry is being linked from a private forum of ToyTown. I have no way of seeing what they’re saying but I’m guessing someone saw this post but was too scared to discuss it with the open public of the forum. Much better to mock me behind closed doors apparently.
UPDATE2: Given the responses that the second thread keeps receiving, I think this is appropriate
(h/t See Mike Draw)
I spent the last year working on the Fediverse. Here's what I've learned.
The Fediverse. You might have heard of it. From Mastodon to Peertube and more, it’s a collection of different services that all work together to form an interconnected universe of applications. It sounds unbelievable, but it’s here today. You can follow me over here on Mastodon.
It wrests control of the web out of the hands of corporate oligarchs and digital tycoons and returns that power to where it belongs: us.
I’ve spent the entirety of 2024 working on the Fediverse and I’ve learned a lot. See, I started mirroring these videos over on my Peertube instance, https://subscribeto.me/, and it’s sucked me in. I’ve gone down this rabbit hole of discovery. I’ve passed through the looking glass. I’ve seen the promised land. And now I want to share this with you.
So in this video, I want to take a moment to explore 5 of the most important lessons I’ve learned and why I’m now betting the future of my company on this amazing technology that we call “The Fediverse.”
00:00 What if we lived in a cooperative online world?
01:33 What is an "instance?"
02:11 #5 - Netiquette was not a fluke
04:49 #4 - Kindness is King
06:44 #3 - It's not a Social Network
08:06 #2 - The Fediverse is boring (compliment)
11:04 #1 - Viral Memes as Bioweapons
12:59 Social Media is the enemy of humanity
13:36 What the Fediverse truly is
Key Transparency and the Right to be Forgotten
This post is the first in a new series covering some of the reasoning behind decisions made in my project to build end-to-end encryption for direct messages on the Fediverse.
(Collectively, Fedi-E2EE.)
Although the reasons for specific design decisions should be immediately obvious from reading the relevant specification (and if not, I consider that a bug in the specification), I believe writing about it less formally will improve the clarity behind the specific design decisions taken.
In the inaugural post for this series, I’d like to focus on how the Fedi-E2EE Public Key Directory specification aims to provide Key Transparency and an Authority-free PKI for the Fediverse without making GDPR compliance logically impossible.
CMYKat‘s art, edited by me.
Background
Key Transparency
For a clearer background, I recommend reading my blog post announcing the focused effort on a Public Key Directory, and then my update from August 2024.
If you’re in a hurry, I’ll be brief:
The goal of Key Transparency is to ensure everyone in a network sees the same view of who has which public key.
How it accomplishes this is a little complicated: It involves Merkle trees, digital signatures, and a higher-level protocol of distinct actions that affect the state machine.
If you’re thinking “blockchain”, you’re in the right ballpark, but we aren’t propping up a cryptocurrency. Instead, we’re using a centralized publisher model (per Public Key Directory instance) with decentralized verification.
Add a bit of cross-signing and replication, and you can stitch together a robust network of Public Key Directories that can be queried to obtain the currently-trusted list of public keys (or other auxiliary data) for a given Fediverse user. This can then be used to build application-layer protocols (i.e., end-to-end encryption with an identity key more robust than “trust on first use”, thanks to the audit trail built into Merkle trees).
I’m handwaving a lot of details here. The Architecture and Specification documents are both worth a read if you’re curious to learn more.
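To make the moving parts a bit less abstract, here is a toy append-only Merkle log with inclusion proofs, in the spirit of RFC 6962-style trees. This is my own illustrative sketch, not code from the Fedi-E2EE specification or from Sigsum:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(record: bytes) -> bytes:
    # Domain-separate leaves from interior nodes, as RFC 6962 does.
    return h(b"\x00" + record)

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)

def merkle_root(leaves: list[bytes]) -> bytes:
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # pad odd levels by duplicating the last node
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    # Collect the sibling hash at every level, flagging whether it sits on the left.
    proof, level, i = [], list(leaves), index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i ^ 1
        proof.append((level[sibling], sibling < i))
        level = [node_hash(level[j], level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    acc = leaf
    for sibling, sibling_is_left in proof:
        acc = node_hash(sibling, acc) if sibling_is_left else node_hash(acc, sibling)
    return acc == root
```

The point of the structure: anyone holding only the tiny root hash can verify that a given record is in the log, which is what lets third parties audit a directory without mirroring all of it.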
Right To Be Forgotten
I am not a lawyer, nor do I play one on TV. This is not legal advice. Other standard disclaimers go here.
Okay, now that we’ve got that out of the way, Article 17 of the GDPR establishes a “Right to erasure” for Personal Data.
What this actually means in practice has not been consistently decided by the courts yet. However, a publicly readable, immutable ledger that maps public keys (which may be considered Personal Data) with Actor IDs (which includes usernames, which are definitely Personal Data) goes against the grain when it comes to GDPR.
It remains an open question whether there is public interest in this data persisting in a read-only ledger ad infinitum, which could override the right to be forgotten. If there is, that’s for the courts to decide, not furry tech bloggers.
I know it can be tempting, especially as an American with no presence in the European Union, to shrug and say, “That seems like a them problem.” However, if other folks want to be able to use my designs within the EU, I would be remiss not to at least consider this potential pitfall and try to mitigate it in my designs.
So that’s exactly what I did.
Almost Contradictory
At first glance, the privacy goals of both Key Transparency and the GDPR’s Right To Erasure are at odds.
- One creates an immutable, append-only history.
- The other establishes a right for EU citizens’ history to be selectively censored, which means history has to be mutable.
However, they’re not totally impossible to reconcile.
An untested legal theory circulating around large American tech companies is that “crypto shredding” is legally equivalent to erasure.
Crypto shredding is the act of storing encrypted data, and then when given a legal takedown request from an EU citizen, deleting the key instead of the data.
This works from a purely technical perspective: if the data is encrypted and you don’t know the key, the ciphertext is indistinguishable to you from an encryption of the same number of NUL bytes.
In fact, many security proofs for encryption schemes are satisfied by reaching this conclusion, so this isn’t a crazy notion.
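To make the mechanics concrete, here is a minimal sketch of crypto shredding. The cipher below is a deliberately toy construction (an XOR keystream derived from SHA-256) so the example stays self-contained; a real system would use an authenticated cipher such as ChaCha20-Poly1305, and the record identifier is made up:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256(key || counter). Illustration only; never use in production.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation in both directions

# Ciphertexts live in the immutable store; keys live in a separate, erasable store.
key_store: dict[str, bytes] = {}
record_id = "alice@example.social"  # hypothetical record identifier
key_store[record_id] = secrets.token_bytes(32)
ciphertext = encrypt(key_store[record_id], b"some personal data")

# Erasure request: delete only the key. The ciphertext itself is never touched,
# but without the key it is indistinguishable from an encryption of NUL bytes.
del key_store[record_id]
```

The split between the two stores is the whole trick: the immutable side never has to change, so deleting the small, mutable key record is the only operation an erasure request requires.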
Is Crypto Shredding Plausible?
In 2019, the European Parliamentary Research Service published a lengthy report titled Blockchain and the General Data Protection Regulation which states the following:
Before any examination of whether blockchain technology is capable of complying with Article 17 GDPR, it must be underscored that the precise meaning of the term ‘erasure’ remains unclear. Article 17 GDPR does not define erasure, and the Regulation’s recitals are equally mum on how this term should be understood. It might be assumed that a common-sense understanding of this terminology ought to be embraced. According to the Oxford English Dictionary, erasure means ‘the removal of writing, recorded material, or data’ or ‘the removal of all traces of something: obliteration’.
From this perspective, erasure could be taken to equal destruction. It has, however, already been stressed that the destruction of data on blockchains, particularly these of a public and permissionless nature, is far from straightforward.
There are, however, indications that the obligation inherent to Article 17 GDPR does not have to be interpreted as requiring the outright destruction of data. In Google Spain, the delisting of information from research results was considered to amount to erasure. It is important to note, however, that in this case, this is all that was requested of Google by the claimant, who did not have control over the original data source (an online newspaper publication). Had the claimant wished to obtain the outright destruction of the relevant data it would have had to address the newspaper, not Google. This may be taken as an indication that what the GDPR requires is that the obligation resting on data controllers is to do all they can to secure a result as close as possible to the destruction of their data within the limits of [their] own factual possibilities.
Dr Michèle Finck, Blockchain and the General Data Protection Regulation, pp. 75-76
From this, we can kind of intuit that the courts aren’t pedantic: The cited Google Spain case was satisfied by merely delisting the content, not the erasure of the newspaper’s archives.
The report goes on to say:
As awareness regarding the tricky reconciliation between Article 17 GDPR and distributed ledgers grows, a number of technical alternatives to the outright destruction of data have been considered by various actors. An often-mentioned solution is that of the destruction of the private key, which would have the effect of making data encrypted with a public key inaccessible. This is indeed the solution that has been put forward by the French data protection authority CNIL in its guidance on blockchains and the GDPR. The CNIL has suggested that erasure could be obtained where the keyed hash function’s secret key is deleted together with information from other systems where it was stored for processing.

Dr Michèle Finck, Blockchain and the General Data Protection Regulation, pp. 76-77
That said, I cannot locate a specific court decision that affirms that crypto erasure is legally sufficient for complying with data erasure requests (nor any that affirm that it’s necessary).
I don’t have a crystal ball that can read the future on what government compliance will decide, nor am I an expert in legal matters.
Given the absence of a clear legal framework, I do think it’s totally reasonable to consider crypto-shredding equivalent to data erasure. Most experts would probably agree with this. But it’s also possible that the courts could rule totally stupidly on this one day.
Therefore, I must caution anyone that follows a similar path: Do not claim GDPR compliance just because you implement crypto-shredding in a distributed ledger. All you can realistically promise is that you’re not going out of your way to make compliance logically impossible. All we have to go by are untested legal hypotheses, and very little clarity (even if the technologists are near-unanimous on the topic!).
Towards A Solution
With all that in mind, let’s start with “crypto shredding” as the answer to the GDPR + transparency log conundrum.
This is only the start of our complications.
Protocol Risks Introduced by Crypto Shredding
Before the introduction of crypto shredding, the job of the Public Key Directory was simple:
- Receive a protocol message.
- Validate the protocol message.
- Commit the protocol message to a transparency log (in this case, Sigsum).
- Retrieve the protocol message whenever someone requests it to independently verify its inclusion.
- Miscellaneous other protocol things (cross-directory checkpoint commitment, replication, etc.).
Point being: there was very little that the directory could do to be dishonest. If they lied about the contents of a record, it would invalidate the inclusion proofs of every successive record in the ledger.
In order to make a given record crypto-shreddable without breaking the inclusion proofs for every record that follows, we need to commit to the ciphertext, not the plaintext. (And then, when a takedown request comes in, wipe the key.)
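To make the inclusion-proof mechanics concrete, here is a minimal RFC 6962-style verifier in Python (stdlib only). Sigsum’s actual leaf encoding, hash choices, and tagging differ; the function names and the toy four-leaf tree below are illustrative assumptions, not the real protocol’s wire format.

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962-style domain separation: 0x00 prefix for leaves, 0x01 for nodes.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf: bytes, index: int, path: list[bytes], root: bytes) -> bool:
    """Recompute the root from a leaf and its audit path."""
    h = leaf_hash(leaf)
    for sibling in path:
        if index % 2 == 0:
            h = node_hash(h, sibling)
        else:
            h = node_hash(sibling, h)
        index //= 2
    return h == root

# Tiny 4-leaf tree: note that we commit to ciphertexts, not plaintexts.
leaves = [b"ct0", b"ct1", b"ct2", b"ct3"]
l = [leaf_hash(x) for x in leaves]
n01, n23 = node_hash(l[0], l[1]), node_hash(l[2], l[3])
root = node_hash(n01, n23)

assert verify_inclusion(b"ct2", 2, [l[3], n01], root)
assert not verify_inclusion(b"tampered", 2, [l[3], n01], root)
```

Because every later node hash depends on every earlier leaf, rewriting one committed record invalidates the audit paths of all records that follow it; that is the honesty property the directory cannot escape.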
Now, things get quite a bit more interesting.
Do you…
- …distribute the encryption key alongside the ciphertext and let independent third parties decrypt it on demand?
- …or decrypt the ciphertext and serve plaintext through the public API, keeping the encryption key private so that it may be shredded later?
The first option seems simple, but runs into governance issues: How do you claim the data was crypto-shredded if countless individuals have a copy of the encryption key, and can therefore recover the plaintext from the ciphertext?
I don’t think that would stand up in court.
Clearly, your best option is the second one.
Okay, so how does an end user know that the ciphertext that was committed to the transparency ledger decrypts to the specific plaintext value served by the Public Key Directory? How do users know it’s not lying?
Quick aside: This question is also relevant if you went with the first option and used a non-committing AEAD mode for the actual encryption scheme. In that scenario, a hostile nation state adversary could pressure a Public Key Directory to selectively give one decryption key to targeted users, and another to the rest of the Internet, in order to perform a targeted attack against citizens they’d rather didn’t have civil rights.
My entire goal with introducing key transparency to my end-to-end encryption proposal is to prevent these sorts of attacks, not enable them.
There are a lot of avenues we could explore here, but it’s always worth outlining the specific assumptions and security goals of any design before you start perusing the literature.
Assumptions
This is just a list of things we assume are true, and do not need to prove for the sake of our discussion here today. The first two are legal assumptions; the remainder are cryptographic.
Ask your lawyer if you want advice about the first two assumptions. Ask your cryptographer if you suspect any of the remaining assumptions are false.
- Crypto-shredding is a legally valid way to provide data erasure (as discussed above).
- EU courts will consider public keys to be Personal Data.
- The SHA-2 family of hash functions is secure (ignoring length-extension attacks, which won’t matter for how we’re using them).
- HMAC is a secure way to build a MAC algorithm out of a secure hash function.
- HKDF is a secure KDF if used correctly.
- AES is a secure 128-bit block cipher.
- Counter Mode (CTR) is a secure way to turn a block cipher into a stream cipher.
- AES-CTR + HMAC-SHA2 can be turned into a secure AEAD mode, if done carefully.
- Ed25519 is a digital signature algorithm that provides strong existential unforgeability under chosen-message attacks (SUF-CMA).
- Argon2id is a secure, memory-hard password KDF, when used with reasonable parameters. (You’ll see why in a moment.)
- Sigsum is a secure mechanism for building a transparency log.
This list isn’t exhaustive or formal, but should be sufficient for our purposes.
Security Goals
- The protocol messages stored in the Public Key Directory are accompanied by a Merkle tree proof of inclusion. This makes it append-only with an immutable history.
- The Public Key Directory cannot behave dishonestly about the decrypted plaintext for a given ciphertext without clients detecting the deception.
- Whatever strategy we use to solve this should be resistant to economic precomputation and brute-force attacks.
Can We Use Zero-Knowledge Proofs?
At first, this seems like an ideal situation for a succinct, non-interactive zero-knowledge proof.
After all, you’ve got some secret data that you hold, and you want to prove that a calculation is correct without revealing the data to the end user. This seems like the ideal setup for Schnorr’s identification protocol.
Unfortunately, the second assumption (public keys being considered Personal Data by courts, even though they’re derived from random secret keys) makes implementing a Zero-Knowledge Proof here very challenging.
First, if you look at Ed25519 carefully, you’ll realize that it’s just a digital signature algorithm built atop a Schnorr proof, which requires some sort of public key (even an ephemeral one) to be managed.
Worse, if you try to derive this value solely from public inputs (rather than creating a key management catch-22), the secret scalar your system arrives at will have been calculated from the user’s Personal Data, which only strengthens a court’s argument that the public key is personally identifiable.
There may be a more exotic zero-knowledge proof scheme that might be appropriate for our needs, but I’m generally wary of fancy new cryptography.
Here are two rules I live by in this context:
- If I can’t get the algorithms out of the crypto module for whatever programming language I find myself working with, it may as well not even exist.
- Corollary: If libsodium bindings are available, that counts as “the crypto module” too.
- If a developer needs to reach for a generic Big Integer library (e.g., GMP) for any reason in the course of implementing a protocol, I do not trust their implementation.
Unfortunately, a lot of zero-knowledge proof designs fail one or both of these rules in practice.
(Sorry not sorry, homomorphic encryption enthusiasts! The real world hasn’t caught up to your ideas yet.)
What About Verifiable Random Functions (VRFs)?
It may be tempting to use VRFs (i.e., RFC 9381), but this runs into the same problem as zero-knowledge proofs: we’re assuming that an EU court would deem public keys Personal Data.
But even if that assumption turns out false, the lifecycle of a protocol message looks like this:
- User wants to perform an action (e.g., AddKey).
- Their client software creates a plaintext protocol message.
- Their client software generates a random 256-bit key for each potentially-sensitive attribute, so it can be shredded later.
- Their client software encrypts each attribute of the protocol message.
- The ciphertext and keys are sent to the Public Key Directory.
- For each attribute, the Public Key Directory decrypts the ciphertext with the key, verifies the contents, and then stores both. The ciphertext is used to generate a commitment on Sigsum (signed by the Public Key Directory’s keypair).
- The Public Key Directory serves plaintext to requestors, but does not disclose the key.
- In the future, the end user can demand a legal takedown, which just wipes the key.
Let’s assume I wanted to build a VRF out of Ed25519 (similar to what Signal does with VXEdDSA). Now I have a key management problem, which is pretty much what this project was meant to address in the first place.
VRFs are really cool, and more projects should use them, but I don’t think they will help me.
Soatok’s Proposed Solution
If you want to fully understand the nitty-gritty implementation details, I encourage you to read the current draft specification, plus the section describing the encryption algorithm, and finally the plaintext commitment algorithm.
Now that we’ve established all that, I can begin to describe my approach to solving this problem.
First, we will encrypt each attribute of a protocol message, as follows:
- For subkey derivation, we use HKDF-HMAC-SHA512.
- For encrypting the actual plaintext, we use AES-256-CTR.
- For message authentication, we use HMAC-SHA512.
- Additional associated data (AAD) is accepted and handled securely; i.e., we don’t just YOLO-concatenate inputs into the MAC.
This prevents an Invisible Salamander attack from being possible.
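As a sketch of what “handled securely” means for the AAD, here is a length-prefixed HMAC-SHA512 tag computation in Python. The field order and exact encoding in the real specification may differ, `le64` and `auth_tag` are hypothetical names, and the AES-256-CTR encryption step is omitted (it isn’t in the standard library); the point is the unambiguous framing.

```python
import hashlib
import hmac
import struct

def le64(n: int) -> bytes:
    # 64-bit length prefix: every variable-length field is unambiguously framed,
    # so no two distinct (aad, ciphertext) pairs produce the same MAC input.
    return struct.pack("<Q", n)

def auth_tag(mac_key: bytes, aad: bytes, ciphertext: bytes) -> bytes:
    msg = le64(len(aad)) + aad + le64(len(ciphertext)) + ciphertext
    return hmac.new(mac_key, msg, hashlib.sha512).digest()

key = b"\x01" * 64
# Without length prefixes, (aad=b"ab", ct=b"c") and (aad=b"a", ct=b"bc")
# would MAC the exact same byte string; with prefixes, the tags differ.
assert auth_tag(key, b"ab", b"c") != auth_tag(key, b"a", b"bc")
```

Because the tag covers both the AAD and the ciphertext under a key derived for this one message, a ciphertext cannot be made to verify under two different AADs, which is what rules out the Invisible Salamander family of tricks.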
This encryption is performed client-side, by each user, and the symmetric key for each attribute is shared with the Public Key Directory when publishing protocol messages. If a user later issues a legal request for erasure, they can be sure that the key used to encrypt the data they previously published isn’t secretly the same key used by every other user’s records.
They always know this because they selected the key, not the server. Furthermore, everyone can verify that the hash published to the Merkle tree matches a locally generated hash of the ciphertext they just emitted.
This provides a mechanism to keep everyone honest. If anything goes wrong, it will be detected.
Next, to prevent the server from being dishonest, we include a plaintext commitment hash, which is included as part of the AAD (alongside the attribute name).
(Implementing crypto-shredding is straightforward: simply wipe the encryption keys for the attributes of the records in scope for the request.)
If you’ve read this far, you’re probably wondering, “What exactly do you mean by plaintext commitment?”
Art by Scruff.
Plaintext Commitments
The security of a plaintext commitment comes from the Argon2id password-hashing function.
By using the Argon2id KDF, you can make an effective trapdoor that is easy to calculate if you know the plaintext, but economically infeasible to brute-force attack if you do not.
However, you need to do a little more work to make it safe.
The details here matter a lot, so this section is unavoidably going to be a little dense.
Pass the Salt?
Argon2id expects both a password and a salt.
If you eschew the salt (i.e., zero it out), you open the door to precomputation attacks (see also: rainbow tables) that would greatly weaken the security of this plaintext commitment scheme.
You need a salt.
If you generate the salt randomly, this commitment property isn’t guaranteed by the algorithm. It would be difficult, but probably not impossible, to find two salts (s1, s2) such that Argon2id(p1, s1) = Argon2id(p2, s2) for two distinct plaintexts p1 ≠ p2.
Deriving the salt from public inputs eliminates this flexibility.
By itself, this reintroduces the risk of making salts totally deterministic, which reintroduces the risk of precomputation attacks (which motivated the salt in the first place).
If you include the plaintext in this calculation, it could also create a crib that gives attackers a shortcut for bypassing the cost of password hashing.
Furthermore, any two encryptions operations that act over the same plaintext would, without any additional design considerations, produce an identical value for the plaintext commitment.
Public Inputs for Salt Derivation
The initial proposal included the plaintext value for Argon2 salt derivation, and published the salt and Argon2 output next to each other.
Hacker News user comex pointed out a flaw with this technique, so I’ve since revised how salts are selected to make them independent of the plaintext.
The public inputs for the Argon2 salt are now:
- The version identifier prefix for the ciphertext blob.
- The 256-bit random value used as a KDF salt (also stored in the ciphertext blob).
- A recent Merkle tree root.
- The attribute name (prefixed by its length).
These values are all hashed together with SHA-512, and then truncated to 128 bits (the length required by libsodium for Argon2 salts).
This salt is not stored, but can deterministically be calculated from public information.
Crisis Averted?
This sure sounds like we’ve arrived at a solution, but let’s also consider another situation before we declare our job done.
High-traffic Public Key Directories may have multiple users push a protocol message with the same recent Merkle root.
This may happen if two or more users query the directory to obtain the latest Merkle root before either of them publish their updates.
Later, if both of these users issue a legal takedown, someone might observe that the recent-merkle-root is the same for two messages, but their commitments differ.
Is this enough leakage to distinguish plaintext records?
In my earlier design, we needed to truncate the salt and rely on understanding the birthday bound to reason about its security. This is no longer the case, since each salt is randomized by the same random value used in key derivation.
Choosing Other Parameters
As mentioned a second ago, we set the output length of the Argon2id KDF to 32 bytes (256 bits). We expect the security of this KDF to exceed 2^128, which to most users might as well be infinity.
With apologies to Filippo.
The other Argon2id parameters are a bit hand-wavey. Although the general recommendation for Argon2id is to use as much memory as possible, this code will inevitably run in some low-memory environments, so asking for several gigabytes isn’t reasonable.
For the first draft, I settled on 16 MiB of memory, 3 iterations, and a parallelism degree of 1 (for widespread platform support).
Plaintext Commitment Algorithm
With all that figured out, our plaintext commitment algorithm looks something like this:
- Calculate the SHA512 hash of:
- A domain separation constant
- The header prefix (stored in the ciphertext)
- The randomness used for key-splitting in encryption (stored in the ciphertext)
- Recent Merkle Root
- Attribute Name Length (64-bit unsigned integer)
- Attribute Name
- Truncate this hash to the rightmost 16 bytes (128 bits). This is the salt.
- Calculate Argon2id over the following inputs concatenated in this order, with an output length of 32 bytes (256 bits), using the salt from step 2:
- Recent Merkle Root Length (64-bit unsigned integer)
- Recent Merkle Root
- Attribute Name Length (64-bit unsigned integer)
- Attribute Name
- Plaintext Length (64-bit unsigned integer)
- Plaintext
The output (step 3) is included as the AAD in the attribute encryption step, so the authentication tag is calculated over both the randomness and the commitment.
To verify a commitment (which is extractable from the ciphertext), simply recalculate the commitment you expect (using the recent Merkle root specified by the record), and compare the two in constant-time.
If they match, then you know the plaintext you’re seeing is the correct value for the ciphertext value that was committed to the Merkle tree.
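The commit/verify flow above can be sketched with Python’s standard library. Argon2id isn’t available in the stdlib, so scrypt (another memory-hard KDF) stands in for it here; the domain-separation constant, field ordering, and cost parameters are illustrative assumptions, not the specification’s exact values.

```python
import hashlib
import hmac
import struct

# Hypothetical domain-separation constant; the real spec defines its own.
DOMAIN = b"example.plaintext-commitment.v1"

def le64(n: int) -> bytes:
    return struct.pack("<Q", n)

def derive_salt(header: bytes, kdf_random: bytes, merkle_root: bytes, attr: bytes) -> bytes:
    # Steps 1-2: SHA-512 over the public inputs, keep the rightmost 16 bytes.
    h = hashlib.sha512(
        DOMAIN + header + kdf_random + merkle_root + le64(len(attr)) + attr
    ).digest()
    return h[-16:]

def commit(header, kdf_random, merkle_root, attr, plaintext) -> bytes:
    # Step 3: memory-hard KDF over length-prefixed inputs.
    salt = derive_salt(header, kdf_random, merkle_root, attr)
    password = (
        le64(len(merkle_root)) + merkle_root
        + le64(len(attr)) + attr
        + le64(len(plaintext)) + plaintext
    )
    # scrypt stands in for Argon2id (not in the stdlib); both are memory-hard.
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                          maxmem=2**26, dklen=32)

def verify(commitment, header, kdf_random, merkle_root, attr, plaintext) -> bool:
    expected = commit(header, kdf_random, merkle_root, attr, plaintext)
    return hmac.compare_digest(commitment, expected)  # constant-time comparison

header, rnd, root = b"v1", b"\x42" * 32, b"\x00" * 32
c = commit(header, rnd, root, b"actor-id", b"@alice@example.social")
assert verify(c, header, rnd, root, b"actor-id", b"@alice@example.social")
assert not verify(c, header, rnd, root, b"actor-id", b"@mallory@example.social")
```

Note that verification needs no secret: everything the verifier consumes (header, KDF randomness, Merkle root, attribute name) is public, while guessing an unknown plaintext still costs one full memory-hard KDF evaluation per guess.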
If the encryption key is shredded in the future, an attacker without knowledge of the plaintext will have an enormous uphill battle recovering it from the KDF output (and the salt will prove to be somewhat useless as a crib).
Caveats and Limitations
Although this design does satisfy the specific criteria we’ve established, an attacker that already knows the correct plaintext can confirm that a specific record matches it via the plaintext commitment.
This cannot be avoided: If we are to publish a commitment of the plaintext, someone with the plaintext can always confirm the commitment after the fact.
Whether this matters at all to the courts is a question for which I cannot offer any insight.
Remember, we don’t even know if any of this is actually necessary, or if “moderation and platform safety” is a sufficient reason to sidestep the right to erasure. If the courts ever clarify this adequately, we can simply publish the mapping of Actor IDs to public keys and auxiliary data without any crypto-shredding at all.
Trying to attack it from the other direction (downloading a crypto-shredded record and trying to recover the plaintext without knowing it ahead of time) is the attack angle we’re interested in.
Herd Immunity for the Forgotten
Another interesting implication that might not be obvious: The more Fediverse servers and users publish to a single Public Key Directory, the greater the anonymity pool available to each of them.
Consider the case where a user has erased their previous Fediverse account and used the GDPR to also crypto-shred the Public Key Directory entries containing their old Actor ID.
To guess the correct plaintext, you must not only brute-force possible usernames, but also permute your guesses across all of the instances in scope.
The more instances there are, the higher the cost of the attack.
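To put rough numbers on that, here is a back-of-the-envelope sketch. Every figure below is entirely hypothetical; the point is that the cost multiplies across usernames and instances, not the specific values.

```python
# Entirely hypothetical figures for illustrating how the cost scales.
username_guesses = 10_000_000    # candidate usernames to try
instances_in_scope = 25_000      # Fediverse instances the account could be on
seconds_per_guess = 0.05         # one memory-hard KDF evaluation (assumed)

total_seconds = username_guesses * instances_in_scope * seconds_per_guess
core_years = total_seconds / (3600 * 24 * 365)
print(f"~{core_years:,.0f} core-years of KDF evaluations")
```

Doubling the number of instances in the anonymity pool doubles the attacker’s work, and the memory-hardness of the KDF keeps the per-guess cost from shrinking much on GPUs or ASICs.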
Recap
I tasked myself with designing a Key Transparency solution that doesn’t make complying with Article 17 of the GDPR nigh-impossible. To that end, crypto-shredding seemed like the only viable way forward.
A serialized record containing ciphertext for each sensitive attribute would be committed to the Merkle tree. The directory would store the key locally and serve plaintext until a legal takedown was requested by the user who owns the data. Afterwards, the stored ciphertext committed to the Merkle tree is indistinguishable from random for any party that doesn’t already know the plaintext value.
I didn’t want to allow Public Key Directories to lie about the plaintext for a given ciphertext, given that they know the key and the requestor doesn’t.
After considering zero-knowledge proofs and finding them to not be a perfect fit, I settled on designing a plaintext commitment scheme based on the Argon2id password KDF. The KDF salts can be calculated from public inputs.
Altogether, this meets the requirements of enabling crypto-shredding while keeping the Public Key Directory honest. All known attacks for this design are prohibitively expensive for any terrestrial threat actors.
As an added bonus, I didn’t introduce anything fancy. You can build all of this with the cryptography available to your favorite programming language today.
Closing Thoughts
If you’ve made it this far without being horribly confused, you’ve successfully followed my thought process for developing message attribute shreddability in my Public Key Directory specification.
This is just one component of the overall design proposal, but one that I thought my readers would enjoy exploring in greater detail than the specification needed to capture.
(This post was updated on 2024-11-22 to replace the incorrect term “PII” with “personal data”. Apologies for the confusion!)
#Argon2 #crypto #cryptography #E2EE #encryption #FederatedPKI #fediverse #passwordHashing #symmetricCryptography
Update (2024-06-06): There is an update on this project.

As Twitter’s new management continues to nosedive the platform directly into the ground, many people are migrating to what seem like drop-in alternatives; e.g. Cohost and Mastodon. Some are even considering new platforms that none of us have heard of before (one is called “Hive”).
Needless to say, these are somewhat chaotic times.
One topic that has come up several times in the past few days, to the astonishment of many new Mastodon users, is that Direct Messages between users aren’t end-to-end encrypted.
And while that fact makes Mastodon DMs no less safe than Twitter DMs have been this whole time, there is clearly a lot of value and demand in deploying end-to-end encryption for ActivityPub (the protocol that Mastodon and other Fediverse software uses to communicate).
However, given that Melon Husk apparently wants to hurriedly ship end-to-end encryption (E2EE) in Twitter, in some vain attempt to compete with Signal, I took it upon myself to kickstart the E2EE effort for the Fediverse.
https://twitter.com/elonmusk/status/1519469891455234048
So I’d like to share my thoughts about E2EE, how to design such a system from the ground up, and why the direction Twitter is heading looks to be security theater rather than serious cryptographic engineering.
If you’re not interested in those things, but are interested in what I’m proposing for the Fediverse, head on over to the GitHub repository hosting my work-in-progress proposal draft as I continue to develop it.
How to Quickly Build E2EE
If one were feeling particularly cavalier about their E2EE design, they could just generate public keys, dump them through a server they control, pass them between users, and have clients encrypt messages client-side. Over and done. Check that box. Every public key would be ephemeral and implicitly trusted, and the threat model would mostly be, “I don’t want to deal with law enforcement data requests.”
Hell, I’ve previously written an incremental blog post to teach developers about E2EE that begins with this sort of design. Encrypt first, ratchet second, manage trust relationships on public keys last.
If you’re catering to a slightly tech-savvy audience, you might throw in SHA256(pk1 + pk2) -> hex2dec() and call it a fingerprint / safety number / “conversation key” and not think further about this problem.
Look, technical users can verify out-of-band that they’re not being machine-in-the-middle attacked by our service.
(An absolute fool who thinks most people will ever do this)
From what I’ve gathered, this appears to be the direction that Twitter is going.
https://twitter.com/wongmjane/status/1592831263182028800
Now, if you’re building E2EE into a small hobby app that you developed for fun (say: a World of Warcraft addon for erotic roleplay chat), this is probably good enough.
If you’re building a private messaging feature that is intended to “superset Signal” for hundreds of millions of people, this is woefully inadequate.
https://twitter.com/elonmusk/status/1590426255018848256
Art: LvJ
If this is, indeed, the direction Musk is pushing what’s left of Twitter’s engineering staff, here is a brief list of problems with what they’re doing.
- Twitter Web. How do you access your E2EE DMs after opening Twitter in your web browser on a desktop computer?
- If you can, how do you know twitter.com isn’t including malicious JavaScript to snarf up your secret keys on behalf of law enforcement or a nation state with a poor human rights record?
- If you can, how are secret keys managed across devices?
- If you use a password to derive a secret key, how do you prevent weak, guessable, or reused passwords from weakening the security of the users’ keys?
- If you cannot, how do users decide which is their primary device? What if that device gets lost, stolen, or damaged?
- Authenticity. How do you reason about the person you’re talking with?
- Forward Secrecy. If your secret key is compromised today, can you recover from this situation? How will your conversation participants reason about your new Conversation Key?
- Multi-Party E2EE. If a user wants to have a three-way E2EE DM with the other members of their long-distance polycule, does Twitter enable that?
- How are media files encrypted in a group setting? If you fuck this up, you end up like Threema.
- Is your group key agreement protocol vulnerable to insider attacks?
- Cryptography Implementations.
- What does the KEM look like? If you’re using ECC, which curve? Is a common library being used in all devices?
- How are you deriving keys? Are you just using the result of an elliptic curve (scalar x point) multiplication directly without hashing first?
- Independent Third-Party Review.
- Who is reviewing your protocol designs?
- Who is reviewing your cryptographic primitives?
- Who is reviewing the code that interacts with E2EE?
- Is there even a penetration test before the feature launches?
As more details about Twitter’s approach to E2EE DMs come out, I’m sure the above list will be expanded with even more questions and concerns.
My hunch is that they’ll reuse liblithium (which uses Curve25519 and Gimli) for Twitter DMs, since the only expert I’m aware of in Musk’s employ is the engineer that developed that library for Tesla Motors. Whether they’ll port it to JavaScript or just compile to WebAssembly is hard to say.
How To Safely Build E2EE
You first need to decompose the E2EE problem into five separate but interconnected problems.
- Client-Side Secret Key Management.
- Multi-device support
- Protect the secret key from being pilfered (i.e. by in-browser JavaScript delivered from the server)
- Public Key Infrastructure and Trust Models.
- TOFU (the SSH model)
- X.509 Certificate Authorities
- Certificate/Key/etc. Transparency
- SigStore
- PGP’s Web Of Trust
- Key Agreement.
- While this is important for 1:1 conversations, it gets combinatorially complex when you start supporting group conversations.
- On-the-Wire Encryption.
- Direct Messages
- Media Attachments
- Abuse-resistance (i.e. message franking for abuse reporting)
- The Construction of the Previous Four.
- The vulnerability of most cryptographic protocols exists in the joinery between the pieces, not the pieces themselves. For example, Matrix.
This might not be obvious to someone who isn’t a cryptography engineer, but each of those five problems is still really hard.
To wit: The latest IETF RFC draft for Message Layer Security, which tackles the Key Agreement problem above, clocks in at 137 pages.
Additionally, the order I specified these problems matters; it represents my opinion of which problem is relatively harder than the others.
When Twitter’s CISO, Lea Kissner, resigned, they lost a cryptography expert who was keenly aware of the relative difficulty of the first problem.
https://twitter.com/LeaKissner/status/1592937764684980224
You may also notice the order largely mirrors my previous guide on the subject, in reverse. This is because, when teaching a subject, you start with the simplest and most familiar component. When you’re solving problems, you generally want the opposite: solve the hardest problems first, then work towards the easier ones.
This is precisely what I’m doing with my E2EE proposal for the Fediverse.
The Journey of a Thousand Miles Begins With A First Step
Before you write any code, you need specifications.
Before you write any specifications, you need a threat model.
Before you write any threat models, you need both a clear mental model of the system you’re working with and how the pieces interact, and a list of security goals you want to achieve.
Less obviously, you need a specific list of non-goals for your design: Properties that you will not prioritize. A lot of security engineering involves trade-offs. For example: elliptic curve choice for digital signatures is largely a trade-off between speed, theoretical security, and real-world implementation security.
If you do not clearly specify your non-goals, they still exist implicitly. However, you may find yourself contradicting them as you change your mind over the course of development.
Being wishy-washy about your security goals is a good way to compromise the security of your overall design.
In my Mastodon E2EE proposal document, I have a section called Design Tenets, which states the priorities used to make trade-off decisions. I chose Usability as the highest priority, because of AviD’s Rule of Usability.
Security at the expense of usability comes at the expense of security.
(Avi Douglen, Security StackExchange)
Underneath Tenets, I wrote Anti-Tenets. These are things I explicitly and emphatically do not want to prioritize. Interoperability with any incumbent designs (OpenPGP, Matrix, etc.) is the most important anti-tenet when it comes to making decisions. If our end-state happens to interop with someone else’s design, cool. I’m not striving for it, though!
Finally, this section concludes with a more formal list of Security Goals for the whole project.
Art: LvJ
Every component (from the above list of five) in my design will have an additional dedicated Security Goals section and Threat Model. For example: Client-Side Secret Key Management.
You will then need to tackle each component independently. The threat model for secret-key management is probably the trickiest. The actual encryption of plaintext messages and media attachments is comparatively simple.
Finally, once all of the pieces are laid out, you have the monumental (dare I say, mammoth) task of stitching them together into a coherent, meaningful design.
If you did your job well at the outset, and correctly understand the architecture of the distributed system you’re working with, this will mostly be straightforward.
Making Progress
At every step of the way, you do need to stop and ask yourself, “If I was an absolute chaos gremlin, how could I fuck with this piece of my design?” The more pieces your design has, the longer the list of ways to attack it will grow. It’s also helpful to occasionally consider formal methods and security proofs. This can have surprising implications for how you use some algorithms.
You should also be familiar enough with the cryptographic primitives you’re working with before you begin such a journey; because even once you’ve solved the key management story (problems 1, 2 and 3 from the above list of 5), cryptographic expertise is still necessary.
- If you’re feeding data into a hash function, you should also be thinking about domain separation. More information.
- If you’re feeding data into a MAC or signature algorithm, you should also be thinking about canonicalization attacks. More information.
- If you’re encrypting data, you should be thinking about multi-key attacks and confused deputy attacks. Also, the cryptographic doom principle if you’re not using IND-CCA3 algorithms.
- At a higher-level, you should proactively defend against algorithm confusion attacks.
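Domain separation, the first bullet above, can be as simple as a length-prefixed tag mixed into every hash invocation. This is a generic sketch of the idea, not any particular protocol’s construction:

```python
import hashlib

def tagged_hash(tag: bytes, data: bytes) -> bytes:
    # Length-prefix the tag so a hash computed for "purpose A" can never be
    # replayed as a hash for "purpose B", even with attacker-controlled data.
    return hashlib.sha256(len(tag).to_bytes(8, "little") + tag + data).digest()

msg = b"hello"
assert tagged_hash(b"proto.commitment", msg) != tagged_hash(b"proto.leaf", msg)
```

The same length-prefixing trick, applied to every variable-length field fed into a MAC or signature, is also the standard defense against the canonicalization attacks in the second bullet.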
How Do You Measure Success?
It’s tempting to call the project “done” once you’ve completed your specifications and built a prototype, and maybe even published a formal proof of your design, but you should first collect data on every important metric:
- How easy is it to use your solution?
- How hard is it to misuse your solution?
- How easy is it to attack your solution? Which attackers have the highest advantage?
- How stable is your solution?
- How performant is your solution? Are the slow pieces the deliberate result of a trade-off? How do you know the balance was struck correctly?
Where We Stand Today
I’ve only begun writing my proposal, and I don’t expect it to be truly ready for cryptographers or security experts to review until early 2023. However, my clearly specified tenets and anti-tenets were already useful in discussing my proposal on the Fediverse.
@soatok @fasterthanlime Should probably embed the algo used for encryption in the data used for storing the encrypted blob, to support multiples and future changes.
(@fabienpenso@hachyderm.io proposes in-band protocol negotiation instead of versioned protocols)
The main things I wanted to share today are:
- The direction Twitter appears to be heading with their E2EE work, and why I think it’s a flawed approach
- Designing E2EE requires a great deal of time, care, and expertise; getting to market quicker at the expense of a clear and careful design is almost never the right call
Mastodon? ActivityPub? Fediverse? OMGWTFBBQ!
In case anyone is confused about Mastodon vs ActivityPub vs Fediverse lingo:
The end goal of my proposal is that I want to be able to send DMs to queer furries that use Mastodon such that only my recipient can read them.
Achieving this end goal almost exclusively requires building for ActivityPub broadly, not Mastodon specifically.
However, I only want to be responsible for delivering this design into the software I use, not for every single possible platform that uses ActivityPub, nor all the programming languages they’re written in.
I am going to be aggressive about preventing scope creep, since I’m doing all this work for free. (I do have a Ko-Fi, but I won’t link to it from here. Send your donations to the people managing the Mastodon instance that hosts your account instead.)
My hope is that the design documents and technical specifications become clear enough that anyone can securely implement end-to-end encryption for the Fediverse–even if special attention needs to be given to the language-specific cryptographic libraries that you end up using.
Art: LvJ
Why Should We Trust You to Design E2EE?
This sort of question comes up inevitably, so I’d like to tackle it preemptively. My answer to every question that begins with, “Why should I trust you” is the same: You shouldn’t.
There are certainly cryptography and cybersecurity experts that you will trust more than me. Ask them for their expert opinions of what I’m designing instead of blanketly trusting someone you don’t know.
I’m not interested in revealing my legal name, or my background with cryptography and computer security. Credentials shouldn’t matter here.
If my design is good, you should be able to trust it because it’s good, not because of who wrote it.
If my design is bad, then you should trust whoever proposes a better design instead. Part of why I’m developing it in the open is so that it may be forked by smarter engineers.
Knowing who I am, or what I’ve worked on before, shouldn’t enter your trust calculus at all. I’m a gay furry that works in the technology industry and this is what I’m proposing. Take it or leave it.
Why Not Simply Rubber-Stamp Matrix Instead?
(This section was added on 2022-11-29.) There’s a temptation, most often found in the sort of person that comments on the /r/privacy subreddit, to ask why even do all of this work in the first place when Matrix already exists.
The answer is simple: I do not trust Megolm, the protocol designed for Matrix.
Megolm has benefited from amateur review for four years. Non-cryptographers will confuse this observation with the proposition that Matrix has benefited from peer review for four years. Those are two different propositions.
In fact, the first time someone with cryptography expertise bothered to look at Matrix for more than a glance, they found critical vulnerabilities in its design. These are the kinds of vulnerabilities that are not easily mitigated, and should be kept in mind when designing a new protocol.
You don’t have to take my word for it. Listen to the Security, Cryptography, Whatever podcast episode if you want cryptographic security experts’ takes on Matrix and these attacks.
From one of the authors of the attack paper:
So they kind of, after we disclosed to them, they shared with us their timeline. It’s not fixed yet. It’s a, it’s a bigger change because they need to change the protocol. But they always said like, Okay, fair enough, they’re gonna change it. And they also kind of announced a few days after kind of the public disclosure based on the public reaction that they should prioritize fixing that. So it seems kind of in the near future, I don’t have the timeline in front of me right now. They’re going to fix that in the sense of like the— because there’s, notions of admins and so on. So like, um, so authenticating such group membership requests is not something that is kind of completely outside of, kind of like the spec. They just kind of need to implement the appropriate authentication and cryptography.
– Martin Albrecht, SCW podcast
From one of the podcast hosts:
I guess we can at the very least tell anyone who’s going forward going to try that, that like, yes indeed. You should have formal models and you should have proofs. And so there’s this, one of the reactions to kind of the kind of attacks that we presented and also to prior previous work where we kind of like broken some cryptographic protocols is then to say like, “Well crypto’s hard”, and “don’t roll your own crypto.” But in a way the thing is like, you know, we need some people to roll their own crypto because that’s how we have crypto. Someone needs to roll it. But we have developed techniques, we have developed formalisms, we have developed methods for making sure it doesn’t have to be hard, it’s not, it’s not a dark art kind of that only kind of a few, a select few can master, but it’s, you know, it’s a science and you can learn it. So, but you need to then indeed employ a cryptographer in kind of like forming, modeling your protocol and whenever you make changes, then, you know, they need to look over this and say like, Yes, my proof still goes through. Um, so like that is how you do this. And then, then true engineering is still hard and it will remain hard and you know, any science is hard, but then at least you have some confidence in what you’re doing. You might still then kind of on the space and say like, you know, the attack surface is too large and I’m not gonna to have an encrypted backup. Right. That’s then the problem of a different hard science, social science. Right. But then just use the techniques that we have, the methods that we have to establish what we need.
– Thomas Ptacek, SCW podcast
It’s tempting to listen to these experts and say, “OK, you should use libsignal instead.” But libsignal isn’t designed for federation and didn’t prioritize group messaging. The UX for Signal is like an IM application between two parties; it’s a replacement for SMS.
It’s tempting to say, “Okay, but you should use MLS then; never roll your own,” but MLS doesn’t answer the group membership issue that plagued Matrix. It punts on these implementation details.
Even if I use an incumbent protocol that privacy nerds think is good, I’ll still have to stitch it together in a novel manner. There is no getting around this.
Maybe wait until I’ve finished writing the specifications for my proposal before telling me I shouldn’t propose anything.
Credit for art used in header: LvJ, Harubaki
https://soatok.blog/2022/11/22/towards-end-to-end-encryption-for-direct-messages-in-the-fediverse/
Don't use plaintext for Argon2 salt. by soatok · Pull Request #55 · fedi-e2ee/public-key-directory-specification
h/t https://news.ycombinator.com/item?id=42216619
Is there by any chance a #Fediverse, or rather a Mastodon #API expert here who would like to apply with me to the @PrototypeFund to work on a connection and interaction between @foss_events and #Mastodon?
As the name says, it would be a prototype at first; the results could be transferred to other platforms.
Sunday 12:00, the programme "Magazyn filozofa" on the current state of #X, with guest Dominik Batorski, an internet sociologist, and the magic words #fediverse and #mastodon come up 😊:
"...certainly the first moment when some users started drifting away from Twitter was the moment it was taken over by Elon Musk; back then a few services and platforms, also based on the so-called fediverse, e.g. Mastodon, became somewhat popular, but these were not very large numbers."
"...probably few people are familiar with this service; it is a very similar platform [to X], although the difference is that it is a distributed platform, meaning that anyone can actually set up such a server. There is no centralised control with a single owner who holds that control, which also enables communication between different servers; for example Threads, which Meta is building, is also supposed to work on this technology, meaning Threads users will be able to communicate with users of other platforms. So there will be no lock-in to the platform. And the outflow of users from Twitter, well, it was quite considerable, and Twitter's usage statistics were clearly declining."
The programme itself on #tokfm is behind a #paywall, but you can read the transcript: https://audycje.tokfm.pl/podcast/166810,X-dawniej-Twitter-od-poczatku-do-wspolczesnosci-Wkrotce-niszowa-banka-dla-prawicy
#tokfm
@noam @support @Friendica Support
Fediverse users: Follow @bsky.brid.gy to allow #Bluesky users to find and follow you.
Bluesky users: Follow @ap.brid.gy to allow #Fediverse users to find and follow you.
💬 Be a ROLE MODEL - motivate others!
When did you (or your group) last bring someone into the #Fediverse?
How did you go about it?
Exactly two years ago, we started to post links on #Mastodon via our account @heiseonline 👇
https://mastodon.social/@heiseonline/109314036284496776
It took longer than I expected, but here we are: it seems like this account now continuously brings more #traffic to heise.de than #X (#Twitter) in its entirety, although it only has ¼ the number of followers (and many of them don't seem to be active anymore).
I'll prepare some graphs after the weekend.
#SocialMedia
#TwitterExodus
#MastodonMigration
#TwitterMigration
#Fediverse
For now, this is the official Mastodon account of heise online.
It is being filled by hand for the time being, but we already have further plans and want to do more here.
Last Week in Fediverse – ep 91
Loops has finally launched, Radio Free Fedi will shut down, and governance for Bridgy Fed.
The News
Loops.video, the short-form video platform, has finally launched after weeks of delays. There is now an iOS app available on TestFlight, as well as an Android APK, and there is no waitlist anymore. In statistics shared by Loops developer Daniel Supernault, Loops now has more than 8,000 people signed up and close to 1,000 videos posted. The app has the bare minimum of features, with only one feed, which seems to be algorithmic; there is no following feed. Supernault says that he is currently working on adding discovery features as well as notifications to the app. The app currently loads videos smoothly and quickly, and Supernault has already had to upgrade the server to deal with traffic. Loops does not currently federate with the rest of the fediverse, and you cannot interact with Loops from another fediverse account. This feature is planned, but there is no estimate of when it will arrive. Third-party clients are already possible with Loops, and one is already available.
Radio Free Fedi has announced that it will shut down in January 2025. Radio Free Fedi is a radio station and community that broadcasts music by people on the fediverse. The project has grown from a simple stream into multiple non-stop radio streams, a specialty channel and a channel for spoken word, and has built up a catalogue of over 400 artists whose art is broadcast on the radio. Running the project requires a large amount of work, largely done by one person. They say that this is not sustainable anymore, and that the way the project is structured makes handing it over to someone else not an option. Radio Free Fedi has been a big part of the artist community on the fediverse, contributing to a culture of celebrating independent art, and its sunset is a loss for fediverse culture.
In an update on Bridgy Fed, the software that allows bridging between different protocols, creator Ryan Barrett talks about possible futures for the project. Barrett says that Bridgy Fed is currently a side project for him, but people are asking for it to become bigger and turn into ‘core infrastructure of the social web’. Barrett is open to that possibility, but not while the project remains his personal side project; he is open to conversations about housing the project in a larger organisation, with someone experienced leading it.
The Social Web Foundation will organise a devroom at FOSDEM. FOSDEM is a yearly conference in Brussels for free and open source software, held on February 1-2, 2025. The Social Web Foundation is inviting people and projects to give talks about ActivityPub, in the format of either a 25-minute talk for bigger projects or an 8-minute lightning talk.
OpenVibe is a client for Mastodon, Bluesky and Nostr that has now added support for cross-posting to Threads as well. OpenVibe also offers a combined feed that shows posts from your accounts on all the different networks in a single feed, which can now include your Threads account alongside your Mastodon, Nostr and Bluesky accounts.
The shutdown of the botsin.space server led to some new experiments with bots on the fediverse:
- Ktistec is a single-user ActivityPub server that added support for bots in the form of scripts that the server itself periodically runs.
- A super simple server script for bots.
The Links
- Fediblock, a Tiny History – Artist Marcia X.
- A faux “Eternal September” turns into flatness – The Nexus of Privacy.
- Fediverse Migrations: A Study of User Account Portability on the Mastodon Social Network – a paper for the Internet Measurement Conference.
- IFTAS is collaborating with Bonfire on building moderation tools into the upcoming platform.
- Another update on how traffic from different platforms compare to the German news site heise.de
- Lemmy development update for the last two weeks.
- An infographic and blog on how account recommendations work in Mastodon.
- Ghost’s weekly update on their work on ActivityPub.
- For Mastodon admins: a script to ‘restart delivery to instances that had some technical difficulties a while ago but are now back online’.
- Letterbook is a social networking platform built from scratch, currently under development, and is holding office hours for maintainers.
That’s all for this week, thanks for reading!
https://fediversereport.com/last-week-in-fediverse-ep-91/
https://loops.video is now accepting new users!
Replies get an upgrade
Leisurely conversations and spirited debates, the conversation expands.
Ghost (Building ActivityPub)
I've been thinking about the demise of botsin.space. Running a site for bots is hard (and expensive) but writing and running an ActivityPub-based bot should be easy.
To prove this was the case I added experimental support for bots/automations to Ktistec in the form of scripts that the server periodically runs. These scripts can be in a programming language of your choice. The server provides credentials for its API in the process environment (if you can use curl you can publish posts), simple interaction happens via stdin/stdout/stderr, and the complexity of using ActivityPub is abstracted away.
The code is only available on the following branch for the moment:
https://github.com/toddsundsted/ktistec/commits/run-scripts/
There are a couple example shell scripts here:
https://github.com/toddsundsted/ktistec/commit/4982925a...
I have a few enhancements in mind, but it's already proven useful as a means to periodically log data from my server host, and I'll use it, when finished, to publish release notes.
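Based on the description above (credentials injected via the process environment, simple interaction over stdin/stdout), such a bot script might look roughly like the following Python sketch. The environment variable names and the API endpoint are assumptions for illustration, not Ktistec’s actual interface:

```python
# Hypothetical Ktistec-style bot: the variable names BOT_HOST/BOT_TOKEN
# and the /api/posts endpoint are invented; Ktistec's real interface
# may differ.
import json
import os
import sys
import urllib.request

def build_request(host: str, token: str, content: str) -> urllib.request.Request:
    """Build an authenticated POST request that publishes one post."""
    body = json.dumps({"content": content}).encode()
    return urllib.request.Request(
        f"{host}/api/posts",  # hypothetical endpoint
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# When the server runs the script, it injects credentials into the
# environment and the script reads its post text from stdin.
if "BOT_HOST" in os.environ and "BOT_TOKEN" in os.environ:
    req = build_request(
        os.environ["BOT_HOST"],
        os.environ["BOT_TOKEN"],
        sys.stdin.read().strip(),
    )
    urllib.request.urlopen(req)
```

The appeal of this design is that the bot author never touches ActivityPub directly: anything that can read stdin and issue an HTTP request can be a bot.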
#ktistec #activitypub #fediverse #bots
GitHub - toddsundsted/ktistec: Single user ActivityPub (https://www.w3.org/TR/activitypub/) server.
Last Week in Fediverse – ep 90
The Fediverse Schema Observatory helps to improve interoperability, the botsin.space server will shut down, and more.
The News
The Fediverse Schema Observatory is a new project by Darius Kazemi, who runs the Hometown fork of Mastodon and co-wrote the Fediverse Governance paper this year with Erin Kissane. The Observatory collects data structures from the fediverse; it looks at how different fediverse software uses and implements ActivityPub. It explicitly does not gather any personal data or posts; instead it looks at how the data is formatted in ActivityPub. ActivityPub and the fediverse have a long-standing problem in that the selling point is interoperability between different software, but every software has its own, slightly different implementation of ActivityPub, making good interoperability difficult to pull off. Kazemi has posted about the Observatory as a Request for Comments. The Observatory is explicitly not a scraper, but considering how sensitive the subject can be in the fediverse community, Kazemi has taken a careful approach of informing the community in detail beforehand about the proposed project and how it deals with data. The easiest way to see and understand how the Observatory works is with this demo video.
The botsin.space Mastodon server for bots will shut down in December. The server is dedicated to running bots, with a few thousand active ones, and is a valued part of the community: the wild variety of bots running on it contributes to Mastodon in both useful and silly ways. The admin states that over time running the server has become too expensive, and that it was not feasible to keep the project going. The shutdown of botsin.space showcases an ongoing struggle in the fediverse: running a server is expensive and time-consuming, and every time a server shuts down the fediverse loses a block of its history.
Sub.club is a way to add monetisation options to fediverse posts. Sub.club started by letting people add paywalls to Mastodon posts, recently expanded to long-form writing with support for Write.as, and has now added support for WordPress blogs as well. Sub.club has posted a tutorial on how to add the plugin to WordPress, making it an easy system to set up.
Bridgy Fed, the bridge between ActivityPub and ATproto, has gotten some updates, the main new feature being that you can now set custom domain handles on Bluesky for fediverse accounts that get bridged into Bluesky. This brings the interoperability between the networks closer to native accounts, and makes having a bridged account more attractive.
Upcoming fediverse platform for short-form video Loops got some press from The Verge and TechCrunch. Creator Daniel Supernault said that there are now 5k people on the waiting list, and that a TestFlight link will go out soon to the first 100 people. An Android APK will be made available at some point as well.
GoToSocial is working on the ability for servers to subscribe to allowlists and denylists. This makes it easier to create clusters of servers with a shared allowlist, such as the Website League. As I recently wrote, the Website League is a cluster of federating servers that uses ActivityPub but exists separately from the rest of the fediverse, started by people building a new shared space after Cohost shut down. Website League servers predominantly use GoToSocial or Akkoma, and have been actively working on tuning the software to meet their needs.
The Links
- Flipboard is now federating accounts of publishers in Brazil, Canada, Germany and the UK.
- A long read on Content Warnings that extensively touches on the culture of using Content Warnings in Mastodon and the Website League as well.
- Diving Into the World of Lemmy.
- One year after X: Embracing open science on Mastodon – a reflection by the University of Groningen Library.
- Mastodon, two years later – a continuation of the article ‘Mastodon – a partial history‘, by The Nexus of Privacy.
- This week’s fediverse software updates.
- The Event Federation project has drafted a Fediverse Enhancement Proposal for a common way to use the ‘event’ type in the fediverse.
- Setting up my federated fleamarket with flohmarkt.
- IFTAS October update.
- Ghost’s weekly update on their project to implement ActivityPub, mentioning that they have bridged their ActivityPub-based Ghost account to Bluesky as well.
- The Fediverse has empowered me to take back control from Big Tech. Now I want to help others do the same. – Elena Rossini.
- How the ‘Fediverse’ Works (and Why It Might Be the Future of Social Media) – Lifehacker.
That’s all for this week, thanks for reading!
https://fediversereport.com/last-week-in-fediverse-ep-90/
Hey friends, it's hard to write this, but it's time to retire botsin.space. I wrote a post about it here: https://muffinlabs.com/posts/2024/10/29/10-29-rip-botsin-space/
TLDR: the site will go read-only on or around December 15th.
I'm so thankful for all the support and good times here ❤️ thanks everyone
How the 'Fediverse' Works (and Why It Might Be the Future of Social Media)
A brief, jargon-free explainer on the freer future of the social web.
Justin Pot (Lifehacker)
Last Week in Fediverse – ep 86
Threads degrades their fediverse integration, a separate ActivityPub-based Island Network launches, and more news about Ghost and ActivityPub.
Threads delays posts for 15 minutes before federating
Threads’ latest update has degraded the value of their fediverse integration. Posts made on Threads will now always be delayed by 15 minutes before they are delivered to the rest of the fediverse, if fediverse sharing is turned on. The 15-minute delay was added to support post editing; posts on Threads can now be edited for 15 minutes after they are created. This used to be 5 minutes, both as the window for editing posts and as the delay before they are sent out to the rest of the fediverse.
A 15-minute delay is a long time in microblogging, and significantly impacts things like breaking news and live-posting sports events. It also meaningfully impacts the ability to have a back-and-forth conversation with people in a comment section. The delay itself is already an issue, but things get even more problematic when you consider that during live events, Threads posts with a 15-minute delay are now mixed with fediverse posts without a delay and presented as happening at the same time. This was already noticeable during yesterday’s U.S. VP debate, an event where people use microblogging for real-time reactions. Part of those reactions was actually 15 minutes delayed, while another part was not, which creates an even more confusing experience. A Threads engineer says that they want to solve this problem ‘eventually’, but that it will probably come after Threads has implemented full bi-directional interoperability.
This news is not a great start for the Social Web Foundation either, which launched last week to criticism from the wider fediverse developer community for having Meta as one of its supporting members. There is distrust of Meta’s intentions within the fediverse, and Meta degrading their fediverse integration is likely not helping.
Website League
The Website League is a new social networking project that has arisen out of the demise of Cohost. Cohost was a social media site that ran for the last two years before shutting down; on October 1st the website entered read-only mode. Cohost had a dedicated user base who appreciated the community they had built on the site. The Website League is a new project by users of Cohost (the Cohost staff is not involved) to build a successor network in Cohost’s place.
What makes the Website League stand out is that it is a federated island network, described by the Website League themselves as ‘a bunch of smallish websites that talk to each other’. This federated social network uses ActivityPub, but deliberately does not connect to the rest of the fediverse. Instead, it is an allowlist-based form of federation, where only websites/servers that agree to the Website League’s central set of rules can join.
The Website League has a big focus on community organisation and governance. Even though the project is very young, and launched under the time pressure of Cohost’s closing deadline, there are already multiple systems in place, with an active Loomio for stewardship, a wiki and more. The Website League provides a different vision of what a federated social network built on top of ActivityPub can look like, and I’m very curious to see where the project will go.
Ghost and Fedify
Ghost published their latest update on their work on adding ActivityPub, with more information about their upcoming beta. Ghost is starting their beta process soon, making it clear that this is indeed a testing program, and data loss should be expected for people who participate. They also said more about the performance and scaling of Ghost and ActivityPub: sending out a newsletter over ActivityPub to 5,000 subscribers turned out to need 10 servers, which indicates how resource-intensive and expensive ActivityPub can be. As a result, ActivityPub followers will count towards Ghost Pro billing, as Ghost Pro charges based on the number of members an account has.
Fedify, an open-source framework that simplifies building federated server apps, is now officially at version 1.0. Ghost’s ActivityPub integration is built on top of Fedify, and Ghost is sponsoring the Fedify developer as well.
The Links
- Flipboard is connecting another 250 accounts of publishers to the fediverse.
- Bonfire is building a native app, and publishing a series of developer diaries along with it.
- The first release candidate for Mastodon 4.3 is now available.
- This week’s fediverse software updates.
- Beyond technical features: why we need to talk about the values of the Fediverse (part 1) – Elena Rossini.
- Mastodon Announces Fediverse Discovery Providers – WeDistribute.
- fedi vs web – on the distinction between social network and social web, where activitypub straddles both.
- Mallory Knodel, the Executive Director for the new Social Web Foundation, writes about the new foundation.
- The Mastodon server strangeobjects.space will shut down, and in the announcement post the admins explain the emotional cost and impact that comes with being a server admin.
That’s all for this week, thanks for reading!
Subscribe to our newsletter!
https://fediversereport.com/last-week-in-fediverse-ep-86/
Last Week in Fediverse – ep 85
It’s been an eventful week in the fediverse, with the Swiss government ending their Mastodon pilot, the launch of the Social Web Foundation, Interaction Policies with GoToSocial and more!
Swiss Government’s Mastodon instance will shut down
The Swiss Government will shut down their Mastodon server at the end of the month. The server was launched in September 2023 as a pilot that lasted one year. In the original announcement last year, the Swiss government focused on Mastodon’s benefits regarding data protection and autonomy. Now that the pilot has run for a year, the government has decided not to continue. The main reason they give is low engagement, stating that the six government accounts had around 3,500 followers combined, and that their contributions also had low engagement rates. The government also notes the falling number of active Mastodon users worldwide as a contributing factor. When the Mastodon pilot launched in September 2023, Mastodon had around 1.7M monthly active users, a number that has dropped a year later to around 1.1M.
The Social Web Foundation has launched
The Social Web Foundation (SWF) is a new foundation managed by Evan Prodromou, with the goal of growing the fediverse into a healthy, financially viable and multi-polar place. The foundation launches with the support of quite a few organisations. Some are fediverse-native organisations such as Mastodon, but Meta, Automattic and Medium are also among the organisations that support the SWF. The Ford Foundation also supports the SWF with a large grant, and in total the organisation has close to 1 million USD in funding.
The SWF lists four projects that they’ll be working on for now:
- Adding end-to-end encryption to ActivityPub, a project that Evan Prodromou and Tom Coates (another member of the SWF) recently got a grant for.
- Creating and maintaining a fediverse starter page. There are quite a variety of fediverse starter pages around already, but not all well maintained.
- A Technical analysis and report on compatibility between ActivityPub and GDPR.
- Working on long-form text in the fediverse.
The SWF is explicit in how they define two terms that have had a long and varied history: they state that the ‘fediverse’ is equivalent to the ‘Social Web’, and that the fediverse only consists of platforms that use ActivityPub. Both of these statements are controversial, to put it mildly, and I recommend this article for an extensive overview of the variety of ways the term ‘fediverse’ is used by different groups of people, all with different ideas of what this network actually is and what is a part of it. The explicit exclusion and rejection of Bluesky and the AT Protocol as not the correct protocol is especially noteworthy.
Another part of the SWF’s announcement that stands out is the inclusion of Meta as one of the supporting organisations. Meta’s arrival in the fediverse with Threads has been highly controversial since it was announced over a year ago, and one of the continuing worries that many people express is that of an ‘Extend-Embrace-Extinguish’ strategy by Meta. As the SWF will become a W3C member, and will likely continue to be active in the W3C groups, Meta being a supporter of the SWF will likely not diminish these worries.
As the SWF is an organisation with a goal of evangelising and growing the fediverse, it is worth pointing out that the reaction from a significant group within the fediverse developer community is decidedly mixed, with the presence of Meta and the exclusive claim on the terms Social Web and fediverse being the main reasons. And as the goal of the SWF is to evangelise and grow the fediverse, can it afford to lose the potential growth that comes from the support and outreach of the current fediverse developers?
Software updates
There are quite a few interesting fediverse software updates this week that are worth pointing out:
GoToSocial’s v0.17 release brings the software to a beta state, with a large number of new features added. The main standout feature is Interaction Policies, with GoToSocial explaining: “Interaction policies let you determine who can reply to, like, or boost your statuses. You can accept or reject interactions as you wish; accepted replies will be added to your replies collection, and unwanted replies will be dropped.”
Interaction Policies are a highly important safety feature, especially the ability to turn off replies, as game engine Godot found out this week. This is an area where Mastodon lags behind other projects, on the basis that it is very difficult in ActivityPub to fully prevent other people from replying to a post. GoToSocial takes a more practical route by telling other software what the interaction policy is for a specific post; if a reply does not meet the policy, it is simply dropped.
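The drop-if-not-allowed behaviour described above can be sketched as a single policy check on the receiving side. The policy fields below are a simplification invented for illustration and do not reproduce GoToSocial’s actual interaction-policy vocabulary:

```python
# Simplified interaction-policy check: a reply is kept only if the
# post's stated policy allows the replying actor; otherwise it is
# silently dropped. Field names here are illustrative, not GoToSocial's.

def reply_allowed(policy: dict, reply_actor: str, followers: set) -> bool:
    can_reply = policy.get("canReply", "public")
    if can_reply == "public":
        return True
    if can_reply == "followers":
        return reply_actor in followers
    return False  # "nobody", or any unrecognised value: drop

def handle_reply(policy: dict, reply: dict, followers: set) -> list:
    """Return the additions to the replies collection: [reply] if accepted, [] if dropped."""
    return [reply] if reply_allowed(policy, reply["actor"], followers) else []
```

The practical point is that the policy is advisory for remote servers but enforced locally: even if a remote server ignores it and sends the reply anyway, the reply never enters the author’s replies collection.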
- The PeerTube 6.3 release brings the ability to separate video streams from audio streams. This allows people to use PeerTube as an audio streaming platform as well as a video streaming platform.
- The latest update for NodeBB signals that the ActivityPub integration for the forum software is now ready for beta testing.
- Ghost’s latest update reports fully working bi-directional federation, and they state that a private beta is now weeks away.
In Other News
IFTAS has started a staged rollout of their Content Classification Service. With the opt-in service, a server can let IFTAS check all incoming image hashes for CSAM, with IFTAS handling the (for US-based servers) required reporting to NCMEC. IFTAS reports that over 50 servers have already signed up to participate in the service. CSAM remains a significant problem on decentralised social networks, something that is difficult for (volunteer) admins to deal with. IFTAS’ service makes this significantly easier while helping admins execute their legal responsibilities. Emelia Smith also demoed the CCS during last week’s FediForum.
The Links
- All the speed demo videos of last week’s FediForum are now available on PeerTube.
- Evan Prodromou’s book about ActivityPub, ‘ActivityPub: Programming for the Social Web‘, has officially launched.
- Lemmy Development Update.
- PieFed’s Development update for September 2024.
- A tool to make sure you see all replies on a fediverse post (and an explanation of how it differs from FediFetcher).
- A work-in-progress Rust library for ActivityPub.
- The German Data Protection Office updated their Data Protection Guidelines for running a Mastodon server.
- The Revolution Will Be Federated – WeDistribute.
- This week’s updates for fediverse software.
That’s all for this week, thanks for reading!
https://fediversereport.com/last-week-in-fediverse-ep-85/
The Revolution Will Be Federated
In this final, crucial campaign stretch: Mainstream platforms are oversaturated, while millions on the “fediverse” are perfectly situated for progressive organizing – and largely overlooked. The 2024…
Heidi Li Feldman (We Distribute)
The Social Web Foundation
(An announcement follows.) But first, TPRC 2024: Last week I participated in the 2024 edition of the Telecommunications Policy Research Conference in Washington DC. TPRC is the Research Conference on Communications, Information and Internet Policy.
Mallory Knodel (Internet Exchange)
Looks like someone really kicked the hornet’s nest recently on Mastodon by announcing (not even deploying) a Mastodon-Bluesky bridge. Just take a look at the GitHub comments here to get an idea of how this was received.
Plenty of people way more experienced than myself have weighed in on this issue, so I don’t feel the need to leave my two cents. However, I wanted to talk about a very common counter-argument made against those who do not want such bridges to exist: namely, that the Fediverse already provides the tools to make such a bridge a non-issue. The allow-list model.
The idea being that if your ActivityPub server by default rejects all federation except with trusted instances, then such bridges pose no problems whatsoever. The bridge (and any potential undercover APub scrapers) would not be able to reach your instance anyway.
Naturally, the counterargument is that this is way too limiting to one’s reach, and that people shouldn’t be forced into isolation like this. Unfortunately, the alternative on offer appears to be scolding others into submission, and this is unlikely to be a long-term solution. Eventually the Eternal September will come to the Fediverse. If you’ve spent the past few years relying on peer pressure to enforce social norms, then the influx of people who do not share your values is going to make that tactic moot.
In fact, we can already see the pushback to the scolding tactics unfolding right now.
The solution then has to be to improve the way we handle such scenarios: improve the tooling and our tactics so that such bridges and scrapers cannot be an issue.
A lot of the frustration I feel also comes down to the limited set of tools provided by Mastodon and other Fediverse services. Too often, improvements to that tooling are stubbornly refused by privileged core developers who don’t feel the need to support the needs of marginalized communities. But that doesn’t mean the tooling couldn’t be expanded to be more flexible.
So let’s think about the Allow-List model for a moment. The biggest issue with an Allow-List is not necessarily that the origin server cuts itself off from the wider discussion. In fact, they’re probably perfectly happy with that. The problem is that if this became the norm, it would massively restrict the biggest strength of the Fediverse, which is that anyone can create and run their own server.
If I make a new server and most of the people I want to interact with are on Allow-List servers, how do I even get in? We would then have to start creating informal communication channels where one has to apply to join the allow-circle. Such processes have way too many drawbacks to list, such as naturally marginalizing neurodivergent people with Rejection Sensitivity Dysphoria, balkanizing the Fediverse, empowering whisper networks, and so on.
I want to instead suggest an alternative hybrid approach: The Feeler network. (provisional name)
The idea is thus: you have your well-protected servers in Allow-List mode. These are the servers which require protection from the constant harassment that comes when their posts spread publicly. These servers have a few “Feeler” instances they trust on their allow-list. Those Feeler servers in turn do not have allow-mode turned on, but rely on blocklists as usual. Their users would be those privileged enough to handle the occasional abuse or troll coming their way before blocking them.
So far so good. Nothing changes here. However, what if those Feeler servers could also use their wider reach to see which instances are cool and announce that to the servers that trust them? Say a new instance appears in your federation. You, as a Feeler server, interact with them for a bit, nothing suspicious happens, and their users all seem ideologically aligned enough. You then add them to a public “endorsed list”. Now all the servers in your trust circle who are in allow-mode see this endorsement and automatically add that instance to their allow-lists. Bam! Problem solved. New servers have a way to be seen and eventually come into reach of Allow-List instances through a sort of organic probation period, and allow-listed servers can keep expanding their reach without private communications or arduous application processes.
Now you might argue: “Hey Db0, yes my feelers can see my allow-list server posts, but if they boost them, now anyone can see them, and now they will be bridged to bluesky and I’m back in a bad spot!”
Yes, this is possible, but it is also technically solvable. All we need is for Feeler servers to federate boosts of posts from allow-list-mode servers only to the servers that the origin instance already allows. So let’s say servers T1 and T2 are instances in allow-list mode which trust each other, server F1 is a Feeler server trusted by T1 and T2, and server S1 is an external instance that is not blocked by F1 but not yet endorsed either. A user on F1 boosts a post from T1. Normally a user on S1 would see that post by following that user. All we need to change in the software is that when F1 boosts a post from T1, the boost federates only towards T2 and the other instances on T1’s allow-list, instead of everyone. Sure, this would add a bit of complexity to boosts, but it’s nothing impossible. Let’s call this a “protected boost”.
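The routing rule described above can be sketched as a small audience filter. The server objects and attribute names below are my own illustration, not taken from any existing implementation:

```python
def boost_audience(origin_server, followers):
    """Return the subset of follower servers that may receive a boost
    of a post originating on origin_server."""
    if not origin_server["allowlist_mode"]:
        return set(followers)  # normal boost: every follower's server gets it
    # protected boost: only servers the origin already allows receive it
    return set(followers) & set(origin_server["allowlist"])

T1 = {"allowlist_mode": True, "allowlist": ["T2", "F1"]}  # allow-list server
S1 = {"allowlist_mode": False, "allowlist": []}           # ordinary server

# A Feeler user boosts a T1 post; their followers live on T2 and S1.
recipients = boost_audience(T1, ["T2", "S1"])
# Only T2 (on T1's allow-list) receives the boost; S1 is excluded.
```

The same check would run on every boost delivery, so the extra complexity is one set intersection per allow-list-mode origin.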
Of course, this would require all APub software to expose an “Endorsement” list for this to work. This is where the big difficulty comes in, as you now have to herd the cats that are the multitude of APub developers into adding new functionality. Fortunately, this is where tools like the Fediseer can cover for the lack of development, or its outright rejection by your software’s developers. The Fediseer already provides endorsement functionality along with a full REST API, so you can already implement this Feeler functionality with a few simple scripts!
The “protected boost” mode would require Mastodon developers to do some work of course, as it relies on software internals which cannot be easily hacked by server admins. But this too could potentially just be a patch to the software that only Feeler admins would need to run.
The best part of this approach is that it doesn’t require any direct communication whatsoever. All it needs is for the Feeler admins to actively curate their endorsements (either on the Fediseer, or locally if it’s ever added to the software). Then all an allow-list server has to do is choose which Feelers they trust and “subscribe” to their endorsement lists for their own allow-list. And of course, they can synchronize or expand their allow-list further as they wish. This approach turns the distributed nature of the Fediverse into a strength, instead of a weakness!
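As a rough sketch of what such a subscription script could look like: the Fediseer endpoint path below is an assumption on my part, so check the actual Fediseer REST API documentation before relying on it. The merging logic is the part that matters.

```python
# Sync an allow-list from trusted Feelers' public endorsement lists.
import json
import urllib.request

FEDISEER = "https://fediseer.com/api/v1"

def fetch_endorsements(feeler_domain):
    """Fetch the set of instances endorsed by one Feeler (network call).
    The endpoint path is assumed, not verified against the real API."""
    url = f"{FEDISEER}/endorsements/{feeler_domain}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return {inst["domain"] for inst in data.get("instances", [])}

def build_allowlist(trusted_feelers, manual_domains, fetch=fetch_endorsements):
    """Union every trusted Feeler's endorsements with manual additions."""
    allowlist = set(manual_domains)
    for feeler in trusted_feelers:
        allowlist |= fetch(feeler)
    return sorted(allowlist)

# Offline example with a stubbed fetcher instead of a real network call:
stub = {"feeler.example": {"new.example", "cool.example"}}
allow = build_allowlist(["feeler.example"], {"t2.example"}, fetch=stub.get)
```

A cron job running something like this and writing the result into the server’s allow-list would be the entire “subscription” mechanism.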
Now personally, I’m a big proponent of the “human touch” in social networks, so I feel that endorsement lists should be a manual mechanism. But if you want to take this to the next level, you could also easily set up a mechanism where newly discovered instances automatically pass into your endorsement list after X weeks or months of interaction with your users without reports, plus X amount of likes or whatever. Assuming on-point admins, this could make Feeler servers widely trusted gateways into a well-protected space on the fedi, where bad actors would find it extraordinarily difficult to infiltrate, regardless of how many instances they spawn. And this network would still keep increasing its reach constantly, without adding an extraordinary amount of load to its admins.
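In its simplest form, that probation rule could look like the sketch below. The thresholds are placeholders I chose for illustration; the post deliberately leaves them as “X”:

```python
# Minimal sketch of an automatic endorsement rule for a Feeler server.
# Threshold values are illustrative placeholders, not from the post.

def qualifies_for_endorsement(days_interacting, open_reports, likes,
                              min_days=60, min_likes=20):
    """An instance qualifies after enough clean interaction with our users."""
    return (days_interacting >= min_days
            and open_reports == 0
            and likes >= min_likes)

assert qualifies_for_endorsement(90, 0, 30)      # long, clean history: endorse
assert not qualifies_for_endorsement(90, 2, 30)  # open reports: hold off
```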
Barring the “protected boost” mode, this concept is already possible through the Fediseer. The scripts to do this work already exist as well. All it requires is for people to attempt to use it and see how it functions!
Do point out pitfalls you foresee in this approach and we can discuss how to potentially address them.
https://dbzer0.com/blog/can-we-improve-the-fediverse-allow-list-model/
#fediseer #fediverse #mastodon
In short, who are you yelling at? Who do you expect to "fix" things for you? Right now people are coming down on the guy who is building the bridge to bluesky. That specific guy. They're yelling at him and telling him to make different decisions to protect their personal privacy. Is that what people think they signed up for with the fediverse? Fighting with other individual humans and trying to force them to do what you want?
Last Week in Fediverse – ep 85
It’s been an eventful week in the fediverse, with the Swiss government ending their Mastodon pilot, the launch of the Social Web Foundation, Interaction Policies with GoToSocial and more!
Swiss Government’s Mastodon instance will shut down
The Swiss Government will shut down their Mastodon server at the end of the month. The Mastodon server was launched in September 2023, as a pilot that lasted one year. During the original announcement last year, the Swiss government focused on Mastodon’s benefits regarding data protection and autonomy. Now that the pilot has run for a year, the government has decided not to continue. The main reason they give is low engagement, stating that the 6 government accounts had around 3500 followers combined, and that the contributions also had low engagement rates. The government also notes the falling number of active Mastodon users worldwide as a contributing factor. When the Mastodon pilot launched in September 2023, Mastodon had around 1.7M monthly active users, a number that has dropped a year later to around 1.1M.
The Social Web Foundation has launched
The Social Web Foundation (SWF) is a new foundation managed by Evan Prodromou, with the goal of growing the fediverse into a healthy, financially viable and multi-polar place. The foundation launches with the support of quite a few organisations. Some are fediverse-native organisations such as Mastodon, but Meta, Automattic and Medium are also part of the organisations that support the SWF. The Ford Foundation also supports the SWF with a large grant, and in total the organisation has close to 1 million USD in funding.
The SWF lists four projects that they’ll be working on for now:
- adding end-to-end encryption to ActivityPub, a project that Evan Prodromou and Tom Coates (another member of the SWF) recently got a grant for.
- Creating and maintaining a fediverse starter page. There are quite a variety of fediverse starter pages around already, but not all are well maintained.
- A Technical analysis and report on compatibility between ActivityPub and GDPR.
- Working on long-form text in the fediverse.
The SWF is explicit in how they define two terms that have had a long and varied history: they state that the ‘fediverse’ is equivalent to the ‘Social Web’, and that the fediverse only consists of platforms that use ActivityPub. Both of these statements are controversial, to put it mildly, and I recommend this article for an extensive overview of the variety of ways that the term ‘fediverse’ is used by different groups of people, all with different ideas of what this network actually is and what is a part of it. The explicit exclusion and rejection of Bluesky and the AT Protocol as not being the correct protocol is especially noteworthy.
Another part of the SWF’s announcement that stands out is the inclusion of Meta as one of the supporting organisations. Meta’s arrival in the fediverse with Threads has been highly controversial since it was announced over a year ago, and one of the continuing worries that many people express is that of an ‘Extend-Embrace-Extinguish’ strategy by Meta. As the SWF will become a W3C member, and will likely continue to be active in the W3C groups, Meta being a supporter of the SWF will likely not diminish these worries.
As the SWF is an organisation with a goal of evangelising and growing the fediverse, it is worth pointing out that the reaction from a significant group within the fediverse developer community is decidedly mixed, with the presence of Meta and the exclusive claim on the terms ‘Social Web’ and ‘fediverse’ being the main reasons. And as the goal of the SWF is to evangelise and grow the fediverse, can it afford to lose the potential growth that comes from the support and outreach of the current fediverse developers?
Software updates
There are quite a few interesting fediverse software updates this week that are worth pointing out:
GoToSocial’s v0.17 release brings the software to a beta state, with a large number of new features added. The main standout feature is Interaction Policies, with GoToSocial explaining: “Interaction policies let you determine who can reply to, like, or boost your statuses. You can accept or reject interactions as you wish; accepted replies will be added to your replies collection, and unwanted replies will be dropped.”
Interaction Policies are a highly important safety feature, especially the ability to turn off replies, as game engine Godot found out this week. It is an area where Mastodon lags behind other projects, on the grounds that it is very difficult in ActivityPub to fully prevent other people from replying to a post. GoToSocial takes a more practical route by telling other software what its interaction policy is for that specific post, and if a reply does not meet the policy, it is simply dropped.
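The drop-on-policy-violation behaviour can be sketched roughly as follows. The policy shape here is simplified and hypothetical; GoToSocial’s real interaction policies are richer than this:

```python
# Illustrative sketch: drop an incoming reply if it doesn't meet the policy.

def accept_reply(policy, replier, author, author_followers):
    """Decide whether an incoming reply satisfies the post's reply policy."""
    scope = policy.get("can_reply", "anyone")
    if scope == "anyone":
        return True
    if scope == "followers":
        return replier == author or replier in author_followers
    if scope == "nobody":
        return replier == author
    return False  # unknown scope: fail closed and drop

policy = {"can_reply": "followers"}
kept = accept_reply(policy, "alice", "bob", {"alice"})        # follower: kept
dropped = accept_reply(policy, "mallory", "bob", {"alice"})   # stranger: dropped
```

The key design point is that enforcement happens on the receiving side: non-conforming replies never enter the author’s replies collection.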
- Peertube 6.3 release brings the ability to separate video streams from audio streams. This now allows people to use PeerTube as an audio streaming platform as well as a video streaming platform.
- The latest update for NodeBB signals that the ActivityPub integration for the forum software is now ready for beta testing.
- Ghost’s latest update now has fully working bi-directional federation, and they state that a private beta is now weeks away.
In Other News
IFTAS has started with a staged rollout of their Content Classification Service. With the opt-in service, a server can let IFTAS check all incoming image hashes for CSAM, with IFTAS handling the required (for US-based servers) reporting to NCMEC. IFTAS reports that over 50 servers already have signed up to participate with the service. CSAM remains a significant problem on decentralised social networks, something that is difficult to deal with for (volunteer) admins. IFTAS’ service makes this significantly easier while helping admins to execute their legal responsibilities. Emelia Smith also demoed the CCS during last week’s FediForum.
The Links
- All the speed demo videos of last week’s FediForum are now available on PeerTube.
- Evan Prodromou’s book about ActivityPub, ‘ActivityPub: Programming for the Social Web‘ has officially launched.
- Lemmy Development Update.
- PieFed’s Development update for September 2024.
- A tool to make sure you see all replies on a fediverse post (and an explanation of how it differs from FediFetcher).
- A work-in-progress Rust library for ActivityPub.
- The German Data Protection Office updated their Data Protection Guidelines for running a Mastodon server.
- The Revolution Will Be Federated – WeDistribute.
- This week’s updates for fediverse software.
That’s all for this week, thanks for reading!
https://fediversereport.com/last-week-in-fediverse-ep-85/
Launch of Social Web Foundation
Leaders of the open social networking movement have formed the Social Web Foundation, a non-profit organization dedicated to making connections between social platforms with the open standard protocol ActivityPub. The “social web”, also called the “Fediverse”, is a network of independent social platforms connected with the open standard protocol ActivityPub. Users on any platform can follow their friends, family, influencers, or brands on any other participating network.
ActivityPub was standardized by the World Wide Web Consortium (W3C) in 2018. It has attracted over 100 software implementations, tens of thousands of supporting web sites, and tens of millions of users.
Advocates of this increased platform choice say it will bring more individual control, more innovation, and a healthier social media experience. But there is work to do: journalism, activism, and the public square remain in a state of uncertain dissonance and privacy, safety and agency remain important concerns for anyone participating in a social network.
Leadership
The founding team of SWF merges knowledge of the Fediverse with a user-centric mindset.
- Evan Prodromou, current editor of the ActivityPub specification and author of the book “ActivityPub: Programming for the Social Web” from O’Reilly Media, is Research Director.
- Mallory Knodel, previously CTO of the Center for Democracy and Technology and human rights and internet standards researcher, will act as Executive Director.
- Tom Coates, product designer and entrepreneur, will serve as the organization’s Product Director.
Mallory Knodel (@mallory@techpolicy.social) says, “To fight inequality, participate in democracy, build an equitable society and economy, we can’t rely on a few corporate-owned, profit-driven spaces. The Social Web Foundation is our best chance to establish the conditions in which the new social media operates with zero harm.”
Program
The foundation’s program will concentrate on:
- educating general and targeted audiences about the social web
- informing policy-makers about issues on the social web
- enhancing and extending the ActivityPub protocol
- building tools and plumbing to make the social web easier and more engaging to use
“With this program, The Social Web Foundation can catalyze more growth on the Fediverse while improving user experience and safety,” says founder Prodromou (@evanprodromou@socialwebfoundation.org). “Our goal is to unblock users, developers and communities so they can get the most out of their social web experience.”
Industry support
The founders are supported by advisors from the social networking world including Chris Messina, Kaliya (Identity Woman) Young and Johannes Ernst, as well as companies and Open Source projects that have implemented ActivityPub:
- Mastodon
- Automattic
- Meta
- Ghost
- Pixelfed
- Medium
- IFTAS
- Write.as
- Fastly
- Vivaldi
- The BLVD
“Mastodon is committed to the Fediverse and proud to back the Social Web Foundation’s efforts to build a stronger, more open, and dynamic social web for all,” says Eugen Rochko, Founder and CEO, Mastodon (@Gargron@mastodon.social).
“Our vision for Threads has always been to make it the place for public conversation, and interoperability is an important part of that. That’s why we integrated Threads with the Fediverse through ActivityPub,” says Rob Sherman, VP and Deputy Chief Privacy Officer at Meta (@robsherman@threads.net). “We believe that the Fediverse helps create a more diverse ecosystem that empowers users to connect, share, and learn from each other in new and innovative ways.”
“Automattic is excited about the launch of the Social Web Foundation and its mission,” says Matthias Pfefferle, Open Web Lead at Automattic, makers of WordPress.com (@pfefferle@notiz.blog). “We’re eager to collaborate with the Foundation to expand platform diversity and enhance the support for various content types—especially long-form content—within the Fediverse, fostering greater interoperability across the ecosystem.”
“We’ve been inspired by the products being developed across the Fediverse and the people we’ve had the pleasure to work with,” said Mike McCue, Flipboard CEO (@mike@flipboard.com). “And now, with the Social Web Foundation established, there will be a dedicated organization to foster even greater awareness, collaboration and innovation. We’re excited to be a part of this next wave of the web, using open standards to advance how we connect with each other every day.”
The Foundation will collaborate with other non-profit organizations in the space. “IFTAS wholeheartedly welcomes the launch of the Social Web Foundation and its commitment to a healthy Fediverse,” says Jaz-Michael King, executive director (@jaz@mastodon.iftas.org). “We anticipate great opportunities for collaboration in our efforts to enhance trust and safety, and we look forward to working with the Foundation to strengthen the Fediverse for the benefit of all its communities.”
“The Fediverse reminds us of the early days of the Web. We are competing against silos and corporate interests, using a W3C-based open standard and a distributed solution,” says Jon Von Tetzchner, CEO of Vivaldi (@jon@vivaldi.net). “It’s great that social networking companies are supporting the Fediverse, and Vivaldi is pleased to support the Social Web Foundation so that we can once again have a town square free of algorithms and corporate control.”
“We’re really excited about the launch of the Social Web Foundation,” says Bart Decrem, founder, The BLVD (sub.club, Mammoth) (@bart@moth.social). “This will help accelerate the growth of the Fediverse, which is so important for the future of the open web!”
“It’s time to bring back the open web we were promised, rather than the closed networks we got. We’re very excited to support the Social Web Foundation and collaborate on building a more transparent and constructive future for the internet,” says John O’Nolan, CEO of Ghost Foundation (@index@activitypub.ghost.org).
“As a long-time ActivityPub implementer, Write.as is thrilled to support the launch of the Social Web Foundation,” says Matt Baer, Founder and CEO (@matt@write.as). “With our shared mission of fostering a diverse and thriving social web, we look forward to collaborating with the Foundation, its partners, and community to realize the full potential of publishing on the Fediverse.”
Learn more
The Social Web Foundation can be found on the web at https://socialwebfoundation.org/ and on the social web at swf@socialwebfoundation.org. Email contact@socialwebfoundation.org.
Last Week in Fediverse – ep 88
A quieter news week: self-hosted 3d printing app Manyfold joins the fediverse, and write.as now offers paid subscriptions for fediverse accounts with sub.club.
The News
Manyfold is a self-hosted open source web app for organising and managing your collection of 3d files, in particular for 3d printing. With their latest update, Manyfold has now joined the fediverse by adding ActivityPub support. With the new integration, you can now follow a Manyfold creator from your fediverse account of choice, and get notified when the Manyfold account uploads a new 3d file. New Manyfold uploads appear as short posts with a link in the rest of the fediverse. To demonstrate, here is the Manyfold account from the creator Floppy as visible from Mastodon, and here is the profile on their Manyfold instance itself. The Manyfold server also has a button to follow the account on the fediverse.
Manyfold implementing ActivityPub support is an illustration of how ActivityPub can be viewed as a form of ‘Social RSS’: it allows you to follow any Actor for updates, and adds social features (sharing/liking to it).
Sub.club is a service that lets people create paid subscription feeds on the fediverse. The service recently launched with the ability to monetise Mastodon feeds, and has now expanded to also include long-form writing, by collaborating with write.as. Write.as is the flagship instance of fediverse blogging software WriteFreely. With this update, blogs on write.as can now set, on a per-blog basis, whether a blog is premium and where the cut-off is. People who follow the blog from a fediverse account will see an option to subscribe and view the full post; this post by the sub.club account shows what a premium blog looks like from various perspectives. Adding sub.club to a write.as blog is as simple as following this three-minute PeerTube video.
The Links
- How to buy shoes in the fediverse – Erin Kissane.
- “We can have a different web, if we want it” – Newsmast’s Michael Foster.
- Mastodon’s monthly engineering update, Trunks and Tidbits, is out for September 2024.
- Ambition, The Fediverse, and Technology Freedom – Soatok, who is working on implementing E2EE for ActivityPub.
- ForgeFed is continuing to work on adding Actor Programming.
- Positioning Micro.blog.
- How to join Mastodon – Stefan Bohacek.
- Mastodon has started selling a plushy.
- Beyond technical features: why we need to talk about the values of the Fediverse (part 2) – Elena Rossini.
- ‘I for one (cautiously) welcome the Social Web Foundation to the fediverses, but we really need to talk about the big elephant in the federated room’ – The Nexus of Privacy.
- The Challenge of ActivityPub Data Portability – bengo.
- Echo is a new iOS app for Lemmy.
- This week’s fediverse software updates.
- We Distribute Is On Temporary Hiatus.
That’s all for this week, thanks for reading!
https://fediversereport.com/last-week-in-fediverse-ep-88/
🥳 Manyfold v0.82.0 is out, with two BIG features! First up, we're joining the #Fediverse proper - you can follow public Manyfold creators on other ActivityPub platforms like Mastodon!
And secondly, Manyfold will now index PDF, TXT and video content as well as models and images!
🗞️ Full release notes: https://manyfold.app/news/2024/10/13/release-v0-82-0.html
❤️ Support us on OpenCollective: https://opencollective.com/manyfold
🏷️ #3DPrinting @3dprinting #SelfHosted
Manyfold - Open Collective
A self-hosted 3d model organisation tool for 3d printing enthusiasts
opencollective.com
The Social Web Foundation and the elephant in the federated room
And I don't mean Mastodon!
Jon (The Nexus Of Privacy)
Echo for #Lemmy is now available! Goodbye #Reddit, Hello @LemmyDev. #Fediverse #ActivityPub 👋
https://echo.rrainn.com/download/iphone
Echo for Lemmy
Echo for Lemmy is an iOS client for Lemmy, a community based link & text sharing decentralized social platform. - Connect with communities based on your interests. - Sort your feed by most active, trending posts, new posts, and many more.
App Store
In 2022, I wrote about my plan to build end-to-end encryption for the Fediverse. The goals were simple:
- Provide secure encryption of message content and media attachments between Fediverse users, as a new type of Direct Message which is encrypted between participants.
- Do not pretend to be a Signal competitor.
The primary concern at the time was “honest but curious” Fediverse instance admins who might snoop on another user’s private conversations.
After I was finally happy with the client-side secret key management piece, I moved on to figuring out how to exchange public keys. And that’s where things got complicated, and work stalled for two years.
Art: AJ
I wrote a series of blog posts on this complication, what I’m doing about it, and some other cool stuff in the draft specification.
- Towards Federated Key Transparency introduced the Public Key Directory project
- Federated Key Transparency Project Update talked about some of the trade-offs I made in this design
- Not supporting ECDSA at all, since FIPS 186-5 supports Ed25519
- Adding an account recovery feature, which power users can opt out of, that allows instance admins to help a user recover from losing all their keys
- Building a Key Transparency system that can tolerate GDPR Right To Be Forgotten takedown requests without invalidating history
- Introducing Alacrity to Federated Cryptography discussed how I plan to ensure that independent third-party clients stay up-to-date or lose the ability to decrypt messages
Recently, NIST published the new Federal Information Processing Standards documents for three post-quantum cryptography algorithms:
- FIPS-203 (ML-KEM, formerly known as CRYSTALS-Kyber),
- FIPS-204 (ML-DSA, formerly known as CRYSTALS-Dilithium)
- FIPS-205 (SLH-DSA, formerly known as SPHINCS+)
The race is now on to implement and begin migrating the Internet to use post-quantum KEMs. (Post-quantum signatures are less urgent.) If you’re curious why, this CloudFlare blog post explains the situation quite well.
Since I’m proposing a new protocol and implementation at the dawn of the era of post-quantum cryptography, I’ve decided to migrate the asymmetric primitives used in my proposals towards post-quantum algorithms where it makes sense to do so.
Art: AJ
The rest of this blog post is going to talk about technical specifics and the decisions I intend to make in both projects, as well as some other topics I’ve been thinking about related to this work.
Which Algorithms, Where?
I’ll discuss these choices in detail, but for the impatient:
- Public Key Directory
- Still just Ed25519 for now
- End-to-End Encryption
- KEMs: X-Wing (Hybrid X25519 and ML-KEM-768)
- Signatures: Still just Ed25519 for now
Virtually all other uses of cryptography are symmetric-key or keyless (i.e., hash functions), so this isn’t a significant change to the design I have in mind.
Post-Quantum Algorithm Selection Criteria
While I am personally skeptical that we will see a practical, cryptography-relevant quantum computer in the next 30 years, due to various engineering challenges and a glacial pace of progress in solving them, post-quantum cryptography is still a damn good idea even if a quantum computer never emerges.
Post-Quantum Cryptography comes in two flavors:
- Key Encapsulation Mechanisms (KEMs), which I wrote about previously.
- Digital Signature Algorithms (DSAs).
Originally, my proposals were going to use Elliptic Curve Diffie-Hellman (ECDH) in order to establish a symmetric key over an untrusted channel. Unfortunately, ECDH falls apart in the wake of a crypto-relevant quantum computer. ECDH is the component that will be replaced by post-quantum KEMs.
Additionally, my proposals make heavy use of Edwards Curve Digital Signatures (EdDSA) over the edwards25519 elliptic curve group (thus, Ed25519). This could be replaced with a post-quantum DSA (e.g., ML-DSA) and function just the same, albeit with bandwidth and/or performance trade-offs.
But isn’t post-quantum cryptography somewhat new?
Lattice-based cryptography has been around almost as long as elliptic curve cryptography. One of the first designs, NTRU, was developed in 1996.
Meanwhile, ECDSA was published in 1992 by Dr. Scott Vanstone (although it was not made a standard until 1999). Lattice cryptography is pretty well-understood by experts.
However, before the post-quantum cryptography project, there hasn’t been a lot of incentive for attackers to study lattices (unless they wanted to muck with homomorphic encryption).
So, naturally, there is some risk of a cryptanalysis renaissance after the first post-quantum cryptography algorithms are widely deployed to the Internet.
However, this risk is mostly a concern for KEMs, due to the output of a KEM being the key used to encrypt sensitive data. Thus, when selecting KEMs for post-quantum security, I will choose a Hybrid construction.
Hybrid what?
We’re not talking folfs, sonny!
Hybrid isn’t just a thing that furries do with their fursonas. It’s also a term that comes up a lot in cryptography.
Unfortunately, it comes up a little too much.
I made this dumb meme with imgflip
When I say we use Hybrid constructions, what I really mean is we use a post-quantum KEM and a classical KEM (such as HPKE‘s DHKEM), then combine them securely using a KDF.
Post-quantum KEMs
For the post-quantum KEM, we only really have one choice: ML-KEM. But this choice is actually three choices: ML-KEM-512, ML-KEM-768, or ML-KEM-1024.
The security margin on ML-KEM-512 is a little tight, so most cryptographers I’ve talked with recommend ML-KEM-768 instead.
Meanwhile, the NSA wants the US government to use ML-KEM-1024 for everything.
How will you hybridize your post-quantum KEM?
Originally, I was looking to use DHKEM with X25519, as part of the HPKE specification. After switching to post-quantum cryptography, I would need to combine it with ML-KEM-768 in such a way that the whole shebang is secure if either component is secure.
But then, why reinvent the wheel here? X-Wing already does that, and has some nice binding properties that a naive combination might not.
So let’s use X-Wing for our KEM.
Notably, OpenMLS is already doing this in their next release.
Art: CMYKat
Post-quantum signatures
So our KEM choice seems pretty straightforward. What about post-quantum signatures?
Do we even need post-quantum signatures?
Well, the situation here is not nearly as straightforward as KEMs.
For starters, NIST chose to standardize two post-quantum digital signature algorithms (with a third coming later this year). They are as follows:
- ML-DSA (formerly CRYSTALS-Dilithium), that comes in three flavors:
- ML-DSA-44
- ML-DSA-65
- ML-DSA-87
- SLH-DSA (formerly SPHINCS+), that comes in 24 flavors
- FN-DSA (formerly FALCON), that comes in two flavors but may be excruciating to implement in constant-time (this one isn’t standardized yet)
Since we’re working at the application layer, we’re less worried about a few kilobytes of bandwidth than the networking or X.509 folks are. Relatively speaking, we care about security first, performance second, and message size last.
After all, people ship Electron, React Native, and NextJS apps that load megabytes of JavaScript code to print, “hello world,” and no one bats an eye. A few kilobytes in this context is easily digestible for us.
(As I said, this isn’t true for all layers of the stack. WebPKI in particular feels a lot of pain with large public keys and/or signatures.)
Eliminating post-quantum signature candidates
Performance considerations would eliminate SLH-DSA, which is the most conservative choice. Even with the fastest parameter set (SLH-DSA-128f), this family of algorithms is about 550x slower than Ed25519. (If we prioritize bandwidth, it becomes 8000x slower.)
Adapted from Cloudflare’s blog post on post-quantum cryptography.
Between the other two, FN-DSA is a tempting option. Although it’s difficult to implement in constant-time, it offers smaller public key and signature sizes.
However, FN-DSA is not standardized yet, and it’s only known to be safe on specific hardware architectures. (It might be safe on others, but that’s not proven yet.)
In order to allow Fediverse users to be secure on a wider range of hardware, this uncertainty would limit our choice of post-quantum signature algorithms to some flavor of ML-DSA–whether stand-alone or in a hybrid construction.
Unlike KEMs, hybrid signature constructions may be problematic in subtle ways that I don’t want to deal with. So if we were to do anything, we would probably choose a pure post-quantum signature algorithm.
Against the Early Adoption of Post-Quantum Signatures
There isn’t an immediate benefit to adopting a post-quantum signature algorithm, as David Adrian explains.
The migration to post-quantum cryptography will be a long and difficult road, which is all the more reason to make sure we learn from past efforts, and take advantage of the fact the risk is not imminent. Specifically, we should avoid:
- Standardizing without real-world experimentation
- Standardizing solutions that match how things work currently, but have significant negative externalities (increased bandwidth usage and latency), instead of designing new things to mitigate the externalities
- Deploying algorithms pre-standardization in ways that can’t be easily rolled back
- Adding algorithms that are pre-standardization or have severe shortcomings to compliance frameworks
We are not in the middle of a post-quantum emergency, and nothing points to a surprise “Q-Day” within the next decade. We have time to do this right, and we have time for an iterative feedback loop between implementors, cryptographers, standards bodies, and policymakers.
The situation may change. It may become clear that quantum computers are coming in the next few years. If that happens, the risk calculus changes and we can try to shove post-quantum cryptography into our existing protocols as quickly as possible. Thankfully, that’s not where we are.
David Adrian, Lack of post-quantum security is not plaintext.
Furthermore, there isn’t currently any commitment from the Sigsum developers to adopt a post-quantum signature scheme in the immediate future. They hard-code Ed25519 for the current iteration of the specification.
The verdict on digital signature algorithms?
Given all of the above, I’m going to opt to simply not adopt post-quantum signatures until a later date.
Version 1 of our design will continue to use Ed25519 despite it not being secure after quantum computers emerge (“Q-Day”).
When the security industry begins to see warning signs of Q-Day being realistically within a decade, we will prioritize migrating to use post-quantum signature algorithms in a new version of our design.
Should something drastic happen that would force us to decide on a post-quantum algorithm today, we would choose ML-DSA-44. However, that’s unlikely for at least several years.
Remember, Store Now, Decrypt Later doesn’t really break signatures the way it would break public-key encryption.
Art: Harubaki
Miscellaneous Technical Matters
Okay, that’s enough about post-quantum for now. I worry that if I keep talking about key encapsulation, some of my regular readers will start a shitty garage band called My KEMical Romance before the end of the year.
Let’s talk about some other technical topics related to end-to-end encryption for the Fediverse!
Federated MLS
MLS was implicitly designed with the idea of having one central service for passing messages around. This makes sense if you’re building a product like Signal, WhatsApp, or Facebook Messenger.
It’s not so great for federated environments where your Delivery Service may be, in fact, more than one service (i.e., the Fediverse). An expired Internet Draft for Federated MLS talks about these challenges.
If we wanted to build atop MLS for group key agreement (like has been suggested before), we’d need to tackle this in a way that doesn’t cede control of MLS epochs to any server that gets compromised.
How to Make MLS Tolerate Federation
First, the Authentication Service component can be replaced by client-side protocols, where public keys are sourced from the Public Key Directory (PKD) services.
That is to say, from the PKD, you can fetch a valid list of Ed25519 public keys for each participant in the group.
When a group is created, the creator’s Ed25519 public key is known. For everyone they invite, the inviter’s software necessarily has to know the invitee’s Ed25519 public key in order to invite them.
In order for a group action to be performed, it must be signed by one of the public keys enrolled into the group list. Additionally, some actions may be limited by permissions attached at the time of the invite (or elevated by a more privileged user; which necessitates another group action).
By requiring a valid signature from an existing group member, we remove the capability of the Fediverse instance that’s hosting the discussion group to meddle with it in any way (unless, for some reason, the server is somehow also a participant that was invited).
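A client-side authorization check along these lines might look like the following Python sketch. The action types, role names, and the `demo_sign`/`demo_verify` HMAC stand-ins are all hypothetical; a real implementation would verify Ed25519 signatures against the PKD-sourced keys instead.

```python
import hashlib
import hmac
import json

def canonical_encode(action: dict) -> bytes:
    # Deterministic encoding, so every client signs/verifies identical bytes.
    return json.dumps(action, sort_keys=True, separators=(",", ":")).encode()

def authorize_group_action(action, signer_id, signature, group, verify):
    """Accept a group action only if it is signed by an enrolled member
    whose role (attached at invite time) permits that action type."""
    member = group["members"].get(signer_id)
    if member is None:
        return False  # signer is not enrolled in the group at all
    admin_only = {"remove_member", "delete_message", "promote_member"}
    if action["type"] in admin_only and member["role"] != "admin":
        return False  # insufficient permissions for this action
    return verify(signer_id, signature, canonical_encode(action))

# HMAC stand-in for Ed25519, purely to exercise the authorization logic:
KEYS = {"alice": b"alice-key", "bob": b"bob-key"}
def demo_sign(who, msg): return hmac.new(KEYS[who], msg, hashlib.sha256).digest()
def demo_verify(who, sig, msg): return hmac.compare_digest(sig, demo_sign(who, msg))

group = {"members": {"alice": {"role": "admin"}, "bob": {"role": "member"}}}
kick = {"type": "remove_member", "target": "troy"}
assert authorize_group_action(kick, "alice", demo_sign("alice", canonical_encode(kick)), group, demo_verify)
assert not authorize_group_action(kick, "bob", demo_sign("bob", canonical_encode(kick)), group, demo_verify)
```

Note that the hosting instance never appears in this check: unless the server was itself invited as a member, it has no key that can produce a valid group action.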
But therein lies the other change we need to make: In many cases, groups will span multiple Fediverse servers, so groups shouldn’t be dependent on a single instance.
Spreading The Load Across Instances
Put simply, we need a consensus algorithm to determine which instance hosts messages. We could look to Raft as a starting point, but whatever we land on should be fair, fault-tolerant, and deterministic to all participants who can agree on the same symmetric keying material at some point in time.
To that end, I propose using an additional HKDF output from the Group Key Agreement protocol to select a “leader” for all instances involved in the group, weighted by the number of participants on each instance.
Then, every N messages (where N >= 1), a new leader is elected by the same deterministic protocol. This will be performed entirely client-side, and clients will choose N. I will refer to this as a sub-epoch, since it doesn’t coincide with a new MLS epoch.
Since the agreed-upon group key always ratchets forward when a group action occurs (i.e., whenever there’s a new epoch), getting another KDF output to elect the next leader is straightforward.
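A deterministic, weighted election along these lines could be sketched as follows (Python; the function name and the modulo-reduction detail are my assumptions, not part of any spec):

```python
import hashlib

def elect_leader(kdf_output: bytes, instances: dict) -> str:
    """Deterministically pick a leader instance, weighted by participant count.

    Every client feeds the same KDF output and the same view of the group
    into this function, so all clients arrive at the same leader without
    any additional round trips.
    """
    total = sum(instances.values())
    # Interpret the KDF output as a big integer, reduce into [0, total).
    # (Modulo bias is negligible for a 256-bit input and small groups.)
    ticket = int.from_bytes(kdf_output, "big") % total
    for name in sorted(instances):  # sorted => identical order on every client
        ticket -= instances[name]
        if ticket < 0:
            return name
    raise AssertionError("unreachable: weights sum to total")

# Example: an epoch-specific KDF output electing among two instances,
# where a.example hosts 3 participants and b.example hosts 1.
leader = elect_leader(hashlib.sha256(b"epoch-42-demo").digest(),
                      {"a.example": 3, "b.example": 1})
```

Here `hashlib.sha256` merely stands in for the extra HKDF output from the group key agreement; the important properties are that the input is secret to outsiders, fresh per (sub-)epoch, and identical for all participants.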
This isn’t a fully fleshed out idea. Building consensus protocols that can handle real-world operational issues is heavily specialized work and there’s a high risk of falling to the illusion of safety until it’s too late. I will probably need help with this component.
That said, we aren’t building an anonymity network, so the cost of getting a detail wrong isn’t measurable in blood.
We aren’t really concerned with Sybil attacks. Winning the election just means you’re responsible for being a dumb pipe for ciphertext. Client software should trust the instance software as little as possible.
We also probably don’t need to worry about availability too much. Since we’re building atop ActivityPub, when a server goes down, the other instances can hold encrypted messages in the outbox for the host instance to pick up when it’s back online.
If that’s not satisfactory, we could also select both a primary and secondary leader for each epoch (and sub-epoch), to have built-in fail-over when more than one instance is involved in a group conversation.
If messages aren’t being delivered for an unacceptable period of time, client software can forcefully initiate a new leader election by expiring the current MLS epoch (i.e. by rotating their own public key and sending the relevant bundle to all other participants).
Art: Kyume
Those are just some thoughts. I plan to talk it over with people who have more expertise in the relevant systems.
And, as with the rest of this project, I will write a formal specification for this feature before I write a single line of production code.
Abuse Reporting
I could’ve sworn I talked about this already, but I can’t find it in any of my previous ramblings, so here’s as good a place as any.
The intent for end-to-end encryption is privacy, not secrecy.
What does this mean exactly? From the opening of Eric Hughes’ A Cypherpunk’s Manifesto:
Privacy is necessary for an open society in the electronic age. Privacy is not secrecy. A private matter is something one doesn’t want the whole world to know, but a secret matter is something one doesn’t want anybody to know.
Privacy is the power to selectively reveal oneself to the world.
Eric Hughes (with whitespace and emphasis added)
Unrelated: This is one reason why I use “secret key” when discussing asymmetric cryptography, rather than “private key”. It also lends towards sk and pk as abbreviations, whereas “private” and “public” both start with the letter P, which is annoying.
With this distinction in mind, abuse reporting is not inherently incompatible with end-to-end encryption or any other privacy technology.
In fact, it’s impossible to create useful social technology without the ability for people to mitigate abuse.
So, content warning: This is going to necessarily discuss some gross topics, albeit not in any significant detail. If you’d rather not read about them at all, feel free to skip this section.
Art: CMYKat
When thinking about the sorts of problems that call for an abuse reporting mechanism, you really need to consider the most extreme cases, such as someone joining group chats to spam unsuspecting users with unsolicited child sexual abuse material (CSAM), flashing imagery designed to trigger seizures, or graphic depictions of violence.
That’s gross and unfortunate, but the reality of the Internet.
However, end-to-end encryption also needs to prioritize privacy over appeasing lazy cops who would rather everyone’s devices include a mandatory little cop that watches all your conversations and snitches on you if you do anything that might be illegal, or against the interest of your government and/or corporate masters. You know the type of cop. They find privacy and encryption to be rather inconvenient. After all, why bother doing their jobs (i.e., actual detective work) when you can just criminalize end-to-end encryption and use dragnet surveillance instead?
Whatever we do, we will need to strike a balance between protecting users’ privacy–without any backdoors or privileged access for lazy cops–and community safety.
Thus, the following mechanisms must be in place:
- Groups must have the concept of an “admin” role, who can delete messages on behalf of all users and remove users from the group. (Signal currently doesn’t have this.)
- Users must be able to delete messages on their own device and block users that send abusive content. (The Fediverse already has this sort of mechanism, so we don’t need to be inventive here.)
- Users should have the ability to report individual messages to the instance moderators.
I’m going to focus on item 3, because that’s where the technically and legally thorny issues arise.
Keep in mind, this is just a core-dump of thoughts about this topic, and I’m not committing to anything right now.
Technical Issues With Abuse Reporting
First, the end-to-end encryption must be immune to Invisible Salamanders attacks. If it’s not, go back to the drawing board.
Every instance will need to have a moderator account, who can receive abuse reports from users. This can be a shared account for moderators or a list of moderators maintained by the server.
When an abuse report is sent to the moderation team, what needs to happen is that the encryption keys for those specific messages are re-wrapped and sent to the moderators.
So long as you’re using a forward-secure ratcheting protocol, this doesn’t imply access to the encryption keys for other messages, so the information disclosed is limited to the messages that a participant in the group consents to disclosing. This preserves privacy for the rest of the group chat.
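To see why a forward-secure ratchet limits the disclosure, here is a toy symmetric hash ratchet in Python. The `b"msg"`/`b"chain"` domain prefixes are my illustrative choices; real protocols like Signal or MLS derive these keys with HKDF, but the one-way property is the same.

```python
import hashlib

def ratchet(chain_key: bytes) -> tuple:
    """One step of a symmetric hash ratchet.

    Returns (message_key, next_chain_key). Because the hash is one-way,
    handing a moderator the message_key for step N reveals nothing about
    the chain key or the message keys of any other step.
    """
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain_key = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain_key

# Derive keys for three messages; suppose only the second is reported.
ck = hashlib.sha256(b"initial group secret (demo only)").digest()
keys = []
for _ in range(3):
    mk, ck = ratchet(ck)
    keys.append(mk)

reported = keys[1]  # this key gets re-wrapped to the moderators
# keys[0] and keys[2] are not derivable from `reported` alone.
```

This is exactly the property that makes per-message disclosure consent meaningful: the reporter hands over specific message keys, not a master secret.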
When receiving a message, moderators should not only be able to see the reported message’s contents (in the order that they were sent), but also how many messages were omitted in the transcript, to prevent a type of attack I colloquially refer to as “trolling through omission”. This old meme illustrates the concept nicely:
Trolling through omission.
And this all seems pretty straightforward, right? Let users protect themselves and report abuse in such a way that doesn’t invalidate the privacy of unrelated messages or give unfettered access to the group chats. “Did Captain Obvious write this section?”
But things aren’t so clean when you consider the legal ramifications.
Potential Legal Issues With Abuse Reporting
Suppose Alice, Bob, and Troy start an encrypted group conversation. Alice is the group admin and can delete messages or boot people from the chat.
One day, Troy decides to send illegal imagery (e.g., CSAM) to the group chat.
Bob, disgusted, immediately reports it to his instance moderator (Dave) as well as Troy’s instance moderator (Evelyn). Alice then deletes the messages for her and Bob and kicks Troy from the chat.
Here’s where the legal questions come in.
If Dave and Evelyn are able to confirm that Troy did send CSAM to Alice and Bob, did Bob’s act of reporting the material to them count as an act of distribution (i.e., to Dave and/or Evelyn, who would not be able to decrypt the media otherwise)?
If they aren’t able to confirm the reports, does Alice’s erasure count as destruction of evidence (i.e., because they cannot be forwarded to law enforcement)?
Are Bob and Alice legally culpable for possession? What about Dave and Evelyn, whose servers are hosting the (albeit encrypted) material?
It’s not abundantly clear how the law will intersect with technology here, nor what specific technical mechanisms would need to be in place to protect Alice, Bob, Dave, and Evelyn from a particularly malicious user like Troy.
Obviously, I am not a lawyer. I have an understanding with my lawyer friends that I will not try to interpret law or write my own contracts if they don’t roll their own crypto.
That said, I do have some vague ideas for mitigating the risk.
Ideas For Risk Mitigation
To contend with this issue, one thing we could do is separate the abuse reporting feature from the “fetch and decrypt the attached media” feature, so that while instance moderators will be capable of fetching the reported abuse material, it doesn’t happen automatically.
When the “reason” attached to an abuse report signals CSAM in any capacity, the client software used by moderators could also wholesale block the download of said media.
Whether that would be sufficient to mitigate the legal matters raised previously, I can’t say.
And there’s still a lot of other legal uncertainty to figure out here.
- Do instance moderators actually have a duty to forward CSAM reports to law enforcement?
- If so, how should abuse report forwarding be implemented?
- How do we train law enforcement personnel to receive and investigate these reports WITHOUT frivolously arresting the wrong people or seizing innocent Fediverse servers?
- How do we ensure instance admins are broadly trained to handle this?
- How do we deal with international law?
- How do we prevent scope creep?
- While there is public interest in minimizing the spread of CSAM, which is basically legally radioactive, I’m not interested in ever building a “snitch on women seeking reproductive health care in a state where abortion is illegal” capability.
- Does Section 230 matter for any of these questions?
We may not know the answers to these questions until the courts make specific decisions that establish relevant case law, or our governments pass legislation that clarifies everyone’s rights and responsibilities for such cases.
Until then, the best answer may simply be to do nothing.
That is to say, let admins delete messages for the whole group, let users delete messages they don’t want on their own hardware, and let admins receive abuse reports from their users… but don’t do anything further.
Okay, we should definitely require an explicit separate action to download and decrypt the media attached to a reported message, rather than have it be automatic, but that’s it.
What’s Next?
For the immediate future, I plan on continuing to develop the Federated Public Key Directory component until I’m happy with its design. Then, I will begin developing the reference implementations for both client and server software.
Once that’s in a good state, I will move onto finishing the E2EE specification. Then, I will begin building the client software and relevant server patches for Mastodon, and spinning up a testing instance for folks to play with.
Timeline-wise, I would expect most of this to happen in 2025.
I wish I could promise something sooner, but I’m not fond of moving fast and breaking things, and I do have a full time job unrelated to this project.
Hopefully, by the next time I pen an update for this project, we’ll be closer to launching. (And maybe I’ll have answers to some of the legal concerns surrounding abuse reporting, if we’re lucky.)
https://soatok.blog/2024/09/13/e2ee-for-the-fediverse-update-were-going-post-quantum/
#E2EE #endToEndEncryption #fediverse #FIPS #Mastodon #postQuantumCryptography
Update (2024-06-06): There is an update on this project.

As Twitter’s new management continues to nosedive the platform directly into the ground, many people are migrating to what seem like drop-in alternatives; i.e. Cohost and Mastodon. Some are even considering new platforms that none of us have heard of before (one is called “Hive”).
Needless to say, these are somewhat chaotic times.
One topic that has come up several times in the past few days, to the astonishment of many new Mastodon users, is that Direct Messages between users aren’t end-to-end encrypted.
And while that fact makes Mastodon DMs no less safe than Twitter DMs have been this whole time, there is clearly a lot of value and demand in deploying end-to-end encryption for ActivityPub (the protocol that Mastodon and other Fediverse software uses to communicate).
However, given that Melon Husk apparently wants to hurriedly ship end-to-end encryption (E2EE) in Twitter, in some vain attempt to compete with Signal, I took it upon myself to kickstart the E2EE effort for the Fediverse.
https://twitter.com/elonmusk/status/1519469891455234048
So I’d like to share my thoughts about E2EE, how to design such a system from the ground up, and why the direction Twitter is heading looks to be security theater rather than serious cryptographic engineering.
If you’re not interested in those things, but are interested in what I’m proposing for the Fediverse, head on over to the GitHub repository hosting my work-in-progress proposal draft as I continue to develop it.
How to Quickly Build E2EE
If one were feeling particularly cavalier about their E2EE design, they could just generate public keys client-side, dump them through a server they control to pass between users, and have the clients encrypt everything client-side. Over and done. Check that box.

Every public key would be ephemeral and implicitly trusted, and the threat model would mostly be, “I don’t want to deal with law enforcement data requests.”
Hell, I’ve previously written an incremental blog post to teach developers about E2EE that begins with this sort of design. Encrypt first, ratchet second, manage trust relationships on public keys last.
If you’re catering to a slightly tech-savvy audience, you might throw in SHA256(pk1 + pk2) -> hex2dec() and call it a fingerprint / safety number / “conversation key” and not think further about this problem.
Look, technical users can verify out-of-band that they’re not being machine-in-the-middle attacked by our service.

An absolute fool who thinks most people will ever do this
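To make the pitfall concrete, here is the naive fingerprint in Python, along with the ordering problem it introduces and a sorted, length-prefixed variant that avoids it (the function names are mine, for illustration):

```python
import hashlib

def naive_fingerprint(pk1: bytes, pk2: bytes) -> str:
    # The scheme described above: hash the concatenated keys, then
    # render a short "hex2dec" safety number for humans to compare.
    digest = hashlib.sha256(pk1 + pk2).hexdigest()
    return str(int(digest[:12], 16))

alice_pk = b"A" * 32
bob_pk = b"B" * 32

# Pitfall: both sides must agree on key ordering, or they compute
# different "safety numbers" for the same conversation.
assert naive_fingerprint(alice_pk, bob_pk) != naive_fingerprint(bob_pk, alice_pk)

def better_fingerprint(*pks: bytes) -> str:
    # Sorting fixes the ordering problem; length prefixes fix the
    # concatenation ambiguity (pk1 + pk2 can be re-split many ways).
    h = hashlib.sha256()
    for pk in sorted(pks):
        h.update(len(pk).to_bytes(2, "big") + pk)
    return h.hexdigest()

assert better_fingerprint(alice_pk, bob_pk) == better_fingerprint(bob_pk, alice_pk)
```

Even the "better" version only helps if people actually compare the values out-of-band, which, as noted above, most people never will.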
From what I’ve gathered, this appears to be the direction that Twitter is going.

https://twitter.com/wongmjane/status/1592831263182028800
Now, if you’re building E2EE into a small hobby app that you developed for fun (say: a World of Warcraft addon for erotic roleplay chat), this is probably good enough.
If you’re building a private messaging feature that is intended to “superset Signal” for hundreds of millions of people, this is woefully inadequate.
https://twitter.com/elonmusk/status/1590426255018848256
Art: LvJ
If this is, indeed, the direction Musk is pushing what’s left of Twitter’s engineering staff, here is a brief list of problems with what they’re doing.
- Twitter Web. How do you access your E2EE DMs after opening Twitter in your web browser on a desktop computer?
- If you can, how do you know twitter.com isn’t including malicious JavaScript to snarf up your secret keys on behalf of law enforcement or a nation state with a poor human rights record?
- If you can, how are secret keys managed across devices?
- If you use a password to derive a secret key, how do you prevent weak, guessable, or reused passwords from weakening the security of the users’ keys?
- If you cannot, how do users decide which is their primary device? What if that device gets lost, stolen, or damaged?
- Authenticity. How do you reason about the person you’re talking with?
- Forward Secrecy. If your secret key is compromised today, can you recover from this situation? How will your conversation participants reason about your new Conversation Key?
- Multi-Party E2EE. If a user wants to have a three-way E2EE DM with the other members of their long-distance polycule, does Twitter enable that?
- How are media files encrypted in a group setting? If you fuck this up, you end up like Threema.
- Is your group key agreement protocol vulnerable to insider attacks?
- Cryptography Implementations.
- What does the KEM look like? If you’re using ECC, which curve? Is a common library being used in all devices?
- How are you deriving keys? Are you just using the result of an elliptic curve (scalar x point) multiplication directly without hashing first?
- Independent Third-Party Review.
- Who is reviewing your protocol designs?
- Who is reviewing your cryptographic primitives?
- Who is reviewing the code that interacts with E2EE?
- Is there even a penetration test before the feature launches?
As more details about Twitter’s approach to E2EE DMs come out, I’m sure the above list will be expanded with even more questions and concerns.
My hunch is that they’ll reuse liblithium (which uses Curve25519 and Gimli) for Twitter DMs, since the only expert I’m aware of in Musk’s employ is the engineer that developed that library for Tesla Motors. Whether they’ll port it to JavaScript or just compile to WebAssembly is hard to say.
How To Safely Build E2EE
You first need to decompose the E2EE problem into five separate but interconnected problems.
- Client-Side Secret Key Management.
- Multi-device support
- Protect the secret key from being pilfered (i.e. by in-browser JavaScript delivered from the server)
- Public Key Infrastructure and Trust Models.
- TOFU (the SSH model)
- X.509 Certificate Authorities
- Certificate/Key/etc. Transparency
- SigStore
- PGP’s Web Of Trust
- Key Agreement.
- While this is important for 1:1 conversations, it gets combinatorially complex when you start supporting group conversations.
- On-the-Wire Encryption.
- Direct Messages
- Media Attachments
- Abuse-resistance (i.e. message franking for abuse reporting)
- The Construction of the Previous Four.
- The vulnerability of most cryptographic protocols exists in the joinery between the pieces, not the pieces themselves. For example, Matrix.
This might not be obvious to someone who isn’t a cryptography engineer, but each of those five problems is still really hard.
To wit: The latest IETF RFC draft for Message Layer Security, which tackles the Key Agreement problem above, clocks in at 137 pages.
Additionally, the order I specified these problems matters; it represents my opinion of which problem is relatively harder than the others.
When Twitter’s CISO, Lea Kissner, resigned, they lost a cryptography expert who was keenly aware of the relative difficulty of the first problem.
https://twitter.com/LeaKissner/status/1592937764684980224
You may also notice the order largely mirrors my previous guide on the subject, in reverse. This is because, when teaching a subject, you start with the simplest and most familiar component. When you’re solving problems, you generally want the opposite: Solve the hardest problems first, then work towards the easier ones.
This is precisely what I’m doing with my E2EE proposal for the Fediverse.
The Journey of a Thousand Miles Begins With A First Step
Before you write any code, you need specifications.

Before you write any specifications, you need a threat model.
Before you write any threat models, you need both a clear mental model of the system you’re working with and how the pieces interact, and a list of security goals you want to achieve.
Less obviously, you need a specific list of non-goals for your design: Properties that you will not prioritize. A lot of security engineering involves trade-offs. For example: elliptic curve choice for digital signatures is largely a trade-off between speed, theoretical security, and real-world implementation security.
If you do not clearly specify your non-goals, they still exist implicitly. However, you may find yourself contradicting them as you change your mind over the course of development.
Being wishy-washy about your security goals is a good way to compromise the security of your overall design.
In my Mastodon E2EE proposal document, I have a section called Design Tenets, which states the priorities used to make trade-off decisions. I chose Usability as the highest priority, because of AviD’s Rule of Usability.
Security at the expense of usability comes at the expense of security.Avi Douglen, Security StackExchange
Underneath Tenets, I wrote Anti-Tenets. These are things I explicitly and emphatically do not want to prioritize. Interoperability with any incumbent designs (OpenPGP, Matrix, etc.) is the most important anti-tenet when it comes to making decisions. If our end-state happens to interop with someone else’s design, cool. I’m not striving for it though!

Finally, this section concludes with a more formal list of Security Goals for the whole project.
Art: LvJ
Every component (from the above list of five) in my design will have an additional dedicated Security Goals section and Threat Model. For example: Client-Side Secret Key Management.
You will then need to tackle each component independently. The threat model for secret-key management is probably the trickiest. The actual encryption of plaintext messages and media attachments is comparatively simple.
Finally, once all of the pieces are laid out, you have the monumental (dare I say, mammoth) task of stitching them together into a coherent, meaningful design.
If you did your job well at the outset, and correctly understand the architecture of the distributed system you’re working with, this will mostly be straightforward.
Making Progress
At every step of the way, you do need to stop and ask yourself, “If I was an absolute chaos gremlin, how could I fuck with this piece of my design?” The more pieces your design has, the longer the list of ways to attack it will grow.

It’s also helpful to occasionally consider formal methods and security proofs. This can have surprising implications for how you use some algorithms.
You should also be familiar enough with the cryptographic primitives you’re working with before you begin such a journey; because even once you’ve solved the key management story (problems 1, 2 and 3 from the above list of 5), cryptographic expertise is still necessary.
- If you’re feeding data into a hash function, you should also be thinking about domain separation. More information.
- If you’re feeding data into a MAC or signature algorithm, you should also be thinking about canonicalization attacks. More information.
- If you’re encrypting data, you should be thinking about multi-key attacks and confused deputy attacks. Also, the cryptographic doom principle if you’re not using IND-CCA3 algorithms.
- At a higher-level, you should proactively defend against algorithm confusion attacks.
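A minimal Python sketch of the first two items–domain separation plus an unambiguous encoding–might look like this (the `tagged_hash` name and tag format are illustrative assumptions, not any particular standard):

```python
import hashlib

def tagged_hash(domain: bytes, *fields: bytes) -> bytes:
    """Hash with a domain tag and a length-prefixed field encoding.

    The domain tag keeps a digest computed for one purpose from being
    replayed in another; the length prefixes prevent canonicalization
    attacks, where different field splits concatenate to the same bytes.
    """
    h = hashlib.sha256()
    h.update(len(domain).to_bytes(2, "big") + domain)
    for field in fields:
        h.update(len(field).to_bytes(4, "big") + field)
    return h.digest()

# Without length prefixes, ("user", "name") and ("us", "ername") would
# hash the same concatenated bytes; with them, they cannot collide.
assert tagged_hash(b"demo/v1", b"user", b"name") != tagged_hash(b"demo/v1", b"us", b"ername")
# The same fields under a different domain tag yield a different digest.
assert tagged_hash(b"demo/v1", b"x") != tagged_hash(b"demo/v2", b"x")
```

The same length-prefixing discipline applies to MAC and signature inputs, which is what defeats the canonicalization attacks mentioned above.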
How Do You Measure Success?
It’s tempting to call the project “done” once you’ve completed your specifications and built a prototype, and maybe even published a formal proof of your design, but you should first collect data on every important metric:
- How easy is it to use your solution?
- How hard is it to misuse your solution?
- How easy is it to attack your solution? Which attackers have the highest advantage?
- How stable is your solution?
- How performant is your solution? Are the slow pieces the deliberate result of a trade-off? How do you know the balance was struck correctly?
Where We Stand Today
I’ve only begun writing my proposal, and I don’t expect it to be truly ready for cryptographers or security experts to review until early 2023.

However, my clearly specified tenets and anti-tenets were already useful in discussing my proposal on the Fediverse.
@soatok @fasterthanlime Should probably embed the algo used for encryption in the data used for storing the encrypted blob, to support multiples and future changes.

@fabienpenso@hachyderm.io proposes in-band protocol negotiation instead of versioned protocols
The main things I wanted to share today are:
- The direction Twitter appears to be heading with their E2EE work, and why I think it’s a flawed approach
- Designing E2EE requires a great deal of time, care, and expertise; getting to market quicker at the expense of a clear and careful design is almost never the right call
Mastodon? ActivityPub? Fediverse? OMGWTFBBQ!
In case anyone is confused about Mastodon vs ActivityPub vs Fediverse lingo:

The end goal of my proposal is that I want to be able to send DMs to queer furries that use Mastodon such that only my recipient can read them.
Achieving this end goal almost exclusively requires building for ActivityPub broadly, not Mastodon specifically.
However, I only want to be responsible for delivering this design into the software I use, not for every single possible platform that uses ActivityPub, nor all the programming languages they’re written in.
I am going to be aggressive about preventing scope creep, since I’m doing all this work for free. (I do have a Ko-Fi, but I won’t link to it from here. Send your donations to the people managing the Mastodon instance that hosts your account instead.)
My hope is that the design documents and technical specifications become clear enough that anyone can securely implement end-to-end encryption for the Fediverse–even if special attention needs to be given to the language-specific cryptographic libraries that you end up using.
Art: LvJ
Why Should We Trust You to Design E2EE?
This sort of question comes up inevitably, so I’d like to tackle it preemptively.

My answer to every question that begins with, “Why should I trust you” is the same: You shouldn’t.
There are certainly cryptography and cybersecurity experts that you will trust more than me. Ask them for their expert opinions of what I’m designing instead of blanketly trusting someone you don’t know.
I’m not interested in revealing my legal name, or my background with cryptography and computer security. Credentials shouldn’t matter here.
If my design is good, you should be able to trust it because it’s good, not because of who wrote it.
If my design is bad, then you should trust whoever proposes a better design instead. Part of why I’m developing it in the open is so that it may be forked by smarter engineers.
Knowing who I am, or what I’ve worked on before, shouldn’t enter your trust calculus at all. I’m a gay furry that works in the technology industry and this is what I’m proposing. Take it or leave it.
Why Not Simply Rubber-Stamp Matrix Instead?
(This section was added on 2022-11-29.)

There’s a temptation, most often found in the sort of person that comments on the /r/privacy subreddit, to ask why even do all of this work in the first place when Matrix already exists.
The answer is simple: I do not trust Megolm, the protocol designed for Matrix.
Megolm has benefited from amateur review for four years. Non-cryptographers will confuse this observation with the proposition that Matrix has benefited from peer review for four years. Those are two different propositions.
In fact, the first time someone with cryptography expertise bothered to look at Matrix for more than a glance, they found critical vulnerabilities in its design. These are the kinds of vulnerabilities that are not easily mitigated, and should be kept in mind when designing a new protocol.
You don’t have to take my word for it. Listen to the Security, Cryptography, Whatever podcast episode if you want cryptographic security experts’ takes on Matrix and these attacks.
From one of the authors of the attack paper:
So they kind of, after we disclosed to them, they shared with us their timeline. It’s not fixed yet. It’s a, it’s a bigger change because they need to change the protocol. But they always said like, Okay, fair enough, they’re gonna change it. And they also kind of announced a few days after kind of the public disclosure based on the public reaction that they should prioritize fixing that. So it seems kind of in the near future, I don’t have the timeline in front of me right now. They’re going to fix that in the sense of like the— because there’s, notions of admins and so on. So like, um, so authenticating such group membership requests is not something that is kind of completely outside of, kind of like the spec. They just kind of need to implement the appropriate authentication and cryptography.

Martin Albrecht, SCW podcast
From one of the podcast hosts:

I guess we can at the very least tell anyone who’s going forward going to try that, that like, yes indeed. You should have formal models and you should have proofs. And so there’s this, one of the reactions to kind of the kind of attacks that we presented and also to prior previous work where we kind of like broken some cryptographic protocols is then to say like, “Well crypto’s hard”, and “don’t roll your own crypto.” But in a way the thing is like, you know, we need some people to roll their own crypto because that’s how we have crypto. Someone needs to roll it. But we have developed techniques, we have developed formalisms, we have developed methods for making sure it doesn’t have to be hard, it’s not, it’s not a dark art kind of that only kind of a few, a select few can master, but it’s, you know, it’s a science and you can learn it. So, but you need to then indeed employ a cryptographer in kind of like forming, modeling your protocol and whenever you make changes, then, you know, they need to look over this and say like, Yes, my proof still goes through. Um, so like that is how you do this. And then, then true engineering is still hard and it will remain hard and you know, any science is hard, but then at least you have some confidence in what you’re doing. You might still then kind of on the space and say like, you know, the attack surface is too large and I’m not gonna to have an encrypted backup. Right. That’s then the problem of a different hard science, social science. Right. But then just use the techniques that we have, the methods that we have to establish what we need.

Thomas Ptacek, SCW podcast
It’s tempting to listen to these experts and say, “OK, you should use libsignal instead.”

But libsignal isn’t designed for federation and didn’t prioritize group messaging. The UX for Signal is like an IM application between two parties. It’s a replacement for SMS.
It’s tempting to say, “Okay, but you should use MLS then; never roll your own,” but MLS doesn’t answer the group membership issue that plagued Matrix. It punts on these implementation details.
Even if I use an incumbent protocol that privacy nerds think is good, I’ll still have to stitch it together in a novel manner. There is no getting around this.
Maybe wait until I’ve finished writing the specifications for my proposal before telling me I shouldn’t propose anything.
Credit for art used in header: LvJ, Harubaki
https://soatok.blog/2022/11/22/towards-end-to-end-encryption-for-direct-messages-in-the-fediverse/
There are two mental models for designing a cryptosystem that offers end-to-end encryption to all of its users.
The first is the Signal model.
Predicated on Moxie’s notion that the ecosystem is moving, Signal (and similar apps) maintain some modicum of centralized control over the infrastructure and deployment of their app. While there are obvious downsides to this approach, it allows them to quickly roll out ecosystem-wide changes to their encryption protocols without having to deal with third-party clients falling behind.
The other is the federated model, which is embraced by Matrix, XMPP with OMEMO, and other encrypted chat apps and protocols.
This model can be attractive to a lot of people whose primary concern is data sovereignty rather than cryptographic protections. (Most security experts care about both aspects, but we differ in how they rank the two priorities relative to each other.)
As I examined in my criticisms of Matrix and XMPP+OMEMO, they kind of prove Moxie’s point about the ecosystem:
- Two years after the Matrix team deprecated their C implementation of Olm in favor of a Rust library, virtually all of the clients that actually switched (as of the time of my blog post disclosing vulnerabilities in their C library) were either Element, or forks of Element. The rest were still wrapping libolm.
- Most OMEMO libraries are still stuck on version 0.3.0 of the specification, and cannot communicate with XMPP+OMEMO implementations that are on newer versions of the specification.
And this is personally a vexing observation, for two reasons:
- I don’t like that Moxie’s opinion is evidently more correct when you look at the consequences of each model.
- I’m planning to develop end-to-end encryption for direct messages on the Fediverse, and don’t want to repeat the mistakes of Matrix and OMEMO.
(Aside from them mistakenly claiming to be Signal competitors, which I am not doing with my E2EE proposal or any implementations thereof.)
Fortunately, I have a solution to both annoyances that I intend to implement in my end-to-end encryption proposal.
Thus, I’d like to introduce Cryptographic Alacrity to the discussion.
Note: The term “crypto agility” was already coined by people who never learned from the alg=none vulnerability of JSON Web Tokens and think it’s A-okay to negotiate cryptographic primitives at run-time based on attacker-controllable inputs.

Because they got their foolish stink all over that term, I discarded it in favor of coining a new one. I apologize for the marginal increase in cognitive load this decision may cause in the future.
Cryptographic Alacrity
For readers who aren’t already familiar with the word “alacrity” from playing Dungeons & Dragons once upon a time, the Merriam-Webster dictionary defines Alacrity as:
promptness in response : cheerful readiness
When I describe a cryptography protocol as having “cryptographic alacrity”, I mean there is a built-in mechanism to enforce protocol upgrades in a timely manner, and stragglers (read: non-compliant implementations) will lose the ability to communicate with up-to-date software.
Alacrity must be incorporated into a protocol at its design phase, specified clearly, and then enforced by the community through its protocol library implementations.
The primary difference between Alacrity and Agility is that Alacrity is implemented through protocol versions and a cryptographic mechanism for enforcing implementation freshness across the ecosystem, whereas Agility is about being able to hot-swap cryptographic primitives in response to novel cryptanalysis.
This probably still sounds a bit abstract to some of you.
To best explain what I mean, let’s look at a concrete example. Namely, how I plan on introducing Alacrity to my Fediverse E2EE design, and then enforcing it henceforth.
Alacrity in E2EE for the Fediverse
One level higher in the protocol than bulk message and/or media attachment encryption will be a Key Derivation Function. (Probably HKDF, probably as part of a Double Ratchet protocol or similar. I haven’t specified that layer just yet.)
Each invocation of HKDF will have a hard-coded 256-bit salt particular to the protocol version that is currently being used.
(What most people would think to pass as the salt in HKDF will instead be appended to the info parameter.)
The protocol version will additionally be used in a lot of other places (i.e., domain separation constants), but those are going to be predictable string values.
The salt will not be predictable until the new version is specified. I will likely tie it to the SHA256 hash of a Merkle root of a Federated Public Key Directory instance and the nickname for each protocol version.
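A sketch of how this versioned-salt KDF could look, assuming HKDF-SHA256 built from the standard library (the salts, directory root, and function names below are placeholders I made up, not the actual protocol constants):

```python
import hashlib
import hmac

# Placeholder per-version salts; the real design would derive these from
# the SHA256 hash of a Merkle root of a Federated Public Key Directory
# plus the version nickname (assumption based on the prose above).
PROTOCOL_SALTS = {
    "v1": hashlib.sha256(b"example-merkle-root" + b"v1").digest(),
    "v2": hashlib.sha256(b"example-merkle-root" + b"v2").digest(),
}

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    # HKDF-Extract: PRK = HMAC(salt, input keying material)
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF-Expand: iterate T(i) = HMAC(PRK, T(i-1) || info || i)
    okm, block = b"", b""
    for counter in range(1, (length + 31) // 32 + 1):
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
    return okm[:length]

def derive_message_key(version: str, ikm: bytes, caller_salt: bytes) -> bytes:
    # The hard-coded protocol-version salt fills HKDF's salt slot; the
    # value a caller would normally pass as a salt is appended to info.
    prk = hkdf_extract(PROTOCOL_SALTS[version], ikm)
    return hkdf_expand(prk, b"message-key" + caller_salt, 32)
```

The consequence is the point: an implementation stuck on an old version cannot derive the same keys as one speaking a newer version, even with identical inputs.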
Each library will have a small window (probably no more than 3 versions at any time) of acceptable protocol versions.
A new version will be specified, with a brand new KDF salt, every time we need to improve the protocol to address a security risk. Additionally, we will upgrade the protocol version at least once a year, even if no security risks have been found in the latest version of the protocol.
If your favorite client depends on a 4 year old version of the E2EE protocol library, you won’t be able to silently downgrade security for all of your conversation participants. Instead, you will be prevented from talking to most users, due to incompatible cryptography.
Version Deprecation Schedule
Let’s pretend, for the sake of argument, that we launch the first protocol version on January 1, 2025. And that’s when the first clients start to be built atop the libraries that speak the protocols.
Assuming no emergencies occur, after 9 months (i.e., by October 1, 2025), version 2 of the protocol will be specified. Libraries will be updated to support reading (but not sending) messages encrypted with protocol v2.
Then, on January 1, 2026 at midnight UTC–or a UNIX timestamp very close to this, at least–clients will start speaking protocol v2. Other clients can continue to read v1, but they should write v2.
This will occur every year on the same cadence, but with a twist: After clients are permitted to start writing v3, support for reading v1 MUST be removed from the codebase.
This mechanism will hold true even if the protocols are largely the same, aside from tweaked constants.
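The schedule above could be mechanized roughly as follows (the dates are the illustrative ones from this example, not a committed timeline):

```python
from datetime import datetime, timezone

# Illustrative activation dates from the example above (not a real schedule).
ACTIVATION = [
    ("v1", datetime(2025, 1, 1, tzinfo=timezone.utc)),
    ("v2", datetime(2026, 1, 1, tzinfo=timezone.utc)),
    ("v3", datetime(2027, 1, 1, tzinfo=timezone.utc)),
]

def allowed_versions(now: datetime) -> tuple[set, set]:
    """Return (readable, writable) protocol versions at a given time.

    Clients write only the newest active version, and read at most the
    newest two: once v3 activates, v1 support must be removed.
    """
    active = [v for v, start in ACTIVATION if start <= now]
    if not active:
        return set(), set()
    return set(active[-2:]), {active[-1]}
```

Note the sliding window: the readable set never grows beyond the newest two versions, which is what forces stragglers off the oldest one.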
What does Alacrity give us?
Alacrity allows third-party open source developers the capability of writing their own software (both clients and servers) without a significant risk of the Matrix and OMEMO problem (i.e., stale software being used years after it should have been deprecated).
By offering a sliding window of acceptable versions and scheduling planned version bumps to be about a year apart, we can minimize the risk of clock skew introducing outages.
Additionally, it provides third-party developers ample opportunity to keep their client software conformant to the specification.
It doesn’t completely eliminate the possibility of stale versions being used in silos, especially if some developers choose malice. However, anyone who deviates from the herd to form their own cadre of legacy protocol users has deliberately or negligently accepted the compatibility risks.
Can you staple Alacrity onto other end-to-end encryption projects?
Not easily, no.
This is the sort of mechanism that needs to be baked in from day one, and everyone needs to be onboard at the project’s inception.
Retroactively trying to make Matrix, XMPP, OpenPGP, etc. have Cryptographic Alacrity after the horses left the barn is an exercise in futility.
I would like your help introducing Alacrity into my pet project.
I’d love to help, but I’m already busy enough with work and my own projects.
If you’re designing a product that you intend to sell to the public, talk to a cryptography consulting firm. I can point you to several reputable ones, and most firms in this space are pretty good.
If you’re designing something for the open source community, and don’t have the budget to hire professionals, I’ll respond to such inquiries when my time, energy, and emotional bandwidth is available to do so. No promises on a timeline, of course.
How do you force old versions to get dropped?
You don’t.
The mechanism I mostly care about is forcing new versions to get adopted.
Dropping support for older versions is difficult to mechanize. Being actively involved in the community to encourage implementations to do this (if for no other reason than to reduce risk by deleting dead code) is sufficient.
I am choosing to not make perfect the enemy of good with this proposal.
This isn’t a new idea.
No, it isn’t a new idea. The privacy-focused cryptocurrency Zcash has a similar mechanism built into their network upgrades.
It’s wildly successful when federated or decentralized systems adopt such a policy, and actually enforce it.
The only thing that’s novel in this post is the coined term, Cryptographic Alacrity.
Addendum – Questions Asked After This Post Went Live
Art: ScruffKerfluff
What about Linux Distros with slow release cycles?
What about them?!
In my vision of the future, the primary deliverable that users will actually hold will most likely be a Browser Extension, not a binary blob signed by their Linux distro.
They already make exceptions to their glacial release cadences for browsers, so I don’t anticipate whatever release cadence we settle on being a problem in practice.
For people who write clients with desktop software: Debian and Ubuntu let users install PPAs. Anyone who does Node.js development on Linux is familiar with them.
Why 1 year?
It was an example. We could go shorter or longer depending on the needs of the ecosystem.
How will you enforce the removal of old versions if devs don’t comply?
That’s a much lower priority than enforcing the adoption of new versions.
But realistically, sending pull requests to remove old versions would be the first step.
Publicly naming and shaming clients that refuse to drop abandoned protocol versions is always an option for dealing with obstinance.
We could also fingerprint clients that still support the old versions and refuse to connect to them at all, even if there is a version in common, until they update to drop the old version.
That said, I would hope to not need to go that far.
I really don’t want to overindex on this, but people keep asking or trying to send “what about?” comments that touch on this question, so now I’m writing a definitive statement to hopefully quell this unnecessary discourse.
The ubiquitous adoption of newer versions is a much higher priority than the sunsetting of old versions. It should be obvious that getting your users to use the most secure mode available is intrinsically a net-positive.
If your client can negotiate in the most secure mode available (i.e., if we move onto post-quantum cryptography), and your friends’ clients enforce the correct minimum version, it doesn’t really matter so much if your client in particular is non-compliant.
Focusing so much on this aspect is a poor use of time and emotional bandwidth.
Header art also made by AJ.
https://soatok.blog/2024/08/28/introducing-alacrity-to-federated-cryptography/
#cryptographicAgility #cryptographicAlacrity #cryptography #endToEndEncryption #fediverse #Matrix #OMEMO #XMPP
I don’t consider myself exceptional in any regard, but I stumbled upon a few cryptography vulnerabilities in Matrix’s Olm library with so little effort that it was nearly accidental.

It should not be this easy to find these kinds of issues in any product people purportedly rely on for private messaging, which many people incorrectly evangelize as a Signal alternative.
Later, I thought I identified an additional vulnerability that would have been much worse, but I was wrong about that one. For the sake of transparency and humility, I’ll also describe that in detail.
This post is organized as follows:
- Disclosure Timeline
- Vulnerabilities in Olm (Technical Details)
- Recommendations
- Background Information
- An Interesting Non-Issue That Looked Critical
I’ve opted to front-load the timeline and vulnerability details to respect the time of busy security professionals.
Please keep in mind that this website is a furry blog, first and foremost, that sometimes happens to cover security and cryptography topics.

Many people have, over the years, assumed the opposite and commented accordingly. The ensuing message board threads are usually a waste of time and energy for everyone involved. So please adjust your expectations.
Art by Harubaki
If you’re curious, you can learn more here.
Disclosure Timeline
- 2024-05-15: I took a quick look at the Matrix source code. I identified two issues and emailed them to their security@ email address. In my email, I specified that I planned to disclose my findings publicly in 90 days (i.e., on August 14), in adherence with industry best practices for coordinated disclosure, unless they requested an extension in writing.
- 2024-05-16: I checked something else on a whim and found a third issue, which I also emailed to their security@ address.
- 2024-05-17: The Matrix security team confirmed receipt of my reports.
- 2024-05-17: I followed up with a suspected fourth finding–the most critical of them all. They pointed out that it was not actually an issue, because I had overlooked an important detail in how the code is architected. Mea culpa!
- 2024-05-18: A friend disclosed a separate finding with Matrix: media can be decrypted to multiple valid plaintexts using different keys, and malicious homeservers can trick Element/SchildiChat into revealing links in E2EE rooms. They instructed the Matrix developers to consult with me if they needed cryptography guidance. I never heard from them on this externally reported issue.
- 2024-07-12: I shared this blog post draft with the Matrix security team while reminding them of the public disclosure date.
- 2024-07-31: Matrix pushes a commit that announces that libolm is deprecated.
- 2024-07-31: I email the Matrix security team asking if they plan to fix the reported issues (and if not, if there’s any other reason I should withhold publication).
- 2024-07-31: Matrix confirms they will not fix these issues (due to its now deprecated status), but ask that I withhold publication until the 14th as originally discussed.
- 2024-08-14: This blog post is publicly disclosed to the Internet.
- 2024-08-14: The lead Matrix dev claims they already knew about these issues, and, in fact, knowingly shipped cryptography code that was vulnerable to side-channel attacks for years. See Addendum.
- 2024-08-23: MITRE has assigned CVE IDs to these three findings.
Vulnerabilities in Olm
I identified the following issues with Olm through a quick skim of their source code on Gitlab:
- AES implementation is vulnerable to cache-timing attacks
- Ed25519 signatures are malleable
- Timing leakage in base64 decoding of private key material
This is sorted by the order in which they were discovered, rather than severity.
AES implementation is vulnerable to cache-timing attacks
a.k.a. CVE-2024-45191

Olm ships a pure-software implementation of AES, rather than leveraging hardware acceleration.
```c
// Substitutes a word using the AES S-Box.
WORD SubWord(WORD word)
{
    unsigned int result;

    result = (int)aes_sbox[(word >> 4) & 0x0000000F][word & 0x0000000F];
    result += (int)aes_sbox[(word >> 12) & 0x0000000F][(word >> 8) & 0x0000000F] << 8;
    result += (int)aes_sbox[(word >> 20) & 0x0000000F][(word >> 16) & 0x0000000F] << 16;
    result += (int)aes_sbox[(word >> 28) & 0x0000000F][(word >> 24) & 0x0000000F] << 24;
    return(result);
}
```
The code in question is called from this code, which is in turn used to actually encrypt messages.
Software implementations of AES that use a look-up table for the SubWord step of the algorithm are famously susceptible to cache-timing attacks.
This kind of vulnerability in software AES was previously used to extract a secret key from OpenSSL and dm-crypt in about 65 milliseconds. Both papers were published in 2005.
A general rule in cryptography is, “attacks only get better; they never get worse“.
As of 2009, you could remotely detect a timing difference of about 15 microseconds over the Internet with under 50,000 samples. Side-channel exploits are generally statistical in nature, so such a sample size is generally not a significant mitigation.
How is this code actually vulnerable?
In the above code snippet, the vulnerability occurs in aes_sbox[/* ... */][/* ... */].

Due to the details of how the AES block cipher works, the input variable (word) is a sensitive value.

Software written this way allows attackers to detect whether or not a specific value was present in one of the processor’s caches.
To state the obvious: Cache hits are faster than cache misses. This creates an observable timing difference.
Such a timing leak allows the attacker to learn the value that was actually stored in said cache. You can directly learn this from other processes on the same hardware, but it’s also observable over the Internet (with some jitter) through the normal operation of vulnerable software.
See also: cryptocoding’s description for table look-ups indexed by secret data.
How to mitigate this cryptographic side-channel
The correct way to solve this problem is to use hardware accelerated AES, which uses distinct processor features to implement the AES round function and side-steps any cache-timing shenanigans with the S-box.

Not only is this more secure, but it’s faster and uses less energy too!
If you’re also targeting devices that don’t have hardware acceleration available, you should first use hardware acceleration where possible, but then fallback to a bitsliced implementation such as the one in Thomas Pornin’s BearSSL.
See also: the BearSSL documentation for constant-time AES.
Art by AJ
Ed25519 signatures are malleable
a.k.a. CVE-2024-45193

Ed25519 libraries come in various levels of quality regarding signature validation criteria, much to the chagrin of cryptography engineers everywhere. One of those validation criteria involves signature malleability.
Signature malleability usually isn’t a big deal for most protocols, until suddenly you discover a use case where it is. If it matters, that usually means you’re doing something with cryptocurrency.
Briefly, if your signatures are malleable, that means you can take an existing valid signature for a given message and public key, and generate a second valid signature for the same message. The utility of this flexibility is limited, and the impact depends a lot on how you’re using signatures and what properties you hope to get out of them.
For ECDSA, this means that for a given signature (r, s), a second signature (r, n - s) is also possible (where n is the order of the elliptic curve group you’re working with).
Matrix uses Ed25519, whose malleability is demonstrated between (R, S) and (R, S + L).
This is trivially possible because S is implicitly reduced modulo the order of the curve, L, which is a 253-bit number (0x1000000000000000000000000000000014def9dea2f79cd65812631a5cf5d3ed), while S is encoded as a 256-bit number.

The Ed25519 library used within Olm does not ensure that S < L, thus signatures are malleable. You can verify this yourself by looking at the Ed25519 verification code.
```c
int ed25519_verify(const unsigned char *signature,
                   const unsigned char *message, size_t message_len,
                   const unsigned char *public_key) {
    unsigned char h[64];
    unsigned char checker[32];
    sha512_context hash;
    ge_p3 A;
    ge_p2 R;

    if (signature[63] & 224) {
        return 0;
    }
    if (ge_frombytes_negate_vartime(&A, public_key) != 0) {
        return 0;
    }
    sha512_init(&hash);
    sha512_update(&hash, signature, 32);
    sha512_update(&hash, public_key, 32);
    sha512_update(&hash, message, message_len);
    sha512_final(&hash, h);
    sc_reduce(h);
    ge_double_scalarmult_vartime(&R, h, &A, signature + 32);
    ge_tobytes(checker, &R);
    if (!consttime_equal(checker, signature)) {
        return 0;
    }
    return 1;
}
```
This is almost certainly a no-impact finding (or low-impact at worst), but still an annoying one to see in 2024.
If you’d like to learn more, this page is a fun demo of Ed25519 malleability.
To mitigate this, I recommend implementing these checks from libsodium.
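For illustration, the missing check (and the resulting malleation) can be sketched in a few lines; `sig` here is the standard 64-byte (R, S) wire encoding, and the helper names are mine:

```python
# Ed25519 group order L (the 253-bit constant quoted above).
L = 2**252 + 27742317777372353535851937790883648493

def is_canonical_s(sig: bytes) -> bool:
    """The check Olm's library omits: reject signatures whose scalar
    S (bytes 32..63, little-endian) is not fully reduced modulo L."""
    s = int.from_bytes(sig[32:64], "little")
    return s < L

def malleate(sig: bytes) -> bytes:
    """Given a canonical signature (R, S), produce the second encoding
    (R, S + L). Both decode to the same scalar mod L, and S + L < 2^253
    still passes the top-bits check in the verifier above, so a lax
    verifier accepts both encodings."""
    s = int.from_bytes(sig[32:64], "little")
    return sig[:32] + (s + L).to_bytes(32, "little")
```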
Art: CMYKat
Timing leakage in base64 decoding of private key material
a.k.a. CVE-2024-45192

If you weren’t already tired of cache-timing attacks based on table look-ups from AES, the Matrix base64 code is also susceptible to the same implementation flaw.
```c
while (pos != end) {
    unsigned value = DECODE_BASE64[pos[0] & 0x7F];
    value <<= 6;
    value |= DECODE_BASE64[pos[1] & 0x7F];
    value <<= 6;
    value |= DECODE_BASE64[pos[2] & 0x7F];
    value <<= 6;
    value |= DECODE_BASE64[pos[3] & 0x7F];
    pos += 4;
    output[2] = value;
    value >>= 8;
    output[1] = value;
    value >>= 8;
    output[0] = value;
    output += 3;
}
```
The base64 decoding function in question is used to load the group session key, which means the attack published in this paper almost certainly applies.
How would you mitigate this leakage?
Steve Thomas (one of the judges of the Password Hashing Competition, among other noteworthy contributions) wrote some open source code a while back that implements base64 encoding routines in constant time.

The really interesting part is how it avoids a table look-up by using arithmetic (from this file):
```c
// Base64 character set:
// [A-Z]      [a-z]      [0-9]      +     /
// 0x41-0x5a, 0x61-0x7a, 0x30-0x39, 0x2b, 0x2f
inline int base64Decode6Bits(char src)
{
    int ch = (unsigned char) src;
    int ret = -1;

    // if (ch > 0x40 && ch < 0x5b) ret += ch - 0x41 + 1; // -64
    ret += (((0x40 - ch) & (ch - 0x5b)) >> 8) & (ch - 64);

    // if (ch > 0x60 && ch < 0x7b) ret += ch - 0x61 + 26 + 1; // -70
    ret += (((0x60 - ch) & (ch - 0x7b)) >> 8) & (ch - 70);

    // if (ch > 0x2f && ch < 0x3a) ret += ch - 0x30 + 52 + 1; // 5
    ret += (((0x2f - ch) & (ch - 0x3a)) >> 8) & (ch + 5);

    // if (ch == 0x2b) ret += 62 + 1;
    ret += (((0x2a - ch) & (ch - 0x2c)) >> 8) & 63;

    // if (ch == 0x2f) ret += 63 + 1;
    ret += (((0x2e - ch) & (ch - 0x30)) >> 8) & 64;

    return ret;
}
```
Any C library that handles base64 codecs for private key material should use a similar implementation. It’s fine to have a faster base64 implementation for non-secret data.
Worth noting: Libsodium also provides a reasonable Base64 codec.
Recommendations
These issues are not fixed in libolm. Instead of fixing libolm, the Matrix team recommends all Matrix clients adopt vodozemac.
I can’t speak to the security of vodozemac.
Art: CMYKat
But I can speak against the security of libolm, so moving to vodozemac is probably a good idea. It was audited by Least Authority at one point, so it’s probably fine.
Most Matrix clients that still depend on libolm should treat this blog post as public 0day, unless the Matrix security team already notified you about these issues.
Background Information
If you’re curious about the backstory and context of these findings, read on.

Otherwise, feel free to skip this section. It’s not pertinent to most audiences. The people that need to read it already know who they are.
End-to-end encryption is one of the topics within cryptography that I find myself often writing about.

In 2020, I wrote a blog post covering end-to-end encryption for application developers. This was published several months after another blog post I wrote covering gripes with AES-GCM, which included a shallow analysis of how Signal uses the algorithm for local storage.
In 2021, I published weaknesses in another so-called private messaging app called Threema.
In 2022, after Elon Musk took over Twitter, I joined the Fediverse and sought to build end-to-end encryption support for direct messages into ActivityPub, starting with a specification. Work on this effort was stalled while trying to solve Public Key distribution in a federated environment (which I hope to pick up soon, but I digress).
Earlier this year, the Telegram CEO started fearmongering about Signal with assistance from Elon Musk, so I wrote a blog post urging the furry fandom to move away from Telegram and start using Signal more. As I had demonstrated years prior, I was familiar with Signal’s code and felt it was a good recommendation for security purposes (even if its user experience needs significant work).
I thought that would be a nice, self-contained blog post. Some might listen, most would ignore it, but I could move on with my life.
I was mistaken about that last point.
Art by AJ

An overwhelming number of people took it upon themselves to recommend or inquire about Matrix, which prompted me to hastily scribble down my opinion on Matrix so that I might copy/paste a link around and save myself a lot of headache.
Just when I thought the firehose was manageable and I could move onto other topics, one of the Matrix developers responded to my opinion post.
Thus, I decided to briefly look at their source code and see if any major or obvious cryptography issues would fall out of a shallow visual scan.
Since you’re reading this post, you already know how that ended.
Credit: CMYKat
Since the first draft of this blog post was penned, I also outlined what I mean when I say an encrypted messaging app is a Signal competitor or not, and published my opinion on XMPP+OMEMO (which people also recommend for private messaging).
Why mention all this?
Because it’s important to know that I have not audited the Olm or Megolm codebases, nor even glanced at their new Rust codebase.

The fact is, I never intended to study Matrix. I was annoyed into looking at it in the first place.
My opinion of their project was already calcified by the previously discovered practically-exploitable cryptographic vulnerabilities in Matrix in 2022.
The bugs described above are the sort of thing I mentally scan for when I first look at a project just to get a feel for the maturity of the codebase. I do this with the expectation (hope, really) of not finding anything at all.
(If you want two specific projects that I’ve subjected to a similar treatment, and failed to discover anything interesting in: Signal and WireGuard. These two set the bar for cryptographic designs.)
It’s absolutely bonkers that an AES cache timing vulnerability was present in their code in 2024.
It’s even worse when you remember that I was inundated with Matrix evangelism in response to recommending furries use Signal. I’m a little outraged because of how irresponsible this is, in context.
It’s so bad that I didn’t even need to clone their git repository, let alone run basic static analysis tools locally. So if you take nothing else away from this blog post, let it be this:
There is roughly a 0% chance that I got extremely lucky in my mental grep and found the only cryptography implementation flaws in their source code. I barely tried at all and found these issues. I would bet money on there being more bugs or design flaws that I didn’t find, because this discovery was the result of an extremely half-assed effort to blow off steam.
Wasn’t libolm deprecated in May 2022?
The Matrix developers like to insist that their new Rust hotness “vodozemac” is what people should be using today. I haven’t looked at vodozemac at all, but let’s pretend, for the sake of argument, that its cryptography is actually secure.
(This is very likely if they turn out to be using RustCrypto for their primitives, but I don’t have the time or energy for that nerd snipe, so I’m not going to look. Least Authority did audit their Rust library, for what it’s worth, and Least Authority isn’t clownshoes.)
It’s been more than 2 years since they released vodozemac. What does the ecosystem penetration for this new library look like, in practice?
A quick survey of the various Matrix clients on GitHub says that libolm is still the most widely used cryptography implementation in the Matrix ecosystem (as of this writing):
Matrix Client: Cryptography Backend

https://github.com/tulir/gomuks: libolm (1, 2)
https://github.com/niochat/nio: libolm (1, 2)
https://github.com/ulyssa/iamb: vodozemac (1, 2)
https://github.com/mirukana/mirage: libolm (1)
https://github.com/Pony-House/Client: libolm (1)
https://github.com/MTRNord/cetirizine: vodozemac (1)
https://github.com/nadams/go-matrixcli: none
https://github.com/mustang-im/mustang: libolm (1)
https://github.com/marekvospel/libretrix: libolm (1)
https://github.com/yusdacra/icy_matrix: none
https://github.com/ierho/element: libolm (through the Python SDK)
https://github.com/mtorials/cordless: none
https://github.com/hwipl/nuqql-matrixd: libolm (through the Python SDK)
https://github.com/maxkratz/element-web: vodozemac (1, 2, 3, 4)
https://github.com/asozialesnetzwerk/riot: libolm (wasm file)
https://github.com/NotAlexNoyle/Versi: libolm (1, 2)

3 of the 16 clients surveyed use the new vodozemac library. 10 still use libolm, and 3 don’t appear to implement end-to-end encryption at all.
If we only focus on clients that support E2EE, vodozemac has successfully been adopted by 23% (3 of 13) of the open source Matrix clients on GitHub.
I deliberately excluded any repositories that were archived or clearly marked as “old” or “legacy” software, because including those would artificially inflate the representation of libolm. It would make for a more compelling narrative to do so, but I’m not trying to be persuasive here.
Deprecation policies are a beautiful lie. The impact of a vulnerability in Olm or Megolm is still far-reaching, and should be taken seriously by the Matrix community.
Worth calling out: this quick survey, which is based on a GitHub Topic, certainly misses other implementations. Both FluffyChat and Cinny, which were not tagged with this GitHub Topic, depend on a language-specific Olm binding. These bindings in turn wrap libolm rather than the Rust replacement, vodozemac.
But the official clients…
I thought the whole point of choosing Matrix over something like Signal was to be federated and run your own third-party clients? If we’re going to insist that everyone should be using Element if they want to be secure, that defeats the entire marketing point about third-party clients that Matrix evangelists cite when they decry Signal’s centralization.
So I really don’t want to hear it.
An Interesting Non-Issue That Looked Critical
As I mentioned in the timeline at the top, I thought I found a fourth issue with Matrix’s codebase. Had I been correct, this would have been a critical severity finding that the entire Matrix ecosystem would need to melt down to remediate. Fortunately for everyone, I made a mistake, and there is no fourth vulnerability after all.
However, I thought it would be interesting to write about what I thought I found, the impact it would have had if it were real, and why I believed it to be an issue.
Let’s start with the code in question:
    void ed25519_sign(unsigned char *signature,
                      const unsigned char *message, size_t message_len,
                      const unsigned char *public_key,
                      const unsigned char *private_key) {
        sha512_context hash;
        unsigned char hram[64];
        unsigned char r[64];
        ge_p3 R;

        sha512_init(&hash);
        sha512_update(&hash, private_key + 32, 32);  /* <-- the highlighted segment */
        sha512_update(&hash, message, message_len);
        sha512_final(&hash, r);

        sc_reduce(r);
        ge_scalarmult_base(&R, r);
        ge_p3_tobytes(signature, &R);

        sha512_init(&hash);
        sha512_update(&hash, signature, 32);
        sha512_update(&hash, public_key, 32);
        sha512_update(&hash, message, message_len);
        sha512_final(&hash, hram);

        sc_reduce(hram);
        sc_muladd(signature + 32, hram, private_key, r);
    }
The highlighted segment is doing pointer arithmetic. This means it’s reading 32 bytes, starting from the 32nd byte in private_key.

What’s actually happening here is: private_key is the SHA512 hash of a 256-bit seed. If you look at the function prototype, you’ll notice that public_key is a separate input. Virtually every other Ed25519 implementation I’ve ever looked at before expected users to provide a 32-byte seed followed by the public key as a single input.
This led me to believe that this private_key + 32 pointer arithmetic was actually using the public key for calculating r.

The variable r (not to be confused with big R), generated via the first SHA512, is the nonce for a given signature; it must remain secret for Ed25519 to remain secure. If r is known to an attacker, you can do some arithmetic to recover the secret key from a single signature.

Because I had mistakenly believed that r was calculated from the SHA512 of only public inputs (the public key and message), which I must emphasize isn’t correct, I had falsely concluded that any previously intercepted signature could be used to steal users’ private keys.

Credit: CMYKat
But because private_key was actually the full SHA512 hash of the seed, rather than the seed concatenated with the public key, this pointer arithmetic did NOT use the public key for the calculation of r, so this vulnerability does not exist.

If the code did what I thought it did, however, this would have been a complete fucking disaster for the Matrix ecosystem. Any previously intercepted message would have allowed an attacker to recover a user’s secret key and impersonate them. It wouldn’t be enough to fix the code; every key in the ecosystem would need to be revoked and rotated.
Whew!
I’m happy to be wrong about this one, because that outcome is a headache nobody wants.
So no action is needed, right?
Well, maybe. Matrix’s library was not vulnerable, but I honestly wouldn’t put it past software developers at large to somehow, somewhere, use the public key (rather than a secret value) to calculate the EdDSA signature nonces as described in the previous section.
To that end, I would like to propose a test vector be added to the Wycheproof test suite to catch any EdDSA implementation that misuses the public key in this way.
Then, if someone else screws up their Ed25519 implementation in the exact way I thought Matrix was, the Wycheproof tests will catch it.
For example, here’s a vulnerable test input for Ed25519:
    {
      "should-fail": true,
      "secret-key": "d1d0ef849f9ec88b4713878442aeebca5c7a43e18883265f7f864a8eaaa56c1ef3dbb3b71132206b81f0f3782c8df417524463d2daa8a7c458775c9af725b3fd",
      "public-key": "f3dbb3b71132206b81f0f3782c8df417524463d2daa8a7c458775c9af725b3fd",
      "message": "Test message",
      "signature": "ffc39da0ce356efb49eb0c08ed0d48a1cadddf17e34f921a8d2732a33b980f4ae32d6f5937a5ed25e03a998e4c4f5910c931b31416e143965e6ce85b0ea93c09"
    }
A similar test vector would also be worth creating for Ed448, but the only real users of Ed448 were the authors of the xz backdoor, so I didn’t bother with that.
(None of the Project Wycheproof maintainers knew this suggestion was coming, by the way, because I was respecting the terms of the coordinated disclosure.)
Closing Thoughts
Despite finding cryptography implementation flaws in Matrix’s Olm library, my personal opinion on Matrix remains largely unchanged from 2022. I had already assumed it would not meet my bar for security.

Cryptography engineering is difficult because the vulnerabilities you’re usually dealing with are extremely subtle. (Here’s an unrelated example if you’re not convinced of this general observation.) As SwiftOnSecurity once wrote:
https://twitter.com/SwiftOnSecurity/status/832058185049579524
The people that developed Olm and Megolm have not proven themselves ready to build a Signal competitor. On balance, most teams are not qualified to do so.
I really wish the Matrix evangelists would accept this and stop trying to cram Matrix down other people’s throats when they’re talking about problems with other platforms entirely.
More important for the communities of messaging apps: you don’t need to be a Signal competitor. Having E2EE is a good thing on its own merits, and really should be table stakes for any social application in 2024.
It’s only when people try to advertise their apps as a Signal alternative (or try to recommend it instead of Signal), and offer less security, that I take offense.
Just be your own thing.
My work-in-progress proposal to bring end-to-end encryption to the Fediverse doesn’t aim to compete with Signal. It’s just meant to improve privacy, which is a good thing to do on its own merits.
If I never hear Matrix evangelism again after today, it would be far too soon.

If anyone feels like I’m picking on Matrix, don’t worry: I have far worse things to say about Telegram, Threema, XMPP+OMEMO, Tox, and a myriad of other projects that are hungry for Signal’s market share but don’t measure up from a cryptographic security perspective.
If Signal fucked up as bad as these projects, my criticism of Signal would be equally harsh. (And remember, I have looked at Signal before.)
Addendum (2024-08-14)
One of the lead Matrix devs posted a comment on Hacker News after this blog post went live, which I will duplicate here:

    the author literally picked random projects from github tagged as matrix, without considering their prevalence or whether they are actually maintained etc.
    if you actually look at % of impacted clients, it’s tiny.

    meanwhile, it is very unclear that any sidechannel attack on a libolm based client is practical over the network (which is why we didn’t fix this years ago). After all, the limited primitives are commented on in the readme and https://github.com/matrix-org/olm/issues/3 since day 1.
So the Matrix developers already knew about these vulnerabilities, but deliberately didn’t fix them, for years.

Congratulations, you’ve changed my stance. It used to be “I don’t consider Matrix a Signal alternative, and they’ve had some embarrassing and impactful crypto bugs, but otherwise I don’t care”. Now it’s a stronger stance:
Don’t use Matrix.
I had incorrectly assumed ignorance, when it was in fact negligence.
There’s no reasonable world in which anyone should trust the developers of cryptographic software (i.e., libolm) that deliberately ships with side-channels for years, knowing they’re present, and never bother to fix them.
This is fucking clownshoes.
https://soatok.blog/2024/08/14/security-issues-in-matrixs-olm-library/
#crypto #cryptography #endToEndEncryption #Matrix #sideChannels #vuln
In late 2022, I blogged about the work needed to develop a specification for end-to-end encryption for the fediverse. I sketched out some of the key management components on GitHub, and then the public work abruptly stalled.
A few of you have wondered what’s the deal with that.
This post covers why this effort stalled and what I’m proposing we do next.
What’s The Hold Up?
The “easy” (relatively speaking) parts of the problem are as follows:
- Secret key management. (This is sketched out already, and provides multiple mechanisms for managing secret key material. Yay!)
- Bulk encryption of messages and media. (I’ve done a lot of work in this space over the years, so it’s an area I’m deeply familiar with. When we get to this part, it will be almost trivial. I’m not worried about it at all.)
- Forward-secure ratcheting / authenticated key exchange / group key agreement. (RFC 9420 is a great starting point.)
That is to say, managing secret keys, using secret keys, and deriving shared secret keys are all in the “easy” bucket.
The hard part? Public key management.
CMYKat made this
Why is Public Key Management Hard?
In a centralized service (think: Twitter, Facebook, etc.), this is actually much easier to build: Shove your public keys into a database, and design your client-side software to trust whatever public key your server gives them. Bob’s your uncle, pack it up and go home.
Unfortunately, it’s kind of stupid to build anything that way.
If you explicitly trust the server, the server could provide the wrong public key (i.e., one for which the server knows the corresponding secret key) and you’ll be none the wiser. This makes it trivial for the server to intercept and read your messages.
If your users are trusting you regardless, they’re probably just as happy if you don’t encrypt at the endpoint at all (beyond using TLS, but transport encryption is table stakes for any online service so nevermind that).
But let’s say you wanted to encrypt between peers anyway, because you’re feeling generous (or don’t want to field a bunch of questionably legal demands for user data by law enforcement; a.k.a. the Snapchat threat model).
You could improve endpoint trust by shoving all of your users’ public keys into an append-only data structure; i.e. key transparency, like WhatsApp proposed in 2023:
https://www.youtube.com/watch?v=_N4Q05z5vPE
And, to be perfectly clear, key transparency is a damn good idea.
Key transparency keeps everyone honest and makes it difficult for criminals to secretly replace a victim’s public key, because the act of doing so is unavoidably published to an append-only log.
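One way to picture the append-only property: every entry commits to everything that came before it, so a server that rewrites history ends up with a different head hash than the one auditors already pinned. Here is a toy hash-chain sketch of that idea (real key transparency systems use Merkle trees, which add efficient inclusion and consistency proofs on top of this basic commitment):

```python
import hashlib

def append(head: bytes, entry: bytes) -> bytes:
    """The new head commits to the old head and the new entry."""
    return hashlib.sha256(head + entry).digest()

head = b"\x00" * 32  # head of the empty log
for entry in [b"alice:pk1", b"bob:pk2", b"alice:pk3"]:
    head = append(head, entry)

# Rewriting any earlier entry changes the final head, so auditors
# who recorded the old head detect the tampering immediately.
tampered = b"\x00" * 32
for entry in [b"alice:EVIL", b"bob:pk2", b"alice:pk3"]:
    tampered = append(tampered, entry)
assert tampered != head
```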
The primary challenge is scaling a transparency feature to serve a public, federated system.
Federated Key Transparency?
Despite appearances, I haven’t been sitting on my thumbs for the past year or so. I’ve been talking with cryptography experts about their projects and papers in the same space.
Truthfully, I had been hoping to piggyback off one of those upcoming projects (which is focused more on public key discovery for SAML- and OAuth-like protocols) to build the Federated PKI piece for E2EE for the Fediverse.
Unfortunately, that project keeps getting delayed and pushed back, and I’ve just about run out of patience for it.
Additionally, there are some engineering challenges that I would need to tackle to build atop it, so it’s not as simple as “let’s just use that protocol”, either.
So let’s do something else instead:
Art: ScruffKerfluff
Fediverse Public Key Directories
Orthogonal to the overall Fediverse E2EE specification project, let’s build a Public Key Directory for the Fediverse.
This will not only be useful for building a coherent specification for E2EE (as it provides the “Federated PKI” component we’d need to build it securely), but it would also be extremely useful for software developers the whole world over.
Imagine this:
- If you want to fetch a user’s SSH public key, you can just query for their username and get a list of non-expired, non-revoked public keys to choose from.
- If you wanted public key pinning and key rotation for OAuth2 and/or OpenID Connect identity providers without having to update configurations or re-deploy any applications, you can do that.
- If you want to encrypt a message to a complete stranger, such that only they can decrypt it, without any sort of interaction (i.e., they could be offline for a holiday and still decrypt it when they get back), you could do that.
Oh, and best of all? You can get all these wins without propping up any cryptocurrency bullshit either.
From simple abstractions, great power may bloom.Mark Miller
How Will This Work?
We need to design a specific kind of server that speaks a limited set of the ActivityPub protocol.
I say “limited” because it will not support editing or deleting messages provided by another instance. It will only append data.
To understand the full picture, let’s first look at the message types, public key types, and how the message types will be interpreted.
Message Types
Under the ActivityPub layer, we will need to specify a distinct set of Directory Message Types. An opening offer would look like this:
- AddKey: contains an asymmetric public key, a number mapped to the user, the instance that hosts it, and some other metadata (i.e., time)
- RevokeKey: marks an existing public key as revoked
- MoveIdentity: moves all of the public keys from identity A to identity B. This can be used for username changes or instance migrations.
We may choose to allow more message types at the front-end if need be, but that’s enough for our purposes.
Public Key Types
We are not interested in backwards compatibility with every existing cryptosystem. We will only tolerate a limited set of public key types.
At the outset, only Ed25519 will be supported.
In the future, we will include post-quantum digital signature algorithms on this list, but not before the current designs have had time to mature.
RSA will never be included in the set.
ECDSA over NIST P-384 may be included at some point, if there’s sufficient interest in supporting e.g., US government users.
If ECDSA is ever allowed, RFC 6979 is mandatory.
Message Processing
When an instance sends a message to a Directory Server, it will need to contain a specific marker for our protocol. Otherwise, it will be rejected.
Each message will have its own processing rules.
After the processing rules are applied, the message will be stored in the Directory Server, and a hash of the message will be published to a SigSum transparency ledger. The Merkle root and inclusion proofs will be stored in an associated record, attached to the record for the new message.
Every message will have its hash published in SigSum. No exceptions.
We will also need a mechanism for witness co-signatures to be published and attached to the record.
Additionally, all messages defined here are generated by the users, client-side. Servers are not trusted, generally, as part of the overall E2EE threat model.
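As a sketch of the “hash of the message” step above: assuming (hypothetically; the final spec would have to pin this down) that the preimage is a canonical JSON serialization, the publishing flow might look like this. Getting canonicalization exactly right is one of the attack surfaces mentioned later in this post, so treat this as illustrative only:

```python
import hashlib
import json

def message_hash(message: dict) -> str:
    # Hypothetical canonicalization: sorted keys, no insignificant
    # whitespace. A real spec must define this precisely, because two
    # serializations of the same object must never hash differently.
    canonical = json.dumps(message, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Key order in the input must not change the digest:
h1 = message_hash({"action": "AddKey", "identity": "foo@mastodon.example.com"})
h2 = message_hash({"identity": "foo@mastodon.example.com", "action": "AddKey"})
assert h1 == h2
```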
AddKey
    {
      "@context": "https://example.com/ns/fedi-e2ee/v1",
      "action": "AddKey",
      "message": {
        "time": "2024-12-31T23:59:59Z",
        "identity": "foo@mastodon.example.com",
        "public-key": "ed25519:<key goes here>"
      },
      "signature": "SignatureOfMessage"
    }
The first AddKey for any given identity will need to be self-signed by the key being added (in addition to ActivityPub messages being signed by the instance).
After an identity exists in the directory, every subsequent public key MUST be signed by a non-revoked keypair.
RevokeKey
    {
      "@context": "https://example.com/ns/fedi-e2ee/v1",
      "action": "RevokeKey",
      "message": {
        "time": "2024-12-31T23:59:59Z",
        "identity": "foo@mastodon.example.com",
        "public-key": "ed25519:<key goes here>"
      },
      "signature": "SignatureOfMessage"
    }
This marks the public key as untrusted, and effectively “deletes” it from the list that users will fetch.
Important: RevokeKey will fail unless there is at least one more trusted public key for this user. Otherwise, a denial of service would be possible.
Replaying an AddKey for a previously-revoked key MUST fail.
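The AddKey and RevokeKey policies above amount to a small state machine per identity. Here is a toy model of just the policy logic, with signature verification stubbed out as a key-identity comparison (a real server would verify actual Ed25519 signatures; the class and method names are mine, not the spec’s):

```python
class Directory:
    """Toy model of the per-identity key policy (no real crypto)."""

    def __init__(self):
        self.trusted = {}  # identity -> set of trusted public keys
        self.revoked = {}  # identity -> set of revoked public keys

    def add_key(self, identity, public_key, signed_by):
        if public_key in self.revoked.get(identity, set()):
            raise ValueError("replaying a revoked key MUST fail")
        keys = self.trusted.setdefault(identity, set())
        if not keys:
            # The first AddKey must be self-signed by the key being added.
            if signed_by != public_key:
                raise ValueError("first AddKey must be self-signed")
        elif signed_by not in keys:
            raise ValueError("must be signed by a non-revoked keypair")
        keys.add(public_key)

    def revoke_key(self, identity, public_key):
        keys = self.trusted.get(identity, set())
        if public_key not in keys:
            raise ValueError("unknown key")
        if len(keys) < 2:
            # Otherwise a denial of service would be possible.
            raise ValueError("refusing to revoke the last trusted key")
        keys.remove(public_key)
        self.revoked.setdefault(identity, set()).add(public_key)

d = Directory()
d.add_key("foo@mastodon.example.com", "pk-A", signed_by="pk-A")
d.add_key("foo@mastodon.example.com", "pk-B", signed_by="pk-A")
d.revoke_key("foo@mastodon.example.com", "pk-A")  # pk-B remains trusted
```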
MoveIdentity
    {
      "@context": "https://example.com/ns/fedi-e2ee/v1",
      "action": "MoveIdentity",
      "message": {
        "time": "2024-12-31T23:59:59Z",
        "old-identity": "foo@mastodon.example.com",
        "new-identity": "bar@akko.example.net"
      },
      "signature": "SignatureOfMessage"
    }
This exists to facilitate migrations and username changes.
Other Message Types
The above list is not exhaustive. We may need other message types depending on the exact feature set needed by the final specification.
Fetching Public Keys
A simple JSON API (and/or an ActivityStream; haven’t decided) will be exposed to query for the currently trusted public keys for a given identity.
    {
      "@context": "https://example.com/ns/fedi-e2ee/v1",
      "public-keys": [
        {
          "data": {
            "time": "2024-12-31T23:59:59Z",
            "identity": "foo@mastodon.example.com",
            "public-key": "ed25519:<key goes here>"
          },
          "signature": "SignatureOfData",
          "sigsum": { /* ... */ }
        },
        {
          "data": { /* ... */ }
          /* ... */
        }
        /* ... */
      ]
    }
Simple and easy.
Gossip Between Instances
Directory Servers should be configurable to mirror records from other instances.
Additionally, they should be configurable to serve as Witnesses for the SigSum protocol.
The communication layer here between Directory Servers will also be ActivityPub.
Preventing Abuse
The capability of learning a user’s public key doesn’t imply the ability to send messages or bypass their block list.
Additionally, Fediverse account usernames are (to my knowledge) generally not private, so I don’t anticipate there being any danger in publishing public keys to an append-only ledger.
That said, I am totally open to considering use cases where the actual identity is obfuscated (e.g., HMAC with a static key known only to the instance that hosts them instead of raw usernames).
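That obfuscation idea could look something like the following sketch (the function name and label are mine, not part of any spec). The ledger would record only the MAC output, so outsiders without the instance key can neither reverse it nor precompute a dictionary of likely usernames:

```python
import hashlib
import hmac

# Static secret known only to the instance that hosts the account.
# (In practice this would be 32 random bytes from a CSPRNG.)
instance_key = b"example 32-byte instance secret!"

def obfuscated_identity(username: str) -> str:
    # The ledger sees only this HMAC output, never the raw username.
    mac = hmac.new(instance_key, username.encode("utf-8"), hashlib.sha256)
    return mac.hexdigest()

# Deterministic per username, but unlinkable without instance_key:
assert obfuscated_identity("foo") == obfuscated_identity("foo")
assert obfuscated_identity("foo") != obfuscated_identity("bar")
```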
What About GDPR / Right To Be Forgotten?
Others have previously suggested that usernames might be subject to the “right to be forgotten”, which would require breaking history for an append-only ledger.
After discussing a proposed workaround with a few people in the Signal group for this project, we realized that complying would necessarily introduce security issues by giving instance admins the capability of selectively remapping the user ID to different audiences, and detecting/mitigating this remapping is annoying.
However, we don’t need to do that in the first place.
According to this webpage about GDPR’s Right to be Forgotten:
However, an organization’s right to process someone’s data might override their right to be forgotten. Here are the reasons cited in the GDPR that trump the right to erasure:
- (…)
- The data is being used to perform a task that is being carried out in the public interest or when exercising an organization’s official authority.
- (…)
- The data represents important information that serves the public interest, scientific research, historical research, or statistical purposes and where erasure of the data would likely to impair or halt progress towards the achievement that was the goal of the processing.
Enabling private communication is in the public interest. The only information that will be stored in the ledger in relation to the username are cryptographic public keys, so it’s not like anything personal (e.g., email addresses or legal names) will be included.
However, we still need to be extremely up-front about this to ensure EU citizens are aware of the trade-off we’re making.
Account Recovery
In the event that a user loses access to all of their secret keys and wants to burn down the old account, they may want a way to start over with another fresh self-signed AddKey.
However, the existing policies I wrote above would make this challenging:
- Since every subsequent AddKey must be signed by an incumbent key, if you don’t have access to these secret keys, you’re locked out.
- Since RevokeKey requires that one trusted keypair remains in the set for normal operations, you can’t just burn the set down to zero even while you still have access to the secret keys.
There is an easy way out of this mess: create a new verb, e.g. BurnDown, that an instance can issue to reset all signing keys for a given identity.

The use of BurnDown should be a rare, exceptional event that makes a lot of noise:
- All existing E2EE sessions must break, loudly.
- All other participants must be alerted to the change, through the client software.
- Witnesses and watchdog nodes must take note of this change.
This comes with some trade-offs. Namely: any account recovery mechanism is a backdoor, and giving the instance operators the capability of issuing BurnDown messages is a risk to their users.

Therefore, users who trust their own security posture and wish to opt out of this recovery feature should also be able to issue a Fireproof message at any point in the process, which permanently and irrevocably prevents any BurnDown from being accepted on their current instance.
If users opt out of recovery and then lose their signing keys, they’re locked out and need to start over with a new Fediverse identity. On the flipside, their instance operator cannot issue a BurnDown for them, so users have to extend less trust to their operator.
Notice
This is just a rough sketch of my initial ideas, going into this project. It is not comprehensive, nor complete.
There are probably big gaps that need to be filled in, esp. on the ActivityPub side of things. (I’m not as worried about the cryptography side of things.)
How Will This Be Used for E2EE Direct Messaging?
I anticipate that a small pool of Directory Servers will suffice, since only public keys and identities are being stored.
Additional changes beyond just the existence of Directory Servers will need to be made to facilitate private messaging. Some of those changes include:
- Some endpoint for users to know which Directory Servers a given ActivityPub instance federates with (if any).
- Some mechanism for users to asynchronously exchange Signed Pre-Key bundles for initiating contact. (One for users to publish new bundles, another for users to retrieve a bundle.)
- These will be Ed25519-signed payloads containing an ephemeral X25519 public key.
This is all outside the scope of the proposal I’m sketching out here today, but it’s worth knowing that I’m aware of the implementation complexity.
The important thing is: I (soatok@furry.engineer) should be able to query pawb.fun, find the Directory Server(s) they federate with, and then query that Directory Server for Crashdoom@pawb.fun and get his currently trusted Ed25519 public keys.
From there, I can query pawb.fun for a SignedPreKey bundle, which will have been signed by one of those public keys.
And then we can return to the “easy” pile.
Development Plan
Okay, so that was a lot of detail, and yet not enough detail, depending on who’s reading this blog post.
What I wrote here today is a very rough sketch. The devil is always in the details, especially with cryptography.
Goals and Non-Goals
We want Fediverse users to be able to publish a public key that is bound to their identity, which anyone else on the Internet can fetch and then use for various purposes.
We want to leverage the existing work into key transparency by the cryptography community.
We don’t want to focus on algorithm agility or protocol compatibility.
We don’t want to involve any government offices in the process. We don’t care about “real” identities, nor about codifying falsehoods about names.
We don’t want any X.509 or Web-of-Trust machinery involved in the process.
Tasks
The first thing we would need to do is write a formal specification for a Directory Server (whose job is only to vend Public Keys in an auditable, transparent manner).
Next, we need to actually build a reference implementation of this server, test it thoroughly, and then have security experts pound at the implementation for a while. Any security issues that can be mitigated by design will require a specification update.
We will NOT punt these down to implementors to be responsible for, unless we cannot avoid doing so.
Once these steps are done, we can start rolling the Directory Servers out. At this point, we can develop client-side libraries in various programming languages to make it easy for developers to adopt.
My continued work on the E2EE specification for the Fediverse can begin after we have an implementation of the Directory Server component ready to go.
Timeline
I have a very demanding couple of months ahead of me, professionally, so I don’t yet know when I can commit to starting the Fediverse Directory Server specification work.
Strictly speaking, it’s vaguely possible to get buy-in from work to focus on this project as part of my day-to-day responsibilities, since it has immediate and lasting value to the Internet. However, I don’t want to propose it because that would be crossing the professional-personal streams in a way I’m not really comfortable with.
The last thing I need is angry Internet trolls harassing my coworkers to try to get under my fur, y’know?
If there is enough interest from the broader Fediverse community, I’m also happy to delegate this work to anyone interested.
Once the work can begin, I don’t anticipate it will take more than a week for me to write a specification that other crypto nerds will take seriously.
I am confident in this because most of the cryptography will be constrained to hash functions and signatures, plus preventing canonicalization and cross-protocol attacks.
Y’know, the sort of thing I write about on my furry blog for fun!
Building a reference implementation will likely take a bit longer; if, for no other reason, than I believe it would be best to write it in Go (which has the strongest SigSum support, as of this writing).
This is a lot of words to say, as far as timelines go:
How to Get Involved
Regardless of whether my overall E2EE proposal gets adopted, the Directory Server component is something that should be universally useful to the Fediverse and to software developers around the world.
If you are interested in participating in any technical capacity, I have just created a Signal Group for discussing and coordinating efforts.
All of these efforts will also be coordinated on the fedi-e2ee GitHub organization.
The public key directory server’s specification will eventually exist in this GitHub repository.
Can I Contribute Non-Technically?
Yes, absolutely. In the immediate future, once it kicks off, the work is going to be technology-oriented.
However, we may need people with non-technical skills at some point, so feel free to dive in whenever you feel comfortable.
What About Financially?
If you really have money burning a hole in your pocket and want to toss a coin my way, I do have a Ko-Fi. Do not feel pressured at all to do so, however.
Because I only use Ko-Fi as a tip jar, rather than as a business, I’m not specifically tracking which transaction is tied to which project, so I can’t make any specific promises about how any of the money sent my way will be allocated.
What I will promise, however, is that any icons/logos/etc. created for this work will be done by an artist and they will be adequately compensated for their work. I will not use large-scale computing (a.k.a., “Generative AI”) for anything.
Closing Thoughts
What I’ve sketched here is much simpler (and more ActivityPub-centric) than the collaboration I was originally planning.
Thanks for being patient while I tried, in vain, to make that work.
As of today, I no longer think we need to wait for them. We can build this ourselves, for each other.
https://soatok.blog/2024/06/06/towards-federated-key-transparency/
#cryptography #endToEndEncryption #fediverse #KeyTransparency #Mastodon #MerkleTrees #PublicKeys
Update (2024-06-06): There is an update on this project.

As Twitter’s new management continues to nosedive the platform directly into the ground, many people are migrating to what seem like drop-in alternatives; i.e. Cohost and Mastodon. Some are even considering new platforms that none of us have heard of before (one is called “Hive”).
Needless to say, these are somewhat chaotic times.
One topic that has come up several times in the past few days, to the astonishment of many new Mastodon users, is that Direct Messages between users aren’t end-to-end encrypted.
And while that fact makes Mastodon DMs no less safe than Twitter DMs have been this whole time, there is clearly a lot of value and demand in deploying end-to-end encryption for ActivityPub (the protocol that Mastodon and other Fediverse software uses to communicate).
However, given that Melon Husk apparently wants to hurriedly ship end-to-end encryption (E2EE) in Twitter, in some vain attempt to compete with Signal, I took it upon myself to kickstart the E2EE effort for the Fediverse.
https://twitter.com/elonmusk/status/1519469891455234048
So I’d like to share my thoughts about E2EE, how to design such a system from the ground up, and why the direction Twitter is heading looks to be security theater rather than serious cryptographic engineering.
If you’re not interested in those things, but are interested in what I’m proposing for the Fediverse, head on over to the GitHub repository hosting my work-in-progress proposal draft as I continue to develop it.
How to Quickly Build E2EE
If you were feeling particularly cavalier about your E2EE design, you could just have clients generate public keys, dump them through a server you control, pass them between users, and encrypt everything client-side. Over and done. Check that box.
Every public key would be ephemeral and implicitly trusted, and the threat model would mostly be, “I don’t want to deal with law enforcement data requests.”
Hell, I’ve previously written an incremental blog post to teach developers about E2EE that begins with this sort of design. Encrypt first, ratchet second, manage trust relationships on public keys last.
If you’re catering to a slightly tech-savvy audience, you might throw in SHA256(pk1 + pk2) -> hex2dec() and call it a fingerprint / safety number / “conversation key” and not think further about this problem.
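That naive “fingerprint” pattern can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration of the anti-pattern just described, not a recommendation; the key values and the sorting step are assumptions made for the example:

```python
import hashlib

def naive_fingerprint(pk1: bytes, pk2: bytes) -> str:
    """The naive 'safety number' pattern: hash the two public keys
    together and render the digest as decimal digits. Sorting makes
    the result symmetric, so both parties compute the same value."""
    a, b = sorted([pk1, pk2])
    digest = hashlib.sha256(a + b).digest()
    # The hex2dec() step: interpret the digest as an integer in base 10
    return str(int.from_bytes(digest, "big"))

# Two placeholder 32-byte "public keys" (not real curve points)
alice_pk = bytes(range(32))
bob_pk = bytes(range(32, 64))
assert naive_fingerprint(alice_pk, bob_pk) == naive_fingerprint(bob_pk, alice_pk)
```

Note that simply concatenating variable-length inputs without length prefixes also invites canonicalization ambiguity, which is one of several reasons this pattern does not survive serious scrutiny.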
Look, technical users can verify out-of-band that they’re not being machine-in-the-middle attacked by our service.
An absolute fool who thinks most people will ever do this
From what I’ve gathered, this appears to be the direction that Twitter is going.
https://twitter.com/wongmjane/status/1592831263182028800
Now, if you’re building E2EE into a small hobby app that you developed for fun (say: a World of Warcraft addon for erotic roleplay chat), this is probably good enough.
If you’re building a private messaging feature that is intended to “superset Signal” for hundreds of millions of people, this is woefully inadequate.
https://twitter.com/elonmusk/status/1590426255018848256
Art: LvJ
If this is, indeed, the direction Musk is pushing what’s left of Twitter’s engineering staff, here is a brief list of problems with what they’re doing.
- Twitter Web. How do you access your E2EE DMs after opening Twitter in your web browser on a desktop computer?
- If you can, how do you know twitter.com isn’t including malicious JavaScript to snarf up your secret keys on behalf of law enforcement or a nation state with a poor human rights record?
- If you can, how are secret keys managed across devices?
- If you use a password to derive a secret key, how do you prevent weak, guessable, or reused passwords from weakening the security of the users’ keys?
- If you cannot, how do users decide which is their primary device? What if that device gets lost, stolen, or damaged?
- Authenticity. How do you reason about the person you’re talking with?
- Forward Secrecy. If your secret key is compromised today, can you recover from this situation? How will your conversation participants reason about your new Conversation Key?
- Multi-Party E2EE. If a user wants to have a three-way E2EE DM with the other members of their long-distance polycule, does Twitter enable that?
- How are media files encrypted in a group setting? If you fuck this up, you end up like Threema.
- Is your group key agreement protocol vulnerable to insider attacks?
- Cryptography Implementations.
- What does the KEM look like? If you’re using ECC, which curve? Is a common library being used in all devices?
- How are you deriving keys? Are you just using the result of an elliptic curve (scalar x point) multiplication directly without hashing first?
- Independent Third-Party Review.
- Who is reviewing your protocol designs?
- Who is reviewing your cryptographic primitives?
- Who is reviewing the code that interacts with E2EE?
- Is there even a penetration test before the feature launches?
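To illustrate the key-derivation question above: the raw output of an elliptic curve scalar multiplication should never be used as a key directly; it should be run through a KDF first. Here is a minimal, standard-library-only HKDF (RFC 5869) sketch; the shared-secret bytes and the info label are placeholder assumptions, not anything Twitter actually does:

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal RFC 5869 HKDF (extract-then-expand) over SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:  # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder for the raw output of an X25519 scalar multiplication
raw_ecdh_output = b"\x07" * 32
# Derive a session key instead of using the raw point directly:
session_key = hkdf_sha256(raw_ecdh_output, salt=b"\x00" * 32,
                          info=b"example.chat.v1 session key")
assert len(session_key) == 32
```

The `info` parameter doubles as a domain-separation label, so keys derived for different purposes from the same shared secret are unrelated.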
As more details about Twitter’s approach to E2EE DMs come out, I’m sure the above list will be expanded with even more questions and concerns.
My hunch is that they’ll reuse liblithium (which uses Curve25519 and Gimli) for Twitter DMs, since the only expert I’m aware of in Musk’s employ is the engineer that developed that library for Tesla Motors. Whether they’ll port it to JavaScript or just compile to WebAssembly is hard to say.
How To Safely Build E2EE
You first need to decompose the E2EE problem into five separate but interconnected problems.
- Client-Side Secret Key Management.
- Multi-device support
- Protect the secret key from being pilfered (i.e. by in-browser JavaScript delivered from the server)
- Public Key Infrastructure and Trust Models.
- TOFU (the SSH model)
- X.509 Certificate Authorities
- Certificate/Key/etc. Transparency
- SigStore
- PGP’s Web Of Trust
- Key Agreement.
- While this is important for 1:1 conversations, it gets combinatorially complex when you start supporting group conversations.
- On-the-Wire Encryption.
- Direct Messages
- Media Attachments
- Abuse-resistance (i.e. message franking for abuse reporting)
- The Construction of the Previous Four.
- The vulnerability of most cryptographic protocols exists in the joinery between the pieces, not the pieces themselves. For example, Matrix.
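As a taste of the abuse-resistance item above, here is a toy sketch of commitment-based message franking, assuming HMAC-SHA256 as the commitment. Real designs (such as committing AEAD schemes) are considerably more involved; all of the names here are hypothetical:

```python
import hashlib
import hmac
import os

def frank(message: bytes) -> tuple[bytes, bytes]:
    """Compute a franking tag: a commitment to the plaintext that the
    server can store alongside the ciphertext without learning it.
    The franking key travels to the recipient inside the ciphertext."""
    franking_key = os.urandom(32)
    tag = hmac.new(franking_key, message, hashlib.sha256).digest()
    return franking_key, tag

def verify_report(message: bytes, franking_key: bytes, tag: bytes) -> bool:
    """On an abuse report, the recipient reveals (message, franking_key);
    the server checks them against the tag it stored at delivery time."""
    expected = hmac.new(franking_key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key, tag = frank(b"abusive message")
assert verify_report(b"abusive message", key, tag)
assert not verify_report(b"forged message", key, tag)
```

The point of the scheme is that the recipient can prove to the server what was actually sent, without the server being able to read messages that were never reported.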
This might not be obvious to someone who isn’t a cryptography engineer, but each of those five problems is still really hard.
To wit: The latest IETF RFC draft for Message Layer Security, which tackles the Key Agreement problem above, clocks in at 137 pages.
Additionally, the order I specified these problems matters; it represents my opinion of which problem is relatively harder than the others.
When Twitter’s CISO, Lea Kissner, resigned, they lost a cryptography expert who was keenly aware of the relative difficulty of the first problem.
https://twitter.com/LeaKissner/status/1592937764684980224
You may also notice the order largely mirrors my previous guide on the subject, in reverse. This is because, when teaching a subject, you start with the simplest and most familiar component. When you’re solving problems, you generally want the opposite: Solve the hardest problems first, then work towards the easier ones.
This is precisely what I’m doing with my E2EE proposal for the Fediverse.
The Journey of a Thousand Miles Begins With A First Step
Before you write any code, you need specifications.
Before you write any specifications, you need a threat model.
Before you write any threat models, you need both a clear mental model of the system you’re working with and how the pieces interact, and a list of security goals you want to achieve.
Less obviously, you need a specific list of non-goals for your design: Properties that you will not prioritize. A lot of security engineering involves trade-offs. For example: elliptic curve choice for digital signatures is largely a trade-off between speed, theoretical security, and real-world implementation security.
If you do not clearly specify your non-goals, they still exist implicitly. However, you may find yourself contradicting them as you change your mind over the course of development.
Being wishy-washy about your security goals is a good way to compromise the security of your overall design.
In my Mastodon E2EE proposal document, I have a section called Design Tenets, which states the priorities used to make trade-off decisions. I chose Usability as the highest priority, because of AviD’s Rule of Usability.
Security at the expense of usability comes at the expense of security.
Avi Douglen, Security StackExchange
Underneath Tenets, I wrote Anti-Tenets. These are things I explicitly and emphatically do not want to prioritize. Interoperability with any incumbent designs (OpenPGP, Matrix, etc.) is the most important anti-tenet when it comes to making decisions. If our end-state happens to interop with someone else’s design, cool. I’m not striving for it, though!
Finally, this section concludes with a more formal list of Security Goals for the whole project.
Art: LvJ
Every component (from the above list of five) in my design will have an additional dedicated Security Goals section and Threat Model. For example: Client-Side Secret Key Management.
You will then need to tackle each component independently. The threat model for secret-key management is probably the trickiest. The actual encryption of plaintext messages and media attachments is comparatively simple.
Finally, once all of the pieces are laid out, you have the monumental (dare I say, mammoth) task of stitching them together into a coherent, meaningful design.
If you did your job well at the outset, and correctly understand the architecture of the distributed system you’re working with, this will mostly be straightforward.
Making Progress
At every step of the way, you do need to stop and ask yourself, “If I was an absolute chaos gremlin, how could I fuck with this piece of my design?” The more pieces your design has, the longer the list of ways to attack it will grow.
It’s also helpful to occasionally consider formal methods and security proofs. This can have surprising implications for how you use some algorithms.
You should also be familiar enough with the cryptographic primitives you’re working with before you begin such a journey, because even once you’ve solved the key management story (problems 1, 2 and 3 from the above list of 5), cryptographic expertise is still necessary.
- If you’re feeding data into a hash function, you should also be thinking about domain separation. More information.
- If you’re feeding data into a MAC or signature algorithm, you should also be thinking about canonicalization attacks. More information.
- If you’re encrypting data, you should be thinking about multi-key attacks and confused deputy attacks. Also, the cryptographic doom principle if you’re not using IND-CCA3 algorithms.
- At a higher-level, you should proactively defend against algorithm confusion attacks.
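The first two items above (domain separation and canonicalization) can be addressed with the same habit: tag every hash with a context string and length-prefix every field before hashing. A minimal sketch, with made-up tag names for illustration:

```python
import hashlib

def canonical_hash(domain: bytes, *fields: bytes) -> bytes:
    """Hash with a domain-separation prefix and length-prefixed fields.
    The domain tag keeps hashes from one context from being replayed in
    another; length prefixes prevent canonicalization attacks, where
    different field splits produce the same concatenated input."""
    h = hashlib.sha256()
    h.update(len(domain).to_bytes(8, "big") + domain)
    for field in fields:
        h.update(len(field).to_bytes(8, "big") + field)
    return h.digest()

# Without length prefixes, ("ab", "c") and ("a", "bc") would collide:
assert canonical_hash(b"demo.v1", b"ab", b"c") != canonical_hash(b"demo.v1", b"a", b"bc")
# The same fields under a different domain tag yield an unrelated hash:
assert canonical_hash(b"demo.v1", b"ab") != canonical_hash(b"demo.v2", b"ab")
```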
How Do You Measure Success?
It’s tempting to call the project “done” once you’ve completed your specifications and built a prototype, and maybe even published a formal proof of your design, but you should first collect data on every important metric:
- How easy is it to use your solution?
- How hard is it to misuse your solution?
- How easy is it to attack your solution? Which attackers have the highest advantage?
- How stable is your solution?
- How performant is your solution? Are the slow pieces the deliberate result of a trade-off? How do you know the balance was struck correctly?
Where We Stand Today
I’ve only begun writing my proposal, and I don’t expect it to be truly ready for cryptographers or security experts to review until early 2023.
However, my clearly specified tenets and anti-tenets were already useful in discussing my proposal on the Fediverse.
@soatok @fasterthanlime Should probably embed the algo used for encryption in the data used for storing the encrypted blob, to support multiples and future changes.
@fabienpenso@hachyderm.io proposes in-band protocol negotiation instead of versioned protocols
The main things I wanted to share today are:
- The direction Twitter appears to be heading with their E2EE work, and why I think it’s a flawed approach
- Designing E2EE requires a great deal of time, care, and expertise; getting to market quicker at the expense of a clear and careful design is almost never the right call
Mastodon? ActivityPub? Fediverse? OMGWTFBBQ!
In case anyone is confused about Mastodon vs ActivityPub vs Fediverse lingo:
The end goal of my proposal is that I want to be able to send DMs to queer furries that use Mastodon such that only my recipient can read them.
Achieving this end goal almost exclusively requires building for ActivityPub broadly, not Mastodon specifically.
However, I only want to be responsible for delivering this design into the software I use, not for every single possible platform that uses ActivityPub, nor all the programming languages they’re written in.
I am going to be aggressive about preventing scope creep, since I’m doing all this work for free. (I do have a Ko-Fi, but I won’t link to it from here. Send your donations to the people managing the Mastodon instance that hosts your account instead.)
My hope is that the design documents and technical specifications become clear enough that anyone can securely implement end-to-end encryption for the Fediverse–even if special attention needs to be given to the language-specific cryptographic libraries that you end up using.
Art: LvJ
Why Should We Trust You to Design E2EE?
This sort of question comes up inevitably, so I’d like to tackle it preemptively.
My answer to every question that begins with, “Why should I trust you” is the same: You shouldn’t.
There are certainly cryptography and cybersecurity experts that you will trust more than me. Ask them for their expert opinions of what I’m designing instead of blanketly trusting someone you don’t know.
I’m not interested in revealing my legal name, or my background with cryptography and computer security. Credentials shouldn’t matter here.
If my design is good, you should be able to trust it because it’s good, not because of who wrote it.
If my design is bad, then you should trust whoever proposes a better design instead. Part of why I’m developing it in the open is so that it may be forked by smarter engineers.
Knowing who I am, or what I’ve worked on before, shouldn’t enter your trust calculus at all. I’m a gay furry that works in the technology industry and this is what I’m proposing. Take it or leave it.
Why Not Simply Rubber-Stamp Matrix Instead?
(This section was added on 2022-11-29.)
There’s a temptation, most often found in the sort of person that comments on the /r/privacy subreddit, to ask why even do all of this work in the first place when Matrix already exists?
The answer is simple: I do not trust Megolm, the protocol designed for Matrix.
Megolm has benefited from amateur review for four years. Non-cryptographers will confuse this observation with the proposition that Matrix has benefited from peer review for four years. Those are two different propositions.
In fact, the first time someone with cryptography expertise bothered to look at Matrix for more than a glance, they found critical vulnerabilities in its design. These are the kinds of vulnerabilities that are not easily mitigated, and should be kept in mind when designing a new protocol.
You don’t have to take my word for it. Listen to the Security, Cryptography, Whatever podcast episode if you want cryptographic security experts’ takes on Matrix and these attacks.
From one of the authors of the attack paper:
So they kind of, after we disclosed to them, they shared with us their timeline. It’s not fixed yet. It’s a, it’s a bigger change because they need to change the protocol. But they always said like, Okay, fair enough, they’re gonna change it. And they also kind of announced a few days after kind of the public disclosure based on the public reaction that they should prioritize fixing that. So it seems kind of in the near future, I don’t have the timeline in front of me right now. They’re going to fix that in the sense of like the— because there’s, notions of admins and so on. So like, um, so authenticating such group membership requests is not something that is kind of completely outside of, kind of like the spec. They just kind of need to implement the appropriate authentication and cryptography.
Martin Albrecht, SCW podcast
From one of the podcast hosts:
I guess we can at the very least tell anyone who’s going forward going to try that, that like, yes indeed. You should have formal models and you should have proofs. And so there’s this, one of the reactions to kind of the kind of attacks that we presented and also to prior previous work where we kind of like broken some cryptographic protocols is then to say like, “Well crypto’s hard”, and “don’t roll your own crypto.” But in a way the thing is like, you know, we need some people to roll their own crypto because that’s how we have crypto. Someone needs to roll it. But we have developed techniques, we have developed formalisms, we have developed methods for making sure it doesn’t have to be hard, it’s not, it’s not a dark art kind of that only kind of a few, a select few can master, but it’s, you know, it’s a science and you can learn it. So, but you need to then indeed employ a cryptographer in kind of like forming, modeling your protocol and whenever you make changes, then, you know, they need to look over this and say like, Yes, my proof still goes through. Um, so like that is how you do this. And then, then true engineering is still hard and it will remain hard and you know, any science is hard, but then at least you have some confidence in what you’re doing. You might still then kind of on the space and say like, you know, the attack surface is too large and I’m not gonna to have an encrypted backup. Right. That’s then the problem of a different hard science, social science. Right. But then just use the techniques that we have, the methods that we have to establish what we need.
Thomas Ptacek, SCW podcast
It’s tempting to listen to these experts and say, “OK, you should use libsignal instead.”
But libsignal isn’t designed for federation and didn’t prioritize group messaging. The UX for Signal is like an IM application between two parties. It’s a replacement for SMS.
It’s tempting to say, “Okay, but you should use MLS then; never roll your own,” but MLS doesn’t answer the group membership issue that plagued Matrix. It punts on these implementation details.
Even if I use an incumbent protocol that privacy nerds think is good, I’ll still have to stitch it together in a novel manner. There is no getting around this.
Maybe wait until I’ve finished writing the specifications for my proposal before telling me I shouldn’t propose anything.
Credit for art used in header: LvJ, Harubaki
https://soatok.blog/2022/11/22/towards-end-to-end-encryption-for-direct-messages-in-the-fediverse/
If you’re new to reading this blog, you might not already be aware of my efforts to develop end-to-end encryption for ActivityPub-based software. It’s worth being aware of before you continue to read this blog post.
To be very, very clear, this is work I’m doing independent of the W3C or any other standards organization and/or funding source (and they have their own ideas about how to approach it).
Really, I’m doing my own thing and releasing my designs under a public domain-equivalent license so anyone (including the W3C grant awardees) can pick it up and use it, if they see fit.
But the work I’m doing has no official standing and is not representative of anyone (except maybe a lot of other furries interested in technology). They have, emphatically, never endorsed anything I’m doing. I have not talked with any of them about my ideas, nor has my name come up in any of their meeting notes.
My background is in applied cryptography and software security assessments, so I have strong opinions about how such software should be developed.
I’m being very up-front about this because I don’t want anyone to mistake my ideas for anything “official”.
Why spend your time on that?
My end goal is pretty straightforward.
Before Musk took it over, Twitter was wonderful for queer people. I’ve even heard it described as the most successful dating platform for the LGBTQIA+ community.
These days, it’s full of Nazis and people who think the ideal version of “free speech” means not being allowed to say the word “cisgender.” But I repeat myself.
The typical threat model for Twitter was: You have to trust the person you’re talking with, and the Twitter corporation, to keep your conversations (or nudes, if we’re being frank about it) private.
With the Fediverse, things are a little more complicated. Instance operators also have access to the plaintext versions of any Direct Messages between you and other participants.
And maybe you trust your instance operator… but do you trust your friends’? And do they trust yours?
If implemented securely, end-to-end encryption saves you from having to care about this injection of additional threat actors to consider.
If not implemented securely, it’s little more than security theater and should be ridiculed loudly.
So it’s natural and obvious for a person with my particular interests and skills to want to solve this problem.
Technological Decisions
When I started this project, I separated the end goal into 4 separate components:
- Client-side secret key management.
- Federated public-key infrastructure.
- Shared key agreement for group messaging.
- The actual bulk encryption techniques.
A lot of hobbyist projects over-index on the fourth component, rather than the actual hard problems. This is why so many doomed projects start with PGP, or implement weird “cipher cascades” to hedge against AES getting broken.
In reality, every component matters for the security of the whole system, but the bulk encryption is boring. It’s the well-tread path of any cryptosystem. The significantly harder parts are key management.
Political Decisions
Let’s not mince words: How you implement key management is inherently a political decision.
If that sounds counter-intuitive, meditate on this bit of wisdom for a while:
Repeat after me: all technical problems of sufficient scope or impact are actually political problems first.
Many projects, when confronted with the complexity of key management, are perfectly happy with “just write private keys to disk” or “put blind trust in AWS KMS.”
Or, more directly: “YOLO.”
With my Fediverse E2EE project, I wanted to minimize the amount of trust you have to place in others. (Especially, minimize the trust needed in Soatok!)
How Decisions Flow
Client-side secrets are the most visible area of risk to end users: backing up and managing their own credentials, recovering from failure modes, the Mud Puddle test, and so on.
Once each participant has secret keys managed (1), they can provide public keys to each other.
Public-key infrastructure (2) is how you decide trust relationships between parties. We’re operating in a federated environment, and want to minimize the amount of unchecked “authority” anyone has, so that complicates matters. But, if it wasn’t challenging, it would already be solved.
Once you’ve figured out a trust mechanism to tie a public key to an identity, you can try to agree on a shared symmetric key securely, even over an untrusted channel.
Key agreement for group messaging (3) is how you decide which shared key to use, and when, and who has access to this key and for how long.
And from there, you can actually encrypt shit (4).
It doesn’t really matter how much you boil the ocean on mitigating hypothetical weaknesses in AES if an adversary can muck with your key management.
Thus, it should hopefully be reasonable to divide the work up in this fashion.
But there is a fifth component; one that I am not qualified to comment on:
User experience.
The final deliverable for my participation in this project will be software libraries (and any necessary patches to server software) to facilitate secure end-to-end encryption between Fediverse users.
As for what that experience looks like? How it’s presented visually? What accessibility features are used, and how? How elements are organized and in what order they are displayed? Any quality-of-life design decisions that delight users and avoid dark patterns?
Yeah, sorry, I’m totally out of my depth here. That’s not my domain.
I will do my damnedest to not make security decisions that are inherently onerous towards making usable software.
(After all, security at the cost of usability comes at the cost of security.)
But I can’t promise that the experience will be totally seamless for everyone, all the time.
Lacking Ambition?
One of the things that’s been bothering me, as I work out the finer details of this end-to-end encryption project, is that it seems to lack ambition.
Sure, I can talk your ear off for hours about the ins and outs of implementing end-to-end encryption securely, but we already have end-to-end encryption apps. So many private messengers.
How does “you can now have encrypted DMs in Mastodon” help people who can already use Signal or WhatsApp? Why should the people who aren’t computer nerds care about it at all?
What’s actually new or exciting about this work?
And, honestly, the best answer I can come up with is that it’s the first step.
Tech Freedom and You
Before the Big Data and cloud computing crazes took the technology industry by storm (or any of the messes that followed), most software was designed to work offline. That is, without Internet access.
With the growing ubiquity of Internet access (and mobile networks), the Overton window shifted towards always-on devices, edge computing, and no longer owning anything. Instead, consumers rent licenses to software that a third party can revoke on a whim.
The Free Software movement, for all of the very pronounced personality quirks associated with it today, foresaw this problem long before the modern Internet existed. Technologists, lawyers, and activists spent thousands of person-years of effort on trying to protect end users’ rights from greedy monopolies.
Kyume
(I couldn’t not include this meme in this section.)
This isn’t a modern problem, by any stretch of the imagination.
Every year, our rights and digital freedoms are eroded by court decisions by corrupt judges, terrible legislature, and questionable leadership.
But the Electronic Frontier Foundation and its friends in other nations have been talking about this and fighting court battles since the 1990s.
Even if I somehow made some small innovation that benefited end users by allowing Fediverse users to message each other privately, that’s not really ambitious either.
From Sparks to Embers
As I was noodling over this, a friend of mine linked me to an article titled Rust Needs a Web Framework for Lazy Developers the other day.
It made me realize how much I miss the era when software was offline-first, even if it had online components. The past several years of Live Service Games have exhausted my tolerance more than anything else, but they’re not alone.
When I initially delineated my proposal into 4 components, my goal was to simplify the security analysis and make the threat models digestible.
But it occurred to me, recently, that by abstracting these components (especially the Federated Public Key Infrastructure design), a new era of cypherpunks and pirates could breathe new ambition into software projects that build atop the boring infrastructure I’m building.
Let’s Turn the Ambition Up To 11
Imagine peer-to-peer software that uses the Fediverse and/or onion routing technologies (similar to Tor) to establish peer-to-peer encrypted data tunnels between devices, with the Federated PKI as the source of truth for identity public keys so you always know you’re talking to the correct entity.
Now combine that with developer tools that make it easy for people to self-publish software (even if only through Tor Hidden Services), with an optional way to create a public portal (e.g., for a public-facing website).
You could even create a protocol for people with rack space and spare bandwidth to host said public portals, without biasing for a particular one.
This would allow technologists to build the tools for normal people to create an anti-corporate, decentralized network.
And you could do it without ever mentioning the word “blockchain” (though you may need to tolerate it if you want to prevent anti-porn groups like Exodus Cry from having any say in what we compute).
Finally, imagine that we build all of this in memory-safe languages.
Are you building this today?
In short: No, I’m not.
Ambitious ideas and cryptography should only intersect rarely. I’m focused on the cryptography.
Instead, I wanted to lay this rough sketch out there as a possibility that someone else–presumably more ambitious, charismatic, and/or resourceful–could easily pick up if they so choose.
More importantly, all of the hard parts of this would be solved problems by the time I finish with the end-to-end encryption project. (Most of them already exist, in fact!)
That’s what I meant above by “it’s the first step”.
Along the way to achieving my own goals, I’m building at least one useful building block. What the rest of the technology industry decides to do with it is up to the rest of us.
I can’t, and will not try, to do it alone.
There is a lot of potential for tech freedom that could benefit users beyond what they can get from the Fediverse today. I wanted to examine how some of these ideas could be useful for–
Rejected! What else you got?
Oh.
…
Okay, so y’know how a lot of video games (Undertale/Deltarune, Doki Doki Literature Club) try to make a highly immersive experience with many diegetic elements?
Let’s build an operating system, based on some flavor of Linux, that is in and of itself a game. People can write their own DLC by developing packages for that OS. The end deliverable will be a virtual machine, and in order to get it to work on Steam, we would install Docker or Kubernetes, but users will also be able to install it via VirtualBox.
Inevitably, someone will decide this OS is their new daily driver. Imagine the impact this would have on corporate IT the whole world over.
This is the worst idea in the history of bad ideas!
Oh, I can do worse. I can do so much worse.
I don’t know if I can top the various attempts to build a Message Authentication Code out of the insecure RC4 stream cipher, of course.
If you want ambition, you sacrifice wisdom.
If you want freedom, you sacrifice convenience.
If you want security, you sacrifice usability.
…
Or do you?
They Can’t All Be Winners
I have a lot of bad ideas, all the time. That’s the only reason I ever occasionally have moderately good ones.
My process of eliminating bad ideas is ruthless, and may cull some interesting or fun ones along the way. This is an unfortunate side-effect of being an effective security engineer.
I don’t actually think the ideas I’ve written above are that bad. I wrote them this way for comedic effect.
Rather, I’m just not sure they’re actually good, or worthwhile to invest time into.
Whether someone could build atop the work I’m doing to reclaim our Internet from the grip of massive technology corporations is, at best, difficult to classify.
I do not have the time, energy, or motivation to do the work already on my own plate and then explore these ideas fully.
Maybe someone reading this does?
If not, that’s cool. Ideas are allowed to just exist as idle curiosities. Not everything has to matter all the time.
The “ship a whole god damn OS as an indie game” idea could be fun though.
https://soatok.blog/2024/10/12/ambition-the-fediverse-and-technology-freedom/
#endToEndEncryption #fediverse #FreeSoftware #OnlinePrivacy #Society #SoftwareFreedom #TechFreedom #Technology
In 2022, I wrote about my plan to build end-to-end encryption for the Fediverse. The goals were simple:
- Provide secure encryption of message content and media attachments between Fediverse users, as a new type of Direct Message which is encrypted between participants.
- Do not pretend to be a Signal competitor.
The primary concern at the time was “honest but curious” Fediverse instance admins who might snoop on another user’s private conversations.
After I was finally happy with the client-side secret key management piece, I moved on to figuring out how to exchange public keys. And that’s where things got complicated, and work stalled for 2 years.
Art: AJ
I wrote a series of blog posts on this complication, what I’m doing about it, and some other cool stuff in the draft specification.
- Towards Federated Key Transparency introduced the Public Key Directory project
- Federated Key Transparency Project Update talked about some of the trade-offs I made in this design
- Not supporting ECDSA at all, since FIPS 186-5 supports Ed25519
- Adding an account recovery feature, which power users can opt out of, that allows instance admins to help a user recover from losing all their keys
- Building a Key Transparency system that can tolerate GDPR Right To Be Forgotten takedown requests without invalidating history
- Introducing Alacrity to Federated Cryptography discussed how I plan to ensure that independent third-party clients stay up-to-date or lose the ability to decrypt messages
Recently, NIST published the new Federal Information Processing Standards documents for three post-quantum cryptography algorithms:
- FIPS-203 (ML-KEM, formerly known as CRYSTALS-Kyber),
- FIPS-204 (ML-DSA, formerly known as CRYSTALS-Dilithium)
- FIPS-205 (SLH-DSA, formerly known as SPHINCS+)
The race is now on to implement and begin migrating the Internet to use post-quantum KEMs. (Post-quantum signatures are less urgent.) If you’re curious why, this CloudFlare blog post explains the situation quite well.
Since I’m proposing a new protocol and implementation at the dawn of the era of post-quantum cryptography, I’ve decided to migrate the asymmetric primitives used in my proposals towards post-quantum algorithms where it makes sense to do so.
Art: AJ
The rest of this blog post is going to talk about technical specifics and the decisions I intend to make in both projects, as well as some other topics I’ve been thinking about related to this work.
Which Algorithms, Where?
I’ll discuss these choices in detail, but for the impatient:
- Public Key Directory
- Still just Ed25519 for now
- End-to-End Encryption
- KEMs: X-Wing (Hybrid X25519 and ML-KEM-768)
- Signatures: Still just Ed25519 for now
Virtually all other uses of cryptography are symmetric-key or keyless (i.e., hash functions), so this isn’t a significant change to the design I have in mind.
Post-Quantum Algorithm Selection Criteria
While I am personally skeptical that we will see a practical, cryptography-relevant quantum computer in the next 30 years, due to various engineering challenges and a glacial pace of progress on solving them, post-quantum cryptography is still a damn good idea even if a quantum computer never emerges.
Post-Quantum Cryptography comes in two flavors:
- Key Encapsulation Mechanisms (KEMs), which I wrote about previously.
- Digital Signature Algorithms (DSAs).
Originally, my proposals were going to use Elliptic Curve Diffie-Hellman (ECDH) in order to establish a symmetric key over an untrusted channel. Unfortunately, ECDH falls apart in the wake of a crypto-relevant quantum computer. ECDH is the component that will be replaced by post-quantum KEMs.
Additionally, my proposals make heavy use of Edwards Curve Digital Signatures (EdDSA) over the edwards25519 elliptic curve group (thus, Ed25519). This could be replaced with a post-quantum DSA (e.g., ML-DSA) and function just the same, albeit with bandwidth and/or performance trade-offs.
But isn’t post-quantum cryptography somewhat new?
Lattice-based cryptography has been around almost as long as elliptic curve cryptography. One of the first designs, NTRU, was developed in 1996.
Meanwhile, ECDSA was published in 1992 by Dr. Scott Vanstone (although it was not made a standard until 1999). Lattice cryptography is pretty well-understood by experts.
However, before the post-quantum cryptography project, there hasn’t been a lot of incentive for attackers to study lattices (unless they wanted to muck with homomorphic encryption).
So, naturally, there is some risk of a cryptanalysis renaissance after the first post-quantum cryptography algorithms are widely deployed to the Internet.
However, this risk is mostly a concern for KEMs, due to the output of a KEM being the key used to encrypt sensitive data. Thus, when selecting KEMs for post-quantum security, I will choose a Hybrid construction.
Hybrid what?
We’re not talking folfs, sonny!
Hybrid isn’t just a thing that furries do with their fursonas. It’s also a term that comes up a lot in cryptography.
Unfortunately, it comes up a little too much.
I made this dumb meme with imgflip
When I say we use Hybrid constructions, what I really mean is we use a post-quantum KEM and a classical KEM (such as HPKE‘s DHKEM), then combine them securely using a KDF.
Post-quantum KEMs
For the post-quantum KEM, we only really have one choice: ML-KEM. But this choice is actually three choices: ML-KEM-512, ML-KEM-768, or ML-KEM-1024.
The security margin on ML-KEM-512 is a little tight, so most cryptographers I’ve talked with recommend ML-KEM-768 instead.
Meanwhile, the NSA wants the US government to use ML-KEM-1024 for everything.
How will you hybridize your post-quantum KEM?
Originally, I was looking to use DHKEM with X25519, as part of the HPKE specification. After switching to post-quantum cryptography, I would need to combine it with ML-KEM-768 in such a way that the whole shebang is secure if either component is secure.
But then, why reinvent the wheel here? X-Wing already does that, and has some nice binding properties that a naive combination might not.
So let’s use X-Wing for our KEM.
Notably, OpenMLS is already doing this in their next release.
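To illustrate what a hybrid combiner is doing, here’s a minimal sketch of the general idea: hash both shared secrets (plus the classical ciphertext and public key, for binding) under a domain separator, so the output stays secure if either component KEM holds. The label here is hypothetical; X-Wing’s actual combiner has its own fixed label and exact input ordering, so treat this as an illustration of the concept, not the spec.

```python
import hashlib

def combine_kem_secrets(ss_pq: bytes, ss_dh: bytes,
                        ct_dh: bytes, pk_dh: bytes) -> bytes:
    # Hybrid combiner sketch: bind both shared secrets together with
    # the classical ciphertext and public key, under a domain
    # separator, so the result is secure if EITHER component KEM holds.
    h = hashlib.sha3_256()
    h.update(b"example-hybrid-combiner-v1")  # hypothetical label
    for part in (ss_pq, ss_dh, ct_dh, pk_dh):
        h.update(len(part).to_bytes(4, "big"))  # length-prefix each input
        h.update(part)
    return h.digest()
```

The length prefixes prevent ambiguity between adjacent inputs; the binding inputs (`ct_dh`, `pk_dh`) are what distinguish a careful combiner from a naive concatenate-and-hash.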
Art: CMYKat
Post-quantum signatures
So our KEM choice seems pretty straightforward. What about post-quantum signatures?
Do we even need post-quantum signatures?
Well, the situation here is not nearly as straightforward as KEMs.
For starters, NIST chose to standardize two post-quantum digital signature algorithms (with a third coming later this year). They are as follows:
- ML-DSA (formerly CRYSTALS-Dilithium), that comes in three flavors:
- ML-DSA-44
- ML-DSA-65
- ML-DSA-87
- SLH-DSA (formerly SPHINCS+), that comes in 24 flavors
- FN-DSA (formerly FALCON), that comes in two flavors but may be excruciating to implement in constant-time (this one isn’t standardized yet)
Since we’re working at the application layer, we’re less worried about a few kilobytes of bandwidth than the networking or X.509 folks are. Relatively speaking, we care about security first, performance second, and message size last.
After all, people ship Electron, React Native, and NextJS apps that load megabytes of JavaScript code to print, “hello world,” and no one bats an eye. A few kilobytes in this context is easily digestible for us.
(As I said, this isn’t true for all layers of the stack. WebPKI in particular feels a lot of pain with large public keys and/or signatures.)
Eliminating post-quantum signature candidates
Performance considerations would eliminate SLH-DSA, which is otherwise the most conservative choice. Even with the fastest parameter set (SLH-DSA-128f), this family of algorithms is about 550x slower than Ed25519. (If we prioritize bandwidth instead, it becomes 8000x slower.)
Adapted from CloudFlare’s blog post on post-quantum cryptography.
Between the other two, FN-DSA is a tempting option. Although it’s difficult to implement in constant-time, it offers smaller public key and signature sizes.
However, FN-DSA is not standardized yet, and it’s only known to be safe on specific hardware architectures. (It might be safe on others, but that’s not proven yet.)
In order to allow Fediverse users to be secure on a wider range of hardware, this uncertainty limits our choice of post-quantum signature algorithms to some flavor of ML-DSA–whether stand-alone or in a hybrid construction.
Unlike KEMs, hybrid signature constructions may be problematic in subtle ways that I don’t want to deal with. So if we were to do anything, we would probably choose a pure post-quantum signature algorithm.
Against the Early Adoption of Post-Quantum Signatures
There isn’t an immediate benefit to adopting a post-quantum signature algorithm, as David Adrian explains:
The migration to post-quantum cryptography will be a long and difficult road, which is all the more reason to make sure we learn from past efforts, and take advantage of the fact the risk is not imminent. Specifically, we should avoid:
- Standardizing without real-world experimentation
- Standardizing solutions that match how things work currently, but have significant negative externalities (increased bandwidth usage and latency), instead of designing new things to mitigate the externalities
- Deploying algorithms pre-standardization in ways that can’t be easily rolled back
- Adding algorithms that are pre-standardization or have severe shortcomings to compliance frameworks
We are not in the middle of a post-quantum emergency, and nothing points to a surprise “Q-Day” within the next decade. We have time to do this right, and we have time for an iterative feedback loop between implementors, cryptographers, standards bodies, and policymakers.
The situation may change. It may become clear that quantum computers are coming in the next few years. If that happens, the risk calculus changes and we can try to shove post-quantum cryptography into our existing protocols as quickly as possible. Thankfully, that’s not where we are.
David Adrian, Lack of post-quantum security is not plaintext.
Furthermore, there isn’t currently any commitment from the Sigsum developers to adopt a post-quantum signature scheme in the immediate future. They hard-code Ed25519 for the current iteration of the specification.
The verdict on digital signature algorithms?
Given all of the above, I’m going to opt to simply not adopt post-quantum signatures until a later date.
Version 1 of our design will continue to use Ed25519, despite it not being secure after quantum computers emerge (“Q-Day”).
When the security industry begins to see warning signs of Q-Day being realistically within a decade, we will prioritize migrating to use post-quantum signature algorithms in a new version of our design.
Should something drastic happen that would force us to decide on a post-quantum algorithm today, we would choose ML-DSA-44. However, that’s unlikely for at least several years.
Remember, Store Now, Decrypt Later doesn’t really break signatures the way it would break public-key encryption.
Art: Harubaki
Miscellaneous Technical Matters
Okay, that’s enough about post-quantum for now. I worry that if I keep talking about key encapsulation, some of my regular readers will start a shitty garage band called My KEMical Romance before the end of the year.
Let’s talk about some other technical topics related to end-to-end encryption for the Fediverse!
Federated MLS
MLS was implicitly designed with the idea of having one central service for passing messages around. This makes sense if you’re building a product like Signal, WhatsApp, or Facebook Messenger.
It’s not so great for federated environments where your Delivery Service may, in fact, be more than one service (i.e., the Fediverse). An expired Internet Draft for Federated MLS talks about these challenges.
If we wanted to build atop MLS for group key agreement (like has been suggested before), we’d need to tackle this in a way that doesn’t cede control of MLS epochs to any server that gets compromised.
How to Make MLS Tolerate Federation
First, the Authentication Service component can be replaced by client-side protocols, where public keys are sourced from the Public Key Directory (PKD) services.
That is to say: from the PKD, you can fetch a valid list of Ed25519 public keys for each participant in the group.
When a group is created, the creator’s Ed25519 public key is known. For everyone they invite, the inviter’s software necessarily has to know the invitee’s Ed25519 public key in order to invite them.
In order for a group action to be performed, it must be signed by one of the public keys enrolled into the group list. Additionally, some actions may be limited by permissions attached at the time of the invite (or elevated by a more privileged user; which necessitates another group action).
By requiring a valid signature from an existing group member, we remove the capability of the Fediverse instance that’s hosting the discussion group to meddle with it in any way (unless, for some reason, the server is somehow also a participant that was invited).
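The authorization rule above can be sketched as a small check that runs client-side. This is a hypothetical illustration, not the draft specification’s actual data model: `verify(pk, msg, sig)` stands in for an Ed25519 verifier from a vetted cryptography library, and the field names are made up for the sketch.

```python
def authorize_group_action(action: dict, group: dict, verify) -> bool:
    # A group action is accepted only if it is signed by a key already
    # enrolled in the group list, AND the author's permissions allow
    # that action type. The hosting instance never appears in this
    # check, so it cannot meddle with the group.
    pk = group["members"].get(action["author"])
    if pk is None:
        return False  # author is not an enrolled member
    if not verify(pk, action["payload"], action["sig"]):
        return False  # bad signature; the instance cannot forge this
    return action["type"] in group["permissions"].get(action["author"], set())
```

Note that nothing in the check depends on which server relayed the action, which is the whole point: only enrolled members can mutate group state.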
But therein lies the other change we need to make: In many cases, groups will span multiple Fediverse servers, so groups shouldn’t be dependent on a single instance.
Spreading The Load Across Instances
Put simply, we need a consensus algorithm to determine which instance hosts messages. We could look to Raft as a starting point, but whatever we land on should be fair, fault-tolerant, and deterministic to all participants who can agree on the same symmetric keying material at some point in time.
To that end, I propose using an additional HKDF output from the Group Key Agreement protocol to select a “leader” for all instances involved in the group, weighted by the number of participants on each instance.
Then, every N messages (where N >= 1), a new leader is elected by the same deterministic protocol. This will be performed entirely client-side, and clients will choose N. I will refer to this as a sub-epoch, since it doesn’t coincide with a new MLS epoch.
Since the agreed-upon group key always ratchets forward when a group action occurs (i.e., whenever there’s a new epoch), getting another KDF output to elect the next leader is straightforward.
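To make the election concrete, here’s a minimal sketch under stated assumptions: every client holds the same group key, `hkdf_expand` is a bare-bones HKDF-Expand (RFC 5869), and the `"leader-election"` info label is hypothetical. A real design would need to pin down encoding, tie-breaking, and failure handling.

```python
import hashlib
import hmac

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    # Minimal HKDF-Expand (RFC 5869) over SHA-256.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def elect_leader(group_key: bytes, sub_epoch: int, instances: dict) -> str:
    # instances maps instance domain -> participant count. Every client
    # shares group_key, so everyone computes the same leader, weighted
    # by how many participants each instance hosts.
    seed = hkdf_expand(group_key,
                       b"leader-election" + sub_epoch.to_bytes(8, "big"))
    roll = int.from_bytes(seed, "big") % sum(instances.values())
    for domain in sorted(instances):  # canonical order for determinism
        roll -= instances[domain]
        if roll < 0:
            return domain
```

Because the seed changes each sub-epoch (and each epoch, since the group key ratchets), leadership rotates unpredictably to outsiders but deterministically for participants.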
This isn’t a fully fleshed out idea. Building consensus protocols that can handle real-world operational issues is heavily specialized work and there’s a high risk of falling to the illusion of safety until it’s too late. I will probably need help with this component.
That said, we aren’t building an anonymity network, so the cost of getting a detail wrong isn’t measurable in blood.
We aren’t really concerned with Sybil attacks. Winning the election just means you’re responsible for being a dumb pipe for ciphertext. Client software should trust the instance software as little as possible.
We also probably don’t need to worry about availability too much. Since we’re building atop ActivityPub, when a server goes down, the other instances can hold encrypted messages in the outbox for the host instance to pick up when it’s back online.
If that’s not satisfactory, we could also select both a primary and secondary leader for each epoch (and sub-epoch), to have built-in fail-over when more than one instance is involved in a group conversation.
If messages aren’t being delivered for an unacceptable period of time, client software can forcefully initiate a new leader election by expiring the current MLS epoch (i.e. by rotating their own public key and sending the relevant bundle to all other participants).
Art: Kyume
Those are just some thoughts. I plan to talk it over with people who have more expertise in the relevant systems.
And, as with the rest of this project, I will write a formal specification for this feature before I write a single line of production code.
Abuse Reporting
I could’ve sworn I talked about this already, but I can’t find it in any of my previous ramblings, so here’s as good a place as any.
The intent of end-to-end encryption is privacy, not secrecy.
What does this mean exactly? From the opening of Eric Hughes’ A Cypherpunk’s Manifesto:
Privacy is necessary for an open society in the electronic age. Privacy is not secrecy.
A private matter is something one doesn’t want the whole world to know, but a secret matter is something one doesn’t want anybody to know.
Privacy is the power to selectively reveal oneself to the world.
Eric Hughes (with whitespace and emphasis added)
Unrelated: This is one reason why I use “secret key” when discussing asymmetric cryptography, rather than “private key”. It also lends itself to sk and pk as abbreviations, whereas “private” and “public” both start with the letter P, which is annoying.
With this distinction in mind, abuse reporting is not inherently incompatible with end-to-end encryption or any other privacy technology.
In fact, it’s impossible to create useful social technology without the ability for people to mitigate abuse.
So, content warning: This is going to necessarily discuss some gross topics, albeit not in any significant detail. If you’d rather not read about them at all, feel free to skip this section.
Art: CMYKat
When thinking about the sorts of problems that call for an abuse reporting mechanism, you really need to consider the most extreme cases, such as someone joining group chats to spam unsuspecting users with unsolicited child sexual abuse material (CSAM), flashing imagery designed to trigger seizures, or graphic depictions of violence.
That’s gross and unfortunate, but the reality of the Internet.
However, end-to-end encryption also needs to prioritize privacy over appeasing lazy cops who would rather everyone’s devices include a mandatory little cop that watches all your conversations and snitches on you if you do anything that might be illegal, or against the interest of your government and/or corporate masters. You know the type of cop. They find privacy and encryption to be rather inconvenient. After all, why bother doing their jobs (i.e., actual detective work) when you can just criminalize end-to-end encryption and use dragnet surveillance instead?
Whatever we do, we will need to strike a balance between protecting users’ privacy (without any backdoors or privileged access for lazy cops) and keeping communities safe.
Thus, the following mechanisms must be in place:
- Groups must have the concept of an “admin” role, who can delete messages on behalf of all users and remove users from the group. (Signal currently doesn’t have this.)
- Users must be able to delete messages on their own device and block users that send abusive content. (The Fediverse already has this sort of mechanism, so we don’t need to be inventive here.)
- Users should have the ability to report individual messages to the instance moderators.
I’m going to focus on item 3, because that’s where the technically and legally thorny issues arise.
Keep in mind, this is just a core-dump of thoughts about this topic, and I’m not committing to anything right now.
Technical Issues With Abuse Reporting
First, the end-to-end encryption must be immune to Invisible Salamanders attacks. If it’s not, go back to the drawing board.
Every instance will need to have a moderator account, who can receive abuse reports from users. This can be a shared account for moderators, or a list of moderators maintained by the server.
When an abuse report is sent to the moderation team, what needs to happen is that the encryption keys for those specific messages are re-wrapped and sent to the moderators.
So long as you’re using a forward-secure ratcheting protocol, this doesn’t imply access to the encryption keys for other messages, so the information disclosed is limited to the messages that a participant in the group consents to disclosing. This preserves privacy for the rest of the group chat.
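A toy model makes the selective-disclosure property clearer. This is not the actual protocol, just a bare hash ratchet sketch: each message key is derived from the current chain key, then the chain key advances one-way, so handing moderators one message key reveals nothing about the others.

```python
import hashlib

def ratchet_step(chain_key: bytes):
    # Toy symmetric ratchet (NOT the real protocol): derive this
    # message's key, then advance the chain key one-way. Disclosing a
    # message key reveals nothing about other messages' keys.
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain_key = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain_key

def keys_for_report(chain_key: bytes, reported: set, total: int) -> dict:
    # Walk the ratchet and collect ONLY the reported messages' keys;
    # these are what would be re-wrapped to the moderators' keys.
    disclosed = {}
    for i in range(total):
        message_key, chain_key = ratchet_step(chain_key)
        if i in reported:
            disclosed[i] = message_key
    return disclosed
```

The reporter runs something like `keys_for_report` locally and re-encrypts only the selected keys to the moderators; the rest of the conversation stays sealed.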
When receiving a report, moderators should not only be able to see the reported messages’ contents (in the order that they were sent), but also how many messages were omitted from the transcript, to prevent a type of attack I colloquially refer to as “trolling through omission”. This old meme illustrates the concept nicely:
Trolling through omission.
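Rendering the omission counts is the easy part. Here’s a minimal sketch, assuming each message carries a per-group sequence number (an assumption, not something the draft spec specifies): the reporter discloses a subset, and moderators see the contents in sent order with the gaps made explicit.

```python
def moderator_transcript(reported):
    # reported: list of (sequence_number, plaintext) pairs the reporter
    # chose to disclose. Moderators see contents in sent order, plus a
    # count of the messages omitted between them, which makes selective
    # quoting ("trolling through omission") visible.
    lines, prev = [], None
    for seq, text in sorted(reported):
        if prev is not None and seq != prev + 1:
            lines.append(f"[{seq - prev - 1} message(s) omitted]")
        lines.append(text)
        prev = seq
    return lines
```

The sequence numbers themselves would need to be authenticated (e.g., covered by the message signature), or a malicious reporter could renumber messages to hide the gaps.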
And this all seems pretty straightforward, right? Let users protect themselves and report abuse in such a way that doesn’t invalidate the privacy of unrelated messages or give unfettered access to the group chats. “Did Captain Obvious write this section?”But things aren’t so clean when you consider the legal ramifications.
Potential Legal Issues With Abuse Reporting
Suppose Alice, Bob, and Troy start an encrypted group conversation. Alice is the group admin, and can delete messages or boot people from the chat.
One day, Troy decides to send illegal imagery (e.g., CSAM) to the group chat.
Bob, disgusted, immediately reports it to his instance moderator (Dave), as well as Troy’s instance moderator (Evelyn). Alice then deletes the messages for her and Bob, and kicks Troy from the chat.
Here’s where the legal questions come in.
If Dave and Evelyn are able to confirm that Troy did send CSAM to Alice and Bob, did Bob’s act of reporting the material to them count as an act of distribution (i.e., to Dave and/or Evelyn, who would not be able to decrypt the media otherwise)?
If they aren’t able to confirm the reports, does Alice’s erasure count as destruction of evidence (i.e., because they cannot be forwarded to law enforcement)?
Are Bob and Alice legally culpable for possession? What about Dave and Evelyn, whose servers are hosting the (albeit encrypted) material?
It’s not abundantly clear how the law will intersect with technology here, nor what specific technical mechanisms would need to be in place to protect Alice, Bob, Dave, and Evelyn from a particularly malicious user like Troy.
Obviously, I am not a lawyer. I have an understanding with my lawyer friends that I will not try to interpret law or write my own contracts if they don’t roll their own crypto.
That said, I do have some vague ideas for mitigating the risk.
Ideas For Risk Mitigation
To contend with this issue, one thing we could do is separate the abuse reporting feature from the “fetch and decrypt the attached media” feature, so that while instance moderators will be capable of fetching the reported abuse material, it doesn’t happen automatically.
When the “reason” attached to an abuse report signals CSAM in any capacity, the client software used by moderators could also wholesale block the download of said media.
Whether that would be sufficient to mitigate the legal matters raised previously, I can’t say.
And there’s still a lot of other legal uncertainty to figure out here.
- Do instance moderators actually have a duty to forward CSAM reports to law enforcement?
- If so, how should abuse forwarding be implemented?
- How do we train law enforcement personnel to receive and investigate these reports WITHOUT frivolously arresting the wrong people or seizing innocent Fediverse servers?
- How do we ensure instance admins are broadly trained to handle this?
- How do we deal with international law?
- How do we prevent scope creep?
- While there is public interest in minimizing the spread of CSAM, which is basically legally radioactive, I’m not interested in ever building a “snitch on women seeking reproductive health care in a state where abortion is illegal” capability.
- Does Section 230 matter for any of these questions?
We may not know the answers to these questions until the courts make specific decisions that establish relevant case law, or our governments pass legislation that clarifies everyone’s rights and responsibilities for such cases.
Until then, the best answer may simply be to do nothing.
That is to say, let admins delete messages for the whole group, let users delete messages they don’t want on their own hardware, and let admins receive abuse reports from their users… but don’t do anything further.
Okay, we should definitely require an explicit separate action to download and decrypt the media attached to a reported message, rather than have it be automatic, but that’s it.
What’s Next?
For the immediate future, I plan on continuing to develop the Federated Public Key Directory component until I’m happy with its design. Then, I will begin developing the reference implementations for both client and server software.
Once that’s in a good state, I will move on to finishing the E2EE specification. Then, I will begin building the client software and relevant server patches for Mastodon, and spinning up a testing instance for folks to play with.
Timeline-wise, I would expect most of this to happen in 2025.
I wish I could promise something sooner, but I’m not fond of moving fast and breaking things, and I do have a full time job unrelated to this project.
Hopefully, by the next time I pen an update for this project, we’ll be closer to launching. (And maybe I’ll have answers to some of the legal concerns surrounding abuse reporting, if we’re lucky.)
https://soatok.blog/2024/09/13/e2ee-for-the-fediverse-update-were-going-post-quantum/
#E2EE #endToEndEncryption #fediverse #FIPS #Mastodon #postQuantumCryptography
The people afraid to show their peers or bosses my technical writing because it also contains furry art are some of the dumbest cowards in technology.
Considering the recent events at ApeFest, a competitive level of stupidity is quite impressive.
To be clear, the exhibited stupidity in question is their tendency to project their own sexual connotations onto furry art–even if said art isn’t sexual in nature in any meaningful sense of the word.
But then again, poetry can be sexual, so who knows?
Scandalous furry,
Why are you glitching like that?
Haiku are lewd too!
Art: AJ
The cowardice comes in with the fear of their peers or bosses judging them for *checks notes* the content and presentation that I wrote, and not them.
Which (if you think about it for any significant length of time) implies that they’re generally eager to take credit for other people’s work, but their selfishness was thwarted by a cute cartoon dhole doing something totally innocent.
Even sillier, there’s a small contingent on technical forums that are “concerned” about the growing prevalence of queer and furry identities in technical spaces (archived).
Even some old school hackers conveniently forget that alt.fan.furry was a thing before the Internet.
As frustratingly incompetent as these hot takes are, they pale in comparison to, by far, the biggest source of bad opinions about the furry fandom.
Credit: Tirrelous
The call is coming from inside the house.
Like Cats and Dogs
Last month, I wrote a blog post about Aural Alliance, which caused a menace in the furry music space to accuse me of “bad journalism” for not verbally crucifying the label’s creator (a good friend of mine) for having a failed business venture in the past, or taking credit for donating to their cause early on.
Twitter DM conversation.
Everyone I’ve talked to that has dealt with this particular person before responded with, “Yeah, this is typical Cassidy behavior.”
To which one must wonder, “Since when am I a journalist?”
I’ve never called myself a journalist. I’m a blogger and I don’t pretend to be anything more than that. I especially would never besmirch the work of real journalists by comparing it with my musings.
At times, I also wear the security researcher hat, but you’ll only hear about it when I’m publishing a vulnerability.
This is a personal blog. I will neither be censored nor subject to compelled speech. I have no moral or professional obligations to “both sides” of what amounts to a nontroversy.
Nobody has ever paid me to write anything here, and I will never accept any compensation for my writing.
Sure, I contributed to covering Aural Alliance’s up-front infrastructure costs when it was just an idea in Finn’s head. I’m not going to apologize for supporting artists. The Furry Fandom wouldn’t exist without artists.
This kind of behavior isn’t an isolated incident, unfortunately. A handful of furries have rage-quit tech groups I’m in because they found out I generously tipped artists that were under-charging for their work.
It bewilders me every time someone reacts this way. Do you not know the community you’re in?
The most intelligible pushback I’ve seen over the years is, “Well if everyone raises their prices, low-income furries will be pushed out of the market!”
Setting aside, for a moment, that art is a luxury and not a need: that’s not actually true.
There are so many artists, and they’re so decentralized, that no coherent price coordination effort is even possible. It’s worse than herding cats. Some may raise their prices by $5, others by $500. If furries were organized enough to coordinate something like this, then we’d have a tough time explaining why there are still abusers in the fandom.
Also, it costs very little to learn to draw, yourself:
https://www.youtube.com/watch?v=jeoQx9hphBw
Oh, but I’m not done.
The demand for low-priced digital art incentivizes people to reach for theft enabled by large-scale computing (a.k.a. “AI” by its proponents).
A similar demand for cheap, high-quality fursuits (usually at the maker’s expense) will lead to a walmartization of the furry community.
If you listen to these hot takes long enough, you start to notice a pattern of short-sighted selfishness.
When you demand something of the furry community, and don’t think of the long-term consequences of your demands, you’re probably being an idiot. This is true even if it’s actually a good idea.
If me supporting artists somehow prices you out of commissioning your favorite artist, you still have other options: Learning to make your own, finding new artists, saving money, etc.
On the flipside, the artists you admire will suffer less due to money troubles. Fewer artists starving makes the world a more beautiful place.
Center of the Fediverse
If flame war and retoot count relieved desire
In the comment thread someone must have known
That the hottest takes truly leave us tired
‘Cause in the center of the fediverse
We are all alone
With apologies to Kamelot
If you’re on the Fediverse (e.g., Mastodon), and your instance uses a blocklist like TheBadSpace (TBS), you probably cannot see my posts on furry.engineer anymore.
This is because the people running TBS have erroneously decided that any criticism of its curators is anti-blackness.
If you want a biased but detailed (with receipts!) account of the conflicts that led up to furry.engineer‘s erroneous inclusion on their blocklist, Silver Eagle wrote about their experience with TBS, blocklist criticism, and receiving death threats from the friends of TBS curators.
(Spoiler: It was largely prompted by another predominantly LGBTQIA+ instance, tech.lgbt, being erroneously added to the same blocklist, which resulted in criticism of said blocklist curators.)
Be forewarned, though: Linking to Silver Eagle’s blog post was enough for TBS supporters to harass me and directly accuse me, personally, of anti-blackness, so don’t expect any degree of level-headed discussion from that crowd.
Art: CMYKat
What Can We Do About This?
If you cannot see my Fediverse posts anymore, and actually want to see them, message your instance moderators and suggest unsubscribing from TheBadSpace’s blocklist.
If they refuse, your only real recourse is to move to another instance. The great thing about the Fediverse is, you can just do that, and nobody can lock you in.
Personally, I plan on sticking with furry.engineer. I trust its moderators to not tolerate racist and/or fascist bullshit.
The baseless accusations of anti-blackness are, unsurprisingly, false.
Burnout Isn’t Inevitable
A few months ago, I quit a great job with an amazing team because the CEO decided that everyone has to return to working in the office, including people that were hired fully remote before the pandemic. This meant being forced to move more than 3,000 miles, or resigning. I’ve been told the legal term for such a move is “constructive dismissal.”
In hindsight, I was starting to burn out anyway, so leaving when I did was a great move for my mental health and life satisfaction.
Art: CMYKat
I’m an introvert. I have a finite social battery. Because my work was split across three different teams at the same company, I was a necessary participant in a lot of meetings.
More than 5 hours per day of meetings, as an individual contributor. Sometimes as many as 7 hours/day of them. I almost never had a quiet day, even after blocking one day every week so nobody would schedule any meetings and I could get productive work done.
If you’re interested in being a people manager, or have an extroverted personality, you’re probably unperturbed by this account. But I was absolutely miserable. My close friends started to worry that I was suffering from depression, because of how socially exhausted I was all the time.
I took a few weeks off between jobs. My new role doesn’t pointlessly encumber me with unnecessary meetings.
Every day, I feel the burnout symptoms leaving my mind. I feel challenged and stimulated in a good way. I’m learning new technologies and being productive. I’ve never spent more than 3 hours of any given day in a meeting.
Different people burn out in many different ways, for many different reasons.
In my experience, the consequences appear to be reversible if caught early enough. I don’t know if they would be if I held onto my old job for much longer.
The job market’s tough right now, but if you’re deeply unsatisfied with an aspect of your current job, prioritize yourself and make whatever change is necessary.
This doesn’t mean you have to switch jobs like I did, of course. It was a good move for me. Your mileage may vary.
Where’s The Cryptography?
https://youtu.be/4KNzdlc7ZcA?t=59
Some days I feel like writing about technical topics. Other days, I feel like writing about unimportant or personal topics.
If you’re disappointed in this post, perhaps you also expect everything on this blog to be professionally useful?
Well, worry not, for you’re eligible for a full refund for the amount you paid to read it.
Art: CMYKat
Logging Off
This post has been a collection of unrelated topics on my mind over the past few months. There is one other thing, but I was unsure if it warranted a separate post of its own, or an addendum on this one. Since you’re reading this, you’ll know I ultimately settled on the latter.
I started this blog in 2020 because I thought having a personal blog where I talk about things that interest me (mainly the furry fandom and software security) would be fun. And I wanted to do it in a way that was fun for me.
“Having fun with it” has been the guiding principle of this blog for over 3 years. I never intended to do anything important or meaningful; that sort of happened by accident. I didn’t care about others being able to use my writing in a professional setting (hence, my scoffing at the very notion above).
Lately, posts have slowed to a crawl, because it’s not fun for me anymore. I have a lot of ideas I’d love to write about, but when it comes time to turn an idea into something tangible, I lose all inspiration.
So I’m not going to force it.
This will be the last post on this blog for a while. I recently tried to pick up fiction writing, but I’m not happy with anything I’ve been able to produce yet, so I won’t bore anyone with that garbage.
There are a lot of brilliant people who read my writing. Most of you are more than capable of picking up where I left off and starting your own blogs.
I encourage you to do so.
Have fun with it, too. Just remember, when it’s time to put the pen down and take a rest, don’t be stubborn and burn yourself out.
Happy hacking.
Header is a collage of art from AJ, CMYKat, Kyume, WeaselDumb, and a DEFCON Furs 2023 photo from Chevron.
https://soatok.blog/2023/11/17/this-would-be-more-professionally-useful-if-not-for-the-furry-art/
#fediverse #furries #furry #FurryFandom #furryMusic
If you’ve somehow never encountered an Internet meme before, you may be surprised to learn that the number 69 is often associated with sex (and, more specifically, a particular sex act).
This happens to be the 69th blog post published on Dhole Moments, since I started the blog in April 2020.
You could even go as far as to say it’s the 4/20 +69th post, for maximum meme potential.
42069, get it? (Art by Khia)
However! I make a concerted effort to keep my blog safe-for-work, so if you’re worried about this post being flooded with furry porn (a.k.a. yiff art), or cropped yiff memes, or any other such lascivious nonsense, you won’t find any of that on this blog. (Sorry to disappoint.)
Instead, I’d like to take the opportunity to correct some public misconceptions about human sexuality, identity, and how these topics relate to the furry fandom.
Is Furry a Sex Thing?
I find it difficult to overstate how often people assume the “furry is a sex thing” premise. Especially on technical forums.
But let’s backtrack for a second. What isn’t a sex thing?
Art by Khia.
This turns out to be a difficult question to answer. Even Wikipedia’s somewhat concise list of paraphilias doesn’t leave a lot of topics off the table.
Are shoes a sex thing? Are cigarettes? Poetry?
Comic from Saturday Morning Breakfast Cereal.
Hell, one might be tempted to cry foul on the header image used in this blog post for including tentacles, hypnotic eyes, and footpaws in the same image. (Scandalous!) But if you look at the uncropped versions of these images, you’ll quickly realize they aren’t yiffy.
Top Art by AtlasInu.
Bottom: Created by FlashWhite_. Fox is Kiit Lock.
The more you read about this topic, the more you’ll realize this question is inert. Anything can be a sex thing. Humans are largely a sexual species, and sex is deeply ingrained in our culture (which can make life awkward for asexual people).
Instead, the question of whether or not the furry fandom is sexual becomes a bit of a Rorschach test for one’s cognitive biases.
If you’re chiefly concerned with public image–especially when fursuiting in public, where kids can see–you’re incentivized to double down on the fact that the furry fandom is no more inherently sexual than anything else can be. And this is true.
If you’re concerned with cultivating a sex-positive environment where people can live out their sexual fantasies in a safe, sane, and consensual manner, you’re incentivized to insist that furry is a sexual thing. “We have murrsuits for crying out loud! Stop kink-shaming! Down with puritan ideologies on sex!” And this is also true.
Humans are largely sexual, so any activity humans engage in will inevitably involve people sexualizing it. Even tupperware parties, for fuck’s sake! Anyone who believes there is a “Rule 34 of the Internet” tacitly acknowledges this fact, even if it’s inconvenient for a narrative they’re trying to spin.
So while this might be a meaningless question, one has to wonder…
Why Does Everyone Care So Much If Being a Furry (In Particular) Is Sexual or Not?
To understand what’s really happening here, you need to know a few things about the furry fandom.
- Approximately 80% of furries are LGBTQIA+ (source).
- Early anti-furry sentiments were motivated by queerphobia, especially on forums like Something Awful–and the influence of early hateful memes can still be seen to this day.
https://twitter.com/spacetwinks/status/728349066178998274
One of the Something Awful staff eventually acknowledged and apologized for this.
Archived from here. To corroborate, an Internet author named Maddox once parodied SomethingAwful’s hateful obsession with furries.
There was even a movement in the furry fandom’s history (the “Burned Furs“) that aimed to excise queerness and sex-positivity from the community. It’s no coincidence that a lot of the former Burned Furs joined the alt-right movement within the furry fandom.
The alt-right is explicitly queerphobic; especially against trans people. But it’s not just queerphobic; it’s also an ableist and racist movement.
Regardless of sexual orientation, a lot of furries are neurodivergent, too.
Simply put: The reason that most people care whether or not furries are sexual is rooted in the prevalence of anti-furry rhetoric in Internet culture, which was motivated at its inception mostly by queerphobia with a dash of ableism.
Art by Khia.
The notion that furries are “too sexual” originated as a dog-whistle for “too gay”, and caught on with people who didn’t know the hidden meaning of the idea. Now a lot of people repeat these ideas without intending or even knowing their roots, and many more have internalized shame about the whole situation.
Unfortunately, this sentiment even seeps into the furry fandom itself, leading to a cyclical discourse that takes place largely on Furry Twitter.
Original tweet unavailable
Furry Isn’t a Sexuality. There is no F in LGBT!
If you publicly state “anti-furry rhetoric is largely queerphobic dog-whistles”, you will inevitably hear someone try to retort this way. So let’s be very clear about it.
Furry isn’t its own sexual identity, and I would never claim otherwise.
Unlike transgender people, furries do not experience anything like “species dysphoria” (although therians/otherkin do report experiencing this; don’t conflate the two).
What’s happening here is: Most furries (about 80% of us) have separate sexual/gender identities that deviate from the heteronormative. A lot of queerphobia is easier to sell when you convey it through dog-whistles. So that’s what bigots did.
Polite company that wouldn’t partake in queer-bashing is often willing to laugh at the notion of “Beat A Furry Day“.
Anyone who tries to twist this acknowledgement to mean something ridiculous like an LGBTF movement is either being irrational or a 4chan troll.
Art by Khia.
For related reasons, you shouldn’t ever feel the need to “come out” as a furry.
https://www.youtube.com/watch?v=ZG2DRLimBSM
It’s okay to just really like Beastars, Zootopia, or even the Furry aspects of the Minecraft and Roblox communities. It doesn’t make you a sex-freak.
What’s the Take-Away?
It doesn’t really matter if the furry fandom has a sexual side to it. Everything does! The people who proclaim to care very much about this care for all the wrong reasons. Don’t be one of them.
Art by Swizz.
And remember: Lewd furries aren’t furry trash; we’re yiff-raff!
Sex Isn’t Well-Defined Either
While we’re talking about sex, did you know that biological sex isn’t neatly divided into “male” and “female”? This isn’t an ideological position; it’s a scientific one. Just ask a biologist!
https://twitter.com/JUNIUS_64/status/1054387892624285699
Trans and nonbinary people change gender (which is about your role within society) from what they were assigned at birth, but even sex itself isn’t so concrete.
The next time someone tries to appeal to “science” when talking about trans rights and then vomits up some unenlightened K-12 explanation of human reproduction and biological sex, remind them that science disagrees with their oversimplified and outdated mental model–and they might know this if they kept up with scientists.
Where Can I Learn More About the Sexual Side of the Furry Fandom?
Important: If you’re under the age of 18, you should stay out of adult spaces until you’re old enough to participate. No excuses.
If you’re looking for pornographic furry art (also called “yiff”), most furry art sites (FurryLife, FurAffinity, etc.) have adult content filters that you can turn off when you register an account.
If you’re looking for something more interactive, there’s a swath of furries that develop private VR experiences for 18+ audiences. One of the most well-funded Patreon artists makes adult furry games.
If you’re curious about why and how people express their sexuality when fursuiting (also called “murrsuiting”), there’s a subreddit for that.
It’s really not hard to find. This is one of the advantages of furry being a largely sex-positive community.
Furry YouTuber Ragehound even has a series about Furries After Dark if you want to learn more about these topics.
https://www.youtube.com/watch?v=nGOlQJDO5no
Finally, similar to how 69 is a meme number for sex, furries have an additional meme number (621) that comes from the name of an adult furry website (e621.net).
You now have enough knowledge to navigate the adult side of the fandom. Just don’t come crying to me when you develop the uncanny knack for recognizing which r/furry_irl posts are actually cropped yiff versus wholly worksafe art.
https://soatok.blog/2021/04/02/the-furry-sexuality-blog-post/
#furries #furry #FurryFandom #LGBTQIA_ #Society
Last Week in Fediverse – ep 82
1 million new accounts on Bluesky as Brazil bans X, premium feeds with Sub.club, and much, much more.
Brazil bans X, and a signup wave to Bluesky
The Brazilian supreme court has banned the use of X in an ongoing legal fight with Elon Musk. The ban follows a long trajectory of legal issues between the Brazilian government and Musk’s X. In April 2024, the Brazilian court ordered X to block certain X accounts that were allegedly related to the 2023 coup attempt, which Musk refused to do. In that same time period, President Luiz Inácio Lula da Silva opened an account on Bluesky, and there was already an inflow of a Brazilian community into Bluesky. Now, the legal fight has further escalated over X’s refusal to appoint a legal representative in the country, and Musk’s continuing refusal to comply with Brazil’s laws and regulations has resulted in the supreme court banning the use of X in the country altogether.
The ban on X has caused a massive signup wave to Bluesky, with over 1 million new accounts created in just three days, of which the large majority are from Brazil. The user statistics shot up even more than that, suggesting that there are a lot of people with an existing account logging back in as well.
The new inflow of people to Bluesky is having some significant effects on the network, as well as on the state of decentralised social networks more broadly:
- President Lula is putting actual focus on Bluesky. In one of his final posts on X, Lula listed, in non-alphabetical order, all the other platforms he is active on, and placed Bluesky at the top of the list. Posts that Lula places on both Bluesky (134k followers) and Threads (2.4m followers) get more than five times as many likes on Bluesky. Today, Lula explicitly asked people on Bluesky what they thought about the platform, in a post that got over 30k likes and counting. It is hard to imagine that the Brazilian government is not paying attention to all this, and watching which platform(s) the Brazilian community moves towards in the wake of the ban on X.
- Brazilians are a very active community on the internet (see Orkut), and bring their own unique culture to Bluesky. The current decentralised social networks are heavily focused on US politics, judging by the top posts on both Mastodon and Bluesky, and beyond shitposts and memes there is surprisingly little space for mainstream pop culture and sports. The Brazilian community does seem to bring a large amount of pop culture and sports content to Bluesky, significantly diversifying the topics of discussion and, in turn, creating more space for other people who are interested in those topics in the future. The activity of Brazilians on microblogging can also be seen in the like counts on popular Bluesky posts: before this week, the most popular posts of any given day usually got around 3k likes; this has sprung up to 30k to 50k likes. Brazilians are so chatty, in fact, that currently 81% of the posts on the network are in Portuguese, and the share of accounts that post on a given day has gone up from a third to over 50%.
- The Bluesky engineers have built a very robust infrastructure system, and the platform has largely cruised along without issues, even when faced with a 15x increase in traffic. This all without having to add any new servers. For third-party developers, such as the Skyfeed developer, this increase in traffic did come with downtime and greater hardware requirements, however. It shows the complications of engineering an open system: while the Bluesky team itself was prepared with its core infrastructure, third-party infrastructure, on which a large number of custom feeds rely, was significantly less prepared for the massive increase in traffic.
In contrast, the ban on X in Brazil has made little impact on Mastodon, with 3.5k new signups from Brazil on Mastodon.social. I’d estimate that this week has seen 10k new accounts above average, with 15k new accounts the previous week and 25k this week. That places Mastodon two orders of magnitude behind Bluesky in signups from Brazil. There are a variety of reasons for this, which deserve their own analysis; this newsletter is long enough as it is. One thing I do want to point out is that within the fediverse community there are two sub-communities, each with their own goals and ideas about the fediverse and growth. Some people responded to the news that most Brazilians went to Bluesky in a way that indicated that they appreciate the small, quiet and cozy community that the fediverse currently provides, and distrust the growth-at-all-costs model for social networks. For other people, however, the goal of the fediverse is to build a global network that everyone is a part of and everyone uses (‘Big Fedi’), a view of the fediverse that is also represented in the latest episode of the Waveform podcast (see news below). And if the goal is to build ActivityPub into the default protocol for the social web, it is worth paying attention to what is happening right now in the Brazilian ATmosphere.
The News
Sub.club is a new way to monetise feeds on the fediverse, with the goal of bringing the creator economy to the fediverse. It gives people the ability to create premium feeds that people can only access via a subscription. People can follow these feeds from any Mastodon account (work on other fediverse platforms is ongoing). Sub.club handles the payment processing and infrastructure, for which it charges 6% of the subscription fee (compared to the 8-12% that Patreon charges). Sub.club also makes it possible for other apps to integrate; both IceCubes and Mammoth have this option. Bart Decrem, one of the people behind Sub.club, is also the co-founder of the Mastodon app Mammoth. Sub.club also explicitly positions itself as a way for server admins to fund their servers. Most server admins rely on donations from their users, often via services like Patreon, Ko-fi, Open Collective or other third-party options. By integrating payments directly into the fediverse, Sub.club hopes that the barrier for donations will be lower, and that more server admins can become financially sustainable.
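As a rough illustration of what those fee percentages mean for a creator, here is a small sketch; the $5 subscription price is invented for the example, and only the 6% and 8-12% figures come from the announcement above.

```python
# Hypothetical illustration of how platform fees affect creator payouts.
# The 6% (Sub.club) and 8-12% (Patreon) rates come from the text above;
# the subscription price is made up for the example.

def creator_payout(price: float, fee_rate: float) -> float:
    """Return what the creator keeps after the platform's cut."""
    return round(price * (1 - fee_rate), 2)

monthly_price = 5.00  # assumed subscription price in USD

subclub = creator_payout(monthly_price, 0.06)       # -> 4.7
patreon_low = creator_payout(monthly_price, 0.08)   # -> 4.6
patreon_high = creator_payout(monthly_price, 0.12)  # -> 4.4
```

Per subscriber the difference is small; the pitch is that it compounds across a whole server's membership.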
Newsmast has built a new version of groups software for the fediverse, and the first group is dedicated to the Harris campaign. There are a few types of groups available that integrate with Mastodon, such as with Friendica or a.gup.pe. These groups function virtually identically to hashtags, by boosting posts in which the group account is tagged to everyone who follows the group account. As there is no moderation in these types of group accounts, anyone can hijack the group account. A group account dedicated to a political campaign is especially vulnerable to this. On Mastodon, a volunteer Harris Campaign group used a Friendica group for campaign organising, but the limited moderation tools that are available (blocking a user from following the group) are not working, which allowed blocked users to still get their posts boosted by the group account. Newsmast’s version of groups provides (working) moderation tools, and only boosts top-level posts and not replies, to cut down on the noise. For now, the new group is only available to the Harris Campaign group for testing, but it will come later to Mastodon servers that run the upcoming Patchwork plugin.
Bluesky added quite a number of new anti-toxicity features in its most recent app update. Bluesky has added quote-posting controls, allowing people to set on a per-post basis whether people can quote the post or not. There is also the option to remove quotes after the fact: if you’ve allowed quote posts on a post you’ve made, but someone made a quote post that you do not feel comfortable with, you have the ability to detach your post. Another update is the possibility to hide replies on your posts. Bluesky already hides comments under a ‘show more’ button if the comment is labeled by a labeler you subscribe to. You now have the option to do so for any comment made on your posts, and the hidden comment will be hidden for everyone. Finally, Bluesky has changed how replies are shown in the Following feed, which is an active subject of discussion. I appreciate the comments made by Bluesky engineer Dan Abramov here, who notes that there are two different ways of using Bluesky, each of which prioritises replies in conflicting ways. As new communities grow on Bluesky, prioritising their (conflicting) needs becomes more difficult, and I’m curious to see how this plays out further.
The WVFRM (Waveform) podcast of popular tech YouTuber MKBHD has a special show about the fediverse, ‘Protocol Wars – The Fediverse Explained!’. It is part discussion podcast, part explainer, and part interview with many people within the community. They talk with Mastodon’s Eugen Rochko, Bluesky’s Jay Graber, Threads’s Adam Mosseri, and quite a few more people. It is worth noting for a variety of reasons. The show is quite a good introduction that talks to many of the most relevant names within the community. MKBHD is one of the biggest names in the tech creator scene, and many people pay attention to what he and his team are talking about. Furthermore, I found the framing as ‘protocol wars’ interesting, as the popularity of Bluesky in Brazil as an X replacement indicates that there is indeed a race between platforms to be built on top of the new dominant protocol.
Darnell Clayton has a very interesting blog post, in which he discovers that there is a discrepancy in follower counts for Threads accounts that have turned on fediverse sharing. Clayton notes that the follower count shown in the Threads app is lower than the one shown in a fediverse client, for both Mastodon and Flipboard. He speculates that this difference is the number of fediverse accounts that follow a Threads account. It should be noted that this is speculation and has not been confirmed, but if it is true, it would give us a helpful indication of how many fediverse accounts are using the connection with Threads. While we’re talking about Threads accounts, Mastodon CEO Eugen Rochko confirmed that the mastodon.social server has made a connection with 15,269 Threads accounts that have turned on fediverse sharing.
The Links
- Threads has figured out how to maximise publicity by making minimal incremental updates to its ActivityPub implementation, edition 500.
- A Developer’s Guide to ActivityPub and the Fediverse – The New Stack interviews Evan Prodromou about his new book about ActivityPub.
- FedIAM is a research project where people can use fediverse and Indieweb protocols for logging in.
- You can now test Forgejo’s federation implementation.
- This week’s fediverse software updates.
- Ghost’s latest update on their work on implementing ActivityPub: “With this milestone, Ghost is for the first time exceeding the functionality of a basic RSS reader. This is 2-way interaction. You publish, and your readers can respond.”
- Dhaaga is a multiplatform fediverse client that adds unique client-side functionalities.
- Lotide, an experimental link-aggregator fediverse platform, ceases development.
- A custom QR code generator, with some pretty examples of custom QR codes for your fediverse profile.
- Custom decentralised badges on atproto with badges.blue, a new work in progress by the creator of the atproto event planner Smoke Signal.
- Smoke Signal will be presenting at the next edition of the (third-party organised) ATproto Tech Talk.
On a final note: I wrote an easy-to-read-and-share PDF file, Fediverse for Publishers. It gives a quick overview of what the fediverse is, why it matters for publishers and journalists, and an easy overview of the different ways to get started. Check it out, and feel free to share and use it. Thanks to Germany’s Sovereign Tech Fund for supporting this research.
That’s all for this week, thanks for reading.
https://fediversereport.com/last-week-in-fediverse-ep-82/
Last Week in Fediverse – ep 64
This edition of Last Week in Fediverse seems to be a Presidents’ edition; Barack Obama turns on fediverse sharing for his Threads account, and Brazil’s president Lula joins Bluesky. Lots more going on this week, let’s dive in:
The News
IFTAS, the nonprofit organisation for Trust & Safety on the social web, has put out a guide for the EU’s Digital Services Act (DSA). The guide caters to ‘small and micro services’ that have member accounts in the EU, which describes the large majority of fediverse servers. It is a practical and easy overview of what is expected if you are the operator of a fediverse server, and highly recommended reading if you are a server admin. Most requirements in the DSA that are applicable to ‘small and micro services’ (platforms with fewer than 50 employees and less than 10M EUR turnover) concern how to provide ways of communication with authorities and how to handle their requests. The requirement (art. 13) in the DSA that might give server operators the most difficulty is that platforms that are located outside of the EU, but ‘serve EU users or make their services available in the EU are required to have an EU-based legal representative to manage compliance and communication with EU authorities.’ It seems a significant number of fediverse servers are currently not in compliance with this requirement, with no clear direction yet on how to get there.
Sora is an iOS and macOS client for the fediverse (for Mastodon and the Forkeys, as well as Bluesky), which has been pushing the boundaries of what is possible with third-party fediverse clients. The app features a custom For You algorithmic feed, and the developer recently showed during FediForum how people have complete control over their algorithm. Now the developer is back with another update, this time adding P2P video calling to the client. A gif in the announcement post shows how it works. You can schedule a meeting, which sends a link for the other person’s fediverse account to join. Both people need to use Sora to use the feature. The developer stated that if there is enough interest in the feature, he will work on making it available as a web client that does not require Sora.
Flipboard has reached another major milestone in its process to fully federate Flipboard and achieve full interoperability with the rest of the fediverse. There is now two-way interaction between fediverse accounts and federated Flipboard accounts. CEO Mike McCue explains: “Now when a federated Flipboard user curates, people in the fediverse can reply, favorite, boost or follow those Flipboard users who will in turn see that activity in their usual notifications tab. Even better, Flipboard users can directly reply to people in the fediverse — and very soon they will also be able to follow each other.” Furthermore, Flipboard has enabled federation for another 11,000 magazines, increasing the amount of curated content that is available in the fediverse.
Lyrak is a new social platform, announced this week, that focuses on real-time news and revenue sharing with creators. In the announcement post, Lyrak also stated that fediverse integration will be added to the platform ‘soon’. For more information on Lyrak, Sarah Perez has a more extensive look over at TechCrunch.
Russia’s censorship agency blocks access to the lgtbqia.space server in Russia. The admins of the lgtbqia.space server got a notification from the Russian agency demanding that they remove an account from their server. The account is for a ‘blog about LGBTQ+ people, literature, sports, humor, etc.’ The admins refused to comply, after which the server was made inaccessible in Russia.
During FediForum, Newsmast showcased their new project Patchwork. In a new update, Newsmast says that they ‘are looking at rolling out a Beta version in the coming months, with features like easy opt in or out of networking with Threads & Bluesky, spam management and content filters.’
Some news from Threads
- Barack Obama also turns on fediverse sharing for his Threads account, making him the second US President to do so.
- WeDistribute wrote a ‘A Beginner’s Guide to the Fediverse, for Threads Users’.
- A blog post on using Mastodon to follow Threads accounts, from the perspective of someone who has mainly been using Threads. The blog showcases how third-party clients are a major selling point for the fediverse.
- Meanwhile, Threads invites developers to sign up for API access, but it seems the API can only be used for posting into Threads, as well as analytics. It rules out the possibility of building full-featured third-party clients as you can with the rest of the fediverse.
Some news from Bluesky
Brazilian president Lula has opened an account on Bluesky, Brasil247.com reports. The news comes after an escalating conflict between X and the Brazilian courts. Elon Musk publicly refused to follow orders by the Brazilian court to block certain accounts on X, and a Brazilian judge has ordered an investigation of Elon Musk for obstruction of justice. President Lula opening an account on Bluesky is a direct response to the ongoing conflict between the Brazilian government and X, and indicates how governments are starting to get fed up with the situation at X. President Lula used his first post on Bluesky to say that 38 slaughterhouses will be authorised to export meat to China. (?)
The Links
- Mastodon is hiring a new core team member for back-end development.
- An update on BridgyFed, the upcoming bridge between the fediverse and Bluesky, and the work to make it fully opt-in/consent based.
- Fediverse Event planning tool Mobilizon has transferred ownership recently, and the new team, Kaihuri, will give a presentation of the new version next week on Monday April 15th.
- A reading of the Canadian Online Harms Act, from the perspective of fediverse admins.
- An update on radio free fedi, who have launched their new website as well.
- Pixelfed open-sources their mobile apps.
- Annual Mastodon Pledge Drive.
- The University of Innsbruck expands their Mastodon server to all university employees.
- Notes on setting up a fediverse relay with FediBuzz on an Ubuntu server.
- Lifehacker writes about the current state of the podcast landscape, and role that ActivityPub can play.
- How to get started with FediTest, a testing suite that is currently being built.
- An update by ForgeFed on their work on implementing federation into software forges.
- An overview of this week’s updates to fediverse products.
- An update from NodeBB and their work on ActivityPub Development.
- Lemmy’s biweekly development update.
That’s all for this week. If you want more, you can subscribe to my fediverse account or to the mailing list below:
https://fediversereport.com/last-week-in-fediverse-ep-64/
The Podcast Landscape Is a Mess, and That’s a Good Thing
Podcasting is one of the best examples of the open web working—and I hope it becomes the norm.
Justin Pot (Lifehacker)
Tech Talk: Smoke Signal Events
Smoke Signal is an events & RSVP system, built by Nick Gerakines on top of ATProtocol.
Boris Mann (ATProtocol Dev)
Eager to test #forgejo's #federation?
We enabled federation on our instance (https://repo.prod.meissa.de/).
For testing you can
1. fork/migrate one of our repos,
2. configure the following section in your fork (howto described here: https://domaindrivenarchitecture.org/) &
3. star on your instance.
You will see the federated star arriving on our instance 😀
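If you would rather script step 3, here is a hedged sketch using the Gitea-compatible REST API that Forgejo inherits. The instance URL, repository name, and token below are placeholders; the endpoint `PUT /api/v1/user/starred/{owner}/{repo}` is the standard starring call, and whether the star federates depends on the instance having federation enabled.

```python
# Hedged sketch: starring a repository on a Forgejo instance via the
# Gitea-compatible REST API. All names and the token are placeholders.
import urllib.request

def star_url(instance: str, owner: str, repo: str) -> str:
    """Build the API endpoint for starring owner/repo on an instance."""
    return f"{instance.rstrip('/')}/api/v1/user/starred/{owner}/{repo}"

def star_repo(instance: str, owner: str, repo: str, token: str) -> int:
    """Send the PUT request; returns the HTTP status (204 on success)."""
    req = urllib.request.Request(
        star_url(instance, owner, repo),
        method="PUT",
        headers={"Authorization": f"token {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (requires network and a valid token, not executed here):
# star_repo("https://your.instance.example", "you", "your-fork", "<token>")
```

Keeping the URL construction in its own small helper makes the sketch testable without touching the network.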
@jdp23 @dalias @Gargron According to @MostFollowed by @stefan, the top 3 most followed #Fediverse accounts by #Mastodon users are:
🥇 @Mastodon
🥈 @espn (on #Flipboard!)
🥉 @mosseri (on #Threads)
👉🏾 Mastodon Most Followed: https://most-followed-mastodon-accounts.stefanhayden.com/
I find it interesting that beta testing accounts on Flipboard & Threads are in the top three.
Top 10,000 Mastodon Accounts
Most followed accounts across the Fediverse as tracked by @mostFollowed@mastodon.social
most-followed-mastodon-accounts.stefanhayden.com
We’re very proud to announce that we’ve worked with @tchambers & team to build a new membership-only group on the Fediverse!
@KamalaHarrisWin is a group dedicated to organising and discussing Harris’s 2024 campaign. With better moderation controls the people there can focus on what matters, like protecting democracy, rather than having to fight through trolls and bots.
We hope the members are enjoying it! 🥥🌴
As I often do, I made a poll on the fediverse about two concepts I am interested in: Big Fedi versus Small Fedi. Although I think these are interesting topics, I couldn’t come up with exact summations of what the “Big Fedi” and “Small Fedi” positions are. So, I wanted to write down what I could here.
The fediverse, in this case, is an internetwork of social networks. It works a lot like email; you can have an account on one network and follow, message, and react to people (or bots) on other networks. The biggest software tool for making fediverse networks is Mastodon; there are a lot of other Open Source servers for setting up nodes. There are also some proprietary nodes — Meta Threads and Flipboard are two of the biggest.
The following are some clusters of ideas that I think coalesce into “Big Fedi” and “Small Fedi”. I haven’t been able to tie them all back to some fundamental principle on either side.
Big Fedi
The “Big Fedi” position is a set of ideas that roughly cluster together. Not everyone who agrees with one or a few of these agrees with them all, but I think they tend to be related.
- The fediverse should be big. Real big. Like, everyone on the planet should have an account on the fediverse. It will make the internet better and the world better.
- We should make choices that help bring the fediverse to new people. Because the fediverse should be big, we should be doing things to make it bigger; in particular, to bring it to more people.
- There should be a lot of different account servers. (I’m using “account servers” instead of “instances” or “servers”.) It’s good to have a lot of choice, with a lot of different parameters: software interfaces, financial structure, what have you.
- Commercial account servers are welcome. This variety includes commercial services. If they provide the right mix of features and trade-offs that certain people want, it’s good to have them, especially if they have a lot of users.
- Moderation can be automated. Shared blocklists, machine learning, and other tools can be used to catch most of the problematic interactions on the fediverse.
- Account servers can be big. It doesn’t matter how big they are: 1M, 10M, 100M, 1B people is fine.
- The fediverse should have secondary services. In order to grow, we need secondary services, like people-finders, onboarding tools, global search, bridges, and so on.
- The individual is central. People should be able to set up their environment how they like, including their social environment. They have the tools to do that. The account server may set some parameters around content or software usage, but otherwise it’s mostly a dumb pipe.
- Connections should be person-to-person. The main social connection is through following someone. Building up this follow graph is important.
- People I care about should be on the fediverse. I have a life outside the fediverse — friends, family, colleagues, neighbours. My governments, media, celebrities, sports figures, leaders in my industry. It would be good to have more of those people on the fediverse, so I can connect to them.
- People should get to make choices about their account server. Everybody has different priorities: privacy, open source, moderation, cost, stability, features. We can all make our own choices about the account server we prefer.
- It should be possible to have ad-free account servers. Technically and culturally, we should be able to set these up.
- It should be possible to have Open Source account servers. People who prefer free network services should be able to run them and use them.
- It should be possible to have algorithm-free account servers. You should be able to just follow things reverse chronologically.
- It should be possible to have individually-run account servers. A normal technically-minded person should be able to run their own account server for themself, friends, their household, or even for a larger community.
- Harms that are mostly kept to account servers are up to people on those servers to solve. Good fences make good neighbours. If things become unbearable, people can move servers somewhat frictionlessly.
- Affinity groups should stretch beyond account server boundaries. Groups, lists, and other social network features are important and should be fully federated. They should provide a lot of features.
- There may be some harm that comes with growth; we can fix it later. We’re going to find problems as we go along. We can deal with them as we come to them.
- The fediverse is going to look very different over time. The way things work now is not how things are going to be 1, 3, 5, 10 years from now. Especially as the fediverse grows, different structures and ways of working are going to develop.
- Open standards are important. By having public, open standards available through big standards organizations, we gain the buy-in from different account network operators to join the network. We definitely don’t have time to negotiate bilateral agreements; we need solid standards.
- Variety in types of account server operators is good. Different people have different needs and tolerances. If we want to have more people, we need to cater to those different needs with different account servers.
- Existing organizations can and should provide account servers. Not just existing tech companies; also businesses providing servers for their employees, universities for students, cities or other governments for their citizens.
- Existing services, even if they’re bad, will become somewhat better if they have fediverse features. People on those services will get to connect with a variety of new people. They’ll find out about the fediverse, and might move to another account server, or try something else new.
- It’s more important to bring good people to the fediverse than keep bad people off it. More people is good, and the people I care about on other networks are also good. There may be some bad people, too, but we’ll manage them.
Small Fedi
Here is a rough cluster of ideas that I’d call “Small Fedi”. Again, not everyone who agrees with one or two of these agrees with all of them.
- The fediverse should be safe. Safe from harassment, safe from privacy violations.
- Growth is not important. We’ve gotten along this long with a small fediverse. It’s OK how it is, so growth is not important. Growth is a capitalist mindset.
- People who aren’t on the fediverse don’t matter as much as people who are. Their needs, at least. When discussing the future of the fediverse, we don’t need to talk about people on other networks much at all.
- If people want to get on the fediverse, they can join an existing account server. We don’t need to bring new account servers to the fediverse; there are a lot already. People who really care about getting on the fediverse can join an existing account server, or set up their own. If they’re not willing to do this, they’re probably not that interested in the fediverse, so why should we bother trying to connect to them?
- If growth could cause harm, we either should fix the problem before growing, or we shouldn’t grow. We should examine opportunities carefully, but by default we should say no.
- Commercial account servers are discouraged. Most commercial services do harm. Even if they’re on the fediverse, they’re going to try to do harm to make more money. So, they should be avoided as much as possible.
- Secondary services can cause harm and should be severely limited if allowed at all. People search and content search can be used for privacy invasion or harassment. Shared blocklists can be manipulated to cause echo chambers. Machine learning can be biased. Onboarding services favour big account servers. They should be discouraged or, preferably, closed.
- The account server is central. Moderation decisions, cultural decisions, account decisions, most social decisions should happen at the account server level.
- Account servers are the primary affinity group. You should find an account server that feels like home. Any other groups are less important.
- Feeds like “fediverse” and “local” are important. There is a public community of account servers that your account server connects to, and the public feed from that community is important. You might use it more often than your home feed. Your local feed is also important, because your account server is a group you belong to.
- Moderation should be primarily by hand. The courage and wisdom necessary to make most moderation decisions can only be managed by hand. Automated tools can be manipulated.
- Account servers must be small. Human moderators can only do so much work, so the account servers they moderate can only be so big.
- The fediverse works just about right right now, and shouldn’t change. There’s a good reason for how everything works, and it’s fine. People who want to change the way things work just don’t get it.
- It’s not important that people from my real life are on the fediverse, and it’s kind of discouraged. The account server is the most important affinity group, then the larger “fediverse”. That’s enough; no other people are needed or welcome. People I know who aren’t on the fediverse don’t care about fediverse stuff, so they’d get bored here, anyway.
- It is highly discouraged to have ad-supported account servers. Even if they only show ads to their own users, they are causing harm. In particular, they’re showing our content next to ads, or using our content to develop ad algorithms. Either way, harm goes beyond the server border.
- It is highly discouraged to have proprietary account servers. They just can’t be trusted with their own users’ data. Also, they’re going to get some of our data, just through federation, and who knows what they’ll do with it.
- It is highly discouraged to have algorithmic timelines. Anyone having these causes problems. If you want one, you just don’t get it.
- Open standards are less important than making things work the way we want them. In particular, fiddling with standards to keep people safe, and to discourage particular account server structure, is an OK thing to do.
- Most existing institutions have proved themselves untrustworthy and should not provide account servers. Name any particular part of civil society, and I can come up with an example of at least one bad practice they have.
- Harms that happen on one account server are a problem for every account server. Server blocks, personal blocks, and protocol boundaries aren’t enough to isolate problems to their account server of origin. Secondary or tertiary effects can happen and cause harm.
- Existing services, if they’re bad, will make the fediverse worse. Bad practices, bad content, bad members will cause problems for everyone on the fediverse.
- It’s more important to keep bad people off the fediverse than to bring good people to it. Bad people can be really horrible. There aren’t actually that many good people on bad services, and if they really wanted to connect with us, they’d find another way.
Where do I land?
I’m mostly a Big Fedi person; I did the work on the fediverse that I’ve done in order to bring it to everyone on the planet. I don’t think people should have to pass a test to be allowed on the fediverse.
That said, I respect that harm can come from new technical decisions and new network connections. As someone deeply involved in the standards around ActivityPub and the fediverse, I’d like to make sure that we give people the tools they need to avoid harm — and stay out of the way when they use them. I very much like the Small Fedi suspicion of new services and account servers, and careful consideration of the possibilities.
I’d like to find ways to mitigate the problems of so many people on proprietary social networks being unconnected to the fediverse, but still centre the safety of existing fedizens. I don’t have an easy answer to how this can work, though.
Anyway, thanks for reading this far. Also, an acknowledgment: I borrowed the term “Small Fedi” without permission from Erin Kissane’s great piece on Untangling Threads. I’m also using it differently, stretching it out, which admittedly is an ungrateful thing to do with something you borrow. I hope it is not ruined by the time I return it.
Another acknowledgment: this framing is loosely based on the worse is better series of essays by Richard Gabriel. His lists of ideas are much shorter, more cohesive, and more algorithmic.
https://evanp.me/2023/12/26/big-fedi-small-fedi/
#bigfedi #fediverse #smallfedi
Small Fedi or Big Fedi?
Untangling Threads - Erin Kissane's small internet website
Meta's Threads service is joining the Fediverse, and I think there are some things about Meta—and about Fediverse mechanics—that it's important to include in that conversation.erinkissane.com
Last Week in Fediverse – ep 64
This edition of Last Week in Fediverse seems to be a Presidents’ edition; Barack Obama turns on fediverse sharing for his Threads account, and Brazil’s president Lula joins Bluesky. Lots more going on this week, let’s dive in:
The News
IFTAS, the nonprofit organisation for Trust & Safety on the social web, has put out a guide for the EU’s Digital Services Act (DSA). The guide caters to ‘small and micro services’ that have member accounts in the EU, which describes the large majority of fediverse servers. It is a practical and accessible overview of what is expected of you as the operator of a fediverse server, and highly recommended reading for server admins. Most requirements in the DSA that are applicable to ‘small and micro services’ (platforms with fewer than 50 employees and less than 10M EUR turnover) concern providing ways to communicate with authorities and handling their requests. The requirement (art. 13) that may give server operators the most difficulty is that platforms located outside of the EU, but that ‘serve EU users or make their services available in the EU are required to have an EU-based legal representative to manage compliance and communication with EU authorities.’ It seems a significant number of fediverse servers are currently not in compliance with this requirement, and there is no clear direction yet on how to get there.
Sora is an iOS and macOS client for the fediverse (Mastodon and the Forkeys, as well as Bluesky) that has been pushing the boundaries of what is possible with third-party fediverse clients. The app features a custom For You algorithmic feed, and the developer recently showed during FediForum how people have complete control over their algorithm. Now the developer is back with another update, this time adding P2P video calling to the client. A gif in the announcement post shows how it works: you can schedule a meeting, which sends a link for the other person’s fediverse account to join. Both people need to use Sora to use the feature. The developer stated that if there is enough interest in the feature, he will work on making it available as a web client that does not require Sora.
Flipboard has reached another major milestone in their process to fully federate Flipboard and achieve full interoperability with the rest of the fediverse. There is now two-way interaction between fediverse accounts and federated Flipboard accounts. CEO Mike McCue explains: “Now when a federated Flipboard user curates, people in the fediverse can reply, favorite, boost or follow those Flipboard users who will in turn see that activity in their usual notifications tab. Even better, Flipboard users can directly reply to people in the fediverse — and very soon they will also be able to follow each other.” Furthermore, Flipboard has enabled federation for another 11,000 magazines, increasing the amount of curated content that is available in the fediverse.
Lyrak, a new social platform that focuses on real-time news and revenue sharing with creators, was announced this week. In the announcement post, Lyrak also stated that fediverse integration will be added to the platform ‘soon’. For more information on Lyrak, Sarah Perez has a more extensive look over at TechCrunch.
Russia’s censorship agency has blocked access to the lgtbqia.space server in Russia. The admins of lgtbqia.space got a notification from the Russian agency demanding that they remove an account from their server. The account belongs to a ‘blog about LGBTQ+ people, literature, sports, humor, etc.’ The admins refused to comply, after which the server was made inaccessible in Russia.
During FediForum, Newsmast showcased their new project Patchwork. In a new update, Newsmast says that they ‘are looking at rolling out a Beta version in the coming months, with features like easy opt in or out of networking with Threads & Bluesky, spam management and content filters.’
Some news from Threads
- Barack Obama also turns on fediverse sharing for his Threads account, making him the second US President to do so.
- WeDistribute wrote a ‘A Beginner’s Guide to the Fediverse, for Threads Users’.
- A blog post on using Mastodon to follow Threads accounts, from the perspective of someone who has mainly been using Threads. The blog showcases how third-party clients are a major selling point for the fediverse.
- Meanwhile, Threads invites developers to sign up for API access, but it seems the API can only be used for posting into Threads, as well as analytics. It rules out the possibility of building full-featured third-party clients as you can with the rest of the fediverse.
Some news from Bluesky
Brazilian president Lula has opened an account on Bluesky, Brasil247.com reports. The news comes after an escalating conflict between X and the Brazilian Courts. Elon Musk publicly refused to follow orders by the Brazilian court to block certain accounts on X, and a Brazilian judge has ordered an investigation of Elon Musk for obstruction of justice. President Lula opening an account on Bluesky is a direct response to the ongoing conflict between the Brazilian government and X, and indicates how governments are starting to be fed up with the situation at X. President Lula used his first post on Bluesky to say that 38 slaughterhouses will be authorised to export meat to China. (?)
The Links
- Mastodon is hiring a new core team member for back-end development.
- An update on BridgyFed, the upcoming bridge between the fediverse and Bluesky, and the work to make it fully opt-in/consent based.
- Fediverse Event planning tool Mobilizon has transferred ownership recently, and the new team, Kaihuri, will give a presentation of the new version next week on Monday April 15th.
- A reading of the Canadian Online Harms Act, from the perspective of fediverse admins.
- An update on radio free fedi, who have launched their new website as well.
- Pixelfed open-sources their mobile apps.
- Annual Mastodon Pledge Drive.
- The University of Innsbruck expands their Mastodon server to all university employees.
- Notes on setting up a fediverse relay with FediBuzz on an Ubuntu server.
- Lifehacker writes about the current state of the podcast landscape, and the role that ActivityPub can play.
- How to get started with FediTest, a testing suite that is currently being built.
- An update by ForgeFed on their work on implementing federation into software forges.
- An overview of this week’s updates to fediverse products.
- An update from NodeBB and their work on ActivityPub Development.
- Lemmy’s biweekly development update.
That’s all for this week. If you want more, you can subscribe to my fediverse account or to the mailing list below:
https://fediversereport.com/last-week-in-fediverse-ep-64/
IFTAS is happy to announce the public availability of our DSA Guide for Decentralized Services – a practical guide for small and micro services that are subject to the EU’s Digital Services Act.
If your server has member accounts in the EU, or is publicly viewable in the EU, your service is most likely impacted by this regulation, even if you are not based or hosted in the EU.
Developed in collaboration with the great people at Tremau, our DSA Guide is designed to help independent social media service providers navigate these complex regulations and achieve compliance with these new rules without compromising the unique qualities of federated, open social networks.
As part of our Needs Assessment activities, we’ve heard a repeated need for help understanding the complex regulatory landscape that decentralized services need to consider, and this DSA Guide is the first of many in our plan to provide clear, actionable guidance to a range of regulations for the community.
As of February 2024, all online services and digital platforms that offer services in the European Union are required to be fully compliant with the DSA.
However, various portions of the DSA are not applicable to “small and micro” services, and this guide will show you clearly which parts apply and which do not.
For administrators of platforms like Mastodon, PeerTube, and Pixelfed, the DSA Guide can help demystify the requirements and offer practical advice on achieving compliance for the over 27,000 independent operators of these and other decentralized social media services who otherwise may not be able to obtain the guidance and advice that larger operations can afford to invest in.
Download the DSA Guide for Decentralized Fediverse Services.
To join the discussion, visit our community chat service at https://matrix.to/#/#space:matrix.iftas.org or stay tuned to join our community portal in the coming weeks!
https://about.iftas.org/2024/04/09/dsa-guide-for-the-fediverse/
#ActivityPub #BetterSocialMedia #DSA #Fediverse
The extraterritorial implications of the Digital Services Act - DSA Observatory
Laureline Lemoine & Mathias Vermeulen (AWO) As the enforcement of the Digital Services Act (DSA) is gathering speed, a number of non-EU based civil society and research organizations have wondered to what extent the DSA can have an impact on their wo…admin (DSA Observatory)
The Podcast Landscape Is a Mess, and That’s a Good Thing
Podcasting is one of the best examples of the open web working—and I hope it becomes the norm.Justin Pot (Lifehacker)
Today was a huge milestone in our quest to federate #Flipboard and tear down the walls around our own walled garden.
First, we launched a new version of Flipboard for iOS and Android which brings the promise of two way federation to life. Now when a federated Flipboard user curates, people in the fediverse can reply, favorite, boost or follow those Flipboard users who will in turn see that activity in their usual notifications tab. Even better, Flipboard users can directly reply to people in the fediverse -- and very soon they will also be able to follow each other.
Second, we federated some of our best curators today who are actively curating more than 10,000 magazines about everything from climate change to kale smoothie recipes. I'm grateful to our many curators and the service they provide to so many others who want to find the best content about a shared interest. I know our curators are excited to have millions more people who could potentially benefit from their curation. I also know that people in the fediverse will give a warm welcome to these curators. Especially now that everyone can hear and talk to each other across what were once two totally separate networks but are now, increasingly, one and the same #fediverse.
✨Introducing Voice & Video call on the Fediverse ✨
Schedule a call by inputting the other’s Fediverse handles.
Invitees will see a Join button in your invite post to join the call.
- Calls are done using Peer-To-Peer (private)
- Encryption (GCM, ECDSA)
- Use AR Video effects.
- Works for iOS and Mac.
In the next version, I will add screen sharing!
Download Sora: https://apps.apple.com/jp/app/sora-for-mastodon-bluesky/id6450969760?l=en-US
#Fediverse
#Mastodon
#Misskey
#Bluesky
SoraSNS for Mastodon & Bluesky
"Sora offers access to Mastodon, Bluesky, and the federated networks Misskey and Firefish." - featured in TechCrunch article > Compatible: Connect with Mastodon, Misskey, Bluesky, and Pleroma instances.App Store
🥳 Manyfold v0.82.0 is out, with two BIG features!
First up, we're joining the #Fediverse proper - you can follow public Manyfold creators on other ActivityPub platforms like Mastodon!
And secondly, Manyfold will now index PDF, TXT and video content as well as models and images!
🗞️ Full release notes: https://manyfold.app/news/2024/10/13/release-v0-82-0.html
❤️ Support us on OpenCollective: https://opencollective.com/manyfold
🏷️ #3DPrinting @3dprinting #SelfHosted
Manyfold - Open Collective
A self-hosted 3d model organisation tool for 3d printing enthusiastsopencollective.com
New: Last Week in #Fediverse - ep 91
This week's news:
- @loops has launched and is now available for everyone!
- @radiofreefedi will shut down early next year
- Bridgy Fed talks about potential governance directions
Read at: https://fediversereport.com/last-week-in-fediverse-ep-91/
Last Week in Fediverse – ep 91: Loops has finally launched, Radio Free Fedi will shut down, and governance for Bridgy Fed.
The News
Loops.video, the short-form video platform, has finally launched after weeks of delays. There is now an iOS app available on TestFlight, as well as an Android APK, and there is no waitlist anymore. In statistics shared by Loops developer Daniel Supernault, Loops now has more than 8,000 people signed up and close to 1,000 videos posted. The app has the bare minimum of features, with only one feed, which seems to be algorithmic; there is no following feed. Supernault says that he is currently working on adding discovery features as well as notifications to the app. The app currently loads videos smoothly and quickly, and Supernault has already had to upgrade the server to deal with traffic. Loops is currently not federating with the rest of the fediverse, and you cannot interact with Loops from another fediverse account. This feature is planned, but there is no estimate of when it will happen. Third-party clients are already possible with Loops, and one is already available.
Radio Free Fedi has announced that it will shut down in January 2025. Radio Free Fedi is a radio station and community that broadcasts music by people on the fediverse. The project has grown from a simple stream into multiple non-stop radio streams, a specialty channel and a channel for spoken word, and has built up a catalogue of over 400 artists whose art is broadcast on the radio. Running the project requires a large amount of work, which was largely done by one person. They say that this is not sustainable anymore, and that the way the project is structured makes handing it over to someone else not an option. Radio Free Fedi has been a big part of the artist community on the fediverse, which has contributed to a culture of celebrating independent art, and the sunset of Radio Free Fedi is a loss for fediverse culture.
In an update on Bridgy Fed, the software that allows bridging between different protocols, creator Ryan Barrett talks about possible futures for the project. Barrett says that Bridgy Fed is currently a side project for him, but people are asking for it to become bigger and turn into ‘core infrastructure of the social web’. Barrett is open to that possibility, but not while the project remains his personal side project; he is open to conversations about housing the project in a larger organisation, with someone experienced to lead it.
The Social Web Foundation will organise a Devroom at FOSDEM. FOSDEM is a yearly conference in Brussels for free and open source software, and will be on February 1-2, 2025. The Social Web Foundation is inviting people and projects to give talks about ActivityPub, in the format of either a talk of 25 minutes for bigger projects, or a lightning talk of 8 minutes.
OpenVibe is a client for Mastodon, Bluesky and Nostr, and has now added support for cross-posting to Threads as well. OpenVibe also offers a combined feed that shows posts from your accounts on all the different networks in a single feed, which can now include your Threads account as well as your Mastodon, Nostr and Bluesky accounts.
The shutdown of the botsin.space server led to some new experiments with bots on the fediverse:
- Ktistec is a single-user ActivityPub server that added support for bots in the form of scripts that the server itself periodically runs.
- A super simple server script for bots.
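Such bot scripts usually boil down to a single authenticated POST to the server's status endpoint, run on a schedule. A minimal sketch, assuming a Mastodon-compatible API; the instance URL and token below are placeholders, not real credentials:

```python
# Minimal fediverse bot: post one status via the Mastodon REST API.
# POST /api/v1/statuses with a Bearer token is the standard endpoint;
# INSTANCE and TOKEN are placeholders you must supply yourself.
import json
import urllib.request

INSTANCE = "https://example.social"   # hypothetical server
TOKEN = "YOUR_ACCESS_TOKEN"           # app token with write:statuses scope

def build_request(text: str, visibility: str = "unlisted") -> urllib.request.Request:
    """Prepare the authenticated request; a cron job would then send it."""
    body = json.dumps({"status": text, "visibility": visibility}).encode()
    return urllib.request.Request(
        f"{INSTANCE}/api/v1/statuses",
        data=body,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Beep boop: scheduled bot post")
# urllib.request.urlopen(req)  # uncomment to actually post
```

Running something like this from cron or a systemd timer is essentially what the Ktistec approach above bakes into the server itself.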
The Links
- Fediblock, a Tiny History – Artist Marcia X.
- A faux “Eternal September” turns into flatness – The Nexus of Privacy.
- Fediverse Migrations: A Study of User Account Portability on the Mastodon Social Network – a paper for the Internet Measurement Conference.
- IFTAS is collaborating with Bonfire on building moderation tools into the upcoming platform.
- Another update on how traffic from different platforms compares at the German news site heise.de.
- Lemmy development update for the last two weeks.
- An infographic and blog on how account recommendations work in Mastodon.
- Ghost’s weekly update on their work on ActivityPub.
- For Mastodon admins: a script to ‘restart delivery to instances that had some technical difficulties a while ago but are now back online’.
- Letterbook is a social networking platform built from scratch, currently under development, and is holding office hours for maintainers.
That’s all for this week, thanks for reading!
https://fediversereport.com/last-week-in-fediverse-ep-91/
Replies get an upgrade
Leisurely conversations and spirited debates, the conversation expands.Ghost (Building ActivityPub)
Tobias
3 weeks ago from Unimatrix Zero :: Primary Cluster :: Node One — (48, Leipziger Straße, Friedrichswerder, Mitte, Berlin, 10117, Germany)
If someone needs it, there is a Fediwall for the SFSCON in Bolzano next weekend. Really excited to get there again and be part of the Fediverse Track of the conference 😀
Sascha 😎 🏴 ⁂ (Fediverse)
3 weeks ago — (Rotwildgehege, Heuweg, Hardtwaldsiedlung, Oftersheim, Rhein-Neckar-Kreis, Baden-Württemberg, 68723, Germany)
[strong]Broken display of PeerTube posts?[/strong]
Is it really intended that PeerTube posts are displayed like this?
The video itself is fine. But the display of the links beneath it is a bit pointless.
#Friendica #Frage #Beiträge #Peertube #Fediverse #Darstellung !Friendica Support
Systemlosigkeit (feat. Pestalozzi) - Anarchonauten
Kinder sind HoffnungenNovalis
#Fediverse #Campact #DemokratieLiebe
On the two year anniversary of joining #Mastodon I'm super proud to share:
🚀 The Future is Federated - issue no.13 👩🚀
"The #Fediverse has empowered me to take back control from Big Tech. Now I want to help others do the same."
with mentions of @forgeandcraft (who made this awesome Fediverse t-shirt) @phanpy @ivory @Tusky @nimi @Roneyb @davidoclubb
#FOSS #FLOSS #Matrix #TheFutureIsFederated #blog #tech #activism #BigTech #socialmedia #education
The Fediverse has empowered me to take back control from Big Tech. Now I want to help others do the same.
The Fediverse has helped me regain control from the behavior modification empires of Big Tech. Now I want to help other people do the same.Elena Rossini