

I would love to ask the official opinion of @Gargron about the whole Fediblock situation and people who abuse it on both sides of the "trenches".
One side fights for their right to post things under "free speech"; the other side finds that "free speech" harassing, offensive, and lacking rules and moderation. Because of that, the Fediblock list is now being heavily used, multiple times a day, to "root out the problematic instances". Some even go so far as creating tools for hunting down instances that don't match the "perfect" idea of what an instance should be.
Your own instance was accused of "insufficient/lacking moderation" at some point and was not recommended due to that for the newcomers from Twitter. What is your opinion about this whole situation and are you/were you aware of how wide the problem is?
I would also involve people who are responsible for the whole ActivityPub protocol, as the Fediblock may be a consequence of the behaviour emergent from the problematic protocol/network design.

@cwebber @torgo @evan

There has to be a discussion on the matter: what Fediblock is good for, why it exists, why it should exist, and why it really shouldn't exist - at least not the way we know it.

Because I actually see both sides of this argument: Fediblock is a good short-term solution (a band-aid, pretty much), but an awful long-term one, because it's ripe for abuse and tends to create unhealthy power dynamics.

I also like the solution proposed by @deadsuperhero, where new instances are appended to greylists. Basically, it's a "federation is automatic, trust is not" approach.
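To make the "federation is automatic, trust is not" idea concrete, here's a minimal sketch (all names hypothetical, not any real server's implementation): unknown instances federate by default but land on a greylist, and full trust or a block is always an explicit local decision.

```python
from enum import Enum

class Trust(Enum):
    GREYLISTED = "greylisted"  # federates, but with limits (e.g. no trending, closer review)
    TRUSTED = "trusted"        # full federation
    BLOCKED = "blocked"        # no federation

class GreylistPolicy:
    """Hypothetical policy: federation is automatic, trust is not.
    A never-before-seen instance is greylisted, not silently trusted."""

    def __init__(self):
        self.known: dict[str, Trust] = {}

    def level(self, domain: str) -> Trust:
        # First contact with a domain records it as greylisted.
        return self.known.setdefault(domain, Trust.GREYLISTED)

    def promote(self, domain: str):
        self.known[domain] = Trust.TRUSTED

    def block(self, domain: str):
        self.known[domain] = Trust.BLOCKED

policy = GreylistPolicy()
assert policy.level("new.example") is Trust.GREYLISTED  # federation happens...
policy.promote("new.example")                           # ...trust is granted explicitly
assert policy.level("new.example") is Trust.TRUSTED
```

The point of the sketch is the default: the failure mode of "unknown = fully trusted" goes away without resorting to "unknown = blocked".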

@Gargron
i've heard some people propose ideas based on #OCAP that would make interactions between users and instances more consent-based, but i don't understand it enough myself
here's an article i've seen about it: https://ariadne.space/2019/01/18/what-would-activitypub-look-like-with-capability-based-security-anyway/
My understanding is that OCap, compared to traditional ACLs, is a matter of where the permissions management is applied. Whereas ACL is a top-down management of permissions from the platform, OCap sets permissions at the object level and delegates from there.

In a federated system, where objects are being passed around and fetched, this means that people are only able to access or interact with objects that grant them permission to do so. Those permissions can be changed by the author, of course, but because things are set up this way, it's far less likely that malicious actors can slip through the cracks and harass people.
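A rough sketch of that distinction (hypothetical names, not the proposal from the linked article): instead of the platform consulting a global ACL, the object itself holds unforgeable capability tokens, and the author delegates or revokes them per object.

```python
import secrets

class Note:
    """Hypothetical object-capability style post: rights live on the
    object as unforgeable tokens, not in a platform-wide ACL."""

    def __init__(self, text: str):
        self.text = text
        self._caps: dict[str, str] = {}  # token -> right ("read", "reply", ...)

    def grant(self, right: str) -> str:
        # The author mints a token and hands it to someone;
        # possessing the token IS the permission.
        token = secrets.token_hex(8)
        self._caps[token] = right
        return token

    def revoke(self, token: str):
        self._caps.pop(token, None)

    def reply(self, token: str, text: str) -> str:
        # The check happens at the object level, not top-down.
        if self._caps.get(token) != "reply":
            raise PermissionError("no reply capability for this object")
        return f"reply to {self.text!r}: {text}"

note = Note("hello fediverse")
cap = note.grant("reply")      # delegated by the author, per object
note.reply(cap, "hi!")         # works: holder of the capability
note.revoke(cap)               # author changes their mind; token is now dead
```

Without a valid token, a harasser has nothing to act on: there's no global permission table to fall through.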
I have been largely keeping out of the debate about #fediverse β€œgovernance”. As someone who has participated in OSS governance systems I see the value of them and I do think a #fediverse foundation could be a good idea. More importantly I think now that the overall #activitypub system is being used so much more there will be a need to revise and extend the protocol and for that we will need @w3c activity (workshop / working group).
The problem is, it's a social project, it's about people, so the question of governance is baked into the design of the system.

You designed the system on the assumption that there would be no bad actors - at least in the social sense - no trolls, no harassers, no fascists; everyone is polite and reasonable. Turns out, that's not true.

So the people have to pick up that slack - and deal with trolls, harassers, fascists and general morons the only way they know how - by organizing the servers that host them into centralized lists and blocking them wholesale.

It worked, for a while. But then again, it concentrated power over the many in the hands of the few - and it was also destined to fail someday. It's easy to add servers to a list. It's hard to investigate whether a server should be on the list. So most people do the easy thing, not the right thing. And, depending on the design, those two are not always the same, as we know.

@deadsuperhero @rnd @ZySoua @cwebber @evan @Gargron @w3c
you make a lot of assumptions that just aren't true here.

We knew that there would be blocks and filtering.

Many of us had experience with huge social networks already. I know in my case we'd dealt with just about every kind of abusive behavior imaginable on Identi.ca and StatusNet.

Blocks and filtering aren't part of the protocol because the protocol says what gets delivered, not what doesn't get delivered.
It may be out of scope for the #activitypub protocol -- which is probably a good idea -- but the conversation, and the resolution of that conversation, should there be consensus, on a "better" version of Fediblock needs to happen somewhere. So where is "somewhere", and who needs to be there?
It doesn't make much sense to make them part of AP, from a technical point of view. You need to design defensive protocols on the assumption that no node other than your own may be trusted. So AP must deliver everything, and your local node must decide whether a remote node is trustworthy for inbound or outbound messages.

What you could do is create a block recommendation message, but you still need local control over whether to act on it. Even such a...
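The two ideas above - AP delivers everything, and a block recommendation is advice the local node may or may not act on - could be sketched like this (all names hypothetical; this is not an existing ActivityPub activity type):

```python
from dataclasses import dataclass

@dataclass
class BlockRecommendation:
    """Hypothetical activity: a peer suggests blocking a domain.
    It is advice, never an instruction."""
    sender: str
    target: str
    reason: str

class LocalNode:
    """Assumes no other node is trustworthy by default: recommendations
    are queued for a local decision, never auto-applied."""

    def __init__(self):
        self.blocked: set[str] = set()
        self.pending: list[BlockRecommendation] = []

    def receive(self, rec: BlockRecommendation):
        self.pending.append(rec)  # stored, not acted on

    def review(self, rec: BlockRecommendation, accept: bool):
        # The only path to the blocklist is an explicit local choice.
        self.pending.remove(rec)
        if accept:
            self.blocked.add(rec.target)

    def accept_inbound(self, from_domain: str) -> bool:
        # The protocol delivers everything; the local node decides what to keep.
        return from_domain not in self.blocked

node = LocalNode()
rec = BlockRecommendation("friendly.example", "spam.example", "spam waves")
node.receive(rec)
assert node.accept_inbound("spam.example")       # recommendation alone changes nothing
node.review(rec, accept=True)                    # local admin decides
assert not node.accept_inbound("spam.example")   # now, and only now, it's blocked
```

The design choice this illustrates: the recommendation can be shared network-wide, but enforcement stays at the edge, so no central list holds power over anyone's node.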
I mean, even the decision to make the system decentralized is in and of itself VERY MUCH a decision about governance: you decided that the network and all of its subjects should by and large be independent of any one central authority's will.

You might also call such problems, decisions and solutions "political", because that's what they really are. And you can't escape them, they're literally everywhere humans are.

@deadsuperhero @rnd @ZySoua @cwebber @evan @Gargron @w3c
making the architecture decentralised was indeed a decision. We mirrored the topology of the Web and of email.
Yes, of course - but WHY did you do this?

Probably because you like the idea of the Internet being the place where people have the power to govern themselves, right? Which, yeah, that, I completely agree with that choice.