An experimental social media platform revealed that user moderation works: decentralized moderation can work well.

Related observation: Mastodon is decentralized.

#socialnetwork #socialmedia #moderation #censorship https://news.mit.edu/2022/social-media-users-assess-content-1116 /via @tchambers
I see one big problem based on my experience: on an open and generic network (like Facebook) there are huge masses of ignorant/misled/biased people, as well as a lot of trolls (people abusing the system for the sake of abuse).
When any kind of scoring system is based on the size of the population, these people may input a significant amount of bogus results (see the sketch after this post).
Building a web of trust may work, but the same problem exists: an ignorance-WoT would be huge.
The Fediverse enjoyed (and partly still does) a rather homogeneously educated and open community, but it's observable that a lot of unreliably trustable people have joined, and they keep joining. Community moderation this way may shift from reliable to unreliable, unless new people follow the local habits of taking care. [If there's such a thing here.]
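
A minimal sketch (in Python, with invented numbers) of that skew: when a score is a raw average over the whole population, a large block of unreliable raters dominates the result, while weighting each vote by a per-rater trust value dampens it. The 90/10 split and the trust values are illustrative assumptions, not data from any study.

```python
def raw_score(votes):
    """Plain population average: every rater counts equally."""
    return sum(v for v, _ in votes) / len(votes)

def trust_weighted_score(votes):
    """Average of votes weighted by each rater's trust value in [0, 1]."""
    total = sum(t for _, t in votes)
    return sum(v * t for v, t in votes) / total

# 10 careful raters judge an item accurate (+1); 90 unreliable raters flag it (-1).
votes = [(+1, 0.9)] * 10 + [(-1, 0.1)] * 90

print(raw_score(votes))             # -0.8: the mass of bad judgements dominates
print(trust_weighted_score(votes))  #  0.0: trust weighting cancels the skew
```

Of course this only moves the problem to where the trust values come from, which is the web-of-trust question.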
@grin Interesting, that makes sense. Any system that can be gamed will be, especially if there are incentives to do so. There are ways, programmatic and otherwise, to make gaming moderation systems more difficult or even impossible. You’re right: some will try to poison the data in a user-moderated setup. One vital piece will be filtering that poison data out. Not easy, but also not impossible.
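
One hedged sketch of what "filtering that poison data out" could look like: score each rater by how often they agree with the per-item majority, and drop the outliers before recomputing anything. All names and the 0.5 threshold are illustrative, and the approach is circular on its own: if enough bad raters join, the majority itself is captured, which is exactly the mass-skew problem above.

```python
from collections import defaultdict

def filter_poison(ratings, min_agreement=0.5):
    """ratings: list of (rater, item, vote) tuples, with vote in {-1, +1}."""
    # Majority verdict per item (ties count as +1 here, arbitrarily).
    by_item = defaultdict(list)
    for rater, item, vote in ratings:
        by_item[item].append(vote)
    majority = {item: (1 if sum(vs) >= 0 else -1) for item, vs in by_item.items()}

    # How often each rater agrees with the per-item majority.
    agree, seen = defaultdict(int), defaultdict(int)
    for rater, item, vote in ratings:
        seen[rater] += 1
        agree[rater] += (vote == majority[item])

    # Keep only ratings from raters who agree often enough with consensus.
    trusted = {r for r in seen if agree[r] / seen[r] >= min_agreement}
    return [(r, i, v) for (r, i, v) in ratings if r in trusted]

ratings = [("alice", "post1", +1), ("bob", "post1", +1), ("troll", "post1", -1),
           ("alice", "post2", -1), ("bob", "post2", -1), ("troll", "post2", +1)]
print(filter_poison(ratings))  # troll's ratings are dropped
```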
@grin
Also please note that I didn't even claim bad intentions. A large number of people simply aren't fit to judge "fake news" reliably. It's not gaming the system; it's rather a serious skew caused by large masses of people who lack the ability to judge.
But I agree that even just "reminding people to be alert" could make a _big_ difference.
@grin I wonder if anyone has done research on this, so we wouldn't have to simply guess.
@grin
As for prevention: many systems start from a "known good" set and build the web of trust on it. It depends on how good the base is: if it consists of responsible and unbiased people (or at least people who can _act_ unbiased), then the system may work pretty well. I'd really love to see it in practice and analyse the numbers after a while!
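
A minimal sketch of that "known good base" idea, in the spirit of TrustRank-style propagation: seed a few vetted accounts with full trust, then push damped trust outward over a vouching graph, hop by hop. The graph, the seed set, and the 0.5 damping factor are all invented for illustration.

```python
def propagate_trust(edges, seeds, hops=3, damping=0.5):
    """edges: dict mapping each account to the accounts it vouches for."""
    trust = {s: 1.0 for s in seeds}
    frontier = dict(trust)
    for _ in range(hops):
        nxt = {}
        for node, t in frontier.items():
            for neighbour in edges.get(node, ()):
                passed = t * damping  # trust decays at each hop
                if passed > trust.get(neighbour, 0.0):
                    nxt[neighbour] = trust[neighbour] = passed
        frontier = nxt
    return trust

edges = {"root": ["alice", "bob"], "alice": ["carol"], "carol": ["mallory"]}
print(propagate_trust(edges, seeds=["root"]))
# {'root': 1.0, 'alice': 0.5, 'bob': 0.5, 'carol': 0.25, 'mallory': 0.125}
```

Everything hinges on who gets into the seed set.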
@grin Phrases like "known good" and "homogeneously educated and open community" might collapse remarkably easily into exclusionary and problematic things, I would imagine.
@grin
I agree.
However, it has worked pretty well for the #GPG #WoT in practice. People pick sides, e.g. "virtual root node" people to trust, and this trust is propagated along the chains of trust (sketched after this post).
A similar system was StartCom's Community Certificate Authority, which used a comparable #WoT setup with "certified" agents as "collector and verifier nodes".
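
For reference, a rough sketch of GnuPG's classic web-of-trust validity rule: a key counts as valid if it is signed by one fully trusted key or by at least three marginally trusted ones (GnuPG's defaults, tunable via --completes-needed and --marginals-needed). The keys and trust assignments below are invented, and GnuPG's certification-depth limit is omitted for brevity.

```python
FULL, MARGINAL = "full", "marginal"

def key_is_valid(signers, ownertrust, completes_needed=1, marginals_needed=3):
    """signers: keys that certified the key in question;
    ownertrust: our trust level in each known key."""
    full = sum(1 for s in signers if ownertrust.get(s) == FULL)
    marginal = sum(1 for s in signers if ownertrust.get(s) == MARGINAL)
    return full >= completes_needed or marginal >= marginals_needed

ownertrust = {"alice": FULL, "bob": MARGINAL, "carol": MARGINAL, "dave": MARGINAL}
print(key_is_valid(["bob", "carol", "dave"], ownertrust))  # True: three marginals
print(key_is_valid(["bob"], ownertrust))                   # False: one marginal only
print(key_is_valid(["alice"], ownertrust))                 # True: one full signer
```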