
Do not underestimate this CSAM matter. As the recent attention from Facebook/Threads illustrates, big tech is starting to wake up to the danger the Fediverse represents to their regime of surveillance capitalism. Attacks on the Fediverse under the ostensible justification of "rooting out child pornography" are a threat we need to take seriously.
@Mastodon Migration @Electronic Frontier Foundation Yes, we need to take this seriously, but not in the way the big corporations would like to push on us, namely connecting to their APIs, which would scan all our images.
Since such #CSAM-scanning APIs will never be open and free (otherwise criminals could "test" material against them before publication), the only option is decent #moderation across the #fediverse. But decent means actually manually reviewing all photo/video material published on the servers. That, in turn, means instances should grow no larger than their real moderation capacity allows. Such manual moderation does not seem realistic on instances with tens or hundreds of thousands of accounts.
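For illustration only, here is a rough sketch of what "manually review everything" could look like as an upload hook on a small instance: every new attachment is held in a queue until a human approves it. None of this is a real Mastodon or Pixelfed API; the Attachment and ModerationQueue names are made up for the example.

```python
# Hypothetical sketch: uploaded media is hidden until a moderator approves it,
# instead of being sent to a third-party scanning API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attachment:
    id: int
    media_type: str          # e.g. "image/png", "video/mp4"
    visible: bool = False    # stays hidden until approved

@dataclass
class ModerationQueue:
    pending: List[Attachment] = field(default_factory=list)

    def enqueue(self, attachment: Attachment) -> None:
        # Called on upload: hold the media back for human review.
        self.pending.append(attachment)

    def approve(self, attachment_id: int) -> None:
        # A human moderator has looked at the media and releases it.
        for a in self.pending:
            if a.id == attachment_id:
                a.visible = True
        self.pending = [a for a in self.pending if not a.visible]

# A single-user or small instance can realistically keep up with this queue;
# one with tens of thousands of accounts cannot.
queue = ModerationQueue()
queue.enqueue(Attachment(id=1, media_type="image/png"))
queue.approve(1)
```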
Is there a guide to best practices for single-user instance operators who want to avoid unnecessary legal liability?