Items tagged with: harassment
Earlier this year, I detailed a simple technique for deanonymizing scam sites on CloudFlare by getting the back-end webserver to email you and reveal the server’s IP address (so you can forward your complaints to their ISP).
In a similar vein, I’d like to explain a simple technique for increasing the likelihood that your abuse reports on social media websites like Twitter get taken seriously.
Don’t Use the Easy Button
Every tweet (except your own) has a Report Tweet link attached to it. The user interface is different on web and mobile, but most people know how to find it.
The problem with this “easy button” is twofold:
- It’s low-effort and high-volume: a lot of people use it, so the signal-to-noise ratio isn’t very high.
- The “report tweet” workflow lets you select one of a few narrowly defined categories of abuse without giving you any space to explain why the content is abusive.
For example: A lot of anti-furry hate is a dogwhistle for ableist or queerphobic rhetoric. Without knowing that context, how do you expect the folks handling abuse reports for social media companies to make the correct choice?
Instead, File an Abuse Report
This is actually a separate thing, and the link to the harassment report form is here. (This is one of many forms you can file with Twitter’s support team.)
Not only do you get the same radio buttons as the Quick and Easy path, you also get to fill in a description of the problem.
A screenshot of the harassment report form.
The difference here isn’t theoretical; a concise explanation of the problem is the difference between your report being ignored and this:
https://twitter.com/SoatokDhole/status/1319983706858246146
If you have any friends who are frequent targets of social media harassment and whose reports aren’t taken seriously, share this article with them.
Art by Khia.
(That being said, I’m really sorry this is even necessary.)
What About Automation?
One motivation to still use the “easy button” when reporting abuse is the hope of triggering some automated mechanism (e.g. “if 3 different accounts report this tweet as abuse, suspend the reported account until someone can investigate”).
In that case, press that easy button to your heart’s content.
https://twitter.com/packonines/status/1068746663764860929
https://soatok.blog/2020/11/12/deplatforming-hate-and-harassment/
#abuseReporting #cyberbullying #harassment #hateSpeech #onlineAbuse #SocialMedia #Twitter
Update (2021-01-09): There’s a newer blog post that covers different CloudFlare deanonymization techniques (with a real-world case study).

Furry Twitter is currently abuzz about a new site selling knock-off fursuits and illegally using photos of the actual fursuits without their owners’ permission.
Understandably, the photographers and fursuiters whose work was ripped off are upset and would like to exercise their legal recourse (i.e. DMCA takedown notices) against the scam site, but there’s a wrinkle:
Their contact info isn’t in DNS and their website is hosted behind CloudFlare.
CloudFlare.
Private DNS registration.

You might think this is a show-stopper, but I’m going to show you how to get their server’s real IP address in one easy step.
Ordering the Server’s IP Address by Mail
Most knock-off site operators choose open-source eCommerce platforms like Magento, WooCommerce, or OpenCart, which usually have a mechanism for customers to register for an account and log in.

Usually this mechanism sends you an email when you register or authenticate.
(If it doesn’t, log out and use the “reset password” feature, which will almost certainly send you an email.)
Once you have an email from the scam site, you’re going to need to view the email headers.
With Gmail, you can click the three dots to the right of an email, then click “Show original”.
Account registration email.
Full email headers after clicking “Show original”.

And there you have it: the IP address of the server behind CloudFlare, delivered piping hot to your inbox in 30 minutes or less, or your money back.
That’s a fairer deal than any of these knock-off fursuit sites will give you.
Black magic and piss-poor opsec.
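If you’d rather not eyeball raw headers by hand, here’s a minimal sketch of the same idea in Python. It assumes you saved the message from Gmail’s “Show original” view as a .eml file; the file name is a placeholder.

```python
# Sketch: pull candidate origin IPs out of a saved email's Received headers.
# "message.eml" is a placeholder name for the downloaded message.
import re
from email import policy
from email.parser import BytesParser

with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

# Each mail hop prepends its own Received header, so the entries at the
# bottom of the list are the earliest hops -- i.e., closest to the
# originating webserver hiding behind CloudFlare.
for received in msg.get_all("Received", []):
    for ip in re.findall(r"\[?(\d{1,3}(?:\.\d{1,3}){3})\]?", received):
        print(ip, "<-", received.split(";")[0])
```

The earliest hop is typically the eCommerce install itself; a WHOIS or RDAP lookup on that address (more on this below) tells you whose network it lives on.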
What Can We Do With The Server IP?
You can identify who hosts their website. (In this case, it’s a company called Net Minders.)

With this knowledge in hand, you can send an email to their web hosting provider citing the Digital Millennium Copyright Act.
One or two emails might get ignored, but discarding hundreds of distinct complaint emails from different people is bad for business. This (along with similar abuse complaints to the domain registrar, which isn’t obscured by DNS Privacy) should be enough to shut down these illicit websites.
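If you want to script the “who hosts this IP?” step, here’s a minimal sketch using RDAP, the machine-readable successor to WHOIS. The IP address below is a documentation placeholder, and rdap.org is a public bootstrap service that redirects to the authoritative regional registry.

```python
# Sketch: look up the network (and likely abuse contacts) for an IP via RDAP.
import requests

ip = "203.0.113.10"  # placeholder -- substitute the address from the email headers

# rdap.org redirects to whichever regional registry (ARIN, RIPE, etc.)
# is authoritative for this address block.
info = requests.get(f"https://rdap.org/ip/{ip}", timeout=10).json()

print("Network:", info.get("name"), info.get("handle"))
for entity in info.get("entities", []):
    # Entities tagged with the "abuse" role are where complaints go.
    print(entity.get("handle"), entity.get("roles"))
```

Registries differ in how deeply they nest contact entities, so treat this as a starting point rather than a complete abuse-contact extractor.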
The more you know!

Epilogue
https://twitter.com/Mochiroo/status/1259289385876373504

The technique is simple, effective, and portable. Use it whenever someone props up another website to peddle knock-off goods and tries to hide behind CloudFlare.
https://soatok.blog/2020/05/09/how-to-de-anonymize-scam-knock-off-sites-hiding-behind-cloudflare/
#cloudflare #deanonymize #DNS #fursuitScamSites #informationSecurity #OnlinePrivacy #opsec
Yesterday, Mastodon was abuzz over a strange new scraper that seemed to be pulling people’s profiles and content streams into a platform designed around monetization. Dubbed Content Nation, the site’s combination of strange design, stock images, and focus on getting paid for posts raised more than a few eyebrows. Indeed, the site visually resembles something akin to a domain parking page, with an eye-watering layout and a strange mix of posts that don’t seem to fit anywhere.
- Holy stock art, Batman!
- I can’t make heads or tails of this.
Some long-standing admins poked and prodded at it before declaring that Content Nation was, indeed, an effort to scrape Fediverse content for profit. It shared Unlisted posts in search results, seemingly rejected blocks, and deleted material seemed to reappear on the website nearly instantly. All a person needed to do to verify this was enter their own user handle and watch their posts and profile get scraped out of nowhere.
A lot of people were angry, and readily pulled out the torches and pitchforks. The sad truth, though, was that this wasn’t a malicious scraper trying to crawl the network and make money off of people’s posts. It was some guy’s hobby project to build a service similar to micro.blog.
What is Content Nation?
Content Nation is, essentially, a feed reader for a small publishing community that just happened to be experimenting with ActivityPub. It’s a project developed by Sascha Nitsch, a backend developer and management consultant from Germany. Sascha is a relative outsider to the Fediverse who heard about the network, loved the idea behind it, and tried to integrate his site into it.
The site is, understandably, somewhat jarring in its appearance, because Sascha is primarily a backend developer, not a frontend designer. He was more interested in building out a robust list of features prior to doing any visual design work, because the platform was still taking shape. As a one-man operation, this kind of approach made the most sense to him.
“The site was and is free,” Sascha wrote, “no ads, no cookies nor tracking. I did not make any money with the federation; it’s a service for users on the platform. And it was never intended to be to make money with those content.”
How did the Fediverse React?
Several people came forward to point out to Sascha that his platform interoperated very, very poorly with Mastodon, and that he had not done sufficient research prior to launching his service. Until recently, Content Nation didn’t send a proper User-Agent header with its requests, so it was easy to mistake for a scraper.
Compounding things further, people didn’t realize that the site implemented Webfinger in its search function, allowing anyone to load remote content by putting an address into a search field. People would go to Content Nation, search for themselves, and inadvertently kick off a fetch request, leading them to believe they had just been scraped. In reality, this is how 99% of Fediverse servers operate by default.
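For anyone unfamiliar with the flow, here’s a rough sketch of what a handle search kicks off behind the scenes. The handle, domain, and User-Agent string are all made up; the descriptive User-Agent is exactly the kind of header Content Nation initially omitted.

```python
# Sketch: what a Fediverse "search by handle" does under the hood.
# The handle, domain, and User-Agent below are all illustrative.
import requests

handle = "alice@example.social"
domain = handle.split("@")[1]

# A descriptive User-Agent tells remote admins who is fetching their data;
# omitting it is what made Content Nation look like an anonymous scraper.
headers = {"User-Agent": "MySmallService/0.1 (+https://myservice.example/about)"}

# Step 1: Webfinger resolves the handle to an ActivityPub actor URL.
jrd = requests.get(
    f"https://{domain}/.well-known/webfinger",
    params={"resource": f"acct:{handle}"},
    headers=headers,
    timeout=10,
).json()

actor_url = next(
    link["href"]
    for link in jrd["links"]
    if link.get("rel") == "self" and "activity+json" in link.get("type", "")
)

# Step 2: fetch the actor document, which the searching server then
# caches and renders as a "search result".
actor = requests.get(
    actor_url,
    headers={**headers, "Accept": "application/activity+json"},
    timeout=10,
).json()
print(actor.get("preferredUsername"), actor.get("inbox"))
```

Typing your own handle into any Mastodon instance’s search box triggers essentially the same two requests; it only looks like scraping.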
When users sent GDPR takedown requests, Sascha would comply, but the system had nothing in place to block future fetches. Those same users were distraught to search for themselves once again, only to find their own data all over again.
The High Barrier of Entry for Fediverse Development
The shortcomings described above paint a picture: Sascha was building a free ActivityPub library for his project. While he managed to get the basic concepts down, there were still a lot of missing pieces that are essential for participating in the modern Fediverse. Unfortunately, a lot of those resources simply aren’t readily available.
Here’s the thing: if you were to take the ActivityPub specification from the W3C, and implement it as specified, you’d end up with something that wouldn’t correctly talk to any service in use today. Mastodon, and platforms designed to talk to it, have a dozen or so behaviors that are not actually in the spec at all: Webfinger, SharedInbox, Privacy Scopes, and Opt-Out for Search are just a few of them.
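To give one concrete example of these unwritten conventions: “Unlisted” visibility isn’t a field anywhere in the spec; it’s an addressing idiom. A Note that Mastodon treats as Unlisted looks roughly like this (all URLs here are illustrative):

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://example.social/users/alice/statuses/1",
  "type": "Note",
  "attributedTo": "https://example.social/users/alice",
  "content": "<p>Visible at its own URL, but kept off public timelines.</p>",
  "to": ["https://example.social/users/alice/followers"],
  "cc": ["https://www.w3.org/ns/activitystreams#Public"]
}
```

Swap the to and cc values and the very same Note becomes fully Public. An implementer who has never been told this idiom will happily surface Unlisted posts in public search results, which is exactly one of the things Content Nation was accused of.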
Many of these things are almost completely undocumented and can only be learned through lengthy conversations with people who have already built them. Even Mastodon’s own documentation says very little.

The majority of people dismissed Content Nation as simply a malicious attempt to slurp up their public and private content for profit. Even when Sascha tried to defend himself, he was ridiculed and mocked.
From Bad to Worse
Aside from simply blocking the domain and moving on, community members decided to have a little extra fun: attempting to “make the crawler crash”, sending angry emails to the service operator, and more. After some study of how the site worked, one person had the malicious idea to send a remote post containing child pornography to the site, before getting someone else to report Content Nation for Child Sexual Abuse Material.
To be clear: someone searched a list of known illegal material, loaded that remote content into Content Nation’s local cache, and then raised a red flag for someone else to file a report. Given that the server is in Germany, this could have been catastrophic: Germany’s CSAM laws stipulate a minimum one-year prison term for possession of this kind of material. Just look at the recent case of a teacher who found out that a pornographic video of one of her students was circulating. When she tried to turn in evidence to the police, she was arrested.
It’s a case that causes people to shake their heads: A teacher wanted to help a student whose intimate video was circulating at school and now has to answer in court for, among other things, distributing child pornography.

Following a complaint from the public prosecutor’s office, the Koblenz regional court overturned the decision of the Montabaur district court not to open main proceedings in this case. “The regional court, like the public prosecutor, considers the behavior of the accused to be criminal in principle,” said senior public prosecutor Thomas Büttinghaus. The educator is currently facing at least a year in prison – and with it the loss of her civil servant status.
Sascha’s life could have been turned upside down for absolutely nothing. Say what you will about how his website looked, or how his platform functioned: none of these things warranted such a disgusting level of abuse. Somebody basically tried to send a fledgling platform developer to prison, because they didn’t like what he was doing. A series of assumptions and misunderstandings escalated to this point.
Why is this Important?
Over the years, Mastodon’s user culture has become incredibly insular and hostile towards outsiders. Despite repeated claims that “People are just nicer here!” and “Everyone is just so welcoming!”, those preaching about privacy and consent are often the first to harass anyone doing something they don’t like. Reactions have extended to doxxing, death threats, DDoS attacks, and now, apparently, distribution of CSAM. Just the other week, Mastodon users were harassing a guy who built a protocol bridge that hadn’t even been enabled yet.
None of this is a first occurrence, either. People in the past have tried to build tooling for the Fediverse, from search engines to disposable account services for testing to indexes of verified accounts. People like Wil Wheaton were harassed off the network for not knowing the nuances of who appeared on a blocklist they shared. Some Lemmy instances have been flooded with CSAM as part of community retaliation efforts from other instances.
Mastodon’s user community has also long looked down its nose at other platforms such as Pleroma, due to a combination of platform rivalry, cultural clashes, personal squabbles, and an “us vs. them” mentality. It wasn’t so long ago that simply using Pleroma was considered a valid reason for blocking someone on sight, because good people only used Mastodon.
Source: FediDB.org
Mastodon still makes up the majority of the Fediverse at this point and acts as a de facto standard for ActivityPub. Many parts of the Mastodon community still threaten to block, doxx, or harass people simply for expressing a thought or opinion that stands in contrast to what the hive mind demands.
Even Damon has, at one point, received death threats from total strangers for holding a perspective on FediPact and Threads that other people didn’t agree with. He’s told me on several occasions that the Fediverse doesn’t feel like it was made for people like him, and that a good portion of that is due to Mastodon’s user culture.
Whatever this thing is, it’s not sustainable. A big aspect of Mastodon’s norms centers on a type of Puritanical culture that, half the time, isn’t even consistent with itself. We can’t advocate for a space and claim it’s so much better than everywhere else when so many people are subjected to this.
The Aftermath
A report was filed with Content Nation’s host, Hetzner, after CSAM was detected on the site. However, Sascha’s platform was only set up to cache remote content for an hour before purging it. The best conclusion we can draw, at the moment, is that someone deliberately set Content Nation up.
“I’m not sure it even was CSAM,” Sascha writes in a private chat, “I never saw the pictures, as they had already been deleted. The data was already removed from the cache, and the original server was down, so it wasn’t refreshed [on Content Nation].”
“My flat could have been raided, and I would not have an electronic device left to write this,” he added.
As of this writing, Content Nation has turned off all Fediverse integrations, and after this experience Sascha wants nothing more to do with the network. He has, effectively, been bullied off of it.
How can we avoid this happening again?
In researching this article and the situation behind it, I found several things that really, really need to change for the better. Operating in the modern Fediverse involves a long list of insider knowledge that isn’t really written down anywhere. Neither the ActivityPub spec nor Mastodon’s documentation explains how to implement these special pieces so that people writing new servers can be good actors.
As it stands today, no single piece of Fediverse software includes instructions to load a “worst of the worst” blocklist when setting up an instance, or to put a Webfinger search form behind a login page. What seems like common sense to some people is literally a new concept to others.
Culturally, we need to accept that most people coming into the community for the first time are operating with a lack of prior knowledge. We can’t simply cross our arms and say “You should have known better”, and socially punish people, when in fact there was no way for them to learn about it.
https://wedistribute.org/2024/03/contentnation-mastodons-toxicity/
#CSAM #Harassment #TrustSafety
Looking like we have a new Fedi scraper at contentnation.net, if you're interested in not being a part of that
Twitter rival Mastodon isn’t safe from online mobs either
The mass reporting and suspension of actor Wil Wheaton prompts the open-source platform to examine its moderation tools.
Megan Farokhmanesh (The Verge)
Mute and block only work if they stick to one account, which they don’t. Mine has about 10 that I know of so far.
Instance blocking helps on occasion, but again, you can register accounts on many instances.
Can you filter words? Sure.
Telling me to stay off socials is not a productive or useful answer. Thanks.
I am simply speaking up because others are experiencing prolonged and targeted harassment too and I want you to know it exists here.
Please read all the comments; some are useful, many are not. 😅
Here are my links if you'd like to support my work or join my email list:
I work to protect privacy, profit & peace of mind. Need a consult? https://lockdownyourlife.as.me
Being stalked/harassed: https://lockdownyourlife.com/7-steps-protect/
Join the email list: https://lockdownyourlife.mykajabi.com/thetwitter
Support my work: https://ko-fi.com/lockdownyourlife
#harassment #stalking #infosec #WomenInTech #techie #InformationSecurity
Protect Yourself from Stalking & Harassment | Lock Down Your Life
How you protect yourself depends on the type of stalker, the laws in your region, and the aggressiveness of the harasser/stalker.
lockitdown (Lock Down Your Life)