Pleased to share a page and explainer for the AI tarpit project Science is Poetry, with legal statement, rationale(s), and a few deployment notes:

https://julianoliver.com/projects/science-is-poetry/

The page may grow a bit. Just wanted to get it out the door.

#AI #bigtech

Title image for the Science is Poetry project page, featuring the word Counteroffensive followed by auto-generated babble on a purple background.

If you're interested in learning more about implementations of resistance in this era of unchecked Big AI, direct action strategies and the techno-politics therein, be sure to check out ASRG's site (https://algorithmic-sabotage.gitlab.io/asrg/) and give them a follow here on Mastodon (@asrg@tldr.nettime.org).

They've put a lot of heartbeats and neurons - human stuff - into this area.

A newcomer frantically lost in the Caves of Babble.

----
3.215.221.125 - - [16/Apr/2026:06:14:25 +0200] "GET /noodles/images/primigenous/orchiepididymitis/Lord/havent.png HTTP/1.1" 200 111459 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36" "-"
----

Do you have an unused domain that you would be happy to donate to a counter-offensive against unchecked & unregulated AI crawlers that scrape human-made content to simulate & deceive for profit?

If so, please reply to this post. Your domain would become an entrypoint to the AI tarpit & Poison-as-a-Service project below, allowing the concerned public to choose to use it on their sites and helping make the project more resilient to blacklisting.

https://julianoliver.com/projects/science-is-poetry/

#ai #bigtech #tacticalmedia

A bit over half a million page reads a day by crawlers right now. Just to say the server is doing some good work.
A screenshot of graphical webserver log reader output showing the cumulative hits the AI tarpit has received.
Thanks all for the fine domains! I've decided to spin up a new VM and do all the site configs and TLS chains for them at once - more efficient, less prone to error. I will get onto that tomorrow (my time) and report back here.
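If you want to do the same batch TLS step for your own domains, here's a minimal sketch assuming certbot with the nginx plugin - the domain names, email, and tooling are placeholders, adapt to whatever you actually run:

---
#!/bin/bash
# Sketch: batch-issue certificates for several domains in one pass.
# Assumes DNS already points at this box and an nginx server block
# exists for each domain; all names below are placeholders.
domains=(example-one.net example-two.org)

for d in "${domains[@]}"; do
    certbot --nginx -d "$d" -d "www.$d" --non-interactive --agree-tos -m admin@example.net
done
---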

I have only linked them here and on the landing page, and already it's gone nuts.

These are *solely* the new domains you've donated, all in one log. These do not pertain to the project domain.

I've started to harvest a list of AI crawler endpoint addrs for your blacklisting pleasure.

I'll try to keep it updated. I've been fastidious about only pulling addresses tied to the known crawler user agents, so as not to have any false positives (a rough sketch of the idea is below, after the example links).

https://scienceispoetry.net/files/parasites.txt

It is at the same path for all contributed domains.

For instance:

https://carrot.mro1.de/files/parasites.txt
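For anyone who'd rather harvest their own list, the idea is roughly: grep the access log for the known crawler user agents and keep only those source addresses. A minimal sketch, assuming an nginx-style combined log - the path and agent list here are just placeholders, not the project's exact config:

---
#!/bin/bash
# Harvest crawler source IPs from an access log, matching only requests
# that carry a known AI crawler user agent, to avoid false positives.
LOG=/var/log/nginx/access.log
AGENTS='GPTBot|ClaudeBot|Amazonbot|Bytespider|CCBot|PerplexityBot'

grep -E "$AGENTS" "$LOG" | awk '{print $1}' >> parasites.txt

# Deduplicate the list in place.
sort -u -o parasites.txt parasites.txt
---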

It's approaching DoS at this point. This is just one of the VMs, and just OpenAI's parasite.

Threading's holding up, but the rate limits and burst need some more tuning. Sending 429s now to ask them to play nice.
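If you're on nginx, the knobs in question look roughly like this - a sketch only, with placeholder numbers and zone name:

---
# Illustrative nginx rate limiting; tune rate and burst to taste.
limit_req_zone $binary_remote_addr zone=crawlers:10m rate=1r/s;

server {
    location / {
        # Allow a short burst, then answer excess requests with 429 Too Many Requests.
        limit_req zone=crawlers burst=20 nodelay;
        limit_req_status 429;
    }
}
---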

To think the www was built for people.

And here we are

Even faster now.

Again, these pages are randomly generated, and each line is a page request from a crawler.

To think of the energy expended at a global scale, the waste. All the money, water & minerals thrown at this. These AI companies are near DoS'ing the human web as they deep-sea trawl our content.

Computationally, infrastructurally, & culturally, it's an obscenity.


- Mum, if you made a chain out of all the endpoint addresses of AI crawlers, how far would it reach?

- All the way to the moon, darling. All the way to the moon.

https://scienceispoetry.net/files/parasites.txt

Here's a thing I did in a couple of minutes to ban all the IPs in parasites.txt server-side. You could of course REJECT rather than DROP to send a message (a more scalable ipset variant is sketched after the script).

---
#!/bin/bash
# Read parasites.txt and drop traffic from each listed address:
# IPv4 entries (containing ".") go to iptables, IPv6 entries (containing ":") to ip6tables.

while read -r parasite; do
    if [[ "$parasite" == *"."* ]]; then
        iptables -I INPUT -s "$parasite" -j DROP
    elif [[ "$parasite" == *":"* ]]; then
        ip6tables -I INPUT -s "$parasite" -j DROP
    fi
done < /path/to/parasites.txt
---
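If the list grows long, one iptables rule per address gets clumsy; a set-based match scales better. The same thing sketched with ipset - set names and path are placeholders:

---
#!/bin/bash
# Load parasites.txt into ipset sets and match each set with a single rule,
# instead of inserting one iptables rule per address.
ipset create parasites4 hash:ip -exist
ipset create parasites6 hash:ip family inet6 -exist

while read -r parasite; do
    [[ "$parasite" == *"."* ]] && ipset add parasites4 "$parasite" -exist
    [[ "$parasite" == *":"* ]] && ipset add parasites6 "$parasite" -exist
done < /path/to/parasites.txt

iptables -I INPUT -m set --match-set parasites4 src -j DROP
ip6tables -I INPUT -m set --match-set parasites6 src -j DROP
---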

Actual hits are dropping slightly, but more data is pulled from the tarpit day on day. This is reflected in a higher proportion of HTTP 200s - so fewer bad requests, less reaching for what isn't there; they just want the madness.

Unclear why this has changed.
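The status-code mix is easy to check for yourself; a one-liner sketch against a combined-format access log (the path is a placeholder):

---
# Count responses by HTTP status code; in the combined log format the status is field 9.
awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn
---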

Graphical console output rendered on the server showing daily hits since April 9, averaging at a bit over half a million a day.

My log analysis shows that these AI crawlers swarm content to get around rate limiting: with many endpoints, each one can be limited to sane human defaults while their automation still harvests content at massive scale from the same source in little time.
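The swarming shows up if you count distinct source addresses per user agent - roughly like this, again against a combined-format log with a placeholder path:

---
# For each user agent, count how many distinct source IPs it arrives from.
# Splitting on double quotes makes $6 the user agent; the IP is the first
# word of the line.
awk -F'"' '{split($1, a, " "); print a[1] "\t" $6}' /var/log/nginx/access.log \
  | sort -u \
  | awk -F'\t' '{n[$2]++} END {for (ua in n) print n[ua], ua}' \
  | sort -rn | head
---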

I noticed, however, that (for unknown reasons) Anthropic started reducing the number of crawler endpoints, tapering down the traffic from them. So I doubled the rate limit to 2/s. This added over 100k hits to the logs in a day.

Graphical console output showing the daily traffic volume, as hits, for the project Science is Poetry. One can see traffic rising to over 500k per day, and then tapering off as Anthropic reduced crawler endpoints, and then back up again when the webserver's rate limit was doubled.

Nearly a month later, you would've thought the crawlers would have given up by now: dropped off, blacklisted the IPs, or perhaps even the domains themselves.

And yet no. As I tentatively guessed, thanks to your donated domains (and the people linking to them from their sites), it has only grown.

I don't expect it to run this hot for the long term, but yesterday's hit count (these are almost 100% reads of randomly generated pages by AI crawlers) was near 1M.

A section of a screenshot of log analysis output on the server showing almost 1M hits for the 6th of May.

For any naysayers out there as to how effective all this is, or could be, some recent research shows you can do a lot with a little:

https://arxiv.org/abs/2510.07192

Researchers found that a very small corpus of poisoned content has largely the same impact regardless of model size or the amount of clean data it was trained on:

"We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data."