Once again my personal web server is on its knees, this time thanks to Amazon, which is probing a non-existent health check endpoint with rare intensity. In the Apache access log, it looks like this:
<domain>:80 15.177.10.187 - - [25/Sep/2025:22:04:50 +0000] "GET /ok HTTP/1.1" 404 457 "-" "Amazon-Route53-Health-Check-Service (ref 0c1421fb-b0fe-4dbd-af57-dc05457a9d2e; report http://amzn.to/1vsZADi)"
<domain>:80 15.177.26.71 - - [25/Sep/2025:22:04:50 +0000] "GET /ok HTTP/1.1" 404 457 "-" "Amazon-Route53-Health-Check-Service (ref 0c1421fb-b0fe-4dbd-af57-dc05457a9d2e; report http://amzn.to/1vsZADi)"
<domain>:80 15.177.42.155 - - [25/Sep/2025:22:04:50 +0000] "GET /ok HTTP/1.1" 404 457 "-" "Amazon-Route53-Health-Check-Service (ref 0c1421fb-b0fe-4dbd-af57-dc05457a9d2e; report http://amzn.to/1vsZADi)"
<domain>:80 15.177.30.95 - - [25/Sep/2025:22:04:50 +0000] "GET /ok HTTP/1.1" 404 457 "-" "Amazon-Route53-Health-Check-Service (ref 0c1421fb-b0fe-4dbd-af57-dc05457a9d2e; report http://amzn.to/1vsZADi)"
<domain>:80 15.177.50.106 - - [25/Sep/2025:22:04:50 +0000] "GET /ok HTTP/1.1" 404 457 "-" "Amazon-Route53-Health-Check-Service (ref 0c1421fb-b0fe-4dbd-af57-dc05457a9d2e; report http://amzn.to/1vsZADi)"
I followed the link provided and submitted a report, but I'm guessing reports are only handled during US business hours. To be able to access my other websites, I took the targeted vhost offline, and Amazon immediately switched to another one. I filed a report, took it offline, and so on. Four vhosts down the line, the web form's rate limiting prevented me from submitting yet another report.
I am seething with rage and I want AWS IPs off my web server, but I'm out of my depth in system administration. I see I could do it with iptables (by compiling a list of IP blocks from the JSON provided by Amazon) and I was hoping fail2ban would have a ready-made jail, but it seems to be meant for repeated authentication failures, not for crawler abuse.
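For reference, the iptables recipe I found looks something like this (a sketch I haven't run; it assumes curl, jq and ipset are installed, and that ROUTE53_HEALTHCHECKS is the service tag Amazon uses for these probes in its published list):

  # pull Amazon's published IP ranges and keep only the health checker's prefixes
  curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
    | jq -r '.prefixes[] | select(.service=="ROUTE53_HEALTHCHECKS") | .ip_prefix' \
    > /tmp/route53.txt
  # load them into an ipset and drop everything coming from it
  ipset create route53 hash:net
  while read net; do ipset add route53 "$net"; done < /tmp/route53.txt
  iptables -I INPUT -m set --match-set route53 src -j DROP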
Neither of these solutions feels workable; would anyone have an easier method to cut AWS off my box?
Thanks a bunch!
Hypolite Petovan
•silverwizard
•@Hypolite Petovan You probably just need a rule for fail2ban - then use a couple of retries. I can, uh, probably make that more cogent if you need.
like this
Shiri Bailem and Hypolite Petovan like this.
Hypolite Petovan
•maxretry=?
silverwizard
•@Hypolite Petovan yeah, in this jail config. I use 2 for endpoints like this, once is happenstance, twice is enemy action imo.
So yeah, maxretry=2 is what I meant
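The jail I have in mind is shaped something like this (a sketch; the filter name and log path are placeholders for whatever you set up):

  [apache-healthcheck]
  enabled  = true
  port     = http,https
  filter   = apache-healthcheck
  logpath  = /var/log/apache2/*.log
  maxretry = 2
  bantime  = 3600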
Hypolite Petovan
•*.log in /var/log/apache2 AND all its sub-folders. I tried /var/log/apache2/**/*.log, but it doesn't pick up the files in /var/log/apache2 itself; I'm not sure what else to try.
silverwizard
•Hypolite Petovan
•/var/log/apache2/*/*.log would only match log files in the first-level subfolders.
silverwizard
•@Hypolite Petovan Oh, yeah, possibly. I think that might be special case globbing - and I don't know how fail2ban globs.
Based on jail.conf's manpage I think it might be a problem if you're rotating logs constantly in a glob.
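Maybe list both levels explicitly instead of relying on ** - as far as I know logpath accepts several newline-separated patterns, so something like this might do it (untested):

  logpath = /var/log/apache2/*.log
            /var/log/apache2/*/*.log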
Can you explain how your apache logs are configured? I'm just not sure what patterns to use.
Hypolite Petovan likes this.
Hypolite Petovan
•/var/log/apache2, but there is a root access.log in that folder too, for all the requests that go to the top-level domain, i.e. not what I call a vhost even if it's also one in the Apache configuration. If push comes to shove, I can probably ignore it in favor of the subfolder ones.
silverwizard
•Hypolite Petovan likes this.
Hypolite Petovan
•@silverwizard I did it! I put on my big sysadmin pants, I straightened my Apache2 vhost configuration, neatly stored access logs in subfolders, then created my user-agent matching pattern in fail2ban and banned my first IPs for an hour.
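The filter ended up being roughly this (a sketch from memory; the file name is mine, and the regex assumes the vhost-prefixed access log format shown above):

  # /etc/fail2ban/filter.d/route53-healthcheck.conf
  [Definition]
  # first token is "<domain>:<port>", second is the client IP
  failregex = ^\S+:\d+ <HOST> .+"Amazon-Route53-Health-Check-Service[^"]*"$
  ignoreregex =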
I did it, but man, did it take a lot out of me. I'm reasonably satisfied though. Now I don't entirely remember which domains I took down to save the CPU. My sites-available folder is full of dead domains.
silverwizard
•@Hypolite Petovan Oooof - sysadminning is the worst! You deserve a break.
But hey - glad it's working out! And in the future, if you need more help, feel free to ask me to throw more effort at these things (you've put a lot of valuable volunteer time into something I value - and I'm happy to return the favour).
But also - hey! Nice. Hopefully it helps.
Hypolite Petovan likes this.
Hypolite Petovan
•silverwizard likes this.
silverwizard
•Hypolite Petovan likes this.
Brad Koehn
•Hypolite Petovan likes this.
Hypolite Petovan
•Robert "Anaerin" Johnston
While it's not recommended, fail2ban has the "apache-badbots" config you can enable. You could then add a line/rule to block "Amazon-Route53-Health-Check-Service".
There's some details here: https://askubuntu.com/questions/1116001/block-badbot-with-fail2ban-via-user-agents-in-access-log
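If I remember right, the stock filter defines a badbotscustom list you can extend with the offending user agent (a sketch; double-check your distro's copy of the file):

  # /etc/fail2ban/filter.d/apache-badbots.conf (excerpt, locally extended)
  badbotscustom = Amazon-Route53-Health-Check-Service|EmailCollector|WebEMailExtrac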
Hypolite Petovan likes this.
Hypolite Petovan
•Robert "Anaerin" Johnston
•Hypolite Petovan likes this.
silverwizard
•Hypolite Petovan
•Shiri Bailem likes this.
Hypolite Petovan
•Shiri Bailem likes this.
Brad Koehn
If you have ufw installed, it's just
  ufw deny from <ip or cidr> to any
Or in Apache you can
Hypolite Petovan likes this.
Hypolite Petovan
•Hypolite Petovan
@Shiri Bailem Nginx doesn't have a built-in PHP module like Apache does, so it has to use PHP-FPM. I have considered it, but I would like to replicate the system where each vhost is served by its own user to enforce separation of concerns.
I let a few people host stuff on my server, so if one of them gets hacked, I want to avoid lateral movement and limit the damage to that user's home directory. I'm not sure how to do this with php-fpm pools without exposing the server to resource depletion, since resources are allocated per pool and not globally.
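Concretely, what I have in mind is one pool per user, something like this (a sketch; the user name, socket path and PHP version are made up):

  ; /etc/php/8.2/fpm/pool.d/alice.conf
  [alice]
  user = alice
  group = alice
  listen = /run/php/alice.sock
  listen.owner = www-data
  listen.group = www-data
  pm = ondemand
  pm.max_children = 5            ; caps this pool only, not the server as a whole
  pm.process_idle_timeout = 10s

pm.max_children caps each pool separately, which is exactly the per-pool rather than global allocation that worries me.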
Then again, I'm already fighting resource depletion, so I'm not sure what I'm afraid of.
It's been difficult lately; I have had issues wrapping my head around situations I face at work and off work, and I feel utterly inadequate. I might take you up on your offer to chat, even if little is achieved technically!
Brad Koehn
•httpd.conf.
Hypolite Petovan likes this.
Hypolite Petovan
@Shiri Bailem Containers would have been smart, including to have multiple versions of PHP running at the same time depending on when the websites were initially published. I spent way too much time trying to fix deprecation issues in archived websites.
Alas, that ship has sailed too.
Shiri Bailem likes this.