Your system is out of memory, so it kills random processes. You need more memory; possibly enlarging the swap partition is enough.

@Michael Vogel Thanks for the response.

I have 3 GB free, and the worker consumes it all (and I think that's not expected behavior). The PHP limit is set to 256 MB.

I just saw it in htop - a single worker process consumes all the memory and then gets killed by the OOM killer.

You don't have any swap. Working without any swap can be problematic.

It worked out well for 2 years so far 😀

So is it actually expected behavior for one worker thread to consume 3 GB of RAM? I assumed it was a bug, and I would like to help debug it somehow.

Just for the sake of keeping all the information together - it's not really a memory-heavy server while this single worker thread is not running. I don't understand the magic behind the worker thread, but if it's trying to download 3 GB of data, that could probably be some kind of attack, couldn't it?

@Schmaker Yes, it might indicate that the worker tried to download a large video file or similar and then parsed it in memory. The worker is a CLI process, therefore you need to look in the proper php.ini file. Here on my #Devuan Linux it is /etc/php/7.4/cli/php.ini. Have you edited this file?
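
For example, you can check which memory_limit the CLI actually uses with a one-liner (the output shown here is only an example):

php -r 'echo ini_get("memory_limit"), PHP_EOL;'
256M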

cc: @Michael Vogel

@Roland Häder @Michael Vogel @Schmaker @Friendica Support

#Devuan

If you are using PHP in Apache (or similar) and at the CLI, remember to check the alternatives settings so you are running the correct version (the same version) of PHP.

E.g. update-alternatives (the command can vary between different flavours of Linux).
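
A quick sanity check for the web side is a throwaway script in the webroot (remove it again afterwards) to compare against the CLI output; the file name here is only an example:

<?php
// info.php - prints the PHP version Apache/FPM is actually running
echo PHP_VERSION;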

It's installed on Arch Linux, with PHP 8 in both the console and the Apache server.

php -i | grep -i version
PHP Version => 8.3.1
PHP Version => 8.3.1
Or decrease the number of workers in Friendica, or the number of Apache or FPM workers.
As I noted in the comment above - it seems like a single process consumes all the memory even though the PHP limit is set to 256 MB.

Thanks for all the support so far, you guys are great.

Still, I'm stuck at the moment and can't find a way out. Is there maybe a way to pinpoint the problematic worker task and shut it down? I get around a hundred PHP OOM kills daily.

Or maybe it's time to actually report it as a bug?

@Schmaker I found a `LimitStream` class in #GuzzleHTTP, which is the library Friendica uses. We could switch over to that class and set a limit on read bytes (configurable!). The default should be the default for Friendica, too.
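
A minimal sketch of the idea, assuming a Guzzle client, a $url to fetch and a hypothetical $maxBytes value taken from configuration:

use GuzzleHttp\Client;
use GuzzleHttp\Psr7\LimitStream;

$client = new Client();
// stream the response instead of buffering it completely in memory
$response = $client->get($url, ['stream' => true]);
// LimitStream caps how many bytes can be read from the underlying stream
$limited = new LimitStream($response->getBody(), $maxBytes);
$body = $limited->getContents();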

@Roland Häder That would mean my worker would no longer go nuts consuming 3+ GB of RAM, because it would stop itself after the configured limit, if I understand it correctly?

I still don't get why the worker does not respect the php.ini limit in this single process, that's what puzzles me the most.

@Schmaker Yes, that is puzzling me the most, too.
@Schmaker In Friendica's code I found Network/HTTPClient/Factory/HttpClient. There is a line saying $resolver->setMaxResponseDataSize(1000000); which seems to set the maximum size to 1000000 bytes, i.e. about 1 MB (not 1 GB).
@Schmaker So Friendica uses mattwright\URLResolver, a wrapper around cURL, and not GuzzleHTTP's client and stream to download files.

@Schmaker If I'm not mistaken, that class doesn't loop over the read data blocks (e.g. to abort when the maximum size is reached). It seems to download the whole URL and only then check whether header + body are bigger than the maximum allowed? Or is there a loop somewhere that does that check?

See its private function fetchURL(), for instance. There is only a closure and nothing else; I also checked a few methods invoking this one, and there is no "limiting" loop there either.
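
For comparison, a limiting loop with plain cURL would look roughly like this - just a sketch, not Friendica's actual code; $url and $maxBytes are assumed:

$body = '';
$received = 0;
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) use (&$body, &$received, $maxBytes) {
    $received += strlen($chunk);
    if ($received > $maxBytes) {
        return -1; // a return value different from strlen($chunk) makes cURL abort the transfer
    }
    $body .= $chunk;
    return strlen($chunk);
});
curl_exec($ch);
curl_close($ch);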

Is it safe to change that line just for testing purposes, to find out if it actually resolves the issue?
@Schmaker Yes, you can touch that line for testing purposes.
Thank you, I'll test that out

It seems like setting $resolver->setMaxResponseDataSize(100000); did the trick.

I'll leave it set like this through the night and we will see if there are any more OOM-killer e-mails from my VPS provider.

Thanks for the help Roland

Did some process watching and it seems like none of the workers reached more than 20% of memory (roughly 800 MB).
@Schmaker I think as a first step that static value needs to be made configurable, and then maybe the code needs to be rewritten to use a more flexible class, like the LimitStream class GuzzleHTTP offers.
@Schmaker But I guess it won't solve the problem; see my other findings in the code. And as I said: I might be wrong, maybe I overlooked a "limiting" loop.
Might this be related? Someone is flooding #ActivityPub with fake servers?
@Schmaker I'm not sure if that is related to your OOM messages. I rather suspect a few files are close to that limit, which causes the OOM. So I guess we need a new configuration option here, which is trivial to add.
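
Something along these lines in the HttpClient factory - a sketch only, assuming the usual Friendica config accessor is available there:

// read the limit from the configuration instead of hard-coding it
$maxSize = DI::config()->get('performance', 'max_response_data_size', 1000000);
$resolver->setMaxResponseDataSize($maxSize);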
@Schmaker Oh, I was too lazy, I have to add another change.

If having the problem fixed within one day means being lazy, then I'm totally fine with that 😀

Thanks, man - seriously

So, after the worker managed to get through all of the tasks without any bigger issues, the problem showed up again and my worker is going nuts again.

Can't say for sure, but couldn't the worker be trying to process deferred tasks without the specified limit?

Tried to reduce worker count to 1 and tasks per worker to 1 as well, no change.

Getting roughly 2.5 GB of memory usage for the problematic worker before it gets killed. Maybe it's expected behavior and I'm keeping you busy for no reason?

It definitely seems weird to me. What's even weirder is that everything worked for roughly 2 days with like 6 OOM kills daily (exactly what I got for the last 2 years or so 😀).

@Schmaker Have you updated your instance to the latest develop code? There is a new configuration entry, performance.max_response_data_size, which is set to 1000000 by default. You can override it in your local configuration file.
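
For the record, overriding it looks roughly like this in config/local.config.php (merged into your existing settings; the value here is only an example):

<?php

return [
    'performance' => [
        'max_response_data_size' => 1000000,
    ],
];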
@Roland Häder
I didn't, I only changed the hard-coded value.
@Roland Häder
Just to be clear - I'm running the latest release, not the git develop branch.
Either the server deferred the task or it managed to get it through - the last OOM kill was 3 hours ago.

So, just to close this case somehow - it seems like the problem has disappeared for now. The heaviest workers are sitting at 1-1.5 GB RAM max - I consider that normal when requests are limited to 1 GB.

If it shows up again, I'll open a bug report, as that may be something harder to troubleshoot.

Once again, thanks guys