

On friendica.utzer.de I have a very long queue. I know I have to give the server more resources or change the setup somehow, but I am not sure whether it is still normal that there are so many of these contact-cleanup (kind of) jobs in the queue.

Message queues
99705 - 53703

It has been growing since I switched from RC to stable a few weeks ago (or was it?).

ID        Command              Parameter                                Created              Priority
83998430  RemoveUnusedAvatars                                           2022-01-19 02:00:40  40
90694651  ContactDiscovery     https://friendica.utzer.de/profile/????  2022-02-24 22:01:37  40
90694744  UpdateContact        716132                                   2022-02-24 22:07:35  40
90694913  UpdateContact        688567                                   2022-02-24 22:19:07  40
90694914  UpdateContact        1281                                     2022-02-24 22:20:08  40

The first 999 jobs seem to be of this kind.

Anyone seeing such behavior?

!Friendica Support #Friendica

10 more jobs again than when I copied the numbers.
And 100 more. Maybe some posts for delivery now.
My node is much smaller, so not to this extent, but yes, I'm seeing the rising queue numbers as well. Annoyingly, it looks to me as if the former option not to import inactive threads was (un)intentionally removed, since I now see threads completed that were intentionally left incomplete before.
Just ignore all stuff with the level 40 and above. These updates are done regularly and can be done at any time.

@Michael Vogel I know.

In the list, would lower-priority tasks be shown further up? The list only shows 999 items, so I just want to know whether it is sorted by priority, ascending.

Yeah, I noticed this kind of thing when I had to manually start the worker after a power failure. I always thought it was just part of what Friendica did. It doesn't do it all that much on my end, then again I have it running on Kubernetes and finally got the cron job working with it.
Growing each day.
Message queues
105381 - 77385

About +5000 a day.

Now:

Message queues
117429 - 95014
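For what it's worth, the growth between the snapshots posted in this thread can be checked quickly. Since the exact snapshot times were not posted, any per-day rate is only a rough estimate:

```python
# First number of each "Message queues" pair quoted in this thread;
# the exact snapshot times were not posted, so we only compute deltas.
snapshots = [99705, 105381, 117429]

# Growth between consecutive snapshots.
deltas = [b - a for a, b in zip(snapshots, snapshots[1:])]
print(deltas)  # → [5676, 12048]
```

The second delta of ~12k is consistent with the reported "+5000 a day" if the snapshots are roughly two days apart.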
I also run a Friendica instance, and I have never seen such high queue numbers. For me, the queues are always around 0, unless the contact matching with the Fediverse is running. In that case there are about 1,000 requests in the queue - but even those are processed in about 2 minutes.
Yes, high numbers are rare, but I have a somewhat busy node there. It has some news bots, some not-safe-for-work accounts, and some pretty active users.

Set the loglevel to at least "Notice", then look for entries containing "Load:". These lines can tell you something about the performance of the system.

On squeet.me, for example, it looks like this:
Load: 4.21/20 - processes: 348/16/8052 - jpm: 1,244 (0:0, 20:208, 40:4) - maximum: 15/30

This means that the system is currently processing 1,244 jobs per minute (which is a really good number), and 15 out of 30 workers can be used. 348 jobs are deferred, 16 workers are currently running, and 8052 jobs are waiting. These are extremely good numbers.
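As a rough sketch, such a "Load:" line can be picked apart with a small script. The field layout and names here are inferred from the sample line above, not taken from Friendica's source, so treat them as assumptions:

```python
import re

# Parse a Friendica worker "Load:" log line. The field meanings below
# (deferred/running/waiting etc.) follow the explanation in this thread;
# they are inferred, not official names.
LOAD_RE = re.compile(
    r"Load: (?P<load>[\d.]+)/(?P<maxload>\d+)"
    r" - processes: (?P<deferred>\d+)/(?P<running>\d+)/(?P<waiting>\d+)"
    r" - jpm: (?P<jpm>[\d,]+)"
)

def parse_load(line: str) -> dict:
    m = LOAD_RE.search(line)
    if not m:
        raise ValueError("not a Load: line")
    d = m.groupdict()
    return {
        "load": float(d["load"]),               # current load average
        "max_load": int(d["maxload"]),          # configured load limit
        "deferred": int(d["deferred"]),         # deferred jobs
        "running": int(d["running"]),           # workers currently running
        "waiting": int(d["waiting"]),           # jobs waiting in the queue
        "jpm": int(d["jpm"].replace(",", "")),  # jobs per minute
    }

line = ("Load: 4.21/20 - processes: 348/16/8052 - "
        "jpm: 1,244 (0:0, 20:208, 40:4) - maximum: 15/30")
print(parse_load(line))
```

Feeding it the squeet.me sample line yields jpm=1244, waiting=8052, and so on, which makes it easy to graph the numbers over time.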

A maximum of 7 workers is much too low; please increase it to 20 or even 30. The maximum load of 8 is also much too low; you should easily be able to set it to 20.
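If I remember correctly, both limits can also be set in config/local.config.php instead of the admin panel. The option names below are from memory and may not match your Friendica version, so double-check them against your admin panel before copying:

```php
<?php
// Sketch only: option names recalled from memory, verify in your install.
return [
    'system' => [
        // maximum number of parallel worker processes
        'worker_queues' => 20,
        // don't start new workers above this load average
        'maxloadavg' => 20,
    ],
];
```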
I increased the maximum load to 12 too; 12 cores, so 12 should be fine. Or not?