@social elephant in the room After 15 years administering a dedicated server, I have yet to find a way to set up a backup strategy that I can then forget. If it requires any manual intervention from me or interrupts the service I host for even the shortest time, I will eventually drop it.

Now, even with an all-in-one script like backup-manager, I ended up with an LZMA-compressed backup that I mysteriously couldn't open on a different machine, and it wasn't uploaded to the FTP destination I had configured, probably because that destination was short on space, but I was never notified of the issue.

So I really don't know what to do about it. I've looked into MariaDB incremental backups to reduce the load of the backup process, but they can't easily be automated, and restoring a full backup requires as many steps as there were incremental backups, which makes it absolutely unworkable.
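For the record, the part that put me off, replaying each incremental by hand, can at least be scripted. A rough sketch, assuming mariabackup is installed and the backups sit under a /backups directory with a base/ plus inc-NNN/ layout (both the paths and the naming are placeholders, not a real setup):

```python
#!/usr/bin/env python3
"""Rough sketch: replay a chain of mariabackup incrementals onto the full
backup so that a single --copy-back restores everything afterwards."""
import subprocess
from pathlib import Path

BACKUP_ROOT = Path("/backups")   # assumed layout: base/ plus inc-001/, inc-002/, ...

def run(args):
    print("+", " ".join(args))
    subprocess.run(args, check=True)   # stop immediately if any step fails

def prepare_chain():
    base = BACKUP_ROOT / "base"
    # First prepare the full backup itself.
    run(["mariabackup", "--prepare", f"--target-dir={base}"])
    # Then apply each incremental onto it, oldest first.
    for inc in sorted(BACKUP_ROOT.glob("inc-*")):
        run(["mariabackup", "--prepare",
             f"--target-dir={base}", f"--incremental-dir={inc}"])
    # After this, 'mariabackup --copy-back --target-dir=...' restores in one
    # step (server stopped, empty datadir).

if __name__ == "__main__":
    prepare_chain()
```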

@social elephant in the room Thank you for your recipe; however, I personally don't have a separate machine to attempt a restore on. It would also require manual intervention on my part, which I wouldn't do.

And I would tune out the daily emails real quick. I enjoy Uptime Robot because it only sends an email when the monitored websites are down (or up again).

Backing up files locally is easy enough to automate with rsync, but I still need a practical solution for the database that doesn't involve a full SQL dump, since the performance impact is sizable. I also need a solution that keeps a few days' worth of backups and then cleans up after itself. backup-manager is supposed to do so, but the absence of any alert when the process doesn't complete as expected is unnerving.
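In case it helps, here is the shape of what I have in mind for the file side: dated rsync snapshots hardlinked against the previous one, a short retention window, and an email only when something fails. Everything here (paths, retention count, the local MTA) is an assumption rather than a working config:

```python
#!/usr/bin/env python3
"""Sketch: dated rsync snapshots with hardlinks, a few days of retention,
and a mail only on failure. Paths, retention and mail setup are assumptions."""
import datetime
import shutil
import smtplib
import subprocess
from email.message import EmailMessage
from pathlib import Path

SOURCE = "/var/www/"                  # what to back up (placeholder)
DEST = Path("/srv/backups/www")       # where snapshots live (placeholder)
KEEP = 7                              # how many daily snapshots to keep

def alert(subject, body):
    # Assumes a local MTA is listening on localhost:25.
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "backup@localhost", "root@localhost"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def snapshot():
    today = DEST / datetime.date.today().isoformat()
    previous = sorted(p for p in DEST.iterdir() if p.is_dir() and p != today)
    cmd = ["rsync", "-a", "--delete"]
    if previous:
        # Unchanged files become hardlinks to the newest existing snapshot.
        cmd.append(f"--link-dest={previous[-1]}")
    cmd += [SOURCE, str(today)]
    subprocess.run(cmd, check=True)
    # Keep only the newest KEEP snapshots.
    for old in sorted(p for p in DEST.iterdir() if p.is_dir())[:-KEEP]:
        shutil.rmtree(old)

if __name__ == "__main__":
    try:
        DEST.mkdir(parents=True, exist_ok=True)
        snapshot()
    except Exception as exc:   # silent on success, one mail on failure
        alert("backup failed", repr(exc))
        raise
```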

> I have yet to find a way to set up a backup strategy that I can then forget

I have been doing backups at work as part of my duties for years now, and I am pretty sure no such thing exists. Everything fails and gets stuck, health checks included.

At home I also use backup-manager, but only to generate the backups; I then use my own script to transfer them (and another one to send a notification if the resulting files aren't produced).
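For what it's worth, that second script doesn't need to be much: check that today's file exists and isn't suspiciously small, and mail root if not. A minimal sketch, with the archive path, naming scheme and size floor all made up, and assuming a local MTA plus the mail command:

```python
#!/usr/bin/env python3
"""Sketch of the 'shout if the files aren't there' idea: verify today's
archive exists and is non-trivially sized, and mail root otherwise."""
import datetime
import subprocess
from pathlib import Path

ARCHIVE = Path("/var/archives") / f"backup-{datetime.date.today():%Y%m%d}.tar.xz"
MIN_BYTES = 1_000_000   # anything smaller is treated as a failed run

if not ARCHIVE.exists() or ARCHIVE.stat().st_size < MIN_BYTES:
    body = f"Expected backup {ARCHIVE} is missing or suspiciously small."
    # Hand the message to the local mail command (assumes an MTA is set up).
    subprocess.run(["mail", "-s", "backup check failed", "root"],
                   input=body.encode(), check=False)
```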

Broken backups still happen, though. I was in a similar situation when my machine developed bad RAM: everything seemed to work for a while, and it kept producing archives just fine, but they were broken.

Heroic exploits indeed. Good work on the recovery.

There are two adages that spring to mind:
1) you can never have enough backups
2) if you don't test your backups, they don't work

Glad you’re back! As it’s the first of the month, my cloud servers are running full backups instead of the usual incrementals. It takes much longer, and surprises me every time I go into my chatops channel and find them missing. Your story resonates.
Whatever generates the ActivityPub version of this post strips all the formatting, including the numbers and carriage returns. 😵‍💫

@Brad Koehn ☑️ Where do you see the formatless version of this post? It looks fine on both hachyderm.io and diaspora.koehn.com.

@Александр @social elephant in the room

> I am pretty sure no such thing exists. Everything fails and gets stuck, health checks included.

I am not surprised, but I am still disappointed. It just doesn't work with my own mindset. I have occasionally tried to set something up, but each time the drawbacks ended up outweighing even the prospect of recovering from a complete loss.

I've come to live with the risk and accept the potential losses in advance. It still doesn't feel great, but it just doesn't work otherwise.

As an aside, I'm almost surprised at how relatively easy it was to migrate servers once I had access to both machines at the same time. The biggest hurdles were replicating the non-system users from my old server to the new one and figuring out which PHP and Apache2 modules I was using. The full database backup went without a hitch, rsync worked wonderfully, and switching DNS to the new server's IP was quick.
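If anyone has to do the same inventory, the user and module lists can be pulled automatically. A small sketch, assuming a Debian-style box where regular users start at UID 1000 and apache2ctl and php are on the PATH:

```python
#!/usr/bin/env python3
"""Sketch of the inventory step: list non-system users and the Apache/PHP
modules currently in use, to compare against the new machine."""
import pwd
import subprocess

def regular_users(min_uid=1000, max_uid=60000):
    # /etc/passwd entries in the usual human-user UID range.
    return [u.pw_name for u in pwd.getpwall() if min_uid <= u.pw_uid < max_uid]

def command_output(args):
    return subprocess.run(args, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print("users:", ", ".join(regular_users()))
    print("apache modules:\n", command_output(["apache2ctl", "-M"]))
    print("php modules:\n", command_output(["php", "-m"]))
```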

I’m using Ivory as a client.

@Brad Koehn ☑️ Oof, this doesn't look good. Since both Mastodon and Diaspora kept the formatting, I'm going to blame your client for stripping the HTML from the post.