Backups - what are your ingredients?

InceptionHosting (Hosting Provider, OG)

What is your preferred method of backup?

Do you use an AIO 3rd-party service, or do you script it yourself? If so, what is your preferred transport method, e.g. rsync, rsync+ssh, scp, FTP, SFTP, a remote mount, or something else?

Do you push or pull?

How fixed on your current method are you?

I am designing something at the moment and would appreciate input.


Comments

  • Xsltel (Hosting Provider)

    I created my own bash scripts to repair & optimize all MySQL databases, then dump them to a specific folder.
    BorgBackup then backs up and dedups both the dumped MySQL and the files incrementally over SSH (that's for the cPanel & DirectAdmin shared servers).
    As for the VPS nodes, I take an LVM snapshot and dd it through BorgBackup to dedup the image and save only the changed blocks on Hetzner storage.
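
    Roughly, the core of the scripts looks like this (a minimal sketch; the repo URL, host, VG/LV names and sizes are placeholders, not my real setup):

    ```bash
    # Sketch only -- repo URL, host, VG/LV names and sizes are placeholders.
    # 1) Shared servers: repair/optimize, dump every database, let Borg dedup and push over SSH.
    mkdir -p /backup/mysql
    for db in $(mysql -N -e 'SHOW DATABASES' | grep -vE '^(information_schema|performance_schema|sys)$'); do
        mysqlcheck --repair "$db"
        mysqlcheck --optimize "$db"
        mysqldump --single-transaction "$db" > "/backup/mysql/$db.sql"
    done
    borg create --compression zstd \
        ssh://backup@storage.example.com/./borg-repo::"files-{now:%Y-%m-%d}" \
        /backup/mysql /home

    # 2) VPS nodes: LVM snapshot, stream the block device into Borg so only changed blocks are stored.
    lvcreate --snapshot --size 10G --name vm101-snap /dev/vg0/vm101
    dd if=/dev/vg0/vm101-snap bs=4M status=none | \
        borg create --compression zstd \
        ssh://backup@storage.example.com/./borg-repo::"vm101-{now:%Y-%m-%d}" -
    lvremove -f /dev/vg0/vm101-snap
    ```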

    BorgBackup & Hetzner have saved me a lot of money compared to what JetBackup or Acronis would cost for the number of servers I manage.

    PS: forgot to mention, I do push backups, as I have some servers behind NAT without a dedicated IP to connect to.


  • Since 2013, when I learned it: rsync+ssh

  • Solaire (OG)

    I wrote a small wrapper around borgbackup that is scheduled by cron (usually between 2 and 3 AM). It dumps all MySQL databases and backs up any folders I configure (usually /var/www and Docker volumes).

    Backups are pushed to a Borg repo on a remote VPS, which pushes them on to my home server later that night (rsync). Then there's a slab + slice I manually keep in sync (rsync) every once in a while, to prevent a compromised Borg server from wiping all my backups. That sync happens after I run backup verification, so no matter what, I always have a working backup. I'm about to add Google Drive as another destination for the verified/manual backup, but am unfortunately lacking the time to set it up. For what it's worth, backups have never failed on me so far in the current setup anyway.
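
    The scheduling and verification side is nothing fancy; roughly this (the schedule, repo URL and retention are illustrative, not my actual values):

    ```bash
    # Illustrative only -- repo URL, schedule and retention are made up for the example.
    # /etc/cron.d/borg-backup: run the wrapper in the 2-3 AM window.
    #   17 2 * * * root /usr/local/bin/backup-wrapper.sh

    REPO="ssh://borg@backup-vps.example.com/./repo"

    # After the nightly `borg create`, prune old archives and verify the repo,
    # so a broken backup is caught before it propagates to the home server.
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$REPO"
    borg check --verify-data "$REPO"
    ```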

  • Real men don't do backups...... but when they do, they just rsync.......

  • InceptionHosting (Hosting Provider, OG)

    Thanks for the input; I suspect the trend will be dump-and-push for the majority.


  • I use rsync+ssh, pulling from shared-host accounts, with hardlinks for a sort-of incremental setup.
    Also testing rclone at the moment...
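
    The hardlink trick is just rsync's --link-dest pointed at the previous day's copy (a rough sketch; the host and paths are placeholders):

    ```bash
    # Rough sketch -- host and paths are placeholders.
    # Pull today's copy, hardlinking unchanged files against yesterday's snapshot.
    TODAY=$(date +%F)
    YESTERDAY=$(date -d yesterday +%F)
    rsync -a --delete \
        --link-dest="/backups/site/$YESTERDAY" \
        user@shared-host.example.com:public_html/ \
        "/backups/site/$TODAY/"
    ```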

  • @Xsltel said:
    I created my own bash scripts to repair & optimize all MySQL databases, then dump them to a specific folder. BorgBackup then backs up and dedups both the dumped MySQL and the files incrementally over SSH...

    BorgBackup & Hetzner have saved me a lot of money compared to what JetBackup or Acronis would cost for the number of servers I manage.

    Right now I do cPanel backups to Google Drive and S3. I'm wondering why you're not using the backup feature of cPanel? Is there a specific reason, or am I missing something related to performance?


  • MichaelCee (Hosting Provider, OG, Services Provider)

    I mix one part past mistakes, one part huge regrets and two parts water.

    I do rsync+ssh. I'm fixed on it because it's easy (when done right) and it's what I know; other options are either not available yet or I'm unfamiliar with them.

  • Xsltel (Hosting Provider)

    @verjin said:
    Right now I do cPanel backups to Google Drive and S3.

    I'm not the kind of person who trusts Google, Amazon or any other non-encrypted cloud service with my personal or clients' data.
    I prefer to use my own encrypted dedi servers to store backups and live data.

    @verjin said:
    I'm wondering why you're not using the backup feature of cPanel? Is there a specific reason, or am I missing something related to performance?

    Have you tried using it on a 24/7 busy server? The slowness is unbearable; I believe it's broken by default to push JetBackup sales.
    Once I tried BorgBackup I never looked back. Performance-wise and feature-wise it's better. For example, if 100 users have the same WordPress core version installed, why should I back up all those files again and again?
    BorgBackup's dedup feature solves that by storing one version plus each account's modified files.
    Also, the checksum verification is a great feature that cPanel backups lack.


  • Ympker (OG, Content Writer)

    I mostly back up WP sites to remote storage, so it's gotta be FTP/SFTP, WebDAV or GDrive to be compatible with most WP backup plugins. If I have access to JetBackup, I back up remotely to GDrive (free) and then to my bigger Koofr storage.

  • I use bash scripts to run BorgBackup over SSH, pushing file backups to a backup server.

  • Naked without backups since February, when I forgot to renew my storage server. My provider does snapshots for me, so I think I'm safe. Or am I? :#


  • Daily backups (databases hourly) via a custom shell script, pulled using rsync+ssh / rclone to:

    2 storage VPSes,
    1 B2,
    1 home server.

    1. Duplicacy to Wasabi
    2. Duplicacy from Wasabi to OneDrive (read-only key for Wasabi; see the sketch below)
    3. BorgBase Borg backup
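
    The Wasabi-to-OneDrive replication in item 2 is roughly this (a hypothetical sketch; the snapshot ID, bucket and storage names are invented, and credentials are supplied separately via duplicacy's own config):

    ```bash
    # Hypothetical sketch -- snapshot ID, bucket and storage names are invented.
    cd /srv/backup-workdir
    duplicacy init mybox s3://us-east-1@s3.wasabisys.com/my-backups   # primary storage (Wasabi)
    duplicacy add onedrive mybox one://duplicacy-backups              # secondary storage (OneDrive)

    duplicacy backup                            # push local data to Wasabi
    duplicacy copy -from default -to onedrive   # replicate Wasabi snapshots to OneDrive
    ```
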
  • Unixfy (OG)

    All of my AWS boxes have an EBS snapshot lifecycle policy set up. WordPress sites are backed up with UpdraftPlus to S3, and some servers are also backed up to S3 / BunnyCDN. I've been looking into something like Veeam for a while to automate backups, but haven't had the time to actually implement it.

  • @Xsltel I will explore BorgBackup and move to it soon. Thanks for the detailed review of the performance issue.

    I am really happy that I joined this community. I originally joined LET for this, but later found it's all about trolling there; then I accidentally got onto LES, thinking it would be just the same as LET, but LES is something really good. Thanks @AnthonySmith

    P.S. I got retired, aka banned, because of trolling here :lol:


  • someshzade (Hosting Provider, OG)

    Rsync + Hetzner


  • Current lazy mode: Proxmox weekly VM dumps pushed with rsync. Should replace rsync with Borg.
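
    In other words, roughly this (the VM ID, paths and target host are placeholders):

    ```bash
    # Placeholder VM ID, paths and host -- just the shape of it.
    vzdump 101 --mode snapshot --compress lzo --dumpdir /var/lib/vz/dump
    rsync -a --partial /var/lib/vz/dump/ backup@storage.example.com:/backups/proxmox/
    ```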

    Nothing important on the VPSes.

  • seanho (OG)

    Great topic!

    I've been using a little-known tool called burp for several years, but may move to borg in the future. Incremental, with daily/weekly/etc. history. Block dedup on the server, which helps with a few Windows clients for which I'm backing up the whole disk, or when large media collections are passed from one client to another. LUKS on the server. Push from the client, triggered by cron. The server then pushes encrypted copies to an offsite storage VPS.

    I suppose one weakness of this strategy is that if the backup server gets compromised, both the backups and the offsite copy could be wiped (though not the clients). Also, the server can see all client data (I believe this is unavoidable if I want cross-client dedup).

    Ever since moving to ansible for all config, I no longer back up software installs or config, just user data and DBs. DBs are dumped and compressed using gzip --rsyncable for incremental/dedup backup.
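
    e.g. (schematic; the database name and output path are placeholders):

    ```bash
    # Schematic only -- database name and output path are placeholders.
    # --rsyncable keeps the compressed output stable across small changes,
    # so the incremental/dedup backup of the dump stays cheap.
    pg_dump mydb | gzip --rsyncable > /srv/backups/db/mydb.sql.gz
    ```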

    I've often pondered if it'd be worthwhile to use zfs/btrfs snapshots for very frequent (like hourly) incremental backups. I seem to recall it was William who had something like that?

  • Mostly just my own backup script for DB dumps + cron job + rclone.

  • Mason (Administrator, OG)

    I made a little writeup with a script that I use for all my personal stuff over on HT - https://hostedtalk.net/t/automated-backups-via-rclone-cloud-storage-gdrive-s3-dropbox-ftp-more/3406.

    Basically I just tarball important directories and upload them to an encrypted Google Drive via rclone. Not the most elegant or efficient solution, but it gets the job done.
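
    Roughly (paths and the remote name are placeholders; 'gdrive-crypt' stands in for an rclone crypt remote layered over Google Drive):

    ```bash
    # Placeholders throughout -- 'gdrive-crypt' is an rclone crypt remote
    # previously configured over Google Drive with `rclone config`.
    STAMP=$(date +%F)
    tar czf "/tmp/backup-$STAMP.tar.gz" /etc /var/www /home/user
    rclone copy "/tmp/backup-$STAMP.tar.gz" "gdrive-crypt:backups/$STAMP/"
    rm "/tmp/backup-$STAMP.tar.gz"
    ```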


  • I print everything out as base64 and use OCR to scan it back in.


  • mikho (Administrator, OG)

    @WSS said:
    I print everything out as base64 and use OCR to scan it back in.

    I’ll tell Greta ....


  • @mikho said:

    @WSS said:
    I print everything out as base64 and use OCR to scan it back in.

    I’ll tell Greta ....

    Do us all a favor and send a few dozen refugees to live with her rich parents.


  • @AnthonySmith said:
    What is your preferred method of backup?

    It depends.

    Personal stuff is Restic and Rsync.net for long-running things like desktops, laptops, and servers. Config tarballs or a git repo for the experimental stuff. My personal setup is really ad hoc because stuff comes and goes, and I'd rather not have to remember to add it to or remove it from the backup system.
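
    For the long-running machines that's essentially this (a minimal sketch; the repo host, path and directories are placeholders, with Rsync.net treated as just an SSH/SFTP endpoint):

    ```bash
    # Placeholders -- host, repo path and directories are illustrative.
    export RESTIC_REPOSITORY="sftp:user@storage.example.net:restic-repo"
    export RESTIC_PASSWORD_FILE=/root/.restic-pass

    restic init                      # once, to create the repository
    restic backup /etc /home /srv    # nightly, via cron or a systemd timer
    restic forget --keep-daily 7 --keep-monthly 6 --prune
    ```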

    Professionally, Urbackup for server files, Barman for Postgres, and Ansible in a git repo for configs. Getting an alert when a backup fails, seeing job status, and having a central scheduler to keep from hammering the network and disk subsystem are pretty key.

    I used to use Tapeware/Yosemite Backup, but that was several jobs ago. It was nice, and the licensing was incredibly cheap: $1,300 for the initial license with unlimited nodes plus a tape library, and then $300 per year for support. :open_mouth: It was also cross-platform, so I could run the main server on RHEL/CentOS or Windows with the agent on Linux, Windows, FreeBSD, MacOS, whatever.

    At this point, Linux (Fedora, RHEL/CentOS, Alpine, Misc.), FreeBSD, and OpenBSD support is the only support that matters to me.

    Do you use an AIO 3rd-party service, or do you script it yourself? If so, what is your preferred transport method, e.g. rsync, rsync+ssh, scp, FTP, SFTP, a remote mount, or something else?

    Aside from how SSH keys are added, I really like how Rsync.net is non-interactive SSH (SCP/SFTP) only.

    Professionally, it's whatever the program wants to do since my preference is to have the data pulled from the server via agents, SSH, or whatever.

    Do you push or pull?

    Personally, push.

    Professionally, pull. Pulling is easier to coordinate.

    How fixed on your current method are you?

    Personally, I'm pretty set. Restic is incredibly full featured, it's compiled, and it does everything I want it to do.

    Professionally, I'm open.

    Key requirements:

    • Self-hosted/On prem.
    • Centrally coordinated.
    • Pull from clients.
    • PostgreSQL support.
    • Linux (Fedora, RHEL/CentOS), FreeBSD support.

    Nice to have requirements:

    • REST JSON based API for scripting.
    • Flexible way to send notifications (status updates, errors, warnings, etc.). I try to keep my email from filling up with junk, so sending notifications to other things like Slack, Mattermost, XMPP, Telegram, Nagios-like monitoring, etc. is nice.
    • OpenBSD support.
    • MariaDB support.

    Interesting, but I'm not sure it's an actual requirement:

    • Ability to run scripts/progs.

    Overall, being compiled is key as I'm over bootstrapping interpreters onto my stuff. There are enough good options out there for compiled languages these days. (D, Rust, Go, Haskell, Ocaml, Zig, C, C++)

    @WSS said:
    I print everything out as base64 and use OCR to scan it back in.

    Why not QR codes?

    @seanho said:
    I've been using a little-known tool called burp for several years, but may move to borg in the future. Incremental with daily/weekly/etc history.

    How is burp? I've come across it, but there's always some reason it gets eliminated that I can't remember.

    Have you looked at Restic? It's basically Borg except in Go instead of Python. There might be some more differences, but I haven't noticed/missed any after switching.

  • MikeA (Hosting Provider, OG)

    I find that archiving and then an SSH transfer from cron works fine...

  • ouvoun (OG)

    I’m a fan of using dockerized Restic, so I can tweak a few env variables and have an automatic backup running in a few minutes. I back up to Wasabi.
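
    Something like this, give or take (the bucket, keys and repo password are placeholders; Wasabi is just S3-compatible as far as restic is concerned):

    ```bash
    # Placeholders -- bucket, keys and the repo password are illustrative.
    docker run --rm \
      -e RESTIC_REPOSITORY="s3:https://s3.wasabisys.com/my-backup-bucket" \
      -e RESTIC_PASSWORD="change-me" \
      -e AWS_ACCESS_KEY_ID="placeholder-key-id" \
      -e AWS_SECRET_ACCESS_KEY="placeholder-secret" \
      -v /srv/data:/data:ro \
      restic/restic backup /data
    ```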


  • comi (OG)

    The main storage box, set up with snapshots and sitting on a private network, pulls from all the public stuff, usually with rsync over SSH.

    That way it is vulnerable pretty much only to fire, which you can solve by uploading selected files, encrypted of course, to public cloud/clouds.

    Very economical, reasonably bulletproof. Fully bulletproof is quite a jump in cost and complexity.

    And of course, this doesn't do backup on demand; for that you need some push capability. I use a storage VPS with WebDAV as a DMZ for when I need to push. WebDAV because I can upload with curl and a password if need be; the reasoning being that a compromised WebDAV password is less painful than a compromised SSH password.
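
    The push itself is as simple as it sounds (the host, path and credentials are placeholders):

    ```bash
    # Placeholders only -- host, path and credentials are illustrative.
    curl -T urgent-dump.tar.gz -u backupuser:webdav-password \
        https://dmz-storage.example.com/webdav/incoming/
    ```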
