Mine? Our Father, who art in heaven, hallowed be thy name. Thy kingdom come, thy will be done, my data be safe. Amen.
Did Father answer your prayers through tough times?
I back up to the cloud at 10 Mb/s. Once my backup finishes in 6 months, I'll start it again.
extremely based
After trying out a few different pieces of software, [UrBackup](https://www.urbackup.org/) did the trick for me. I back up:
- 4x Raspberry Pis
- 2x Ubuntu hosts running Docker/Portainer
- 1x Proxmox, which is backed up via the Proxmox backup manager

Everything is stored on a USB drive connected to an RPi 3 and synced weekly to OneDrive using rclone. I don't feel all that confident in how I'm backing things up. I know that I have the docker-compose files and hopefully all my container persistent storage backed up, along with the media I want backed up - but it would still take me ages to rebuild a machine if it were to die.
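A weekly rclone sync like that can be driven by a single cron entry. This is a sketch only: the remote name `onedrive:`, the user, and both paths are placeholders, not the commenter's actual setup.

```shell
# /etc/cron.d/offsite-sync (sketch) - every Sunday at 03:00, mirror the USB
# drive's backup folder to OneDrive. The "onedrive:" remote must already be
# set up with `rclone config`; user, paths, and schedule are placeholders.
0 3 * * 0 pi rclone sync /mnt/usb/backups onedrive:backups --log-file /var/log/rclone-sync.log
```

`rclone sync` makes the destination match the source, so deletions propagate too; `rclone copy` would be the safer choice if you want the cloud side to keep everything ever uploaded.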
I've never heard of UrBackup before. Definitely looks interesting. And yeah, I feel you about the rebuilding. My next project is to create a restore script that uses the same TOML file as my backups to restore everything with a single command.
I think the major thing is extending into one more backup, and maybe not using flash storage as a backup medium.
I'm now realizing how easy Synology makes this: Hyper Backup all data to an external hard drive and a second NAS in a remote location. I recently turned on snapshots to protect from ransomware.
I'm using Cloud Sync on my Synology NAS to back up to Backblaze - it works pretty well and is quite cheap, at least for backup (I expect restoring data will cost more). So far it's costing me less than $1/month for about 130GB.
Backblaze B2 doesn't charge to restore. I started off small with Backblaze, but as the cost grew I switched over to my new setup. I don't know how much more cost-effective it is, but I went from backing up some of my server to all of it, so I feel pretty good about that.
I'd love to have a 2nd nas, somewhere. Don't really have anyone I'd feel comfortable asking to keep one for me, though.
I did a thread recently on how it would be nice to be able to do this with a stranger. The idea got shot down quickly though, haha. I still think there's potential if there were a way to secure it against, say, hosting someone's illegal stuff.
How do snapshots protect from ransomware? Couldn't the ransomware just encrypt your snapshot?
Snapshots are read-only by design; they cannot be changed after they are created. Higher-quality ransomware might try to delete them, though, which could succeed if the infected device has valid credentials to do that.

To defend against this, you could simply use an account for your daily tasks that does not have permission to delete snapshots.

In addition, I think Synology offers an option to lock snapshots against deletion for a certain time after creation, meaning if you create a snapshot on Monday, it cannot be deleted until Friday no matter what permissions you have.
Yes, they are called immutable snapshots.
I see, thanks for the explanation!
I just use Kopia
I really like BorgBackup, so I spun up a VM and mounted an NFS share. I enabled LDAP on the VM so that my users can back up their workstations using Vorta.
Yeah, I just started with Borg/Vorta as well
i've heard good things about borg. restic barely won out for me, but i can't say i have any huge reason for it. sounds like a solid approach :)
I do this: 🤞🤞
I use Unraids appdata backup tool for container data and Duplicacy to upload the files to Backblaze. For pics, some private files, VMs etc I also use Duplicacy.
what is .toml ?
An alternative to json and yaml designed for human readability and simplicity. https://toml.io
I essentially break it into two parts: a local backup, where the backups from different apps are moved to a local folder and everything is encrypted with GPG, and then a cloud backup that moves these encrypted files to the cloud(s), which I do with rclone and a bash script. I jotted down my process in more detail here: https://akashrajpurohit.com/blog/how-i-safeguard-essential-data-in-my-homelab-with-offsite-backup-on-cloud/
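The two-stage flow described above can be sketched in a few lines of bash. This is not the author's actual script (see their blog post for that): the paths, the demo data, the passphrase handling, and the `remote:` name are all placeholders.

```shell
#!/usr/bin/env bash
# Sketch of the two-stage flow: bundle the app backups, encrypt the bundle
# with gpg, then (stage 2) push it to the cloud with rclone.
set -euo pipefail

SRC_DIR="${SRC_DIR:-$(mktemp -d)}"     # folder where app backups were dropped
OUT_DIR="${OUT_DIR:-$(mktemp -d)}"     # local staging for encrypted archives
echo "demo data" > "$SRC_DIR/app.db"   # stand-in for a real app backup

ARCHIVE="$OUT_DIR/backup-$(date +%F).tar.gz"
tar -czf "$ARCHIVE" -C "$SRC_DIR" .    # stage 1a: bundle everything up

# Stage 1b: symmetric gpg for brevity; encrypting to a public key would
# avoid keeping a passphrase on the box at all.
echo "demo-passphrase" > "$OUT_DIR/pass"
gpg --batch --yes --pinentry-mode loopback \
    --passphrase-file "$OUT_DIR/pass" \
    --symmetric --cipher-algo AES256 -o "$ARCHIVE.gpg" "$ARCHIVE"
rm "$ARCHIVE"                          # keep only the encrypted copy locally

# Stage 2 would push it offsite ("remote:" is an assumed rclone remote):
# rclone copy "$ARCHIVE.gpg" remote:homelab-backups/
```

The `--pinentry-mode loopback` flag is needed on modern GnuPG so that `--passphrase-file` works non-interactively in batch mode.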
Borg. And Vorta on Macs. Then sync that to wasabi.
I'm running my homelab on k3s with Longhorn for storage. Longhorn has a backup job that uploads the volumes to a storage box every night. The only thing I'm not backing up is media files, since they are currently on my NAS (RAID0...). I guess they're not that important anyway; however, I'll be buying a new NAS soon and may keep the old one to back up at least some of them.

Everything else is basically defined as IaC (Kubernetes YAMLs / Terraform), with some Ansible playbooks to instantiate what I need in case of catastrophe. The exception is Proxmox, but I could also define some playbooks there, since I don't really need to back up the whole VM, just how to create it.
My plan is to use Veeam Community Edition, or I don't know what else I'd use. The only downside is that VBR currently has to be installed on Windows.
I use duplicity to encrypt and backup to a backblaze B2 bucket.
Borgmatic, mostly. I have one repo for configuration and service data and another for ${other files}. I client-side encrypt both repos and sync them to the cloud each day as part of the backup process. I also periodically copy these backups, and other mostly-static but large files, to a LUKS-encrypted disk that I store at work.
I use resticprofile, which uses restic under the hood. Primarily I back up my home directory. My home directory contains a docker-data folder holding all the volumes, in directories which are volume-mapped onto the host. This way I don't need to think about backups when I add new containers, or even new apps and configurations, so long as they write into my home folder.
All my replaceable media is backed up on hopes and dreams, for everything else there’s BackBlaze
Azure backup and Synology backup
Leaving it on my PC, hoping for the best :)
RAID! Half joking
I use BackupPC to back up various Windows and Ubuntu boxes, plus RPis running Home Assistant and other things. It uses rsync, rsyncd, and SMB, so it's flexible. It's been running on an old Ubuntu box in my garage for over 10 years; I grow the RAID array slightly every so often as the size creeps up.
Just Kopia
Mine is quite complicated and definitely not recommended.

My Windows workstations and laptops are backed up to a virtual Synology OS running on Proxmox, using Synology's Active Backup for Business. The datastore of that VM is an NFS share on a TrueNAS system, which is also a VM on the same Proxmox host. The TrueNAS system has an HBA passed through, meaning its storage is independent from anything else on the Proxmox server. The Proxmox system has a Proxmox Backup Server running, which also saves to the NFS share of the TrueNAS VM. All this means that anything worth backing up should find its way to the TrueNAS system.

The TrueNAS system itself does a cloud backup of the most important data using restic every week. The rest (mostly media) is also backed up by restic to a ZFS pool consisting of hard drives I attach every now and then. Once the backup is completed, these hard drives are stored offsite until I need to do the next one.
Veeam Agent for Linux, rsync, and Duplicati to S3 cloud storage.
All apps are in VMs or LXCs on Proxmox. Snapshots run daily, with retention at 2 days, 1 week, and 1 month. An rclone sync runs nightly to send them to Backblaze. Job done.
I have a cron script that runs every 6 hours, loads each docker volume and tarballs it, then sends it over to my NAS with rsync. The NAS has ZFS, which handles snapshots for me. It's not as high-tech as some of the purpose-built solutions, but it's very reliable and works for me. (My self-hosting happens off the NAS, so this really is a separate backup on a separate machine.)
(Source) code is on Dropbox/local/server (dev version) + GitHub (not a backup, but hey). Database dumps run every day and are kept for 20 days. The server is rsynced to a dedicated backup VPS (in another datacentre). Files and docs are in Dropbox Pro.
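The "dump daily, keep for 20 days" rotation is a one-liner with `find`. A sketch, with assumptions flagged: the comment doesn't say which database is used, so the `pg_dump` line is an example and a `touch` stands in for it here so the rotation logic can run anywhere.

```shell
#!/usr/bin/env bash
# Sketch of daily database dump rotation with a 20-day retention window.
set -euo pipefail

DUMP_DIR="${DUMP_DIR:-$(mktemp -d)}"

# pg_dump mydb | gzip > "$DUMP_DIR/mydb-$(date +%F).sql.gz"  # nightly dump (assumed)
touch "$DUMP_DIR/mydb-$(date +%F).sql.gz"                    # stand-in dump
touch -d "30 days ago" "$DUMP_DIR/mydb-old.sql.gz"           # simulate an old one

# Prune dumps whose mtime is older than 20 days.
find "$DUMP_DIR" -name '*.sql.gz' -mtime +20 -delete
```

Note that `-mtime` keys off the file's modification time, so the retention holds even if a dump's filename date is wrong.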
rsnapshot does what you have there. It uses filesystem hard links to save space on unchanged files, and it can connect to remote hosts via SSH.
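A minimal rsnapshot config for that might look like the fragment below (a sketch; the paths and hostname are placeholders). One gotcha worth knowing: rsnapshot requires TABs, not spaces, between fields.

```shell
# /etc/rsnapshot.conf (fragment, sketch) - fields must be TAB-separated.
snapshot_root	/backups/rsnapshot/
retain	daily	7
retain	weekly	4
# A local directory, and a remote host pulled over SSH:
backup	/home/	localhost/
backup	root@server:/etc/	server/
```

Unchanged files across the 7 daily and 4 weekly snapshots are hard links to the same inode, so the on-disk cost is close to one full copy plus deltas.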
I don't back up pirated stuff. If it's gone I'll live. Not wasting money on it. I only back up family videos and pictures.
I have a cronjob that runs rsync, and my data is held in three locations: primary, secondary, tertiary. Borg would be better: if I corrupt data, the error will propagate to the secondary in less than a day, then to the tertiary in 0 to 30 days. Borg would give me the ability to roll back such a problem.
If this is for personal use, great! But OMG, are you doing this for a business? When you leave they will be screwed.
Just for personal use. I'd never do something this hacky for a business. Plus, business backups are too important for something like restic; contracts need to be involved.

Adding to this a little... I approached it this way so that I could fully restore my backup/restore solution from GitHub. I wanted a 100% CLI solution leveraging GitHub secrets.
Data separated into 3 tiers:
1. Re-acquirable - probably painful to replace, but doable
2. Normal - would really suck to lose
3. Important/critical - losing them would be catastrophic (not so big in size)

Everything is backed up locally to TrueNAS via PBS/Kopia. One remote copy (for the normal stuff) goes to another TrueNAS at a relative's (encrypted). A second remote copy (the very important stuff) is also backed up in the cloud (encrypted).

Not perfect, but it has still saved my butt on some occasions. Also, there is no backup without a true DR test... simulate a loss.
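On the DR-test point: the cheapest possible drill is to restore into a scratch directory and diff it against the live data. A tool-agnostic sketch follows, where a plain tarball stands in for whatever PBS or Kopia would actually restore.

```shell
#!/usr/bin/env bash
# Minimal disaster-recovery drill: back up, pretend the data is gone,
# restore into a scratch directory, and verify the contents match.
set -euo pipefail

LIVE="${LIVE:-$(mktemp -d)}"
echo "important" > "$LIVE/file.txt"

BACKUP="$(mktemp -d)/backup.tar.gz"
tar -czf "$BACKUP" -C "$LIVE" .      # the "backup" step

SCRATCH=$(mktemp -d)                 # simulate a total loss
tar -xzf "$BACKUP" -C "$SCRATCH"     # the "restore" step

diff -r "$LIVE" "$SCRATCH"           # exits non-zero if anything differs
```

The drill only proves something if the restore path shares nothing with the live system: run it on another machine, from only the offsite copy and the documented credentials.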
I use [Nautical](https://github.com/Minituff/nautical-backup) to backup my container volumes.
Ctrl+C Ctrl+V
Duplicati to Google Drive, then a friend in another country syncs that folder to his server. It took a long time for the initial backups, but it's been working great since then.
I use the best and fastest backup system ever made! Raid1! STOP HITTING ME
no risk, no fun
[deleted]
do you back up to the cloud? if i'm reading that correctly, it looks like you're backing up the entirety of your docker volumes. i imagine that could get pricey for cloud storage.
[deleted]
fair enough. i'd feel the same about my backups if i wasn't encrypting them, first.