this post was submitted on 01 Mar 2026
37 points (97.4% liked)

Selfhosted


My goal is to fully ditch Google Photos for Immich. I have about 3 TB of photos and videos, and I'm looking for a super simple way of backing up the library to cloud storage in case of a drive failure, without spending a ton.

Ideally, this will require nothing on my part besides copying files into a given folder. And ideally the storage will be encrypted and have basic privacy assurances.

Also, if it matters, my home server is running Debian, but I'd prefer something that runs in Docker so I can more easily check on it remotely.

[–] qjkxbmwvz@startrek.website 3 points 3 days ago (2 children)

Not the same, but for my Immich backup I have a Raspberry Pi and an HDD with family (remote).

Backup is rsync, and a simple script to make ZFS snapshots (retaining X daily, Y weekly). Connected via "raw" WireGuard.

Setup works well, although it's never been needed.
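A minimal sketch of what that setup might look like. Everything here is illustrative, not from the thread: the source path, the Pi's WireGuard address, the dataset name, and the retention count (7 dailies standing in for the "X daily" mentioned above).

```shell
#!/bin/sh
# Hypothetical sketch of the rsync + ZFS-snapshot backup described above.
# SRC, the host 10.0.0.2 (the Pi's WireGuard address), and DATASET are
# placeholder names, not values from the thread.
SRC="/srv/immich/library/"
PI="root@10.0.0.2"
DATASET="backuppool/immich"

# Build snapshot names like immich-daily-2026-03-01
snap_name() {
    echo "immich-$1-$(date +%F)"
}

# Push changes over the WireGuard tunnel, then snapshot on the Pi
backup() {
    rsync -a --delete "$SRC" "$PI:/$DATASET/"
    ssh "$PI" zfs snapshot "$DATASET@$(snap_name daily)"
}

# Retention: keep the newest 7 daily snapshots, destroy older ones
prune_daily() {
    ssh "$PI" zfs list -H -t snapshot -o name -s creation "$DATASET" \
        | grep "@immich-daily-" \
        | head -n -7 \
        | while read -r snap; do ssh "$PI" zfs destroy "$snap"; done
}
```

A weekly variant would just call `snap_name weekly` on a different schedule and prune with its own count.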

[–] yo_scottie_oh@lemmy.ml 2 points 3 days ago (2 children)

raspberry pi and an HDD with family (remote)

Is this the way to go for off-site backups w/ family? In terms of low power draw, uptime, etc.

[–] IsoKiero@sopuli.xyz 1 point 2 days ago (1 child)

That absolutely works, but when I built my offsite backup on Hetzner I also considered setting up my own hardware, and came to the conclusion that for me it doesn't really make sense. A new RPi + 4 TB SSD/M.2 drive with accessories adds up to something around 400€ (if that's even enough today), or a few years' worth of cloud backups. With your own hardware there's always maintenance, and hardware failures are always a possibility, so for me it makes more sense to just rely on the big players for offsite backups. Your case might be different for various reasons, but sometimes renting capacity just makes more sense in the big picture.

[–] Scrollone@feddit.it 1 point 2 days ago (1 child)

Why would you use SSDs for backup? I think an HDD should be fine for that.

Especially since SSDs can start losing data if they're powered off for a long time.

[–] IsoKiero@sopuli.xyz 1 point 2 days ago

Sound and power consumption. At least in my case those are important, since I'd be storing the data at my mother's house. Power consumption might not matter that much, but HDD noise definitely does. And even with spinning rust, the hardware cost would be somewhere around 250€, compared to ~20€/month for cloud storage.

YMMV, in my scenario it's just easier to use a cloud provider.

[–] qjkxbmwvz@startrek.website 1 point 3 days ago

I've been pleased with it. Family is very relaxed about projects like this, and yeah, it's low power draw. I don't think I have anything special set up, but the right thing to do for power would be to spin down the drive when not in use, since consumption is dominated by the spinning rust.

Uptime is great. The only hiccups are that it can choke when compiling the ZFS kernel modules, triggered by kernel updates. It's an RPi 3 with 1 GB RAM (I keep failing at forcing dkms to use only one thread, which would probably fix these hiccups 🤷).
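One untested idea for the single-thread problem, assuming a reasonably recent dkms (3.x) that supports the `parallel_jobs` setting in its framework config:

```shell
# Assumption: this dkms version reads parallel_jobs from framework.conf.
# Setting it to 1 should make module rebuilds (e.g. ZFS after a kernel
# update) single-threaded, which may keep the build within 1 GB of RAM.
echo 'parallel_jobs=1' | sudo tee -a /etc/dkms/framework.conf
```

If the installed dkms is too old to honor that option, this won't help, so check `dkms --version` first.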

That said, it is managed by me, so sometimes errors go unnoticed. I had a recent issue where I missed a week of rsync because I switched from Pi-hole to Technitium on my home server and forgot to point the remote RPi at it. This would all have been caught with a proper cron email setup... I'm clearly not a professional :)
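For reference, the "proper cron email setup" is mostly just `MAILTO` plus a working local MTA. A crontab fragment might look like this (the address and script path are made up):

```shell
# Crontab fragment (crontab -e). Requires a configured MTA on the box
# (postfix, msmtp-mta, ...) so cron can actually deliver the mail.
MAILTO=you@example.com
# Nightly backup; any stdout/stderr the job produces is mailed to MAILTO
15 3 * * * /usr/local/bin/immich-backup.sh
```

Cron only mails when a job writes output, so a script that is silent on success and noisy on failure gives you failure-only alerts for free.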

[–] ikidd@lemmy.world 0 points 3 days ago (1 child)

If you're already running ZFS, sanoid would be an option.

[–] Imaginary_Stand4909@lemmy.blahaj.zone 0 points 3 days ago* (last edited 3 days ago) (1 child)

Okay, how do you get sanoid & syncoid to run? Because I've tried, and I'm just too much of a dummy. When it makes a backup, is it literally creating a ZFS dataset/pool/whatever on the other machine, or is it more like a file? I have a Proxmox box running Cockpit (SMB & NFS), and the machine is connected to a USB drive bay that has ZFS. My Immich saves pictures to my ZFS drive bay via SMB.

I've tried to do

syncoid pool_name/data/immich root@cockpit.service.IP.addr:mnt/samba/backups

but I get hit with:

::: spoiler Long ass error message

WARNING: ZFS resume feature not available on target machine - sync will continue without resume support.
INFO: Sending oldest full snapshot Orico2tera4/data/immich@syncoid_nova_2026-01-27:13:38:44-GMT-05:00 to new target filesystem root@192.168.0.246:/mnt/samba/backups (~ 42 KB):
/dev/zfs and /proc/self/mounts are required.
Try running 'udevadm trigger' and 'mount -t proc proc /proc' as root.
44.2KiB 0:00:00 [ 694KiB/s] [===========================================] 103%            
CRITICAL ERROR:  zfs send  'Orico2tera4/data/immich'@'syncoid_nova_2026-01-27:13:38:44-GMT-05:00' | pv -p -t -e -r -b -s 43632 | lzop  | mbuffer  -q -s 128k -m 16M | ssh      -S /tmp/syncoid-root1921680246-1772385641-845218-1784 root@192.168.0.246 ' mbuffer  -q -s 128k -m 16M | lzop -dfc |  zfs receive  -F '"'"'/mnt/samba/backups'"'"' 2>&1' failed: 256

:::

I've tried reading the GitHub docs and some forums, but I'm a dummy. I just want to have backups that I can encrypt and keep in a cloud for cheap somewhere. Does it literally have to be two different machines (god, I'm dumb)? Can I just auto-run ZFS snapshots, encrypt them, and save those to Drive/OneDrive/whoever?

[–] ikidd@lemmy.world 1 point 3 days ago

You can use syncoid to replicate to another zpool or dataset on the same machine or on a remote host; they behave the same. It replicates the dataset on the other machine, then sends the snapshots taken after that point over via zfs send. You can instruct sanoid to prune those snapshots after the send and start new ones for the next send, or just let them accumulate so you have points in time to revert to.
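Concretely, and as a guess at the error above: syncoid's target has to be a ZFS dataset name (`pool/dataset`), not a mountpoint path like `/mnt/samba/backups`. Something along these lines, where the source dataset and host are taken from the error output but the target pool name is invented:

```shell
# Remote replication: target is a dataset on the remote pool, not a path.
syncoid Orico2tera4/data/immich root@192.168.0.246:backuppool/immich

# Same-machine replication to a second pool works identically:
syncoid Orico2tera4/data/immich backuppool/immich
```

The `zfs receive -F '/mnt/samba/backups'` in the failing command is the tell: zfs receive was handed a path where it expected a dataset.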

IIRC, you can send a ZFS snapshot to a file, but I can't recall how to do that, so AFAIK you can't just sync to a file-based service like OneDrive. You can use a service like zfs.rent and send them a hard drive with your base sync on it (encrypt it), and then once they've brought it online, you can sync to that. Best to test out your methods with the drive hooked up locally first.
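For what it's worth, `zfs send` just writes a stream to stdout, so it can be redirected into a file and encrypted on the way; restoring pipes it back into `zfs receive`. A sketch, with snapshot and dataset names invented for illustration:

```shell
# Dump one snapshot as a compressed, symmetrically encrypted file
# (could then be uploaded to Drive/OneDrive/etc.)
zfs send Orico2tera4/data/immich@backup-2026-03-01 \
    | gzip \
    | gpg --symmetric --cipher-algo AES256 -o immich.zfs.gz.gpg

# Restore later into a (new) dataset
gpg --decrypt immich.zfs.gz.gpg | gunzip | zfs receive backuppool/immich
```

The caveat is that a single big stream file is fragile: any corruption in the file breaks the entire restore, which is one reason replication to a live pool is usually preferred.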

I know it's anathema to Lemmy, but the best help you'll get is Claude, where you can paste the errors in and have it sort things out with you as you troubleshoot. It's pretty good at shit like that.