

I use GarHAge, which is open hardware and software and was pretty easy and cheap to set up. https://github.com/marthoc/GarHAge
This is my exact concern.
If I pay for the lifetime pass now, what’s to stop them from restricting even more features behind new types of subscriptions and paywalls? “We’re adding back the ‘Watch Together’ feature, but it requires a Platinum Plex subscription and will not be included with the Plex Lifetime Pass.”
Seems kind of inevitable honestly.
If you mean that you are using Proton VPN on your Raspberry Pi to mask your downloading traffic, then no, that same VPN will not help you access services like Jellyfin on your home network while you are remote.
Instead you’ll want to use something like Tailscale (or WireGuard). You run it as a service on your home network and it then becomes your own VPN that you (or others) can use to connect to your home network when you are remote.
You could run WireGuard on the same Raspberry Pi that you use for downloading, but I would recommend against it if you’re running Proton VPN right on the host itself (and not inside a container), since routing gets messy with both tunnels on the same host.
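For example, a minimal sketch with Tailscale on a Debian-based Pi (this uses Tailscale's official install script; the subnet in the comment is just an example):
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
# optional: advertise your LAN so remote devices can reach other hosts too
# sudo tailscale up --advertise-routes=192.168.1.0/24
Then install the Tailscale client on your phone or laptop and you can reach Jellyfin over the tailnet from anywhere.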
I’m assuming your phone has to be rooted for this, right? Or is Docker running without root? I didn’t realize anything like this was possible. This is interesting.
This is basically how I do it too.
I used to be more creative, but then I got in the habit of running more servers and swapping hardware more frequently, so it got harder to remember what hardware I was actually connecting to. Now the machines get hardware-based names and everything else is named by service-based Ansible roles.
This is what I’m using and I haven’t found any reason to switch yet.
I use a Gnome implementation of this and it works great too.
I upgraded to a new GPU a few weeks ago, but all I’ve been doing is playing Factorio, which would run just fine on 15-year-old hardware.
Debian + Containers is definitely the way. Literally so stable it’s boring.
Same here. I love DuckDNS, but after the third DNS outage took down all my services I migrated to Cloudflare and haven’t had a single problem since.
Backups need to be reliable and I just can’t rely on a community of volunteers or the availability of family to help.
So yeah, I pay for S3 and/or a VPS. I consider it one of the few things worth paying a larger hosting company for.
I intentionally do not host my own git repos mostly because I need them to be available when my environment is having problems.
I do make use of local runners for CI/CD, which is nice, but git is one of the few things I need to not have to worry about.
Do you have any links or guides that you found helpful? A friend wanted to try this out but basically gave up when he realized he’d need an Nvidia GPU.
I’ve been testing Ollama in Docker/WSL with the idea that if I like it, I’ll eventually move my GPU into my home server and get an upgrade for my gaming PC. When you run a model it has to load the whole thing into VRAM. I use the 8 GB models, so it takes 20-40 seconds to load the model, and then each response is really fast after that and the GPU hit is pretty small. After five minutes (I think that’s the default) it unloads the model to free up VRAM.
Basically this means you either need to wait a bit for the model to warm up, or you need to extend that timeout so it stays warm longer. It also means I can’t really use my GPU for anything else while the LLM is loaded.
I haven’t tracked power usage, but besides the VRAM requirements it doesn’t seem too resource-intensive, though maybe I just haven’t done anything complex enough yet.
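A rough sketch of one way to run it (assuming an Nvidia GPU with the NVIDIA Container Toolkit installed; the 30m keep-alive is just an example value):
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  -e OLLAMA_KEEP_ALIVE=30m \
  --name ollama ollama/ollama
docker exec -it ollama ollama run llama3
OLLAMA_KEEP_ALIVE is the knob that controls how long a model stays in VRAM before it gets unloaded.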
DuckDNS is great… but they have had some pretty major outages recently. No complaints, since I know it’s an extremely valuable free service, but it’s worth mentioning.
Cloudflare has an API for easy dynamic DNS. I use oznu/docker-cloudflare-ddns to manage this, and it’s super easy:
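# API_KEY is a Cloudflare API token with DNS edit permission for the zone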
docker run \
  -e API_KEY=xxxxxxx \
  -e ZONE=example.com \
  -e SUBDOMAIN=subdomain \
  oznu/cloudflare-ddns
Then I just make a CNAME for each of my public-facing services pointing to ‘subdomain.example.com’ and use a reverse proxy to route incoming traffic to the right service.
I’ve had a lot of good luck with Syncthing. If you’re just syncing files locally you can disable NAT traversal.
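A minimal sketch of running it in Docker (official image and default ports; the volume path is just an example):
docker run -d --name syncthing \
  -v /srv/syncthing:/var/syncthing \
  -p 8384:8384 -p 22000:22000 \
  syncthing/syncthing
Then in the web GUI under Actions > Settings > Connections you can untick NAT traversal, global discovery, and relaying if you only want local syncing.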
In my opinion, trying to set up a highly available, fault-tolerant homelab adds a lot of unnecessary complexity without an equivalent benefit. It’s good to have redundancy for essential services like DNS, but otherwise I think it’s better to focus on a robust backup-and-restore process so that if anything goes wrong you can just restore from a backup or start the containers on another node.
I configure and deploy all my applications with Ansible roles. It can programmatically create config files, pass secrets, build or start containers, cycle containers automatically after config changes, basically everything you could need.
Sure, it would be neat if services could fail over automatically, but things only ever tend to break when I’m making changes anyway.
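As a rough sketch of what that kind of setup can look like (the paths and role names here are made up, not from any real playbook):
# roles/jellyfin/ templates the config, recreates the container, and
# restarts it whenever the rendered config changes
ansible-playbook -i inventory/hosts.yml site.yml --tags jellyfin
If something dies, re-running the playbook (or pointing it at a different host in the inventory) brings the service back without any hand-editing.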
Definitely do not do tapes.
I’d also recommend Backblaze. Their S3-compatible storage is pretty affordable. I back up to a Kopia repo and then replicate to Backblaze nightly.
Tapes require so much more work to keep up to date and might not even be cheaper over time.
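For the nightly replication, something along these lines would work (a sketch, not an exact script; it assumes a local Kopia repo you’re already connected to and an rclone remote called b2 pointing at a Backblaze bucket, with made-up paths):
kopia snapshot create /srv/data
rclone sync /srv/backups/kopia b2:my-backups/kopia
Stick that in a cron job or systemd timer and the offsite copy stays current without any babysitting.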