

I agree, except CrowdSec. The apps I use were frequently phoning home, causing all my devices to get banned by CrowdSec. Setting up rules around it was just too painful, so I got rid of it.
Gonna look into whether I can set up fail2ban instead
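In case it helps anyone trying the same swap: a fail2ban jail is just an INI section pointing at a log file and a filter, so you only ban on patterns you explicitly define. This is a minimal sketch; the jail name, ports, log path, and thresholds are all placeholders for whatever service you're protecting:

```ini
# /etc/fail2ban/jail.local -- minimal sketch, names/paths/ports are placeholders
[my-service]
enabled  = true
port     = http,https
filter   = my-service          # expects a matching /etc/fail2ban/filter.d/my-service.conf
logpath  = /var/log/my-service/access.log
maxretry = 5                   # ban after 5 matched log lines...
findtime = 10m                 # ...within a 10 minute window
bantime  = 1h
```

The upside over a generic IPS is that nothing gets banned unless it trips a regex you wrote, so phone-home traffic from your own apps won't match.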



Is it for downloading illegal content? I can’t tell
I assume some of it is related to torrenting, but I can’t tell which ones and how much. They can’t all be for torrenting, right???


Store a lot of things you never access
Hope that helps 😌


Ikr like… Give me a docker compose file and tell me what env vars need to be set to what. Why is it so complicated?


I hate how so many of the arr apps don’t describe what they do in a way that people who don’t already know can understand.
Even the tutorials and guides are frustratingly vague.


On Cloudflare, user224.com renews annually at less than $11.
That’s where I got my domain (I was already using them at the time, but it doesn’t matter) for that price, and that includes WHOIS privacy.


I can’t answer many of the questions here, but I can help a little with two:
If you’re worried about noise, don’t get IronWolf drives. I just did and they’re noisy af. I bought some sound-absorbing foam to put around the place where I keep my NAS, because they’re so much louder than I expected.
Don’t open up a port in your network.
Use something like tailscale to connect your devices to your home network, or rent a VPS to run a secure tunnel using pangolin (you’ll need to look into bandwidth limits).


Sorry, I misread: when you said “library”, for some reason I thought you meant “external library”.
The problem that I’m trying to solve, and I think OP is also trying to solve, is that they want the files to be on their NAS because it’s high capacity, redundant, and backed up; but many users have access to the NAS, so they can’t rely on immich alone to provide access permissions, they need access permissions on the files themselves.
I solved this by having a separate share for every user, and then mounting that user’s share on their library (storage label).
It sounds like OP wants a single share, so having correct file ownership is important to restrict file access to the correct users who are viewing the filesystem outside of immich.
Not sure what you mean by your last paragraph: how do you assign a share to individual files (I assume you mean directories) outside of immich’s need for storage?


Library access won’t allow uploads; this will.
My knowledge here isn’t super deep, but it seems like you can do mapping per-share-per-IP, which means you can say “all file access coming from the immich host to this share will act as this user”. I think that’s fine if that share belongs to that user and nothing else coming from that host to that share needs to act as a different user. Those are very big caveats, though.


I got excited and didn’t properly read your post before I wrote out a huge reply. I thought your problem was the per-user mapping to different locations on your NAS or to different shares, but it’s specifically file ownership.
Whoops.
Leaving this here anyways, in case someone finds it helpful.
I kinda address file ownership at the end, but I don’t think it’s really what you were looking for, because it depends on every user having their own share.
In docker, you’ll need to set up an external NFS volume for every user. I use portainer to manage my docker stacks, and it’s pretty easy to set up NFS volumes there. I’m not sure how to do it with raw docker, but I don’t think it’s complicated.
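For what it’s worth, I believe with raw docker the equivalent is the local volume driver with NFS options; roughly this, where the server address, export path, and volume name are placeholders for your setup:

```shell
# sketch: create a named docker volume backed by an NFS export
# (addr and device path are placeholders; the volume name must match
#  what your compose file declares as external)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,nfsvers=4,rw \
  --opt device=:/volume1/user1-share \
  user1-share
```

The volume mounts lazily, so a typo in the address or path usually only shows up when a container first tries to use it.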
In your docker compose file, include something like this:
services:
  immich-server:
    # ...
    volumes:
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
      - type: volume
        source: user1-share
        target: /data/library/user1-intended-storage-label
        volume:
          subpath: path/to/photos/in/user1/share
      - type: volume
        source: user2-share
        target: /data/library/user2-intended-storage-label
        volume:
          subpath: path/to/photos/in/user2/share
      # and so on for every user
    # ...

volumes:
  model-cache:
  user1-share:
    external: true
  user2-share:
    external: true
  # and so on for every user
There are 3 things to note about this setup:
1. ${UPLOAD_LOCATION}: for me this is fine, since I don’t want to pollute my NAS with a bunch of transient data; but if you do want that data on the NAS, then for every user, in addition to the target: /data/library/user1 target, you’ll also need a target: /data/thumbs/user1, a target: /data/encoded-video/user1, etc.
2. Targets: when you mount a volume at a target, it will mask any data already there. This is why it’s important that no users exist with that storage label prior to this change, else that data will get hidden.
3. You may also want to add similar volumes for external libraries (I gave every user an external “archive” library for their old photos), like this:
      - type: volume
        source: user1-share
        target: /unique/path/to/this/users/archive
        volume:
          subpath: path/to/photo/archive/on/share
and then you’ll need to go add that target as an external library in the admin setup.
Once immich allows sharing external libraries (or turning external libraries into sharable albums), I’ll also include a volume for a shared archive.
Finally, redeploy, change your user storage labels to match the targets, and run the migration job (or create the users with matching storage labels).
As for file ownership: I honestly don’t think it’s important; as long as your user has full access to the files, it’s fine. But if you insist, then you can have a separate share for every user and set up the NFS server for that share to squash all requests to that share’s user. It’s a little less secure, but you’ll only be allowing requests from that single IP, and there will only be requests from a single user from that server anyways.
Synology unfortunately doesn’t support this; they only allow squashing to admin or guest (or disabling squashing).
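On a stock Linux NFS server (not Synology, per the above), that squash setup is a one-liner per share in /etc/exports. A sketch, where the path, IP, and uid/gid are made-up placeholders:

```
# /etc/exports -- sketch: only the immich host (192.168.1.20) may mount this
# share, and every request it makes is squashed to uid/gid 1001 (the share's
# owner). Path, IP, and ids are placeholders.
/volume1/user1-share 192.168.1.20(rw,all_squash,anonuid=1001,anongid=1001)
```

Because the options are per-host-per-export, each user’s share can squash to a different uid, which is exactly the per-share mapping described above.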


What you’re looking for is probably something like certificate authentication, or mTLS. It exists, but it’s kind of a pain to set up on client devices, so it’s not very common.
What’s more common, easier to set up, and nearly the same thing is passkey authentication. Same in-flight security characteristics, but you typically need to pass a simple challenge on your device to unlock it.
There are a bunch of self-hosted auth options for both
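To give a rough idea of the mTLS option: the server side is only a couple of directives (nginx shown here; all cert paths and the upstream are placeholders). The painful part is issuing a client cert for, and installing it on, every single device:

```nginx
# sketch: nginx client-certificate (mTLS) auth; paths/ports are placeholders
server {
    listen 443 ssl;
    ssl_certificate         /etc/ssl/server.crt;
    ssl_certificate_key     /etc/ssl/server.key;
    ssl_client_certificate  /etc/ssl/my-ca.crt;  # CA that signed the client certs
    ssl_verify_client       on;                  # reject clients without a valid cert

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```

With ssl_verify_client on, the TLS handshake itself rejects unknown clients, so nothing ever reaches the app without a cert, which is the appeal over a login page.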
I wanna try matrix, but it’s crazy to me that no clients, even the official clients, support all the features. It really makes me hesitate lol


Thank you!
This is almost exactly my motivation when I recently started my homelab journey. A bit of privacy, but what pushed me over the edge is that I was supporting these anti-social corporations with my money or data, when they went fully mask-off.
I’ve been going through a similar journey, and I’ll tell you what I did:
I ended up just getting a low-end 2-bay Synology NAS, because it’s cheap and easy to set up shares and backups on, and 12 TB mirrored is all I needed. I was too intimidated by the prospect of configuring TrueNAS correctly, and Synology walked back their requirement of using their own branded drives.
If you want open source NAS software, then TrueNAS and OpenMediaVault are the main options. TrueNAS has the better pedigree afaict, but it has pretty significant hardware requirements, so you’ll need expensive hardware. In the end, I decided it was way more than what I needed: I wanted my NAS to be purely a NAS, and I’d run my server/cluster on different hardware.
I almost got a HexOS NAS (a fork of TrueNAS SCALE with a front-end written by a bunch of ex-unraid folks, meant to be much easier to configure and admin), but it’s still in beta and I didn’t wanna wait a few months for GA; it also has the same requirements as TrueNAS, so it’d be expensive, and you have to pay for a license on top.
If you go with a traditional OTS NAS, then you probably want RAID 1 for a 2-bay or RAID 5 if you have 4+ bays.
If you get something like TrueNAS that uses ZFS, then you want raidz1 (which is like RAID 5, with one parity disk). Currently there are limitations with raidz if you wanna expand it later, but the HexOS folks are sponsoring a ZFS feature called AnyRAID to make expanding raidz more flexible, which will presumably make its way to all ZFS NASes when it’s finished.
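For reference, creating a raidz1 pool is a single command (the pool name and device paths below are placeholders). With three disks you get roughly two disks’ worth of usable space, with one disk’s worth of parity:

```shell
# sketch: 3-disk raidz1 pool; "tank" and the device paths are placeholders
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

# check health and layout
zpool status tank
```

The expansion limitation mentioned above is that you traditionally couldn’t add a single disk to an existing raidz vdev; you had to add a whole new vdev or rebuild the pool.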
I’m pretty early in my self-hosting journey, but so far I have a 2-bay Synology with cloud backup and a couple of shared volumes, a Raspberry Pi 5 running Home Assistant, a Beelink SER5 running Ubuntu Server for portainer, and a cheap VPS for pangolin.


I don’t think that counts as self-hosting, because I think the game actually runs on their servers and your Deck is just a client. But maybe it actually runs on the Deck and the server is just for connecting the clients? 🤔


They’re party games you play together in person.
Some have analog-only equivalents, but those often require physical equipment; Pictionary basically requires an easel.
I don’t really disagree with you, but it’s good to meet people where they are. Party games that use a phone are better than no party games at all.


I’d love that too.
Some games like gartic phone / draw.io, codeword, balderdash, and jackbox-alike games that I could host on my own server to avoid tracking/ads, and play with my family over the holidays.


I’ve recently switched to pangolin, which works like cloudflared tunnels, and it’s been pretty good.
They offer docker support, but they also support installing manually. You install pangolin on your VPS via a setup script, and you install newt on a machine inside your homelab. It supports raw UDP/TCP in addition to HTTP.
I’d challenge what you said about docker, though. There is very little overhead in making a docker turducken.
And actually docker is exactly for delivering turnkey applications, not for reproducible dev environments; I imagine they don’t persist data by default because not everything needs it, and it’s less secure by default. LXC (which is what you’ll mostly use in proxmox) and VMs seem more suited to reproducible dev environments, afaict. And there are some really good tools for managing the deployment of docker artifacts, compared to doing it yourself or using LXCs: for example dockge or portainer. I gave proxmox a try, but switched to portainer recently, because managing containers was easier and it still lets you define persistent shared volumes like proxmox does.
Proxmox is still good if you need to run VMs, but if all you need is OCI/docker containers, then there are simpler alternatives, in my limited experience.
I understand they have different purposes, but one (the container manager) seems far more suited to the typical things people want to do in their homelabs, which is to host applications and application stacks.
Rarely do I see people needing an interactive virtualized environment (in homelabs), except to set up those aforementioned applications; and then containers and container stack definitions are better, because having a declarative way to deploy applications is better. Self-hosting projects often provide docker/OCI containers and compose files as the official way to deploy. I’m not deep in the community yet, but so far that has been my experience.
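That declarative style usually boils down to a compose file this small; a generic sketch where the image, ports, and volume names are placeholders, not any particular project’s official file:

```yaml
# sketch: a typical self-hosted app stack (image/ports/paths are placeholders)
services:
  app:
    image: example/app:latest
    ports:
      - "8080:80"          # host:container
    volumes:
      - app-data:/config   # persistent config survives redeploys
    restart: unless-stopped

volumes:
  app-data:
```

The whole deployment is that one file: `docker compose up -d` brings it up, and checking the file into git gives you a reproducible record of exactly what is running.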
Additionally, some volume mounting options I wanted to use are only available via the CLI, which is frustrating.
So I don’t really understand what value proposition proxmox provides that causes homelab folks to rally around it so passionately.
Having a one-stop-shop that can run VMs is handy for those last-resort scenarios where using an application container just isn’t possible, but thankfully I haven’t run into that yet. It doesn’t seem like OP has run into that yet either, if I read it correctly.
I’m not deep into my self-hosting journey, but it doesn’t seem like there are that many things that require a VM or hypervisor 🤞
If you’re getting a VPS, I’d generally recommend pangolin. It’s basically like cloudflared tunnels, but self-hosted (on the VPS). It works the same: you use it to map your subdomains to IPs on the other end of the secure tunnel.
It has things like user access controls for each of the subdomains, the ability to connect it to an identity provider, rules governing which paths need authentication and which don’t, etc.
It can optionally come preconfigured with CrowdSec, but I had problems with it falsely classifying my normal traffic as an attack and banning my IPs.
Just be aware that even if your service has a login page, you first need to log into pangolin to be granted access to the service; and although that’s fine on the web (especially if you’re using SSO), some native apps don’t like the extra login. Home Assistant handles it better now, but I haven’t gotten the jellyfin native Android app working yet.