• 0 Posts
  • 82 Comments
Joined 3 years ago
Cake day: July 13th, 2023

  • I think that the problem, in both cases, is culture.

    It’s not that either of those are bad, or bad for people; it’s bad for people of this culture or people of this society. It’s how the two intersect that is the problem.

    It could be a tool that lifts up the worker or creative, but instead it’s a tool to devalue the creative and extract power and wealth.
    It highlights that people with power get a different set of rules and laws than the rest of us, and they’re using that to further entrench and enrich themselves.


  • I think it kinda depends on the context. If someone is just making a tool for themselves and they slap on MIT or GPL-3.0 just because, who cares, someone else can have it, then sure. Who cares if it’s trash if the stakes are so low they’re scraping the ground and the user base is expected to be single digits.

    But when you care about the reputation of your project, or if your project requires people trust it, then yeah for sure it’s not appropriate to vibe/slop it.

    I have ethical concerns about the realities of how this tech is used, mainly in what it’s doing to the economic and power dynamics in society. But I don’t have a problem with the tech itself. That said, I have to admit that it may not be realistic to separate the tech from its inevitable impact. Now I am become Death, the destroyer of worlds, and all that.


  • My understanding is that that’s because Google and Apple want to onboard devices to their own home automation platforms, and Home Assistant just piggybacked on that because it was easier, and rewriting it hasn’t been a priority. But this is based on a few old threads I just looked up; I’m not exactly an expert.
    I think there was some talk about Bluetooth onboarding, but that’d require the devices to have a Bluetooth radio, which is more expensive than a QR code sticker. Idk if anyone uses it.
    Having something like a WPS button would certainly be nice though.






  • If you’re getting a VPS I’d generally recommend Pangolin. It’s basically like Cloudflare Tunnels, but self-hosted (on the VPS). It works the same way: you use it to map your subdomains to IPs on the other end of the secure tunnel.

    It has things like user access controls for each of the subdomains, the ability to connect it to an identity provider, rules governing which paths need authentication and which don’t, etc.

    It can optionally come preconfigured with CrowdSec, but I had problems with it falsely classifying my normal traffic as an attack and banning my IPs.

    Just be aware that even if your service has a login page, you first need to log into Pangolin to be granted access to the service, and although that’s fine on the web (especially if you’re using an SSO), some native apps don’t like the extra login. Home Assistant handles it better now, but I haven’t gotten the Jellyfin native Android app working yet.








  • I can’t answer many of the questions here, but I can help a little with two:

    If you’re worried about noise, don’t get IronWolf drives. I just did and they’re noisy af. I bought some sound-absorbing foam to put around the place where I keep my NAS, because they’re so much louder than I expected.

    Don’t open up a port in your network.
    Use something like Tailscale to connect your devices to your home network, or rent a VPS to run a secure tunnel using Pangolin (you’ll need to look into bandwidth limits).


  • Sorry, I misread; when you said “library”, for some reason I thought you meant “external library”.

    The problem that I’m trying to solve, and I think OP is too, is that they want the files to be on their NAS because it’s high capacity, redundant, and backed up. But many users have access to the NAS, so they can’t rely on Immich alone to provide access permissions; they need access permissions on the files themselves.

    I solved this by having a separate share for every user, and then mounting that user’s share on their library (storage label).
    It sounds like OP wants a single share, so having correct file ownership is important to restrict file access to the correct users who view the filesystem outside of Immich.

    Not sure what you mean by your last paragraph; how do you assign a share to individual files (I assume you mean directories) outside of Immich’s need for storage?


  • Library access won’t allow uploads; this will.

    My knowledge here isn’t super deep, but it seems like you can do mapping per share, per IP, which means you can say “all file access coming from the Immich host to this share will act as this user”. I think that’s fine if that share belongs to that user and nothing else coming from that host to that share should act as a different user, which are very big caveats.
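    As a rough sketch of what that kind of mapping can look like on a plain Samba server (the share name, path, IP, and username here are all made up, and NAS vendors expose these settings differently):

```ini
# Hypothetical smb.conf share: only the Immich host (192.168.1.20) may
# connect, and every connection to this share acts as user1 on disk
[user1-share]
    path = /volume1/user1-share
    hosts allow = 192.168.1.20
    force user = user1
    read only = no
```

    `hosts allow` restricts which IPs can connect, and `force user` makes all file operations on the share happen as that one user, which is exactly the “everything from that host acts as that user” behavior, caveats included.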


  • Preface

    I got excited and didn’t properly read your post before I wrote out a huge reply. I thought your problem was the per-user mapping to different locations on your NAS or to different shares, but it’s specifically file ownership.
    Whoops.

    Leaving this here anyways, in case someone finds it helpful.
    I kinda address file ownership at the end, but I don’t think it’s really what you were looking for, because it depends on every user having their own share.

    Prerequisites

    1. you need to be using Storage Templates.
    2. you’re willing to change the storage labels for all existing users
      • if not, then change the storage labels for all users to something temporary and run the migration job before you begin. You’ll change it back later.
    3. you’re willing to switch to NFS instead of Samba, where each user gets their own share.
      • might not actually be necessary, but it’s what I use, so YMMV

    Configuration

    Volumes

    In Docker, you’ll need to set up an external NFS volume for every user. I use Portainer to manage my Docker stacks, and it’s pretty easy to set up NFS volumes there. I’m not sure how to do it with raw Docker, but I don’t think it’s complicated.
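    For raw Docker (no Portainer), something like this should work; the server address, export path, and NFS version are assumptions you’d adjust to your NAS:

```shell
# Hypothetical: create an external NFS volume named user1-share
# (adjust addr, device path, and nfsvers to match your NAS)
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nfsvers=4.1 \
  --opt device=:/volume1/user1-share \
  user1-share
```

    Repeat per user, then reference each volume as `external: true` in the compose file.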

    Compose

    In your Docker Compose file, include something like this:

    services:
      immich-server:
        # ...
        volumes:
          - ${UPLOAD_LOCATION}:/data
          - /etc/localtime:/etc/localtime:ro
          - type: volume
            source: user1-share
            target: /data/library/user1-intended-storage-label
            volume:
              subpath: path/to/photos/in/user1/share
          - type: volume
            source: user2-share
            target: /data/library/user2-intended-storage-label
            volume:
              subpath: path/to/photos/in/user2/share
          # and so on for every user
      # ...

    volumes:
      model-cache:
      user1-share:
        external: true
      user2-share:
        external: true
      # and so on for every user

    There are three caveats to this setup:

    1. it does not scale automatically. This is fine as long as you don’t intend to add or remove users often.
    2. it only saves the photos and videos; all thumbnails, transcoded videos, etc. get saved to ${UPLOAD_LOCATION}. For me this is fine, I don’t want to pollute my NAS with a bunch of transient data, but if you want that data then for every user, in addition to the target: /data/library/user1 mount, you’ll also need a target: /data/thumbs/user1, target: /data/encoded-video/user1, etc.
    3. if there is already data at the target, mounting this volume will mask that data. This is why it’s important that no users exist with that storage label prior to this change, or that data will get hidden.

    You may also want to add similar volumes for external libraries (I gave every user an external “archive” library for their old photos), like this:

        - type: volume
          source: user1-share
          target: /unique/path/to/this/users/archive
          volume:
            subpath: path/to/photo/archive/on/share

    Then you’ll need to go and add that target as an external library in the admin settings.
    And once Immich allows sharing external libraries (or turning external libraries into shareable albums), I’ll also include a volume for a shared archive.

    Migrate

    Redeploy, change your users’ storage labels to match the targets, and run the migration job (or create the users with matching storage labels).

    File ownership

    I honestly don’t think it’s important; as long as your user has full access to the files, it’s fine. But if you insist, then you can have a separate share for every user and set up the NFS server for that share to squash all requests to that share’s user. It’s a little less secure, but you’ll only be allowing requests from that single IP, and there will only be requests from a single user from that server anyways.
    Synology unfortunately doesn’t support this; they only allow squashing to admin or guest (or disabling squashing).
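
    On a stock Linux NFS server, that squash setup could look something like this in /etc/exports (the path, IP, and uid/gid are assumptions):

```
# Hypothetical /etc/exports entry: every request from the Immich host
# (192.168.1.20) is squashed to user1 (uid/gid 1001) on user1's share
/volume1/user1-share 192.168.1.20(rw,all_squash,anonuid=1001,anongid=1001)
```

    all_squash maps every client uid to the anonuid/anongid you specify, so files written by the Immich container come out owned by that share’s user.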