• 1 Post
  • 34 Comments
Joined 3 years ago
Cake day: June 7th, 2023



  • Can attest that Folder Sync is excellent. I use it all day (in the background) for two-way sync (notes) and backup of photos, videos, etc.

    Though a small PSA on setting up:
    I once set up a new share on a new phone with two-way sync, and the app decided to sync the (newer) empty directory to the server (i.e. delete everything) instead of pulling the files from the server to the phone.
    Easy fix: restore notes from backup (step 0: have backups in the first place), then do an initial one-way sync from server to phone, then change the sync job to two-way.


  • For JPGs, no, they will not get smaller. They may even end up a smidge bigger if you zip them, though usually not enough to make a practical difference.

    Zip does generic lossless compression, meaning the archive can be extracted to a bit-perfect copy of the original. Very simplified, it works by finding repeating patterns and replacing each long pattern with a short key, storing an index so the keys can be swapped back for the original patterns on extraction.

    JPGs use lossy compression, meaning some detail is lost and can never be reproduced. JPG is highly optimized to only drop details that don’t matter much for human perception of the image.

    Since a JPG is already compressed, there are barely any repeating patterns (duplicate information) left for the zip algorithm to find.
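
    You can see this for yourself with a quick sketch (Python’s zlib uses the same deflate algorithm that zip archives typically use; the photo path at the end is just a placeholder):

    ```python
    import os
    import zlib

    # Highly repetitive data compresses extremely well...
    repetitive = b"the same pattern " * 10_000
    print(len(repetitive), "->", len(zlib.compress(repetitive)))

    # ...while data with no repeating patterns barely shrinks at all.
    # Random bytes stand in here for an already-compressed JPG; the
    # small container overhead can even make the result slightly bigger.
    incompressible = os.urandom(200_000)
    print(len(incompressible), "->", len(zlib.compress(incompressible)))

    # Try it on a real photo (hypothetical path): the output size will
    # typically be within a couple of percent of the input size.
    # data = open("photo.jpg", "rb").read()
    # print(len(data), "->", len(zlib.compress(data)))
    ```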



  • I highly recommend you use Proxmox as the base OS. Proxmox makes it easy to spin up virtual machines, and easy to back up and revert to backups. So you’re free to play around and try stupid stuff. If you break something in your VM, just restore a backup.

    In addition to virtual machines, Proxmox also does “LXC containers”, which are system-level containers. They are basically very lightweight virtual machines, with some caveats like running the same kernel as the host.

    Most self-hosting software is released as a docker image. Docker does application-level containers, meaning only the bare minimum to run the application is included. You don’t enter a docker container to update packages; instead you pull down a new version of the image from the author.

    There are three ways to run docker on Proxmox (plus a fourth on the way):

    • Install docker inside a virtual machine (recommended).
    • Install docker inside an LXC container (not recommended because of various edge cases).
    • Install docker directly on the Proxmox host (not recommended for various reasons).
    • (There is also ongoing work on running docker images directly in Proxmox; this has been in beta/preview since Proxmox 9.1.)

    The “overhead” of running docker inside a VM on the host is so negligible, you don’t need to worry about it.


  • I had never heard of dockge before, but this sounds like the killer feature for me:

    File based structure - Dockge won’t kidnap your compose files, they are stored on your drive as usual. You can interact with them using normal docker compose commands

    Does that mean I can just point it at my existing docker compose files?
    My current layout is a folder for each service/stack, which contains the docker-compose.yaml plus data folders etc. for that service. The docker-compose and related config files are versioned in git.
    I have portainer, but rarely use it, and I won’t let it manage the configuration, because that interferes with versioning the config in git.
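
    For reference, the layout is roughly what this little sketch assumes (the root path and folder names are made up); it just lists each stack folder and checks that it has a compose file:

    ```python
    from pathlib import Path

    # Hypothetical root; each sub-folder is one service/stack, with its
    # docker-compose.yaml and data folders living next to each other.
    stacks_root = Path("/opt/stacks")

    for stack in sorted(p for p in stacks_root.iterdir() if p.is_dir()):
        compose = stack / "docker-compose.yaml"
        status = "ok" if compose.exists() else "no compose file!"
        print(f"{stack.name}: {status}")
    ```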



  • Thanks for sharing! TIL about autofs. Now I’m curious to try NFS again.

    What’s the failure mode if the NFS share happens to be offline when PBS initiates a backup? Does PBS try to back up anyway? What if the NFS share is offline while PBS boots?

    EDIT: What was the reason for NFS-mounting from the NAS to the host and then bind-mounting the share into the container, rather than mounting it directly?
    I did the NFS mount directly in PBS. (But I am running my PBS as a VM, so I had to do it that way.)


  • I run PBS as a virtual machine on Proxmox, with a dedicated physical hard drive passed through to PBS for the data.

    While this protects from software failures of my VMs, it does not protect from catastrophic hardware failure. In theory I should be able to take the dedicated hard drive out and put it in any other system running a fresh PBS, but I have not tested this.

    I tried running the same PBS with an external NFS share, but had speed and stability issues, mainly due to the hardware of the NFS host. And I wasn’t aware of autofs at the time, so the NFS share stayed disconnected.



  • SingleFile is a browser addon to save a complete web page into a single HTML file. SingleFile is a Web Extension (and a CLI tool) compatible with Chrome, Firefox (Desktop and Mobile), Microsoft Edge, Safari, Vivaldi, Brave, Waterfox, Yandex browser, and Opera.

    SingleFile can also be integrated with the hoarder and linkding bookmark managers via their browser extensions. So your browser does the capture, which means you are already logged in, have dismissed the cookie banner, solved the captchas, or dealt with whatever other annoyance is on the webpage.

    ArchiveBox, and I believe also Linkwarden, use SingleFile (but as a CLI from the server side) to capture web pages, alongside other tools and formats. This works well for simple/straightforward web pages, but not for annoying web pages with cookie banners, captchas, and other popups.



  • Reading your post again, you should start by moving your docker management from CasaOS to vanilla docker-compose files, and keep them in a git repo.

    I still think you definitely should look into NixOS and what it can offer, because it seems like that is where your mindset is going.

    But NixOS is a drastic change, so you should start by just converting your individual services one by one from CasaOS management to docker-compose files. One compose file for all services is possible, but I would recommend one compose file per service. Later you can move from Debian to NixOS while using the same docker-compose files.


  • I would like to have a system when I know what I did, what is opened/installed/activated and what is not

    You sound like you need to look into Nix and NixOS. The TLDR is that everything is declared in configuration file(s), which you can and should back up in git. The config files tell you exactly what you did, and the config file comments together with the git commit history tell you why.

    The whole system is built from this configuration. Rollback is trivially easy, either by rebooting and selecting an older build in the boot manager, or by reverting to an older git commit and rebuilding (no reboot required, so usually faster).

    Now, fair warning: Nix (and NixOS) is a big topic, very different from the normal way of thinking about software distribution and operating systems. Nix is not for everyone.

    You should also at the very least have a git repo for docker-compose files for your services. Again, that will declaratively tell you what you did and why.

    Also, if NixOS is too extreme, you should look into declarative management tools like Ansible, etc.



  • Hard to say without knowing which method you used to install HomeAssistant.

    But I have never found mDNS .local addresses to be very reliable. They work 80-90% of the time, but the remaining 10-20% are a hassle.

    Instead I’d recommend you install PiHole (running it in a docker container is easiest). PiHole is a DNS server intended for network-level ad-blocking, but it also has a handy feature for defining local DNS entries, so you can have HomeAssistant.myhome or HomeAssistant.whatever (.local should not be used with PiHole local DNS, because .local is reserved for mDNS).
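
    Once the entry exists in PiHole’s local DNS settings and your client uses PiHole as its DNS server, you can sanity-check the name from any machine with a quick sketch like this (the hostname is just an example):

    ```python
    import socket

    # Hypothetical local DNS name configured in PiHole; adjust to your own entry.
    hostname = "homeassistant.myhome"

    try:
        # Resolves through whatever DNS server this client is configured
        # to use, so this only succeeds if that server is your PiHole.
        addr = socket.gethostbyname(hostname)
        print(f"{hostname} resolves to {addr}")
    except socket.gaierror as err:
        print(f"Could not resolve {hostname}: {err}")
    ```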


  • Some key points regarding Proxmox:

    • Even if you only want to run two services, you still want to keep them isolated. This can save you much pain and frustration in the future when they require upgrades
    • Proxmox lets you easily manage VMs and LXC containers. So you can easily manage backups, or spin up a separate test instance of your service. Which, again, can save you pain and frustration when it comes to future updates of your services.
    • Backups are even better if you can deploy the separate Proxmox Backup Server
    • Should you ever want to add another service in the future, you can test it out in a new VM or container without it affecting your existing services at all
    • ZFS is indeed quite memory hungry, but AFAIK the memory is mainly used for the read cache, and ZFS can be tuned to use less RAM at the cost of performance.
    • ZFS is mentioned a lot because it’s good, but Proxmox also supports a range of other storage technologies: LVM, mdraid, EXT4, CEPH.
    • Proxmox is just standard Debian and KVM/QEMU virtual machines under the hood, which means you can use standard tooling and workflows should you need them for some edge case.
    • You mentioned Jellyfin in a container: My understanding is that Jellyfin in Docker has some extra limitations or complexities when it comes to hardware transcoding.
      • Jellyfin also has official documentation for how to deploy in LXC container and get HW transcoding working (Less complex than in Docker).
      • LXC containers are not like Docker containers. While a Docker container is meant to be an immutable image of a (single) application, LXC is more like a full-fledged VM, but without the overhead of virtualization. LXC containers are full systems, and you install software via the usual apt, dnf, etc.
      • The “correct” way to run Docker in Proxmox is to run Docker in a virtual machine. Installing Docker inside an LXC container is also possible, with some caveats. Installing Docker directly on the Proxmox host is not recommended.

    For reference, my oldest Proxmox server is a 2013 AMD dual-core with 16GB DDR2 RAM, with VMs on LVM-thin on a single SSD, and a legacy VM doing mdraid of 3 HDDs via hardware passthrough. Performance is still OK; the overhead from Proxmox is negligible compared to the strain from the actual workloads.