"LVM itself does not provide redundancy, that’s RAID."
I think this is potentially a bit confusing.
LVM does provide RAID functionality and can be used to set up and manage redundant volumes.
See --type and --mirrors under man 8 lvcreate.
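For a taste, a hedged sketch (the VG and LV names here are made up):

# two-way RAID1 logical volume on volume group vg0; survives losing one PV
lvcreate --type raid1 --mirrors 1 --name data --size 100G vg0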


My next suspicion from what you’ve shared so far, apart from what others suggested, would be something outside the HTTP server loop.
Have you used some free public DNS server and inadvertently queried it with the name from a container or something? Developer tooling building some app with analytics not disabled? Any locally connected AI agents with access to it?


You say you have a wildcard cert, but just to make sure: I don’t suppose you’ve used ACME with Let’s Encrypt or some other publicly trusted CA to issue a cert including the affected name? If so, it will be public in the Certificate Transparency logs.
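You can check quickly against crt.sh, one public CT search frontend (the hostname here is a placeholder):

# list certs logged for the name; needs curl and jq
curl -s 'https://crt.sh/?q=affected.example.com&output=json' | jq '.[].name_value'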
If not, I’d do it again and closely log and monitor every packet leaving the box.
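A minimal sketch of that monitoring (interface and name are placeholders):

# watch outbound DNS for the affected name showing up
sudo tcpdump -nli eth0 'udp port 53 or tcp port 53' | grep --line-buffered affected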


If anyone else is seeing high resource use from seeding: there’s quite a lot of spam and griefing happening to at least the Debian and Arch trackers and DHT.
Blocking malicious peers can cut that down by a lot. PeerBanHelper is like a spam filter for torrent clients.
https://github.com/PBH-BTN/PeerBanHelper/blob/dev/README.EN.md


On 1: Autoseeding ISOs over bittorrent is pretty easy, helps strengthen and decentralize community distribution, and makes sure you already have the latest stable locally when you need it.
While a bit more resource intensive (several hundred GB), running a full distribution package mirror is very nice if you can justify it. No more waiting for repository syncs and package downloads on installs and upgrades. apt-mirror if you are curious.
Otherwise, apt-cacher-ng will at least get you a seamless shared package cache on the local network. Not as resilient, but still very helpful in outage scenarios if you have more than one machine on the same distro. Set one to autoupgrade with unattended-upgrades and the packages should be available for the rest, too.
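The client side of apt-cacher-ng is a one-liner; here’s a sketch assuming the cache runs on a host reachable as "cacher" (3142 is the default port):

# /etc/apt/apt.conf.d/02proxy on each client machine
Acquire::http::Proxy "http://cacher:3142";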


Yes, Home Assistant has this.
https://rhasspy.readthedocs.io/en/latest/
Works great. My biggest challenge was finding a decent microphone setup; like many, I ended up with old PlayStation 3 webcams. That was a while back, and I’d guess it’s easier to find something more appropriate today.


"I am currently trying to transition from docker-compose to podman-compose before trying out podman quadlets eventually."
Just FYI, and not related to your problem: you can run docker-compose with the Podman engine; you don’t need the Docker engine installed for this. If podman-compose is set up properly, this is what it does for you anyway. If not, it falls back to an incomplete Python hack. Might as well cut out the middleman.
# enable the rootless Podman API socket for your user
systemctl --user enable --now podman.socket
# point docker-compose at the Podman API socket instead of Docker
DOCKER_HOST=unix://${XDG_RUNTIME_DIR}/podman/podman.sock docker-compose up


I think Mora is on the ball, but we’d need their questions answered to know.
One possibility is that you have SELinux enabled; check with sudo getenforce. The podman manpage explains a bit about labels and shares for mounts. Read up on the :z and :Z volume options and see if appending either to the volumes in your compose file unlocks it.
If running rootless, your host user also obviously needs to be able to access it.
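For illustration, a sketch of the :Z variant in a compose file (the service name and paths are made up):

services:
  web:
    image: docker.io/library/nginx
    volumes:
      # :Z = private label for this container; use :z if several containers share it
      - ./site:/usr/share/nginx/html:Z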


How about using Sieve rules? A nice plus is that if you ever move to self-hosting in the future, you can bring them with you.
I know at least Fastmail supports user-configured Sieve. I don’t have experience with Fastmail myself, but in general I’ve mostly heard good things.
https://www.cstrahan.com/blog/taming-email-with-fastmail-rules/
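For a taste, a minimal Sieve sketch (the header value and folder name are made up):

require ["fileinto"];
# file anything from the billing system into its own folder
if header :contains "from" "invoices@example.com" {
    fileinto "Finance";
    stop;
}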


You don’t interact much with lawyers and government in your work, I take it?


It sounds like notmuch is your bag. While it has its own CLI, it also works great with neomutt, aerc, and others.
https://youtube.com/watch?v=pBs_P_1--Os
You can also do very powerful presorting with Sieve if your server supports it.
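A quick sketch of the notmuch CLI flavor (the address and tag are made up):

notmuch new                                     # index newly arrived mail
notmuch tag +lists -- to:dev@lists.example.org  # tag by search terms
notmuch search tag:lists                        # query the index from anywhere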


Authorities in other European countries have been known to MitM SSL certs at VPS providers for years already. Switzerland is moving its legislation in the EU’s direction. Proton themselves have been vocal about their concerns about this.
How long until someone realizes they can demand that Proton inject some extra JS into the webmail for desired targets? Folks in a sensitive situation should follow the established best practice of not relying on remotely served JS for client-side encryption. To be safe against this vector, handle your encryption and signing outside of the webmail; either in your own client or by copy/pasting.
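For the copy/paste route, a minimal GnuPG sketch (the recipient address is a placeholder):

# encrypt and sign locally; paste the ASCII-armored message.txt.asc into the webmail
gpg --armor --encrypt --sign -r friend@example.org message.txt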


https://discuss.privacyguides.net/t/proton-mail-discloses-user-data-leading-to-arrest-in-spain/18191
Before that: https://www.wired.com/story/protonmail-amends-policy-after-giving-up-activists-data/
There are many, many more cases we don’t hear about in media.
If you consistently connect to Proton via I2P or Tor and don’t link a phone number or traceable recovery mail, you’re covering up at least some of the juicy metadata.


The expected payout is negative, so no, it’s not similar to investing. It’s pretty much the opposite.
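To illustrate with made-up numbers: a bet that pays 2x your stake with a 45% chance of winning has an expected value of 0.45 × 2 − 1 = −0.10, an average loss of 10% of the stake per play. Broad investing, by contrast, has a positive expected return; that’s the difference.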


Another thing they may have in mind is ATX PSUs. The pinouts on those, for the same physical plug, vary not only by maker and model but sometimes even by year. So if you get an aftermarket ATX-to-SATA cable that fits just fine in the SATA plug on your ATX PSU, it may put 12 V on the 5 V line and fry your drives or mobo when you plug it in, even if it’s from the same brand.
Don’t ask me why there is a voltmeter on my desk.
Personally I’m too paranoid about security and sus of Intel to be comfortable with vPro but you do you.
That said, I’d go for 1, considering you already have that 6th gen on hand in case you need a spare.
Otherwise 3 or 4 (whichever is available on secondary markets for a decent price) and hang on to that Pentium in case the need arises. Doesn’t sound like the extra power draw of an i7 is worth it for this build.


Up to 300 or so could be reasonable if the RAM and SSD are decent.
What you can do is segregate networks.
If the browser runs in, say, a VM with access only to the intranet and no internet access at all, this risk is greatly reduced.
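A hedged sketch of the idea with nftables on the VM host (the subnets are made up, and it assumes an existing inet filter table with a forward chain):

# let the browser VM subnet reach the intranet, and nothing else
nft add rule inet filter forward ip saddr 10.0.10.0/24 ip daddr 192.168.1.0/24 accept
nft add rule inet filter forward ip saddr 10.0.10.0/24 drop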