

EDIT: removing this comment because I don’t think you will use this feedback responsibly


If the container you’re hosting has an HTTP web service on, say, port 8080, then you’d want to curl something at http://localhost:8080/. The particular URL/path you hit will depend on the app. If the app is particularly cloud-y, it might even have a specific endpoint for health checking by a container platform. If you share the name of the app, I might be able to point you in the right direction.
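Something like this, for example. (The /health path here is just a guess; swap in whatever your app actually serves.)

```sh
# -f makes curl exit non-zero on HTTP errors, which is what you want
# for a health check; -sS keeps it quiet except for real errors.
curl -fsS http://localhost:8080/health

# No dedicated health endpoint? Hitting the root path still proves
# the service is up and answering:
curl -fsS http://localhost:8080/
```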


Wat? Why are people health checking their containers by curl’ing example.com and not the service actually running in the container? Did they not understand that they’re supposed to change the curl URL to point at their actual service?


Yea, after reading the article, this is an overhaul of the electronic application process that needs to happen before entry. And it’ll include not just social media handles, but also email addresses. Seems reasonably easy for a “bad guy” to skirt.


Honestly, I suspect this is a sneaky way to get CBP access to whatever data sharing shit the social media companies have with the rest of the spooks. Simply by attempting to enter the US, someone “agrees” to an automatic search of their social data.


It’s probably because entry for Canadians is specified by a different program. Even the State Department website seems to exclude Canada from the VWP.


If you’re considering video transcoding, I’d give Intel a look. Quicksync is pretty well supported across all of the media platforms. I do think Jellyfin is on a much more modern ffmpeg than Plex, and it actually supports AMD. But, I don’t have any experience with that… Only Nvidia and Intel. You really don’t need a powerful CPU either. I’ve got my Plex server on a little i5 NUC, and it can do 4k transcodes no problem.
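If you do go Intel with Jellyfin, getting Quicksync into the container is basically just passing /dev/dri through. Rough sketch, with placeholder paths you’d swap for your own:

```sh
# Pass the Intel GPU render device into the container so ffmpeg
# can use QSV/VAAPI for hardware transcoding.
docker run -d \
  --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /srv/jellyfin/config:/config \
  -v /mnt/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin
```

You’d still flip on hardware acceleration in the Jellyfin dashboard after that.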


You really don’t need an AIO with a 5600X. Just grab a reasonably sized tower cooler and call it a day. There’s less to fail, and less risk of water damage if it fails catastrophically. I’ve found Thermalright to be exceptionally good for how well priced they are. Not as quiet as Noctua, but damn near the same cooling performance.
Another thing to consider is that a 5600X doesn’t have built-in graphics. I think you’d need to jump up to AM5/7600X for that.


In general, on bare metal, I mount below /mnt. For a long time, I just mounted in from pre-configured host mounts. But I use Kubernetes, and you can directly specify an NFS mount there. So, I eventually migrated everything to that as I made other updates. I don’t think it’s horrible to mount from the host, but docker-compose does support directly defining an NFS volume (sketch below), which is one less thing to set up if you need to re-provision your docker host.
(quick edit) I don’t think docker compose reads and re-reads compose files. They’re read when you invoke docker compose, but that’s it. So…
If you’re simply invoking docker compose to interact with things, then I’d say store the compose files wherever makes the most sense for your process. Maybe think about setting up a specific directory on your NFS share and mounting that to your docker host(s). I would also consider version controlling your compose files. If you’re concerned about secrets, store them in encrypted env files. Something like SOPS can help with this.
As long as the user invoking docker compose can read the compose files, you’re good. When it comes to mounting data into containers from NFS… yes, permissions will matter, and it might be a pain, since it depends on how flexible the container you’re using is about user and filesystem permissions.
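For reference, here’s roughly what defining an NFS volume straight in compose looks like. The server address, export path, and image are placeholders:

```sh
# Sketch of a compose file with a volume mounted directly from NFS,
# via the local driver's NFS options. Adjust addr/device to your NAS.
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: nginx:alpine
    volumes:
      - appdata:/data

volumes:
  appdata:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,nfsvers=4,rw
      device: ":/export/appdata"
EOF
```

With this, the docker host mounts the export itself when the container starts, so there’s nothing to pre-configure in /etc/fstab.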


Docker’s documentation for supported backing filesystems for container filesystems.
In general, you should consider your container root filesystems completely ephemeral. But you will generally want them on low-latency, local storage. If you move most of your data to NFS, you can hopefully keep just a minimal local disk for images/containers.
As for your data volumes, it’s likely going to be very application specific. I’ve got Postgres databases running off remote NFS that are totally happy. I don’t fully understand why Plex struggles to run its database/config dir from NFS. Disappointingly, I generally have to host it on a filesystem and disk local to my docker host.
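If you want to pin the image/container storage to a specific local disk, that’s the daemon’s data-root setting. Something like this, where /ssd/docker is just a placeholder path:

```sh
# Keep images and container root filesystems on fast local disk
# while the actual app data lives out on NFS. Merge this with any
# existing daemon.json rather than clobbering it.
cat > /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/ssd/docker"
}
EOF
systemctl restart docker
```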


In general, container root filesystems and the images backing them will not function on NFS. When deploying containers, you should be mounting data volumes into the containers rather than storing things on the container root filesystems. Hopefully you’re already doing that; otherwise you’re going to need to manually copy data out of the containers. Personally, if all you’re talking about is 32 gigs max, I would just stop all of the containers, copy everything to the new NFS locations, and then re-create the containers pointing at the new NFS locations (rough sketch below).
All this said though, some applications really don’t like their data stored on NFS. I know Plex really doesn’t function well when its database is on NFS. But, the Plex media directories are fine to host from NFS.
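The stop/copy/re-create flow is basically this. Paths are placeholders, and it assumes your data currently lives in bind mounts on the host:

```sh
# Stop everything so nothing writes to the data mid-copy.
docker compose down

# -a preserves ownership and permissions, which matters for
# picky things like database data directories.
rsync -a /srv/appdata/ /mnt/nfs/appdata/

# Update the volume paths in docker-compose.yml to the NFS
# locations, then bring everything back up.
docker compose up -d
```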


Yea, I don’t think this is necessarily a horrible idea. It’s just that it doesn’t really provide any extra security, even though the very first line of the blog talks about security. It will absolutely provide privacy via pretty good traffic obfuscation, but you still need good security configuration on the exposed service.


If I understand this correctly, you’re still forwarding a port from one network to another. It’s just that in this case, instead of a port on the internet, it’s a port on the Tor network. Which is still just as open, but also a massive calling card for anyone trawling around the Tor network for things to hack.
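For context, the whole “port on the Tor network” bit is literally a two-line config. Paths and ports here are placeholders:

```sh
# Map port 80 of the .onion address to a local service on 8080,
# then restart tor to generate the onion address.
cat >> /etc/tor/torrc <<'EOF'
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080
EOF
systemctl restart tor
```

The service is exactly as exposed as it was before; the only thing that changed is who can find the front door.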


This isn’t about social platforms or using the newest-hottest tech. It’s about following industry standard practices. You act like source control is such a pain in the ass, some huge burden, and that I just don’t understand. Getting started with git is simple, and setting up an account with a repo host is a one-time thing. I find it hard to believe that you don’t already have SSH keys set up, too. What I find more controversial and concerning is your ho-hum opinion on automated testing, and your belief that “most software doesn’t do it”. You’re writing software that you expect people to not only run on their infra, but also expose to the public internet. Not only that, but it also needs to protect the traffic between the server on public infra and the client on private infra. There is a much higher expectation of good practices being in place. And it is clear that you are willingly disregarding basic industry standard practices.
GitHub and GitLab are free, and both even allow private repos for free at this point. Git is practically one of the first tools I install on a dev machine. Likewise, git is the de facto means of package management in golang. It’s so built in that module names are repo URLs.
Git was literally written by Linus to manage the source of the kernel. Sure, patches are proposed via mailing list, but the actual source is hosted and managed via git. It is literally the gold standard, and source control is a foundational piece of software development. The same goes for not just unit tests, but functional testing too. You absolutely should not be putting off testing.
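And to put “getting started is simple” in concrete terms, the entire initial setup is a handful of commands. The repo URL is a placeholder, and the branch name depends on your default:

```sh
git init
git add .
git commit -m "Initial commit"
# One-time setup: create an empty repo on GitHub/GitLab, then:
git remote add origin git@github.com:you/yourproject.git
git push -u origin main
```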


Gotta be honest, downloading security-related software from a random drive is sending off sketchy vibes. Fundamentally, it’s no different than a random untrusted git repo. But I really would suggest using some source control rather than trying to roll your own with diff archives.
Likewise, I would also suggest adding some unit and functional tests. Not only would it help maintain software quality, but it would also build confidence in the folks using the software you are releasing.


An issue with your statement “know what you’re doing by doing it” is that without an actually educated teacher providing trustworthy feedback, you are going to struggle to learn from your mistakes. The LLMs can only provide so much, and they will lie out their ass to you. Unless explicitly prompted to provide critical feedback, they will find any way to provide positive feedback, even to your actual detriment. They will happily skirt their sandboxes and fight your every attempt to make them actually safe.
At a quick glance, nothing in the project indicates that you are not an expert or that an AI agent provided the code. The quality of the code is also quite poor, even by Claude standards. I’m actually kinda mind blown you got it to build this without any tests… Something we’ve recently been talking about at my job, in terms of AI agents, is the “cognitive debt” that gets incurred in a project. LLMs are fundamentally statistical next-word generators. If they are given something of poor quality, they will tend to produce more and more poor-quality work. And without intervention, it just snowballs.
I’ll never tell someone to stop trying to learn. But your hubris is going to negatively impact your learning outcomes. And to be clear, YOU are not writing the code, and the code is what runs on the server and what people interact with. What you are doing is using an AI agent. If you want feedback on that, then be honest about it.