Alt account of @Badabinski

Just a sweaty nerd interested in software, home automation, emotional issues, and polite discourse about all of the above.

  • 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 9th, 2024


  • Do you have any sources for the 10x memory thing? I’ve seen people who have made memory usage claims, but I haven’t seen benchmarks demonstrating this.

    EDIT: glibc-based images wouldn’t be using service managers either. PID 1 is your application.

    EDIT: In response to this:

    There’s a reason a huge portion of docker images are alpine-based.

    After months of research, my company pushed thousands and thousands of containers away from Alpine for operational and performance reasons. You can get small images using glibc-based distros; just look at Chainguard if you want an example. We saved money (many, many dollars a month) and had fewer tickets once we finished banning Alpine containers. I haven’t seen a compelling reason to switch back, and I just don’t see much to recommend Alpine outside of embedded systems where disk space is actually a problem. I’m not going to tell you that you’re wrong for using it, but my experience has basically been a series of events telling me to avoid it. Also, I fucking hate the person who decided musl’s resolver wasn’t going to do search domains properly or DNS over TCP.
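
    As a rough sketch of the kind of swap I mean (the image tags and file names here are illustrative, not what we actually run): a glibc-based slim or Chainguard base keeps the image small without musl, and the app itself runs as PID 1.

        # musl-based starting point:
        # FROM python:3.12-alpine
        # glibc-based alternatives that stay small: Debian slim, or a Chainguard image such as cgr.dev/chainguard/python
        FROM python:3.12-slim
        COPY app.py /app/app.py
        # The application is PID 1; there is no service manager inside the container.
        CMD ["python", "/app/app.py"]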


  • Debian is superior for server tasks. musl is designed to optimize for smaller binaries on disk; memory is a secondary goal, and CPU time is a non-goal. musl isn’t meant to be fast, it’s meant to be small and easily embedded. Those are great things if you need to run in a network- or disk-constrained environment, but for a server? Why waste CPU cycles on a libc that is, by design, less time-efficient?

    EDIT: I had to fight this fight at my job. We had hundreds of thousands of Alpine containers running, and switching them to glibc-based containers resulted in quantifiable cloud-spend savings. I’m not saying musl (or Alpine) is bad, just that it’s horses for courses.


  • Is it? I thought the thing musl optimized for was disk usage, not memory usage or CPU time. It’s been my experience that Alpine containers are worse than their glibc counterparts because glibc is damn good; it’s definitely faster in many cases. I think this is fixed now, but I remember when musl made the Python interpreter run like 50-100x slower.

    EDIT: musl is good at what it tries to be good at. It’s not trying to be the fastest, it’s trying to be small on disk or over the network.


  • WireGuard was written with the explicit goal of having sane, secure defaults. I totally feel you w.r.t. OpenVPN or IPsec, since it’s easy to do something wrong with those. WireGuard is much easier because it simply refuses to give you the choice to do things incorrectly (there’s a minimal config sketch below).

    w.r.t. the certificate thing, you could set up a reverse proxy and use HSTS to ensure nobody can load up a rogue CA on your devices. HSTS has the same issue SSH has (trust on first use), but you just need to make sure nobody is MITMing you for that first connection and then you’ll be good to go. This would let you use a self-signed certificate if you so desired.
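
    To make the first point concrete: a complete wg-quick config is only a handful of lines, so there’s very little room to get it wrong. The keys and addresses below are placeholders, not a recommendation for your network layout.

        [Interface]
        PrivateKey = <server-private-key>
        Address = 10.8.0.1/24
        ListenPort = 51820

        [Peer]
        PublicKey = <client-public-key>
        AllowedIPs = 10.8.0.2/32

    And for the reverse proxy idea, a minimal nginx sketch with HSTS (hostnames and cert paths are placeholders; this is the general shape, not a drop-in config):

        server {
            listen 443 ssl;
            server_name home.example.internal;

            ssl_certificate     /etc/nginx/certs/self-signed.crt;
            ssl_certificate_key /etc/nginx/certs/self-signed.key;

            # After the first trusted visit, browsers refuse plain HTTP and won't let you click through certificate errors.
            add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

            location / {
                proxy_pass http://127.0.0.1:8080;
            }
        }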


  • Nah, the slow hop-hop-hop is like a jog. Mustelids can fucking zoom if they’re in danger or after prey. Like, even dopey-ass domesticated ferrets can get going pretty damn quick when they’ve been hurt or feel threatened. Nobody has posted what species of otter attacked this lady, but river otters can reach speeds of 47 kph (29 mph) on land. Sea otters are slow and fat, but these weren’t sea otters.

    You aren’t outrunning a pack of otters in a sprint. There’s no question that you could outrun them over a long distance, but mustelids are zoomy little fuckers.

    (note that I like mustelids and had 4 ferrets, so please don’t mistake my tone as being sour on them)

    EDIT: holy shit, ferrets can be bred and trained to run at like 22 mph. That’s insane!



  • The other person may have responded with a fair amount of hostility, but they’re absolutely correct. I run Kubernetes clusters hosting millions of containers across hundreds of thousands of VMs at my job, and OOMKills are just a fact of life. Apps will leak memory, and you’re powerless to stop it unless you’re willing to debug the app and fix the leak. It’s better for the container to hit its own memory limit and trigger a cgroup-scoped OOM kill. A system-wide OOM kill will murder the things you love, shit in your hat, and lick your face like David Tennant licked Krysten Ritter.
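
    For anyone who wants the concrete version: per-container memory limits are what keep the OOM kill cgroup-scoped. A minimal sketch (the name, image, and numbers are made up):

        apiVersion: v1
        kind: Pod
        metadata:
          name: leaky-app                                   # hypothetical
        spec:
          containers:
            - name: app
              image: registry.example.com/leaky-app:latest  # hypothetical
              resources:
                requests:
                  memory: "256Mi"   # what the scheduler reserves on the node
                limits:
                  memory: "512Mi"   # crossing this OOM-kills this container's cgroup only

    The kernel then picks a victim inside that container’s cgroup instead of hunting across the whole node.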