

Yeah, with Amazon’s sheer size this has definitely been done before; curious what limits OP is going to hit. My guess is they have a quota for submissions, and they’ll be banned from submitting tickets.
Little bit of everything!
Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech )
Gaming (Mass Effect, Witcher, and too much Satisfactory)
Sci-fi
I live for 90s TV sitcoms
I mean, go for it? They literally can’t do anything; you might as well complain that fire is hot. It’s part of being on the Internet. They provide safety gloves via VPCs and firewalls, but if you choose not to use them then… yeah, you’re probably gonna get burned.
Uh, sorry dude, but no, this isn’t a script kiddie. These are bots that scan every IP address every day for any open ports; it’s a constant thing. If you have a public IP, you have people, governments, and nefarious groups scanning it. AWS will tell you the same as if you were hosting it locally: close up the ports and put it on a private network. Use a VPC and WAF in AWS’ case.
I get scanned constantly. Every hour of every day, dark forces attempt to penetrate our defences.
I really wanted it to work; for me it made the most sense, I thought, with as little virtualization as possible. A VM felt like such a heavy layer in between, but it just wasn’t meant to work that way. You have to essentially run your LXC as root, meaning it’s essentially just the host anyway so it can run Docker. Then, when you get down to it, you’ve lost all the benefits of the LXC versus just running Docker. Not to mention that any time there was even a minor update to Proxmox, something usually broke.
I’m surprised Proxmox hasn’t added straight-up support for containers, either via Docker, Podman, or even just containerd directly. But we aren’t its target audience either.
I’m glad you can take my years of struggling to find a way to get it to work well and learn from it.
Not at all. Proxmox does a great job at hosting VMs and giving you a control plane for them, but it does not do containers well. LXCs are a thing, and it hosts those, but never try to do Docker in an LXC. (I tried so many different ways and guides, and there were just too many caveats, and you always end up essentially giving root access to your containers, so it’s not great anyway.) I’d like to see Proxmox offer some sort of Docker-first approach where it will manage volumes at the Proxmox level, but they don’t seem concerned with that, and honestly, if you’re doing that, then you’re nearing Kubernetes anyway.
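For anyone determined to try Docker-in-LXC anyway, the usual workaround is enabling the nesting and keyctl features on the container. A sketch of what that looks like in a Proxmox LXC config file (the VMID and ostype are just example values), which still runs into the caveats above:

```
# /etc/pve/lxc/100.conf  (100 is a placeholder VMID)
# Unprivileged container with nesting enabled so dockerd can start at all.
# Even with this, storage drivers, cgroups, and Proxmox updates tend to
# cause the breakage described above.
arch: amd64
ostype: debian
unprivileged: 1
features: keyctl=1,nesting=1
```

These are the same flags exposed as “Nesting” and “keyctl” under the container’s Options in the Proxmox UI.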
Which is what I ended up doing: k3s on Proxmox VMs. Proxmox handles the hosts themselves, I spin up a VM on each host, and then I run k3s from within there. Same paradigm as the major cloud providers: GKE, AKS, and EKS all run k8s within VMs on their existing compute stack, so this fits right in.
Just focus on one project at a time, and break it into small victories that you can celebrate. A project like this is going to take more than a single weekend. Just get Proxmox up and running. Then a simple VM. Then a backup job. Don’t try to get everything, including Tailscale, working all at once. The learning curve is a bit more than you’re probably used to, but if you take it slow and focus on those small steps, you’ll be fine.
I think at this point I agree with the other commenter. If you’re strapped for storage it’s time to leave Synology behind, but it sounds more like it’s time to separate your app server from your storage server.
I use Proxmox, and it was my primary when I got started with the same thing. I recommend building out storage in Proxmox directly; that will be for VM images and container volumes. Then run regular backups to your Synology box. That way you have hot storage for running things and cold storage for backups.
Then, inside your vms and containers you can mount things like media and other items from your Synology.
For you, I would recommend Proxmox, then on top of that a big VM for running Docker containers. In that VM you have all of your mounts from Synology, like your Jellyfin media, and you pass those mounts into Docker.
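A minimal sketch of that layout, assuming the Synology share is already mounted in the VM at /mnt/synology/media (the paths and service are just examples, not a prescribed setup):

```yaml
# docker-compose.yml inside the big Docker VM
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /opt/jellyfin/config:/config     # hot storage on Proxmox-local disk
      - /mnt/synology/media:/media:ro    # share mounted from the Synology
    ports:
      - "8096:8096"
    restart: unless-stopped
```

Config and databases stay on the fast local storage; only the bulk media lives on the NAS mount.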
If you ever find yourself needing to stretch beyond the one box, then you can think about kubernetes or something, but I think that would be a good jump for now.
I don’t know of any millennial or younger who assumes there will be a safety net for them at the end of the road. We just don’t trust those in charge to keep it. I’ll fight for it, I paid into it and I want others to have it, but I can’t bank on it either
Seconded. If they can’t optimize their code (I have never seen applications require 256 gigs of RAM, even at a FAANG, so I find that doubtful), then they need to rent a machine. The cloud is where you rent it. If not Google, then AWS, Azure, DigitalOcean; any number of places let you rent compute.
Researchers always make some of the worst coders, unfortunately.
Scientists, pair up with an engineer to implement your code. You’ll thank yourself later.
I don’t blame them; they come from Reddit and expect the exact same. To them it looks like a feature is missing, when in reality it was deliberately chosen.
OP don’t take the downvotes personally, it’s just that the question has been asked many times before
Is there a site like this for other refurbished items beyond storage? Like cases, RAM, CPUs?
The article said there’s a phone companion app
There’s a good chunk of us who used it with the web, and they’re adding that to the vast Google graveyard. That in and of itself makes me excited to see an alternative because Google will kill the app version on a whim too.
Agree with this guy. I’ve done the tech job hunt too many times now; if you like, feel free to DM me your LinkedIn and I’d be happy to give impressions on it.
That’s what I thought was happening. Are there no other system things that it needs to do?
Agreed. It needs to be a required mount in fstab. That way the system won’t even start if the mount fails, so Docker always has access.
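As a sketch, assuming an NFS share from a NAS at 192.168.1.10 (address and paths are just example values): leaving off the `nofail` option makes the mount required, so a failed mount stops boot instead of letting Docker come up with an empty directory, and `_netdev` tells the system to wait for the network first.

```
# /etc/fstab — no "nofail", so boot halts if this mount fails
192.168.1.10:/volume1/media  /mnt/media  nfs  defaults,_netdev  0  0
```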
There is more than one Spider-Man.
I honestly agree. The casting of the movies was perfect in my opinion. Mr. and Mrs. Weasley. Snape. Trelawney. McGonagall. Harry, Ron, Hermione. It was great casting up and down. They’re never going to get a cast like that again, so honestly, who cares? That’s how I view this whole show: it’s just a cash grab, so who cares.
I’ll post more later (reply here to remind me), but I have your exact setup. It’s a great way to learn k8s and yes, it’s going to be an uphill battle for learning - but the payoff is worth it. Both for your professional career and your homelab. It’s the big leagues.
For your questions, no to all of them. Once you learn some of it the rest kinda falls together.
I’m going into a meeting, but I’ll post here with how I do it later. In the meantime, pick one and only one container you want to get started with. Stateless is easier to start with than something that needs volumes. Piece by piece, brick by brick, you will add more to your knowledge and understanding. Don’t try to take it all on in day one. First just get a container running. Then access via a port and HTTP. Then a proxy. Then certs. Piece by piece, brick by brick. Take small victories; if you try to say “tomorrow everything will be on k8s,” you’re setting yourself up for anger and frustration.
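As a sketch of that first brick, here’s what “one stateless container, reachable on a port” looks like in k8s; the name and the nginx image are just placeholders for whatever app you pick:

```yaml
# hello.yaml — one stateless Deployment plus a NodePort Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:alpine   # placeholder stateless image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
```

Apply it with `kubectl apply -f hello.yaml` and hit the NodePort it gets assigned; the proxy and cert steps come after that works.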