

Idk of any but I’m interested, commenting to add traffic 👍


Considering the high overlap between Lemmy users and internet-savvy people, I would say that we are not a representative sample.


For relationships: “Is what I’m about to say/do beneficial to me, this person, or the relationship?”
If not, it’s probably my ego or hurt feelings talking, so I should let it go for now; if I still feel the need, I can analyse it later and decide again.
Also, never go to the grocery store hungry or emotional.


Buying new: basically any of the integrated-memory options like Macs and AMD’s new AI chips; after that, any modern (last ~5 years) GPU, focusing almost entirely on VRAM (currently Nvidia is better supported in SOME tools).
Buying second-hand: you’re unlikely to find the integrated-memory stuff, so look for any GPU from the last decade that is still officially supported, again focusing on VRAM.
8 GB is enough to run basic small models, 20+ GB for pretty capable 20-30B models, 50+ GB for the 70B ones, and 100-200+ GB for full-sized models.
These are rough estimates, so do your own research as well (there’s a back-of-envelope sketch below).
For the most part, with LLMs for a single user, you really only care about VRAM and storage speed (SSD). Any GPU will generate faster than you can read for anything that fully fits in its VRAM, so the GPU itself only matters if you intend to run large models at extreme speeds (for automation tasks, etc.). Storage is only a bottleneck at model load, so depending on your needs it might not be a big issue, but for example with a 30 GB model you can expect to wait 2-10 minutes for it to load into VRAM from an HDD, about 1 minute from a SATA SSD, and about 4-30 seconds from an NVMe drive.
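If it helps, here’s a tiny calculator for the kind of numbers above. Every constant in it (bytes per parameter for each quantization, disk throughput) is my own ballpark assumption, not a measured value, so swap in figures for your actual hardware:

```python
# Back-of-envelope sketch: model size and load time from parameter count.
# All constants below are rough assumptions, not benchmarks.

BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}        # common quantizations
DISK_MB_PER_S = {"hdd": 100, "sata_ssd": 500, "nvme": 3000}  # ballpark throughput

def model_size_gb(params_billion: float, quant: str = "q4") -> float:
    """Approximate size of the weights (ignores KV cache and other overhead)."""
    return params_billion * BYTES_PER_PARAM[quant]

def load_time_s(size_gb: float, disk: str) -> float:
    """Time to stream the weights from disk into VRAM."""
    return size_gb * 1024 / DISK_MB_PER_S[disk]

for b in (7, 30, 70):
    size = model_size_gb(b, "q4")
    print(f"{b}B @ q4 ~ {size:.0f} GB, "
          f"HDD {load_time_s(size, 'hdd'):.0f}s, "
          f"SATA {load_time_s(size, 'sata_ssd'):.0f}s, "
          f"NVMe {load_time_s(size, 'nvme'):.0f}s")
```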


You can sniff the network and see if the TV is connecting anywhere.
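If you want a starting point, here’s a minimal sketch using scapy. The TV’s IP is a placeholder (check your router’s DHCP leases for the real one), and it only works if your capture point actually sees the TV’s traffic, e.g. running on the router or on a mirrored switch port:

```python
# Minimal sketch: log every destination the TV sends packets to.
from scapy.all import IP, sniff

TV_IP = "192.168.1.50"  # placeholder -- check your router's DHCP leases

def log_destination(pkt):
    # Print where the TV's packets are going.
    if IP in pkt:
        print(f"{TV_IP} -> {pkt[IP].dst}")

# The BPF filter keeps the capture cheap; needs root / CAP_NET_RAW.
sniff(filter=f"src host {TV_IP}", prn=log_destination, store=False)
```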


It’s very unlikely that both your TV and the device connected to it support and enable Ethernet over HDMI by default. But if you are unsure, you can test it by connecting the cable and checking whether the TV gets a network connection.
Personally, I also opened my TV and disconnected the Wi-Fi card, since in theory the TV could try to connect to any open Wi-Fi in the area without me knowing, but to each their own threat model.


Anything exposed to the internet gets a daily/weekly update, depending on how exposed it is, how stable the updates are, and how critical a breach would be. For example, nginx gets a daily update.
Anything behind a VPN gets a more random update schedule, mostly based on when I feel like it (probably around once a month or every other month).


Tip: look at second-hand sites / FB Marketplace (I know 😒); you can find great deals.


Maybe. Show me a fashionable ghost and I might believe in it.


So many ways, it’s almost unbelievable that it still works


Ollama + Open WebUI + Tailscale/Netbird.
Open WebUI provides a fully functional Docker image bundled with Ollama, so just find the section that applies to you (AMD, Nvidia, etc.): https://github.com/open-webui/open-webui?tab=readme-ov-file#quick-start-with-docker-
On that host, install Netbird or Tailscale, and install the same on your phone. In Tailscale you need to enable MagicDNS; Netbird, I believe, provides DNS by default.
Once the container is running and both your server and phone are connected to the VPN (Netbird or Tailscale), you just type your server’s DNS name into your phone’s browser (in Netbird it would be “yourserver.netbird.cloud” and in Tailscale “yourserver.yourtsnet.ts.net”). There’s a small sketch below of talking to the API the same way.
Check out NetworkChuck on YouTube, as he has a lot of simple tutorials.
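As a bonus, once the VPN’s DNS is working, anything on your devices can reach the server by name, not just the browser. A minimal sketch, assuming the hostname from the Netbird example above, that you’ve published Ollama’s default port 11434 from the container, and a placeholder model name:

```python
# Minimal sketch: query the Ollama API through the VPN's DNS name.
# "yourserver.netbird.cloud" and "llama3" are placeholders for your setup.
import requests

resp = requests.post(
    "http://yourserver.netbird.cloud:11434/api/generate",
    json={
        "model": "llama3",       # any model you've pulled with `ollama pull`
        "prompt": "Say hello in one sentence.",
        "stream": False,         # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```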


There are a few reasons someone might use Proxmox. It doesn’t have to be just security; it can also be network architectures that don’t map well onto Docker, or simply greater control over the services, which is less comfortable in Docker since it’s built around pre-built images that run ephemerally. Some services also don’t have a pre-built image, and not everyone wants to build and maintain their own Docker infrastructure around them, or they rely on technologies that aren’t well supported or well executed in Docker.
There is also the fact that Proxmox is meant for production, which means it’s more stable (than some casual Docker daemon running on whatever distro they have) and has very low overhead. Even if you do use containers, you can run them inside Proxmox, and it gives you a lot of capabilities that add to stability and manageability.
Generally speaking, if your threat model is small, you’re running this within your private network, and nothing is exposed to the internet, then it doesn’t make a big difference and you should probably just use whatever is comfortable for you.
I personally moved to Proxmox for three reasons: security, customizability, and stability. Within Docker it was a lot more annoying to pull images, write my own Dockerfiles, and update and rebuild them every time. I find it easier to have my own VM with its dedicated service, one that I built from scratch and know how to update and modify properly. There is also the advantage that I can use whatever OS fits the situation. I personally use Linux exclusively, but even within that I can use different distros and run all kinds of services without them interfering with one another in any way, and in extreme cases I can spin up a Windows VM.
And another major factor for me was that I just wanted to learn how to do it. I think it’s cool, it was interesting, and I had already used Docker to the point of being comfortable with it; it was time to move on and expand my horizons.


So basically “if they are not a state we kill them slowly, if they are a state we kill them quickly”?


Tip: if you have the room for it, looking for second-hand servers (as in actual servers with server hardware) is often really useful.
As you start hosting more stuff, you realize that RAM and CPU cores are very limited in consumer hardware. With a cheap second-hand server you can get more cores and more RAM than anything in the consumer category, and you can stick an old GPU in it if you want better media performance.
But if you truly believe that you won’t spread out and that 64 GB of RAM and 8 cores will suffice, just go ahead and build it however you want; it’s no different from a regular build. Get a nice SSD and a wired Ethernet connection and you’re about 90% of the way there.
Edit: everyone else is giving much better advice, ignore my overkill here. For media and simple game servers with a low energy-consumption target, you are probably better off with a mini PC with an integrated GPU, or, if you want to future-proof a bit, one of those unified-memory machines where your RAM is also the VRAM and can deliver pretty good performance.


Canons are a dangerous weapon


Only Tailscale for VPN and Backblaze for backup.


Are you urine? Cuz pee my heart.


No shit. If you tell people what they want to hear, they trust you more.
The sad thing is that as a society we are stupid, while as individuals we are (possibly) the smartest living beings. This leaves the smart stuck either respecting the dumb as adults and being honest with them (which leads to where we are now) or treating them like children who can’t yet handle reality as it is (which probably leads to other issues).
We are at the tipping point of finding out that giving all adults the same treatment lets the smart-enough ones abuse and manipulate the not-smart-enough ones into acting against their own interests, and against the really smart ones too.


Hmm, can’t really argue with that.


The Great Ice Cream would be post-WWIII, where Germany would be forced to buy everyone ice cream.