

Yes! One of my most favorite Johnny Depp roles.
A person with way too many hobbies, but I still continue to learn new things.
They don’t want to admit they’ve been screwing us over even though we all know it’s happening. All these companies could have rolled out suitable internet speeds a decade earlier but they would rather limit everyone to the lowest common denominator so they don’t have to admit just how terrible their equipment is in most locations.
I’ve gotta say, having city-owned fiber is great: folks here don’t have to wait weeks for Comcast to send out a tech who conveniently never shows up on the scheduled day, and customer service actually has a clue what they’re talking about. This is how a public service should operate.
I would say cable TV coax has quite a lot more capacity than what the providers let on. In my city they offered up to 50mbps at over $100/month. Then they lost their lawsuit trying to prevent the city from installing its own fiber network, and suddenly the cable company decided they could offer 150mbps for around $75/month (with no equipment changes). Once the fiber network started becoming operational (offering 1gbps bidirectional for $50/month) the cable company decided they’d better also offer gigabit connection speeds, but once again they simply flipped a switch to increase your bandwidth. This capability has been in place for quite some time; they just didn’t want to offer it, and their illegal “monopoly” gave them no incentive to provide competitive speeds.
*I say “monopoly” even though we technically also have DSL available in town. However, when I asked one of the techs why DSL couldn’t give me more than 896kbps upload speed, I was told that the cable company had an arrangement with them which prevented the DSL from providing the speeds needed by businesses. After the lawsuit that broke up the state-wide bans on other providers, this practice was exposed and also broken up, so now the telco is able to max out their DSL speeds.
You might check if a simple CPU upgrade would get you there. I previously ran some 2005 PowerEdge servers that came with a Pentium D processor, and it cost me something like $8 from ebay to upgrade to a Xeon and start running KVM.
Keep an eye out for people trashing perfectly good desktop machines because Windows 10 is being retired.
If you want a server that “does it all” then you would need to get the most decked-out top of the line server available… Obviously that is unrealistic, so as others have mentioned, knowing WHAT you want to run is required to even begin to make a guess at what you will need.
Meanwhile here’s what I suggest – Grab any desktop machine you can find to get yourself started. Load up an OS, and start adding services. Maybe you want to run a personal web server, a file server, or something more extensive like Nextcloud? Get those things installed, and see how it runs. At some point you will start seeing performance issues, and this tells you when it’s time to upgrade to something with more capability. You may simply need more memory or a better CPU, in which case you can get the parts, or you may need to really step up to something with dual-CPU or internal RAID. You might also consider splitting services between multiple desktop machines, for instance having one dedicated NAS and another running Nextcloud. Your personal setup will dictate what works best for you, but the best way to learn these things is to just dive in with whatever hardware you can get ahold of (especially when it’s free), and use that as your baseline for any upgrades.
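If it helps, a few stock commands (standard on most Linux distros, from procps/coreutils) give you a quick baseline to compare against as you add each service, so you can tell when you’re actually hitting a wall rather than guessing:

```shell
# Snapshot the basics before and after bringing up each new service
free -h      # RAM and swap actually in use
uptime       # load average vs. your core count (see: nproc)
df -h /      # disk space left on the root filesystem
vmstat 1 2   # a couple of samples of CPU, I/O, and swap activity
```

If the load average is consistently above your core count, think CPU; if free memory stays pinned near zero with swap climbing, think RAM.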
Crepuscular rays: shafts of light which are seen just after the sun has set and which extend over the western sky radiating from the position of the sun below the horizon. They form only when the sun has set behind an irregularly shaped cloud or mountain which lets the rays of the sun pass through a cloud in bands.
In this case, I think the light is making it look like the cloud is dark bands, when in fact the entire cloud is probably all dark but being lit up by the rays. (And yes, this effect can be seen both at sunrise and sunset.)
Since I’m 57 and have paid some attention to how I’ve changed over the years, perhaps I can add a little insight? Quite frankly, you get tired. I’ve been on the scene since the home computer revolution took off and I’ve seen so many things come and go. It’s not that we can’t learn new forms of communication, etc., but rather that after a while you start asking yourself why bother when the “next big thing” is going to be another forgotten memory in 5-10 years. It’s not you specifically who’s being criticized for wasting items; it’s all the people like you over the years who have collectively wasted so much. Our brains remember all those things and they add up, causing us to fixate on the wrong info (although this last bit isn’t really something that comes with age).
Last night I re-watched The Fifth Element. Afterwards I was thinking about when it first came out in 1997. My god, that’s 28 years ago. I remember things from the 90’s. I remember things from the 80’s and from the 70’s. I remember that after 9/11 the 00’s were boring as fuck. But when you put all of that together, and start thinking about how much you’ve experienced… holy hell that’s quite a lot to face squarely. And if I tell you something inappropriate about a co-worker… what? HR will pull me away from the monotony and have a talk with me? Experience tells us what we can get away with, and sometimes it’s fun just to see what people’s reactions are.
So yeah, I’ve observed these things, but I refuse to be pulled down into misery and monotony. Keep yourself busy doing things that you enjoy. Never be afraid to go down the rabbit hole and learn crazy new things. I’m working on assembling a couple swords from parts, looking into bluing some steel pieces I made. And just this week I learned about “rust bluing” which is a crazy concept but is easy to do at home. I learned something new and fun, and I refuse to ever stop learning. I may not care about Instagram or Facebook, but I installed Signal on my phone and I love being able to create my own 3D models and printing them out.
The future is always amazing. Age doesn’t make us care less about it, it just makes us more choosy in what parts are worth investing in. If you don’t want to become a listless old geezer, then don’t… all you need to do is keep enjoying the wonders of the world.
But why doesn’t it ever empty the swap space? I’ve been using vm.swappiness=10 and I’ve tried vm.vfs_cache_pressure at 100 and 50. Checking ps I’m not seeing any services that would be idling in the background, so I’m not sure why the system thought it needed to put anything in swap. (And FWIW, I run two servers with identical services that I load balance to, but the other machine has barely used any swap space – which adds to my confusion about the differences).
Why would I want to reduce the amount of memory in the server? Isn’t all that cache memory being used to help things run smoother and reduce drive I/O?
And how does cache space figure into this? I have a server with 64GB of RAM, of which 46GB is being used by system cache, but I only have 450MB of free memory and 140MB of free swap. The only ‘volatile’ service I have running is slapd which can run in bursts of activity; otherwise the only thing of consequence running is webmin and some VMs which collectively can use up to 24GB (though they actually use about half that), but there’s no reason those should hit swap space. I just don’t get why the swap space is being run dry here.
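For what it’s worth, on a Linux box you can confirm the tunables actually in effect and see exactly which processes are holding swap by summing the VmSwap field from /proc (all standard kernel interfaces, nothing extra to install):

```shell
# Confirm the tunables actually in effect
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/vfs_cache_pressure

# List the processes holding the most swap (VmSwap, in kB, per process)
awk '/^Name:/ {name=$2} /^VmSwap:/ && $2 > 0 {print $2, "kB", name}' \
    /proc/[0-9]*/status 2>/dev/null | sort -rn | head

# Note: the kernel never proactively swaps pages back in -- they sit in
# swap until the owning process touches them, which is why usage never
# drains on its own. To force everything back into RAM (only if you
# have enough free memory to hold it all):
#   swapoff -a && swapon -a
```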
If your card has an x4 pinout, then it probably needs the additional bandwidth. Plugging it into an x1 slot (if it was possible) would slow down the network traffic. Get a better motherboard with an x4 slot on it so you can use the hardware you want, or find something else that will fit your computer.
Honestly even the 1Gb quad port card I have requires an x4 slot, although I saw some dual-port 2.5Gb x1 cards on ebay. Maybe you could just use two of those?
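For a sanity check on whether an x1 slot could ever be enough, the per-lane math is easy to work out. The per-generation rates below are the standard PCIe figures; which generation your particular slot negotiates is the thing you’d have to verify:

```python
# Back-of-envelope PCIe bandwidth per slot width
GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0}        # gigatransfers/s per lane
ENCODING_EFF = {1: 0.8, 2: 0.8, 3: 128 / 130} # 8b/10b vs. 128b/130b overhead

def usable_gbps(gen: int, lanes: int) -> float:
    """Usable bandwidth in Gb/s for a given PCIe generation and lane count."""
    return GT_PER_LANE[gen] * ENCODING_EFF[gen] * lanes

print(usable_gbps(2, 1))  # 4.0  -- Gen2 x1: marginal for a quad-port 1GbE card
print(usable_gbps(2, 4))  # 16.0 -- Gen2 x4: plenty of headroom
```

Which is why a dual-port 2.5Gb card can get away with an x1 connector on a Gen2 or Gen3 slot, while the quad-port gigabit cards generally can’t.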
Yeah? OK well it’s certainly worth taking a closer look at, and I was also doing some reading on Yacy. I’ve run one in the past called mnogosearch, with a lot of customization, but it would be nice to get into a community project like this.
But is it decentralized? Do the results from multiple spiders get added to give everyone the same quality searches or do I need to scan the whole internet myself?
[edit] I was looking at this earlier and couldn’t find the info. Started searching again just now and found it immediately… of course… (The answer is YES)
Yep, that’s exactly what I was looking at (https://github.com/searx/searx). As I said, it was a QUICK dive but the wording was enough to make me shy away from it. For all the years I’ve been running servers, I won’t put up anything that requires the latest/greatest of any code because that’s where about 90% of the zero-days seem to come from. Almost all the big ones I’ve seen in the last few years were things that made me panic until I realized that oh, if your updates are more than a year old then none of this affects you. And the one that DID affect me had already been updated through a security release.
I just did a quick dive into this and have some concerns. SearX appears to no longer be maintained and was last updated three years ago. SearXNG was forked to use more recent libraries, but there were concerns that those are not always stable or fully vetted. There were also concerns that SearXNG did not show the same commitment to user privacy. It’s a shame that SearX shut down; that one actually sounds like a project I would have jumped on.
More drives also equals larger power consumption so you would need a larger battery backup.
It also means more components prone to failure which increases your chance of losing data. More drives means more moving parts and electrical connections including data and power cables, backplanes, and generated heat that you need to cool down.
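To put rough numbers on the power side of this (the per-drive wattages below are typical 3.5″ HDD ballpark figures I’m assuming, not from any particular datasheet; check your own models):

```python
# Rough array power draw; swap in the figures from your drives' datasheets
IDLE_W = 5.0    # assumed: typical 3.5" HDD idling
ACTIVE_W = 8.0  # assumed: typical 3.5" HDD seeking/writing

def array_watts(n_drives: int, active: bool = True) -> float:
    """Total drive power draw for an array of n_drives disks."""
    per_drive = ACTIVE_W if active else IDLE_W
    return n_drives * per_drive

print(array_watts(12))         # 96.0 W with all twelve drives active
print(array_watts(12, False))  # 60.0 W idle
```

So doubling the drive count to keep the same capacity roughly doubles what your UPS has to carry, before you even count the extra heat.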
I’d be more concerned over how many failures you’re seeing that makes you think smaller drives would be the better option? I have historically used old drives from ebay or manufacturer refurbs, and even the worst of those have been reliable enough to only have to replace drives once every year or two. With RAID6 or raidz2 you should be plenty secure during a rebuild to prevent data loss. I wouldn’t consider using a lot of little drives unless it’s the only option I had or if someone gave them away for free.
I wonder if anyone ever pointed out that the lettering on the door should actually be forward, since it would be reversed as you see it through the glass in the door, but then reversed again by the mirror?
Yeah I figured there would be multiple answers for you. Just keep in mind that you DO want to get it fixed at some point to use the disk id instead of the local device name. That will allow you to change hardware or move the whole array to another computer.
Are you sure about that? Ever hear about those supposedly “predictable” network names in recent linux versions? Yeah, those can change too. I was trying to set up a new firewall with two internal NICs plus a 4-port card, and they kept moving around. I finally figured out that if I cold-booted the NICs would come up in one order, and if I warm-booted they would come up in a completely different order (like the ports on the card would reverse which order they were detected). This was completely the fault of systemd, because when I installed an older linux and used udev to map the ports, it worked exactly as predicted. These days I trust nothing.
OP – if your array is in good condition (and it looks like it is) you have an option to replace drives one by one, but this will take some time (probably over a period of days). The idea is to remove a disk from the pool by its old name, then re-add the disk under the corrected name, wait for the pool to rebuild, then do the process again with the next drive. Double-check, but I think this is the proper procedure…
zpool offline poolname /dev/nvme1n1p1
zpool replace poolname /dev/nvme1n1p1 /dev/disk/by-id/drivename
Check zpool status to confirm when the drive is done rebuilding under the new name, then move on to the next drive. This is the process I use when replacing a failed drive in a pool, and since that one drive is technically in a failed state right now, this same process should work for you to transfer over to the safe names. Keep in mind that this will probably put a lot of strain on your drives since the contents have to be rebuilt (although there is a small possibility zfs may recognize the drive contents and just start working immediately?), so be prepared in case a drive does actually fail during the process.
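Sketching the whole thing out (pool name and device paths here are hypothetical; this just mirrors the offline/replace steps above, so double-check it against the zpool man pages before running anything on a live pool):

```shell
# Migrate one drive from its unstable kernel name to its stable by-id
# name, then wait for the resilver before moving to the next drive.
POOL=tank                               # hypothetical pool name
OLD=/dev/nvme1n1p1                      # current (unstable) device name
NEW=/dev/disk/by-id/nvme-EXAMPLE-part1  # hypothetical by-id path to the SAME disk

zpool offline "$POOL" "$OLD"
zpool replace "$POOL" "$OLD" "$NEW"

# Repeat the two commands above for each remaining drive, but only after
# this reports the resilver is complete:
zpool status "$POOL"
```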
What kind of crops are you going to grow at 125°? That’s still within OP’s specification of triple digits and with temps getting hotter we’re likely to see a lot more of this happening within our lifetime.