There are really two reasons ECC is a “must-have” for me.
- I’ve had some variant of a “homelab” for probably 15 years, maybe more. For a long time, I was plagued with crashes, random errors, and the like. Once I stopped using consumer-grade parts and switched over to actual server hardware, those problems went away completely. I can actually use my homelab as the core of my home network instead of just something fun to play with. Some of this improvement is probably due to better power supplies, storage, server CPUs, etc., but ECC memory could very well play a part. This is just anecdotal, though.
- ECC memory has saved me before. One of the memory modules in my NAS went bad; ECC detected the error, corrected it, and TrueNAS sent me an alert. Since most of the RAM in my NAS is used for a ZFS cache, this likely would have caused data loss had I been using non-error-corrected memory. Because I had ECC, I was able to shut down the server, pull the bad module, and start it back up; the worst the failed module cost me was maybe 10 minutes of downtime.
I don’t care about ECC in my desktop PCs, but for anything “mission-critical,” which is basically everything in my server rack, I don’t feel safe without it. pfSense is probably the most critical service, so whatever machine is running it had better have ECC.
I switched from bare metal to a VM for largely the same reason you did. I was running pfSense on an old-ish Supermicro server, and it was pushing my UPS too close to its power limit. It’s crazy to me that yours only pulled 40 watts, though; I think I saved about 150-175W by switching it to a VM. My entire rack contains a NAS, a Proxmox server, a few switches, and a couple of other miscellaneous things. Total power draw is about 600-650W, and it jumps over 700W under heavy load (file transfers, video encoding, etc.). I still don’t like the idea of having pfSense on a VM; I’d really like to be able to make changes to my Proxmox server without dropping connectivity to the entire property. But my UPS tops out at 800W, so if I do switch back to bare metal, I realistically only have 50-75W to spare.
It’s actually surprising how much just having a person in the room can alter the temperature and humidity levels. In my master bathroom, I have the fan set to activate when the dew point reaches a certain level (I’ve found that dew point produces better results than humidity alone); the idea is that the bathroom gets ventilated while someone takes a shower and for however long it takes the humidity to dissipate after they’re done. The funny thing is that every so often, I’ll take an excessively long poop (let’s be honest, I’m scrolling on my phone), and the fan will kick on. Just being in the bathroom alters the dew point enough to trigger the fan.
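For anyone curious about the dew-point approach, here’s a rough sketch of the logic in Python. This isn’t my actual automation; the dew-point math uses the standard Magnus-formula approximation, and the threshold and hysteresis numbers are made up for illustration, so tune them to your own bathroom.

```python
import math

# Magnus-formula approximation for dew point (common parameterization;
# accurate to roughly 0.1 C over normal indoor temperature/humidity ranges).
MAGNUS_B = 17.62
MAGNUS_C = 243.12  # degrees C

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in C from air temperature and relative humidity."""
    gamma = math.log(rel_humidity_pct / 100.0) + (MAGNUS_B * temp_c) / (MAGNUS_C + temp_c)
    return (MAGNUS_C * gamma) / (MAGNUS_B - gamma)

# Hypothetical thresholds -- pick something a bit above what the empty room settles at.
FAN_ON_DEW_POINT_C = 13.0
FAN_OFF_DEW_POINT_C = 11.5  # lower "off" threshold so the fan doesn't rapidly cycle

def fan_should_run(temp_c: float, rel_humidity_pct: float, fan_is_on: bool) -> bool:
    """Turn the fan on above the upper threshold, off only below the lower one."""
    dp = dew_point_c(temp_c, rel_humidity_pct)
    if fan_is_on:
        return dp > FAN_OFF_DEW_POINT_C
    return dp >= FAN_ON_DEW_POINT_C

# Example: a warm, steamy bathroom right after a shower
print(dew_point_c(24.0, 80.0))            # ~20.3 C, well above the "on" threshold
print(fan_should_run(24.0, 80.0, False))  # True
```

The two-threshold split is just hysteresis so the fan doesn’t flap on and off while the dew point hovers right at the cutoff.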
I also have a room that contains all my server/networking equipment. It’s climate-controlled, and I’m constantly monitoring temperatures. Whenever I’m in the room working, I can see a noticeable spike in the temperature graph, even though the only variable that’s changed is that there’s a person in the room.
So my point is: OP might not have been having fun that night; it’s entirely possible someone just came in and went to bed.