• mindbleach@sh.itjust.works
    17 hours ago

    They’re fucked.

    Local models are already winning. A year ago, they benchmarked a year behind the biggest of the big boys. Six months ago they were six months behind. Yesterday Qwen released 3.6 27B and it outperforms 3.5 397B… from February.

    Either we’re plateauing toward the asymptotic limit of LLM capabilities, and the endgame runs as well on a toaster as it does on a server - or breakthroughs use big fat models as a glorified search space to be rapidly discarded. Both options point toward neural networks as a lump of algebra that sits on your hard drive and occasionally spins your fans. Remote computing loses, as it basically always must, and the drastically reduced requirements for competing on local software favor clever new competitors who aren’t a bajillion dollars in debt.

    • baller_w@lemmy.zip
      17 hours ago

      I agree with this. I have an openclaw setup since I want to own my own data and services. A few months ago Sonnet was the clear leader for general use task for me. Now Gemma 4 performs nearly as well hosted off my gaming PC. Based on resource utilization, I actually think I can run it from the same nuc that openclaw is hosted from.