The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned.

Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressures that technology firms were under to release new AI tools, with companies desperate to win customers before the products’ capabilities and potential flaws are fully understood.

The surge in AI chatbots with guardrails that are easily bypassed showed how commercial incentives were prioritised over more cautious development and safety testing, he said.

“It’s the classic technology scenario,” he said. “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable.”

Wooldridge, who will deliver the Royal Society’s Michael Faraday prize lecture on Wednesday evening, titled “This is not the AI we were promised”, said a Hindenburg moment was “very plausible” as companies rushed to deploy more advanced AI tools.

  • ExLisper@lemmy.curiana.net · 17 hours ago

    A market collapse will not erode confidence in AI among the general public. Did the housing market crash erode confidence in housing technology?

    He’s talking about a “scenario such as deadly self-driving car update or AI hack [that] could destroy global interest”, which I don’t think is likely at all. Unless some AI terrorist hijacks a plane and flies it into a skyscraper, I don’t think most people will care, and I don’t think AI killing hundreds of people is likely.

    • Pennomi@lemmy.world · 11 hours ago

      I think you underestimate the security nightmare that modern AI poses. The world is already hilariously insecure, and AI builds software in the most naive way possible.
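
      For instance, here’s the kind of naive pattern I mean (a hypothetical Python/sqlite3 sketch, not taken from any real incident): SQL built by string interpolation, which is trivially injectable, next to the parameterized form a careful developer would write:

          # Hypothetical illustration only: the naive way vs. the safe way.
          import sqlite3

          def find_user_naive(conn: sqlite3.Connection, username: str):
              # Naive: attacker-controlled input is spliced straight into the query.
              # Passing username = "x' OR '1'='1" returns every row in the table.
              query = f"SELECT * FROM users WHERE name = '{username}'"
              return conn.execute(query).fetchall()

          def find_user_safe(conn: sqlite3.Connection, username: str):
              # Safe: the driver binds the value, so it can never alter the SQL itself.
              return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()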

      • ExLisper@lemmy.curiana.net · 10 hours ago

        But do you think average users understand this? Do you think people will connect a mass ransomware attack on Windows, one exploiting a vulnerability introduced by an AI agent, with their AI girlfriend? I highly doubt it. Maybe if the attack were also carried out by AI agents the public would start demanding some restrictions on LLMs, but I don’t think such a scenario is likely. It’s theoretically possible, but not likely.

        • Pennomi@lemmy.world · 9 hours ago

          That’s what I’m saying. It won’t be a PR nightmare like the Hindenburg; it will be the actual destruction of global systems. PR doesn’t mean a damn thing if, for example, your debit card can no longer make purchases or your email gets hijacked.