Throughout history many traditions have believed that some fatal flaw in human nature tempts us to pursue powers we don’t know how to handle. The Greek myth of Phaethon told of a boy who discovers that he is the son of Helios, the sun god. Wishing to prove his divine origin, Phaethon demands the privilege of driving the chariot of the sun. Helios warns Phaethon that no human can control the celestial horses that pull the solar chariot. But Phaethon insists, until the sun god relents. After rising proudly in the sky, Phaethon indeed loses control of the chariot. The sun veers off course, scorching all vegetation, killing numerous beings and threatening to burn the Earth itself. Zeus intervenes and strikes Phaethon with a thunderbolt. The conceited human drops from the sky like a falling star, himself on fire. The gods reassert control of the sky and save the world.

Two thousand years later, when the Industrial Revolution was making its first steps and machines began replacing humans in numerous tasks, Johann Wolfgang von Goethe published a similar cautionary tale titled The Sorcerer’s Apprentice. Goethe’s poem (later popularised as a Walt Disney animation starring Mickey Mouse) tells of an old sorcerer who leaves a young apprentice in charge of his workshop and gives him some chores to tend to while he is gone, such as fetching water from the river. The apprentice decides to make things easier for himself and, using one of the sorcerer’s spells, enchants a broom to fetch the water for him. But the apprentice doesn’t know how to stop the broom, which relentlessly fetches more and more water, threatening to flood the workshop. In panic, the apprentice cuts the enchanted broom in two with an axe, only to see each half become another broom. Now two enchanted brooms are inundating the workshop with water. When the old sorcerer returns, the apprentice pleads for help: “The spirits that I summoned, I now cannot rid myself of again.” The sorcerer immediately breaks the spell and stops the flood. The lesson to the apprentice – and to humanity – is clear: never summon powers you cannot control.

  • Treedrake@fedia.io · 3 months ago

    Luckily the only “AI” we have are LLMs, which seem to have hit their peak, and will probably start corrupting themselves with their own training data now that they’ve scoured the web clean.

    • WhatAmLemmy@lemmy.world · 3 months ago (edited)

      LLMs on their own aren’t much of a concern. What is a concern is strapping weapons to one of those Boston Dynamics robots, loading an LLM, and training it to kill.

      Governments already kill based on metadata — analyzed by statistical models — so the above isn’t far from reality.

      • xmunk@sh.itjust.works · 3 months ago

        “Turn it on, let us kill our enemies”

        immediately starts quoting Shakespeare

        I am uncertain why you think an LLM would be well suited to this task – it’s an inappropriate model for that function…

        • WhatAmLemmy@lemmy.world · 3 months ago

          An LLM is machine learning; the language part is largely irrelevant. It finds patterns in 1s and 0s and produces results based on statistical probability. That can be applied to literally anything that can be represented in 1s and 0s (i.e. everything in the known universe).

          Do you not understand how that could be used to target “terrorists”, or how it could be utilized by a killbot? They can fine-tune which metadata counts as “terrorist”, but (most importantly) false positives are a mathematical certainty of statistical models, meaning innocent people are guaranteed to be classified as “terrorists”. Then there’s the more pressing concern of who gets to define what a “terrorist” is.
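          The false-positive point can be made concrete with a back-of-the-envelope calculation. This is a hypothetical sketch (every number below is invented purely for illustration, not drawn from any real program): when the condition being screened for is rare, even a highly accurate classifier flags far more innocent people than actual targets – the base-rate problem.

          ```python
          # Hypothetical numbers: a very accurate classifier scanning
          # metadata for a rare category of person.
          population = 1_000_000     # people whose metadata is scanned
          sensitivity = 0.99         # P(flagged | actual target)
          false_positive_rate = 0.01 # P(flagged | innocent)
          base_rate = 0.0001         # 1 in 10,000 is an actual target

          targets = population * base_rate        # 100 real targets
          innocents = population - targets        # 999,900 innocents

          flagged_targets = targets * sensitivity             # ~99
          flagged_innocents = innocents * false_positive_rate # ~9,999

          # Of everyone flagged, what fraction is actually a target?
          precision = flagged_targets / (flagged_targets + flagged_innocents)

          print(f"innocents flagged: {flagged_innocents:.0f}")
          print(f"P(actual target | flagged): {precision:.2%}")
          ```

          Under these assumed numbers, roughly ten thousand innocent people are flagged and only about 1% of flags point at a real target – the false positives dominate no matter how the threshold is tuned, because the base rate is so low.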

    • ravhall@discuss.online · 3 months ago

      I think there’s still a lot of room to grow with LLMs, but nothing will ever be 100% trustworthy – especially the human brain.