• CanadaPlus@lemmy.sdf.org · 10 days ago

    A link to the paper itself, if like me you have a math background, and are wondering WTF that means and how you measure creativity mathematically. Or for that matter what amateur-tier creativity is. Unfortunately, it’s probably too new to pirate, if you don’t have a subscription to the Journal of Creative Behaviour.

    At least according to the article, he argues that novelty and correctness are opposite each other in an LLM, which tracks. The nice round numbers used to describe that feel like bullshit, though. If your metric boils down to a few bits, don’t try to pad it by converting to reals.

    That’s not even the real kicker, though; the two are anticorrelated in humans as well. Generations of people have remarked on how the most creative people tend to be odd or straight-up mentally ill, and contemporary psychology has captured that connection statistically in the form of “impulsive unconventionality”. If it’s asserted without evidence that it’s not so in “professional” creative humans, then that amounts to just making stuff up.

    • yeahiknow3@lemmy.dbzer0.com · 10 days ago

      If we increase an LLM’s predictive utility it becomes less interesting, but if we make it more interesting it becomes nonsensical (since it can less accurately predict typical human outputs).

      Humans, however, can be interesting without resorting to randomness, because they have subjectivity, which grants them a unique perspective that artists simply attempt (and often fail) to capture.

      Anyways, however we eventually create an artificial mind, it will not be with a large language model; by now, that much is certain.

      • CanadaPlus@lemmy.sdf.org · 9 days ago

        Ah, but if there’s no random element to human cognition, it should produce the exact same output time and time again. What is not random is deterministic.

        Biologically, there’s an element of randomness to neurons firing. If they fire too randomly, that’s a seizure. If they don’t ever fire spontaneously, you’re in a coma. How they produce ideas is nowhere close to being understood, but there’s going to be an element of an ordered pattern of firing spontaneously emerging. You can see a bit of that with imaging, even.

        Anyways, however we eventually create an artificial mind, it will not be with a large language model; by now, that much is certain.

        It does seem to be dead-ending as a technology, although the definition of “mind” is, as ever, very slippery.

        The big AI/AGI research trend is “neuro-symbolic reasoning”, which is a fancy way of saying embedding a neural net deep in a normal algorithm that can be usefully controlled.
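        Schematically, the idea looks something like this: the control flow is an ordinary hand-written algorithm, and the learned model is consulted only as an evaluator. (Toy sketch only — the “net” here is a hand-written stand-in for a trained scorer, and the search is a deliberately trivial hill-climb.)

```python
def neural_score(state):
    """Stand-in for a trained network: scores how good a state looks.

    Here it's just negative distance to a goal value, so the sketch
    stays runnable without any ML dependencies."""
    return -abs(state - 42)

def symbolic_search(start, moves, steps=20):
    """Greedy hill-climb: symbolic control flow, learned evaluation.

    The algorithm enumerates candidate moves and decides when to stop;
    the 'neural' part only ranks candidates. That separation is what
    makes the system controllable."""
    state = start
    for _ in range(steps):
        candidates = [state + m for m in moves]
        best = max(candidates, key=neural_score)
        if neural_score(best) <= neural_score(state):
            break  # the algorithm, not the net, decides termination
        state = best
    return state

print(symbolic_search(0, [-1, 1, 5]))  # climbs toward the goal state, 42
```

        The point of the split is that the fuzzy component can be swapped, audited, or bounded without touching the guarantees of the outer algorithm.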

        • yeahiknow3@lemmy.dbzer0.com · 9 days ago

          if there’s no random element to human cognition

          I didn’t say there’s no randomness in human cognition. I said that the originality of human ideas is not a matter of randomized thinking.

          Randomness is everywhere. But it’s not the “randomness” of an artist’s thought process that accounts for the originality of their creative output; if anything, randomness is detrimental to it.

          For LLMs, the opposite is true.

            • yeahiknow3@lemmy.dbzer0.com · 8 days ago

              randomness is a central part of a human coming up with an idea.

              So, here’s how I understand this claim. Either

              1. As an endorsement of the Copenhagen Interpretation about the ubiquity of randomness at the quantum level. Or
              2. As a rejection of subjectivity (à la eliminative materialism), which reduces thoughts, emotions, and consciousness to facts about neural activation vectors.

              (1) means randomness is background noise that cancels out at scale. We would still ask why some people are more creative than others (or why some planets are redshifted compared to others), and presumably we have more to say than “luck,” since the chance that Shakespeare wrote his plays at random is 0.

              Interpretation (2) suggests that creativity doesn’t exist and this whole conversation is senseless.

              • CanadaPlus@lemmy.sdf.org · 6 days ago

                Well, what is creativity? Does it have to be transcendent? Or does it just mean original and useful or coherent, like in the paper? If it’s the latter, a collection of cells can be creative, and an extremely large mathematical system embodied in a GPU could also, potentially, be creative. It’s just a matter of being able to reach the creative concept (probably somewhat randomly), without outputting incoherent garbage first.

                Isn’t that what coming up with an idea feels like? Wandering through the space of concepts until everything clicks together all of a sudden?

                This goes towards answering your other reply, too. I have no idea what it’s “like” to be an LLM, and how much it differs from “being” nothing, but if experience (for the sake of argument) is necessary to output decent art, then isn’t an AI replacing artists evidence it has an experience? That is something that has empirically happened, at least for some kinds of artists and to some degree.

                • yeahiknow3@lemmy.dbzer0.com · 6 days ago

                  isn’t an AI replacing artists evidence it has an experience

                  I can only speak about the literary world, and I was quite sanguine about ChatGPT in the early days, before I learned about how LLMs actually work. Having experimented with these tools extensively, I am certain that not a single page of good fiction has ever been produced by these statistical models. Their banality is almost uncanny — unless you know how they work, in which case it makes sense.

                  Now to be fair, fewer than 1 in 100 people can write fiction well, and fewer than 1 in 10,000 can do it at a level I’d consider “art” (as opposed to amateur dabbling).

                  LLMs are limited by the mathematics of their design. They’re just tracking weighted averages about what word comes next. That’s why they’re so good at corpospeak and technical writing, and so utterly worthless and cringey at writing fiction (or “art”).

                  If a collection of cells can be creative, then an extremely large mathematical system embodied in a GPU could also, potentially, be creative.

                  Sure. And a hundred monkeys with typewriters could reproduce the works of Shakespeare. Like you said, the issue is how to do it consistently and not in an infinite sea of garbage, which is what would happen if you increase stochasticity in service of originality. It’s a design limitation.

                  I have no idea what it’s “like” to be an LLM

                  The same thing that it’s “like” to be a fax machine. They’re not significantly different, and you can literally program an LLM inside a fax machine if you wanted to.

                  Anyway, leaving you with the thought that you can’t compare “a collection of cells” to digital computers for two reasons.

                  1. Cellular activity is the domain of biologists, who do not study creativity or art. We have absolutely no idea how the tiny analog machinery of multicellular organisms gives rise to consciousness.

                  2. Comparing digital stuff to analog stuff is a category error.

                  • “If a collection of cells can be creative, why not a mathematical system in a GPU?”

                  • “If a collection of cells can be creative, why not cheeseburgers?”

                  In both cases the answer is potato.

                  • CanadaPlus@lemmy.sdf.org · 6 days ago

                    Biological neurons are actually more digital than artificial neural nets are. They fire with equal intensity, or don’t fire (that at least is well understood). Meanwhile, a node in your LLM has an approximately continuous range of activations.

                    They’re just tracking weighted averages about what word comes next.

                    That’s leaving out most of the actual complexity. There’s gigabytes or terabytes of mysterious numbers playing off of each other to decide the probabilities of each word in an LLM, and it’s looking at quite a bit of previous context. A human author also has to decide the next word to type repeatedly, so it doesn’t really preclude much.

                    If you just go word-by-word or few-words-by-few-words straightforwardly, that’s called a Markov chain, and they rarely get basic grammar right.
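                    A toy first-order Markov chain makes the contrast concrete: each choice sees only the single previous word, which is why such models mangle any grammar that spans more than a couple of words. (Illustrative sketch with a made-up corpus.)

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Count word-to-next-word transitions: a first-order Markov chain."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Sample a chain of words; each step conditions on ONE previous word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = model.get(out[-1])
        if not options:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = build_bigram_model(corpus)
print(generate(model, "the", 8))
```

                    An LLM, by contrast, conditions every token on thousands of preceding tokens at once, which is the difference between this toy and something that can hold a sentence together.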

                    Like you said, the issue is how to do it consistently and not in an infinite sea of garbage, which is what would happen if you increase stochasticity in service of originality. It’s a design limitation.

                    Sure, we agree on that. Where we maybe disagree is on whether humans experience the same kind of tradeoff. And then we got a bit into unrelated philosophy of mind.

                    and you can literally program an LLM inside a fax machine if you wanted to.

                    Absolutely, although it’d have to be more of an SLM to fit. You don’t think the exact hardware used is important though, do you? Our own brains don’t exactly look like much.

            • yeahiknow3@lemmy.dbzer0.com · 8 days ago

              Consider the following question: “why did you write something sad?”

              • for an LLM, the answer is that a mathematical formula came up heads.
              • for a person, the answer is “I was sad.”

              Maybe the sadness is random. (That’s depression for you.) But it doesn’t change the fact that the subjective nature of sadness fuels creative decisions. It is why characters in a novel do so and so, and why their feelings are described in a way that is original and yet eerily familiar — i.e., creatively.

    • yeahiknow3@lemmy.dbzer0.com · 10 days ago

      novelty and correctness are opposite each other in humans

      So, when it comes to mental illness and creativity, despite some empirical correlations, “There is now growing evidence for the opposite association.”

      However, there are inverse-U-shaped relationships between several mental characteristics and creativity:

      Although you’ll notice that disinhibition rapidly becomes detrimental.

      • CanadaPlus@lemmy.sdf.org · 9 days ago

        On actual mental illness specifically, as opposed to just “weirdness” in general, I have no hard data. If it’s caused at the physiological level, it makes sense that it wouldn’t follow the same pattern. You can of course name a bunch of mentally ill but prominent thinkers and artists from the past, but there’s almost certainly a lot of base-rate neglect going on there.

        It’s worth noting production LLMs choose randomly from a significant range of tokens they deem fairly likely, as opposed to choosing the most likely one every time. If they were too conservative with it, they too would fall on the near side of that curve.
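        That sampling step looks roughly like this, with made-up logits (real decoding stacks layer on more: nucleus sampling, repetition penalties, etc., but temperature and top-k are the knobs that trade conservatism for surprise):

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None, seed=None):
    """Sample one token index from raw model scores (logits).

    temperature < 1 sharpens the distribution (more conservative);
    temperature > 1 flattens it (more surprising). top_k, if set,
    restricts sampling to the k highest-scoring tokens."""
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits]
    if top_k is not None:
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [s if s >= cutoff else float("-inf") for s in scaled]
    # softmax, shifted by the max for numerical stability
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # inverse-CDF sampling: walk the cumulative distribution
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

        With temperature near 0 (or top_k=1) this degenerates to always picking the most likely token, which is the overly conservative regime described above.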

        • yeahiknow3@lemmy.dbzer0.com · 9 days ago

          My point is that “weirdness” is rooted in subjectivity. Since LLMs have no subjectivity, they’re forced to rely on randomness, monkey-with-a-typewriter style, which is why their outputs are either banal or nonsensical.