Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to next-word predictors. Also not sure if this graph is the right way to visualize it.

  • mashbooq@lemmy.world · 14 days ago

    There’s a preprint out that claims to prove that the technology underlying LLMs can never be extended to AGI, due to the exponentially increasing resources it would require. I don’t know enough formal CS to evaluate their methods, but to the extent I understand the argument, it is compelling.

  • Zexks@lemmy.world · 13 days ago

    Lemmy is full of AI luddites; you’ll not get a decent answer here. As for the other claims: they are not just next-token generators, any more than you are when speaking.

    https://eight2late.wordpress.com/2023/08/30/more-than-stochastic-parrots-understanding-and-reasoning-in-llms/

    There are literally dozens of these papers that everyone on here chooses to ignore. An even better point: none of these people will ever be able to give you an objective measure by which to distinguish themselves from any existing LLM. They’ll never be able to name points of measure that separate them from parrots or ants and that include humans while excluding LLMs, other than “it’s not human or biological,” which is just fearful, weak thought.

    • chobeat@lemmy.ml · 13 days ago

      You use “luddite” as if it’s an insult. History proved the Luddites were right in their demands, and they were fighting the good fight.

    • jacksilver@lemmy.world · 13 days ago

      Here’s an easy way we’re different: we can learn new things. LLMs are static models; that’s why OpenAI lists knowledge cut-off dates for its models (a quick sketch of what “static” means at inference time is at the end of this comment).

      Another is that LLMs can’t do math. Deep learning models are limited to the domain they were trained on; ask an LLM to do math outside its training data and it’s almost guaranteed to fail.

      Yes, they are very impressive models, but they’re a long way from AGI.
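
      To make the “static model” point concrete, here’s a minimal sketch (PyTorch, with a made-up toy network standing in for an LLM) showing that answering a prompt at inference time updates nothing: the weights are identical before and after.

      ```python
      # Toy stand-in for an LLM: at inference time, the weights are frozen.
      import torch
      import torch.nn as nn

      # Hypothetical tiny "language model": 100-token vocab, 4-token context.
      model = nn.Sequential(nn.Embedding(100, 16), nn.Flatten(), nn.Linear(16 * 4, 100))
      model.eval()  # inference mode

      before = [p.clone() for p in model.parameters()]

      with torch.no_grad():                       # no gradients, no optimizer step
          prompt = torch.randint(0, 100, (1, 4))  # stand-in for a tokenized prompt
          logits = model(prompt)                  # "answering" touches no weights

      unchanged = all(torch.equal(b, p) for b, p in zip(before, model.parameters()))
      print(unchanged)  # True: nothing was learned from the interaction
      ```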

  • lunarul@lemmy.world · 13 days ago

    Somewhere on the vertical axis; 0 on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps towards strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI.

    • Michal@programming.dev · 13 days ago

      AGI could be possible if a new breakthrough is made. Currently, LLMs are just pretty good text predictors, and any intelligence they exhibit is there because they are trained on texts exhibiting intelligence (written by humans). Make a large enough model and it will seem like an intelligent being.
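
      A minimal sketch of what “text predictor” means literally, using GPT-2 via Hugging Face transformers (the model choice is just for illustration; modern chat models add scale and fine-tuning on top of the same idea):

      ```python
      # One step of next-token prediction: score every vocabulary token, pick the top one.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")
      model.eval()

      ids = tok("The capital of France is", return_tensors="pt").input_ids
      with torch.no_grad():
          logits = model(ids).logits              # shape: (1, seq_len, vocab_size)

      next_id = logits[0, -1].argmax().item()     # most likely next token
      print(tok.decode(next_id))                  # likely " Paris"
      ```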

      • lunarul@lemmy.world · 13 days ago

        Make a large enough model, and it will seem like an intelligent being.

        That was already true in previous paradigms: a non-fuzzy, non-neural-network algorithm that is large and complex enough will seem like an intelligent being. But “large enough” is beyond our resources, and the processing time for each response would be too long (see the rule-based sketch at the end of this comment).

        And then you get into the Chinese room problem. Is there a difference between seems intelligent and is intelligent?

        But the main difference between an actual intelligence and various algorithms, LLMs included, is that intelligence works on its own. It’s always thinking; it doesn’t only react to external prompts. You ask a question, you get an answer, but the question remains at the back of its mind, and it might come back to you ten minutes later and say, “You know, I’ve given it some more thought, and I think it’s actually like this.”
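
        Going back to the “previous paradigms” point above, here’s a rough ELIZA-style sketch: a few made-up pattern-matching rules, no model of meaning at all, yet it can seem conversational if you pile on enough rules.

        ```python
        # ELIZA-style rule matching: the "intelligence" is entirely in hand-written rules.
        import re

        RULES = [  # hypothetical rules, purely for illustration
            (r"\bI need (.+)", "Why do you need {0}?"),
            (r"\bI am (.+)", "How long have you been {0}?"),
            (r"\bbecause (.+)", "Is that the real reason?"),
        ]

        def respond(utterance: str) -> str:
            for pattern, template in RULES:
                match = re.search(pattern, utterance, re.IGNORECASE)
                if match:
                    return template.format(*match.groups())
            return "Tell me more."

        print(respond("I am worried about AGI"))  # "How long have you been worried about AGI?"
        ```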

  • Pumpkin Escobar@lemmy.world · 13 days ago

    I’ll preface by saying I think LLMs are useful and in the next couple years there will be some interesting new uses and existing ones getting streamlined…

    But they’re just next-word predictors. The best you could say about their intelligence is that they have an impressive ability to encode knowledge pretty efficiently (the storage density, not the execution of the LLM), but there’s no logic or reasoning in their execution or in your interaction with them. It’s one of the reasons they’re so terrible at math.
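
    For the curious, this is roughly the loop behind that (a sketch using GPT-2 as a small public stand-in): each token is picked from a probability distribution conditioned only on the text so far, which is also why arithmetic the model hasn’t effectively memorized tends to go wrong.

    ```python
    # The autoregressive loop: predict one token, append it, repeat.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    ids = tok("12345 * 6789 =", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(8):                        # generate 8 tokens, greedily
            logits = model(ids).logits[0, -1]     # scores for the next token only
            next_id = logits.argmax().reshape(1, 1)
            ids = torch.cat([ids, next_id], dim=1)

    print(tok.decode(ids[0]))  # whatever continuation scores highest, right or not
    ```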

  • IHave69XiBucks@lemmygrad.ml · 13 days ago

    Imo, and this is backed a bit by some pretty new studies: not only do LLMs not have intelligence at all, they are incapable of it.

    Human intelligence and consciousness likely have a lot to do with microtubules that trigger quantum wave function collapse and allow for decision making. Computers simply do not function in this way. Computers are processing machines; they have logic gates with 2 states: 101101110011, binary logic.

    If the new studies on microtubules are right, biological brains are simply operating on an entirely different level and playing by a different set of rules than computers. It’s not an issue of getting the software right, or of getting more processing power; it’s an issue of the physical capability of the machine to perform certain functions.