• NotMyOldRedditName@lemmy.world · +7/-1 · 4 days ago

    Hopefully there are people still working on non-LLM approaches to general AI, because I don’t think we’re ever going to get there with LLMs. The architecture just seems wrong for it, and even Altman has said they probably can’t solve hallucinations. We can probably go very far down this road and make them pretty good, but it’s the wrong road if you want real AI.

  • ShinkanTrain@lemmy.ml · +25/-1 · 5 days ago

    Wanting an LLM not to hallucinate is like wanting a heater not to generate heat.

  • utopiah@lemmy.ml · +5/-1 · 4 days ago

    The word “hallucination” itself is a marketing term. Its frequent use in the technical literature doesn’t make it unproblematic. It’s used because it highlights a problem (namely that some of an LLM’s output is not factually correct), but the name itself is wrong. Hallucination implies there is someone, perceiving and with a world model, who, typically via heuristics (efficient interfaces, as Donald Hoffman suggests), perceives incorrectly, leading to bad decisions regarding the current problem to solve.

    So… sure, “it” (trying not to use the term) is structural, but that’s simply because LLMs have no notion of veracity or truth (or of anything else, to be clear). They have no simulation against which to verify whether the output they propose (the tokens out, the sentence the user gets) is correct or not; it is merely what is highly probable given their training data.

    • utopiah@lemmy.ml · +3 · 4 days ago

      Brand-new example: “Skills” by Anthropic, https://www.anthropic.com/news/skills. Even though the audience here is technical, it is still a marketing term. Why? Because the entire phrasing implies agency. There is no “one” acquiring new skills here. It’s as if I added bash scripts to my ~/bin directory, but instead of saying “the first script will use a regex to start the appropriate script,” I named my process “Theodore” and said I was “teaching” it new “abilities.” It would be literally the same thing, functionally equivalent, and the implementation would be actually identical… but users, specifically non-technical users, would assume there is more going on than branching between options. They would also assume errors are just “it” in the process of “learning.”
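
      To make the analogy concrete, here is a minimal, hypothetical sketch of that dispatcher (the script names and regexes are invented). The point is only that “teaching Theodore a skill” would amount to adding one more row to a lookup table:

      ```python
      import os
      import re
      import subprocess

      # Hypothetical "skills": ordinary scripts in ~/bin, each selected by a regex.
      # Adding a new entry to this table is the entire "teaching" step.
      SKILLS = [
          (re.compile(r"resize.*image", re.I), "~/bin/resize_image.sh"),
          (re.compile(r"backup", re.I), "~/bin/backup.sh"),
      ]

      def theodore(request: str) -> None:
          """Run the first script whose pattern matches the request."""
          for pattern, script in SKILLS:
              if pattern.search(request):
                  subprocess.run(["bash", os.path.expanduser(script), request])
                  return
          print("no matching script")

      theodore("please backup my documents")
      ```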

      It’s really a brilliant marketing trick, but it’s nothing more.

    • utopiah@lemmy.ml · +1 · 4 days ago

      To be clear, I’m not saying the word itself shouldn’t be used, but I’d bet that 99% of the time, when it’s not used by someone with a degree in AI or CS, it’s used incorrectly.

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · +12/-2 · 5 days ago

    It’s worth noting that humans aren’t immune to the problem either. The real solution will be a system that can reason and that has a heuristic for judging what’s likely a hallucination and what isn’t. The reason we’re able to do that is that we interact with the outside world and get feedback when our internal model diverges from it, which lets us bring the two back in sync.
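
    As a crude sketch of such a heuristic, here is a self-consistency check: ask the same question several times and treat disagreement as a hallucination signal. It’s a toy (ask_model is a stand-in, not a real API), and it’s no substitute for genuine outside-world feedback, but it’s the same basic idea of flagging divergence:

    ```python
    import random
    from collections import Counter

    def ask_model(question: str) -> str:
        # Stand-in for sampling an LLM at non-zero temperature.
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    def self_consistent_answer(question: str, samples: int = 10, threshold: float = 0.7):
        """Sample repeatedly; if no single answer dominates, flag it as unreliable."""
        counts = Counter(ask_model(question) for _ in range(samples))
        answer, hits = counts.most_common(1)[0]
        if hits / samples < threshold:
            return None, counts  # likely hallucination territory
        return answer, counts

    print(self_consistent_answer("What is the capital of France?"))
    ```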

    • msage@programming.dev · +1 · 3 days ago

      LLMentalist is a mandatory read.

      Stop making LLMs happen; we don’t need energy-hungry bullshit generators for anything.

      There are so many more important AIs that need attention and funding to help us with real problems.

      LLMs won’t solve anything.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · +1/-1 · 3 days ago

        There is a lot of hype around LLMs, and other forms of AI certainly should be getting more attention, but arguing that this tech has no value is simply disingenuous. People really need to stop perseverating over the fact that this tech exists, because it’s not going anywhere.

        • msage@programming.dev · +1/-1 · 3 days ago

          Any benefits are far outweighed by the costs and dangers.

          Tell me more about the value when every LLM company is hemorrhaging money.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · +1/-1 · 3 days ago

            You seem to have a very US-centric perspective on this tech; the situation in China looks to be quite different. Meanwhile, whether or not you personally think the benefits are outweighed by whatever dangers you envision, the reality is that you can’t put the toothpaste back in the tube at this point. LLMs will continue to be developed. The only question is how that’s going to be done and who will control this tech. I’d much rather see it developed in the open.

            • msage@programming.dev · +1/-1 · 3 days ago

              You dense motherfucker.

              No LLMs are being developed in the open.

              Even provided weights mean nothing.

              It’s not knowledge LLMs retain, just the ingested text.

              LLMs should be skipped after confirming that they are indeed the dead end they always were. And the entire world should focus on anything else.

              • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · +1/-1 · 2 days ago

                You’re such an angry little ignoramus. The GPT-NeoX repo on GitHub is the actual codebase they used to train these models. They also open-sourced the training data, checkpoints, and all the tools.

                However, even if you were right that the weights were worthless, which they obviously aren’t, and there were no open projects, which there are, the solution would be to develop models from scratch in the open instead of screeching at people and pretending this tech is just going to go away because it offends you personally.

                And nobody says LLMs are anything other than Markov chains at a fundamental level. However, just like Markov chains themselves, they have plenty of real-world uses. Some very obvious ones include doing translations, generating subtitles, doing text-to-speech, and describing images for the visually impaired. There are plenty of other uses for these tools.
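
                For anyone who hasn’t seen one, here is roughly what “Markov chain at a fundamental level” means, as a toy word-level sketch (the training text is just an inline example):

                ```python
                import random
                from collections import defaultdict

                text = "the cat sat on the mat and the dog sat on the rug"
                words = text.split()

                # Order-1 chain: record which words follow which.
                follows = defaultdict(list)
                for current, nxt in zip(words, words[1:]):
                    follows[current].append(nxt)

                # Generate by repeatedly sampling a plausible next word -- the same
                # "predict the next token from context" idea, minus the scale and the
                # learned representations that make LLMs useful in practice.
                word, output = "the", ["the"]
                for _ in range(8):
                    word = random.choice(follows.get(word, words))
                    output.append(word)
                print(" ".join(output))
                ```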

                I love how you presume to know better than the entire world which technology to focus on. The megalomania is absolutely hilarious. All these researchers apparently can’t see that this tech is a dead end; it takes the brilliant mind of some lemmy troll to figure it out. I’m sure your mommy tells you you’re very special every day.

  • pineapple@lemmy.ml · +1 · 4 days ago

    If humans are neural networks, and humans know when they don’t know, and AI is also a neural network, can’t it also have the ability to know when it’s wrong? Maybe not LLMs specifically, but surely there is some AI system that could be built that knows when it is wrong.

  • Zerush@lemmy.ml · +1/-1 · 4 days ago

    Generally, hallucinations are frequent in pure chatbots (ChatGPT and similar) because they are based on their own knowledge base and LLM, so if they don’t know an answer they invent one based on their data set. AI with web access is different: it has no knowledge base of its own and retrieves answers in real time from web content, which gives it roughly the same reliability as a traditional search engine, with the advantage that it finds relevant sites related to the context of the question, lists its sources, and summarizes the content into a direct answer, instead of the 390,000 pages of sites that have nothing to do with the question that a traditional keyword search returns. IMHO these are the only AI apps useful for normal users: as a search assistant, not as a chatbot that tells me BS.
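
    What’s described here is essentially retrieval-augmented search: fetch pages at query time and answer from them, citing sources. A rough sketch of that pipeline; web_search, fetch_page, and summarize are hypothetical placeholders, not any real API:

    ```python
    def web_search(query: str) -> list[str]:
        """Hypothetical: return URLs of pages relevant to the query."""
        raise NotImplementedError

    def fetch_page(url: str) -> str:
        """Hypothetical: download the page and extract its text."""
        raise NotImplementedError

    def summarize(question: str, sources: dict[str, str]) -> str:
        """Hypothetical: write an answer grounded only in the fetched text."""
        raise NotImplementedError

    def answer_with_sources(question: str) -> str:
        urls = web_search(question)[:5]                    # retrieved in real time
        sources = {url: fetch_page(url) for url in urls}   # no built-in knowledge base
        answer = summarize(question, sources)
        cited = "\n".join(f"- {url}" for url in sources)
        return f"{answer}\n\nSources:\n{cited}"
    ```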

  • skisnow@lemmy.ca · +2/-2 · 4 days ago

    The thing that always bothered me about the Halting Problem is that its proof is so thoroughly convoluted, and so easy to fix (simply add the ability to return “undecidable”), that it seems wanky to try applying it as part of a proof for any kind of real-world problem.
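
    For context, the proof in question is short once you see the trick: assume a total decider halts() exists, then build a program that does the opposite of whatever the decider predicts about it. A sketch, where halts() is of course the thing that can never actually be implemented:

    ```python
    def halts(func, arg) -> bool:
        """Hypothetical total decider: True iff func(arg) eventually halts.
        The argument below shows no such function can exist."""
        raise NotImplementedError

    def troublemaker(func):
        # Do the opposite of whatever the decider predicts for func(func).
        if halts(func, func):
            while True:     # predicted to halt -> loop forever
                pass
        return              # predicted to loop -> halt immediately

    # Now ask: does troublemaker(troublemaker) halt?
    # If halts() says yes, troublemaker loops forever; if it says no, it halts.
    # Either way halts() is wrong on this input, so it cannot exist.
    ```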

    (Edit: jfc, fuck me for trying to introduce any kind of technical discussion in a pile-on thread. I wasn’t even trying to cheerlead for LLMs, I just wanted to talk about comp sci)

    • ThirdConsul@lemmy.ml · +2/-2 · 4 days ago

      How would a token-prediction machine arrive at “undecidable”? I mean, would you just add a percentage threshold? Static or calculated? How would you calculate it?
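
      For what it’s worth, the naive static version would look something like the toy below (the probabilities are invented; a real system would read them off the model’s output distribution), and it’s not obvious that any fixed cutoff tracks truth rather than mere confidence:

      ```python
      # Toy "static threshold": abstain when the top answer isn't dominant enough.
      THRESHOLD = 0.8

      def answer_or_abstain(candidates: dict[str, float]) -> str:
          best, prob = max(candidates.items(), key=lambda kv: kv[1])
          return best if prob >= THRESHOLD else "undecidable"

      print(answer_or_abstain({"Paris": 0.95, "Lyon": 0.03}))  # confident -> "Paris"
      print(answer_or_abstain({"Paris": 0.45, "Lyon": 0.40}))  # uncertain -> "undecidable"
      ```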

      (Why jfc? Because two people downvoted you? Dood, grow some.)

      • skisnow@lemmy.ca · +1 · 4 days ago

        It’s easy to be dismissive because you’re talking from the frame of reference of current LLMs. The article is positing a universal truth about all possible technological advances in future LLMs.

        • ThirdConsul@lemmy.ml · +1 · 4 days ago

          Then I’m confused about what your point is on the Halting Problem vis-à-vis hallucinations being un-mitigable qualities of LLMs. Did I misunderstand you as proposing “return undecidable (somehow magically, bypassing the Halting Problem)” as the solution?

          • skisnow@lemmy.ca · +1 · 4 days ago

            First, there’s no “somehow magically” about it; the entire logic of the halting problem’s proof relies on being able to set up a contradiction. I’ll agree that returning “undecidable” doesn’t solve the problem as stated, because the problem as stated only allows two responses.

            My wider point is that the Halting problem as stated is a purely academic one that’s unlikely to ever cause a problem in any real world scenario. Indeed, the ability to say “I don’t know” to unsolvable questions is a hot topic of ongoing LLM research.

  • Ilixtze@lemmy.ml · +8/-8 · 5 days ago

    This is a feature, not a bug. Right-wing oligarchs, a lot of them in tech, have spent decades creaming their pants over the fantasy of shaping general consensus and privatizing culture. LLM hallucination is just a wrench they are throwing into the machinery of human subjectivity.