• taiyang@lemmy.world · 13↑ 1↓ · 3 days ago

      Not a good analogy, except there is one interesting parallel. My students who overuse a calculator in stats tend to do fine on basic arithmetic, but it does them a disservice when they try anything more elaborate. Granted, a calculator should follow PEMDAS, but for whatever weird reason it sometimes doesn't. And when there's a function that requires a sum and maybe multiple steps? Forget about it.
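      That precedence slip is easy to demonstrate. Here's a toy sketch (the `left_to_right` helper is hypothetical, purely for illustration) of a calculator that evaluates strictly left to right instead of following PEMDAS:

```python
import operator

def left_to_right(tokens):
    """Evaluate a flat token list strictly left to right, ignoring precedence."""
    ops = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}
    result = tokens[0]
    for i in range(1, len(tokens), 2):
        result = ops[tokens[i]](result, tokens[i + 1])
    return result

# 2 + 3 * 4 under PEMDAS is 14; a naive left-to-right pass gets 20.
print(left_to_right([2, '+', 3, '*', 4]))  # 20
print(2 + 3 * 4)                           # 14
```

      Cheap calculators that work this way are exactly the kind that quietly mangle multi-step expressions.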

      Similarly, GPT can churn out cliché copywriting, but good luck getting it to produce anything complex. Trust me, I'm grading that drivel. So in that case, the analogy works.

        • taiyang@lemmy.world · 3↑ 1↓ · 3 days ago

          LLMs by their very nature drive toward clichés and the most common answers, since they're synthesizing data. Prompts can attempt to sway them away from that, but an LLM is ultimately a regurgitation machine.

          Actual AI might eventually manage it, but that would require much more human-like experience (and, honestly, the chaos that gives us creativity). At that point it'll probably be sentient, and we'd have bigger things to worry about, lol

    • Lucky_777@lemmy.world · 4↑ 9↓ · 3 days ago

      This. It's a tool; embrace it and learn its limitations… or get left behind and become obsolete. You won't be able to keep up with the people who do use it.