• Rentlar@lemmy.ca

    An actually interesting use of artificial intelligence accomplishing something, when put in the hands of expert mathematicians. It definitely took a lot of coaxing it back to doing the task correctly, but it’s pretty cool that it can solve problems (even if they are math nerd ones) in a way that is independently verifiable.

    • panda_abyss@lemmy.ca

      In the hands of experts these are definitely useful. I’ve always felt that.

      AI should be used to augment humans, not replace them.

      Unfortunately, we have idiots making decisions based on the sycophantic BS machine without knowing what the job actually does.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP

          Exactly, when you dig into all the complaints people have about this tech, they’re ultimately just symptoms of the underlying capitalist relations.

        • Juice@midwest.social

          You can read Marx’s chapters on technology in Capital, Volume 1. What he describes from his own time, about how technology is developed, for whose benefit, and specifically how it has to exploit workers in order to be useful to capital, matches closely with the development of AI that we are seeing now.

        • panda_abyss@lemmy.ca

          Yes.

          I’d feel a lot less annoyed at my code being used to train the AI (without my consent) if the AI’s benefits weren’t funnelled into private pockets.

          I’d feel a lot less annoyed at AI if it weren’t constantly used to replace jobs and then fail at them. Actually, AI isn’t replacing jobs; it’s being used as an excuse to do layoffs while pretending your company is being innovative, so as not to scare off investors.

          Without a profit motive there wouldn’t be ChatGPT Health, which is just faking medical skills while being wrong as often as a coin toss, in exchange for money. If I did that I’d be sued for negligence and/or fraud.

      • All Ice In Chains@lemmy.ml

        The problem is always techbros. Large Language Models, Deep Learning, these kinds of things are potentially valuable when put to work in the right arena.

        A techbro will never put them in the right arena. It’s always a false promise built on flimsy reputational credit.