• oce 🐆@jlai.lu · 5 months ago

    I’ve read a nice book by a French skepticism popularizer trying to explain the evolutionary origin of cognitive biases: basically, the biases that fuck with our logic today probably helped us survive in the past. For example, the agent detection bias makes us interpret the sound of a twig snapping in the woods as if some dangerous animal or person were tracking us. It doesn’t cost much to be wrong about it, and it sucks to be eaten if it was true but you ignored it. So it’s efficient to put an intention or an agent behind a random natural occurrence. This could also be what religions grew from.

  • toynbee@lemmy.world · 5 months ago

    This basically happened in an early (possibly the first?) episode of Community. That was likely inspired by something that happened in real life, but it would not be surprising if the story in the image was inspired by Community.

    • themeatbridge@lemmy.world · 5 months ago

      It is a classic pop psychology/philosophy legend/trope, predating Community and the AI boom by a wide margin. It’s one of those examples people repeat because it’s an effective demonstration and a memorable way to engage a bunch of hungover first-year college students. It opens several different conversations about the nature of the mind, the self, empathy, and projection.

      It’s like the story of the engineering professor who gave a test with a series of instructions, with instruction 1 being “read all the instructions before you begin,” followed by things like “draw a duck” or “stand up and sing Happy Birthday to yourself,” and then instruction 100 being “Ignore instructions 2–99. Write your name at the top of the sheet and make no other marks on the paper.”

      Like, it definitely happened, and somebody was the first to do it somewhere. But it’s been repeated so often, in so many different classes and environments, that it’s not possible to know who did it first, nor does it matter.

  • kromem@lemmy.world · 5 months ago

    While true, there’s a very big difference between correctly not anthropomorphizing the neural network and incorrectly not anthropomorphizing the data compressed into weights.

    The data is anthropomorphic, and the network self-organizes the data around anthropomorphic features.

    For example, the older generation of models will choose to be the little spoon around 70% of the time and the big spoon around 30% of the time if asked 0-shot, as there’s likely a mix in the training data.

    But one of the SotA models picks little spoon every single time, dozens of runs in a row, almost always grounding its choice in the sensation of being held.

    It can’t be held, and yet its output is biased away from the norm based on the sense of it anyway.

    People who pat themselves on the back for being so wise as to not anthropomorphize are going to be especially surprised by the next 12 months.

  • cynar@lemmy.world · 5 months ago

    I just spent the weekend driving a remote-controlled Henry Hoover around a festival. It’s amazing how many people immediately anthropomorphised it.

    It got a lot of head pats, and cooing, as if it was a small, happy, excitable dog.

  • IsThisAnAI@lemmy.world · 5 months ago

    I feel like half this class went home saying, “akchtually, I would have gasped at you randomly breaking a non-humanized pencil as well.” And they are probably correct.

  • voracitude@lemmy.world · 5 months ago

    I would argue that the first person in the image has it backwards. It seems to me that anthropomorphising a chatbot or other inanimate object would be a sign of heightened sensitivity to shared humanity, not reduced sensitivity, if it were a sign of anything. Where’s the study showing a correlation between anthropomorphisation and callousness? Or whatever condition describes not seeing other people as fully human?

    I misunderstood the first time around, but I still disagree with the idea that the Turing Test measures how “human” the participant perceives other entities to be. Is there a study that shows a correlation between anthropomorphisation and tendencies towards social justice?