• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 10 days ago

    A more accurate conclusion would be: human-like object concept representations emerge when fed data collected by humans, curated by humans, annotated by humans, and then tested by representation learning methods designed for humans.

    Again, I’m not disputing this point, but to be honest I don’t see why it’s significant. As I’ve noted, human representation of the world is not arbitrary. We evolved to create efficient models that allow us to interact with the world effectively. We’re now seeing that artificial neural networks are able to create similar kinds of internal representations, which allow them to meaningfully interact with data organized in a way that’s natural for humans.

    I’m not suggesting that human-style representation of the world is the one true way to build a world model, or that other efficient representations aren’t possible. However, that in no way detracts from the fact that LLMs can create a useful representation of the world that’s similar to our own.

    Ultimately, the end goal of this technology is to be able to interact with humans, to navigate human environments, and to accomplish tasks that humans want to accomplish.

    • queermunist she/her@lemmy.ml · 10 days ago

      LLMs create a useful representation of the world that is similar to our own when we feed them our human-created, human-curated, human-annotated data. This doesn’t tell us much about the nature of large language models or the nature of object concept representations; what it tells us is that human inputs result in human-like outputs.

      Claims about “nature” are much broader than the findings warrant. We’d need to see LLMs fed entirely non-human datasets (no human creation, no human curation, no human annotation) before we could make claims about what emerges naturally.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 10 days ago

        You continue to ignore my point that human representations are themselves not arbitrary. Our brains emerged naturally, and that’s what makes the representations humans produce natural. You could evolve a representation of the world from scratch by hooking a neural network up to raw sensory inputs, and its topology would eventually become tuned to model those inputs. I don’t see what would be fundamentally more natural about that, though.
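
        As a rough sketch of what I mean (my own illustrative example, not anything from the study under discussion): an unsupervised autoencoder shapes its internal code purely from the raw inputs it sees, with no human labels involved. The network sizes and the random stand-in data below are assumptions for illustration only.

        ```python
        # Minimal sketch: a tiny autoencoder whose internal representation is
        # shaped only by the raw inputs it reconstructs, not by human labels.
        import torch
        import torch.nn as nn

        class TinyAutoencoder(nn.Module):
            def __init__(self, input_dim=784, latent_dim=16):
                super().__init__()
                # Encoder compresses a raw input into a small latent code.
                self.encoder = nn.Sequential(
                    nn.Linear(input_dim, 128), nn.ReLU(),
                    nn.Linear(128, latent_dim),
                )
                # Decoder reconstructs the input from that code.
                self.decoder = nn.Sequential(
                    nn.Linear(latent_dim, 128), nn.ReLU(),
                    nn.Linear(128, input_dim),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = TinyAutoencoder()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # Stand-in for a stream of raw sensory inputs (random noise here).
        for step in range(100):
            x = torch.rand(32, 784)
            loss = loss_fn(model(x), x)  # no labels: structure comes from the data alone
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        ```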

        • queermunist she/her@lemmy.ml · 10 days ago

          If we define human inputs as “natural” then the word basically ceases to mean anything.

          It’s the equivalent of saying that paintings and sculptures emerge naturally because artists are human and humans are natural.

            • queermunist she/her@lemmy.ml · 9 days ago

              I’m saying that the terms “natural” and “artificial” are in a dialectical relationship: they define each other through their contradictions. Those words don’t mean anything once you include everything humans do as natural; you’ve effectively defined “artificial” out of existence, and as a result also defined “natural” out of existence.

              • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 9 days ago

                I haven’t defined artificial out of existence at all. My definition of artificial is a system that was consciously engineered by humans. The human mind is a product of natural evolutionary processes. Therefore, the way we perceive and interpret the world is inherently a natural process. I don’t see how it makes sense to say that human representation of the world is not natural.

                An example of something that’s artificial would be taking a neural network we designed and having it build a novel representation of the world from raw inputs, unbiased by us. It would be a designed system, as opposed to one that evolved naturally, with its own artificial representation of the world.

                • queermunist she/her@lemmy.ml · 9 days ago

                  My definition of artificial is a system that was consciously engineered by humans.

                  And humans consciously decided what data to include, consciously created most of the data themselves, and consciously annotated the data for training. Conscious decisions are all over the dataset, even if they didn’t design the neural network directly from the ground up. The system still evolved from conscious inputs; you can’t erase its roots and call it natural.

                  Human-like object concept representations emerge from datasets made by humans because humans made them.

                  • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 9 days ago

                    Human-like object concept representations emerge from datasets made by humans because humans made them.

                    And humans made them that way because human minds evolved to represent data in this way. As I keep pointing out, we’re feeding neural networks data that’s organized in a way that’s natural for our brains to operate on. It’s an artificial system that mimics the way we naturally represent data in our own minds.

                    The artificial aspect of the system lies in the implementation details: the ways we’ve come up with to encode data. These are not essential. It’s like the difference between an algorithm and its concrete implementation in a programming language. The fact that the data is encoded using human-designed formats is incidental to the structure of the data, which is derived from the way our brains encode information.
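
                    As a toy illustration (the concepts and numbers here are entirely made up), the same relational structure can be stored under two different human-designed encodings, and the structure survives either way:

                    ```python
                    # Toy example: one relational structure, two human-designed encodings.
                    import json

                    # Encoding 1: a nested mapping serialized as JSON.
                    as_json = json.dumps({"cat": {"dog": 0.8, "car": 0.1},
                                          "dog": {"cat": 0.8, "car": 0.2}})

                    # Encoding 2: a flat similarity matrix plus an index.
                    concepts = ["cat", "dog", "car"]
                    matrix = [[1.0, 0.8, 0.1],
                              [0.8, 1.0, 0.2],
                              [0.1, 0.2, 1.0]]

                    # Both encodings answer the same structural question identically.
                    sim_a = json.loads(as_json)["cat"]["dog"]
                    sim_b = matrix[concepts.index("cat")][concepts.index("dog")]
                    assert sim_a == sim_b == 0.8
                    ```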

                    Human-like object concept representations emerge from the way our brains are structured. These are the representations that are encoded into data sets by humans.

                    Also, you’ve talked about a dialectical relationship, but dialectics are about understanding the evolution of dynamic systems. The contradictions represent the opposing forces within a system that guide its development over time. When we talk about a distinction between natural and artificial, what’s the system we’re discussing here, and what are the opposing forces?