Human-like object concept representations emerge naturally in multimodal large language models | Nature Machine Intelligence
www.nature.com

Understanding how humans conceptualize and categorize natural objects offers critical insights into perception and cognition. With the advent of large language models (LLMs), a key question arises: can these models develop human-like object representations from linguistic and multimodal data? Here we combined behavioural and neuroimaging analyses to explore the relationship between object concept representations in LLMs and human cognition. We collected 4.7 million triplet judgements from LLMs and multimodal LLMs to derive low-dimensional embeddings that capture the similarity structure of 1,854 natural objects. The resulting 66-dimensional embeddings were stable, predictive and exhibited semantic clustering similar to human mental representations. Remarkably, the dimensions underlying these embeddings were interpretable, suggesting that LLMs and multimodal LLMs develop human-like conceptual representations of objects. Further analysis showed strong alignment between model embeddings and neural activity patterns in brain regions such as the extrastriate body area, parahippocampal place area, retrosplenial cortex and fusiform face area. This provides compelling evidence that the object representations in LLMs, although not identical to human ones, share fundamental similarities that reflect key aspects of human conceptual knowledge. Our findings advance the understanding of machine intelligence and inform the development of more human-like artificial cognitive systems.

Multimodal large language models are shown to develop object concept representations similar to those of humans. These representations closely align with neural activity in brain regions involved in object recognition, revealing similarities between artificial intelligence and human cognition.
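To make the triplet paradigm in the abstract concrete, here's a minimal sketch of how an odd-one-out judgement can be read off a similarity embedding. The embedding values and the `odd_one_out` helper below are made up for illustration; the study works in the opposite direction, fitting 66 interpretable dimensions to millions of such judgements.

```python
import numpy as np

# Hypothetical low-dimensional embeddings for three objects (made-up values).
emb = {
    "aardvark": np.array([0.9, 0.1, 0.0]),
    "anteater": np.array([0.8, 0.2, 0.1]),
    "airplane": np.array([0.0, 0.1, 0.9]),
}

def odd_one_out(a, b, c):
    """Pick the item left over after choosing the most similar pair,
    where similarity is the dot product of embedding vectors."""
    sims = {(a, b): emb[a] @ emb[b], (a, c): emb[a] @ emb[c], (b, c): emb[b] @ emb[c]}
    most_similar = max(sims, key=sims.get)
    # The odd one out is whichever item is not in the most similar pair.
    return ({a, b, c} - set(most_similar)).pop()

print(odd_one_out("aardvark", "anteater", "airplane"))  # -> airplane
```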
I haven’t defined artificial out of existence at all. My definition of artificial is a system that was consciously engineered by humans. The human mind is a product of natural evolutionary processes. Therefore, the way we perceive and interpret the world is inherently a natural process. I don’t see how it makes sense to say that human representation of the world is not natural.
An example of something that's artificial would be taking a neural network we designed and having it build a novel representation of the world from raw inputs, unbiased by us. It would be a designed system, as opposed to one that evolved naturally, with its own artificial representation of the world.
And humans consciously decided what data to include, consciously created most of the data themselves, and consciously annotated the data for training. Conscious decisions are all over the dataset, even if they didn't design the neural network directly from the ground up. The system still evolved from conscious inputs; you can't erase its roots and call it natural.
Human-like object concept representations emerge from datasets made by humans because humans made them.
And humans made them that way because human minds evolved to represent data in this way. As I keep pointing out, we're feeding neural networks data that's organized in a way that's natural for our brains to operate on. It's an artificial system that mimics the way we naturally represent data in our own minds.
The artificial aspect of the system lies in the implementation details: the ways we've come up with to encode data. These are not essential. It's like the difference between an algorithm and its concrete implementation in a programming language. The fact that the data is encoded using human-designed formats is incidental to the structure of the data, which is derived from the way our brains encode information.
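To make the analogy concrete, here's a minimal sketch (the embeddings are made-up toy values, and `nearest_neighbour` is just an illustrative helper): the same similarity structure survives two very different human-designed encodings, which is exactly the sense in which the format is incidental.

```python
import json
import struct

# Toy 2-D "concept embeddings" -- made-up values, purely for illustration.
embeddings = {"dog": (0.9, 0.1), "cat": (0.8, 0.2), "car": (0.1, 0.95)}

def nearest_neighbour(name, table):
    """Return the concept closest to `name` by squared Euclidean distance."""
    x0, y0 = table[name]
    others = {k: v for k, v in table.items() if k != name}
    return min(others, key=lambda k: (others[k][0] - x0) ** 2 + (others[k][1] - y0) ** 2)

# Encode the same data two different ways: human-readable JSON and packed binary.
as_json = json.dumps(embeddings)
as_binary = {k: struct.pack("2f", *v) for k, v in embeddings.items()}

# Decode both; the similarity structure is identical either way.
decoded_json = json.loads(as_json)
decoded_binary = {k: struct.unpack("2f", v) for k, v in as_binary.items()}

assert nearest_neighbour("dog", decoded_json) == nearest_neighbour("dog", decoded_binary) == "cat"
```

Swap in any other serialization you like; as long as decoding is faithful, the relational structure of the data is unchanged.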
Human-like object concept representations emerge from the way our brains are structured. These are the representations that are encoded into datasets by humans.
Also, you’ve talked about a dialectical relationship, but dialectics are about understanding the evolution of dynamic systems. The contradictions represent the opposing forces within a system that guide its development over time. When we talk about a distinction between natural and artificial, what’s the system we’re discussing here, and what are the opposing forces?