How Much Do AI Models Resemble a Brain? (foommagazine.org)
- Reference: 0180608082
- News link: https://slashdot.org/story/26/01/17/2350259/how-much-do-ai-models-resemble-a-brain
- Source link: https://www.foommagazine.org/language-models-resemble-more-than-just-language-cortex-show-neuroscientists/
> [R]esearchers at the Swiss Federal Institute of Technology (EPFL), the Massachusetts Institute of Technology (MIT), and Georgia Tech revisited earlier findings that showed that language models, the engines of commercial AI chatbots, show strong signal correlations with the human language network, the region of the brain responsible for processing language... The results lend clarity to the surprising picture that has been emerging from the last decade of neuroscience research: That AI programs can show strong resemblances to large-scale brain regions — performing similar functions, and doing so using highly similar signal patterns.
>
> Such resemblances have been exploited by neuroscientists to make much better models of cortical regions. Perhaps more importantly, the links between AI and cortex provide an interpretation of commercial AI technology as being profoundly brain-like, validating both its capabilities and the risks it might pose for society as the first synthetic braintech. "It is something we, as a community, need to think about a lot more," said Badr AlKhamissi, doctoral student in computer science at EPFL and first author of the preprint, in an interview with Foom. "These models are getting better and better every day. And their similarity to the brain [or brain regions] is also getting better — probably. We're not 100% sure about it...."
>
> There are many known limitations with seeing AI programs as models of brain regions, even those that have high signal correlations. For example, such models lack any direct implementations of biochemical signalling, which is known to be important for the functioning of nervous systems. However, if such comparisons are valid, then they would suggest, somewhat dramatically, that we are increasingly surrounded by a synthetic braintech. A technology not just as capable as the human brain, in some ways, but actually made up of similar components.
Thanks to Slashdot reader [2]Gazelle Bay for sharing the article.
[1] https://aclanthology.org/2025.emnlp-main.1237/?ref=foommagazine.org
[2] https://slashdot.org/~Gazelle+Bay
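The "signal correlations" the article describes are typically measured with encoding models: a linear map is fit from a language model's hidden activations to recorded brain responses, and the held-out prediction correlation quantifies the resemblance. A minimal sketch of that procedure on synthetic data — all names, dimensions, and the ridge penalty here are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: hidden states for 200 stimuli (64-dim features)
# and fMRI-like responses for 10 voxels linearly driven by those states.
n_stim, n_feat, n_vox = 200, 64, 10
X = rng.standard_normal((n_stim, n_feat))                    # model activations
W_true = rng.standard_normal((n_feat, n_vox))                # unknown linear map
Y = X @ W_true + 0.5 * rng.standard_normal((n_stim, n_vox))  # noisy "brain" data

# Fit ridge regression on a training split, predict held-out responses.
train, test = slice(0, 150), slice(150, 200)
lam = 1.0  # ridge penalty (illustrative choice)
W = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(n_feat),
                    X[train].T @ Y[train])
Y_hat = X[test] @ W

# Per-voxel Pearson correlation between predicted and actual signals.
def pearson(a, b):
    a = a - a.mean(0)
    b = b - b.mean(0)
    return (a * b).sum(0) / np.sqrt((a ** 2).sum(0) * (b ** 2).sum(0))

scores = pearson(Y_hat, Y[test])
print(scores.mean())  # high here because the data is synthetic and linear
```

On real fMRI data the correlations are far lower and noise-ceiling corrections are needed, but the shape of the analysis — fit a linear readout, score held-out correlation per voxel — is the standard one.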
Re: (Score:2)
It's very obvious the neural networks aren't doing what the human brain does.
Among other things, humans don't consume the entire internet to string together a coherent sentence. Humans usually learn to read with a single textbook. The difference in information volume is astounding.
Furthermore, humans don't have separate training and production modes. We are constantly learning and can modify our brains in real time. The cognitive dissonance is a bit painful, though.
Another thing is recursion: human brains
Re: (Score:2)
Ok, how many textbooks did it take you to learn how to read? A billion?
chemical signals (Score:2)
This sounds interesting to me. What if we built simple simulations of biochemical reactions into how the models reach their conclusions and make connections? Could we then better imitate/simulate larger processes like emotion and get better results, and tune for things we want, like ethics?
Re: (Score:3)
EPFL tried doing that with its Blue Brain Project, run on IBM Blue Gene supercomputers. [1]https://en.wikipedia.org/wiki/... [wikipedia.org]
The main issue is that we can't do a perfect simulation at the quantum level; it would be too expensive (if it were even possible). So it's necessary to do an approximation. Then the question becomes, "How much approximation is too much?" and we don't really know.
The Blue Brain project made good headlines but didn't do much.
[1] https://en.wikipedia.org/wiki/Blue_Brain_Project
not even a little bit (Score:1)
We don't know enough about how the brain works (at a higher level), so we can't make something that WORKS like the brain. The best we can do is make something that BEHAVES like a brain.
LLMs and other AI implementations can be thought of as "brain emulators": they're trying to achieve similar behaviors any way they can. Our current technology could be looked at as "serial", whereas brains work almost exclusively in parallel. It's still possible to build a neural network that works like a brain at a low l
Re: (Score:2)
> because it can't BE designed.
why?
Re: (Score:3)
Artificial neural networks are not serial. You can simulate them on a serial computer (although nobody does anymore), but that has nothing to do with their function, only their efficiency.
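The point is visible in the math: every neuron in a layer is an instance of the same operation, so the whole layer can be computed as one matrix product with no forced ordering. A minimal sketch (illustrative shapes, plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(8)        # input vector
W = rng.standard_normal((16, 8))  # weights for a layer of 16 neurons
b = np.zeros(16)                  # biases

# Serial schedule: compute each neuron's ReLU activation one at a time.
serial = np.array([max(0.0, W[i] @ x + b[i]) for i in range(16)])

# Parallel schedule: one matrix-vector product covers the whole layer;
# on a GPU this dispatches as a single parallel kernel.
parallel = np.maximum(W @ x + b, 0.0)

assert np.allclose(serial, parallel)  # same function, different schedule
```

Both schedules compute the identical function; the serial loop is just a slower execution order, which is the parent's point about efficiency versus function.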
Re: (Score:3)
That doesn't mean what you think it means.
AI models are not a brain (Score:2)
How is this even a newsworthy question?