'AI Can't Think' (theverge.com)
- Reference: 0180209551
- News link: https://slashdot.org/story/25/11/25/2146258/ai-cant-think
- Source link: https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
> The article goes on to point out that we use language to communicate. We use it to create metaphors to describe our reasoning. People who have lost their language ability can still show reasoning. Human beings create knowledge when they become dissatisfied with the current metaphor. Einstein's theory of relativity was not based on scientific research; he developed it as a thought experiment because he was dissatisfied with the existing metaphor. It quotes someone who said, "common sense is a collection of dead metaphors," and that AI, at best, can rearrange those dead metaphors in interesting ways. But it will never be dissatisfied with the data it has or with an existing metaphor.
>
> A different [3]critique (PDF) has pointed out that even as a language model, AI is flawed by its reliance on the internet. The languages used on the internet are unrepresentative of the languages of the world, and other languages contain unique descriptions/metaphors that are not found on the internet. My metaphor for what was discussed is the Inuit-language words for kinds of snow that describe qualities not found in European languages. If those metaphors aren't on the internet, AI will never be able to create them.
>
> This does not mean that AI isn't useful. But it is not remotely human intelligence. That is just a poor metaphor. We need a better one.
Benjamin Riley is the founder of [4]Cognitive Resonance, a new venture to improve understanding of human cognition and generative AI.
[1] https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems
[2] https://slashdot.org/~RossCWilliams
[3] https://gwern.net/doc/psychology/linguistics/2024-fedorenko.pdf
[4] https://www.cognitiveresonance.net/
I'm probably speaking for a lot of people (Score:2)
Who? But I agree with the points outside of the market babble.
Really? (Score:2)
> Posted by BeauHD on Wednesday November 26, 2025 @11:40AM from the language-doesn't-equal-intelligence dept
Don't you mean "from the well duh! dept"?
Re: Really? (Score:2)
Half of the human population doesn't think either; they just echo their favorite chamber.
Wrong Name (Score:2)
It's almost as if we shouldn't have included "intelligence" in the actual fucking name. But once again our language has been co-opted by marketing BS and now here we are trying to set the record straight so people aren't confused or deceived.
What is thinking? (Score:2)
As much as I agree with the statement that contemporary LLMs certainly differ a lot from what we experience as "thinking" in other human beings, the problem with this line of argument remains that there is no consensus on what exactly constitutes "thinking", so it is unconvincing to claim that LLMs "cannot think". It is like claiming "chocolate bars do not contain dark matter!" while not being able to say what dark matter actually is. Also, people would probably not claim that "pocket calculators cannot...
perhaps correct, but a load of bullshit (Score:2)
"The problem is that according to current neuroscience, human thinking is largely independent of human language -- and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own..."
Complete bullshit:
"according to current neuroscience, human thinking is largely independent of human language"
False, but so what? LLMs are "largely independent of human language" as well.
"...and we have little reason to believe ever more sophis
PR article (Score:2)
This is a PR "thought leadership" BS article by Benjamin Riley of Cognitive Resonance, which "provides direct consulting support to organizations to improve understanding of how generative AI works."
This doesn't mean they're wrong, but it's probably nothing terribly original (there's a reason it's not on openreview.net as a submission to one of the relevant AI conferences).