News: 0177975671

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com)

(Monday June 09, 2025 @03:34AM (EditorDavid) from the machine-language dept.)


The Atlantic makes the case that "[1]the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines."

> [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's [2]improved "emotional intelligence," which [3]he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, [4]argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, [5]said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.

A sociologist and linguist even teamed up for a new book called [6]The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out:

> The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

>

> Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who [7]declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you [8]with AI friends to replace the human pals you have lost in our alienated social-media age....

>

> The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, [9]only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.



[1] https://www.msn.com/en-us/technology/artificial-intelligence/artificial-intelligence-is-not-intelligent/ar-AA1GcZBz

[2] https://www.inc.com/jason-aten/openai-says-chatgpt-4-5-comes-with-a-killer-feature-emotional-intelligence/91154092

[3] https://x.com/sama/status/1895203654103351462

[4] https://www.darioamodei.com/essay/machines-of-loving-grace

[5] https://www.cnbc.com/2025/03/17/human-level-ai-will-be-here-in-5-to-10-years-deepmind-ceo-says.html

[6] https://bookshop.org/a/12476/9780063418561

[7] https://nypost.com/2025/05/14/health/chatgpt-is-my-therapist-its-more-qualified-than-any-human-could-be/

[8] https://www.wsj.com/tech/ai/mark-zuckerberg-ai-digital-future-0bb04de7

[9] https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/



Frenetic churn (Score:5, Insightful)

by sinkskinkshrieks ( 6952954 )

From the utopian "we don't need software developers anymore" to "we really need software developers now because no one understands how to do anything anymore": swinging from one extreme to the other is incompatible with stable employment or investment by anyone. Perhaps instead of chasing hype with such gusto, folks should realize "AI" is basically glorified tab-completion, prone to hallucinating incorrect results like a college intern trying to BS their way through life.

Re: (Score:2)

by burtosis ( 1124179 )

> like a college intern trying to BS their way through life.

Look, a Bachelor of Science is a perfectly acceptable degree; not everyone is suited to a Master's or PhD. Most of our scientific workforce have perfectly fine BS degrees helping them through life. Oh, maybe you meant the other kind of BS degree getting people through life.

Re: (Score:2)

by jd ( 1658 )

You are correct.

When it comes to basic facts, if multiple AIs with independent internal structures and independent training sets state the same claim, that's good evidence it's probably not a hallucination but something actively learned. It is not, however, remotely close to evidence that the claim is true.

Because AIs have no understanding of semantics, only association, that's about as good as AI gets.
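jd's cross-checking heuristic can be sketched in a few lines. Everything here is hypothetical (the model names, the agreement threshold), and, as the comment says, agreement only suggests the claim was learned somewhere, not that it is true; models trained on overlapping data can share the same error.

```python
from collections import Counter

def consensus(answers, threshold=0.75):
    """Return the majority answer across independent models, or None.

    `answers` maps a (hypothetical) model name to its one-line answer.
    Agreement is weak evidence against hallucination, not evidence of fact.
    """
    counts = Counter(a.strip().lower() for a in answers.values())
    best, n = counts.most_common(1)[0]
    return best if n / len(answers) >= threshold else None

# Hypothetical outputs from three independently trained models:
votes = {"model_a": "Paris", "model_b": "paris", "model_c": "Paris"}
print(consensus(votes))  # agreement -> "paris"
```

The threshold of 0.75 is an arbitrary illustrative choice; any real use would need to tune it and, per the comment, still treat a consensus answer as "actively learned," not "fact."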

Books (Score:3, Interesting)

by dpille ( 547949 )

Books aren't intelligent, either. I guess it's just my tendency to equate language with thought that has made me mistakenly believe books have taught me things and saved me time.

Wrong argument (Score:2)

by Viol8 ( 599362 )

No one is saying AI isn't useful; the argument is whether it's intelligent in the human sense. The point is it doesn't need to be, and it doesn't matter anyway. All that matters is whether it gives useful output that would be difficult or impossible to reproduce with conventional direct programming (i.e. writing code to solve the problem directly, not using a learning simulation of neurons).

Re: Wrong argument (Score:2)

by fluffernutter ( 1411889 )

Said like a person who isn't investing billions of dollars in it to replace employees.

Re:Books (Score:4, Insightful)

by locater16 ( 2326718 )

You accidentally hit the nail on the head: Transformers are search engines, a cool form of book. The mathematical model they started from was designed from the ground up for translating languages, putting an enormous translation book into a super convenient form factor. Unfortunately, the current "AI" industry claims to sell machines that will do the thinking for you, rather than a book that might teach you things to think about. So the AI boom is a lie, and they don't care, because they're currently making money off that lie.

Re: (Score:2)

by burtosis ( 1124179 )

Sure, if the books were often incoherent and made things up. But books that teach are free of such things, if they're worth anything at actually conveying proper information. AI is more like buying a science textbook from a religious institution with an agenda.

Re:Books (Score:4, Insightful)

by pjt33 ( 739471 )

No, it's your tendency to focus on the immediately perceived object rather than its cause. The intelligent authors of many books have taught you things using the books as an instrument. Observe the difference with LLMs: unless they're really mechanical Turks (and examples of that kind of "fake it until you make it" have been observed and will probably continue to be observed), they're producing carefully tuned noise rather than conveying intentionally considered ideas.

Re: (Score:2)

by Cyberpunk Reality ( 4231325 )

I was under the impression that most LLMs are mechanical turks - not in an immediate sense, but in the sense that a lot of workers around the world were involved in annotating the datasets LLMs are trained on.[1] So, from that perspective, what an LLM is doing is outsourcing (in time and space) your conversation to a random person using the internet. Insofar as there's thinking involved, what you're getting are reflections of the thinking involved in building the Chinese Room in the first place.


[1] https://www.economist.com/international/2025/04/10/there-is-a-vast-hidden-workforce-behind-ai

Re: (Score:2)

by jd ( 1658 )

You will find that books written by the infinite monkeys approach are less useful than books written by conscious thought, and that even those books are less useful than books written and then repeatedly fact-checked and edited by independent conscious thought.

It is not, in fact, the book that taught you things, but the level of error correction.

It's social media's fault (Score:2)

by devslash0 ( 4203435 )

There is a saying that if something is free, you become the product; milked for data and attention.

When social media came to be, their business model centered around advertising.

But as users departed for different platforms, the social platforms' creators faced a big problem: there were not enough users left to generate the content they could serve ads against.

So they realised they needed a replacement for their users, to create the impression that their platform was still alive and kicking. They needed

Neither are we (Score:1)

by Tailhook ( 98486 )

We are, each of us, about 3 lbs of low frequency nerve cells burning approximately 20 watts. Evolution used this bundle of nerves to create a staggeringly complex search engine, combining an inherited model with a limited learning mechanism and goal seeking.

Now, our huge, high frequency, inefficient, machine search engines are replicating our capabilities. Many (most?) have labored under the hubris that there is something mysterious and unattainable about the human mind: it's somehow beyond any conceiva

Re: Neither are we (Score:2)

by fluffernutter ( 1411889 )

Stop drinking the Kool Aid.

Bollocks (Score:2)

by Viol8 ( 599362 )

If we're not intelligent then the term is meaningless. And we're more than just a search engine (well, you might not be) because we have self-awareness (yes, we do, it's not an "illusion" as some idiots claim, because otherwise who or what is the illusion fooling?)

"Many (most?) have labored under the hubris that there is something mysterious and unattainable about the human mind"

Few people claim that. What they do claim is that the human mind is way more complicated than was assumed plus it works in a differ

Re: (Score:2)

by burtosis ( 1124179 )

20 watts is just the electrical energy and neglects the chemical energy, but the point still stands. A human brain does the same tasks with a handful of watts, thousands of times less energy than the most power-efficient processors we have invented so far.

> AlphaEvolve is actively improving itself right now, both hardware and software.

And it’s going to be a complete piece of trash, consuming vast amounts of energy for less and less return, until improvements are no longer feasible in human lifetimes with the current pool of information. You see, the AI is derived from e

Re: (Score:2)

by jd ( 1658 )

They're not replicating our capabilities, nor could they. The architecture is completely wrong, as is the design philosophy. Brains are not classifiers, the way neural network software is, they are abstraction engines and dynamic compositors.

Re: (Score:2)

by LoadLin ( 6193506 )

> Aside from researchers, nobody really cares if "AI" is intelligent, what we care about is the results and those are very, very interesting.

Well... You can say it's a "research" zone, but it has huge implications.

If the AI is mainly an advanced parrot, then once the data pool becomes too small, or starts feeding on itself, it has a huge problem of becoming wrong.

I have to say these critics are too focused on LLMs, while more and more AI systems mix different approaches, with the LLM sitting in the "talkative" layer rather than the "thinking" layer.

But it remains an acceptable concern that a lot of enterprises can be selling automated AI, and if that AI dep

Entirely mechanical (Score:2)

by DrXym ( 126579 )

Almost every LLM works like this: given a series of input tokens, produce a list of potential next tokens, choose one with some random weighting, append it to the list of input tokens, rinse, repeat. With sufficient "parameters" (nodes trained on a sufficiently large body of input), it makes the AI look like it is generating meaningful output, whereas it's practically a mechanical process.
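That loop (predict a distribution, sample, append, repeat) can be sketched with a toy bigram table standing in for the trained network. All tokens and probabilities here are made up for illustration; a real LLM computes the distribution with billions of parameters, but the decoding loop is this mechanical:

```python
import random

# Toy "language model": a bigram table standing in for a trained network.
# Every token and probability here is invented for illustration.
BIGRAM_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.2, "end": 0.2},
    "dog": {"ran": 0.7, "end": 0.3},
    "sat": {"end": 1.0},
    "ran": {"end": 1.0},
}

def sample_next(token, rng):
    """Pick the next token by weighted random choice, as LLM decoding does."""
    dist = BIGRAM_PROBS[token]
    choices, weights = zip(*dist.items())
    return rng.choices(choices, weights=weights, k=1)[0]

def generate(start, rng, max_len=10):
    """The loop from the comment: predict, sample, append, repeat."""
    tokens = [start]
    while tokens[-1] != "end" and len(tokens) < max_len:
        tokens.append(sample_next(tokens[-1], rng))
    return tokens

rng = random.Random(0)  # fixed seed so the run is repeatable
print(generate("the", rng))
```

Nothing in the loop knows what a cat or a dog is; it only knows which token tends to follow which, which is the commenter's point.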

Re: (Score:2)

by Tailhook ( 98486 )

You have the basics down: it's a search engine, searching a fabulously abstract, emergent model.

What you've missed is that that's all you are, too. You just use different wiring, and a model refined over a longer interval.

Re: Entirely mechanical (Score:2)

by simlox ( 6576120 )

Our wiring is based on the physical world around us, including complex social interactions. That is far more detailed and complicated than what LLMs are trained on. Plus, we are born with a lot of pre-training, or whatever evolution gave us.

How is it not intelligent? (Score:1)

by jkechel ( 1101181 )

We don't really have any definition for 'intelligence,' nor do we understand how or why neural networks behave the way they do.

In principle, we tried to emulate the same basic functioning as in organic brains.

So why should AI be any different from biological neural networks?

Why wouldn't 'god' [1] infuse life into a silicon structure just as it does in carbon structures that meet the requirements?

Are pigs intelligent? Are snails?

[1] 'God' being either 'not yet explainable by science' or some eternal power, d
