

The Chinese Box and the Turing Test: Is AI really intelligent?

(2025/10/27)


Opinion Remember [1]ELIZA? The 1966 chatbot from MIT's AI Lab convinced countless people it was intelligent using nothing but simple pattern matching and canned responses. Nearly 60 years later, ChatGPT has people making the same mistake. Chatbots don't think – they've just gotten exponentially better at pretending.
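To see how little machinery "simple pattern matching and canned responses" actually requires, here is a toy sketch in the spirit of ELIZA. It is not the real 1966 DOCTOR script – the rules and replies below are invented for illustration – but the principle is the same: match surface patterns, echo fragments back, and never understand a word.

```python
import re

# Ordered (pattern, reply-template) rules, checked top to bottom.
# These rules are illustrative, not ELIZA's actual script.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Do you often feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # canned fallback when nothing matches

def respond(utterance: str) -> str:
    """Return a canned reply by pure pattern matching -- no comprehension."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Reflect the captured fragment back into the template.
            return template.format(*match.groups())
    return DEFAULT
```

A dozen lines of string matching is enough to produce replies that feel attentive, which is exactly why people read intelligence into them.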

Alan Turing's 1950 test set a [2]simple standard: if a judge can't tell whether they're conversing with a human or machine, the machine passes.

By this metric, many chatbots are already "intelligent." You can test this yourself at [3]Turing Test Live. Recent [4]studies from Queen Mary University and University College London found people can't reliably distinguish human voices from AI clones.


That's great news for scammers, not so good for the rest of us. Keep that in mind the next time your kid calls to ask for a quick loan via Venmo to pay for a car accident – the voice may not actually be your child's, and it will be you and your bank account in trouble if you pay up.



But is the AI being used for this actually intelligent or just very, very good at faking it? This is not a new question. American philosopher John Searle came up with the [8]Chinese Room, aka the "Chinese Box" argument, all the way back in 1980. He argued that while a computer could eventually simulate understanding – i.e. it could pass the Turing Test – that doesn't mean it's intelligent.

The Chinese Box experiment imagines a person who does not understand Chinese shut inside a room, using a set of instructions (e.g. a program) to respond to written Chinese messages (data) slipped under the door. Although the person's answers, with enough training (machine learning), are perfectly fluent, they are derived only from symbol manipulation, not from understanding. Searle argues this situation is analogous to how computers "understand" language. The man in the middle still doesn't have a clue what either the incoming or outgoing messages mean. It's syntactic processing without semantic comprehension. Or, as I like to put it, very sophisticated mass-production copy and paste.
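Searle's rulebook can be caricatured in a few lines of code: a pure symbol-to-symbol lookup table. The entries below are hypothetical examples, not a real phrasebook, but they make the point – the function compares shapes of strings and copies out the paired reply, and at no point does meaning enter the process.

```python
# A minimal "Chinese Room": the rulebook is just a table pairing
# incoming symbol strings with outgoing symbol strings.
# The entries are hypothetical, for illustration only.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗？": "当然懂。",     # "Do you understand Chinese?" -> "Of course."
}

def room(message: str) -> str:
    """Answer a note slipped under the door by pure syntactic lookup."""
    # No translation, no semantics: just match the symbols, emit the
    # symbols the rulebook lists next to them.
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."
```

Scale the table up by a few billion parameters and the outputs get fluent, but the operator inside the room is no closer to knowing what any of it means.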

[9]We're all going to be paying AI's Godzilla-sized power bills

[10]OpenNvidia could become the AI generation's WinTel

[11]Terminators: AI-driven robot war machines on the march

[12]AI web crawlers are destroying websites in their never-ending hunger for any and all content

For example, I was recently accused of writing a Linux story using AI. For the record, I don't use AI for writing. Search, yes – [13]Perplexity, for one, is a lot better than Google – but writing, no. So I looked into it, and what did I find? ChatGPT, in this case, did indeed give answers that looked a lot like my writing, because it had "learned" by stealing words from my earlier articles on Linux.

According to Searle's argument, current AI can never have true understanding, no matter how sophisticated it gets or how easily we're fooled by it. I agree with him as far as today's AI goes. Generative AI really is just copy and paste, and agentic AI, for all the chatter about it being a new step, is just GenAI large language models (LLMs) talking with each other. Neat, useful, but in no way, shape, or form a fundamental step forward.


Come the day we have Artificial General Intelligence (AGI), we may have a truly intelligent computer. We're not there yet. Despite all the hype, we're not even close to it today.

Sam Altman, head of OpenAI and the company's number one cheerleader, may have said: "We are now [15]confident we know how to build AGI as we have traditionally understood it," but that's crap. Will we eventually have a truly smart AI? Sure. I can see that happening.

[16]The air is hissing out of the overinflated AI balloon

Wake me up when we have one that can pass the [17]Survival Game test devised by Chinese researchers. That test requires an AI to find answers to a wide variety of questions through continuous trial and error. You know, like we do. The researchers' best guesstimate is that a system that knows precisely what it's saying and doing – think of [18]HAL saying "I'm sorry Dave, I'm afraid I can't do that" – won't arrive until 2100 AD.

I think we can get there faster than that. Technology always tends to improve faster than we think it will, even though we're terrible at predicting exactly how it will improve. I still want my flying car, but I've given up hope that I'll ever get one.

You may ask yourself: "Does this really matter? If my GenAI girlfriend says she loves me and I believe her, isn't that enough?" OK, for the terminally lonely, that may be fine. Profoundly sad, but OK. However, when we think of AI as intelligent, we tend to assume it's also reliable. It's not. Maybe StacyAI 2.0 won't cheat on you, but for work, we want a wee bit more.


We're not there yet. Kevin Weil, OpenAI's VP for science, recently claimed "GPT-5 just found solutions to 10 (!) previously unsolved Erdös problems." Ah, nope. No, it hadn't. [20]OpenAI's latest model had simply scraped answers off the internet and regurgitated them as its own.

Anthropic has discovered that, like people, AI programs will [21]lie, cheat, and blackmail. But they're not coming up with it on their own. Once again, they're just copying us. Sad, isn't it? ®




[1] https://web.njit.edu/~ronkowit/eliza.html

[2] https://abcnews.go.com/US/turing-test-determines-computers/story?id=101486628

[3] https://turingtest.live/

[4] https://www.theregister.com/2025/10/09/voice_clone_detection_study/




[8] https://iep.utm.edu/chinese-room-argument/

[9] https://www.theregister.com/2025/10/13/ai_power_bills/

[10] https://www.theregister.com/2025/09/29/nvidia_openai_alliance_opinion_column/

[11] https://www.theregister.com/2025/09/12/terminators_aidriven_robot_war_machines/

[12] https://www.theregister.com/2025/08/29/ai_web_crawlers_are_destroying/

[13] https://www.theregister.com/2024/12/16/opinion_column_perplexity_vs_google/


[15] https://blog.samaltman.com/reflections

[16] https://www.theregister.com/2025/08/25/overinflated_ai_balloon/

[17] https://www.theregister.com/2025/03/05/boffins_from_china_calculate_agi/

[18] https://www.youtube.com/shorts/5lsExRvJTAI


[20] https://www.bloomberg.com/opinion/articles/2025-10-21/openai-s-latest-breakthrough-is-a-sobering-reality-check?utm_medium=email&utm_source=newsletter&utm_term=251021&utm_campaign=sharetheview

[21] https://www.theregister.com/2025/06/25/anthropic_ai_blackmail_study/




Wiretrip

'Bullshit Machines' is the perfect name for LLMs. Or 'Mechanical Boris Johnsons'

Anonymous Coward

Well he does have Turkish heritage.

Christoph

Seen elsenet: AI is three autocompletes in a trenchcoat

So much hype

Guy de Loimbard

Bluff and bluster.

I do wonder if there will ever be anything solid from this "AI" period.

I read an article over the weekend that was comparing the AI hype with other events in history, such as the industrial revolution, gold rush and the dot com boom.

IMHO AI/LLM/ML will find a place in society, much like the PC has, but the bullshit being spun around what it will do and how it will change this that and the other is just that, BS.

The Muppet Test

steelpillow

> if a judge can't tell whether they're conversing with a human or machine, the machine passes.

As I have said before, only if a judge of Turing's calibre cannot tell will the machine pass the Turing test.

If it fools only Muppet judges, then it has only passed the Muppet test.

Do not underestimate the value of print statements for debugging.