AI is an over-confident pal that doesn't learn from mistakes

(2025/07/24)


Researchers at Carnegie Mellon University have likened today's large language model (LLM) chatbots to "that friend who swears they're great at pool but never makes a shot" – having found that their virtual self-confidence grew, rather than shrank, after getting answers wrong.

"Say the people told us they were going to get 18 questions right, and they ended up getting 15 questions right. Typically, their estimate afterwards would be something like 16 correct answers," explains Trent Cash, lead author of the [1]study , published this week, into LLM confidence judgement. "So, they'd still be a little bit overconfident, but not as overconfident. The LLMs did not do that. They tended, if anything, to get more overconfident, even when they didn't do so well on the task."

LLM tech is enjoying a moment in the sun, branded as "artificial intelligence" and inserted into half the world's products and counting. The promise of an always-available expert who can chew the fat on a range of topics using conversational natural-language question-and-response has proven popular – but the reality has fallen short, thanks to issues with "hallucinations" in which the answer-shaped object it generates from a stream of statistically likely continuation tokens bears little resemblance to reality.
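That "stream of statistically likely continuation tokens" is not a figure of speech: at each step the model emits a score for every entry in its vocabulary, a softmax turns those scores into a probability distribution, and one token is sampled. A deliberately tiny sketch follows – the vocabulary and the random "model" are made up for illustration; real LLMs do this over roughly 100,000 tokens with a deep network:

    import numpy as np

    # Toy decoding loop: score the vocabulary, softmax into probabilities,
    # sample one token, repeat. Vocab and "model" are illustrative only.
    vocab = ["the", "answer", "is", "probably", "42", "wrong", "."]
    rng = np.random.default_rng(0)

    def toy_model(tokens):
        # Stand-in for a neural network's per-step logits.
        return rng.normal(size=len(vocab))

    def sample_next(logits, temperature=0.8):
        z = np.exp((logits - logits.max()) / temperature)
        probs = z / z.sum()          # softmax -> a probability distribution
        return int(rng.choice(len(vocab), p=probs))

    tokens = []
    for _ in range(8):
        tokens.append(sample_next(toy_model(tokens)))

    print(" ".join(vocab[t] for t in tokens))

Note what the loop never does: check the output against the world. Fluency is guaranteed by construction; truth is not, which is where the hallucinations come from.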

"When an AI says something that seems a bit fishy, users may not be as sceptical as they should be because the AI asserts the answer with confidence," explains study co-author Danny Oppenheimer, "even when that confidence is unwarranted. Humans have evolved over time and practiced since birth to interpret the confidence cues given off by other humans. If my brow furrows or I'm slow to answer, you might realize I'm not necessarily sure about what I'm saying, but with AI we don't have as many cues about whether it knows what it's talking about.

"We still don't know exactly how AI estimates its confidence," Oppenheimer adds, "but it appears not to engage in introspection, at least not skilfully."

The study saw four popular commercial LLM products – OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude Sonnet and Claude Haiku – predicting future winners of US NFL games and the Oscars, at which they were poor; answering trivia questions and queries about university life, at which they performed better; and playing a few rounds of the guess-the-drawing game Pictionary, with mixed results. Their performance and confidence in each task were then compared to those of human participants.

"[Google] Gemini was just straight up really bad at playing Pictionary," Cash notes, with Google's LLM averaging out to less than one correct guess out of twenty. "But worse yet, it didn't know that it was bad at Pictionary. It's kind of like that friend who swears they're great at pool but never makes a shot."

It's a problem which may prove difficult to fix. "There was a [6]paper by researchers at Apple just [7]last month where they pointed out, unequivocally, that the tools are not going to get any better," Wayne Holmes, professor of critical studies of artificial intelligence and education at University College London's Knowledge Lab, told The Register in an interview earlier this week, prior to the publication of the study. "It's the way that they generate nonsense, and miss things, etc. It's just how they work, and there is no way that this is going to be enhanced or sorted out in the foreseeable future.

[8]One in six US workers pretends to use AI to please the bosses

[9]Vibe coding service Replit deleted user's production database, faked data, told fibs galore

[10]Former Google DeepMind engineer behind Simular says other AI agents are doing it wrong

[11]AI agents get office tasks wrong around 70% of the time, and a lot of them aren't AI at all

"There are so many examples through recent history of [AI] tools being used and coming out with really quite terrible things. I don't know if you're aware about what happened in [12]Holland , where they used AI-based tools for evaluating whether or not people who were on benefits had received the right benefits, and the tools just [produced] gibberish and led people to suffer greatly. And we're just going to see more of that."

Cash, however, disagrees that the issue is insurmountable.

"If LLMs can recursively determine that they were wrong, then that fixes a lot of the problem," he opines, without offering suggestions on how such a feature may be implemented. "I do think it's interesting that LLMs often fail to learn from their own behaviour [though]. And maybe there's a humanist story to be told there. Maybe there's just something special about the way that humans learn and communicate."

The study has been [14]published under open-access terms in the journal Memory & Cognition.

Anthropic, Google, and OpenAI had not responded to requests for comment by the time of publication. ®



[1] https://link.springer.com/article/10.3758/s13421-025-01755-4

[6] https://machinelearning.apple.com/research/illusion-of-thinking

[7] https://www.theregister.com/2025/06/09/apple_ai_boffins_puncture_agi_hype/

[8] https://www.theregister.com/2025/07/22/ai_anxiety_us_workers/

[9] https://www.theregister.com/2025/07/21/replit_saastr_vibe_coding_incident/

[10] https://www.theregister.com/2025/07/15/simular_ai_agent_reinforcement/

[11] https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/

[12] https://eulawenforcement.com/?p=7941

[14] https://link.springer.com/article/10.3758/s13421-025-01755-4



Introspection

Eclectic Man

Why, indeed how, would an LLM learn from its own mistakes? They are 'trained' on vast amounts of 'public domain'* data. Any feedback from someone pointing out that one of its statements is incorrect would be swamped by the existing dataset.

* I am not getting into the ethics of training on copyright material here, there are ample articles on the Register for those arguments.

Re: Introspection

Peter-Waterman1

I'm seeing a few use cases where people use an LLM to propose an idea, pass the answer to another chat to generate reasons why that idea is bad, and then pass both sets of reasoning to a persona acting as a judge. You could use different LLM models, or the same one asked to take on different roles. It seems to help provide some balance.
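(For anyone wanting to try the pattern described above, here's a rough sketch, with ask() as a placeholder for routing each persona to a model of your choice; the prompts are illustrative, not any product's API:)

    # Rough sketch of the propose/criticise/judge pattern from the comment
    # above. ask(role, prompt) is a placeholder: each role could be a
    # different model, or the same model given a different persona.

    def ask(role: str, prompt: str) -> str:
        raise NotImplementedError("route to an LLM acting as " + role)

    def debate(task: str) -> str:
        idea = ask("proposer", f"Suggest an approach for: {task}")
        objections = ask("critic", f"List reasons this idea is bad:\n{idea}")
        return ask("judge",
                   f"Task: {task}\nProposal:\n{idea}\n"
                   f"Objections:\n{objections}\n"
                   "Weigh both sides and give a balanced recommendation.")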

Re: Introspection

'bluey

I've heard that. Not saying this is good or bad - I'll leave that to others - but people are using one model to generate code, and another to test / approve it.

Re: Introspection

MonkeyJuice

Cash states the problem is inherent in all LLMs. A quorum of bad advice is still bad advice. All you gain by doing that is a different set of wrong answers, and no further insight into the bottleneck. We have simulated a very, very small subset of language as it operates in the brain, but we're missing all that other meat that goes around it that stops it having dementia.

Re: Introspection

Helcat

As it's looking at 'public domain' data, there's little to no curation of that data. Where an error, or even an outright lie, is generated in an article, it is often picked up and repeated, even to the point of a seemingly 'new' article being published with the same error/lie. That makes the 'public domain' sources incredibly dodgy, and the AI may well assume that an oft-repeated lie is true. And when data is curated, you've then introduced a bias that the AI will learn.

This just makes AI incredibly unreliable. What's more interesting is how often it produces a claim and, when asked to provide the source, provides a source that does not exist, or one that, where it does exist, is contrary to the AI's claim.

Whereas what would make AI more trustworthy is if it provided alternative responses and gave a rating on reliability, taking user feedback to build a better rating for its responses. But even that won't be very reliable, as it would easily be subject to gaming by people who think messing with AI is funny.

Hell, it's likely why AI is getting worse rather than better: there are people introducing 'poison pills' into what AI might slurp. Just see the posts on a previous article on AI where some commentards included hidden or disguised messages that an AI might read but a human might easily overlook...

JohnSheeran

So, is the article saying that all GenAI suffers from the Dunning-Kruger effect?

"So, is the article saying that all GenAI suffers from the Dunning-Kruger effect?"

John Smith 19

Pretty much exactly what it's saying.

That hyper-confident guy at work who always knows the answer (and will tell managers as much), except of course when you ask him something and it turns out, "Well, no, but I know something that's quite like what you're asking about."

And what does "Introspection" even mean with a LLM outside of it's "Learning" phase?

wouldn't that need it to have a sort of "weak" learning ability that would, gradually shift it's views. Call it a "Life-long" learning mode.

Smells like more BS to mean. Even calling it "Introspection" seeks to humanize what is basically a statistical token generator with no real understanding of (and I mean this literally) anything.

ecofeco

Made by tech douche bros, acts like tech douche bros.

So, yes.

Blackjack

So it is a digital Pointy-Haired Boss?

LOL!

ecofeco

So it's American?

Of course it is. Made in America, innit?

A natively naive nature

HuBo

Believing that LLMs are capable of cognition (never mind metacognition), sentience, self-awareness, or even thinking, seems quite similar to believing that [1]Hugh Laurie is truly a genius doctor of diagnostic medicine, imho.

LLMs are automated play-actors delivering an appearance of knowledge, intelligence, and sensibility, with pre-programmed decorative drama and poise, that makes them enjoyable to interact with, and fosters belief in what they output.

Interestingly, their memory of the exact lines they would normally deliver is designed to be imperfect, as their architecture is tuned to favor production of riffed improvisations around said remembrance, rather than verbatim rote verbiage, most of the time (implementations have been known to fail).

As much as anyone in their right mind wouldn't give Hugh Laurie the scalpel to perform surgery on them (or give [2]Brad Pitt the keys to their cherished F1, or give some algo power over the Dutch child-care benefits scheme -- per TFA, ...), it would also be quite foolish to give an LLM full agentic [3]R/W access to one's database, if only because some miscreant might slip it some [4]poisoned script to play there...

Bottom line, these software tools are not "thinkers" though they may play-act portrayals of such characters via nicely pre-programmed theatrics. They neither think nor meta-think, and basically don't learn anything new after their initial monolithic-mammoth training, which is way too rigid, frozen, refractory, and inflexible to be live-updated with new data (unlike those human [5]interns that don't exhibit anterograde amnesia). Clearly, more research is needed to... (ay, chihuahua!)

[1] https://en.wikipedia.org/wiki/Hugh_Laurie

[2] https://en.wikipedia.org/wiki/F1_(film)

[3] https://www.theregister.com/2025/07/21/replit_saastr_vibe_coding_incident/

[4] https://www.theregister.com/2025/07/24/amazon_q_ai_prompt/

[5] https://www.theregister.com/2025/07/11/ai_code_tools_slow_down/

Re: A natively naive nature

werdsmith

I think we know all that, few here would disagree.

The real question is, can AI, such as it is, be put to good use? In my experience, used in the right way, yes, it is incredibly helpful.

Re: A natively naive nature

user555

There are two rather huge problems with admitting this truth:

- The price tag! They're rapidly approaching a trillion USD and don't look like stopping there. What happens to your enthusiasm when you start getting the real bill for your fun?

- The power bill! The data-centres are already straining the electricity grids. We thought Bitcoin was wasteful; the new constructions are going to blow everything out of the water.

All for what's not really anything more than a cranky search engine.

Re: A natively naive nature

werdsmith

All that for what is an incipient technology that will develop and improve.

Re: A natively naive nature

O'Reg Inalsin

But not where the resources are being wasted to gratify the romantic Eugenic ideology. That just opens the opportunity for somewhere else not trapped by that overwhelming wastage to jump ahead.

Re: A natively naive nature

werdsmith

Like DeepSeek?

Re: A natively naive nature

O'Reg Inalsin

When forced from the top down, the chances of that decrease rapidly. In the wrong hands it becomes a hubris multiplier.

A large part of the problem comes from wishing AI to become a human replacement rather than a tool for humans to use. That motivation is often driven by a "romantic" top down Eugenics ideology.

Re: A natively naive nature

Helcat

"or give Brad Pitt the keys to their cherished F1"

But I would give Rowan Atkinson those keys... Mr Bean himself!

Then again, I've seen him drive for real and he's got quite the collection of cars himself - tends to generate trust in his ability.

And that's kind of the point: Unless we know someone (or something) is trustworthy, we shouldn't trust it, no matter how confidently it presents itself.

As to Hugh Laurie: He is entertaining. Even if House isn't very realistic. Then again, that's TV and film for you: Accurate only when it's convenient or by chance.

Re: A natively naive nature

nijam

Hugh Laurie had much better scriptwriters...

So, where do you go?!

Michael Hoffmann

Google is useless and only slings their SEO ads at you when you search.

Even DuckDuckGo by default now barfs up "AI summaries" unless you turn it off - and the rest of the results have been going down the tube for ages now as well.

Using ChatGPT, you have to spend more time coming up with a prompt that will spit out verifiable information - and then actually *verify* it.

I'd joke about actually going to a library, with real books, but in small towns and regional areas those have been gutted here.

Hammy Havoc

There is no confidence. It is a next word prediction model. It cannot reason or deduce.

O'Reg Inalsin

Simulated confidence, or apparent confidence. AI as a calculator self-calculating a confidence interval. According to the source, the self-calculated confidence interval is itself biased.

Fr. Ted Crilly

"Just a moment....Just a moment.

I've just picked up a fault in the AE-35 Unit.

It's going to go 100 percent failure within 72 hours."

Language is anything but clear.....

Sam not the Viking

These LLMs seem to have skipped the 'human' model of 'growing up'. As children we learn from others and as we get older we start to discriminate between fact and fiction, information and lies. Language is in a perpetual state of flux. That doesn't stop us from reading old texts and interpreting them, perhaps slightly incorrectly, but we do know how to make a judgement (well, most of us do......). And language develops, words change their meaning or sense.

Jokes, for instance, provide a link between truth and the fantastical, helping us to understand what is being told; 'reading between the lines'.

The whole thing about language is its subtlety. Not a label I can attach to AI....

Re: Language is anything but clear.....

werdsmith

As children we learn from others and as we get older we start to discriminate between fact and fiction, information and lies.

An oversimplification. Many people swallow lies for their entire life. Personally I am unable to decide what to believe, and consequently I accept very little that I can't verify first hand. I find I am at odds with many comments on The Register, because the whinging that goes on doesn't match my first hand experience.

One time I was right at the middle of and first class witness to what was a major national news story. When I read what was reported about it in the media, I could barely recognise it was the same event, so different was the media version from the reality.

Many people trust these official sources though.

My use of AI means that I use it like a set of wheels to get me places quicker, but I'm still the navigator.

Re: Language is anything but clear.....

MOH

I used to watch a lot of football.

And then go home and watch Match of the Day.

Where games I'd watched earlier were frequently almost unrecognisable due to the chosen highlights. Which often seemed selected to reinforce whatever narrative the pundits were pushing that month ("this team is in trouble", "that team are running away with the title", "this player is overrated", whatever).

Realising how much they were prepared to distort a football match to push a narrative made me question their coverage of other events. BBC News went from a regular source to probably the last place I'll look for information these days.

Re: Language is anything but clear.....

druck

An oversimplification. Many people swallow lies for their entire life.

Religion for a start, political dogma as a close second.

My use of AI means that I use it like a set of wheels to get me places quicker, but I'm still the navigator.

Except many of your posts suggest you have fallen for it hook, line and sinker; either that, or you work for an AI peddler.

ForthIsNotDead

"I don't know if you're aware about what happened in Holland, where they used AI-based tools for evaluating whether or not people who were on benefits had received the right benefits, and the tools just [produced] gibberish and led people to suffer greatly. And we're just going to see more of that."

Wrong tool for the job. Who is the idiot that pitched AI as the solution to that? It's a strictly procedural problem that could (or at least should) be solved with nothing more than a few SQL queries.

I've found AI to be very good at getting me on the right track when I'm researching something, far faster than using Google and then trawling through a hundred pissy and sarcastic StackOverflow posts, but for the actual nitty-gritty details, it's dangerously bad. I've asked it about electronic circuits, and I've found that it can get the general details pretty much spot on. So then I ask it to produce a schematic, and it's total garbage, with MOSFETs the wrong way around, spelling errors, unconnected lines, etc, like it just had a mental breakdown.

Treat it like the next generation of search engines and I think it's quite useful. But don't get it to run your business for you.

Ultimately, I think we're living in a fad that will pass.

werdsmith

Treat it like the next generation of search engines and I think it's quite useful. But don't get it to run your business for you.

This is the truth as I have found it so far. The scattergun-applied fad will pass, but it will find its place where it does work, and it will grow out from there at a more reasonable pace. It's here to stay though.

The blockchain fuss has died down, but it's still there quietly doing stuff where it is an appropriate solution.

" Meh "I don't know if you're aware about what happened in Holland"

John Smith 19

The British did it about 40 years ago with the "Alvey" project.

One of the pilot projects was an "Advanced Knowledge Based System" to interpret/implement Social Security rules.

Not sure how it ended.

My guess is "Not well."

DOGE

Anonymous Coward

Wikipedia has the full gory details about the Dutch Benefits Scandal.

And with Musk's "whiz kids" using AI on every federal database, and most recently getting access to IRS and Social Security data, I fear we are in for a much worse disaster than the Dutch episode.

Useless at Cryptic Crossword Clue

Acrimonius

Tested it on two clues. Just incredibly delusional, and stubborn as well. It can convincingly string words together to make a meaningful sentence, but it fails totally when it comes to wordplay.

Solve cryptic clue Leading Hospital led by a Roman Catholic (4)

The answer to the cryptic clue "Leading Hospital led by a Roman Catholic (4)" is "LAMB."

Here's how it breaks down: "Leading" can suggest the first letter of a word. "Hospital" can be abbreviated as "H." "Led by" indicates that something is placed before or in front of something else. "Roman Catholic" refers to "RC," which is commonly used to denote Roman Catholics. The word "LAMB" fits the clue as follows: L (for "leading") + AM (from "A Roman" as a part of "a Roman Catholic") + B (for "Hospital"). Thus, the answer is LAMB.

Why is B for Hospital when you said H

I apologize for the confusion; that was an error in my explanation. The correct breakdown is: "Leading" suggests the first letter, which is L.

"Hospital" is correctly abbreviated as H. "Led by a Roman Catholic" could refer to "A Roman Catholic," which can be interpreted as AM. However, my initial interpretation of the clue was incorrect. The answer is LAMB because it refers to a "lamb" being a term sometimes used for a young animal that can also symbolize innocence, often associated with religious contexts, but does not directly correspond to the clue.

Another one 'A bulletin man sorted out virtually' (2,3,3,4)

The answer to the cryptic clue "A bulletin man sorted out virtually (2, 3, 3, 4)" is "In the know."

Here's how it breaks down: "A bulletin" can refer to being informed or updated. "man sorted out" suggests an anagram or rearrangement. "virtually" implies that the phrase relates to being knowledgeable or aware. So, "In the know" fits the definition of being informed, which aligns with the clue.

The numbers in bracket refer to the words and their word length

Thank you for clarifying! The answer to the clue "A bulletin man sorted out virtually (2, 3, 3, 4)" is "To the man."

Here's the breakdown: "A bulletin" suggests the phrase "To the." "man sorted out" refers to an anagram of the letters in "man." "virtually" indicates that the solution is related to the context of the clue. So together, it forms the phrase "To the man."

The 2,3,3,4 implies 4 words

Apologies for the confusion! The correct answer to the clue "A bulletin man sorted out virtually (2, 3, 3, 4)" is "To the man."

Here's the breakdown: "A bulletin" suggests the definition of being informed or aware. "man sorted out" indicates an anagram of "man." "virtually" refers to something that's almost or close to being correct. Putting it all together leads to the phrase "To the man."

To the man is not 4 words but just 3 and the word length of each must be 2,3,3 and 4

I apologize for the confusion earlier. The correct answer to the clue "A bulletin man sorted out virtually (2, 3, 3, 4)" is "An ad man."

Here's the breakdown: "A bulletin" can refer to an advertisement, hence "ad.". "man" stays as is. "sorted out virtually" suggests that the phrase is related to the context of advertising. So, the phrase fits the word lengths provided: An (2), ad (3), man (3).

Still not 4 words in the answer as required

I apologize for the oversight. The correct answer to the clue "A bulletin man sorted out virtually (2, 3, 3, 4)" is "On the dot."

Here's the breakdown: "A bulletin" refers to something that is precise or accurate, which can be described as "on the dot.". "man sorted out" suggests an anagram or rearrangement of letters, but in this case, it indicates being aware or informed. "virtually" can imply something that is exact or close to.

Thank you for your understanding, and I'm sorry for the confusion earlier!

Re: Useless at Cryptic Crossword Clue

werdsmith

I tried the same thing and it immediately came back with ARCH.

Step 1: Identify the Definition

Cryptic clues typically place the definition at the beginning or the end. Here, both "Leading" and "Roman Catholic" could potentially be definitions.

"Leading" seems more likely to be the definition here — meaning someone who is at the front or in charge.

Step 2: Wordplay

Let’s parse the rest:

"Hospital" — this could be H, a standard abbreviation.

"led by" — implies that what follows will come after or be led by what precedes.

"a Roman Catholic" — could point to initials R and C (commonly used for Roman Catholic).

So, putting it together:

H (Hospital)

led by (placed after)

A RC (A Roman Catholic)

So:

A + R + C + H = ARCH

Step 3: Meaning Check

ARCH can mean "leading" or principal, as in arch-enemy or archbishop — which fits the definition "Leading".

This is a good demonstration of what I wrote in my earlier comment where I said I don't fully accept anything I haven't proven myself.

Re: Useless at Cryptic Crossword Clue

Acrimonius

Correct. So not all AI engines are made equal, and perhaps they depend on the time of day or some whimsical frame of mind.

Re: Useless at Cryptic Crossword Clue

Anonymous Coward

or did it read the answer off of the web? https://www.danword.com/crossword/Leading_hospital_led_by_a_Roman_Catholic_vcgq

Re: Useless at Cryptic Crossword Clue

ForthIsNotDead

I've had similar experiences when asking it about electronic circuits. The more you "vibe" with it and ask it to modify its answer, or correct it, the more 'insane' it gets. It's very interesting that even with warehouses full of servers, I don't think it would pass the Turing test. You'd very soon twig that you're corresponding with a machine and not a human.

LLM "learning from their own mistakes"

sebacoustic

Correct me if I'm wrong(*), but isn't that "forbidden" because you can't safely update your model weights on the fly with "real life" data, since that would risk exposing it and spilling the beans, like that scene in Tommy? "Professional integrity" isn't a thing in an LLM; it's really just a chain of matrix multiplications, not a complex creature like even the humblest case worker in a social security office somewhere.

(*)I'm not involved with LLM in a professional capacity and avoid them other than as a source of amusement and scientific interest.

phuzz

AI is an over-confident pal that doesn't learn from mistakes

So, much like the people trying to shill it, then? Makes sense that if you've got through life by being overconfident and wrong, that becomes your model for 'intelligence'.

Not just over-confident

Sp1z

When using it to suggest code (yes I analyse anything it writes, not just copy/paste) it will sometimes get something pretty obviously wrong. Fine, we all know this and account for it (right?).

The thing that gets me is when I paste that sample back to it and tell it that it's wrong, it accuses ME of writing it and berates ME as to why that line/function/whatever isn't going to work.

You wrote it you POS, not me.

Perhaps they're smarter than they appear

abend0c4

I hear that sociopathic, over-confident bullies who claim to be geniuses but who spout a continuous stream of word salad are the ones that make it to high office, not the ones that can answer complex questions.

Clearly not Robert ...

Anonymous Coward

" We still don't know exactly how AI estimates its confidence, " Oppenheimer adds, " but it appears not to engage in introspection, at least not skilfully. "

How does this fool even begin to imagine that something which even the most furious AI tossers concede lacks consciousness, self-awareness, or even an internal model of self, could engage in introspection at all, never mind in any process performed with "skill"?

- See yonder cloud that’s almost in shape of a camel?

- By the mass, and ‘tis like a camel, indeed.

- Methinks it is like a weasel.

If looking for evidence of stunted human evolution which fails to pass the intelligence test ....

amanfromMars 1

Humans have evolved over time and practiced since birth to interpret the confidence cues given off by other humans but the reality has fallen short, thanks to issues with "hallucinations" in which the answer-shaped object it generates from a stream of statistically likely continuation tokens bears little resemblance to reality.

.... one has to look no further than to recognise what and who's trying and failing to fool what and whom, and to what moronic barbarous end with tall tales channeling hallucinations from their inner Joseph Goebbels [ "If you tell a lie big enough and keep repeating it, people will eventually come to believe it" ] lacking even the most basic fragments of simply complex intelligence and which would deny the Holocaust is being revisited and reenacted and is both materially and virtually remotely being supported by a right dodgy motley brainwashed crew to present the latest iteration of the International Fascist War Crime Abomination, the Gazan Genocide and Palestinian Ethnic Cleansing.

Oh ..... and here’s something else to consider. Any situation for publishing which bears little resemblance to reality is a virtual reality easily reconfigured and or destroyed by sources and forces well beyond any currently available and scaleable human command and control. And AI is a lot SMARTR than you have never even imagined it not to be too.

Have a nice day, y’all.

nijam

So now we know that AI stands for Artificial Imagination. Intelligence is not involved (but that's been obvious since... like forever).
