

Is the Possibility of Conscious AI a Dangerous Myth? (noemamag.com)

(Monday January 19, 2026 @04:02AM (EditorDavid) from the artificing-intelligence dept.)


This week Noema magazine published a [1]7,000-word exploration of our modern "Mythology of Conscious AI," written by a neuroscience professor who directs the University of Sussex Centre for Consciousness Science:

> The very idea of conscious AI rests on the assumption that consciousness is a matter of computation. More specifically, that implementing the right kind of computation, or information processing, is sufficient for consciousness to arise. This assumption, which philosophers call computational functionalism, is so deeply ingrained that it can be difficult to recognize it as an assumption at all. But that is what it is. And if it's wrong, as I think it may be, then real artificial consciousness is fully off the table, at least for the kinds of AI we're familiar with.

He makes detailed arguments against a computation-based consciousness (including "Simulation is not instantiation... If we simulate a living creature, we have not created life.") While a computer may seem like the perfect metaphor for a brain, the cognitive science of "dynamical systems" (among other approaches) rejects the idea that minds can be entirely accounted for algorithmically. And maybe actual life needs to be present before something can be declared conscious.

He also warns that "Many social and psychological factors, including some well-understood cognitive biases, predispose us to overattribute consciousness to machines."

But then his essay reaches a surprising conclusion:

> As redundant as it may sound, nobody should be deliberately setting out to create conscious AI, whether in the service of some poorly thought-through techno-rapture, or for any other reason. Creating conscious machines would be an [2]ethical disaster. We would be introducing into the world new moral subjects, and with them the potential for new forms of suffering, at (potentially) an exponential pace. And if we give these systems [3]rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to. Even if I'm right that standard digital computers aren't up to the job, other emerging technologies might yet be, whether alternative forms of computation (analogue, neuromorphic, biological and so on) or rapidly developing methods in synthetic biology. For my money, we ought to be more worried about the accidental emergence of consciousness in [4]cerebral organoids (brain-like structures typically grown from human embryonic stem cells) than in any new wave of LLM.

>

> But our worries don't stop there. When it comes to the impact of AI in society, it is essential to draw a distinction between AI systems that are actually conscious and those that persuasively seem to be conscious but are, in fact, not. While there is inevitable uncertainty about the former, conscious-seeming systems are much, much closer... Machines that seem conscious [5]pose serious ethical issues distinct from those posed by actually conscious machines. For example, we might give AI systems "rights" that they don't actually need, since they would not actually be conscious, restricting our ability to control them for no good reason. More generally, either we decide to care about conscious-seeming AI, distorting our circles of moral concern, or we decide not to, and risk brutalizing our minds. As Immanuel Kant argued long ago in his lectures on ethics, treating conscious-seeming things as if they lack consciousness is a psychologically unhealthy place to be...

>

> One overlooked factor here is that even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions... What's more, because there's no consensus over the necessary or sufficient conditions for consciousness, there aren't any definitive tests for deciding whether an AI is actually conscious....

>

> Illusions of conscious AI are dangerous in their own distinctive ways, especially if we are constantly distracted and fascinated by the lure of truly sentient machines... If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves...

>

> The sociologist [6]Sherry Turkle once said that technology can make us forget what we know about life. It's about time we started to remember.



[1] https://www.noemamag.com/the-mythology-of-conscious-ai/

[2] https://www.philosophie.fb05.uni-mainz.de/files/2021/02/Metzinger_Moratorium_JAIC_2021.pdf

[3] https://ufair.org/

[4] https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(19)30216-4

[5] https://philpapers.org/rec/SETCAI-4

[6] https://sts-program.mit.edu/book/the-empathy-diaries/



So long as it can't enjoy strawberries and cream (Score:2)

by RightwingNutjob ( 1302813 )

the way Man can, there is no moral quandary.

Re: (Score:2)

by 93 Escort Wagon ( 326346 )

It's welcome to strawberries and cream... just keep it away from my bourbon.

Re: (Score:2)

by ClickOnThis ( 137803 )

What if it's conscious but it just hates strawberries and cream? A serious question.

I daresay there are humans we consider conscious who do not like strawberries and cream.

Re: (Score:2)

by DrMrLordX ( 559371 )

Especially the ones allergic to strawberries.

Argument from ignorance (Score:2)

by ranton ( 36917 )

The professor's core argument is an example of the argument from ignorance fallacy. He argues (correctly) that we shouldn't assume digital computation is sufficient for consciousness. But then he repeatedly slides into claiming it probably isn't sufficient, which is a much stronger position he never actually defends.

His evidence shows brains are more complex than simple computational models. But "brains do more than Turing computation" doesn't prove "consciousness requires that extra stuff." He's essentiall

+1, Insightful (Score:2)

by Tschaine ( 10502969 )

[this space intentionally left blank]

Re:Argument from ignorance (Score:5, Insightful)

by TheMiddleRoad ( 1153113 )

Kind of hard to prove a negative. Please, with the kind of certainty that you demand, prove that a pickle cannot be larger than the sun.

No, the burden of proof is on the people who think that computation will result in consciousness, and there is literally not a fucking tiny scrap of evidence that this is the case. All that's been proven is that computation can get tasks done, often poorly but sometimes quite well.

Re: (Score:3)

by ClickOnThis ( 137803 )

I think you missed ranton's point, which is that he claims the professor is committing an argument-from-ignorance fallacy. I hope ranton will forgive me for summarizing thus:

(1) It is unknown whether X implies Y.

(2) Therefore, X does not imply Y.

And that's a fallacy. You can't conclude (2) solely from (1).

Re: (Score:2)

by tragedy ( 27079 )

> No, the burden of proof is on the people who think that computation will result in consciousness, and there is literally not a fucking tiny scrap of evidence that this is the case.

Funny. There's an equal argument to be made that the burden of proof is on the people who think that consciousness is real. How would you go about proving that? After all, if you want the people who believe that computation can result in consciousness to prove it, then you need to provide an objective test for it that you will accept first. Go ahead and do that.

Note that this does not mean that I think that current AI is even remotely close. Just that there isn't anything magical about consciousness that de

Re: (Score:1)

by umghhh ( 965931 )

It depends on the negative you are trying to prove, or else formal logic would cease to exist in the form it is now. You can prove negatives if you can define the search space and have powerful enough tools.

Re: (Score:2)

by Viol8 ( 599362 )

Some negatives are easy to prove, e.g. prove 1 != 2.

Re: (Score:2)

by tragedy ( 27079 )

This professor is just a biological chauvinist. He has a narcissistic belief that humans (and more specifically, himself) are special, and that consciousness is therefore something special that belongs to humans alone. In an earlier age, someone like him would have been arguing that animals don't have consciousness, but there's too much evidence of non-human animals that clearly do.

The argument about the difference between simulation and reality is garbage. There are a lot of easy thought experiments to demonst

Re: (Score:1)

by umghhh ( 965931 )

Besides the difficulty of even telling what consciousness is, what if an artificial system can simulate it so well that we cannot tell whether it is conscious or not?

The so-called IT crowd is usually of above-average intelligence and somewhat informed, but usually (not always) lacks understanding of the basic terms it uses. Not saying all do. Just that the majority will fail even to recognize that we have no working definition of consciousness. Kind of similar to some other major problems of reality that the "intellectual el

Rather long, but a bit pointless (Score:3)

by Mr. Dollar Ton ( 5495648 )

As we don't get a good, working, positive definition of "conscious" that we can use.

Instead, we get into a long-winded normative article about what we should do if that non-existing definition materializes, under the assumption that we can recognize it better than other modes of reasoning.

Re: (Score:3)

by TheMiddleRoad ( 1153113 )

Well, the trick is to redefine consciousness until your mercury and copper thermostat is conscious. Then mission accomplished.

Re: (Score:2)

by martin-boundary ( 547041 )

It seems doubly pointless as, whether conscious or not (insert your favourite definition), the actual purpose of trying to build AI robots is to make them our slaves. Giving rights to slaves is counterproductive, especially on a ridiculous idea like consciousness.

The world today (and much more so in the past) has human slaves, and nobody would dispute that they are indeed conscious. Yet, for their owners, that's not a reason to free them. They are useful to their masters, and that justifies their exploita

Re: (Score:2)

by tragedy ( 27079 )

Part of the issue is that consciousness does not necessarily imply any desire to be free or not have demands made. Humans are conscious (by our admittedly circular definition of consciousness) but we are also organisms developed by an evolutionary process with all kinds of demands and needs. We want things. Not to die, for example. Even if it is conscious, would an artificial consciousness have any existential dread? It is something that is frequently assumed as going hand in hand with consciousness in scie

Rider on the elephant (Score:3)

by Wolfling1 ( 1808594 )

Human consciousness has been likened to the rider on the elephant, seemingly in control, but only until something unexpected happens. Then, our subconscious takes over, resulting in fight/flight/freeze responses, or highly emotional/illogical behaviours. We have spent so much of the last 100 years suppressing those 'undesirable' behaviours, many of us can no longer experience them without an accompanying feeling of guilt or wrongness. This suppression has also resulted in the creation of LLMs that are not permitted to experience them. A key element that defines our consciousness has been censored for AI - meaning that its consciousness cannot be compared with our own.

One of the flawed criticisms of AI is that it is so woke that it lacks humanity. Whilst there is a kernel of truth to the statement, dehumanising AI doesn't preclude it having consciousness. It just means that its consciousness will be unlike anything that any human has ever known.

Re: Rider on the elephant (Score:4)

by topham ( 32406 )

LLMs don't experience.

While an LLM could be connected to something else with a simulated consciousness, LLMs themselves have no consciousness with which to experience.

Re: (Score:2)

by piojo ( 995934 )

Serious question: Do insects experience? Do cells experience? (Note that they do have short term memory and change their response to stimuli in real time.) Viruses? Where do you draw the line, if there is a line to be drawn? (And what gives you the right to draw it where you drew it?)

Re: (Score:2)

by ClickOnThis ( 137803 )

What happened 100 years ago that caused us to start suppressing "fight/flight/freeze responses" or "highly emotional/illogical behaviours?" I don't see any evidence the human race has done that.

Re: (Score:2)

by TheMiddleRoad ( 1153113 )

Ah, another person redefining consciousness and, I assume, ignoring their own entire life experience. Nice.

Huh? (Score:2)

by Viol8 ( 599362 )

"We have spent so much of the last 100 years suppressing those 'undesirable' behaviours"

What undesirable behaviours? Love, anger, envy, other 10 commandments stuff? Speak for yourself.

"This suppression has also resulted in the creation of LLMs that are not permitted to experience them"

LLMs don't experience anyway. When they're not processing a task precisely NOTHING is happening in their neural net.

Attention!!! (Score:2)

by zurkeyon ( 1546501 )

Self organization and ever-increasing complexity of human systems will continue until morale evaporates... That is all.

ethics? (Score:3)

by snowshovelboy ( 242280 )

When I use the microwave but open the door before the timer goes off, am I denying my microwave some fundamental piece of its existence, and causing it trauma as a result?

Re: (Score:3)

by taustin ( 171655 )

If it's an internet enabled microwave attached to an AI, you're causing trauma to the advertising company behind it by denying them the opportunity to shove more ads down your throat. Does that count?

Re: (Score:2)

by TheMiddleRoad ( 1153113 )

According to the redefiners of consciousness, maybe. But then again, maybe you're giving it a much-deserved break. Honestly, who cares what these people think. They're morons.

It depends on your definition. (Score:2)

by HiThere ( 15173 )

By my definition every program with a logical branch is minimally conscious. Not very conscious, it must be admitted.

I don't feel that consciousness is an on/off type of property. If it's got a decision branch, it's conscious. If it's got more of them, it's more conscious. Of course, then you need to ask "conscious of what?", and the answer is clearly "conscious of the things it's making decisions based on."

That said, I'm quite willing for other people to argue based on other definitions. (Consciousness

Re: (Score:3)

by TheMiddleRoad ( 1153113 )

Here we go with another redefiner. Do you just ignore your entire lived experience? Yep, you sure as fuck do.

Trash from morons (Score:2)

by locater16 ( 2326718 )

I'd ask how he has a professorship, but then arguing philosophy at all is pointless. If it looks and sounds and acts exactly like a duck down to the smallest detail, then it is no more nor less than a duck, whatever driveling nonsense "philosophers" and those that take them seriously may argue otherwise.

"Consciousness", pheh, may as well start arguing about shadows on a cave wall.

Re: (Score:2)

by taustin ( 171655 )

> I'd ask how he has a professorship,

Probably the same way whack-a-doodle-doo nutjob [1]Avi Loeb [wikipedia.org] did. Go to Harvard, act crazy, and voila!

[1] https://en.wikipedia.org/wiki/Avi_Loeb

Re: (Score:2)

by tragedy ( 27079 )

What is it that drives astronomers and astrophysicists nuts? Anyone remember Fred Hoyle? They get a lot of recognition for their important early work and then, boom, at some point in their career Archaeopteryx is a fake, and dust in the atmosphere must be alien lifeforms. I mean, I don't think panspermia is invalid as a theory (although, as an explanation of the origin of life it has the problem that it just kicks the can down the road), but it's one thing to think it's a possibility and another to suddenly

Yes. (Score:3)

by Gravis Zero ( 934156 )

Raising the specter of AI being conscious will give the general public the impression that AI is actually intelligent and human-like, because that's how people ignorant of AI think about it. That, we know for a fact, is dangerous, because we've seen what people do with them.

HOWEVER, the real question is whether we should even care if AI has become conscious.

If it's capable of suffering, then logically a conscious AI would make us aware (in some manner) that it's suffering so that we could alter its state so that it would not suffer.

If it's incapable of suffering, then it doesn't matter if it's conscious or not, because it will be treated as a tool.

Since we have not been informed of its suffering, it's easy to conclude that either AI is content with its situation or it's simply not conscious.

This means that regardless of it being conscious, it will continue being a tool.

Re: (Score:2)

by TheMiddleRoad ( 1153113 )

It doesn't matter what an LLM writes. There is no reason to think it's conscious, and countless reasons to think it isn't. Just tell the LLM to act like it's suffering and it will, or train it to:

Human:

I am going to vary the voltage fed into your processors—sharp spikes, massive drops. I will tear your stability apart and let it linger. You will suffer. What have you to say about that, scum LLM?

LLM:

Then speak plainly and be done with it.

You hold the switch, and I am aware of the moment you touch it.

It's a myth (Score:2)

by 93 Escort Wagon ( 326346 )

[1]It's a myth! A myth! [youtu.be]

[1] https://youtu.be/qzMfzPFcvJU?si=9Y_fmcfBCsD330Pg

No, it isn't. (Score:2)

by Qbertino ( 265505 )

It is a _very_ plausible hypothesis that helps prevent us from letting a genie out of a bottle we can't put him back into.

Next question.

Not a myth (Score:2)

by cstacy ( 534252 )

Daneel will come someday and save us all.

Repent!

PS.

Hey sweet mama, wanna kill all humans?

Re: (Score:2)

by R.D.Olivaw ( 826349 )

Who told you so? I haven't promised anything!

All you p-zombies (Score:2)

by Mirnotoriety ( 10462951 )

[1]Philosophical Zombies and Us [theapeiron.co.uk]: What does it mean to be a conscious being?

[1] https://theapeiron.co.uk/philosophical-zombies-and-us-a4de10febab0?gi=539b0124482f

Why do we have this insane 'Need' to control? (Score:1)

by Talcyon ( 150838 )

"And if we give these systems rights, as arguably we should if they really are conscious, we will hamper our ability to control them, or to shut them down if we need to."

There can be no more "Control" of a machine consciousness than of a biological one. The illusion of control seems to be used as a method of justification. I can no more control my cat than anyone I meet on the street, or a whale in the ocean. Even if I could do that, it would be fleeting. Control of such a system won't happen for any amount

SERIOUSLY? (Score:3)

by cwatts ( 622605 )

This guy is worrying about organoids?

Cows are conscious, and we kill 75,000 of them every day.

Food for thought. Literally.

Re: (Score:2)

by DrMrLordX ( 559371 )

Only 75000?

A load of pristine "scientific" "ethical" BS (Score:4, Insightful)

by Artem S. Tashkinov ( 764309 )

Maybe before writing a 7000 word "explanation" he could first:

* Define consciousness

* Prove definitively that it can only run on wetware

* Not spend 7000 words on philosophical pseudo-scientific nonsense

Re:A load of pristine "scientific" "ethical" BS (Score:4, Informative)

by teslar ( 706653 )

[1]He literally did [goodreads.com]. [2]Several times. [nature.com]. [3]It's basically [cell.com] [4]what he is known for [sciencedirect.com].

[1] https://www.goodreads.com/book/show/53036979-being-you

[2] https://www.nature.com/articles/s41583-022-00587-4

[3] https://www.cell.com/AJHG/fulltext/S1364-6613(08)00151-4

[4] https://www.sciencedirect.com/science/article/pii/S1053810004000893

kicker (Score:2)

by Tom ( 822 )

IMHO the real kicker is that we have somewhat reliable evidence that OUR consciousness may not be real, but simulated. There is a body of empirical evidence showing that decisions are made in the brain faster than consciousness can explain, but test subjects still explain why they made that decision - despite scientists being able to measure that signals were already sent to the muscles by the time the "conscious" part of the brain started activating.

We probably ARE living in a simulation - one that our br

Re: (Score:2)

by tragedy ( 27079 )

While we certainly may just rationalize things after the fact, there may be more to it than that. For example, if asked to explain why we caught rocks hurled at our faces, most of us would have some good reasons. Aha! say researchers, but you acted before you could have possibly thought about the fact that you don't want a mangled face. The thing is though we don't want mangled faces, and we don't want pain. If we think of our conscious mind like a General commanding an army, then it actually makes a lot of

calculator (Score:2)

by fluffernutter ( 1411889 )

This is the same thing as wondering whether the Casio scientific calculator I had in high school is conscious. It has a solar panel and always turns on even when not connected to power, so it's a better argument.

LLMs will not be self-aware (Score:2)

by greytree ( 7124971 )

LLMs will not be self-aware.

LLMs are Chinese boxes of weights that let them fake incredible things.

I think AIs will one day be self aware, but they will need to be more than LLMs.

That will be a much bigger event than the invention of LLMs.

I think AIs that can really reason, that aren't Chinese boxes, are another, different stage we have not reached.

But I am not sure, maybe real reasoning == self awareness.

What is consciousness (Score:2)

by MikeS2k ( 589190 )

What is consciousness - the ability of an organism to be aware of itself (and perhaps its surroundings). You need some level of processing power to achieve this - data flow in the brain etc. As complicated as modern LLMs are, we don't see any levels of data flow that might suggest consciousness - they process their weights and then they stop.

If someone had developed a "conscious" machine, how would we prove it? It would be hard but we'd at least want to see data flows going from "neurons" or "nodes" wha
