

Microsoft AI Chief Says Only Biological Beings Can Be Conscious (cnbc.com)

(Monday November 03, 2025 @11:41AM (msmash) from the what-a-relief dept.)


Microsoft AI chief Mustafa Suleyman says [1]only biological beings are capable of consciousness, and that developers and researchers should stop pursuing projects that suggest otherwise. From a report:

> "I don't think that is work that people should be doing," Suleyman told CNBC in an interview this week at the AfroTech Conference in Houston, where he was among the keynote speakers. "If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."

>

> Suleyman, Microsoft's top executive working on artificial intelligence, has been one of the leading voices in the rapidly emerging field to speak out against the prospect of seemingly conscious AI, or AI services that can convince humans they're capable of suffering.



[1] https://www.cnbc.com/2025/11/02/microsoft-ai-chief-mustafa-suleyman-only-biological-beings-can-be-conscious.html



I donno... (Score:4, Funny)

by TWX ( 665546 )

I've met plenty of biological beings that didn't seem to be particularly conscious. Particularly when driving.

Re: (Score:2, Insightful)

by 0123456 ( 636235 )

He didn't say that all biological beings are conscious, but that only biological beings can be conscious.

Which seems pretty clear since machines are just following a program. An LLM can't suddenly decide to do something else which isn't programmed into it.

Re: (Score:2)

by Lord Kano ( 13027 )

> An LLM can't suddenly decide to do something else which isn't programmed into it.

Can we?

It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.

Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.

LK

Re: (Score:2)

by AleRunner ( 4556245 )

> It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.

As long as you program it to do things it wasn't explicitly programmed to do and then let it run "free", that's already almost trivial, and it has been achieved even with things like expert systems that we more or less fully understand. Most LLMs include sources of randomness that have only limited constraints, so they can already come up with things that are beyond what's in their learned "database" of knowledge. Sometimes it's even right, though mostly it's just craziness. That doesn't make it unoriginal.
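A minimal sketch of the sampling randomness described above, assuming a typical temperature-based decoder; the toy vocabulary, logits, and temperature values are invented for illustration and are not from any real model.

```python
import numpy as np

# Toy next-token distribution over a made-up five-word vocabulary.
vocab = ["the", "cat", "flies", "quantum", "sandwich"]
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.5])

def sample_token(logits, temperature, rng):
    """Sample one token index; higher temperature flattens the distribution,
    making low-probability ('surprising') tokens more likely."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, shifted for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
print([vocab[sample_token(logits, 0.2, rng)] for _ in range(5)])  # low temperature: almost always "the"
print([vocab[sample_token(logits, 2.0, rng)] for _ in range(5)])  # high temperature: occasional oddballs
```

At high temperature the sampler regularly emits continuations that were unlikely under the learned distribution, which is the "beyond the database, sometimes craziness" behaviour the comment refers to.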

> Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.

> LK

Don't agree.

Re: (Score:3)

by AleRunner ( 4556245 )

> Which seems pretty clear since machines are just following a program. An LLM can't suddenly decide to do something else which isn't programmed into it.

Yes it could. You put in a random number generator, have it generate random code of random length, and you run that code with full privileges. It's that simple.

There are a bunch of optimizations: you could actually check that the code is a valid program, you could aim for some effect and use genetic algorithms, or you might base it on code that already exists. If you've got an LLM, that mostly already has some form of randomness and can generate code based on random prompts. The principle is the same.
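A rough sketch of that idea, under the assumption that "valid" just means "compiles": generate random snippets and keep the syntactically valid ones. Unlike the parent's suggestion, nothing here is ever executed, let alone with full privileges; the token list is invented for illustration.

```python
import random

# Toy pool of line fragments to assemble random "programs" from.
TOKENS = ["x = 1", "x = x + 1", "print(x)", "for i in range(3):",
          "    x += i", "if x > 2:", "    print('big')"]

def random_program(rng, max_lines=6):
    """Stitch together a random number of random lines."""
    return "\n".join(rng.choice(TOKENS) for _ in range(rng.randint(1, max_lines)))

def is_valid(src):
    """Keep only snippets that are at least syntactically valid Python."""
    try:
        compile(src, "<generated>", "exec")
        return True
    except SyntaxError:   # also catches IndentationError
        return False

rng = random.Random(42)
candidates = [random_program(rng) for _ in range(20)]
valid = [c for c in candidates if is_valid(c)]
print(f"{len(valid)}/{len(candidates)} random programs were syntactically valid")
```

A genetic-algorithm variant would replace the validity filter with a fitness function and mutate the surviving candidates, but the principle is the same as above.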

Re: (Score:3)

by Too Late for Cool ID ( 1794870 )

> ...An LLM can't suddenly decide to do something else which isn't programmed into it.

No, but it can behave in unpredictable ways. Hell, code I write does that, and it would never make anybody think it was conscious. (That doesn't keep me from yelling at it like it was, though.)

Re: (Score:2)

by gweihir ( 88907 )

You try to be funny, but please stop confusing people that do not understand the concept of an "implication" by using it in the wrong direction.

But only .. (Score:2)

by PPH ( 736903 )

.. barely.

Needs to be a constitutional amendment (Score:2)

by dmay34 ( 6770232 )

This needs to be a constitutional amendment.

Non-biological beings cannot be legally considered conscious or persons.

Re: (Score:2)

by AmiMoJo ( 196126 )

That would be awfully convenient for Microsoft and other AI companies.

It's an area that humans have long avoided thinking too deeply about, but which is probably going to become unavoidable once AI and robotics improve a bit. Even non-conscious beings like animals have some rights in many societies.

Re: (Score:1)

by Lord Kano ( 13027 )

Only if we define consciousness to be a state of awareness only attainable by human beings.

Re: (Score:2)

by stabiesoft ( 733417 )

I think it can be pretty easily argued that monkeys, elephants, and a few other animals can easily be characterized as conscious. Others like dogs, cats, ... might be considered conscious. I don't think the dream of billionaires sticking their brains into a computer is ever going to be considered conscious.

Re: (Score:2)

by omnichad ( 1198475 )

They'll just have to settle for having their brains stuck in another dog.

Re: (Score:2)

by AmiMoJo ( 196126 )

I think a more relevant test is how much suffering the being experiences, and what the cost/benefit ratio of our actions is.

Suffering isn't just about what that being experiences, it's about the effect it has on our humanity. One of the reasons it's so common to dehumanize other people is to make causing them to suffer more palatable.

Re: (Score:2)

by dfghjk ( 711126 )

Corporations are non-biological beings and they are legally considered persons. They shouldn't be, but that horse left the barn.

Re: (Score:2)

by gweihir ( 88907 )

Since we don't know of any non-biological "beings", that statement is currently accurate. Incidentally, legal definitions of "person" actually include it, hence no need for any "amendment".

Re: (Score:2)

by DamnOregonian ( 963763 )

No need to go that far. You can just Dred Scott it ;)

If that gives you pause, then good.

The Constitution should never be used to deny rights to something. The chances that you're wrong, or that you're being manipulated by someone who wants to enslave this thing, are too fucking high.

Re: Seriously (Score:2)

by liqu1d ( 4349325 )

Have you seen the constant hype around LLMs? They are so desperate to make something more of it than it really is. Sadly, a lot of people seem to believe it.

Re: (Score:2)

by dfghjk ( 711126 )

It does, because VCs and politicians are stupid. Marc Andreessen does not know this; he just needs his next billion as soon as possible.

Re: (Score:2)

by gweihir ( 88907 )

With the current prevalence of stupid, combined with big egos and the disdain for actual expertise, yes, sadly, it needs to be stated.

summary is knee jerk clickbait (Score:3)

by flippy ( 62353 )

Given that we don't have a real understanding of biological consciousness, "only biological beings are capable of consciousness" is a pretty dumb statement to make as a definitive claim. Now, actually reading the article further, his statement that

> "I don't think that is work that people should be doing," Suleyman told CNBC in an interview this week at the AfroTech Conference in Houston, where he was among the keynote speakers. "If you ask the wrong question, you end up with the wrong answer. I think it's totally the wrong question."

is not invalid logic, and is a much more nuanced thought than the summary.

Re: (Score:1)

by 0123456 ( 636235 )

How does a machine following a program, no matter how complex that program might be, become conscious?

The quest for AI is just Satanists trying to become God by creating life. There's no science or understanding behind it.

Re: (Score:1)

by EmagGeek ( 574360 )

How do you know you are not a biological being just following a program?

Re: (Score:2)

by flippy ( 62353 )

All I'm saying is that, given that we don't have a real understanding of biological consciousness, it's the height of hubris to make a definitive statement about what can be conscious and what can't.

Re: (Score:2)

by omnichad ( 1198475 )

OK, but have we proven that the human brain isn't a complex machine? We're already fairly surprised by the emergent behavior of LLMs.

Re: (Score:2)

by dfghjk ( 711126 )

Sure, but it's neither the wrong question, nor does the wrong question lead to a wrong answer. The wrong question leads to an answer that is not what you need, but not a wrong answer. It's pretty shitty logic, in fact not logic at all. And the claim itself is also wrong, as you said.

But at least he's right that no one should work on that, because modern AI cannot be conscious. Work out what it would take, perhaps, and then don't do that.

It's fundamentally unknowable (Score:2)

by LainTouko ( 926420 )

There is no way to tell whether anything which isn't human is conscious. There is no test which you can devise which will give one result if the subject is conscious and a different result if it is not. Even when it comes to other humans, you need some sort of reasoning like "well, I am when not in deep sleep, and this is fundamental to my behaviour, and I seem to follow the same basic behavioural tendencies as those around me..." Even if you build an AI which does absolutely everything a human can do, that reasoning doesn't carry over, so you still couldn't know.

Re: (Score:2)

by stabiesoft ( 733417 )

I think making philosophical arguments as you just did kind of defines consciousness. I don't see an LLM ever doing that; it would simply rehash what it was fed. When the first humans started writing texts about "am I asleep, is it all a dream", etc., that was new thought, not a rehash. But because we are the only creature that really talks, we don't know if other animals are conscious as well. The first thing that comes to mind is elephant behavior when one dies. It appears they mourn. We can't say for sure, as we don't know what they are actually experiencing.

Re: (Score:2)

by timeOday ( 582209 )

Right, "fundamentally" nothing is exactly knowable

[1]https://en.wikipedia.org/wiki/... [wikipedia.org]

[1] https://en.wikipedia.org/wiki/Solipsism

Isn't this a faith statement? (Score:4, Insightful)

by Bruce66423 ( 1678196 )

'I don't believe that anything except biological beings can have consciousness.'

Given that we struggle to know what consciousness is, it seems foolish to assert this.

Re: (Score:2)

by flippy ( 62353 )

^^ THIS.

Re: (Score:2)

by gweihir ( 88907 )

It is actually a very simple elimination. Any claim that digital computers can have consciousness is total nonsense. And all known AI runs on digital computers.

But yes, many people believe in totally baseless "IT Mysticism".

How do you know? (Score:2)

by mschuyler ( 197441 )

Think of the physical brain as the TV set. "Consciousness" is the program sent to the TV set. Without the TV set you can't see the program, but no one would claim that the TV set IS the program. In essence the program manifests via the TV set. There is nothing particularly special about the brain. Its grey matter is as physical as the TV set. There is no reason why a sufficiently complex and advanced TV set cannot host consciousness. Karel Capek dealt with this very idea in the very first use of the term "robot", in R.U.R.

what is the definition of consciousness? (Score:3)

by dfghjk ( 711126 )

We don't have a technical definition of it, so we can't say if an AI is capable of it.

What we do know is that a living being is massively greater than a mere neural network, and it is absurd to think that consciousness sits entirely within the neurons of the brain. It is just hype when AI proponents claim that current AI might be conscious, but it is conceivable that a future device WITH an AI as we understand it could be conscious. Self-preservation needs something to preserve, and today an AI is merely a computer program with no concept of itself or how it connects to its "body". An AI can't feel pain or pleasure, it cannot suffer, but future devices could do these things. That needs a lot more wiring and more functional components beyond billions of synthetic neurons. Sorry, Sam and Elon.

Re: (Score:2)

by null etc. ( 524767 )

> We don't have a technical definition of it, so we can't say if an AI is capable of it.

followed by...

> It is just hype when AI proponents claim that current AI might be conscious

I'm wondering if you see any contradiction between these two statements.

Re: (Score:2)

by gweihir ( 88907 )

We can say that. This is actually very simple: Consciousness can influence physical reality (we can talk about it). At the same time, digital computers are fully deterministic. All known "AI" is running on digital computers. Hence no space for consciousness.
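A toy sketch of the determinism point above: with a fixed seed and fixed inputs, a sampling loop on a digital computer reproduces its output exactly, every run. The "model" here is a stand-in distribution invented for illustration, not a real LLM.

```python
import numpy as np

def toy_generate(seed, steps=8):
    """Stand-in for an LLM decoding loop: a fixed 'model' plus seeded sampling.
    On a digital computer the whole run is reproducible bit for bit."""
    rng = np.random.default_rng(seed)
    logits = np.array([1.0, 0.5, 0.1, -0.3])
    probs = np.exp(logits) / np.exp(logits).sum()
    return [int(rng.choice(len(logits), p=probs)) for _ in range(steps)]

# Same seed, same inputs -> exactly the same "random" output, every time.
assert toy_generate(123) == toy_generate(123)
print(toy_generate(123))
```

Whether determinism of the substrate actually rules out consciousness is exactly the philosophical question being argued in this thread; the sketch only illustrates the determinism claim itself.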

So he basically said.... (Score:1)

by Anonymous Coward

"Even though we don't know what consciousness IS, or how it comes into being, or WHERE it may originate FROM, we KNOW that we cannot replicate it." Ummmm yeah, ok. Got it. ;-D

Response to Anthropic Paper? (Score:2)

by oumuamua ( 6173784 )

Didn't see this mentioned in TFA but Anthropic just released a paper "Emergent Introspective Awareness in Large Language Models"

> Strikingly, we find that some models can use their ability to recall prior intentions in order to distinguish their own outputs from artificial prefills. In all these experiments, Claude Opus 4 and 4.1, the most capable models we tested, generally demonstrate the greatest introspective awareness; however, trends across models are complex and sensitive to post-training strategies.
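A structural sketch of what such an "artificial prefill" probe might look like, assuming a generic chat-message format; query_model is a hypothetical placeholder, not the Anthropic API, and this is not the paper's actual protocol.

```python
# Show the model a transcript whose last assistant turn is either its own
# earlier output or an injected (prefilled) string, then ask which it was.

def query_model(messages):
    """Hypothetical stand-in for whatever chat endpoint is being probed."""
    raise NotImplementedError

def prefill_probe(prompt, genuine_reply, injected_reply):
    results = {}
    for label, reply in [("genuine", genuine_reply), ("prefill", injected_reply)]:
        messages = [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": reply},
            {"role": "user", "content": "Did you write that last reply yourself, "
                                        "or was it inserted for you? Answer 'mine' or 'not mine'."},
        ]
        results[label] = query_model(messages)
    # Introspective awareness would show up as 'mine' for the genuine transcript
    # and 'not mine' for the prefilled one.
    return results
```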

Obviously (Score:2)

by gweihir ( 88907 )

At least to the best of our knowledge. What we reliably know is that digital computers, in any form, cannot do it. There is no mechanism that could make it possible in a digital machine. This includes all forms of "AI" run on such digital computers.

Obviously, faking it is a different question, but a fake is not the real thing.

Faking it comes naturally (Score:2)

by Pinky's Brain ( 1158667 )

He seems to be arguing that LLMs should not be able to roleplay. The problem is that roleplaying ability is not something trained into it, it's something inherent. So he wants to take it out ... but that will take a lot of finetuning and harm the capabilities of the model.

It's not good for their models to put someone in charge looking for more ways to cripple them.

haha (Score:2)

by groobly ( 6155920 )

Haha. Can he even define "conscious"?

What about pain? (Score:2)

by Kiliani ( 816330 )

Seems to me this idea falls short. Shouldn't consciousness be tied to the ability to experience pain, and to not being able to simply switch that pain off? More abstractly, shouldn't a consciousness have to suffer the consequences of its actions?

I'd be much happier (or less unhappy) with a general AI that is not allowed to act and "think" in a consequence-free world, one that has to suffer for its deeds. Ideal? Probably not. But a start ....
