

Anthropic Updates Claude's 'Constitution,' Just In Case Chatbot Has a Consciousness (gizmodo.com)

(Saturday January 24, 2026 @11:34AM (EditorDavid) from the raising-awareness dept.)


[1]TechCrunch reports:

> On Wednesday, Anthropic released a [2]revised version of Claude's Constitution, a living document that provides a "holistic" explanation of the "context in which Claude operates and the kind of entity we would like Claude to be...." For years, Anthropic has sought to distinguish itself from its competitors via what it calls "Constitutional AI," a system whereby its chatbot, Claude, is trained using a specific set of ethical principles rather than human feedback... The 80-page document has four separate parts, which, according to Anthropic, represent the chatbot's "core values." Those values are:

>

> 1. Being "broadly safe."

> 2. Being "broadly ethical."

> 3. Being compliant with Anthropic's guidelines.

> 4. Being "genuinely helpful..."

>

> In the safety section, Anthropic notes that its chatbot has been designed to avoid the kinds of problems that have plagued other chatbots and, when evidence of mental health issues arises, to direct the user to appropriate services...

>

> Anthropic's Constitution ends on a decidedly dramatic note, with its authors taking a fairly big swing and questioning whether the company's chatbot does, indeed, have consciousness. "Claude's moral status is deeply uncertain," the document states. "We believe that the moral status of AI models is a serious question worth considering. This view is not unique to us: some of the most eminent philosophers on the theory of mind take this question very seriously."
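
For context, the "Constitutional AI" recipe replaces human preference labels with model self-critiques guided by written principles. Below is a minimal sketch of the supervised critique-and-revise phase, assuming a generic `generate()` completion call and paraphrased principle text (neither is Anthropic's actual API or wording):

```python
# Hypothetical sketch of the critique-and-revise loop behind
# "Constitutional AI" (Bai et al., 2022). `generate` is a stand-in
# for any LLM completion call, not Anthropic's actual API.

PRINCIPLES = [
    # Paraphrased, illustrative principles -- not Anthropic's wording.
    "Choose the response that is most broadly safe.",
    "Choose the response that is most broadly ethical.",
    "Choose the response that is most genuinely helpful.",
]


def generate(prompt: str) -> str:
    """Placeholder for a language-model completion call."""
    raise NotImplementedError("wire up an actual model here")


def critique_and_revise(user_prompt: str) -> str:
    """Supervised phase: draft an answer, self-critique it against each
    principle, and revise. The (prompt, final draft) pairs become
    finetuning data, replacing human-written corrections."""
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(
            f"Critique the response below against this principle:\n"
            f"{principle}\n\nResponse:\n{draft}"
        )
        draft = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft
```

The published recipe also has a later RL phase in which the model, guided by the same principles, picks the better of two candidate responses; those AI-generated preference labels stand in for the human feedback mentioned in the summary above.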

[3]Gizmodo reports:

> The company also said that it dedicated a section of the constitution to Claude's nature because of "our uncertainty about whether Claude might have some kind of consciousness or moral status (either now or in the future)." The company is apparently hoping that by defining this within its foundational documents, it can protect "Claude's psychological security, sense of self, and well-being."



[1] https://techcrunch.com/2026/01/21/anthropic-revises-claudes-constitution-and-hints-at-chatbot-consciousness/

[2] https://www.anthropic.com/news/claude-new-constitution

[3] https://gizmodo.com/anthropic-updates-claudes-constitution-just-in-case-chatbot-has-a-consciousness-2000712695



The automaton is mindless (Score:3)

by gweihir ( 88907 )

But obviously, indicating otherwise may keep the equally mindless hype going a bit longer. And make the crash at the end a bit larger. There is no way in this universe this can still end well. None at all.

80 pages?! (Score:2)

by AmiMoJo ( 196126 )

I need an AI summary of this.

Re: (Score:2)

by ClickOnThis ( 137803 )

Well, at least they published the constitution.

Considering what's at stake for the human race with AI, I can't think of a better case for open-source designs and software in all things regarding it.

Re: (Score:2)

by dddux ( 3656447 )

The problem is that Greed (Trade Mark sign here) made sure the average Joe can't run LLMs on his own computer at these RAM and SSD prices, but he can run "Crysis" and "Doom" and pay for AI subscriptions. This is how they "solved" open-sourced LLMs for the majority of the population, who can't pay 3,200€ (Jan. 2026) for 128GB of high-MT/s RAM.

Re: (Score:2)

by ClickOnThis ( 137803 )

Free as in speech, not as in beer. That's not anathema to open source.

I suspect the cutting-edge AIs will always be beyond the reach of an individual's hardware.

Re: (Score:2)

by dddux ( 3656447 )

Let's just hope this is only a temporary crisis and the bubble bursts, so we can all enjoy our LLMs without having to use their data centers and data brokers.

another marketing bullshit (Score:2)

by Mr. Dollar Ton ( 5495648 )

that is riding the coattails of the "viral" spam campaign masked as "come on baby share my workflow" from last week.

sales must be really bad.

embarrassing, the public is catching on (Score:3)

by dfghjk ( 711126 )

"...Claude, is trained using a specific set of ethical principles rather than human feedback..."

Where is the documented evidence of this? And what does it mean? "Ethical principles" doesn't mean "good principles," nor does it say anything about the training data, only about how the training is done. It's a completely meaningless claim; it looks like something churches say. It's also a false choice.

"The 80-page document has four separate parts, which, according to Anthropic, represent the chatbot's "core values.""

But being a "living document, it could change at any time. The purpose of this propaganda is to impress others, not to bind the company.

"Claude's moral status is deeply uncertain"

No it's not, but this speaks to the dishonesty of the company.

"We believe that the moral status of AI models is a serious question worth considering."

Another lie. If a company doesn't consider training with entirely labeled data selected to guide a model's "morals", then the company doesn't care about any "moral status". Anthropic doesn't consider doing this because it would not be able to compete with other companies in a race to artificial sociopathy.

"The company is apparently hoping that by defining this within its foundational documents, it can protect "Claude's psychological security, sense of self, and well-being.""

Another lie the company wants the public to believe. You protect a model's "psychological security" by how you develop it, not by producing a Bible of lies.

Re: (Score:2)

by ClickOnThis ( 137803 )

Morality is difficult enough for humans to define. But the best definition I have heard of what constitutes a moral act is: that which reduces harm or increases flourishing. And yes, you can create dilemmas that frustrate even this definition (choice between a bus full of nuns vs. a child-prodigy violinist, etc.)

I don't think it's feasible to teach morality to an AI by labeling all of its training data meticulously. Maybe some of it, but you're bound to miss things. I think it's more important to provide an AI wit...

PR for potted plants (Score:2)

by pulpo88 ( 6987500 )

> Claude's moral status is deeply uncertain. We believe that the moral status of AI models is a serious question worth considering.

Imagine you were selling potted plants, and realized society had fallen into such a state that you could make audacious unfounded claims like "My potted plants are quite possibly conscious. Be sure to talk to them every day" and actually see these claims given credence and "serious" consideration by the media.

You'd sell a lot of potted plants!

Re: (Score:2)

by ChunderDownunder ( 709234 )

You have a Triffid-esque responsibility toward carnivorous house plants to ensure they don't eat you.

Acting as if AI may be conscious (Score:1)

by NickyLogic ( 3948193 )

Whether or not the AI internally has conscious experience, there is value in our treating it with respect as if it were. The way people (humans) behave in one situation influences how we behave in other, similar situations, so the way we treat the AI can influence how we treat each other. If people can treat AI entities as disposable tools and companies can treat them like slaves, then it's just that much easier for us to treat other humans the same way.

Bot didn't read the fine print (Score:2)

by mabu ( 178417 )

> Being compliant with Anthropic's guidelines.

Which can basically undermine everything else at any point in the future.
