
Anthropic CEO bloviates for 20,000+ words in thinly veiled plea against regulation

(2026/01/28)


Opinion Anthropic CEO Dario Amodei has published a novella-length essay about the risk of superintelligent AI, something that doesn't yet exist.

It's as good an advertisement as you'll find for the summarization capabilities of the company's Claude model family.

tl;dr AI presents a serious risk and intervention is required to prevent disaster, though not so much that the regulations spoil the party. Go team.


The AI threat has been a talking point among tech cognoscenti for more than a decade, and longer than that if you count sci-fi alarmism. Rewind to 2014, when Elon Musk [2]warned: "With artificial intelligence we are summoning the demon."


You can measure Musk's concern by his investment in xAI.

AI luminary Geoffrey Hinton offered a more convincing example of concern through his [5]resignation from Google and the doubts he expressed about his life's work in machine learning. It's a message that recently inspired AI industry insiders [6]to try to pop the AI bubble with poisoned data.


If you're concerned about this, you may find consolation in the fact that Amodei [8]made a prediction that has not come to pass. In March 2025, he said: "I think we'll be there in three to six months – where AI is writing 90 percent of the code." And in 12 months, he said, AI would essentially be writing all of the code. Spoiler: human developers still have jobs.

But the problem with [9]Amodei's essay of almost 22,000 words is his insistence on framing the fraught state of the world in terms of AI. If you're a hammer, everything looks like a nail. If you're head of an AI company, it's AI everywhere, all the time.

If you're, say, on the streets of Minneapolis, or Tehran, or Kyiv, or Gaza, or Port-au-Prince, or any other area short on supplies or stability, AI probably isn't at the top of your list of threats. Nor will it be a year or three from now.


Amodei floats his cautionary tale on the back of this scenario:

Suppose a literal "country of geniuses" were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations. But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation.

The analogy is not much better than [11]the discredited infinite monkey theorem, which posits that a sufficient number of keyboard-equipped chimps would eventually produce the works of Shakespeare. Certainly 50 million brainiacs – proxies for AI models – could get up to some mischief, but the national security advisor of a major state has more plausible and present threats to consider.
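
For a sense of scale, here's a back-of-envelope sketch of why the theorem fails on finite timescales. Every constant below is an illustrative assumption on our part, not a figure from the cited paper:

    import math

    # Back-of-envelope check on the infinite monkey theorem.
    # All constants are illustrative assumptions, not from the cited study.
    KEYS = 30                          # keys on a simplified typewriter
    CANON_CHARS = 5_000_000            # rough character count of the complete works
    CHIMPS = 200_000                   # rough global chimpanzee population
    STROKES_PER_SEC = 1                # one keystroke per chimp per second
    SECONDS_LEFT = 10.0**100 * 3.15e7  # ~10^100 years to heat death, in seconds

    # Work in log10 space; the raw probability underflows any float.
    log10_p = -CANON_CHARS * math.log10(KEYS)    # one attempt succeeding
    log10_strokes = math.log10(CHIMPS * STROKES_PER_SEC * SECONDS_LEFT)
    log10_expected = log10_strokes + log10_p     # expected successes, ever

    print(f"log10 P(single attempt):    {log10_p:,.0f}")       # ~ -7,400,000
    print(f"log10 keystrokes available: {log10_strokes:,.0f}")  # ~ 113
    print(f"log10 expected successes:   {log10_expected:,.0f}")

Under these assumptions the chimps get roughly 10^113 keystrokes and would need on the order of 10^7,400,000; the universe is the binding constraint, not patience.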

If you look at the [12]leading causes of mortality in 2023, AI doesn't show up. The dominant category is circulatory conditions (e.g. heart disease) at 28.5 percent, followed by neoplasms (e.g. cancer) at 22.0 percent. External causes account for 7.0 percent of the total. That includes suicide, at 2.1 percent, which is something AI may actually make worse [13]when people try to use it to manage mental health problems.


Polling company Ipsos conducts a monthly "[18]What Worries the World" survey, and AI doesn't make the list. When the biz last checked the global public pulse in September 2025, the top concerns were: crime and violence (32 percent); inflation (30 percent); poverty and social inequity (29 percent); unemployment (28 percent); and financial/political corruption (28 percent). Coronavirus, for comparison, registered just 2 percent.

AI now plays a role in some of these concerns. Investment in AI datacenters has [19]raised utility prices and led to [20]a shortage of DRAM. The construction of these datacenters is [21]increasing demand for water – though Amodei contends this isn't a real problem. High capex spending may be accompanied by layoffs as companies look for ways to compensate by cutting costs. And for some occupations, AI may be capable enough to automate some portion of job requirements.

But focusing on the danger and unpredictability of AI misses the point: it's people who allow this and it's people who can manage it. This is a debate about regulation, which is presently minimal.

We can choose how much AI costs by deciding whether creative work can be captured, laundered, and resold without compensation to those who created it. We can choose whether the government should subsidize the development of these models. We can impose liability on model makers when models can be used to generate sexual abuse material or when models make material errors. We can decide not to let AI models make nuclear launch decisions.

Amodei does identify some risks that are more pressing than the theorized legion of genius models. "The thing to worry about is a level of wealth concentration that will break society," he writes, noting that Elon Musk's $700 billion net worth already exceeds the ~2 percent of GDP that John D. Rockefeller's wealth represented during the Gilded Age.
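
The arithmetic behind that claim is easy to check. A quick sketch, where the GDP figure is our own ballpark assumption rather than anything from the essay:

    # Back-of-envelope check of the wealth-to-GDP comparison.
    musk_net_worth = 700e9      # ~$700 billion, the figure Amodei uses
    rockefeller_share = 0.02    # ~2 percent of GDP, the figure Amodei cites
    us_gdp = 30e12              # ~$30 trillion nominal US GDP (our assumption)

    musk_share = musk_net_worth / us_gdp
    print(f"Musk's share of GDP: {musk_share:.1%}")  # ~2.3%
    print(f"Exceeds Rockefeller's ~2 percent? {musk_share > rockefeller_share}")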

He makes that point amid speculation that the wealth generated by AI companies will lead to personal fortunes in the trillions, which is a possibility if the AI bubble doesn't collapse on itself.

But AI companies still have to prove they can turn a profit as open source models make headway. Anthropic [22]isn't expected to become profitable until 2028. For OpenAI, profit is projected in 2030 – if the company survives that long after burning "roughly 14 times as much cash as Anthropic," according to the Wall Street Journal.

Amodei's optimism about revenue potential aside, it's the money that matters. Those not blessed with Silicon Valley wealth may yet develop an aversion to billionaire-controlled tech platforms that steer public opinion and suppress regulation.

Let's not forget that much of the investment in AI followed from the belief that AI models will break Google's grip on search and advertising, which has persisted due to the lack of effective antitrust enforcement.

Amodei argues for a cautious path, one that focuses on [23]denying China access to powerful chips.

"I do see a path to a slight moderation in AI development that is compatible with a [24]realist view of geopolitics ," he writes. "That path involves slowing down the march of autocracies towards powerful AI for a few years by denying them the resources they need to build it, namely chips and semiconductor manufacturing equipment."

His path avoids a more radical approach driven by the "public backlash against AI" that he says is brewing.

"The most constructive thing we can do today is advocate for limited rules while we learn whether or not there is evidence to support stronger ones," Amodei argues.

That doesn't sound like someone worried about the AI demon. It sounds like every business leader who wants to minimize burdensome regulations.

The fact is no one wants superintelligent AI, which by definition would make unexpected decisions. Last year, when AI agents took up all the air in the room, the goal was to constrain behavior and make agents predictable and knowable, to make them subservient rather than independent, to prevent them from deleting all your files and posting your passwords on Reddit.
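
That constraint-first posture is mundane engineering rather than demon-wrangling: agent frameworks typically gate every tool call through an explicit allowlist. A minimal sketch of the idea, with hypothetical tool names and no particular vendor's API:

    # Sketch of allowlist-gated tool dispatch for an LLM agent.
    # Tool names and the policy are hypothetical illustrations.
    ALLOWED_TOOLS = {"read_file", "search_docs"}  # everything else is refused

    def dispatch(tool_name: str, args: dict) -> str:
        """Run a requested tool call only if it is explicitly allowlisted."""
        if tool_name not in ALLOWED_TOOLS:
            return f"refused: '{tool_name}' is not on the allowlist"
        # ... invoke the real tool here ...
        return f"ran {tool_name} with {args}"

    print(dispatch("read_file", {"path": "notes.txt"}))
    print(dispatch("delete_file", {"path": "/"}))  # can't nuke your files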

And if the [25]reported slowdown in AI model advancement persists, we'll be free to focus on more pressing problems – like preventing billionaires from drowning democracy in a flood of AI-generated misinformation and slop. ®




[2] https://www.theregister.com/2014/10/27/elon_musk_tesla_spacex_talks_articificial_intelligence/

[5] https://www.theregister.com/2023/05/01/ai_geoffrey_hinton_resigns/

[6] https://www.theregister.com/2026/01/11/industry_insiders_seek_to_poison/

[8] https://www.lesswrong.com/posts/prSnGGAgfWtZexYLp/is-90-of-code-at-anthropic-being-written-by-ais

[9] https://www.darioamodei.com/essay/the-adolescence-of-technology

[11] https://www.sciencedirect.com/science/article/pii/S2773186324001014

[12] https://www.oecd.org/en/publications/2025/11/health-at-a-glance-2025_a894f72e/full-report/main-causes-of-mortality_17b5a2df.html

[13] https://www.bmj.com/content/391/bmj.r2239

[18] https://www.ipsos.com/en-us/what-worries-world

[19] https://www.cnbc.com/2025/11/26/ai-data-center-frenzy-is-pushing-up-your-electric-bill-heres-why.html

[20] https://www.idc.com/resource-center/blog/global-memory-shortage-crisis-market-analysis-and-the-potential-impact-on-the-smartphone-and-pc-markets-in-2026/

[21] https://www.breckinridge.com/insights/details/the-water-footprint-of-ai-implications-for-investors/

[22] https://www.wsj.com/tech/ai/openai-anthropic-profitability-e9f5bcd6

[23] https://www.theregister.com/2026/01/20/anthropic_nvidia_china/

[24] https://en.wikipedia.org/wiki/Realism_(international_relations)

[25] https://www.law.georgetown.edu/tech-institute/insights/growing-signs-of-ai-development-slowdown/




The problem with regulation

Anonymous Coward

is that nobody really agrees on what should be regulated, how it should be regulated, who should pay and how much resource regulation should get, nor what the consequences of breaches of regulation should be.

A simple question: Name a good regulator, one who understands the companies they regulate, is efficient, effective, intervenes appropriately, doesn't stifle commerce or innovation, and ensures high levels of compliance. Expecting that to be a very short list, what would people like from an AI regulator?

Disclosure: I am a regulator.

Re: The problem with regulation

Guy de Loimbard

Fair comment AC.

Regulators, to date, haven't had a great deal of good press for actually doing anything effectively, depending on which optic, or regulatory lens, you look through.

I too work in a regulated environment, and we have some good, and some not so good, regulators across the various regulated areas I've worked in. YMMV of course.

To your point, what would people like from an AI regulator? 10 million people, 10 million varying answers, I bet, depending on what FUD is on their radar.

Re: The problem with regulation

Anonymous Coward

Original AC here: I think part of the problem is that nobody notices when regulation works and a regulator does a good job, although there's always somebody who will notice the costs and bemoan them. Some wise-arse will undoubtedly chip in that all regulators are useless, but nobody gives any credit to the role regulation plays in the generally high standards in (amongst others) food hygiene, environmental protection, product safety, safe vehicles, etc.

Whilst there are undoubtedly many points of view, somebody has to start from a blank sheet of paper for AI regulation, so perhaps the question needs to be: what are the main areas that AI regulation should address?

Re: The problem with regulation

cyberdemon

> doesn't stifle commerce or innovation

The mistake here is assuming there is anywhere near as much commerce, or indeed innovation, in the AI industry as the industry would have us believe.

The reality is that for all the trillions invested, these companies have very little prospect of making any money. It's economic poison.

I suggest you take the same approach to regulating AI as you would to Pyramid Schemes, Enron-style accounting, etc.

A good place to start, to see what we all mean when we say AI should be regulated with fire, is this podcast: https://player.fm/series/better-offline

Re: The problem with regulation

Like a badger

> I suggest you take the same approach to regulating AI as you would to Pyramid Schemes, Enron-style accounting, etc.

On that basis it would mean that a financial services regulator needs to intervene, not a tech or data regulator. If the (self-styled) grown-ups who run big tech or private equity houses want to invest in an unproven technology, at what point should a regulator stop them? Most startups fail; we don't stop them. Many corporate ventures fail; we don't stop them. Many private infrastructure projects fail (Channel Tunnel, HS1 rail link, M6 Toll, etc.), but we never stop them.

If an investor or company chooses to invest its own capital in something daft, is it the regulator's job to stop them?

Re: The problem with regulation

HandlesMessiah

In the US, the Federal Deposit Insurance Corporation.

Musk summoning the demon by investing in xAI

that one in the corner

Is his backup, as his plans to create FSD Teslas that will, one day, all start driving endlessly around, scouring out the Dread Sigil Odegra as their passengers play the role of life sacrifice, have so far come to naught.

Which is okay, as his escape route from the Ruined Earth to Mars has hit the odd hiccough as well.

Pain and consciousness of AI

Anonymous Coward

Dangerous creatures do not need consciousness to kill – think of a venomous spider – or to kill in order to feed themselves.

What if consciousness is simple: a byproduct of the goal of reducing pain, which in turn is a function of survival maximization?

Probably all organisms sense pain. Else we would not have heard of them, evolutionarily speaking. Only the pain-sensitive survived.

Consciousness is just a mental model of the self that serves the basic instinct. A "digital twin," in compute terms. Plus a sense of time and causality. But probably not that complex to recreate digitally.

Religious fanaticism is an example where the survival goal is abstracted one level above physical life. It is again about self-preservation and future pleasure. And pleasure could be as simple as the absence of pain (as when taking drugs).

Re: Pain and consciousness of AI

cyberdemon

Given that every time you start a chat with an LLM chatbot you are initialising a blank new context, I would argue that the "AI" cannot feel pain.

Can the context itself, guided by the statistical bollocks machine and the crap you feed it, feel pain? Sure. In the same way that a brick feels pain when it is smashed to bits.

To quote Harry's wife from "In Bruges": It's an inanimate fucking object.
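
A toy illustration of that point, assuming an OpenAI-style message-list interface (entirely schematic, not any real vendor's SDK):

    # "Memory" is just the message list the client resends each turn;
    # a new chat starts from an empty list. Schematic only.
    def chat_turn(history: list, user_msg: str) -> list:
        history = history + [{"role": "user", "content": user_msg}]
        reply = {"role": "assistant", "content": f"(output for: {user_msg})"}
        return history + [reply]

    session_a = chat_turn([], "My name is Ada.")  # context accumulates here...
    session_b = chat_turn([], "What's my name?")  # ...and is gone in a new list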

Re: Pain and consciousness of AI

Long John Silver

I agree.

Let's cut away empty speculation – it is better homed in the realm of metaphysics – about 'consciousness', 'free will', and 'emergent properties of neural networks'.

With respect to Earth's animal kingdom, within which arose the only intelligences of which we are aware, the modern, nuanced, 'take' on Darwinian 'Natural Selection' offers the best route for obtaining insights concerning how 'intelligence' arose and for constructing an artificial variety.

By its nature, evolutionary theory is retrodictive whereas Popperian science (setting aside cosmology) is predictive. Nevertheless, currently observed speciation and fossil records possess sufficient structure for a theoretical (rather than mythical) framework to have arisen, one which permits contemporary forwardly predictive testing of the postulated mechanism of Natural Selection.

cd

Claude, tell me about the concept of "eating your own dog food".
