

AI CEOs Worry the Government Will Nationalize AI (thenewstack.io)

(Sunday March 08, 2026 @11:34AM (EditorDavid) from the what-if dept.)


Palantir's CEO [1]was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..."

And OpenAI's Sam Altman is [2]thinking about the same thing, writes long-time Slashdot reader [3]destinyland:

> "It has seemed to me for a long time it might be better [4]if building AGI were a government project ," Sam Altman publicly [5]mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said — but he added that "I have thought about it, of course" Altman's speculation hedged that "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

>

> Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — [6]were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the [7]Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline". Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd received when [8]answering questions on X.com.

>

> How exactly will this AI build-out be handled, and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even [9]broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI (something that even passed its own Turing test for AGI), would that be a case where its government contracts compelled it to grant access to the Defense Department?

>

> "No," Mulligan answered. At our current moment in time, "We control which models we deploy"

The article notes that 100 OpenAI employees joined 856 Google employees in [10]an online letter titled "We Will Not Be Divided", urging their bosses to refuse to let their models be used for domestic mass surveillance or for autonomous killing without human oversight.

But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader [11]ptorrone) [12]sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's [13]designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently".)

Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book [14]The Making of the Atomic Bomb...



[1] https://x.com/SulkinMaya/status/2028866859756408867

[2] https://thenewstack.io/openai-defense-department-debate/

[3] https://slashdot.org/~destinyland

[4] https://thenewstack.io/openai-defense-department-debate/

[5] https://x.com/sama/status/2027921762319827330

[6] https://fortune.com/2026/03/03/the-pentagons-fight-with-anthropic-was-the-first-real-test-for-how-we-will-control-powerful-ai-the-bad-news-we-all-failed/

[7] https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950

[8] https://news.slashdot.org/story/26/03/01/0233230/sam-altman-answers-questions-on-xcom-about-pentagon-deal-threats-to-anthropic

[9] https://x.com/thedogfather/status/2027949322344526140

[10] https://notdivided.org/

[11] https://slashdot.org/~ptorrone

[12] https://blog.adafruit.com/2026/03/03/the-making-of-the-atomic-bomb-1986-by-richard-rhodes/

[13] https://slashdot.org/story/26/03/05/2233247/pentagon-formally-designates-anthropic-a-supply-chain-risk

[14] https://amzn.to/4rl3eOl



Re:offensive (Score:5, Insightful)

by ozmartian ( 5754788 )

I'd say "MAGA" is the current day, most valid alternative to the R word.

Re: offensive (Score:2)

by fluffernutter ( 1411889 )

Yes, because those were never associated with real autistic people.

Re:offensive (Score:5, Insightful)

by ozmartian ( 5754788 )

Just use the word "MAGA" instead. Its the most valid, current day alternative.

Re: offensive (Score:5, Insightful)

by Mr. Dollar Ton ( 5495648 )

This is apparently very offensive to the people who proudly proclaimed "fuck your feelings" once, lol.

Re: (Score:2)

by Computershack ( 1143409 )

Finding something offensive is purely subjective. I'm quite sure you do and say lots of things people find offensive but they don't demand you stop doing it.

Re: (Score:2)

by fluffernutter ( 1411889 )

Admittedly, my big problem is remembering not to refer to people in a meeting as "guys" when there are women there. I'm not sure what other pitfalls I would be falling into. I use the term "First Nations people". I don't use the 'N' word. I'm very careful not to offend people when I'm talking.

Re: (Score:2)

by organgtool ( 966989 )

I got myself into the habit of saying "everyone" when addressing all participants in a meeting. Oddly enough, many of the women still say "guys", even when there's one or more other women in the meeting.

Re: (Score:2)

by fluffernutter ( 1411889 )

It doesn't bother some women, but I don't want to offend the ones that it does. Being offensive is just not what I'm about. But I imagine that is much different between Canadians and Americans. Americans seem to take pride in being offensive, almost like the right to offend people is in the Constitution and any violation is a threat to their freedom.

Re: (Score:2)

by skam240 ( 789197 )

Ha, using "guys" for a mixed gender groups is a stumbling point for me too. It's worth the hassle of change though, I know I wouldn't want to be refered to as a woman or anything else that I'm not.

Re: offensive (Score:2)

by devslash0 ( 4203435 )

Ban one word, and people will find another. You can't ban an idea.

Re: (Score:2)

by fluffernutter ( 1411889 )

Really... what is this other word that is derogatory to autistic people?

Re: (Score:2)

by gweihir ( 88907 )

"Retarded" is actually merely descriptive and a perfectly fine technical term. But somebody that is retarded may not be able to understand that.

LLMs != AGI and never will (Score:5, Insightful)

by ozmartian ( 5754788 )

That is what is "retarded", if we're to use such a word. Can we stop using AGI and current day LLM's in the same sentence? Current LLM will not lead to AGI, not even close.

Re:LLMs != AGI and never will (Score:5, Insightful)

by martin-boundary ( 547041 )

"Autocomplete on steroids" is the correct technical term, not AGI.

Re: LLMs != AGI and never will (Score:2)

by LindleyF ( 9395567 )

There is no doubt the technology is far better at pretending to be intelligent than it was a few years ago. Whatever happens in the next decade will simulate it better still. So the interesting question is, so long as we kind of understand how a tech works, will we ever be willing to call it AGI? Or is an inability to understand intelligence one of its key qualities (for us)?

Re: (Score:2)

by gweihir ( 88907 )

Pretending to be intelligent works when you are dealing with human fools. As soon as you apply such a tool to reality, it gets a completely merciless reality check, though. Remember that "vibe coded" social network for AIs that did not even get basic authentication right?

Re: LLMs != AGI and never will (Score:2)

by LindleyF ( 9395567 )

Throwing a junior dev at a project out of their depth would have similar results.

Re: (Score:2)

by allo ( 1728082 )

What makes you so sure? We have basically two really large pools of training data: written and digitized text, and video.

Video is large and mostly redundant, with low information density, whereas text usually has high density and a lot of information content. Both are useful in different ways, but given limited resources, text is more valuable right now for "thinking", while we will probably need video for more robotic tasks.

There are good arguments for better architectures than LLM, but these would often

Re: (Score:2)

by gweihir ( 88907 )

There is mathematical proof that LLMs cannot do AGI. To a smart person that ends the discussion.

I think you mistake what the average person does for General Intelligence. It is not. If you want regular use of General Intelligence in a human, you need an "independent thinker" (about 10-15% of all people) or at least somebody who can be convinced by rational argument (about 20% of all people, which includes the independent thinkers). Merely "thinking" is not enough. You need to do it successfully.

Re: (Score:2)

by gweihir ( 88907 )

There are still idiots that think LLMs can deliver AGI? Fascinating. We have solid mathematical proof that this is impossible. And anybody with a working mind (a minority, to be fair) saw it long before.

The LLM approach cannot ever develop insight. (Here "insight" = knowing something and knowing it is reliably true.) Not possible. Statistical models cannot do that unless you run them in a way that is essentially non-statistical. But then they cannot perform anymore at all. And insight is the core ingredien

Let governments pay all the bills (Score:4, Interesting)

by thesjaakspoiler ( 4782965 )

and Sam and his gang will just start another company using all knowledge learned.

Re:Let governments pay all the bills (Score:4, Interesting)

by martin-boundary ( 547041 )

You've got it. Sam Altman is in a bind. He doesn't have a business plan, and he has a lot of debts and expectations that are coming due soon. He's been talking about getting bailed out by the government since last year, IIRC. He *wants* to be bailed out.

That would, of course, be the worst decision since the 2008 GFC when banksters got bailed out.

Then again, you know who is in charge, so it may happen.

Re: Let governments pay all the bills (Score:2)

by commodore73 ( 967172 )

I don't know, government can't be much worse than current US AI companies.

Re: Let governments pay all the bills (Score:2)

by Mr. Dollar Ton ( 5495648 )

[1]Are we sure, though? [apnews.com]

In my book betting on who's gonna kill themselves next because of what you do to them is pretty Gulag rock bottom.

[1] https://apnews.com/article/suicide-ice-detention-centers-b2d1cb0e4b579e0d89caabd00aa04e34?taid=69aafa0d404f690001cb12a7

Re: Let governments pay all the bills (Score:2)

by zmollusc ( 763634 )

Well, he has two thirds of a business plan. He is only missing Phase 2. Once he has Phase 2, he can proceed to Phase 3: Profit.

Re: (Score:2)

by gweihir ( 88907 )

Indeed. The "core LLM" scammers are all close to collapse at any time. They need massive influxes of money because they do not produce anything that is even remotely valuable enough to justify the mountains of money they are burning. Google, Microsoft and some others can (maybe) survive the collapse of the LLM hype, but Altmann and OpenAI cannot.

Re: (Score:2)

by nullhero ( 2983 )

I think this is the plan. Get the government contracts, then ownership, and make US citizens foot the bill for surveillance that will definitely be used against them. There really isn't anything more they can do with such limited thinking when it comes to algorithms. Tech bros are trying to make their sci-fi dreams a reality. They are such idiots: maybe skilled at business grifting, but the sci-fi futures they are chasing are nightmares, and the heroes typically dismantle them.

Other countries (Score:3)

by heikkile ( 111814 )

If the US government were to nationalize the US AI companies, I am sure other companies would spring up in other countries. And I suspect the US-government-run AI projects would be mired in bureaucracy and incompetence, giving an edge to the competing ones.

Re: (Score:2)

by martin-boundary ( 547041 )

Why do you think that other countries do not have an edge over current US AI companies? Did you *already* forget DeepSeek? It was only last year but surely your memory is not that bad.

Re: (Score:2)

by gweihir ( 88907 )

> Why do you think that other countries do not have an edge over current US AI companies?

That one is pretty simple: The AI hype in the US is extreme and mainly irrational.

Duh (Score:2)

by jrnvk ( 4197967 )

The Defense Production Act has been around for a long time, and it has actually been actively used for some purpose by every President elected in the past 70+ years. This should not be a surprise.

Re: Screwing the military... (Score:1)

by Mr. Dollar Ton ( 5495648 )

Quite the opposite, dear, you have chosen as your leaders and your mouthpieces people who mock the military, call them losers and traitors and try to swindle them out of pay and benefits.

People like cadet bone spurs, or his vp, or his campaign spokesman Oddjob, the wife-warrior, etc.

Re: Screwing the military... (Score:2)

by Mr. Dollar Ton ( 5495648 )

Keep telling yourself this. How's that little victorious war going? I hear there will be more caskets and veterans soon.

Re: (Score:2)

by gweihir ( 88907 )

That is a very... disconnected and romanticized view of reality you have there. Also utterly dysfunctional.

But what I see here is a very small person deeply in fear of anything they do not understand. And that seems to be a lot of things.

War always takes precedence (Score:2)

by drinkypoo ( 153816 )

> But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used."

The ultimate limit on what you will accept and the arbiter of what you are willing to do always comes down to survival. Warmongers are fearful people who believe that you can only ever be secure by subjugating all potential rivals, and danger becomes their excuse for making the ability to make war more important than anything else.

True security only comes from being part of an order that benefits everyone in it more than they would gain by overturning it. Making war more important than everything e

Re: (Score:2)

by gweihir ( 88907 )

Indeed. Well said.

What I think we are currently seeing is that the end of the "American Century" will lead into the "Rise of the Middle Powers". At least I hope that will be the outcome, or we are all really, really, really screwed. A long-term coalition of middle powers is exactly that: high mutual benefits and generally agreed-on rules that everybody in there respects. That is what Carney and others are currently trying to create, and I think it may just work.

War, on the other hand, needs one big bully tha

It's funny ... (Score:2)

by cascadingstylesheet ( 140919 )

... how some people are just now realizing the dangers of nationalization.

"Wait, you mean that my guys might not always be the ones actually running the government???"

Re: (Score:2)

by gweihir ( 88907 )

Yep. Many people are too retarded to understand that a country is a community and that a government needs to serve all of the population, or things go to hell. We see a nice example of that currently in the US, but that is far from the only kakistocracy on the planet. It is just the most pathetic one, because information is actually available in the US and people could really have known better.

Keep in mind it's not because (Score:3)

by rsilvergun ( 571051 )

They think that the government is going to nationalize AI for the good of the people. What they are afraid of is that the current authoritarian fascist administration will nationalize it for their own profit.

Basically we are seeing a fight between oligarchs. Similar to what you used to see in Russia before Putin just started killing them all after taking control.

None of this is good for you or me.

If governments created AI (Score:2)

by devslash0 ( 4203435 )

...we wouldn't have any.

wah we want to milk the government (Score:2)

by Growlley ( 6732614 )

teat!

We need companies to be punished (Score:2)

by xack ( 5304745 )

Too many people have been scammed by companies: they got degrees but not jobs, were gaslit that they studied the "wrong subject" or didn't do enough "slave internships", and were loaded up with student debt, all while those companies wanted to replace our jobs with slave labour all along. First they tried actual slavery, then they tried third-world wage arbitrage, and now they are just using AI to get pesky humans out of the loop altogether.

All human knowledge destroyed by AI and all human la

FOSS (Score:3)

by bradley13 ( 1118935 )

Fortunately, somehow, there are a lot of FOSS models. We need to work to keep as much of AI open-source as possible. It's a lot harder for a government to declare FOSS a supply-chain risk, or to nationalize it, because anyone can make a copy and keep going.

Re: (Score:2)

by gweihir ( 88907 )

Indeed. These models are not as "spectacular" though, because they focus on things that LLMs can actually do somewhat well. For example, Apertus (the free Swiss model) has been trained on apparently over 1000 languages and mainly targets translations and chatbots. That is an area where reliability does not need to be 100% (unlike, say, writing code or running LLM "agents", which will both be subjects of targeted attacks), and hence this model focuses on actual usefulness instead of making grand claims to rake

Be careful what you wish for (Score:2)

by smoot123 ( 1027084 )

Some might cheer the nationalization of AI companies. They should be careful what they wish for. I doubt investors would continue to pour unlimited billions into AI if they knew the companies were going to be run by and for the government. I doubt governments will pour the same billions into the same parts of AI deployment.

If you think AI is dangerous and want to slow it down, that's a feature rather than a bug. If you want rapidly advancing capabilities (to pick a random example, to run fleets of armed, autonomous

Re: (Score:2)

by sound+vision ( 884283 )

The government money isn't going to come in until after the bubble crashes, i.e. the investors disappear on their own. Then the AI companies will be too big to fail, they'll be national security, and whatever else is needed to obtain a good old-fashioned bailout.

Your economy will be wrecked once the cycle's complete, but OpenAI's balance sheet will be made whole.

Yes, please do that (Score:2)

by gweihir ( 88907 )

Then the nations that are not on board can actually continue to have a working economy.

The actual fact of the matter is that LLM-type AI can deliver very little. Not nothing, but not even remotely what currently gets claimed. If you go "all in" on LLM-type AI, not only will your economy collapse because you cannot provide work for people anymore, it will also collapse because most things will get very unreliable and many things will stop working altogether.

On a related note, I really understand how

Adding human oversight to kill lists isn't hard (Score:1)

by NewID_of_Ami.One ( 9578152 )

Adding a human oversight step to kill lists/plans during a war is not really that big of an issue.

It will always be there anyway, if only to avoid wasting a few tens of millions of dollars due to some AI hallucination.

Re: (Score:1)

by NewID_of_Ami.One ( 9578152 )

Have I posted in the wrong tab? I am not sure.

AI CEOs? (Score:2)

by jenningsthecat ( 1525947 )

I've encountered AI help desk attendants, but wasn't aware that any AI had risen to the exalted position of CEO!

Despite fears of an AI apocalypse, I can't help wondering if a hallucinating LLM would be less dangerous (and less creepy) than a hallucinating Alex Karp.

Build their own! (Score:2)

by ColdBoot ( 89397 )

This is easily solved: they can build their own. I have no doubt that some of the AI vendors would be happy to bid on setting one up.

The best that we can do is to be kindly and helpful toward our friends and
fellow passengers who are clinging to the same speck of dirt while we are
drifting side by side to our common doom.
-- Clarence Darrow