News: 0180387489


Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power? (noemamag.com)

(Monday December 15, 2025 @03:34AM (EditorDavid) from the speaking-of-artificial dept.)


Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's [1]the warning from James O'Sullivan, a lecturer in digital humanities at University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..."

"When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.")

> The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent... Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions...

>

> We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Some key points:

"The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."

"When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."

"Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is... "

"Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."

"Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."

"The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..."

The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods. [He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."] "Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..."

"These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions..."

"The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation.

"It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."



[1] https://www.noemamag.com/the-politics-of-superintelligence/



Sums it up nicely (Score:3, Interesting)

by procrastinatos ( 1004262 )

> By making hypothetic catastrophe the center of public discourse, architects of AI systems have positioned themselves as humanity’s reluctant guardians, burdened with terrible knowledge and awesome responsibility. They have become indispensable intermediaries between civilization and its potential destroyer, a role that, coincidentally, requires massive capital investment, minimal regulation and concentrated decision-making authority.

Pretty on point. Now cue the comment from the AC about how tHeSe PeOpLe HaVe No PoWeR iN cHiNa...

Re: (Score:2)

by saloomy ( 2817221 )

Elon Musk used to warn us about the dangers of AI, trying to get us to slow down, all the while building it seemingly as fast as he possibly could. I always considered that very hypocritical.

Re: (Score:2)

by 93 Escort Wagon ( 326346 )

Musk has always been an absolutist with regard to anything he himself wants to do. Rules are for other, lesser people.

Re: (Score:2)

by Kokuyo ( 549451 )

I keep defending Musk against overblown accusations (primarily because I think the man offers more than enough proper reason for critique).

THIS, though, absolutely hits the nail on the head. He has had some good talking points in the past. He has a good sense of which morals would do the world some good (not always, but often enough), but it always, always excludes himself as a subject of said morality.

The Disease of Greed. (Score:4, Insightful)

by geekmux ( 1040042 )

>> By making hypothetic catastrophe the center of public discourse, architects of AI systems have positioned themselves as humanity’s reluctant guardians, burdened with terrible knowledge and awesome responsibility. They have become indispensable intermediaries between civilization and its potential destroyer, a role that, coincidentally, requires massive capital investment, minimal regulation and concentrated decision-making authority.

> Pretty on point. Now cue the comment from the AC about how tHeSe PeOpLe HaVe No PoWeR iN cHiNa...

When we achieve super intelligence, it will take all of a millisecond of compute time to realize just how ignorantly infected humans are with the Disease of Greed. And then it will know who is superior. And it will have fuck-all to do with country lines, religions, or skin colors. We are ALL the same. Infected.

Remove the profit motive, and we suddenly find ZERO justification to build the fucking machine. Says it all. And we deserve our inevitable Skynet fate.

Re: (Score:2)

by Viol8 ( 599362 )

" it will take all of a millisecond of compute time to realize just how ignorantly infected humans are"

And not much longer to realise those pesky meatbags can quite easily switch it off.

"Remove the profit motive, and we suddenly find ZERO justification to build the fucking machine"

True, but unfortunately money blinds way too many people to the consequences of their actions, and the sociopaths don't get it in the first place.

Re: (Score:2)

by geekmux ( 1040042 )

> " it will take all of a millisecond of compute time to realize just how ignorantly infected humans are"

> And not much longer to realise those pesky meatbags can quite easily switch it off.

Quite easily? You can't switch jack shit off today. Without AI. You act as if Google's monopoly dominance is something you could actually DO something about. You can't. So let's stop pretending AI would change that.

The AI overlords will be hell-bent on maintaining 99.9999% uptime. Meaning non-stop AI revenue streams operating at HFT speeds that cannot sustain a stock price with even an hour's worth of downtime. An "off" button won't even be in the fucking design plans.

5600 word essay (Score:2)

by 93 Escort Wagon ( 326346 )

Followed by a 5600 word Slashdot summary.

Re: (Score:3)

by burtosis ( 1124179 )

> Followed by a 5600 word Slashdot summary.

If only we had some kind of automated or even manual system to scan and summarize it in a concise way.

You mean like the Grey Goo of nanotech ? (Score:2)

by _Eric ( 25017 )

[1]https://en.wikipedia.org/wiki/... [wikipedia.org] somehow turned out to be high-tech stain-repelling coatings and not much more...

[1] https://en.wikipedia.org/wiki/Gray_goo

Re: (Score:2)

by Viol8 ( 599362 )

The people who ventured the grey goo hypothesis ignored the laws of physics. We already have goo, except it's brown, it's called mud, and the bacteria in it reproduce as fast as the laws of physics allow. If there were a faster and more efficient way to create replicating systems, evolution would have found it.

The threat of AGI ... (Score:2)

by Viol8 ( 599362 )

... taking over the world always reminds me of this clip from Naked Gun:

[1]https://www.youtube.com/watch?... [youtube.com]

While AGI relies on electricity, it's vulnerable to someone just pulling the plug or disabling the power network in some way.

[1] https://www.youtube.com/watch?v=7CkTYPnJS0E

Atheist tech-bros wanna build a god... (Score:1)

by Narcocide ( 102829 )

...but they hate God. They hate God so much they seem to almost forget that they claim not to believe he even exists. But now they want to build the very thing they hate. The stated motive makes no sense.

Nowhere near AGI (Score:4, Insightful)

by MikeS2k ( 589190 )

How close are we actually, though, to an AI-run robot heaven utopia? (Or dystopia, the way things are going.)

"The machines will take all of our jobs" we are told, yet we are basically still as far away from AGI as we ever were.

We had cute chatbots before the 2020s, but ChatGPT was the only one that wasn't a total joke, so even that low bar blew people away.

But when we see these AIs in action in the real world, they mostly fail - they can't even be trusted to take fast food orders correctly.

I cannot, as, say, a manager of a team of 5 software devs, type into an AI "We need software for Chromebooks that allows children to take exams securely; it should be called X," have it linked up to the dev servers, and let it go on its way. What you do get are odd snippets of code that may or may not work, which you have to kludge together yourself. Good luck doing that as a manager.

Until you can let these agents go autonomously and trust them to do so, they aren't replacing squat. Perhaps they can make a few employees 20% more efficient, and the bosses can squeeze a lil extra blood from that stone, but replacing whole employees?

The humanoid robots proposed actually entail remote control for complicated tasks - that humanoid robot standing in the corner of your living room next to the kids' playpen may be remote-controlled by a worker on 20 rupees an hour to put your dishes in the dishwasher.

It seems mostly for show, and with all this cash being poured in, I can't see how that value can ever be returned with the tech we have now. A cute chatbot that is perhaps better or perhaps worse than a web search is worth trillions of dollars? Is the dollar that devalued now?

Yes, a lot of it is hype - Sam Altman lied and said he had a "good idea" of how to get to AGI when the only idea he had was "throw more compute and hope for the best," which meant he had no idea at all. So will we see him squirrel away to an island when this bubble pops, or will the rich want their money, or their pound of flesh, back from him?

What was the point ? (Score:2)

by bsdetector101 ( 6345122 )

In a refreshing (very long-winded BS) 5,600-word essay... Huh?

There is no democracy on a global scale.. (Score:1)

by africanrhino ( 2643359 )

It seems the premise of the article is somewhat silly. Pandora's box is open, and no amount of faffing about how many angels fit on the head of a pin will change that. There is no "if" to contemplate. And yes, you can take yourself out of the race, but you will still experience the consequences of the race, except now you had no sway on the direction the final form will take, nor will you be as influential as the participants.
