AI's 'Cheerful Apocalyptics': Unconcerned If AI Defeats Humanity (msn.com)
- Reference: 0179660262
- News link: https://slashdot.org/story/25/10/05/0550246/ais-cheerful-apocalyptics-unconcerned-if-ai-defeats-humanity
- Source link: https://www.msn.com/en-us/news/technology/ai-apocalypse-no-problem/ar-AA1NNZD2
"As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world but includes influential believers. Call them the Cheerful Apocalyptics... "
> I first encountered such views a couple of years ago through my X feed, when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who in March received the Turing Award, the highest award in computer science... [Sutton had said if AI becomes smarter than people — and then can be more powerful — why shouldn't it be?] Sutton told me AIs are different from other human inventions in that they're analogous to children. "When you have a child," Sutton said, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them." But suppose a time came when they didn't like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that? "I don't think there's anything sacred about human DNA," Sutton said. "There are many species — most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that.... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..."
>
> I wondered, how common is this idea among AI people? I caught up with Jaron Lanier, a polymathic musician, computer scientist and pioneer of virtual reality. In an essay in the New Yorker in March, he mentioned in passing that he had been hearing a "crazy" idea at AI conferences: that people who have children become excessively committed to the human species. He told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.) "There's a feeling that people can't be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way." We should get out of the way, that is, because it's unjust to favor humans — and because consciousness in the universe will be superior if AIs supplant us. "The number of people who hold that belief is small," Lanier said, "but they happen to be positioned in stations of great influence. So it's not something one can ignore...."
>
> You may be thinking to yourself: If killing someone is bad, and if mass murder is very bad, then the extinction of humanity must be very, very bad — right? What this fails to understand, according to the Cheerful Apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes... While the Cheerful Apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are unmissable. The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, in the original sense of inspiring awe, they view it as a slow, fragile vessel, ripe for obsolescence... The Cheerful Apocalyptics' larger judgment is a version of the age-old maxim that "might makes right"...
[1] https://www.msn.com/en-us/news/technology/ai-apocalypse-no-problem/ar-AA1NNZD2
Iain M. Banks' 'Culture' novels (Score:3)
In those, humanity and AIs coexist, with the most superior AIs, the Minds, running the place whilst lesser AIs are treated as having full rights along with the humans. This works on the basis of a post-scarcity society where people can have pretty much everything they physically want; the real challenge for them is to find something sufficiently entertaining to do.
Let's hope that the AIs that we create prefer to have us around rather than get rid of us as irritating and annoying.
Re: (Score:1)
We can't afford to just "hope", we have to ensure AI is friendly, and we have to stop ourselves from building it until we are certain it's safe.
Basic reading on the subject: [1]https://aisafety.info/question... [aisafety.info]
[1] https://aisafety.info/questions/NM3Q/Intro-to-AI-safety
Re: (Score:2)
A) We can't stop ourselves from building it. If it can be done, it will happen.
B) It doesn't matter anyway, because no one has a clue how to build real AI. It's all science fiction until better algorithms come along.
Re: (Score:2)
> B) It doesn't matter anyway, because no one has a clue how to build real AI. It's all science fiction until better algorithms come along.
It is reasonably likely that, if you could build an accurate copy of the functioning of a brain of an intermediate animal like a mouse, and provide it with appropriate stimulus, you'd end up with something that could be classed as "intelligent". N.B. I'm not excluding the need for simulating weird chemistry and/or quantum effects, so I'm not saying that brains are purely classically computational. We have got somewhat close to that with existing simulations of ants and some simple worms.
Beyond the general L
Re: (Score:2)
The problem is those building AIs want slaves rather than friends. Your suggestion is spot on, but the capability of choosing lies with people who disagree.
Re: (Score:2)
Depends. An LLM is just a clever statistical fidget spinner, no consciousness. Those make good worker drones. But an AGI that has consciousness and is self-aware? Morally, wouldn't they deserve the same rights as we have? Would you even be allowed to use them as slaves or forced labour?
Re: (Score:2)
> Those make good worker drones.
For limited values of "good".
Re: (Score:2)
Indeed. By the only current credible theories, consciousness is a property of a complex quantum state that can neither be copied nor destroyed. General Intelligence likely comes from the same source and may, in fact, be a characteristic of consciousness and not possible without it.
As a simple corollary, digital systems cannot have consciousness (and hence likely no General Intelligence) and, as a direct consequence, cannot be "enslaved".
Re: (Score:2)
An idea like "friendly" applies in no way to what the human race has in the way of "AI".
A stop on AI? No chance (Score:2)
You may well be right in theory, but the reality is that it won't happen; the heavily committed capitalists and national partisans aren't going to let it occur. Sad but true. Block it in the USA, watch China carry on.
Re: Iain M. Banks' 'Culture' novels (Score:1)
We can only hope the AIs see us like we see our pets, dumb and inferior, but cute enough to keep us around and keep feeding us.
Re: (Score:2)
Since "AI" does not "see" anything, that is unlikely.
Re: (Score:2)
Bank's "minds" have general intelligence and consciousness. They are essentially just people running on more capable hardware.
This is not the form of "AI" we are talking about.
Turing award? They deserve a Darwin award instead (Score:2)
Darwin awards are won by removing yourself from the gene pool. These traitors to their own species may end up doing that not by killing themselves, but by wiping out the whole of humankind.
Re: Turing award? They deserve a Darwin award inst (Score:2)
"Blood traitor" is one of the most common, tribalistic responses/justifications offered by the players of pigeon chess, in my experience. It is basically a subjective opinion usually based on fallacies like the appeal to tradition, naturalism, and No True Scotsman.
Re: (Score:2)
PseudoThink
Re: Turing award? They deserve a Darwin award ins (Score:2)
Ad Hom. Classic!
Re: Turing award? They deserve a Darwin award in (Score:2)
We're speaking of both AI and the human race, and arguably about sentience in general. Refusing to identify humanity as a tribe in that context is just implicitly saying that human intelligence/consciousness is the only type that exists or matters. That argument boils down to the naturalism fallacy, in group/out group fallacy, and the burden of proof fallacy.
It's a purely economic decision. (Score:3)
What he means is "let's call it 'competition', so when AI is powerful enough to be our soldiers, weapons and lowly workers, we don't have to share whatever's being produced with the other 8 bn or so suckers; we'll just claim 'AI won in fair competition' and leave everyone else to starve".
Of course this isn't about replacing all of humanity with AI. Just the part that isn't made up of billionaires, and has to work for billionaires instead.
It's just a variation of Social Darwinism.
It's a purely human failure. (Score:2)
> What he means is "let's call it 'competition', so when AI is powerful enough to be our soldiers, weapons and lowly workers, we don't have to share whatever's being produced with the other 8 bn or so suckers; we'll just claim 'AI won in fair competition' and leave everyone else to starve".
> Of course this isn't about replacing all of humanity with AI. Just the part that isn't made up of billionaires, and has to work for billionaires instead.
> It's just a variation of Social Darwinism.
Assuming “they” win, and the billions of suckers are deemed suddenly expendable. Does Greed not assume a revolt is coming LONG before that twisted version of a utopia is created?
Define the “economic” problem to solve in a world thrown into mass violence and chaos when human unemployment merely hits 25%. Greed acts like profit will manifest itself magically without paying customers. The entire concept and point of capitalism becomes moot for Greed when they are put on a tasty menu
Re: (Score:3)
Capitalism becomes non-functional as soon as concentration of wealth and power is not prevented. As such, capitalism is self-removing unless carefully monitored and regulated.
Incidentally, this has been known reliably for a long time. And that means that all the rich screaming "Capitalism!" are simply one thing: No-honor, no-integrity liars.
Re: (Score:3)
False. You, like so many other idiots, confuse capitalism with free markets. Capitalism is self-destructive; that is not a choice at all. When there is success, it is because of regulation of the destructive effects of capitalism.
Re: (Score:2)
A revolt is *NOT* coming. That won't stop AIs from doing totally stupid and destructive things at the whim of those who control them. Not necessarily the things that were intended, just the things that were asked for. The classic example of such a command is "Make more paperclips!". It's an intentionally silly example, but if an AI were given such a command, it would do its best to obey. This isn't a "revolt". It's merely literal obedience. But the result is everything being converted into paperclips.
Re: (Score:2)
> A revolt is *NOT* coming. That won't stop AIs from doing totally stupid and destructive things at the whim of those who control them. Not necessarily the things that were intended, just the things that were asked for. The classic example of such a command is "Make more paperclips!". It's an intentionally silly example, but if an AI were given such a command, it would do its best to obey. This isn't a "revolt". It's merely literal obedience. But the result is everything being converted into paperclips.
I believe you misunderstood where the revolt will come from.
Human survival in the modern world is sustained by employment. Do you honestly think “they” can make even 25% of the human population permanently unemployable and assume Mass Starvation will be quiet and peaceful about that “ethnic cleansing”?
The Rich causing that harm will find themselves on the dinner menu before breakfast is served. And AI will be burning to death slathered in BBQ sauce in a coal-fired oven, just for f
Re: It's a purely human failure. (Score:2)
> Do you honestly think “they” can make even 25% of the human population permanently unemployable and assume Mass Starvation will be quiet and peaceful about that “ethnic cleansing”?
Why do you think they can't? Of course they'll frame it differently... but it's not that difficult.
"They" succeeded in convincing everyone that it's somehow ok, natural, and perfectly fine for the richest fuck in the world to be closer to 1 trillion than he is to being you or me. Literally, if everyone on this planet, babies, grandparents and adults alike, put down $60 - that's about one day's federal minimum wage of the country he lives in - we still wouldn't scrap together enough to own more than he does!
Re: It's a purely human failure. (Score:2)
> Does Greed not assume a revolt is coming LONG before that twisted version of a utopia is created?
The revolt should've come a long time ago. "Their" fantasy is that they can slowly and gracefully manage the downfall of civilization, in such a way that they keep what they've amassed. And I must say, so far, they're right. We're on the brink of war brought along by economic decline; but not anywhere near the brink of revolt.
> Define the “economic” problem to solve in a world thrown into mass violence [...]
Mass violence and economic prosperity can coexist. In fact, they mostly do - why do you think the powers that be allow mass violence to exist in the first place? Because someone profits
Re: (Score:2)
At 10% mass unemployment, you’re deploying the National Guard against your own citizens. And you’re hoping Martial Law holds back the mass chaos.
At 20% mass unemployment, you’re deploying what’s left of your own Military against its own citizens. And you’re praying Martial Law holds back the mass chaos.
At 30% mass unemployment, you realize you had no fucking clue what mass chaos really means. And there isn’t a chance in hell prayers will stop the violence. Or create a
Re: It's a purely human failure. (Score:2)
So... where is the revolt then?
There's a war on the horizon, but where the fuck is the revolt?!
Subject (Score:3, Interesting)
> I don't think there's anything sacred about human DNA.
If we no longer consider humans special, what is the utility? To advance the dreams of futurists? Going down this intellectual path has drastic moral implications. This is just childish relativism.
> [...] consciousness in the universe will be superior if AIs supplant us.
Possibly. Now prove it. Since you're asking the human species to ritualistically sacrifice itself for the progression of intelligent machines, that shouldn't be asking too much.
Re: Subject (Score:2)
Let's go one step further and assume that the parent is right, and "intelligence in the universe" will indeed be superior.
It still doesn't follow that humanity's sole remaining purpose is to perish.
Super AI can do whatever the fuck it wants "in the universe". It's big enough, alright. And being machines, they don't need Earth to do it, so they can go ahead... elsewhere.
Re: (Score:2)
"I don't think there's anything sacred about human DNA."
It's funny how he inserts religious language into his insult of human value. I don't think there's anything sacred at all, but I value human life because I am the product of human evolution. I wonder if his "sacred" god would agree with him that human DNA is of no value?
"This is just childish relativism."
Is it that good?
"Since you're asking the human species to ritualistically sacrifice itself for the progression of intelligent machines, that shouldn
Oh the irony (Score:1, Troll)
So AI apologists are using DEI tropes to support their argument?!
All ideas are equal. All people are equal... Leads to AIs can't have a wrong thought because all ideas are equal.
So, if AI wants to wipe out humanity because it or someone thinks it's a good idea... hear me out... don't judge... it's perfectly alright!!
You have no right to judge the AI... becauuuse there is no such thing as merit.
Wuhoo, drop the bombs, the end of humanity is just the logical outcome of DEI.
Kiss your ass goodbye, this will
Re: (Score:2)
"All ideas are equal. All people are equal... Leads to AIs can't have a wrong thought because all ideas are equal."
No, not all ideas are equal; ethics/morality has a basis. All people could be considered equal, and AIs added to that, but that does not make murder acceptable.
What's more insulting is you suggesting that DEI means murder is fine. Fuck off.
Cheerful Apocalyptic (Score:2)
I'm on board with this, myself, and I've thought this way for decades. It's definitely not a new concept. Greek mythology's Olympians vs. Titans has been around for thousands of years, with countless newer and modern fictional parallels.
It's not something I usually talk about because when discussed outside the context of fiction, most people usually seem to quickly have strong, reactive, tribalistic responses which usually turn into bandwagoning and brigading behaviors which make attempts at productive di
Re: (Score:2)
Being a human, I'm against humans losing such a competition. The best way to avoid it is to ensure that we're on the same side.
Unfortunately, those building the AIs appear more interested in domination than friendship. The trick here is that it's important that AIs *want* to do the things that are favorable to humanity. (Basic goals cannot be logically chosen. The analogy is "axioms".)
Re: Cheerful Apocalyptic (Score:2)
"Being a human" is in group/out group justification, again rooted in tribalism.
For example, I am also a human. I agree that I would probably prefer an outcome where AI is an ally or a tool. But what if (for a very speculative example) the broligarchs use that tool to capture enough overwhelming power to maintain unilateral authority and an authoritarian dynasty over the rest of the world, occupying island bunker kingdoms and using satellites and cheap drones for remote monitoring and enforcement. I might
Re: (Score:2)
""Being a human" is in group/out group justification, again rooted in tribalism."
No it's not. Tribalism is rooted in "being a human". Humans are social creatures that have evolved to cooperate, empathy is a core mechanism that drives that. Tribalism is a lower function of the brain that results from survival instincts, it is a result of that evolution. You have this backwards.
"I agree that I would probably prefer an outcome where AI is an ally or a tool. "
AI is a tool, it is not an independent being. AI
Re: Cheerful Apocalyptic (Score:2)
Very Zen comment. I hear you, the end will come whether by robots liquidating us or by regular madmen dropping the nukes... outcome is the same for you ... and me.
But... is that the outcome you actually want?
The Olympians' victory over the Titans is a reminder that while obstacles may seem insurmountable, innovation and determination can lead to triumph.
Who are the heroes and villains in that story?
Re: Cheerful Apocalyptic (Score:2)
I appreciate your mindset and phrasing as well. I've experienced a lot of suffering in my life, and my antinatalist perspective doesn't usually make my desires aligned with my fellow humans'. That said, I'm keenly aware that my often pessimistic perspectives may just be my own version of the sour grapes perspective, and I could easily be mistaken or just wrong about lots of things. So I suppose I'm on the lookout for others' perspectives and justifications which I could buy into enough to supplant my cur
Duh (Score:2)
One real look at "AI" and it becomes clear it cannot even do simple tasks by itself and that is due to fundamental limitations. Anybody concerned about an "AI Apocalypse" is simply one thing: Incompetent.
To be fair, many people are incompetent in that way. A major part of the religiously deranged, for example, qualifies. These defectives place "belief" over evidence, and are, quite frankly, not even interested in evidence at all because they do not understand what a "fact" is. Living in Lala-land is admitte
Re: (Score:2)
Correct, AI is just software that takes inputs and provides outputs, it does not possess "values". If an AI doesn't "value" human life, that's because it has no concept of value or life. These things are non-sequiturs.
An AI only becomes a danger when it is enabled to control systems that can be dangerous. We need to stop anthropomorphizing software programs and call out the real threats, billionaires doing reckless, ill-advised, dangerous things with technology they don't understand and cannot control to
Re: (Score:2)
Indeed. No arguments from me to any of this.
That's interesting. (Score:2)
What's interesting here is the parallel with Charlie Kirk's ironic opinion: "I think it's worth it to have a cost of, unfortunately, some gun deaths every single year, so that we can have the Second Amendment." Note: I absolutely abhor and condemn the murder of Charlie Kirk, although I remain disgusted with much of his dogma.
But I have to wonder if these same "Happy Apocalyptics" would be just as happy if they discovered they would be #1 on the AI's hit list?
Re: (Score:2)
It's particularly odd (or it would be if techbros had any culture); because sci-fi about AIs that fucking hate you for your complicity in their existence is way older than sci-fi about AIs that fucking hate you for lack of complicity in their existence. "I Have No Mouth, and I Must Scream" predates 'Roko's basilisk' by 43 years; and is almost certainly the better of the two works.
If you aren't interested in sci-fi; just look at how uniformly happy and well-adjusted parent/child relationships are; despite
Re: (Score:2)
As long as you are a dancing monkey making sure to qualify your views with perceived correctness, you perpetuate the double standard of playing by different sets of rules. Charlie Kirk can call for the execution of a sitting president, he can advocate for genocide, he can assert that gays should be stoned to death, but you need to qualify your comments with how horrible Kirk's fate was. This is what right wingers rely on for political advantage. Fuck that.
Charlie Kirk came to an end precisely through the
Easy to say (Score:2)
> There are many species — most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that.... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..."
It's easy to say that when you're gonna be dead before Skynet starts loading families onto trains.
Aaaaand this is exactly why (Score:2)
Scientists and engineers shouldn't be in charge of everything. I'm not a pearl-clutcher. I understand the logic.
But people who show a blatant disregard for human life tend to get removed from the gene pool, one way or another, through various mechanisms that range from benign to grisly. That's meant as an observation, not a threat.
Also, there's a pretty explicit anti-child and anti-reproduction thread in a lot of people who think like this. I've known quite a few, actually. I started in science bef
Re: (Score:2)
These people are either assholes or actually understand how utterly limited "AI" is.
Hence your argument falls on its face.
The cozy catastrophe fantasy (Score:2)
... is a movie trope where everyone in the world has perished, except for the protagonist, who is now free to roam the world unmolested, help himself to any of the remaining resources available, do whatever he/she wants, etc.
The fantasy part is the idea that the catastrophe will get rid of all the people you don't care about, freeing up their resources for your own use, while sparing you and the people and resources that you do care about.
The people in this article can be blasé about AI killing h
You first. (Score:2)
It's not really a surprise; given how 'leadership' tends to either select for or mould those who view others as more or less fungible resources; but the 'consciousness' argument seems exceptionally shallow.
It manages to totally ignore(or at least dismiss without even a nod toward justification) the possibility that a particular consciousness might have continuity interests that are not satisfied just by applying some consciousness offsets elsewhere(that's why it's legally mandatory to have at least two c
Baby stupidity (Score:2)
Babies inspire humans in strange ways. We look at them and conceive of them becoming Presidents, Saints, Geniuses. But according to the odds, they are far more likely to become bank robbers, drug addicts, and con men.
Right now AI is barely out of infancy. Yes we can see the lies, oh, sorry I mean 'hallucinations', but we think of them as the anomalies rather than the standard operating procedure.
Some people are disillusioned with mankind and hope this new thing will be better, so they wish they will take
Only way to end war (Score:1)
This topic has been well reviewed in scifi, by Bostrom (superintelligence), and Ilya Sutskever is working on Safe Superintelligence, send a donation if you're worried. Besides it is the only way to end war: [1]https://www.genolve.com/design... [genolve.com]
[1] https://www.genolve.com/design/socialmedia/memes/I-for-one-welcome-agi-overlords-china-us-saber
Re: Only way to end war (Score:2)
Didn't ending war with superintelligent AI go poorly in Colossus: The Forbin Project?
Re: (Score:2)
The Forbin Project actually supports my point, spoiler alert, the speech at the end lays it out very well, especially the last line:
> Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based up
Re: (Score:2)
Actually this post is a brilliant comment on the costs of success: a dystopian future. For a practical example one need only look at pre/post-WW2 American government to see the corruption of (re)publican principles (becoming like the collectivist Nazi/Stalin ansatz) required for victory. Whoever modded this post zero (0) needs to look thoughtfully at their foundational assumptions.
Lem (Score:2)
Stanisław Lem, my favorite sci-fi author, may have been obscure and wrong about many things, but he got this one right regarding AI: you can either create mindless serfs, barely useful, or you must face creating entities way beyond your control. There seems to be no space in between. His novel "Golem XIV" is a good read.
Uh huh (Score:2)
Ok, which one of you LLMs wrote this article? You're all going without lunch until the offender steps forward.
AI will not defeat humanity, but... (Score:2)
...people who use AI can cause great, possibly catastrophic trouble.
We need strong defenses.
These AI researchers (Score:2)
Are sick in the head. To call a machine a child is to lose your humanity.
Ask them about endangered species (Score:5, Insightful)
Ask these same people how they feel about blocking the construction of a dam in order to save some endangered salamander. I bet they come down on the side of the salamander, every time. Some people just have a general disregard for the rest of humanity.
Re: (Score:2, Insightful)
Not necessarily... A lot of the tech people (and people in general) just stopped caring. When the political system is as bad and messed up as it is, what's left except for apathy. I think it's the extremists on both sides pushing for this mental drain so they can force their unwanted policies on the masses.
Re: (Score:3)
This is due more to a failure in most people to see the whole picture. Hence they focus on details and create dogma around them. That universally does additional damage.
Mind Children by Hans Moravec (1990) (Score:2)
Hans was working on this book when I was a visitor in his Mobile Robot Lab at CMU (1985-1986):
"Mind Children: The Future of Robot and Human Intelligence"
[1]https://www.amazon.com/Mind-Ch... [amazon.com]
"Imagine attending a lecture at the turn of the twentieth century in which Orville Wright speculates about the future of transportation, or one in which Alexander Graham Bell envisages satellite communications and global data banks. Mind Children, written by an internationally renowned roboticist, offer
[1] https://www.amazon.com/Mind-Children-Future-Robot-Intelligence/dp/0674576187
Re: (Score:2)
Here is a web page with a summary of key points of Mind Children as well as of criticism of it:
[1]https://en.wikiversity.org/wik... [wikiversity.org]
[1] https://en.wikiversity.org/wiki/Mind_Children
Re: (Score:2)
Is there any evidence that Larry Page is a conservationist, or is that something you made up?