

Silicon Valley Is Debating If AI Weapons Should Be Allowed To Decide To Kill (techcrunch.com)

(Friday October 11, 2024 @05:30PM (BeauHD) from the pros-and-cons dept.)


An anonymous reader quotes a report from TechCrunch:

> In late September, Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone. "Congress doesn't want that," the defense tech founder told TechCrunch. "No one wants that." But Tseng spoke too soon. Five days later, Anduril cofounder Palmer Luckey [1]expressed an openness to autonomous weapons -- or at least a heavy skepticism of arguments against them. The U.S.'s adversaries "use phrases that sound really good in a sound bite: Well, can't you agree that a robot should never be able to decide who lives and dies?" Luckey said during a talk earlier this month at Pepperdine University. "And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?"

>

> When asked for further comment, Shannon Prior, a spokesperson for Anduril, said that Luckey didn't mean that robots should be programmed to kill people on their own, just that he was concerned about "bad people using bad AI." In the past, Silicon Valley has erred on the side of caution. Take it from Luckey's cofounder, Trae Stephens. "I think the technologies that we're building are making it possible for humans to make the right decisions about these things," he [2]told Kara Swisher last year. "So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously." The Anduril spokesperson denied any dissonance between Luckey's and Stephens' perspectives, and said that Stephens didn't mean that a human should always make the call, but just that someone is accountable.

>

> Last month, Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons. At an event hosted by the think tank Hudson Institute, Lonsdale expressed frustration that this question is being framed as a yes-or-no at all. He instead presented a hypothetical where China has embraced AI weapons, but the U.S. has to "press the button every time it fires." He encouraged policymakers to embrace a more flexible approach to how much AI is in weapons. "You very quickly realize, well, my assumptions were wrong if I just put a stupid top-down rule, because I'm a staffer who's never played this game before," he said. "I could destroy us in the battle."

>

> When TC asked Lonsdale for further comment, he emphasized that defense tech companies shouldn't be the ones setting the agenda on lethal AI. "The key context to what I was saying is that our companies don't make the policy, and don't want to make the policy: it's the job of elected officials to make the policy," he said. "But they do need to educate themselves on the nuance to do a good job." He also reiterated a willingness to consider more autonomy in weapons. "It's not a binary as you suggest -- 'fully autonomous or not' isn't the correct policy question. There's a sophisticated dial along a few different dimensions for what you might have a soldier do and what you have the weapons system do," he said. "Before policymakers put these rules in place and decide where the dials need to be set in what circumstance, they need to learn the game and learn what the bad guys might be doing, and what's necessary to win with American lives on the line." [...]

"For many in Silicon Valley and D.C., the biggest fear is that China or Russia rolls out fully autonomous weapons first, forcing the U.S.'s hand," reports TechCrunch. "At the Hudson Institute event, Lonsdale said that the tech sector needs to take it upon itself to 'teach the Navy, teach the DoD, teach Congress' about the potential of AI to 'hopefully get us ahead of China.' Lonsdale's and Luckey's affiliated companies are working on getting Congress to listen to them. Anduril and Palantir have cumulatively spent over $4 million in lobbying this year, according to OpenSecrets."



[1] https://techcrunch.com/2024/10/11/silicon-valley-is-debating-if-ai-weapons-should-be-allowed-to-decide-to-kill/

[2] https://nymag.com/intelligencer/2023/02/on-with-kara-swisher-trae-stephens-on-autonomous-warfare-ai.html



the answer will be ... (Score:2)

by Big Hairy Gorilla ( 9839972 )

yes.

No need to debate it.

Re: the answer will be ... (Score:2)

by wazerface ( 752726 )

Autonomous weapons are needed for the nuclear defense system that Elon is working on (and now Trump is advertising at rallies): [1]https://www.reddit.com/r/WikiL... [reddit.com]. Stunning that this isn't all over the news.

[1] https://www.reddit.com/r/WikiLeaks/comments/1fy10k1/comment/lqqmoct/

Re: (Score:1)

by Nobius2 ( 10026968 )

> [1]WikiLeaks on Starship & Starlink [reddit.com]

This explains Musk completely.

[1] https://www.reddit.com/r/WikiLeaks/comments/1fy10k1/comment/lqqmoct/

Counterheadline: (Score:2)

by Pseudonymous Powers ( 4097097 )

World Debating Whether Silicon Valley Should Be the Ones Debating This

Re: (Score:2)

by Roger W Moore ( 538166 )

I suspect they are the only ones still debating it and doing so publicly. My guess would be that the answer has already been decided by the ministry of defence (or equivalent) of every country with significant AI capability.

Re: (Score:2)

by RossCWilliams ( 5513152 )

Exactly right. No one cares what Silicon Valley thinks. They aren't the ones fighting wars, and wars are won based on "iron and blood". This is a little like nuclear weapons. If someone is losing and using nukes will prevent it, they will use them. We will eventually all pay a very heavy price for not abolishing them. A lot higher price than AI can exact.

Strict liability (Score:2)

by mysidia ( 191772 )

I want strict liability, both civil and criminal, applied to the management of any company involved in manufacturing AI weaponry that makes life-and-death decisions.

Re: (Score:2)

by ls671 ( 1122017 )

I don't think that they really care about what you want :(

Re: (Score:2)

by viperidaenz ( 2515578 )

Whose basement do they send the request for your opinion to?

Re: (Score:2)

by newcastlejon ( 1483695 )

OK. Right after you can sue Rheinmetall when a missile fails and blows up your house accidentally... and right after you can sue Armalite for killing children in schools.

Silicon Valley's decision is moot (Score:3)

by Wolfling1 ( 1808594 )

You can be guaranteed that the Chinese AI efforts have already made the decision, and you won't like it.

Re: (Score:1)

by Nobius2 ( 10026968 )

That's absurd logic used by warmongers.

Can't tell the difference (Score:2)

by TheNameOfNick ( 7286618 )

> where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?

That's absurd. Obviously you're going to allow a computer to make the decision, but what does the moral high ground have to do with that? Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?

Re: Can't tell the difference (Score:2)

by drinkypoo ( 153816 )

"Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank? Do you know how expensive modern weapons are?"

Found the guy who doesn't know how the MIC works

Re: (Score:2)

by dfghjk ( 711126 )

Where's the moral high ground when a landmine is deployed in an area that allows a school bus full of kids to hit it?

Where's the moral high ground for the asshole who makes this bad faith argument?

"Obviously you're going to allow a computer to make the decision..."

What decision? I'm not going to allow a computer to make the decision to detonate a mine. The failure, moral and intellectual, has already occurred when that question must be asked.

"Why would you waste a mine on a school bus if the mine can decid

Re: (Score:2)

by ShanghaiBill ( 739463 )

> Why would you waste a mine on a school bus if the mine can decide to keep waiting for that tank?

Actually, the best strategy is to blow up the fuel trucks.

Re: (Score:2)

by newcastlejon ( 1483695 )

It's nonsense anyway. Mines (and cluster munitions) by their very nature are indiscriminate, which is why most civilised countries have agreed to not use them.

Won't someone think of the kids? (Score:2)

by Krishnoid ( 984597 )

> And my point to them is, where's the moral high ground in a landmine that can't tell the difference between a school bus full of kids and a Russian tank?

And shouldn't we also consider the [1]machine ethics [youtu.be] of taking out a Russian tank full of kids? It's all so confusing. I have to wonder, though, if he [2]strategically selected "landmines" [whitehouse.gov] as his example, considering how recent U.S. policy has changed in this regard.

[1] https://youtu.be/wfqyR5tftNM?si=7Ox352ZG26temp2H

[2] https://www.whitehouse.gov/briefing-room/statements-releases/2022/06/21/fact-sheet-changes-to-u-s-anti-personnel-landmine-policy/

Re: (Score:2)

by DarkOx ( 621550 )

This is all so silly. The only reason the position on landmines has changed is that strategists no longer think they are the most effective solution for their applications anyway.

Who needs mines, which are slow to deploy, and either costly up front if you put fancy electronics in them or tedious and hazardous to remove if you don't?

Now you can send a swarm of drones or use a microwave, sonic, or laser weapon to take out the target, or at least that will be the reality before the USA's next big land conflict, and

Re: (Score:2)

by RossCWilliams ( 5513152 )

> strategists no longer think they are the most effective solution for their applications anyway.

Apparently neither Ukraine nor Russia got the message on this. They are both making extensive use of mines to great effect. In fact, Ukraine attributed the failure of their offensive in part to Russia's extensive mining of its defensive positions.

Re: (Score:2)

by ShanghaiBill ( 739463 )

> Landmines are already supposed to be illegal

America is not a signatory.

America uses landmines along the Korean DMZ and the Guantanamo perimeter.

Re: (Score:2)

by viperidaenz ( 2515578 )

To make things worse, Ukraine signed that treaty, yet now 30% of their country is covered in Russian landmines.

Re: How big (Score:1)

by retchdog ( 1319261 )

what do you mean "short of"?

Re: (Score:2)

by ShanghaiBill ( 739463 )

> In fact, there should not even BE any so-called 'AI' on battlefields to begin with.

Both sides use AI extensively in Ukraine.

One application is terrain guidance of drones when GPS is jammed. Ukraine is building drones with on-board neural processors for this reason.

AI is also used for EW and target acquisition.
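
The terrain guidance mentioned above is essentially scene matching: estimate position by correlating a downward camera frame against a stored reference map instead of relying on GPS. Below is a minimal, self-contained sketch of that general idea in Python/NumPy. The synthetic map, the noise model, and the ncc_locate helper are all stand-ins for illustration; fielded systems use learned features on dedicated neural hardware, not a brute-force correlation loop like this.

    # Sketch of GPS-denied position estimation via scene matching:
    # slide a noisy "camera frame" over a stored reference map and
    # take the offset with the best normalized cross-correlation.
    import numpy as np

    rng = np.random.default_rng(42)

    # Reference map the drone carries (stand-in for satellite imagery).
    MAP_SIZE, PATCH = 256, 32
    ref_map = rng.random((MAP_SIZE, MAP_SIZE)).astype(np.float32)

    # Simulated camera frame: a patch of the map at an unknown true
    # position, corrupted with sensor noise.
    true_y, true_x = 120, 75
    frame = ref_map[true_y:true_y + PATCH, true_x:true_x + PATCH].copy()
    frame += rng.normal(0.0, 0.05, frame.shape).astype(np.float32)

    def ncc_locate(ref, patch):
        """Return (y, x) offset maximizing zero-mean normalized
        cross-correlation of patch against ref, plus the score."""
        ph, pw = patch.shape
        p = patch - patch.mean()
        p_norm = np.linalg.norm(p)
        best_score, best_yx = -np.inf, (0, 0)
        for y in range(ref.shape[0] - ph + 1):
            for x in range(ref.shape[1] - pw + 1):
                w = ref[y:y + ph, x:x + pw]
                wz = w - w.mean()
                denom = np.linalg.norm(wz) * p_norm
                if denom == 0:
                    continue
                score = float((wz * p).sum() / denom)
                if score > best_score:
                    best_score, best_yx = score, (y, x)
        return best_yx, best_score

    (est_y, est_x), score = ncc_locate(ref_map, frame)
    print(f"true=({true_y},{true_x}) estimated=({est_y},{est_x}) "
          f"score={score:.3f}")

Despite the noise, the estimated offset matches the true position, which is the whole point: position comes from what the terrain looks like, so jamming the GPS signal doesn't help the defender.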

Re: (Score:2)

by viperidaenz ( 2515578 )

> Changes nothing, it's just plain wrong for shitty fucked-up braindead people to decide whether someone lives or dies, and if any one of you still think that specific point needs to be debated, then I question whether you have even a single shred of humanity in you.

FTFY

Sounds like ... (Score:2)

by PPH ( 736903 )

... the Doomsday Machine from Dr. Strangelove. If we are letting Palmer Luckey (a.k.a. General Jack D. Ripper) make these decisions, we need to make sure we have solved the mine shaft gap issue first.

Re: Sounds like ... (Score:1)

by retchdog ( 1319261 )

goddammit you beat me to it.

We cannot allow an automated killing gap!!

Absolute height of arrogance (Score:2)

by Whateverthisis ( 7004192 )

"Shield AI cofounder Brandon Tseng swore that weapons in the U.S. would never be fully autonomous -- meaning an AI algorithm would make the final decision to kill someone."

"Five days later, Anduril cofounder Palmer Luckey expressed an openness to autonomous weapons..."

"Palantir co-founder and Anduril investor Joe Lonsdale also showed a willingness to consider fully autonomous weapons."

Who the hell cares what these guys think? Maybe they should go back and take a look at how the world actually works.

Re: Absolute height of arrogance (Score:1)

by retchdog ( 1319261 )

Ethics: a powerful negotiation tool.

Re: (Score:2)

by RossCWilliams ( 5513152 )

> To think some Silicon Valley tech bros even have a say in this discussion is the most arrogant, asinine thing I've ever heard.

The very notion of self-government is now declared dead.

Autonomous Weapon AI (Score:2)

by divide overflow ( 599608 )

With Palantir, Anduril and the Hudson Institute calling the shots, you can be assured that all the AI will need to make a kill/no-kill decision is the detection of a solid gold Rolex on your wrist.

WE CANNOT ALLOW (Score:1)

by retchdog ( 1319261 )

Mr. Free Market, we cannot allow an automated killing gap!!

there is money to be made (Score:2)

by zeiche ( 81782 )

the answer will be YES, of course.

The article misrepresents the original comment. (Score:2)

by inthegreenwoods ( 4272563 )

The comment compared a landmine, a very simple computing device that is already in use on a massive scale and is programmed to kill indiscriminately, with an AI device that can be programmed to kill selectively. It clearly implies that the AI device would be the morally superior alternative. The article, which frames the current debate as whether all AI weapons are evil and should be banned, completely fails to show that such weapons would be worse than the current situation. This is about comprehension.

Degrees of latitude (Score:2)

by silentbozo ( 542534 )

Having certainty is great, but lack of certainty doesn't mean that a decision won't be made to take out a city block to get one target if you lack the means to narrow your targeting.

Is it a war crime? Since I'm not a lawyer, I'll let the Hague decide...

If your target is a fugitive terrorist mastermind taking refuge in a neutral country, yeah, you probably don't want to piss off the host country by killing its citizens. Use a special forces raid or the flying ginsu knife bomb to reduce collateral damage.

What if
