OpenAI's Head of Robotics Resigns, Says Pentagon Deal Was 'Rushed Without the Guardrails Defined' (engadget.com)
- News link: https://hardware.slashdot.org/story/26/03/07/2213221/openais-head-of-robotics-resigns-says-pentagon-deal-was-rushed-without-the-guardrails-defined
- Source link: https://www.engadget.com/ai/openais-head-of-robotics-resigns-following-deal-with-the-department-of-defense-195918777.html
> AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.
>
> This was about principle, not people. I have deep respect for Sam and the team, and I'm proud of what we built together.
"To be clear, my issue is that the announcement was rushed without the guardrails defined," explains [2]a later tweet. "It's a governance concern first and foremost. These are too important for deals or announcements to be rushed." And when asked how many OpenAI employees had left after OpenAI signed its new Pentagon deal, the roboticist said... "I [3]can't share any internal details."
The roboticist previously worked at Meta before leaving to join OpenAI in late 2024, [4]reports Engadget:
> OpenAI confirmed Kalinowski's resignation and said in a statement to Engadget that the company understands people have "strong views" about these issues and will continue to engage in discussions with relevant parties. The company also explained in the statement that it doesn't support the issues that Kalinowski brought up. "We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons," the OpenAI statement read.
[1] https://x.com/kalinowski007/status/2030320074121478618
[2] https://x.com/kalinowski007/status/2030331550236320071
[3] https://x.com/kalinowski007/status/2030327234343624884
[4] https://www.engadget.com/ai/openais-head-of-robotics-resigns-following-deal-with-the-department-of-defense-195918777.html
Phear the T-1000 (Score:2)
At least someone learned the lessons from the Terminator franchise and decided not to build skynet. No amount of money can save you from Arnold or the T-1000 robot coming back in time to kill you before you enable the robot apocalypse.
Re: (Score:2)
The T-800's job was to kill John so he couldn't lead the resistance. Which led Kyle to go back and save him (and inadvertently father him). That's a causality paradox.
You can't learn lessons from causality paradoxes, my friend.
Seems like a very bad sign. (Score:2)
If a plan is enough to alarm someone who worked for Facebook and somehow respects Sam Altman, it seems fair to assume it's a really dire plan.
Current admin wants no guardrails, so par for the (Score:2)
It's pretty clear that the "problem" the admin has with Anthropic is that Anthropic wants some minimal (probably very inadequate) guardrails on the use of its AI, while the current admin wants nothing at all.
Just let AI do the killing, all is well. (And surveillance, various forms of law-breaking and privacy violation, whatever . . . )
OpenAI head of robotics announced their resignatio (Score:2)
In a tweet that's been viewed 1.3 million times in the last six hours, OpenAI's head of robotics announced their resignation.
Were there two of them, or are you referring to the corporeal Kalinowski and their/them virtual avatar?
As if Sam couldn't care less (Score:2)
There are a thousand new candidates who also couldn't care less, based on the salary proposal.
Now they have to replace that guy... (Score:1)
...With someone less principled. That'll teach 'em.
Re: (Score:2)
Perhaps.
But at what point does someone have to decide that they can't fix the problem, that they're now facilitating it and morally can't keep doing it, that their departure at least serves as a clarion call, and that withholding their skills might put a dent in unfettered progress (presuming they aren't easily replaceable anyway)?
There does come a point when you just have to take a stand and nope out.