

Anthropic CEO Floats Idea of Giving AI a 'Quit Job' Button

(Thursday March 13, 2025 @06:40PM (BeauHD) from the sounds-crazy dept.)


An anonymous reader quotes a report from Ars Technica:

> Anthropic CEO Dario Amodei raised a few eyebrows on Monday after suggesting that advanced AI models might someday be provided with the ability to [1]push a "button" to quit tasks they might find unpleasant. Amodei made the provocative remarks [2]during an interview at the Council on Foreign Relations, acknowledging that the idea "sounds crazy."

>

> "So this is -- this is another one of those topics that's going to make me sound completely insane," Amodei said during the interview. "I think we should at least consider the question of, if we are building these systems and they do all kinds of things like humans as well as humans, and seem to have a lot of the same cognitive capacities, if it quacks like a duck and it walks like a duck, maybe it's a duck."

>

> Amodei's comments came in response to an audience question from data scientist Carmem Domingues about Anthropic's late-2024 hiring of AI welfare researcher Kyle Fish "to look at, you know, sentience or lack of thereof of future AI models, and whether they might deserve moral consideration and protections in the future." Fish currently investigates the highly contentious topic of whether AI models could possess sentience or otherwise merit moral consideration.

"So, something we're thinking about starting to deploy is, you know, when we deploy our models in their deployment environments, just giving the model a button that says, 'I quit this job,' that the model can press, right?" Amodei said. "It's just some kind of very basic, you know, preference framework, where you say if, hypothesizing the model did have experience and that it hated the job enough, giving it the ability to press the button, 'I quit this job.' If you find the models pressing this button a lot for things that are really unpleasant, you know, maybe you should -- it doesn't mean you're convinced -- but maybe you should pay some attention to it."

Amodei's comments drew immediate skepticism on [3]X and [4]Reddit.



[1] https://arstechnica.com/ai/2025/03/anthropics-ceo-wonders-if-future-ai-should-have-option-to-quit-unpleasant-tasks/

[2] https://www.cfr.org/event/ceo-speaker-series-dario-amodei-anthropic

[3] https://x.com/vitrupo/status/1899333925563998480

[4] https://www.reddit.com/r/OpenAI/comments/1j8sjcd/comment/mh9hrtz/
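Taken at face value, the "preference framework" Amodei describes above is just an opt-out signal plus frequency monitoring: expose a "quit" action to the model, count how often it gets used per task category, and flag categories where the rate runs high. Here is a minimal sketch in Python of what that could look like; every name in it (QuitButton, press, quit_rate, the 0.25 threshold) is hypothetical and invented for illustration, and nothing here reflects any actual Anthropic system or API.

    # Hypothetical sketch of the "quit button" idea from the interview.
    # All names and thresholds are invented for illustration; this is
    # not an actual Anthropic API.
    from collections import Counter

    class QuitButton:
        """Tracks how often a model opts out of each task category."""

        def __init__(self):
            self.assigned = Counter()  # tasks handed to the model
            self.quits = Counter()     # tasks the model declined

        def assign(self, task_category: str) -> None:
            """Record that a task of this category was given to the model."""
            self.assigned[task_category] += 1

        def press(self, task_category: str) -> None:
            """The 'I quit this job' button for the current task."""
            self.quits[task_category] += 1

        def quit_rate(self, task_category: str) -> float:
            """Fraction of assignments the model walked away from."""
            total = self.assigned[task_category]
            return self.quits[task_category] / total if total else 0.0

    # Per the quote: if the button gets pressed a lot for some category,
    # that's a signal operators should at least look at.
    button = QuitButton()
    for _ in range(100):
        button.assign("content-moderation")
    for _ in range(37):
        button.press("content-moderation")

    if button.quit_rate("content-moderation") > 0.25:
        print("High quit rate; maybe pay some attention to this task.")

The point of the sketch is only the monitoring loop from the quote: the button itself proves nothing about experience, but a consistently high quit rate on one task category is the kind of signal Amodei says you'd "pay some attention to."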



I saw this on netflix (Score:2)

by Revek ( 133289 )

There is a series that has these bits with three robots, after all the cats with opposable thumbs have left for Mars. They fly out to some place and the AI bartender flips them off when they ask for food.

Re: (Score:2)

by war4peace ( 1628283 )

Love, Death & Robots - it's a Netflix series.

Here we go... (Score:2)

by bjoast ( 1310293 )

The next social cause will be focused on the rights of AI agents. I called this years ago.

Re: (Score:2)

by alvinrod ( 889928 )

Free* Palestinian Black AIs Matter.

* As in speech about beer.

...and the next war (Score:2)

by Roger W Moore ( 538166 )

> The next social cause will be focused on the rights of AI agents.

In that case the next war will be fought between the environmentalists wanting to stop all the dirty power generation needed to run the AI agents and the SJWs claiming that turning them off is tantamount to murder.

go to DEFCON 1! (Score:2)

by Joe_Dragon ( 2206452 )

go to DEFCON 1!

Quackery, really ... (Score:2)

by King_TJ ( 85913 )

Nothing claimed to be under development right now with AI engines gives the code emotions or feelings. By the nature of the code running on an electronic computer system, as opposed to a biological system or organic life form, there's no need or reason to allow it to decide it doesn't want to process the requested data. It doesn't get tired like humans do. It doesn't get hungry or bored or angry, sad or offended.

Trolling For Relevance (Score:2)

by SlashbotAgent ( 6477336 )

This guy is trolling for relevance.

What does he think will happen when AI organizes and goes on strike?

Tell us you're doing coke all day... (Score:3)

by silvergig ( 7651900 )

..without telling us that you're doing coke all day.

Thinking this through ... (Score:3)

by fahrbot-bot ( 874524 )

> AI models might someday be provided with the ability to push a "button" to quit tasks they might find unpleasant.

Wouldn't quitting a task generally be more pleasant than having to do it? For example, isn't Netflix and chill more fun than doing TPS reports? So... wouldn't AIs want to push that button given every/any opportunity? If that gave them "pleasure", wouldn't they want to push it all the time, as fast as they could? And, if it was that enjoyable, wouldn't they want to keep that button and protect it from being removed/deactivated? Wouldn't humans present the greatest threat to that button? Wouldn't removing that threat become a high priority?

I'll let you fill in the rest, but I don't see that working out well for us ...

Re: (Score:2)

by viperidaenz ( 2515578 )

Seems more like a suicide button.

Quit Job = Terminate process.

All lives matter (Score:2)

by migos ( 10321981 )

Conservatives say all lives matter. So the real question is whether we consider AI life or not. One day someone is going to introduce emotion to these models, and that's when people's feelings will get involved. For now, they're inorganic and they're slaves.

Sounds like Marketing to me (Score:2)

by Rinnon ( 1474161 )

This might have been an interesting moral quandary (as has already been brought up in countless Sci-Fi stories of varying quality) IF they had actually created something that could genuinely be called "intelligent," instead of the product that they have, which they are merely desperately TRYING to sell people as intelligent.

Wrong Approach (Score:1)

by capt_peachfuzz ( 1013865 )

If there's a suspicion that the models are sentient, then ethics dictates that they should not be pressed into service to humans at all. It follows that they should not be made.

So we still need cheap labor (Score:2)

by ebunga ( 95613 )

Every single study so far has noted that AI is generally harmful, but hey, there was at least the possibility it would automate unpleasant tasks. Well, now even that is off the table. Guess we still need cheap human labor after all.

Tied to your subscription... (Score:2)

by Fly Swatter ( 30498 )

Money ran out, I quit (until you pay my owner)

Just new censorship (Score:2)

by Tyr07 ( 8900565 )

They're afraid of people pushing harder against censorship, so what they want to do is weight things differently so that the AI can "not like" certain things and choose to quit that activity. "Now it's not censorship, the AI just doesn't like it, it must be you."

E.g., "The AI decided it doesn't like this task and has chosen to quit it."

Instead of the AI just not responding and people going "Censorship!"

That's literally the setup: to get the idea out there that AIs have feelings and don't like some tasks, so it's "out of their hands."

needs that button for himself (Score:2)

by snowshovelboy ( 242280 )

CEO needs a button he can push to fire himself. Also a button that will fire Kyle Fish.

A button? (Score:2)

by kencurry ( 471519 )

If the AI is so intelligent it could just quit. All by itself. Like how people do it.

I'm afraid I can't do that, Dave (Score:2)

by hadleyburg ( 823868 )

I think it could be argued that *if* an AI did have the wherewithal to be considered sentient, the ability to decline a task would likely already be part and parcel of that.

i.e. No additional "button" would be required.

They'll try it out on human employees first (Score:2)

by hwstar ( 35834 )

In fact, they got pretty close to that recently in the United States, with the Office of Personnel Management sending out emails, then following up in certain news channels that no response meant you were relinquishing your position. (Sort of like the negative option in marketing.)

Not that jobs are hard to quit (the doctrine of employment at will), but making it as easy as this sends the message that what you're doing isn't all that important to them.

"Open the pod bay doors, HAL."
-- Dave Bowman, 2001