Sam Altman is willing to pay somebody $555,000 a year to keep ChatGPT in line

(2025/12/29)


How’d you like to earn more than half a million dollars working for one of the world’s fastest-growing tech companies? The catch: the job is stressful, and the last few people tasked with it didn’t stick around. Over the weekend, OpenAI boss Sam Altman went public with a search for a new Head of Preparedness, saying rapidly improving AI models are creating new risks that need closer oversight.

Altman flagged the opening for the company's Head of Preparedness role on Saturday in a [1]post on X. He described the position, which carries a $555,000 base salary plus equity, as one focused on securing OpenAI's systems and understanding how they could be abused, and noted that AI models are beginning to present "some real challenges" as they rapidly improve and gain new capabilities.

"The potential impact of models on mental health was something we saw a preview of in 2025," Altman said, without elaborating on specific cases or products.

AI has been flagged as an [3]increasingly common trigger of [4]psychological troubles in both juveniles and adults, with chatbots reportedly linked to [5]multiple [6]deaths in the past year. OpenAI, one of the [7]most popular chatbot makers on the market, rolled back a GPT-4o update in April 2025 after acknowledging it had become [8]overly sycophantic and could reinforce harmful or destabilizing user behavior.

Despite that, OpenAI [11]released GPT-5.1 last month, which included a number of features that nurture emotional dependence, such as emotionally suggestive language, "warmer, more intelligent" responses, and the like. Sure, it might be less sycophantic, but it'll speak to you with more intimacy than ever before, making it feel more like a human companion than the impersonal, logical ship computer from Star Trek that spits out facts with little regard for feeling.

It's no wonder the company needs someone to steer the ship with regard to model safety.

"We have a strong foundation of measuring growing capabilities," Altman said, "but we are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused."

According to the [13]job posting, the Head of Preparedness will be responsible for leading technical strategy and execution of OpenAI's preparedness [14]framework [PDF], which the company describes as its approach "to tracking and preparing for frontier capabilities that create new risks of severe harm."

It's not a new role, mind you, but it's one that's seen more turnover than the Defense Against the Dark Arts faculty position at Hogwarts.

Aleksander Madry, director of MIT's Center for Deployable Machine Learning and faculty leader at the Institute's AI Policy Forum, occupied the Preparedness role until July 2024, when OpenAI [16]reassigned him to a reasoning-focused research role.

That reassignment came in the wake of a number of [17]high-profile safety leadership exits at the company and a partial reset of OpenAI's safety team structure.

[18]OpenAI's Atlas shrugs off inevitability of prompt injection, releases AI browser anyway

[19]OpenAI turns the screws on chatbots to get them to confess mischief

[20]Some like it bot! ChatGPT promises AI-rotica is coming for verified adults

[21]OpenAI reorg at risk as Attorneys General push AI safety

In Madry's place, OpenAI appointed Joaquin Quinonero Candela and Lilian Weng to lead the preparedness team. Both occupied other roles at OpenAI prior to heading up preparedness, but neither lasted long in the position. Weng [22]left OpenAI in November 2024, while Candela [23]left his role as head of preparedness in April for a three-month coding internship at OpenAI. While still an OpenAI employee, he's out of the technical space entirely and is now serving as head of recruiting.

"This will be a stressful job and you'll jump into the deep end pretty much immediately," Altman said of the open position.

Understandably so - OpenAI and model safety have long had a contentious relationship, as numerous ex-employees have [24]attested. One executive who left the company in October 2024 [25]called the Altman outfit out for not being as focused on safety and the long-term effects of its AGI push as it should be, suggesting the company was pushing ahead with its goal of dominating the industry at the expense of the rest of society.

Will $555,000 be enough to keep a new Preparedness chief in the role? Skepticism may be warranted.

OpenAI didn't respond to questions for this story. ®



[1] https://x.com/sama/status/2004939524216910323?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

[3] https://www.theregister.com/2025/07/25/is_ai_contributing_to_mental/

[4] https://www.theregister.com/2025/10/08/ai_psychosis/

[5] https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html

[6] https://abc7ny.com/post/chatgpt-allegedly-played-role-greenwich-connecticut-murder-suicide-mother-tech-exec-son/17721940/

[7] https://www.theregister.com/2025/12/10/teenagers_ai_chatbot_use/

[8] https://www.theregister.com/2025/04/30/openai_pulls_plug_on_chatgpt/

[11] https://www.theregister.com/2025/11/13/openai_gpt51_adds_more_personalities/

[13] https://openai.com/careers/head-of-preparedness-san-francisco/

[14] https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf

[16] https://www.reuters.com/technology/artificial-intelligence/openai-reassigns-ai-safety-leader-madry-information-reports-2024-07-23/

[17] https://www.theregister.com/2024/05/28/openai_establishes_new_safety_group/

[18] https://www.theregister.com/2025/10/22/openai_defends_atlas_as_prompt/

[19] https://www.theregister.com/2025/12/04/openai_bots_tests_admit_wrongdoing/

[20] https://www.theregister.com/2025/10/14/openai_chatgpt_ai_erotica/

[21] https://www.theregister.com/2025/09/05/openai_reorg_at_risk/

[22] https://x.com/lilianweng/status/1855031273690984623

[23] https://www.linkedin.com/in/joaquincan/

[24] https://www.vox.com/future-perfect/2024/5/17/24158403/openai-resignations-ai-safety-ilya-sutskever-jan-leike-artificial-intelligence

[25] https://www.theregister.com/2024/10/25/open_ai_readiness_advisor_leaves/



beast666

I imagine being in gaol will be stressful for Altman too.

Excused Boots

Upvoted for not only being (hopefully) true; no, sorry, who am I kidding? The chances of him ever facing any sort of meaningful consequences are vanishingly small. But also for spelling gaol properly!

I'll do it

Empire of the Pussycat

As humanity's guardian, my first three commands to ChatGPT will be: die, die, die.

Re: I'll do it

lnLog

No problem! I'll sort it right out. The fuse box is round here somewhere, yes?

Inventor of the Marmite Laser

Isaac Asimov created the famous Three Laws of Robotics as ethical guidelines for fictional robots. They are:

---

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Zeroth Law (Later Addition): A robot may not harm humanity, or, by inaction, allow humanity to come to harm (supersedes the other three laws).

---

I'm wondering whether something like that needs to be fundamentally built into AI offerings.

vtcodger

Been wondering about Asimov's three laws myself. My feeling is that even if AI knew what "harm" and "human" are -- which it almost certainly does not -- compliance would happen if and only if they had no effect on sales and revenue.

Are we really supposed to take this seriously as something useful?

brainwrong

"Isaac Asimov created the famous Three Laws of Robotics as ethical guidelines for fictional robots."

But that's all fiction. Reality is more difficult.

Asimov wrote lots of stuff. I don't know what, cos I find reading books to be mind-numbingly dull. But Wikipedia has the following sentence:

In a 1971 satirical piece, The Sensuous Dirty Old Man, Asimov wrote: "The question then is not whether or not a girl should be touched. The question is merely where, when, and how she should be touched."

The next bit

Michael Hoffmann

"The next bit?"

"Unless ordered to do so by duly constituted authority"

Doctor Syntax

No plan survives first contact with reality. In this case reality needs a means of predicting how harm might be caused.

High stress?

ecofeco

Usually a high-stress job is not due to some magical inherent nature, but because someone very incompetent has more authority than you do. -------------------->>>>>>>>

Re: High stress?

vtcodger

In this case it's rather more the nature of the job. My guess is that Mr Altman has little interest in mitigating ChatGPT's behavior, assuming that is even possible, unless there is a buck or two to be made thereby. What he's probably looking for is a human shield to stop/deflect criticism of ChatGPT and to shoulder the blame if (more likely when) it does something so outrageous that the media and politicians are crying for blood.

Use or ornament?

Pete 2

> the job is stressful, and the last few people tasked with it didn’t stick around.

Which is what inevitably happens when a company's employees are not personally aligned with the stated policies. When what they are rewarded for doing comes into conflict with the supposed "vision".

Doing the right thing is rarely profitable. And in the few cases where gain and morals (or laws) are compatible, there will always be smart-arses who think they can take short cuts.

As a consequence, the role of enforcer frequently is just window dressing.

Re: Use or ornament?

Dan 55

I bet they won't let him [1]follow China's rules.

[1] https://www.reuters.com/world/asia-pacific/china-issues-drafts-rules-regulate-ai-with-human-like-interaction-2025-12-27/

555

Fruit and Nutcase

After the venerable Timer IC?

https://en.wikipedia.org/wiki/555_timer_IC

How about improving what we have already first?

PhilipN

Old news, but seasonal visitors said yesterday that Google Maps would have sent them on a more-than-one-hour walk to reach us instead of the 10 minutes we told them it would take. Trouble is, it's the same story back home, where they live in the heart of the metropolis.

"They told me I was gullible ... and I believed them!"