OpenAI is Hiring a New 'Head of Preparedness' to Predict/Mitigate AI's Harms (engadget.com)
- News link: https://slashdot.org/story/25/12/27/2347200/openai-is-hiring-a-new-head-of-preparedness-to-predictmitigate-ais-harms
- Source link: https://www.engadget.com/ai/openai-is-hiring-a-new-head-of-preparedness-to-try-to-predict-and-mitigate-ais-harms-220330486.html?src=rss
> OpenAI is looking for a new Head of Preparedness who can help it anticipate the potential harms of its models and how they can be abused, in order to guide the company's safety strategy.
>
> It comes at the end of a year that's seen OpenAI hit with numerous accusations about ChatGPT's impacts on users' mental health, including a few [2]wrongful death [3]lawsuits. In a post on X about the position, OpenAI CEO Sam Altman [4]acknowledged that the "potential impact of models on mental health was something we saw a preview of in 2025," along with other "real challenges" that have arisen alongside models' capabilities. The Head of Preparedness "is a critical role at an important time," he said.
>
> Per [5]the job listing, the Head of Preparedness (who will make $555K, plus equity) "will lead the technical strategy and execution of OpenAI's [6]Preparedness framework, our framework explaining OpenAI's approach to tracking and preparing for frontier capabilities that create new risks of severe harm."
"These questions are hard," Altman [7]posted on X.com, "and there is little precedent; a lot of ideas that sound good have some real edge cases... This will be a stressful job and you'll jump into the deep end pretty much immediately."
The [8]listing says OpenAI's Head of Preparedness "will lead a small, high-impact team to drive core Preparedness research, while partnering broadly across Safety Systems and OpenAI for end-to-end adoption and execution of the framework." They're looking for someone "comfortable making clear, high-stakes technical judgments under uncertainty."
[1] https://www.engadget.com/ai/openai-is-hiring-a-new-head-of-preparedness-to-try-to-predict-and-mitigate-ais-harms-220330486.html?src=rss
[2] https://www.engadget.com/ai/lawsuit-accuses-chatgpt-of-reinforcing-delusions-that-led-to-a-womans-death-183141193.html
[3] https://www.engadget.com/ai/the-first-known-ai-wrongful-death-lawsuit-accuses-openai-of-enabling-a-teens-suicide-212058548.html
[4] https://x.com/sama/status/2004939524216910323?s=20
[5] https://openai.com/careers/head-of-preparedness-san-francisco/
[6] https://openai.com/careers/head-of-preparedness-san-francisco/
[7] https://x.com/sama/status/2004939524216910323?s=20
[8] https://openai.com/careers/head-of-preparedness-san-francisco/
High-stakes decisions under uncertainty (Score:2)
Just like your models do all the time.
as expected (Score:2)
"... in order to guide the company's safety strategy."
The more interesting thing is what "safety strategy" means. The job is most definitely NOT to improve or ensure safety, it's to provide the appearance that they care about safety. They are to produce metrics that show safety, not to actually improve safety.
Making the salary public is interesting, especially given the recent talk about how AI engineers are paid much more than this position. It's odd that that would be true.
Pop goes the bubble (Score:2)
They should hire a head of preparedness for the post-AI-bubble pop. It's not that Nvidia or OpenAI are worthless companies offering worthless technologies, but they will have to go through bankruptcies to discard all the debt. Having someone start preparing for post-bankruptcy would be highly beneficial even to current investors, who might get an extra penny on the dollar in the settlements.
Re: (Score:2)
The job of mopping up the AI-bubble consequences will be, as always, with everyone else, in this case their "elected representatives".
It will be just like the last time, when the Obama administration was forced to mop up the consequences of the "subprime crisis," a product of Bush-era deregulation policies and the "quantitative easing" of one Greenspan, exploited by the "investment banking community".
This time some other government will have to mop up the consequences of the trumpist voluntarism and ignorance,
Job requirements.... (Score:2)
I can only hope the job requirements include :
- Ability to be nearby in our data center with a large bucket of salt water ready to take action if the "safe word" is sounded.
Reminds me of the Simpsons Episode... (Score:2)
After a hurricane devastates Springfield the church sign read "God Welcomes His Victims".
More like "head of appearing to do something"... (Score:2)
These people obviously do not care what amount of damage they do.
Re: (Score:2)
$555k? That salary sounds, ermmm.... artificial.
Re: (Score:1)
Why not make it $666k?
Re: (Score:1)
damage?
To stupid people who would blindly do what an AI tells them? They would join a dumb cult just as easily, or get conned.
Seems the "safety" is only needed for snowflakes and morons.
This will only be useful... (Score:2)
...to humanity if they hired John Connor.
Hire me! (Score:2)
I'll ensure at least a dozen John Connors are born and trained since childhood in the art of leading an anti machine war effort as a backup.
Not just the hiring... (Score:2)
...but the inevitable firing of this person when bad things happen that they failed to stop will allow OpenAI to say "see! Look! We're doing something about how terrible we are!"
\o/ (Score:1)
If you're concerned - why not stop?
Re: (Score:2)
Read the summary at least. They aren't concerned, they want someone to help them avoid lawsuits.
Re: (Score:2)
They are not concerned. They KNOW they are doing a lot of damage. But the money they make is more important to them.
Re: (Score:2)
Any sufficiently profitable industry wants to self-regulate. The alternative is to be regulated, which limits profits and can even lead to having to pay for damages caused.
Regulation is always inevitable, but arguing that you will regulate yourself is a way of prolonging the inevitable.
Consider a child who negotiates a longer curfew by saying that they will go to bed on time, without the usual complaints or delays, in return for being allowed to stay up longer. It's always a lie, but it's cute. And even
Re: (Score:2)
> Any sufficiently profitable industry wants to self-regulate.
Only in the most general sense of "regulate": to establish a structure that prevents its profits from eroding.