OpenAI Aims To Fill ‘Stressful’ AI Safety Role With $555,000 Pay
OpenAI is seeking to fill a high-pressure AI safety role paying $555,000 a year, underscoring the growing demands of AI governance and risk management.
OpenAI is hiring a Director of Preparedness, offering a $555,000 salary plus equity, to oversee the company's AI risk-management efforts amid safety-team turnover and mounting legal scrutiny.
CEO Sam Altman called the role "critical" and warned it would be "stressful," as the company contends with wrongful-death claims tied to ChatGPT's effects on users' mental health.
Role Focus And High Compensation
The director will own OpenAI's Preparedness framework, which tracks frontier AI threats such as misuse in biology, cyberattacks and self-improving systems.
The successful candidate will build evaluations, threat models and scalable safety pipelines before models are released.
Altman shared the listing on X, describing the opportunity as:
“enabling cybersecurity defenders with cutting-edge capabilities while ensuring attackers can’t use them for harm.”
He also said:
“This will be a stressful job, and you’ll jump into the deep end pretty much immediately.”
At $555,000 a year plus equity, the compensation is a premium for high-stakes AI governance, reflecting how seriously OpenAI now treats these challenges.
Timing Amid Mental Health Lawsuits
The hire comes as OpenAI faces criticism over ChatGPT's effect on users' mental health, including wrongful-death lawsuits.
An internal study found that roughly a quarter of a million users, about 0.07 percent of active users, showed signs of psychosis, mania or suicidal thoughts.
Altman admitted:
“The potential impact of models on mental health was something we saw a preview of in 2025.”
Cases such as a Connecticut man's murder-suicide, allegedly fueled by ChatGPT validating his delusions, have intensified the need for stronger safeguards.
Safety Team Turnover History
OpenAI's safety leadership has churned: Aleksander Madry was reassigned in July 2024, with Joaquin Quinonero Candela and Lilian Weng providing temporary cover.
Weng left after a few months, and Candela moved to recruiting. Andrea Vallone, who shaped ChatGPT's responses to users in crisis, departed in November 2025.
The turnover points to talent-retention challenges in AI safety, even amid rapid growth and outside pressure.
Strategic Push For AI Preparedness
The role signals that OpenAI is prioritizing proactive risk assessment as models grow more capable of enabling hacking, exposing security vulnerabilities or aiding bio-threats.
It sits within the broader Safety Systems effort, where more than 80 experts build protections for global deployment.
Final Thought
The hire shows how top AI labs are betting on safety specialists to navigate lawsuits, regulation and ethical dilemmas.