Friday, February 20, 2026

OpenAI wants to hire someone to address ChatGPT risks that cannot be predicted

OpenAI is betting big on a role aimed at heading off AI risks before they escalate. The company has created a new leadership position called “Head of Preparedness,” focused on identifying and mitigating the most serious threats that advanced AI models can pose. The responsibility comes with a headline-grabbing compensation package of $555,000 plus equity.

In a public post announcing the opening, Sam Altman called it “a crucial role at an important time” and noted that while AI models are now capable of “many great things,” they are also “starting to present some real challenges.”

We are hiring a Head of Preparedness. This is a crucial role at an important time. Models are evolving rapidly and are now capable of many great things, but they also present some real challenges. The potential impact of models on mental health was something we…

– Sam Altman (@sama) December 27, 2025

What the Head of Preparedness will actually do

The person holding this position will focus on extreme but realistic AI risks, including misuse, cybersecurity threats, biological concerns, and broader societal harms. Sam Altman said OpenAI now needs a “more nuanced understanding” of how growing capabilities could be abused without blocking the benefits.

He didn’t sugarcoat the job either. “This will be a stressful job,” Altman wrote, adding that whoever takes it “will be thrown in at the deep end pretty much immediately.”

The hiring comes at a sensitive time for OpenAI, which has faced increasing regulatory scrutiny over AI safety in the past year. That pressure has intensified with allegations linking ChatGPT interactions to several suicides, raising broader concerns about the impact of AI on mental health.

In one case, the parents of a 16-year-old sued OpenAI, alleging the chatbot encouraged their son to plan his suicide, prompting the company to introduce new safety measures for users under 18.

Another lawsuit alleges ChatGPT fueled paranoid delusions in a separate case that ended in a murder-suicide, prompting OpenAI to say it was working on better ways to detect signs of distress, de-escalate conversations, and direct users to real-world support.

OpenAI’s safety push comes at a time when millions of users place emotional trust in ChatGPT and regulators are examining risks to children, underscoring why preparedness matters beyond the technology itself.
