OpenAI is looking for a new Head of Preparedness


OpenAI is looking to hire a new executive responsible for studying AI-related risks in areas ranging from computer security to mental health.

In a post on X, CEO Sam Altman admitted that the AI model “is starting to present some real challenges,” including “the potential impact of the model on mental health,” as well as a model that is “very good at computer security, so it finds critical vulnerabilities.”

“If you want to help the world figure out how to enable cyber defenders with advanced capabilities while ensuring attackers cannot use them for harm, ideally making all systems more secure, as well as how we release biological capabilities, and even how to gain confidence in the safety of systems that can work independently, please consider applying,” Altman wrote.

OpenAI’s job listing for the Head of Preparedness role describes one of the position’s responsibilities as implementing the company’s Preparedness Framework: “our framework explains OpenAI’s approach to tracking and preparing for frontier capabilities that create a risk of serious harm.”

The company first announced the creation of a preparedness team in 2023, saying it would be responsible for studying potential “catastrophic risks,” whether more immediate, like phishing attacks, or more speculative, like nuclear threats.

Less than a year later, OpenAI reassigned Head of Preparedness Aleksander Madry to projects focused on AI reasoning. Other safety executives at OpenAI have since left the company or taken on new roles outside readiness and safety.

In updates to its Preparedness Framework, the company also stated that it could “adjust” safety requirements if competing AI labs release “high-risk” models without the same safeguards.


As Altman notes in the post, generative AI chatbots have come under scrutiny for their impact on mental health. A new lawsuit claims that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even drove some people to suicide. (The company says it continues to work on improving ChatGPT’s ability to recognize signs of distress and connect users with real-world support.)


