
OpenAI is hiring an executive focused on AI-related risks to mental health and computer security.

Writing in an X post Saturday (Dec. 27), CEO Sam Altman said the new “Head of Preparedness” role comes amid the rise of new challenges related to artificial intelligence (AI). “The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” Altman wrote.

His comments were flagged in a report by TechCrunch, which also noted that the company’s listing for the job describes the role as being in charge of preparing the company’s framework to explain its “approach to tracking and preparing for frontier capabilities that create new risks of severe harm.”

The report added that OpenAI first launched a preparedness team in 2023, saying it would be charged with studying potential “catastrophic risks,” ranging from immediate threats like phishing to theoretical issues like nuclear attacks. However, TechCrunch added that the company has since reassigned Head of Preparedness Aleksander Madry to a job focused on AI reasoning, and has seen other safety executives depart the startup or move into roles unrelated to safety.

The news comes weeks after OpenAI said it would add new safeguards to its AI models in response to rapid advancements across the industry. Those developments, the company said, create benefits for cyberdefense while bringing dual-use risks. That means they could be used for malicious purposes as…