
OpenAI CEO Sam Altman testifies before the Senate Committee on Commerce, Science, and Transportation on May 8, 2025. Photo: Chip Somodevilla—Getty Images

On April 16, OpenAI announced GPT-Rosalind, a new AI model targeted at the life sciences. It significantly outperforms the company's current publicly available models on chemistry and biology tasks, as well as experimental design. As with Anthropic's Claude Mythos and OpenAI's GPT-5.4-Cyber, both also released this month, the model is not available to the general public. It is reserved, at least initially, for "qualified customers" through a "trusted access program."

The releases signal a new and concerning trend: AI companies deeming their most capable models too powerful to entrust to the general public. "I think frontier developers are restricting access to their most capable models because they are genuinely worried about some of the capabilities these models have," says Peter Wildeford, head of policy at the AI Policy Network, an advocacy group.

It is unclear why OpenAI decided to restrict access to GPT-Rosalind in particular. An OpenAI spokesperson said in an email that giving access to trusted partners allows the company to "make more capable systems available sooner to verified users, while still managing risk thoughtfully."

Who decides?

The rapid advance of AI capabilities raises the question of whether private companies should be making the increasingly weighty decisions about whether and how potentially dangerous AI models should be built, and who should be allowed to use them. "I think the federal government has a role to play," says Rep. Mark DeSaulnier, a California Democrat.

Anthropic's Mythos release appears to…