3 Common Misunderstandings About AI in 2025

In 2025, misconceptions about AI flourished as people struggled to make sense of the rapid development and adoption of the technology. Here are three popular ones to leave behind in the New Year.

AI models are hitting a wall

When GPT-5 was released in May, people wondered (not for the first time) whether AI was hitting a wall. Despite the substantial naming upgrade, the improvement seemed incremental. The New Yorker ran an article titled "What if A.I. Doesn't Get Much Better Than This?", claiming that GPT-5 was "the latest product to suggest that progress on large language models has stalled." It soon emerged that, despite the naming milestone, GPT-5 was primarily an exercise in delivering performance at a lower cost.

Five months later, OpenAI, Google, and Anthropic have all released models showing substantial progress on economically valuable tasks. "Contra the popular belief that scaling is over," the jump in performance in Google's latest model was "as big as we've ever seen," wrote Google DeepMind's deep learning team lead, Oriol Vinyals, after Gemini 3 was released. "No walls in sight."

There is still reason to wonder how exactly AI models will improve. In domains where data for training is expensive to gather (for example, deploying AI agents as personal shoppers), progress may be slow. "Maybe AI will keep getting better and maybe AI will keep sucking in important ways," wrote Helen Toner, interim executive director at the Center for Security and Emerging Technology. But the idea that progress is stalling is hard to…
