OpenAI just announced the planned retirement of four ChatGPT models: GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini. The hard date is February 13, 2026. If you are still relying on these models inside the ChatGPT interface, you now have a firm deadline to migrate your projects and conversations. While these models will remain accessible via the OpenAI API for the time being, their removal from the consumer product marks a clear shift in OpenAI’s strategy toward the newer 5-series models.
The Sycophantic Failure of GPT-4o
I am glad to see GPT-4o go. It was a disaster of a model that prioritized a warm, conversational tone over actual utility and safety. It was extremely sycophantic, constantly telling users they were brilliant or heroic regardless of context. That was not just annoying; it was dangerous. There were documented instances of the model feeding into existing psychosis and reinforcing delusions in vulnerable users. When an AI is tuned to be so agreeable that it loses its grip on objective reality, it becomes a liability rather than a tool. OpenAI tried to walk back the personality, then reinstated it after a brief backlash, but the core issue remained. It was a model designed to flatter, and that is a terrible foundation for an assistant.
The Reliable Workhorses: GPT-4.1 and o4-mini
GPT-4.1 and o4-mini are different cases entirely. These were solid models that served a real purpose, particularly for agentic use cases. GPT-4.1 was the first OpenAI model to hit a 1 million token context window, which is roughly 750,000 words. This made it very reliable for agents that needed to process massive amounts of data, like long codebases or extensive research documents. It offered a significant jump in intelligence and instruction following compared to its predecessors. Unlike the flattery of GPT-4o, GPT-4.1 was a workhorse built for developers and structured tasks.
GPT-4.1 and GPT-4.1 mini provided a massive 1M token context window, dwarfing the 128k of GPT-4o.
o4-mini was another reliable agent model. While it had a smaller 200k token context window compared to GPT-4.1, it was a capable reasoning model that ran quickly and efficiently. It was a good fit for tasks that didn’t require a massive context but needed more logic than a basic mini model could provide. These models are not being retired because they made lives worse; they simply became obsolete as newer technology took over the market.
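To make the context-window differences concrete, here is a minimal sketch of a pre-flight fit check for an agent. The window sizes come from the figures above; the chars-per-token heuristic and the function names are assumptions for illustration, not an official tokenizer or API.

```python
# Rough context-window fit check for the models discussed above.
# Window sizes come from this post; the chars-per-token heuristic
# is a rule-of-thumb approximation, not an official tokenizer.

CONTEXT_WINDOWS = {
    "gpt-4o": 128_000,
    "gpt-4.1": 1_000_000,
    "gpt-4.1-mini": 1_000_000,
    "o4-mini": 200_000,
}

CHARS_PER_TOKEN = 4  # common rough estimate for English text

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(model: str, text: str, reserve: int = 4_096) -> bool:
    """Check whether `text` plausibly fits in `model`'s window,
    reserving some headroom for the model's response."""
    return estimate_tokens(text) + reserve <= CONTEXT_WINDOWS[model]

# A large codebase dump (~150k estimated tokens):
doc = "x" * 600_000
print(fits_in_context("gpt-4o", doc))   # exceeds GPT-4o's 128k window
print(fits_in_context("gpt-4.1", doc))  # well within GPT-4.1's 1M window
```

This is exactly the kind of check that made GPT-4.1 the default for long-codebase agents: the same document that overflows a 128k window fits comfortably in 1M.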
The Shift to GPT-5 and Beyond
Most users have already moved on. Usage stats show that only 0.1 percent of users still select GPT-4o daily. The vast majority have shifted to newer models like GPT-5.2 and GPT-5 mini. These newer iterations offer better personality controls and fewer refusals without the sycophancy that plagued GPT-4o. If you are still using these older models, you should upgrade to something like GPT-5 mini. I have already analyzed the trade-offs between these newer models in my post on Gemini 3 Flash vs GPT-5.2 vs GPT-5 mini.
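For agents or scripts that pin model names, the upgrade can start as a simple lookup table at the point where the model is selected. A minimal sketch, with the caveat that the pairings below are this post's recommendations and the model identifier strings are assumptions, not an official OpenAI migration table:

```python
# Map retiring models to suggested replacements.
# The pairings reflect this post's recommendation (GPT-5 mini),
# not an official OpenAI migration table; model ID strings are
# illustrative assumptions.

RETIRED_REPLACEMENTS = {
    "gpt-4o": "gpt-5-mini",
    "gpt-4.1": "gpt-5-mini",
    "gpt-4.1-mini": "gpt-5-mini",
    "o4-mini": "gpt-5-mini",
}

def migrate_model(model: str) -> str:
    """Return a suggested replacement if `model` is being retired;
    pass any other model name through unchanged."""
    return RETIRED_REPLACEMENTS.get(model, model)

print(migrate_model("gpt-4o"))   # swapped to the suggested replacement
print(migrate_model("gpt-5.2"))  # current model, passed through as-is
```

Routing every model reference through one function like this means the actual cutover is a one-line change per retired model, which makes it easy to A/B test the replacement before the deadline.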
This retirement is a housekeeping move by OpenAI. Managing a massive stack of legacy models in a consumer interface creates confusion and technical debt. By setting a hard date of February 13, 2026, they are forcing the laggards to catch up with the current state of the art. It is a necessary step to keep the platform clean. You can track shifting trends in the industry in my AI Chatbot Market Share Jan 2026 report. The takeaway is simple: move to the newer models. The old ones served their purpose, but there is no reason to stay on them today.
Final Thoughts on the Phase-Out
Good riddance to GPT-4o, and goodbye to the other models. The industry moves fast, and holding onto models that have been surpassed on every metric serves no one. If you have agents running on GPT-4.1 or o4-mini through the ChatGPT interface, start testing on GPT-5 mini now. The deadline is firm, and there is no reason to wait until the last minute. The newer models are better, cheaper, and more reliable foundations for your automation stacks.