Anthropic has shipped more meaningful Claude updates in three months than most companies ship in a year. Here is every launch since January 2026, with what each one actually does.
January 2026: Claude Cowork
Claude Cowork launched in January as Anthropic’s answer to persistent, agent-driven workflows. The initial release targeted professional use cases like legal and financial analysis, with plugin support baked in from the start. This was not just a chat interface update. It was the foundation for a series of productivity tools that would roll out over the next two months.
February 2026: Models, Integrations, and Code Tools
February was Anthropic’s heaviest month. On February 5, Claude Opus 4.6 launched as the most capable model in the lineup. It ships with a one million token context window in beta and improved multi-step reasoning that breaks tasks into subtasks and runs them in parallel. It also posts top scores on domain-specific benchmarks: 90.2% on BigLaw Bench for legal reasoning, and leading results on finance tasks covering due diligence and market intelligence. It outperforms GPT-5.2 on real-world knowledge work tasks, and pricing stayed flat from Opus 4.5 at $5 per million input tokens and $25 per million output tokens.
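At those rates, estimating what a request costs is simple arithmetic. A minimal sketch, using the per-million-token prices quoted above (the token counts in the example are made up for illustration):

```python
# Opus 4.6 pricing from the announcement: $5 per 1M input tokens, $25 per 1M output tokens.
INPUT_PRICE_PER_MTOK = 5.00
OUTPUT_PRICE_PER_MTOK = 25.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at Opus 4.6 rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK

# Example: feeding in a ~200k-token document and getting back a ~4k-token summary.
print(f"${request_cost(200_000, 4_000):.2f}")  # → $1.10
```

The asymmetry matters in practice: output tokens cost five times as much as input tokens, so long-document summarization stays cheap while generation-heavy workloads dominate the bill.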
Claude Sonnet 4.6 followed on February 17. It is a full upgrade across coding, computer use, long-context reasoning, agent planning, and design. Multimodal from the start, it handles text, voice dictation, and image inputs, and outputs text artifacts, diagrams, and TTS audio. The one million token context window is in beta here as well.
Beyond the models, February packed in a significant amount of tooling. Cowork launched on PC through the Claude Desktop app, with persistent agent threads available to Pro and Max plan users. Excel and PowerPoint integrations went live, with Opus 4.6 powering native spreadsheet operations like pivot tables and conditional formatting, plus full context sharing between the two Office apps. Cowork plugins were released on February 24 with marketplace and admin controls for Team and Enterprise plans. Scheduled tasks followed on February 25, letting users run recurring or on-demand tasks from within Cowork. A new Customize section in Claude Desktop groups skills, plugins, and connectors in one place, with connector access available on the free tier.
On the developer side, Claude Code Security launched with semantic reasoning across full codebases, cross-file analysis, under five percent false positive rates, and automatic patch suggestions. Claude Code Remote Control came alongside Cowork’s persistent thread support, letting users manage coding tasks from Desktop, iOS, or Android.
March 2026: Memory, Marketplace, and More Context
March opened with Claude Memory going free on March 2. Memory built from chat history is now available to all users, including the free tier, with chat search and import/export support. This is the kind of feature that makes the day-to-day experience meaningfully better without requiring a plan upgrade, which is rare for a feature this useful.
The Claude Marketplace launched as an extension of the Cowork plugin system introduced in February. It gives users a centralized place to find and install plugins for professional workflows. Alongside this, Claude Skills for Excel and Slides arrived on March 11, adding skill support and full context sharing across both add-ins, with LLM gateway compatibility for Amazon Bedrock, Google Vertex AI, and Microsoft Foundry.
On March 12, Claude gained the ability to create custom interactive charts, diagrams, and visualizations inline in chat responses. This removes a step from most analytical workflows where you would previously have had to export data elsewhere to get a visual output. It is a small change in description but a meaningful one in practice.
The one million token context window, which shipped in beta with both Opus 4.6 and Sonnet 4.6 in February, deserves a separate callout because the retrieval improvement is not marginal. Long-context retrieval accuracy went from 18% to 76%. That is the difference between a model that loses the thread in long documents and one that can process a full document library without dropping critical detail.
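To get an intuition for what a one million token window actually holds, a back-of-envelope estimate helps. The sketch below uses the common rough heuristic of about four characters per token for English prose; real tokenizers vary, so treat it as a sizing aid, not a guarantee:

```python
CONTEXT_WINDOW = 1_000_000   # the beta limit cited above
CHARS_PER_TOKEN = 4          # rough heuristic for English text; actual tokenization varies

def fits_in_context(documents: list[str]) -> bool:
    """Crude check: do these documents fit in the window, leaving 10% headroom for the response?"""
    est_tokens = sum(len(d) for d in documents) // CHARS_PER_TOKEN
    return est_tokens <= CONTEXT_WINDOW * 0.9

# ~300 pages at ~3,000 characters per page is roughly 225k tokens, well within the window.
pages = ["x" * 3_000] * 300
print(fits_in_context(pages))  # → True
```

By this estimate a full document library of several hundred pages fits in a single request, which is exactly the regime where the 18% to 76% retrieval jump matters.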
Cowork Projects added persistent agent thread management for complex, multi-step tasks. Claude Code Channels added per-organization usage analytics for Claude Code Remote Control on Enterprise plans. Auto mode in Claude Code let the model operate with less manual steering, which matters for teams running Claude Code as part of a larger automated pipeline rather than a developer-in-the-loop workflow. Code review for Claude Code tied back into the security tooling from February, using full codebase semantic analysis to surface issues and suggest patches. Claude Computer Use gave the model the ability to interact with a computer interface directly, which is a meaningful step for agent-based workflows that involve software without an API.
For more context on where Anthropic sits competitively right now, see Anthropic Hits $19B ARR as Apple Runs Its Internal Dev on Claude. And if you are thinking about how Claude Code compares to other coding tools in terms of evaluation methodology, CursorBench-3: How Cursor Evaluates Coding Agents on Real Developer Tasks is worth reading alongside this.
What the Cadence Tells You
Anthropic shipped two flagship model releases, a full productivity suite across Excel and PowerPoint, a developer security tool, persistent agent infrastructure, a marketplace, free memory for all users, inline charting, computer use, and a one million token context window across roughly twelve weeks. That works out to roughly one meaningful release every two weeks, and the pace has not slowed.
The reason that pace is possible is the same reason OpenAI can sustain a similar cadence: both labs are now using their own models to build and improve those same models. That feedback loop shortens iteration cycles. It is not a runaway acceleration, but it does compress the timeline between identifying a capability gap and shipping a fix for it. Both Anthropic and OpenAI have been explicit about this. Engineers at these labs describe their jobs as fundamentally different now because the model handles the grind of implementation, which frees up the humans to focus on what to build next.
If you are building on Claude right now, the practical implication is straightforward. Do not hard-code assumptions about what the model can or cannot do, because those assumptions will be wrong within a few weeks. Structure your stack so that swapping a model version is a one-line change, not a rebuild. The same applies to any assumptions you are making about pricing or context limits, both of which have shifted multiple times already this year.
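One way to make that concrete is to keep the model identifier, and anything that depends on pricing or context limits, in a single config object rather than scattered as literals through the codebase. A minimal sketch of the pattern; the model ID string and the limit and price values below are illustrative placeholders, not confirmed API values:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelConfig:
    model_id: str               # placeholder identifier; check the provider's docs for real strings
    max_context_tokens: int     # assumed limit, likely to shift between releases
    input_price_per_mtok: float
    output_price_per_mtok: float

# Swapping model versions is now a one-line change here, not a codebase-wide search.
CURRENT = ModelConfig("claude-opus-4-6", 1_000_000, 5.00, 25.00)  # placeholder values

def build_request(prompt: str, cfg: ModelConfig = CURRENT) -> dict:
    """Assemble a request payload from the active config instead of hard-coded literals."""
    return {
        "model": cfg.model_id,
        "max_tokens": 4_096,
        "messages": [{"role": "user", "content": prompt}],
    }

print(build_request("hello")["model"])  # → claude-opus-4-6
```

When the next model ships with different pricing or a larger window, the change is confined to one constant, and any cost estimation or context-fitting logic built on the config picks it up automatically.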

