The New Kid on the Block

Anthropic didn’t waste any time. On April 16, 2026, the company officially released Claude Opus 4.7 — its most powerful publicly available model to date. And honestly? It’s impressive. But it also comes with a twist that’s got the tech world buzzing.
Let’s break it down.
Opus 4.7 is a direct upgrade to Opus 4.6. Anthropic built it to tackle the kind of complex, long-running software engineering tasks that used to require constant hand-holding. Think of it as the difference between hiring a junior developer who needs constant check-ins versus a senior engineer who just gets it done. The model handles difficult tasks with more rigor, checks its own work before reporting back, and follows instructions far more precisely than its predecessor.
That last part? It’s actually a double-edged sword. More on that in a moment.
Coding Just Got a Serious Upgrade
Here’s where things get exciting for developers. According to The Decoder, Opus 4.7 scores 64.3% on the SWE-bench Pro coding benchmark. That’s a massive jump from Opus 4.6’s 53.4%. It also beats OpenAI’s GPT-5.4, which clocks in at 57.7%.
That’s not a small gap. That’s a statement.
Early testers agree. Companies like Cursor, Notion, Replit, Shopify, Vercel, Databricks, and Intuit all put Opus 4.7 through its paces before launch. Their verdict? Better results on complex engineering tasks, fewer tool errors, and stronger reliability over long sessions.
Anthropic also introduced a new effort level called “xhigh” — slotting between “high” and “max.” It lets users trade speed for deeper reasoning on the hardest problems. For developers running agentic workflows, that’s a genuinely useful dial to have.
And in Claude Code, there’s a shiny new /ultrareview command. It runs a dedicated code review session that hunts down bugs and design problems the way a meticulous senior engineer would. Pro and Max subscribers get three free sessions. Nice touch.
Your Eyes Just Got Better Too
Coding isn’t the only thing that leveled up. Vision capabilities took a serious leap forward.
Opus 4.7 now processes images at up to 2,576 pixels on the long edge — roughly 3.75 megapixels. That’s more than three times what earlier Claude models could handle. This isn’t a setting you toggle. It’s baked into the model itself.
Why does this matter? Think about computer-use agents that need to read dense screenshots. Or extracting data from complex diagrams. Or analyzing documents packed with fine visual details. All of that just became dramatically more capable.
On the Document Reasoning benchmark (OfficeQA Pro), Opus 4.7 hit 80.6% accuracy, up from 57.1% with Opus 4.6. That’s a staggering improvement. The model also showed significant gains in biomolecular reasoning and visual navigation benchmarks.
One heads-up though: higher resolution means more tokens consumed. If you don’t need the extra detail, downscale your images before sending them. Your wallet will thank you.
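That pre-downscaling step is only a few lines if Pillow is available. The 2,576-pixel long-edge limit comes from the article; the rest is a generic resizing sketch, not an official Anthropic utility:

```python
# A minimal sketch of downscaling an image before upload, assuming
# Pillow is installed. 2576 px is the long-edge limit cited above;
# anything larger just burns tokens if you don't need the detail.
from PIL import Image

LONG_EDGE = 2576  # Opus 4.7's maximum long-edge resolution, per the article

def downscale(path_in: str, path_out: str, long_edge: int = LONG_EDGE) -> None:
    """Shrink an image so its longer side is at most `long_edge` pixels."""
    with Image.open(path_in) as img:
        # thumbnail() preserves aspect ratio and never upscales
        img.thumbnail((long_edge, long_edge))
        img.save(path_out)
```

A 4,000 × 2,000 screenshot, for example, comes out at 2,576 × 1,288 with no change in aspect ratio.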
The Instruction-Following Trap

Remember that double-edged sword we mentioned? Here it is.
Opus 4.7 follows instructions literally. Where Opus 4.6 would sometimes loosely interpret or quietly skip parts of a prompt, Opus 4.7 takes everything at face value. That sounds great — and it mostly is. But it also means prompts written for older models may produce unexpected results.
Anthropic explicitly warns developers to re-tune their existing prompts and harnesses before migrating. If your old prompt was a little sloppy or relied on the model “filling in the gaps,” you’ll need to clean it up.
Think of it like swapping a lenient contractor for an exacting one. The work gets done better, but you have to give clearer instructions.
The Cybersecurity Curveball
Now here’s the part that’s making headlines beyond the developer community.
Anthropic deliberately throttled Opus 4.7’s cybersecurity capabilities. On purpose. During training.
This is directly tied to Project Glasswing — Anthropic’s initiative addressing the risks of AI in cybersecurity. The company’s more powerful model, Claude Mythos Preview, was found to be capable of outperforming all but the most elite human experts at finding and exploiting software vulnerabilities. That’s a terrifying capability to release into the wild.
So Anthropic made a calculated decision: keep Mythos Preview locked down, and use Opus 4.7 as a testing ground for new cyber safeguards. As The Decoder reports, the model ships with automatic detection and blocking of requests that suggest prohibited or high-risk cybersecurity uses.
The stakes are real. The Arabian Post notes that Reuters reported U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell actually convened bank executives to warn them about cyber risks tied to Anthropic’s latest models. That’s not a typical AI launch story.
The Cyber Verification Program
So what happens if you’re a legitimate security researcher who needs these capabilities?
Anthropic has an answer: the Cyber Verification Program. Security professionals who want to use Opus 4.7 for vulnerability research, penetration testing, or red-teaming can apply. If verified, some of the safeguards get lifted for their specific use cases.
It’s a smart move. It keeps the dangerous stuff away from bad actors while giving legitimate professionals a path forward. The program essentially shifts the trust boundary from “anyone with an API key” to “verified professionals with documented use cases.”
This is new territory for AI deployment. And it signals that Anthropic is serious about responsible release — even when it means limiting what their own model can do.
The Elephant in the Room: Mythos Preview
Let’s be honest about something. Opus 4.7 is impressive. But it’s not Anthropic’s best model.
The Verge points out that in Opus 4.7’s own system card, Anthropic admits the model doesn’t advance the company’s “capability frontier.” Claude Mythos Preview scored higher on every relevant evaluation. Every single one.
Mythos Preview is currently only available to a select group of private partners — Nvidia, JPMorgan Chase, Google, Apple, and Microsoft among them. About 40 tech companies total. The rest of us? We get Opus 4.7.
That’s not a complaint. Opus 4.7 is genuinely excellent. But it’s worth understanding the context: you’re getting the second-best model Anthropic has built, deliberately constrained in specific ways, while the crown jewel stays behind closed doors.
What About the Price?
Good news and… nuanced news.
The per-token pricing stays the same: $5 per million input tokens and $25 per million output tokens. Same as Opus 4.6. No price hike.
But here’s the catch. The Decoder flags an important detail: Opus 4.7 uses a new tokenizer that can produce up to 1.35 times as many tokens for the same text. The model also generates more output tokens at higher effort levels.
Do the math. Same price per token. More tokens per request. Your actual costs per task can rise significantly even though the listed price didn’t change.
Anthropic recommends measuring your actual token consumption on real traffic before doing a full migration. That’s solid advice. Don’t assume your costs stay flat just because the rate card looks the same.
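The math is easy to sketch. The rates below are the listed Opus prices from the article; the 1.35x factor is the reported upper bound on the tokenizer expansion, so treat this as a worst-case estimate rather than a prediction for your workload:

```python
# Illustrative worst-case cost comparison. Rates are the listed Opus
# prices ($5/M input, $25/M output); 1.35x is the upper bound on the
# tokenizer expansion reported by The Decoder. Token counts are made up.
IN_RATE = 5.00 / 1_000_000    # USD per input token
OUT_RATE = 25.00 / 1_000_000  # USD per output token

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request at the listed Opus per-token rates."""
    return input_tokens * IN_RATE + output_tokens * OUT_RATE

# The same prompt and response, measured under both tokenizers.
old_cost = task_cost(input_tokens=20_000, output_tokens=4_000)
new_cost = task_cost(input_tokens=int(20_000 * 1.35),
                     output_tokens=int(4_000 * 1.35))

print(f"Opus 4.6 tokenizer: ${old_cost:.2f}")
print(f"Opus 4.7 tokenizer: ${new_cost:.2f} "
      f"({new_cost / old_cost:.2f}x the old cost)")
```

Same rate card, up to 35% more spent per task. That is exactly why measuring real traffic beats trusting the price sheet.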
Where Can You Get It?
Opus 4.7 is available right now across all Claude products and the API. It’s also on Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. Wide availability, no waitlist for the general model.
The API model identifier is claude-opus-4-7. If you’re migrating from Opus 4.6, Anthropic has published a migration guide worth reading before you flip the switch.
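If your request configs are plain dicts, the identifier swap itself can be as mechanical as this sketch. Only the claude-opus-4-7 id comes from the article; the old id and the payload shape are assumptions for illustration:

```python
# Hypothetical migration helper. Only "claude-opus-4-7" is named in the
# article; the old identifier and the dict-based request shape are
# assumptions for illustration, not Anthropic's documented API.
OLD_MODEL = "claude-opus-4-6"  # assumed prior identifier
NEW_MODEL = "claude-opus-4-7"  # identifier given in the article

def migrate_request(params: dict) -> dict:
    """Return a copy of a request config repointed at Opus 4.7."""
    updated = dict(params)
    if updated.get("model") == OLD_MODEL:
        updated["model"] = NEW_MODEL
    return updated

config = {"model": "claude-opus-4-6", "max_tokens": 4096}
print(migrate_request(config)["model"])  # claude-opus-4-7
```

The id swap is the easy part. Pair it with the prompt re-tuning and token measurement discussed above before flipping production traffic.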
The Bottom Line

Claude Opus 4.7 is a genuinely strong release. It’s the best publicly available model Anthropic has shipped. Coders get a major performance boost. Vision resolution more than tripled. Instruction-following sharpened dramatically. And Anthropic is taking cybersecurity risks seriously in a way that’s unprecedented for a mainstream AI release.
Is it perfect? No. The tokenizer change means hidden cost increases. The literal instruction-following requires prompt rewrites. And the best model Anthropic has built is still locked away from most of us.
But as a signal of where AI is heading — more capable, more responsible, more carefully deployed — Opus 4.7 is a fascinating chapter. The AI race isn’t just about who builds the most powerful model anymore. It’s about who deploys it most wisely.
Anthropic is making a bet that how you release matters as much as what you release. Time will tell if that bet pays off.
Sources
- The Verge — Anthropic releases a new Opus model amid Mythos Preview buzz
- The Decoder — Anthropic’s Claude Opus 4.7 makes a big leap in coding, while deliberately scaling back cyber capabilities
- The Arabian Post — Anthropic sharpens Opus for coders
- Help Net Security — Anthropic releases Claude Opus 4.7 with automated cybersecurity safeguards
- Anthropic Cyber Verification Program
- Anthropic Migration Guide for Opus 4.7





