Claude Opus 4.7 Analysis and Comparison | Generated by AI
Question: How is Opus 4.7? Is it good? What’s the feedback? What’s different from Opus 4.6?
Answer:
Claude Opus 4.7 was released on April 16, 2026, and represents a notable improvement over Opus 4.6, particularly in advanced software engineering and complex coding tasks.
Overall Assessment
Opus 4.7 advances meaningfully beyond Opus 4.6 in advanced software engineering, with the largest gains on the most difficult tasks. Early users report being able to delegate their hardest coding work to Opus 4.7 with confidence, including tasks that previously required close supervision. The model handles complex, long-running tasks with greater rigor and consistency.
Key Improvements Over Opus 4.6
1. Advanced Coding Capabilities
- Multiple companies reported significant improvements in coding benchmarks
- On a 93-task coding benchmark, Claude Opus 4.7 improved the resolution rate by 13% over Opus 4.6, and solved four tasks that neither Opus 4.6 nor Sonnet 4.6 could
- Better at finding and fixing bugs, with improved code quality and fewer errors
- More capable of handling multi-step workflows and long-running autonomous tasks
2. Enhanced Vision Capabilities
- Processes images at higher resolution: up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times the pixel count handled by prior Claude models
- Dramatically improved visual acuity in testing (98.5% on one benchmark versus 54.5% for Opus 4.6)
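The long-edge limit above implies a simple client-side downscaling rule. As an illustrative sketch, assuming only the 2,576-pixel figure from the announcement (the helper below is hypothetical, not part of any Anthropic SDK):

```python
def fit_long_edge(width: int, height: int, max_long_edge: int = 2576) -> tuple[int, int]:
    """Scale dimensions down so the longer side is at most max_long_edge.

    Images already within the limit are returned unchanged; aspect
    ratio is preserved (rounded to the nearest pixel).
    """
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

# A 4000x3000 photo scales down to fit the 2576-pixel long edge;
# a 1024x768 image passes through unchanged.
print(fit_long_edge(4000, 3000))  # -> (2576, 1932)
print(fit_long_edge(1024, 768))   # -> (1024, 768)
```

Sending images at or below this limit avoids paying for pixels the model would discard anyway.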
3. Better Instruction Following
- Opus 4.7 is substantially better at following instructions
- Takes instructions more literally and precisely, though this means prompts written for earlier models may need adjustment
4. Improved Professional Output Quality
- More tasteful and creative when completing professional tasks
- Produces higher-quality interfaces, slides, and documents
- Better design decisions and stronger creative choices
5. Enhanced Finance and Knowledge Work
- State-of-the-art performance on finance agent evaluations
- Opus 4.7 scored higher than its predecessor on benchmarks including finance agent evaluations and GDPval-AA, which measures economically valuable knowledge work across finance and legal domains
User Feedback
The feedback from early access testers has been overwhelmingly positive across various industries:
- Coding/Development: Companies like Cursor, Replit, Cognition (Devin), and others reported double-digit improvements in task completion rates
- Design: Users noted it’s the best model for building dashboards and data-rich interfaces with genuinely surprising design taste
- Legal: Achieved 90.9% accuracy on BigLaw Bench at high effort
- Enterprise Workflows: Better at handling ambiguous problems, staying productive over longer sessions, and managing complex multi-step tasks
- Autonomy: More capable of working independently on challenging problems without constant supervision
Important Considerations
1. Less Capable Than Mythos: While Opus 4.7 represents an advancement, it remains less capable than Claude Mythos Preview, Anthropic’s most powerful model. Mythos Preview continues to have limited release due to safety concerns outlined in Project Glasswing.
2. Reduced Cyber Capabilities: Anthropic reduced the model’s cyber capabilities during training compared to Mythos Preview and implemented safeguards that automatically detect and block requests indicating prohibited or high-risk cybersecurity uses.
3. Token Usage Changes
- Uses an updated tokenizer that can consume 1.0 to 1.35 times as many tokens for the same input
- May produce more output tokens at higher effort levels due to increased reasoning
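The tokenizer change makes usage estimates a range rather than a point value. A minimal sketch for bracketing that range, assuming only the 1.0–1.35× multiplier reported above (the helper name is hypothetical):

```python
def retokenized_range(opus_4_6_tokens: int,
                      low: float = 1.00, high: float = 1.35) -> tuple[int, int]:
    """Bracket how many tokens the same text may consume under the
    updated tokenizer, using the reported 1.0-1.35x range."""
    return round(opus_4_6_tokens * low), round(opus_4_6_tokens * high)

# A prompt that used 10,000 tokens on Opus 4.6 may use 10,000-13,500 tokens.
print(retokenized_range(10_000))  # -> (10000, 13500)
```

When budgeting context windows or rate limits, planning against the upper bound is the safer choice.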
4. New Effort Level: Anthropic introduced a new “xhigh” effort level between the high and max settings, providing additional control over the balance between reasoning capability and response speed.
Pricing and Availability
Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. The model is available across all Claude products, the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
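At those per-million-token rates, the cost of a call is straightforward to compute. A sketch using only the $5/$25 figures stated above (the function is illustrative, not an official SDK utility):

```python
INPUT_PER_MTOK = 5.00    # USD per million input tokens (unchanged from Opus 4.6)
OUTPUT_PER_MTOK = 25.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the published rates."""
    return (input_tokens * INPUT_PER_MTOK
            + output_tokens * OUTPUT_PER_MTOK) / 1_000_000

# e.g. a request with 50,000 input tokens and 4,000 output tokens:
print(f"${request_cost(50_000, 4_000):.2f}")  # -> $0.35
```

Note that output-heavy workloads dominate the bill: each output token costs five times as much as an input token at these rates.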
Summary
Opus 4.7 is a significant upgrade that excels particularly in complex coding tasks, vision capabilities, and professional work quality. The feedback from early adopters has been very positive, with many calling it a “clear step up” and noting meaningful improvements in real-world applications. However, it’s positioned as a safer, more broadly available model compared to the more powerful but restricted Mythos Preview.
References:
- https://www.anthropic.com/news/claude-opus-4-7
- https://www.investing.com/news/economy-news/anthropic-launches-claude-opus-47-with-enhanced-coding-capabilities-4618554
- https://decrypt.co/364483/anthropic-opus-47-full-stack-ai-studio-mythos