Provider Snapshot
Models tracked: 5
Average throughput: 24.88 tok/s
Average time to first token: 1436.00 ms
Last updated: Mar 27, 2026
Key Takeaways
5 anthropic models are actively benchmarked, with 1185 measurements collected across 1185 benchmark runs.
claude-haiku-4.5 leads the fleet at 49.80 tok/s, while the slowest model, claude-4-opus, delivers 17.10 tok/s.
Average throughput varies by 191.2% between the fastest and slowest models in the anthropic lineup, indicating diverse optimization strategies for different use cases.
Average time to first token across the fleet is 1436.00 ms, moderate responsiveness for interactive applications.
The anthropic fleet shows varied performance characteristics (50.3% coefficient of variation in throughput), reflecting diverse model architectures.
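The headline figures above can be reproduced from the per-model averages in the tables below. A minimal sketch using Python's `statistics` module (the population standard deviation is what matches the reported 50.3% coefficient of variation):

```python
from statistics import mean, pstdev

# Average tokens/second and TTFT (ms) per model, taken from the tables below
toks_per_sec = {
    "claude-haiku-4.5": 49.80,
    "claude-opus-4.5": 20.50,
    "claude-4-sonnet": 19.40,
    "Claude Opus 4.1": 17.60,
    "claude-4-opus": 17.10,
}
ttft_ms = [630.00, 1790.00, 1910.00, 1500.00, 1350.00]

fleet_mean = mean(toks_per_sec.values())           # 24.88 tok/s
cv = pstdev(toks_per_sec.values()) / fleet_mean    # ~0.503 -> 50.3% variation
fastest, slowest = max(toks_per_sec.values()), min(toks_per_sec.values())
spread = (fastest - slowest) / slowest             # ~1.912 -> 191.2% spread
avg_ttft = mean(ttft_ms)                           # 1436.00 ms

print(f"mean={fleet_mean:.2f} cv={cv:.1%} spread={spread:.1%} ttft={avg_ttft:.2f}")
```

The 191.2% figure is the fastest model's throughput relative to the slowest, and the 50.3% figure is the fleet's standard deviation relative to its mean.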
Fastest Models
| Provider | Model | Avg Toks/Sec | Min | Max | Avg TTFT (ms) |
|---|---|---|---|---|---|
| anthropic | claude-haiku-4.5 | 49.80 | 3.03 | 72.20 | 630.00 |
| anthropic | claude-opus-4.5 | 20.50 | 2.60 | 32.10 | 1790.00 |
| anthropic | claude-4-sonnet | 19.40 | 6.57 | 31.30 | 1910.00 |
| anthropic | Claude Opus 4.1 | 17.60 | 7.70 | 26.40 | 1500.00 |
| anthropic | claude-4-opus | 17.10 | 5.24 | 22.00 | 1350.00 |
All Models
Complete list of all anthropic models tracked in the benchmark system.
| Provider | Model | Avg Toks/Sec | Min | Max | Avg TTFT (ms) |
|---|---|---|---|---|---|
| anthropic | claude-haiku-4.5 | 49.80 | 3.03 | 72.20 | 630.00 |
| anthropic | claude-opus-4.5 | 20.50 | 2.60 | 32.10 | 1790.00 |
| anthropic | claude-4-sonnet | 19.40 | 6.57 | 31.30 | 1910.00 |
| anthropic | Claude Opus 4.1 | 17.60 | 7.70 | 26.40 | 1500.00 |
| anthropic | claude-4-opus | 17.10 | 5.24 | 22.00 | 1350.00 |
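The "Fastest Models" ranking is simply this table sorted by average throughput, descending. A quick sketch:

```python
# (model, avg tok/s) pairs from the table above
rows = [
    ("claude-haiku-4.5", 49.80),
    ("claude-opus-4.5", 20.50),
    ("Claude Opus 4.1", 17.60),
    ("claude-4-sonnet", 19.40),
    ("claude-4-opus", 17.10),
]

# Sort by the throughput column, fastest first
ranked = sorted(rows, key=lambda r: r[1], reverse=True)
for model, tps in ranked:
    print(f"{model}: {tps:.2f} tok/s")
```

This reproduces the order shown in the "Fastest Models" table: claude-haiku-4.5 first at 49.80 tok/s, claude-4-opus last at 17.10 tok/s.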
Frequently Asked Questions
Which anthropic model has the highest throughput?
Based on recent tests, claude-haiku-4.5 shows the highest average throughput among tracked anthropic models.
How much data backs this summary?
This provider summary aggregates 1185 individual prompts measured across 1185 monitoring runs over the past month.