Anthropic Provider Benchmarks

Comprehensive performance summary covering 9 models.

This provider hub highlights throughput and latency trends across every Anthropic model monitored by LLM Benchmarks. Use it to compare hosting tiers, track regressions, and discover the fastest variants in the catalogue.

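For readers who want to reproduce these metrics on their own prompts: throughput (tokens per second) and time to first token are typically measured from streamed responses. The sketch below is a minimal illustration using the official anthropic Python SDK, not the harness behind this page; the model id, prompt, and the exact throughput definition are assumptions.

```python
# Rough sketch of a single benchmark-style measurement with the official
# `anthropic` Python SDK (pip install anthropic). Model id and prompt are
# placeholders; this is not the harness used to produce the numbers on this page.
import time

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

start = time.perf_counter()
first_token_at = None

with client.messages.stream(
    model="claude-haiku-4-5",  # placeholder model id
    max_tokens=256,
    messages=[{"role": "user", "content": "Explain streaming APIs in two sentences."}],
) as stream:
    for _ in stream.text_stream:  # iterate over streamed text deltas
        if first_token_at is None:
            first_token_at = time.perf_counter()  # first visible output
    usage = stream.get_final_message().usage  # completed message's token usage

elapsed = time.perf_counter() - start
ttft = first_token_at - start  # time to first token, in seconds
# One common throughput definition: output tokens over the generation phase only.
tok_per_s = usage.output_tokens / (elapsed - ttft)

print(f"TTFT {ttft:.2f} s, throughput {tok_per_s:.2f} tok/s")
```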

Provider Snapshot

  • Models Tracked: 9

  • Median Tokens / Second: 24.91

  • Median Time to First Token (s): 1.18

  • Last Updated: Dec 26, 2025

Key Takeaways

  • 9 Anthropic models are actively benchmarked, with 7756 measurements collected across 7756 benchmark runs.

  • claude-haiku-4.5 leads the fleet at 50.50 tokens/second, while claude-opus-4.5 delivers 20.20 tokens/second.

  • Performance varies by 150.0% across the Anthropic model lineup, indicating diverse optimization strategies for different use cases.

  • Median time to first token across the fleet is 1.18 seconds, showing solid responsiveness for interactive applications.

  • The Anthropic model fleet shows varied performance characteristics (a coefficient of variation of 41.3%), reflecting diverse model architectures; a sketch reproducing this figure follows the list.

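The fleet-level variation figures above can be cross-checked against the per-model averages in the All Models table further down. A minimal sketch, assuming the coefficient of variation is computed over per-model average throughputs rather than individual runs (the page does not state this explicitly):

```python
# Cross-check of the fleet statistics quoted in the takeaways, computed from the
# per-model average throughputs (tok/s) in the "All Models" table below.
# Assumption: the 41.3% figure is a coefficient of variation over these
# per-model averages.
from statistics import mean, pstdev

avg_tok_s = {
    "claude-3-5-haiku": 30.20, "claude-3-7-sonnet": 30.60, "claude-3-opus": 20.20,
    "claude-4-opus": 17.30,    "claude-4-sonnet": 20.50,   "Claude Opus 4.1": 17.40,
    "claude-sonnet-4.5": 17.30, "claude-opus-4.5": 20.20,  "claude-haiku-4.5": 50.50,
}

values = list(avg_tok_s.values())
fleet_mean = mean(values)               # ~24.91 tok/s
cv = pstdev(values) / fleet_mean * 100  # ~41.3%

# The 150.0% spread quoted above matches the gap between the two models named in
# the takeaways: claude-haiku-4.5 (50.50 tok/s) vs. claude-opus-4.5 (20.20 tok/s).
spread = (avg_tok_s["claude-haiku-4.5"] / avg_tok_s["claude-opus-4.5"] - 1) * 100

print(f"mean {fleet_mean:.2f} tok/s, CV {cv:.1f}%, spread {spread:.1f}%")
```
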
Fastest Models

| Provider | Model | Avg Toks/Sec | Min Toks/Sec | Max Toks/Sec | Avg TTFT (s) |
| --- | --- | --- | --- | --- | --- |
| anthropic | claude-haiku-4.5 | 50.50 | 8.01 | 71.80 | 0.56 |
| anthropic | claude-3-7-sonnet | 30.60 | 8.91 | 43.50 | 0.59 |
| anthropic | claude-3-5-haiku | 30.20 | 5.82 | 40.40 | 0.54 |
| anthropic | claude-4-sonnet | 20.50 | 4.11 | 32.40 | 1.50 |
| anthropic | claude-3-opus | 20.20 | 2.34 | 27.10 | 1.06 |
| anthropic | claude-opus-4.5 | 20.20 | 4.48 | 29.80 | 1.70 |

All Models

Complete list of all Anthropic models tracked in the benchmark system. Click any model name to view detailed performance data.

| Provider | Model | Avg Toks/Sec | Min Toks/Sec | Max Toks/Sec | Avg TTFT (s) |
| --- | --- | --- | --- | --- | --- |
| anthropic | claude-3-5-haiku | 30.20 | 5.82 | 40.40 | 0.54 |
| anthropic | claude-3-7-sonnet | 30.60 | 8.91 | 43.50 | 0.59 |
| anthropic | claude-3-opus | 20.20 | 2.34 | 27.10 | 1.06 |
| anthropic | claude-4-opus | 17.30 | 7.17 | 23.80 | 1.24 |
| anthropic | claude-4-sonnet | 20.50 | 4.11 | 32.40 | 1.50 |
| anthropic | Claude Opus 4.1 | 17.40 | 6.32 | 24.10 | 1.43 |
| anthropic | claude-sonnet-4.5 | 17.30 | 4.21 | 23.30 | 1.97 |
| anthropic | claude-opus-4.5 | 20.20 | 4.48 | 29.80 | 1.70 |
| anthropic | claude-haiku-4.5 | 50.50 | 8.01 | 71.80 | 0.56 |

Frequently Asked Questions

Which Anthropic model has the highest throughput?

Based on recent tests, claude-haiku-4.5 shows the highest average throughput among tracked Anthropic models.

How much data does this provider summary cover?

This provider summary aggregates 7756 individual prompts measured across 7756 monitoring runs over the past month.