Provider Snapshot
Models tracked: 9
Fleet average throughput: 221.22 tok/s
Median time to first token: 0.60 ms
Snapshot date: Oct 18, 2025
Key Takeaways
9 cerebras models are actively benchmarked, with 5112 measurements collected across 2318 benchmark runs.
llama-4-scout-17b leads the fleet at 276.00 tokens/second; qwen-3-coder-480b rounds out the top six at 204.00 tok/s.
Throughput spans a 35.3% spread between the fastest model (276.00 tok/s) and the sixth-fastest (204.00 tok/s), reflecting different optimization trade-offs across the lineup.
Median time to first token across the fleet is 0.60 ms, indicating strong responsiveness for interactive applications.
The cerebras fleet shows consistent per-model throughput (14.0% coefficient of variation across model averages), suggesting standardized infrastructure.
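The fleet-level figures above can be recomputed directly from the per-model averages in the All Models table below. A minimal sketch (model names and values copied from that table; the spread is taken between the fastest model and the sixth-fastest, the two named in the takeaways):

```python
# Per-model average throughput (tok/s), from the All Models table.
avg_tok_s = {
    "llama-4-scout-17b": 276.00,
    "llama-3.1-8b": 259.00,
    "llama-4-maverick-17b": 234.00,
    "llama-3.3-70b": 233.00,
    "qwen-3-32b": 223.00,
    "qwen-3-coder-480b": 204.00,
    "gpt-oss-120b": 195.00,
    "qwen-3-235b-thinking": 190.00,
    "qwen-3-235b-instruct": 177.00,
}

values = list(avg_tok_s.values())
mean = sum(values) / len(values)                            # fleet average
var = sum((v - mean) ** 2 for v in values) / len(values)    # population variance
cv = (var ** 0.5) / mean * 100                              # coefficient of variation, %

# Spread between the fastest model (276) and the sixth-fastest (204).
spread = (max(values) - 204.00) / 204.00 * 100

print(f"fleet mean: {mean:.2f} tok/s")        # 221.22
print(f"variation coefficient: {cv:.1f}%")    # 14.0
print(f"spread: {spread:.1f}%")               # 35.3
```

Using the population (rather than sample) variance reproduces the 14.0% figure stated above.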
Fastest Models
| Provider | Model | Avg Toks/Sec | Min | Max | Avg TTF (ms) |
|---|---|---|---|---|---|
| cerebras | llama-4-scout-17b | 276.00 | 27.20 | 409.00 | 0.20 |
| cerebras | llama-3.1-8b | 259.00 | 1.33 | 899.00 | 0.43 |
| cerebras | llama-4-maverick-17b | 234.00 | 10.70 | 309.00 | 0.22 |
| cerebras | llama-3.3-70b | 233.00 | 1.03 | 315.00 | 0.42 |
| cerebras | qwen-3-32b | 223.00 | 1.53 | 433.00 | 0.71 |
| cerebras | qwen-3-coder-480b | 204.00 | 1.99 | 282.00 | 0.45 |
All Models
Complete list of all cerebras models tracked in the benchmark system. Click any model name to view detailed performance data.
| Provider | Model | Avg Toks/Sec | Min | Max | Avg TTF (ms) |
|---|---|---|---|---|---|
| cerebras | llama-3.1-8b | 259.00 | 1.33 | 899.00 | 0.43 |
| cerebras | llama-3.3-70b | 233.00 | 1.03 | 315.00 | 0.42 |
| cerebras | gpt-oss-120b | 195.00 | 1.54 | 302.00 | 0.99 |
| cerebras | qwen-3-32b | 223.00 | 1.53 | 433.00 | 0.71 |
| cerebras | llama-4-scout-17b | 276.00 | 27.20 | 409.00 | 0.20 |
| cerebras | llama-4-maverick-17b | 234.00 | 10.70 | 309.00 | 0.22 |
| cerebras | qwen-3-235b-instruct | 177.00 | 1.07 | 309.00 | 1.41 |
| cerebras | qwen-3-235b-thinking | 190.00 | 1.71 | 294.00 | 0.57 |
| cerebras | qwen-3-coder-480b | 204.00 | 1.99 | 282.00 | 0.45 |
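The Fastest Models table is simply the All Models list ranked by average throughput. A minimal sketch of that derivation, using the rows above:

```python
# All Models rows as (model, avg tok/s), in table order.
all_models = [
    ("llama-3.1-8b", 259.00),
    ("llama-3.3-70b", 233.00),
    ("gpt-oss-120b", 195.00),
    ("qwen-3-32b", 223.00),
    ("llama-4-scout-17b", 276.00),
    ("llama-4-maverick-17b", 234.00),
    ("qwen-3-235b-instruct", 177.00),
    ("qwen-3-235b-thinking", 190.00),
    ("qwen-3-coder-480b", 204.00),
]

# Sort descending on average throughput and keep the top six,
# reproducing the Fastest Models ranking.
fastest = sorted(all_models, key=lambda row: row[1], reverse=True)[:6]
for name, tok_s in fastest:
    print(f"{name}: {tok_s:.2f} tok/s")
```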
Frequently Asked Questions
Which cerebras model has the highest throughput?
Based on recent tests, llama-4-scout-17b shows the highest average throughput among tracked cerebras models.
How much data does this summary cover?
This provider summary aggregates 5112 individual prompts measured across 2318 monitoring runs over the past month.