Provider Snapshot
Models Tracked: 8
Average Throughput: 61.64 tok/s
Median TTFT: 0.59 s
Last Updated: Oct 19, 2025
Key Takeaways
8 fireworks models are actively benchmarked with 6869 total measurements across 6497 benchmark runs.
llama-3.1-8b leads the fleet at 118.00 tokens/second, while the reasoning-focused deepseek-r1 averages 45.50 tok/s.
Average throughput varies by 159.3% across the fireworks model lineup, indicating diverse optimization strategies for different use cases.
Median time to first token across the fleet is 0.59 s, showing strong responsiveness for interactive applications.
The fireworks model fleet shows varied performance characteristics (a 37.3% coefficient of variation in average throughput), reflecting diverse model architectures.
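As a cross-check, the fleet-level figures in this snapshot can be reproduced from the per-model averages listed in the tables below. The following is a minimal sketch with values copied from this page; note that the published median TTFT and spread are presumably computed over individual measurements rather than per-model averages, so those cross-check values differ slightly.

```python
# Minimal sketch: reproducing the fleet-level statistics from the
# per-model averages in the tables below.
from statistics import mean, median, pstdev

avg_tok_per_s = {
    "llama-3.1-8b": 118.00, "llama-3.1-70b": 69.00, "llama-3.1-405b": 47.60,
    "mixtral-8x22b": 62.50, "deepseek-v3": 45.10, "deepseek-r1": 45.50,
    "llama-3.3-70b": 60.20, "kimi-k2": 45.20,
}
avg_ttft_s = {
    "llama-3.1-8b": 0.33, "llama-3.1-70b": 0.41, "llama-3.1-405b": 0.47,
    "mixtral-8x22b": 0.34, "deepseek-v3": 1.14, "deepseek-r1": 0.76,
    "llama-3.3-70b": 0.52, "kimi-k2": 0.74,
}

speeds = list(avg_tok_per_s.values())
fleet_mean = mean(speeds)          # 61.64 tok/s, matching the snapshot header
cv = pstdev(speeds) / fleet_mean   # ~37.3%, the coefficient of variation quoted above
spread = (max(speeds) - min(speeds)) / min(speeds)  # fastest-to-slowest relative spread

# The published figures (159.3% spread, 0.59 s median TTFT) appear to be taken
# over individual runs, so these per-model-average versions come out close
# but not identical (e.g. median TTFT here is ~0.50 s).
print(f"mean={fleet_mean:.2f} tok/s  cv={cv:.1%}  spread={spread:.1%}")
print(f"median TTFT of per-model averages: {median(avg_ttft_s.values()):.2f} s")
```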
Fastest Models
| Provider | Model | Avg Tok/s | Min Tok/s | Max Tok/s | Avg TTFT (s) |
|---|---|---|---|---|---|
| fireworks | llama-3.1-8b | 118.00 | 22.00 | 194.00 | 0.33 |
| fireworks | llama-3.1-70b | 69.00 | 10.90 | 115.00 | 0.41 |
| fireworks | mixtral-8x22b | 62.50 | 15.30 | 97.60 | 0.34 |
| fireworks | llama-3.3-70b | 60.20 | 4.82 | 124.00 | 0.52 |
| fireworks | llama-3.1-405b | 47.60 | 3.03 | 69.30 | 0.47 |
| fireworks | deepseek-r1 | 45.50 | 5.63 | 99.80 | 0.76 |
All Models
Complete list of all fireworks models tracked in the benchmark system.
| Provider | Model | Avg Tok/s | Min Tok/s | Max Tok/s | Avg TTFT (s) |
|---|---|---|---|---|---|
| fireworks | llama-3.1-8b | 118.00 | 22.00 | 194.00 | 0.33 |
| fireworks | llama-3.1-70b | 69.00 | 10.90 | 115.00 | 0.41 |
| fireworks | llama-3.1-405b | 47.60 | 3.03 | 69.30 | 0.47 |
| fireworks | mixtral-8x22b | 62.50 | 15.30 | 97.60 | 0.34 |
| fireworks | deepseek-v3 | 45.10 | 1.38 | 77.10 | 1.14 |
| fireworks | deepseek-r1 | 45.50 | 5.63 | 99.80 | 0.76 |
| fireworks | llama-3.3-70b | 60.20 | 4.82 | 124.00 | 0.52 |
| fireworks | kimi-k2 | 45.20 | 8.70 | 92.70 | 0.74 |
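For context on how throughput and TTFT figures like these are typically collected, here is a minimal sketch that times a single streaming request. It assumes Fireworks' OpenAI-compatible endpoint; the model ID is illustrative, counting streamed chunks only approximates token counts, and the benchmark's actual harness may differ.

```python
# Minimal sketch: time the first streamed chunk (TTFT) and the decode
# rate (tok/s) for one request against an OpenAI-compatible endpoint.
import os
import time
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed OpenAI-compatible endpoint
    api_key=os.environ["FIREWORKS_API_KEY"],
)

start = time.perf_counter()
first_token_at = None
chunks = 0
stream = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # illustrative model ID
    messages=[{"role": "user", "content": "Explain KV caching in two sentences."}],
    max_tokens=256,
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # first visible token
        chunks += 1  # roughly one token per streamed chunk

end = time.perf_counter()
ttft_s = first_token_at - start
# Decode throughput measured after the first token arrives.
tok_per_s = (chunks - 1) / (end - first_token_at) if chunks > 1 else 0.0
print(f"TTFT: {ttft_s:.2f} s   throughput: {tok_per_s:.1f} tok/s")
```

Repeating this across many prompts and averaging per model would yield per-model figures of the kind tabulated above.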
Frequently Asked Questions
Which fireworks model has the highest throughput?
Based on recent tests, llama-3.1-8b shows the highest average throughput among tracked fireworks models.
How much data does this summary cover?
This provider summary aggregates 6869 individual prompt measurements across 6497 monitoring runs over the past month.