Provider Snapshot
Models tracked: 8
Fleet average throughput: 48.56 tok/s
Median TTFT: 0.52 s
Snapshot date: Oct 18, 2025
Key Takeaways
8 OpenAI models are actively benchmarked, with 23,684 total measurements across 23,015 benchmark runs.
gpt-3.5-turbo leads the fleet at 77.00 tok/s, while gpt-4o-mini delivers 38.60 tok/s.
Average throughput varies widely across the OpenAI lineup: gpt-3.5-turbo is 99.5% faster than gpt-4o-mini and roughly three times as fast as gpt-4 (25.90 tok/s), indicating diverse optimization strategies for different use cases.
Median time to first token across the fleet is 0.52 seconds, quick enough for interactive applications.
The OpenAI fleet shows varied performance characteristics, with a 34.8% coefficient of variation in average throughput, reflecting diverse model architectures; the sketch below reproduces these figures from the tables that follow.
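As a quick sanity check, the headline figures can be reproduced from the per-model averages published in the tables below. This is a minimal Python sketch, assuming the Avg Toks/Sec column is the input to the fleet-wide statistics (a population standard deviation matches the published coefficient of variation):

```python
from statistics import mean, pstdev

# Average tokens/sec per model, copied from the All Models table below.
avg_toks = {
    "gpt-4": 25.90,
    "gpt-4o": 48.00,
    "gpt-4o-mini": 38.60,
    "gpt-3.5-turbo": 77.00,
    "gpt-4-turbo": 34.20,
    "gpt-4.1": 39.50,
    "gpt-4.1-mini": 53.30,
    "gpt-4.1-nano": 72.00,
}

speeds = list(avg_toks.values())
fleet_mean = mean(speeds)               # -> 48.56 tok/s, the snapshot average
cv = pstdev(speeds) / fleet_mean * 100  # -> 34.8% coefficient of variation
gap = (avg_toks["gpt-3.5-turbo"] / avg_toks["gpt-4o-mini"] - 1) * 100  # -> 99.5%

print(f"fleet mean: {fleet_mean:.2f} tok/s")
print(f"coefficient of variation: {cv:.1f}%")
print(f"gpt-3.5-turbo vs gpt-4o-mini: +{gap:.1f}%")
```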
Fastest Models
Provider | Model | Avg Toks/Sec | Min Toks/Sec | Max Toks/Sec | Avg TTFT (s) |
---|---|---|---|---|---|
openai | gpt-3.5-turbo | 77.00 | 1.62 | 144.00 | 0.52 |
openai | gpt-4.1-nano | 72.00 | 1.80 | 153.00 | 0.39 |
openai | gpt-4.1-mini | 53.30 | 9.26 | 103.00 | 0.37 |
openai | gpt-4o | 48.00 | 2.97 | 122.00 | 0.51 |
openai | gpt-4.1 | 39.50 | 1.34 | 80.10 | 0.50 |
openai | gpt-4o-mini | 38.60 | 3.96 | 102.00 | 0.55 |
All Models
Complete list of all OpenAI models tracked in the benchmark system.
Provider | Model | Avg Toks/Sec | Min Toks/Sec | Max Toks/Sec | Avg TTFT (s) |
---|---|---|---|---|---|
openai | gpt-4 | 25.90 | 3.81 | 51.80 | 0.77 |
openai | gpt-4o | 48.00 | 2.97 | 122.00 | 0.51 |
openai | gpt-4o-mini | 38.60 | 3.96 | 102.00 | 0.55 |
openai | gpt-3.5-turbo | 77.00 | 1.62 | 144.00 | 0.52 |
openai | gpt-4-turbo | 34.20 | 1.69 | 51.10 | 0.55 |
openai | gpt-4.1 | 39.50 | 1.34 | 80.10 | 0.50 |
openai | gpt-4.1-mini | 53.30 | 9.26 | 103.00 | 0.37 |
openai | gpt-4.1-nano | 72.00 | 1.80 | 153.00 | 0.39 |
Frequently Asked Questions
Which OpenAI model is fastest?
Based on recent tests, gpt-3.5-turbo shows the highest average throughput among tracked OpenAI models.
How is this data collected?
This provider summary aggregates 23,684 individual prompt measurements across 23,015 monitoring runs over the past month; the sketch below shows how a single measurement of TTFT and throughput could be taken.
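The benchmark harness itself isn't shown on this page, but a single run boils down to streaming one completion and timing the first and last content chunks. This is a minimal sketch using the official openai Python SDK; the model name and prompt are illustrative, and counting one token per streamed chunk is an approximation:

```python
import time
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def measure(model: str, prompt: str) -> tuple[float, float]:
    """Stream one completion and return (ttft_seconds, tokens_per_second)."""
    start = time.perf_counter()
    first = last = None
    n_chunks = 0
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            now = time.perf_counter()
            if first is None:
                first = now  # first content chunk -> time to first token
            last = now
            n_chunks += 1
    ttft = first - start
    # Each content chunk carries roughly one token, so the decode rate
    # (excluding TTFT) is approximately chunks / generation time.
    tps = (n_chunks - 1) / (last - first) if n_chunks > 1 else 0.0
    return ttft, tps

ttft, tps = measure("gpt-4o-mini", "Explain time-to-first-token in one sentence.")
print(f"TTFT: {ttft:.2f} s  throughput: {tps:.1f} tok/s")
```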