Provider Snapshot
Models tracked: 40
Fleet avg throughput: 42.74 tok/s
Fleet avg time to first token: 939.00 ms
Last updated: Mar 26, 2026
Key Takeaways
40 OpenAI models are actively benchmarked, with 8,051 total measurements across 7,836 benchmark runs.
o3 Mini leads the fleet at 109.00 tokens/second, while o1 and GPT-5.1-codex-max follow at 85.50 tok/s.
Performance varies by 27.5% across the OpenAI model lineup, indicating diverse optimization strategies for different use cases.
Average time to first token across the fleet is 939.00 ms, showing good responsiveness for interactive applications.
The OpenAI model fleet shows varied performance characteristics (a 68.8% coefficient of variation), reflecting diverse model architectures.
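The coefficient of variation cited above is the population standard deviation of per-model average throughputs divided by their mean. A minimal sketch of that computation, using a small illustrative subset of the throughput figures from the tables below (not the full fleet, so the resulting percentage will differ from the fleet-wide 68.8%):

```python
import statistics

# Avg tok/s for a handful of models from the tables below (subset only).
throughputs = [109.00, 85.50, 63.20, 32.20, 8.85]

mean = statistics.mean(throughputs)
# Coefficient of variation: population std dev over mean, as a percentage.
cv = statistics.pstdev(throughputs) / mean * 100
print(f"mean={mean:.2f} tok/s, CV={cv:.1f}%")
```

A high CV like this simply reflects that the lineup spans both throughput-optimized "mini"/"nano" models and slower reasoning-heavy "pro" models.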
Fastest Models
| Provider | Model | Avg Toks/Sec | Min | Max | Avg TTFT (ms) |
|---|---|---|---|---|---|
| openai | o3 Mini | 109.00 | 8.14 | 169.00 | 0.00 |
| openai | o3-mini-2025-01-31 | 107.00 | 15.50 | 160.00 | 0.00 |
| openai | GPT-5.4-nano | 91.60 | 43.90 | 129.00 | 410.00 |
| openai | GPT-5.4-nano-2026-03-17 | 86.70 | 36.80 | 125.00 | 490.00 |
| openai | GPT-5.1-codex-max | 85.50 | 14.00 | 118.00 | 1160.00 |
| openai | o1 | 85.50 | 21.60 | 147.00 | 0.00 |
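Throughput alone does not determine responsiveness: a model with high tok/s but a long time to first token can still feel slow. A rough way to combine the two columns above, assuming end-to-end time ≈ TTFT + output tokens / throughput (a simplification that ignores network overhead and throughput warm-up):

```python
def estimated_latency_s(ttft_ms: float, toks_per_sec: float, output_tokens: int) -> float:
    """Rough wall-clock seconds to receive a response of `output_tokens` tokens."""
    return ttft_ms / 1000 + output_tokens / toks_per_sec

# e.g. a 500-token answer from GPT-5.1-codex-max (85.50 tok/s, 1160 ms TTFT)
print(f"{estimated_latency_s(1160, 85.50, 500):.1f} s")
```

By this estimate, o1's identical 85.50 tok/s with a near-zero reported TTFT would complete the same response about a second sooner.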
All Models
Complete list of all openai models tracked in the benchmark system. Click any model name to view detailed performance data.
| Provider | Model | Avg Toks/Sec | Min | Max | Avg TTFT (ms) |
|---|---|---|---|---|---|
| openai | | 15.30 | 1.94 | 34.40 | 1600.00 |
| openai | GPT-5.2-pro | 8.85 | 4.27 | 13.90 | 4800.00 |
| openai | GPT-5.2 | 28.10 | 4.37 | 46.90 | 930.00 |
| openai | GPT-5.1-codex-max | 85.50 | 14.00 | 118.00 | 1160.00 |
| openai | GPT-5.1-codex-mini | 25.60 | 1.70 | 49.60 | 1170.00 |
| openai | GPT-5.1-codex | 26.80 | 1.05 | 51.90 | 1250.00 |
| openai | GPT-5.1 | 32.20 | 2.05 | 63.60 | 1040.00 |
| openai | o4 Mini | 49.60 | 4.08 | 76.50 | 0.00 |
| openai | o3 Mini | 109.00 | 8.14 | 169.00 | 0.00 |
| openai | gpt-4.1-nano | 70.80 | 18.40 | 149.00 | 450.00 |
| openai | gpt-4.1-mini | 51.70 | 15.60 | 109.00 | 430.00 |
| openai | gpt-4.1 | 41.10 | 15.40 | 82.60 | 540.00 |
| openai | gpt-4o | 63.20 | 8.68 | 142.00 | 1580.00 |
| openai | gpt-4-turbo | 32.50 | 1.00 | 51.50 | 520.00 |
| openai | gpt-3.5-turbo | 73.50 | 4.00 | 126.00 | 530.00 |
| openai | gpt-4 | 26.50 | 4.09 | 46.40 | 640.00 |
| openai | gpt-4o-mini | 39.70 | 7.97 | 63.40 | 400.00 |
| openai | o1-pro | 9.91 | 1.83 | 17.90 | 430.00 |
| openai | GPT-5.3-codex | 23.80 | 7.89 | 36.90 | 950.00 |
| openai | GPT-5.4 | 29.50 | 15.80 | 40.60 | 810.00 |
| openai | GPT-5-chat-latest | 51.70 | 13.40 | 81.60 | 540.00 |
| openai | GPT-5-pro | 3.75 | 1.19 | 5.65 | 0.00 |
| openai | GPT-5.1-2025-11-13 | 36.70 | 12.80 | 61.80 | 760.00 |
| openai | GPT-5.1-chat-latest | 29.40 | 13.90 | 46.60 | 920.00 |
| openai | GPT-5.2-2025-12-11 | 30.00 | 18.20 | 39.50 | 660.00 |
| openai | GPT-5.2-chat-latest | 10.90 | 1.80 | 23.20 | 1580.00 |
| openai | GPT-5.2-pro-2025-12-11 | 1.89 | 1.05 | 3.25 | 8000.00 |
| openai | GPT-5.4-2026-03-05 | 30.20 | 18.70 | 42.00 | 690.00 |
| openai | GPT-5.4-mini-2026-03-17 | 75.30 | 9.23 | 119.00 | 590.00 |
| openai | GPT-5.4-nano-2026-03-17 | 86.70 | 36.80 | 125.00 | 490.00 |
| openai | o1 | 85.50 | 21.60 | 147.00 | 0.00 |
| openai | o3 | 35.80 | 12.70 | 62.30 | 0.00 |
| openai | o3-2025-04-16 | 38.00 | 13.80 | 67.80 | 0.00 |
| openai | o3-mini-2025-01-31 | 107.00 | 15.50 | 160.00 | 0.00 |
| openai | o3-pro | 6.59 | 1.91 | 12.20 | 480.00 |
| openai | o3-pro-2025-06-10 | 6.36 | 2.10 | 10.90 | 930.00 |
| openai | o4-mini-2025-04-16 | 52.20 | 28.90 | 73.80 | 0.00 |
| openai | GPT-5.4-mini | 78.10 | 16.20 | 111.00 | 530.00 |
| openai | GPT-5.4-nano | 91.60 | 43.90 | 129.00 | 410.00 |
| openai | GPT-5-codex | 8.57 | 1.95 | 17.00 | 1750.00 |
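The "Fastest Models" table above is simply this full table ranked by average throughput. A sketch of that ranking over a small subset of rows (tuples are model name, avg tok/s, avg TTFT ms, taken from the table):

```python
# Subset of (model, avg_toks_per_sec, avg_ttft_ms) rows from the table above.
rows = [
    ("o3 Mini", 109.00, 0.00),
    ("gpt-4o", 63.20, 1580.00),
    ("GPT-5-pro", 3.75, 0.00),
    ("gpt-3.5-turbo", 73.50, 530.00),
]

# Sort descending by average throughput (second field).
fastest = sorted(rows, key=lambda r: r[1], reverse=True)
for model, tps, ttft in fastest:
    print(f"{model:16s} {tps:7.2f} tok/s  TTFT {ttft:6.0f} ms")
```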
Frequently Asked Questions
Which OpenAI model has the highest throughput? Based on recent tests, o3 Mini shows the highest average throughput among tracked OpenAI models.
How much data backs this summary? This provider summary aggregates 8,051 individual prompts measured across 7,836 monitoring runs over the past month.