
OpenAI Provider Benchmarks

Comprehensive performance summary covering 35 models.

This provider hub highlights throughput and latency trends across every OpenAI model monitored by LLM Benchmarks. Use it to compare hosting tiers, track regressions, and discover the fastest variants in the catalogue.

Visit the OpenAI official website

Provider Snapshot

Models Tracked

35

Avg Tokens / Second

42.04

Avg Time to First Token (ms)

1911.71

Last Updated

May 10, 2026

Key Takeaways

  • 35 OpenAI models are actively benchmarked, with 8,207 total measurements across 7,980 benchmark runs.

  • o3 Mini leads the fleet at 96.70 tokens/second; GPT-5.4-nano, the slowest of the six fastest models, still delivers 64.70 tok/s.

  • Throughput varies by 49.5% even among those top six models (96.70 vs 64.70 tok/s), indicating diverse optimization strategies for different use cases.

  • Avg time to first token across the fleet is 1911.71 ms, indicating moderate responsiveness for interactive applications.

  • The OpenAI model fleet shows widely varied performance (57.2% coefficient of variation), reflecting diverse model architectures.
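The fleet-wide average and the 57.2% variation figure can be reproduced from the per-model throughput column in the All Models table below. A minimal sketch (assuming, as the page's numbers imply, that the variation coefficient is the population standard deviation divided by the mean):

```python
# Fleet average and coefficient of variation for OpenAI model throughput.
# Values are the "Avg Toks/Sec" column from the All Models table.
from statistics import fmean, pstdev

avg_tps = [
    64.60, 24.60, 50.00, 51.10, 83.30, 46.70, 39.60, 40.10, 62.60,
    45.50, 49.30, 76.80, 4.29, 42.40, 31.20, 47.20, 78.50, 55.50,
    29.80, 13.60, 36.20, 9.41, 27.60, 28.50, 61.20, 64.70, 7.29,
    17.10, 8.85, 78.20, 9.93, 37.20, 96.70, 8.42, 43.30,
]

mean = fmean(avg_tps)        # fleet-wide average tokens/second
cv = pstdev(avg_tps) / mean  # relative spread around that average

print(f"mean = {mean:.2f} tok/s, CV = {cv:.1%}")
# → mean = 42.04 tok/s, CV = 57.2%
```

Both results match the snapshot figures above (42.04 tok/s average, 57.2% variation coefficient).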

Fastest Models

| Provider | Model | Avg Toks/Sec | Min | Max | Avg TTF (ms) |
|---|---|---|---|---|---|
| openai | o3 Mini | 96.70 | 33.20 | 161.00 | 140.00 |
| openai | gpt-4.1-nano | 83.30 | 7.30 | 153.00 | 420.00 |
| openai | GPT-5.1-codex-max | 78.50 | 20.70 | 111.00 | 1390.00 |
| openai | o1 | 78.20 | 10.30 | 146.00 | 280.00 |
| openai | GPT-5 Nano | 76.80 | 22.10 | 158.00 | 290.00 |
| openai | GPT-5.4-nano | 64.70 | 8.03 | 114.00 | 640.00 |

All Models

Complete list of all OpenAI models tracked in the benchmark system. Click any model name to view detailed performance data.

| Provider | Model | Avg Toks/Sec | Min | Max | Avg TTF (ms) |
|---|---|---|---|---|---|
| openai | gpt-3.5-turbo | 64.60 | 13.80 | 119.00 | 630.00 |
| openai | gpt-4 | 24.60 | 1.96 | 46.30 | 750.00 |
| openai | gpt-4.1 | 50.00 | 9.33 | 92.20 | 530.00 |
| openai | gpt-4.1-mini | 51.10 | 4.10 | 89.40 | 650.00 |
| openai | gpt-4.1-nano | 83.30 | 7.30 | 153.00 | 420.00 |
| openai | gpt-4o | 46.70 | 10.60 | 107.00 | 590.00 |
| openai | gpt-4o-mini | 39.60 | 8.84 | 65.00 | 550.00 |
| openai | GPT-5 | 40.10 | 2.49 | 71.40 | 380.00 |
| openai | GPT-5-chat-latest | 62.60 | 23.80 | 104.00 | 590.00 |
| openai | GPT-5-codex | 45.50 | 7.06 | 86.60 | 440.00 |
| openai | GPT-5 Mini | 49.30 | 10.30 | 108.00 | 1240.00 |
| openai | GPT-5 Nano | 76.80 | 22.10 | 158.00 | 290.00 |
| openai | GPT-5-pro | 4.29 | 1.64 | 7.99 | 340.00 |
| openai | GPT-5.1 | 42.40 | 10.50 | 75.70 | 710.00 |
| openai | GPT-5.1-chat-latest | 31.20 | 1.91 | 60.50 | 1010.00 |
| openai | GPT-5.1-codex | 47.20 | 14.30 | 66.00 | 1050.00 |
| openai | GPT-5.1-codex-max | 78.50 | 20.70 | 111.00 | 1390.00 |
| openai | GPT-5.1-codex-mini | 55.50 | 12.80 | 103.00 | 980.00 |
| openai | GPT-5.2 | 29.80 | 16.00 | 45.50 | 710.00 |
| openai | GPT-5.2-chat-latest | 13.60 | 1.21 | 45.80 | 1530.00 |
| openai | gpt-5.2-codex | 36.20 | 6.02 | 67.70 | 1160.00 |
| openai | GPT-5.2-pro | 9.41 | 4.52 | 16.80 | 7030.00 |
| openai | GPT-5.3-codex | 27.60 | 8.55 | 42.40 | 800.00 |
| openai | GPT-5.4 | 28.50 | 9.95 | 40.90 | 820.00 |
| openai | GPT-5.4-mini | 61.20 | 25.80 | 91.40 | 570.00 |
| openai | GPT-5.4-nano | 64.70 | 8.03 | 114.00 | 640.00 |
| openai | GPT-5.4-pro | 7.29 | 1.72 | 10.90 | 2460.00 |
| openai | GPT-5.5 | 17.10 | 10.20 | 22.30 | 3310.00 |
| openai | GPT-5.5-pro | 8.85 | 3.53 | 17.20 | 31500.00 |
| openai | o1 | 78.20 | 10.30 | 146.00 | 280.00 |
| openai | o1-pro | 9.93 | 2.46 | 16.50 | 330.00 |
| openai | o3 | 37.20 | 1.96 | 71.00 | 230.00 |
| openai | o3 Mini | 96.70 | 33.20 | 161.00 | 140.00 |
| openai | o3-pro | 8.42 | 2.68 | 21.80 | 2700.00 |
| openai | o4 Mini | 43.30 | 12.70 | 70.10 | 160.00 |
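The Fastest Models table above is simply this full list ranked by average throughput. A minimal sketch of that ranking (model names and figures copied from the table; the cutoff of six is this page's convention):

```python
# Rank the tracked models by "Avg Toks/Sec" and keep the top six,
# reproducing the Fastest Models table.
models = [
    ("gpt-3.5-turbo", 64.60), ("gpt-4", 24.60), ("gpt-4.1", 50.00),
    ("gpt-4.1-mini", 51.10), ("gpt-4.1-nano", 83.30), ("gpt-4o", 46.70),
    ("gpt-4o-mini", 39.60), ("GPT-5", 40.10), ("GPT-5-chat-latest", 62.60),
    ("GPT-5-codex", 45.50), ("GPT-5 Mini", 49.30), ("GPT-5 Nano", 76.80),
    ("GPT-5-pro", 4.29), ("GPT-5.1", 42.40), ("GPT-5.1-chat-latest", 31.20),
    ("GPT-5.1-codex", 47.20), ("GPT-5.1-codex-max", 78.50),
    ("GPT-5.1-codex-mini", 55.50), ("GPT-5.2", 29.80),
    ("GPT-5.2-chat-latest", 13.60), ("gpt-5.2-codex", 36.20),
    ("GPT-5.2-pro", 9.41), ("GPT-5.3-codex", 27.60), ("GPT-5.4", 28.50),
    ("GPT-5.4-mini", 61.20), ("GPT-5.4-nano", 64.70), ("GPT-5.4-pro", 7.29),
    ("GPT-5.5", 17.10), ("GPT-5.5-pro", 8.85), ("o1", 78.20),
    ("o1-pro", 9.93), ("o3", 37.20), ("o3 Mini", 96.70), ("o3-pro", 8.42),
    ("o4 Mini", 43.30),
]

top6 = sorted(models, key=lambda m: m[1], reverse=True)[:6]
for name, tps in top6:
    print(f"{name:<20}{tps:6.2f} tok/s")
```

Running this yields o3 Mini, gpt-4.1-nano, GPT-5.1-codex-max, o1, GPT-5 Nano, and GPT-5.4-nano, in that order, matching the Fastest Models table.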


Frequently Asked Questions

Which OpenAI model is fastest?

Based on recent tests, o3 Mini shows the highest average throughput among tracked OpenAI models.

How much data does this summary cover?

This provider summary aggregates 8,207 individual prompts measured across 7,980 monitoring runs over the past month.