gpt-4o-mini Benchmarks

Provider: openai

Explore real-world latency and throughput results for gpt-4o-mini. These measurements come from automated benchmarking runs against the provider APIs using the same harness that powers the public cloud dashboard.
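As a sketch of how such measurements are typically taken (the dashboard's actual harness is not shown here), the two headline metrics — time to first token and tokens per second — can be recorded by timing a streamed response. The `benchmark_stream` helper and `fake_stream` names below are illustrative, not part of any real API:

```python
import time

def benchmark_stream(stream):
    """Record time to first token (TTFT) and throughput for one streamed
    response -- the two metrics reported on this page."""
    start = time.perf_counter()
    first_token_at = None
    tokens = 0
    for _ in stream:
        tokens += 1
        if first_token_at is None:
            first_token_at = time.perf_counter()
    if first_token_at is None:
        raise ValueError("stream produced no tokens")
    end = time.perf_counter()
    ttft_ms = (first_token_at - start) * 1000.0
    # Throughput over the full stream duration; some harnesses instead
    # divide by only the post-first-token decode time.
    tokens_per_sec = tokens / (end - start)
    return ttft_ms, tokens_per_sec
```

In practice `stream` would be the token iterator returned by a provider SDK's streaming call; here any Python generator that yields tokens works.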

Want a broader view of this vendor? Visit the openai provider hub to compare every tracked model side-by-side.

Benchmark Overview

Avg Tokens / Second

38.40

Avg Time to First Token (ms)

400.00

Runs Analysed

2

Last Updated

Apr 8, 2026, 12:04 AM

Key Insights

  • gpt-4o-mini streams at 38.40 tokens/second on average across the last 2 benchmark runs.

  • Throughput varied by 9.00 tokens/second between the slowest and fastest run (a spread equal to 23.4% of the mean), indicating noticeable run-to-run variability.

  • Average time to first token is 400.00 ms (excellent latency), suitable for latency-sensitive workloads.

  • The latest measurements completed on Apr 8, 2026, 12:04 AM, based on 2 samples in total.
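The spread figure in the insights can be reproduced from the two recorded runs (33.90 and 42.90 tokens/second, from the samples table below). A minimal sketch; the `throughput_stats` helper is illustrative:

```python
def throughput_stats(samples):
    """Summarize throughput runs the way the insights above do:
    mean, range (max - min), and range as a percentage of the mean."""
    mean = sum(samples) / len(samples)
    rng = max(samples) - min(samples)
    return {
        "mean": round(mean, 2),
        "range": round(rng, 2),
        "range_pct_of_mean": round(100 * rng / mean, 1),
    }

stats = throughput_stats([33.90, 42.90])
# → {'mean': 38.4, 'range': 9.0, 'range_pct_of_mean': 23.4}
```

Note that range/mean is what yields the 23.4% figure here; a conventional coefficient of variation (standard deviation over mean) would give a smaller number for these two samples.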

Performance Distribution

Distribution of throughput measurements showing performance consistency across benchmark runs.

Performance Over Time

Historical performance trends showing how throughput has changed over the benchmarking period.

gpt-4o-mini Benchmark Samples

Provider | Model       | Avg Toks/Sec | Min   | Max   | Avg TTF (ms)
openai   | gpt-4o-mini | 38.40        | 33.90 | 42.90 | 400.00

Frequently Asked Questions

How fast is gpt-4o-mini?

The latest rolling average throughput is 38.40 tokens per second, with an average time to first token of 400.00 ms across the 2 most recent runs.

How often are these benchmarks updated?

Benchmarks refresh automatically whenever the monitoring cron runs. The most recent run completed on Apr 8, 2026, 12:04 AM.