GPT-5.2-pro Benchmarks

Provider: openai

Explore real-world latency and throughput results for GPT-5.2-pro. These measurements come from automated benchmarking runs against the provider APIs using the same harness that powers the public cloud dashboard.
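The exact harness is not published on this page, but the sketch below illustrates how a single run of this kind can be measured. It assumes the official OpenAI Python SDK's streaming chat completions interface, approximates the token count by streamed content chunks, and uses a hypothetical model identifier; treat it as an illustration, not the dashboard's actual implementation.

```python
import time
from openai import OpenAI  # assumes the official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def benchmark_stream(model: str, prompt: str) -> dict:
    """Measure time-to-first-token and streaming throughput for one run."""
    start = time.perf_counter()
    first_token_at = None
    chunk_count = 0

    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    for chunk in stream:
        delta = chunk.choices[0].delta.content if chunk.choices else None
        if delta:
            if first_token_at is None:
                first_token_at = time.perf_counter()
            chunk_count += 1  # approximates tokens by streamed content chunks

    elapsed = time.perf_counter() - start
    ttft_ms = (first_token_at - start) * 1000 if first_token_at else None
    return {"ttft_ms": ttft_ms, "tokens_per_second": chunk_count / elapsed}


# "gpt-5.2-pro" is an assumed model identifier for illustration only.
print(benchmark_stream("gpt-5.2-pro", "Summarise the benefits of streaming APIs."))
```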

Want a broader view of this vendor? Visit the openai provider hub to compare every tracked model side-by-side.


Benchmark Overview

  • Avg Tokens / Second: 1.59

  • Avg Time to First Token (ms): 0.00

  • Runs Analysed: 17

  • Last Updated: Dec 29, 2025, 05:30 AM

Key Insights

  • GPT-5.2-pro streams at 1.59 tokens/second on average across the last 17 benchmark runs.

  • Performance fluctuated by a standard deviation of 0.32 tokens/second (a 20.1% coefficient of variation), indicating variable behavior across benchmark runs; a quick check of that figure follows this list.

  • Latest measurements completed on Dec 29, 2025, 05:30 AM based on 17 total samples.
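For reference, the coefficient of variation quoted above is simply the standard deviation divided by the mean throughput. A minimal check using the published figures:

```python
mean_tps = 1.59   # average tokens/second from the overview above
std_tps = 0.32    # reported fluctuation (standard deviation)

cv = std_tps / mean_tps   # coefficient of variation
print(f"{cv:.1%}")        # prints 20.1%
```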

Performance Distribution

Distribution of throughput measurements showing performance consistency across benchmark runs.

Performance Over Time

Historical performance trends showing how throughput has changed over the benchmarking period.


Benchmark Samples

Provider    Model          Avg Toks/Sec    Min     Max     Avg TTFT (ms)
openai      GPT-5.2-pro    1.59            1.42    1.74    0.00

Frequently Asked Questions

How fast is GPT-5.2-pro?

The latest rolling average throughput is 1.59 tokens per second, with an average time to first token of 0.00 ms across the 17 most recent runs.

How often are these benchmarks updated?

Benchmarks refresh automatically whenever the monitoring cron runs. The most recent run completed on Dec 29, 2025, 05:30 AM.

Related Links