GPT-5.2-pro-2025-12-11 Benchmarks

Provider: openai

Explore real-world latency and throughput results for GPT-5.2-pro-2025-12-11. These measurements come from automated benchmarking runs against the provider APIs using the same harness that powers the public cloud dashboard.
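The harness itself isn't shown on this page. A minimal sketch of how a single run could derive the two headline metrics, time to first token and tokens per second, from a streaming response (the `stream` iterable stands in for a real API client, which is an assumption):

```python
import time

def benchmark_stream(stream):
    """Time a token stream: returns (TTFT in ms, avg tokens/second).

    `stream` is any iterable yielding tokens; a real harness would
    wrap the provider's streaming API here (assumed, not shown).
    """
    start = time.perf_counter()
    first_token_at = None
    count = 0
    for _ in stream:
        now = time.perf_counter()
        if first_token_at is None:
            first_token_at = now  # first chunk arrived
        count += 1
    elapsed = time.perf_counter() - start
    ttft_ms = (first_token_at - start) * 1000 if first_token_at else None
    tps = count / elapsed if elapsed > 0 else 0.0
    return ttft_ms, tps
```

Averaging these two numbers over several such runs yields the rolling figures reported above.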

Want a broader view of this vendor? Visit the openai provider hub to compare every tracked model side-by-side.

Benchmark Overview

  • Avg Tokens / Second: 1.46

  • Avg Time to First Token (ms): 8090.00

  • Runs Analysed: 3

  • Last Updated: Mar 19, 2026, 12:02 PM

Key Insights

  • GPT-5.2-pro-2025-12-11 streams at 1.46 tokens/second on average across the last 3 benchmark runs.

  • Performance fluctuated by 0.60 tokens/second between the slowest and fastest runs (a 41.1% spread relative to the mean), indicating variable behavior across benchmark runs.

  • Average time to first token is 8090.00 ms (high latency); consider alternatives for latency-sensitive workloads.

  • Latest measurements completed on Mar 19, 2026, 12:02 PM based on 3 total samples.
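The 41.1% figure above follows directly from the reported numbers: the spread between the fastest and slowest runs, divided by the mean throughput. A quick check, using the min, max, and average from the samples table below:

```python
# Figures from the benchmark samples table.
avg_tps, min_tps, max_tps = 1.46, 1.16, 1.76

spread = max_tps - min_tps              # 0.60 tokens/second
relative = spread / avg_tps * 100       # spread as a % of the mean
print(f"{relative:.1f}%")               # 41.1%
```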

Performance Distribution

Distribution of throughput measurements showing performance consistency across benchmark runs.

Performance Over Time

Historical performance trends showing how throughput has changed over the benchmarking period.

GPT-5.2-pro-2025-12-11 Benchmark Samples

Provider | Model                  | Avg Toks/Sec | Min  | Max  | Avg TTF (ms)
openai   | GPT-5.2-pro-2025-12-11 | 1.46         | 1.16 | 1.76 | 8090.00
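The table publishes only summary statistics, but with 3 runs the min, max, and average pin down the remaining sample exactly. A small derivation (the reconstructed `runs` list is an inference from the table, not published data):

```python
from statistics import mean

# Summary statistics from the samples table.
n, avg, lo, hi = 3, 1.46, 1.16, 1.76

# With n = 3, the one unpublished run is fully determined.
middle = round(n * avg - lo - hi, 2)    # 1.46
runs = [lo, middle, hi]
assert abs(mean(runs) - avg) < 1e-9     # consistent with the reported average
```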

Frequently Asked Questions

What is the current performance of GPT-5.2-pro-2025-12-11?

The latest rolling average throughput is 1.46 tokens per second, with an average time to first token of 8090.00 ms across the 3 most recent runs.

How often are these benchmarks updated?

Benchmarks refresh automatically whenever the monitoring cron runs. The most recent run completed on Mar 19, 2026, 12:02 PM.
