gpt-5.2-codex Benchmarks

Provider: openai

Explore real-world latency and throughput results for gpt-5.2-codex. These measurements come from automated benchmarking runs against the provider APIs using the same harness that powers the public cloud dashboard.

Want a broader view of this vendor? Visit the openai provider hub to compare every tracked model side-by-side.


Benchmark Overview

Avg Tokens / Second

15.30

Avg Time to First Token (ms)

1920.00

Runs Analysed

1

Last Updated

Feb 4, 2026, 06:01 PM
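
The two headline metrics above (tokens/second and time to first token) can be measured from any streaming response. A minimal sketch in Python, assuming the response arrives as an iterable stream of tokens; the `measure_stream` helper is illustrative and not the dashboard's actual harness:

```python
import time

def measure_stream(stream):
    """Return (time-to-first-token in ms, tokens/second) for an iterable token stream."""
    start = time.perf_counter()
    first_token_at = None
    n_tokens = 0
    for _ in stream:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # first token observed
        n_tokens += 1
    end = time.perf_counter()
    if first_token_at is None:
        return None, 0.0  # empty stream: no TTFT, no throughput
    ttft_ms = (first_token_at - start) * 1000.0
    toks_per_sec = n_tokens / (end - start)  # throughput over the whole response
    return ttft_ms, toks_per_sec
```

Whether throughput is divided by total wall time or by time since the first token is a convention choice; the sketch uses total wall time.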

Key Insights

  • gpt-5.2-codex streams at 15.30 tokens/second on average over the most recent benchmark run.

  • Performance fluctuated by 0.00 tokens/second (0.0% coefficient of variation); with only a single run recorded, no run-to-run variation can be observed yet.

  • Average time to first token is 1920.00 ms (moderate latency); latency-sensitive workloads may be better served by alternatives.

  • Latest measurements completed on Feb 4, 2026, 06:01 PM, based on a single sample.
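
The coefficient-of-variation figure in the insights above can be computed directly from the per-run throughput samples. A minimal sketch, assuming samples in tokens/second (the function name is mine):

```python
import statistics

def coefficient_of_variation(samples):
    """CV (%) = stddev / mean * 100. With fewer than two samples there is
    no spread to measure, matching the 0.0% reported for a single run."""
    if len(samples) < 2:
        return 0.0
    return statistics.stdev(samples) / statistics.mean(samples) * 100.0
```

A low CV across many runs suggests stable provider performance; with one run it carries no information.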

Benchmark Samples

Provider | Model         | Avg Toks/Sec | Min   | Max   | Avg TTFT (ms)
openai   | gpt-5.2-codex | 15.30        | 15.30 | 15.30 | 1920.00

Frequently Asked Questions

What throughput does gpt-5.2-codex deliver right now?

The latest rolling average throughput is 15.30 tokens per second, with an average time to first token of 1920.00 ms across the most recent run.

How often are these benchmarks updated?

Benchmarks refresh automatically whenever the monitoring cron runs. The most recent run completed on Feb 4, 2026, 06:01 PM.
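
The "rolling average" referenced above is a mean over the last N benchmark runs. A sketch under that assumption (the window size of 10 is mine, not stated by the dashboard):

```python
from collections import deque

class RollingAverage:
    """Mean of the most recent `window` samples; older samples are evicted."""
    def __init__(self, window=10):
        self.values = deque(maxlen=window)

    def add(self, sample):
        self.values.append(sample)  # deque drops the oldest once full

    def average(self):
        return sum(self.values) / len(self.values) if self.values else 0.0
```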
