google Provider Benchmarks

Comprehensive performance summary covering 3 models.

This provider hub highlights throughput and latency trends across every google model monitored by LLM Benchmarks. Use it to compare hosting tiers, track regressions, and discover the fastest variants in the catalogue.

Provider Snapshot

  • Models Tracked: 3

  • Avg Tokens / Second: 59.83

  • Avg Time to First Token (ms): 1066.67

  • Last Updated: May 10, 2026

Key Takeaways

  • 3 google models are actively benchmarked with 660 total measurements across 607 benchmark runs.

  • gemini-2.5-flash-lite leads the fleet at 79.10 tokens/second, while gemini-2.5-pro trails at 39.40 tokens/second.

  • Throughput varies by 100.8% between the slowest and fastest models in the google lineup, reflecting variants tuned for different speed/quality trade-offs.

  • Avg time to first token across the fleet is 1066.67 ms, showing moderate responsiveness for interactive applications.

  • Relative to the fleet mean, throughput dispersion is moderate (27.1% coefficient of variation), suggesting the models run on similar serving infrastructure.
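The spread and variation figures in these takeaways can be reproduced from the per-model throughput numbers; a minimal sketch (values taken from the tables below, using a population standard deviation for the coefficient of variation, which is an assumption about how the site computes it):

```python
import statistics

# Average tokens/second per google model (from the benchmark tables)
toks_per_sec = {
    "gemini-2.5-flash-lite": 79.10,
    "gemini-2.5-flash": 61.00,
    "gemini-2.5-pro": 39.40,
}

values = list(toks_per_sec.values())

# Spread: fastest vs slowest model, as a percentage of the slowest
spread_pct = (max(values) - min(values)) / min(values) * 100

# Coefficient of variation: population std dev relative to the mean
cv_pct = statistics.pstdev(values) / statistics.mean(values) * 100

print(f"spread: {spread_pct:.1f}%")  # 100.8%
print(f"cv: {cv_pct:.1f}%")          # 27.1%
```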

Fastest Models

Provider | Model                 | Avg Toks/Sec | Min   | Max    | Avg TTF (ms)
google   | gemini-2.5-flash-lite | 79.10        | 37.70 | 114.00 | 500.00
google   | gemini-2.5-flash      | 61.00        | 0.64  | 98.60  | 1000.00
google   | gemini-2.5-pro        | 39.40        | 0.60  | 58.90  | 1700.00

All Models

Complete list of all google models tracked in the benchmark system. Click any model name to view detailed performance data.

Provider | Model                 | Avg Toks/Sec | Min   | Max    | Avg TTF (ms)
google   | gemini-2.5-flash      | 61.00        | 0.64  | 98.60  | 1000.00
google   | gemini-2.5-flash-lite | 79.10        | 37.70 | 114.00 | 500.00
google   | gemini-2.5-pro        | 39.40        | 0.60  | 58.90  | 1700.00
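The fleet-wide averages in the Provider Snapshot follow directly from these rows; a small sketch, assuming a simple unweighted mean across models (rather than one weighted by run counts):

```python
from statistics import mean

# Per-model figures from the table above: (avg tokens/sec, avg TTF in ms)
models = {
    "gemini-2.5-flash": (61.00, 1000.00),
    "gemini-2.5-flash-lite": (79.10, 500.00),
    "gemini-2.5-pro": (39.40, 1700.00),
}

fleet_toks = mean(t for t, _ in models.values())
fleet_ttf = mean(ttf for _, ttf in models.values())

print(f"Avg Tokens / Second: {fleet_toks:.2f}")          # 59.83
print(f"Avg Time to First Token (ms): {fleet_ttf:.2f}")  # 1066.67
```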

Frequently Asked Questions

Which google model has the highest throughput?

Based on recent tests, gemini-2.5-flash-lite shows the highest average throughput among tracked google models.

How much data backs this summary?

This provider summary aggregates 660 individual prompts measured across 607 monitoring runs over the past month.