together Provider Benchmarks

Comprehensive performance summary covering 10 models.

This provider hub highlights throughput and latency trends across every together model monitored by LLM Benchmarks. Use it to compare hosting tiers, track regressions, and discover the fastest variants in the catalogue.

Provider Snapshot

Models Tracked

10

Avg Tokens / Second

67.78

Avg Time to First Token (ms)

632.00

Last Updated

Mar 8, 2026

Key Takeaways

  • 10 together models are actively benchmarked with 1879 total measurements across 1736 benchmark runs.

  • llama-3.1-8b leads the fleet at 142.00 tokens/second, while llama-3.2-3b rounds out the six fastest at 57.90 tok/s.

  • Throughput varies by 145.3% between the fastest and the sixth-fastest together models (142.00 vs. 57.90 tok/s), indicating diverse optimization strategies for different use cases.

  • Avg time to first token across the fleet is 632.00 ms, showing good responsiveness for interactive applications.

  • The together model fleet shows varied performance characteristics (coefficient of variation: 44.5%), reflecting diverse model architectures.
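The fleet-wide figures above can be reproduced from the per-model throughput averages listed in the tables on this page; a minimal sketch, using the standard library only:

```python
import statistics

# Average tokens/second per together model, taken from the "All Models" table.
toks_per_sec = {
    "llama-3.3-70b": 53.90, "deepseek-r1": 45.50, "mistral-7b": 70.70,
    "qwen-2.5-72b": 55.70, "qwen-2.5-7b": 94.20, "mixtral-8x7b": 60.80,
    "llama-3.2-3b": 57.90, "llama-3.1-405b": 24.50, "llama-3.1-70b": 72.60,
    "llama-3.1-8b": 142.00,
}

values = list(toks_per_sec.values())
mean = statistics.fmean(values)    # fleet-average throughput
stdev = statistics.pstdev(values)  # population standard deviation
cv = stdev / mean                  # coefficient of variation

print(f"avg tok/s: {mean:.2f}")            # 67.78
print(f"coefficient of variation: {cv:.1%}")  # 44.5%
```

Running this recovers both the 67.78 tok/s fleet average and the 44.5% variation coefficient quoted above.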

Fastest Models

Provider | Model          | Avg Toks/Sec | Min   | Max    | Avg TTF (ms)
together | llama-3.1-8b   | 142.00       | 3.05  | 232.00 | 380.00
together | qwen-2.5-7b    | 94.20        | 11.90 | 145.00 | 240.00
together | llama-3.1-70b  | 72.60        | 7.16  | 129.00 | 390.00
together | mistral-7b     | 70.70        | 2.11  | 90.80  | 540.00
together | mixtral-8x7b   | 60.80        | 24.80 | 114.00 | 150.00
together | llama-3.2-3b   | 57.90        | 5.45  | 121.00 | 1480.00

All Models

Complete list of all together models tracked in the benchmark system. Click any model name to view detailed performance data.

Provider | Model          | Avg Toks/Sec | Min   | Max    | Avg TTF (ms)
together | llama-3.3-70b  | 53.90        | 1.29  | 146.00 | 1350.00
together | deepseek-r1    | 45.50        | 1.34  | 113.00 | 1000.00
together | mistral-7b     | 70.70        | 2.11  | 90.80  | 540.00
together | qwen-2.5-72b   | 55.70        | 49.40 | 62.10  | 340.00
together | qwen-2.5-7b    | 94.20        | 11.90 | 145.00 | 240.00
together | mixtral-8x7b   | 60.80        | 24.80 | 114.00 | 150.00
together | llama-3.2-3b   | 57.90        | 5.45  | 121.00 | 1480.00
together | llama-3.1-405b | 24.50        | 24.50 | 24.50  | 450.00
together | llama-3.1-70b  | 72.60        | 7.16  | 129.00 | 390.00
together | llama-3.1-8b   | 142.00       | 3.05  | 232.00 | 380.00
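Throughput and time to first token combine into a rough end-to-end latency estimate: TTFT plus steady-state decode time. A sketch using the llama-3.1-8b figures from the table; the 500-token completion length is an assumed example workload, not a benchmarked value:

```python
def estimated_latency_s(ttft_ms: float, toks_per_sec: float, out_tokens: int) -> float:
    """Rough wall-clock estimate: time to first token plus decode time at average throughput."""
    return ttft_ms / 1000 + out_tokens / toks_per_sec

# llama-3.1-8b: 380.00 ms avg TTFT, 142.00 tok/s avg throughput (from the table above).
# out_tokens=500 is a hypothetical completion length for illustration.
print(f"{estimated_latency_s(380.00, 142.00, 500):.2f} s")  # ≈ 3.90 s
```

This is only a first-order estimate; real latency also depends on prompt length, batching, and load, which per-model averages cannot capture.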

Frequently Asked Questions

Which together model is fastest?

Based on recent tests, llama-3.1-8b shows the highest average throughput among tracked together models.

How much data backs this summary?

This provider summary aggregates 1879 individual prompts measured across 1736 monitoring runs over the past month.