
Inference Benchmarks

Performance metrics across hardware, software, and model configurations


Early Preview

We are working to certify more internal benchmarks for publication. If you're interested in providing hardware or have questions, email benchmarks@haimaker.ai.

Filtered by tag: llama-3.3-70b-instruct. Found 4 benchmark suites.

| Date | Suite Name | GPU | Model | Output TPS | Input TPS | Energy Cost (kWh/MT) |
|------|------------|-----|-------|------------|-----------|----------------------|
| 11/12/2025 | NVIDIA H100 80GB HBM3 (8x) - llama-3.3-70b-instruct | 8x NVIDIA H100 80GB HBM3 (632GB) | meta-llama/llama-3.3-70b-instruct | 9,219.60 | 16,108.82 | 0.06 |
| 11/5/2025 | NVIDIA H200 NVL (2x) - llama-3.3-70b-instruct | 2x NVIDIA H200 NVL (280GB) | meta-llama/llama-3.3-70b-instruct | 5,005.29 | 11,042.39 | 0.03 |
| 10/25/2025 | NVIDIA H20 (8x) - llama-3.3-70b-instruct (High Throughput) | 8x NVIDIA H20 (760GB) | meta-llama/llama-3.3-70b-instruct | 5,091.04 | 7,327.23 | 0.10 |
| 10/24/2025 | NVIDIA H20 (8x) - llama-3.3-70b-instruct | 8x NVIDIA H20 (760GB) | meta-llama/llama-3.3-70b-instruct | 3,370.98 | 6,350.24 | 0.11 |
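Assuming "MT" in the energy column denotes megatokens (millions of tokens), the metric relates average power draw to sustained throughput. The sketch below is illustrative only: the ~700 W per-GPU power figure and the choice to count combined input plus output tokens are assumptions for the example, not details published with these benchmarks.

```python
def energy_cost_kwh_per_megatoken(avg_power_watts: float,
                                  tokens_per_second: float) -> float:
    """kWh consumed per million tokens, given average power draw
    and sustained token throughput."""
    megatokens_per_hour = tokens_per_second * 3600 / 1_000_000
    kwh_per_hour = avg_power_watts / 1000
    return kwh_per_hour / megatokens_per_hour

# Hypothetical illustration for the 8x H100 row: eight GPUs at an
# assumed ~700 W each, combined input + output throughput.
print(round(energy_cost_kwh_per_megatoken(8 * 700, 9219.60 + 16108.82), 2))
# → 0.06
```

Under these assumptions the result lands near the table's reported 0.06 kWh/MT for the H100 suite, but actual measurement methodology (wall power vs. GPU power, which token counts are included) may differ.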