What Is the Cheapest Cloud Provider for H100 GPUs?

The cheapest H100 cloud provider is Vast.ai, with marketplace rates of $1.87–$3.50/hr. The best reliable on-demand options are GMI Cloud ($2.10/hr), RunPod ($2.49/hr), and Lambda Labs ($2.99/hr). Among hyperscalers, Google Cloud charges $3.67/hr and AWS $3.93/hr; CoreWeave is the most expensive at $6.15/hr.
| Provider | Instance | On-Demand $/GPU-hr | Spot $/GPU-hr | Min GPUs | Region | Verified |
|---|---|---|---|---|---|---|
| Vast.ai | Marketplace | $1.87–$3.50 | — | 1 | Various | 2026-02-28 |
| GMI Cloud | H100 SXM Container | $2.10 | — | 1 | US | 2026-02-28 |
| RunPod | Community Cloud | $2.49 | $1.89 | 1 | Various | 2026-02-28 |
| Lambda Labs | On-demand | $2.99 | — | 8 | us-west | 2026-02-28 |
| Google Cloud (A3-High) | a3-highgpu-8g | $3.67 | $2.25 | 8 | us-central1 | 2026-02-28 |
| AWS (P5) | p5.48xlarge | $3.93 | $2.50 | 8 | us-east-1 | 2026-02-28 |
| Azure ND H100 v5 | Standard_ND96isr_H100_v5 | $3.50–$5.00 | — | 8 | East US | 2026-02-28 |
| CoreWeave | HGX H100 | $6.15 | — | 8 | US-East | 2026-02-28 |
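The rate differences above compound quickly over a real workload. A minimal sketch of comparing total job cost across providers, using the on-demand rates from the table (the `job_cost` helper and the 1,000 GPU-hour job size are illustrative, not from any provider's tooling):

```python
# On-demand H100 rates ($/GPU-hr) from the table above — verify before
# budgeting, since cloud GPU prices change frequently.
H100_RATES = {
    "Vast.ai (low end)": 1.87,
    "GMI Cloud": 2.10,
    "RunPod": 2.49,
    "Lambda Labs": 2.99,
    "GCP a3-highgpu-8g": 3.67,
    "AWS p5.48xlarge": 3.93,
    "CoreWeave": 6.15,
}

def job_cost(rate_per_gpu_hr: float, gpus: int, hours: float) -> float:
    """Total cost of running `gpus` GPUs for `hours` wall-clock hours."""
    return rate_per_gpu_hr * gpus * hours

# Example: an 8-GPU job running 125 hours = 1,000 GPU-hours total.
for provider, rate in sorted(H100_RATES.items(), key=lambda kv: kv[1]):
    print(f"{provider:20s} ${job_cost(rate, gpus=8, hours=125):,.0f}")
```

For the same 1,000 GPU-hours, the spread runs from about $1,870 on the cheapest marketplace rate to $6,150 on the most expensive dedicated cloud.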

NVIDIA H200 141GB — Cloud Pricing

H200 pricing is only 10–25% higher than H100, yet the H200 offers 1.76x the memory (141 GB vs 80 GB) and 1.43x the memory bandwidth — almost always better value for 70B+ parameter models.

| Provider | Instance | On-Demand $/GPU-hr | Min GPUs | Verified |
|---|---|---|---|---|
| Lambda Labs | H200 On-demand | $3.29 | 1 | 2026-02-28 |
| GMI Cloud | H200 Container | $3.35 | 1 | 2026-02-28 |
| RunPod | Community Cloud | $3.59 | 1 | 2026-02-28 |
| CoreWeave | HGX H200 | $6.31 | 8 | 2026-02-28 |
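One way to make the H100-vs-H200 value claim concrete is cost per GB of HBM per hour. A quick sketch using the Lambda Labs rates from the tables above (the `dollars_per_gb_hour` metric is an illustrative heuristic, not a standard benchmark):

```python
# Published HBM capacities: H100 SXM = 80 GB HBM3, H200 = 141 GB HBM3e.
def dollars_per_gb_hour(rate_per_gpu_hr: float, hbm_gb: int) -> float:
    """Hourly cost normalized by GPU memory capacity."""
    return rate_per_gpu_hr / hbm_gb

h100 = dollars_per_gb_hour(2.99, 80)    # Lambda Labs H100 on-demand
h200 = dollars_per_gb_hour(3.29, 141)   # Lambda Labs H200 on-demand
print(f"H100: ${h100:.4f}/GB-hr   H200: ${h200:.4f}/GB-hr")
```

By this metric the H200 is roughly 40% cheaper per GB of memory despite the higher sticker price, which is why it tends to win for memory-bound inference.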

NVIDIA A100 80GB — Cloud Pricing

| Provider | Instance | On-Demand $/GPU-hr | Spot $/GPU-hr | Min GPUs | Verified |
|---|---|---|---|---|---|
| Vast.ai | Marketplace | $0.80–$1.50 | — | 1 | 2026-02-28 |
| RunPod | A100 SXM | $1.39 | $0.79 | 1 | 2026-02-28 |
| Lambda Labs | A100 On-demand | $1.79 | — | 8 | 2026-02-28 |
| AWS (P4d) | p4d.24xlarge | $2.75 | $1.40 | 8 | 2026-02-28 |

Frequently Asked Questions About Cloud GPU Pricing

What is the cheapest H100 cloud provider?

The cheapest H100 cloud provider is Vast.ai, at $1.87–$3.50/hr on its marketplace. For reliable on-demand capacity, RunPod ($2.49/hr) and Lambda Labs ($2.99/hr) are the best options. Data verified February 2026.

How much does AWS charge for H100 GPUs?

AWS charges $3.93/GPU-hour on-demand for H100s (P5 instances, us-east-1), after cutting H100 pricing by 44% in June 2025. Spot pricing runs around $2.50/hr, and a 1-year reservation brings the effective rate to roughly $1.90–$2.10/hr.
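Whether a reservation pays off depends on utilization, since reserved capacity is billed around the clock while on-demand is billed only when used. A small sketch of the break-even point, using the example AWS rates above ($3.93 on-demand, ~$2.00 reserved); the `breakeven_utilization` helper is illustrative:

```python
def breakeven_utilization(reserved_rate: float, on_demand_rate: float) -> float:
    """Fraction of all hours you must actually use the GPU for a
    reservation (billed 24/7) to cost less than paying on-demand
    only for the hours you use."""
    return reserved_rate / on_demand_rate

# Using the example rates above: reserved wins once utilization
# exceeds roughly half of all hours.
print(f"{breakeven_utilization(2.00, 3.93):.0%}")
```

In other words, if your GPUs sit idle more than about half the time, on-demand at $3.93/hr is cheaper than a $2.00/hr reservation.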

Should I use H100 or H200 for cloud inference?

For models above roughly 50B parameters, the H200 is almost always the better value. At Lambda Labs, the H200 ($3.29/hr) costs only 10% more than the H100 ($2.99/hr) while offering 76% more memory. A single H200 can hold the FP16 weights of Llama 3 70B (~140 GB), which otherwise require two H100s.
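The memory arithmetic behind that comparison is simple: FP16 uses 2 bytes per parameter, and weights alone set the floor (KV cache and activations need additional headroom). A minimal sketch, with the `weight_memory_gb` helper as an illustrative name:

```python
def weight_memory_gb(params_billion: float, bytes_per_param: int) -> float:
    """Minimum memory (GB) to hold model weights alone at a given
    precision — KV cache and activations require extra headroom."""
    return params_billion * 1e9 * bytes_per_param / 1e9

llama3_70b_fp16 = weight_memory_gb(70, 2)
print(llama3_70b_fp16)  # 140.0 GB: under one H200's 141 GB, over one H100's 80 GB
```

At FP8 (1 byte per parameter), the same model drops to ~70 GB and fits on a single H100, which is one reason quantized inference changes the provider math.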