An open-source dataset tracking cloud GPU pricing across 12 providers, along with AI hardware specifications, inference accelerator data, and MLPerf benchmark results. Pricing is updated hourly; hardware specs are updated weekly.
| GPU Model | Memory | FP16/BF16 TFLOPS (dense) | Cloud $/hr | Provider |
|---|---|---|---|---|
| NVIDIA H100 SXM | 80 GB HBM3 | 989.5 | $2.49 | Lambda |
| NVIDIA A100 80GB | 80 GB HBM2e | 312 | $1.29 | RunPod |
| AMD MI300X | 192 GB HBM3 | 1,307 | $3.19 | Vast.ai |
| NVIDIA L40S | 48 GB GDDR6 | 362 | $0.99 | FluidStack |
| Google TPU v5e | 16 GB HBM2 | 197 (BF16) | $1.20 | GCP |
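A common way to compare these rows is cost per unit of compute. The sketch below is a minimal, illustrative example using the figures from the table above; the `usd_per_tflop_hour` metric and the hard-coded list are assumptions for demonstration, not part of the dataset's schema or API.

```python
# Rows copied from the pricing table above: (model, dense FP16/BF16 TFLOPS, $/hr).
# This derived cost-per-TFLOPS ranking is illustrative only.
gpus = [
    ("NVIDIA H100 SXM", 989.5, 2.49),
    ("NVIDIA A100 80GB", 312.0, 1.29),
    ("AMD MI300X", 1307.0, 3.19),
    ("NVIDIA L40S", 362.0, 0.99),
    ("Google TPU v5e", 197.0, 1.20),
]

def usd_per_tflop_hour(tflops: float, price_per_hour: float) -> float:
    """Hourly price divided by throughput: lower means cheaper raw compute."""
    return price_per_hour / tflops

# Rank accelerators from cheapest to most expensive per TFLOPS-hour.
ranked = sorted(gpus, key=lambda g: usd_per_tflop_hour(g[1], g[2]))
for model, tflops, price in ranked:
    print(f"{model:18s} ${usd_per_tflop_hour(tflops, price):.4f}/hr per TFLOPS")
```

Note that raw TFLOPS-per-dollar ignores memory capacity and bandwidth, which often dominate inference cost in practice; the MI300X's 192 GB, for example, can avoid multi-GPU sharding entirely for some models.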
Featured providers:

- **RunPod**: On-demand H100, A100 & RTX GPU cloud for AI inference and training. Competitive per-hour pricing with spot instances available. No minimum commitment.
- **Lambda**: GPU cloud built for deep learning teams. On-demand and reserved NVIDIA H100 and A100 clusters. Used by top AI labs worldwide.
- **Vast.ai**: GPU marketplace with the lowest prices on the market. Decentralized network of providers. Ideal for cost-sensitive inference and experimentation.