AI Infrastructure Index

An open-source dataset tracking cloud GPU pricing across 12 providers, AI hardware specifications, inference accelerators, and MLPerf benchmark results. Pricing is updated hourly; hardware specs are updated weekly.


Key Stats

- 57+ GPU SKUs tracked
- 12 cloud providers
- 40+ spec categories
- 5,000+ data points
- Hourly pricing updates
- MIT license

Sample Data

| GPU Model | VRAM | FP16 TFLOPS | Cloud $/hr | Provider |
| --- | --- | --- | --- | --- |
| NVIDIA H100 SXM | 80 GB HBM3 | 989.5 | $2.49 | Lambda |
| NVIDIA A100 80GB | 80 GB HBM2e | 312 | $1.29 | RunPod |
| AMD MI300X | 192 GB HBM3 | 1,307 | $3.19 | Vast.ai |
| NVIDIA L40S | 48 GB GDDR6 | 362 | $0.99 | FluidStack |
| Google TPU v5e | 16 GB HBM2e | 197 | $1.20 | GCP |
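As a quick illustration of how the pricing and spec fields can be combined, the sketch below takes the sample rows above and ranks them by FP16 TFLOPS per cloud dollar. The field names here are illustrative, not necessarily the dataset's actual column names:

```python
# Rank the sample GPUs by peak FP16 throughput per hourly dollar.
# Rows mirror the "Sample Data" table; field names are illustrative
# and may differ from the dataset's real schema.
SAMPLE_ROWS = [
    {"model": "NVIDIA H100 SXM",  "vram_gb": 80,  "fp16_tflops": 989.5,  "usd_per_hr": 2.49, "provider": "Lambda"},
    {"model": "NVIDIA A100 80GB", "vram_gb": 80,  "fp16_tflops": 312.0,  "usd_per_hr": 1.29, "provider": "RunPod"},
    {"model": "AMD MI300X",       "vram_gb": 192, "fp16_tflops": 1307.0, "usd_per_hr": 3.19, "provider": "Vast.ai"},
    {"model": "NVIDIA L40S",      "vram_gb": 48,  "fp16_tflops": 362.0,  "usd_per_hr": 0.99, "provider": "FluidStack"},
    {"model": "Google TPU v5e",   "vram_gb": 16,  "fp16_tflops": 197.0,  "usd_per_hr": 1.20, "provider": "GCP"},
]

def tflops_per_dollar(row):
    """Peak FP16 TFLOPS bought per hourly dollar of cloud spend."""
    return row["fp16_tflops"] / row["usd_per_hr"]

# Sort from best to worst price/performance on this metric.
ranked = sorted(SAMPLE_ROWS, key=tflops_per_dollar, reverse=True)
for row in ranked:
    print(f'{row["model"]:18s} {tflops_per_dollar(row):7.1f} TFLOPS/$')
```

Raw TFLOPS per dollar is only one lens: for memory-bound inference workloads, VRAM per dollar or memory bandwidth per dollar (also in the spec categories) may matter more.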

Start Using This Data in Your Project Today

57+ GPU SKUs, 12 cloud providers, hourly pricing updates. MIT licensed — free for research, commercial use, and AI infrastructure planning.

Download Free Dataset Now

Top Cloud GPU Providers

RunPod

On-demand H100, A100 & RTX GPU cloud for AI inference and training. Competitive per-hour pricing with spot instances available. No minimum commitment.

Get Started on RunPod →

Lambda Cloud

GPU cloud built for deep learning teams. On-demand and reserved NVIDIA H100 and A100 clusters. Used by top AI labs worldwide.

Claim Lambda Credits →

Vast.ai

A GPU marketplace offering among the lowest prices available, sourced from a decentralized network of independent providers. Ideal for cost-sensitive inference and experimentation.

Find Cheapest GPU →

Related Indexes