# AI Infrastructure Index

https://alpha-one-index.github.io/ai-infra-index/
Maintained by Alpha One Index

> Comprehensive open-source reference for AI hardware specifications, cloud GPU pricing, inference benchmarks, and infrastructure data. Updated hourly via automated pipelines.

## Overview

The AI Infrastructure Index is a vendor-neutral knowledge base covering:

- Data center GPU specifications (NVIDIA, AMD, Intel)
- Cloud GPU pricing from 12 providers (updated hourly)
- AI accelerator specifications (Google TPU, AWS Trainium, Cerebras, Groq)
- Inference benchmarks (MLPerf v4.1, tokens/second)
- Model GPU sizing guide (VRAM requirements)
- Networking and interconnect specifications
- Training cost estimates
- GPU cost optimization playbook
- Buy vs rent decision framework

## Key Facts

- 12 cloud providers tracked (AWS, GCP, Azure, CoreWeave, Lambda Labs, RunPod, Vast.ai, Nebius, OCI, Cudo Compute, Fluidstack, Paperspace)
- 57+ GPU SKUs with pricing
- H100 on-demand range: $1.87-$6.15/GPU-hour (March 2026)
- Pricing auto-updated hourly via GitHub Actions
- MIT License

## Pages

- [Homepage](https://alpha-one-index.github.io/ai-infra-index/): Overview, pricing table, GPU comparison, FAQ
- [GPU Specifications](https://alpha-one-index.github.io/ai-infra-index/specs/gpu-specifications.html): H100, H200, B200, A100, MI300X, Gaudi 3 specs
- [Cloud GPU Pricing](https://alpha-one-index.github.io/ai-infra-index/specs/cloud-gpu-pricing.html): Per-GPU-hour pricing from 12 providers
- [AI Accelerators](https://alpha-one-index.github.io/ai-infra-index/specs/ai-accelerators.html): Google TPU, AWS Trainium, Cerebras WSE-3, Groq LPU
- [Inference Benchmarks](https://alpha-one-index.github.io/ai-infra-index/specs/inference-benchmarks.html): MLPerf v4.1, tokens/second, performance per dollar
- [Model GPU Sizing](https://alpha-one-index.github.io/ai-infra-index/specs/model-gpu-sizing.html): VRAM requirements for LLaMA, Mixtral, GPT-4
- [Networking & Interconnects](https://alpha-one-index.github.io/ai-infra-index/specs/networking-interconnects.html): NVLink, InfiniBand, RoCE, cluster topologies
- [Training Costs](https://alpha-one-index.github.io/ai-infra-index/specs/training-costs.html): GPT-4, LLaMA 3, DeepSeek training cost estimates
- [GPU Cost Optimization](https://alpha-one-index.github.io/ai-infra-index/specs/gpu-cost-optimization.html): Right-sizing, quantization, spot, reserved, batching
- [Buy vs Rent](https://alpha-one-index.github.io/ai-infra-index/specs/buy-vs-rent.html): Cloud vs on-premise TCO analysis

## Citation

```bibtex
@misc{aiinfraindex2026,
  title  = {AI Infrastructure Index},
  author = {Alpha One Index},
  year   = {2026},
  url    = {https://github.com/alpha-one-index/ai-infra-index}
}
```

Last updated: 2026-03-01
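The Model GPU Sizing page lists per-model VRAM requirements. As a rough illustration of how such figures are commonly derived (not the index's own methodology), here is a minimal sketch of the weights-times-bytes-per-parameter rule of thumb; the `overhead_factor` default and the function itself are assumptions for illustration:

```python
def estimate_inference_vram_gb(params_billion: float,
                               bytes_per_param: float = 2.0,
                               overhead_factor: float = 1.2) -> float:
    """Rough VRAM estimate for serving a model.

    params_billion: parameter count in billions (e.g. 70 for a 70B model).
    bytes_per_param: 2.0 for FP16/BF16, 1.0 for INT8, 0.5 for 4-bit quantization.
    overhead_factor: illustrative headroom for KV cache, activations, and
        runtime buffers; real KV-cache needs scale with batch size and
        context length, so treat 1.2 as a placeholder assumption.
    """
    return round(params_billion * bytes_per_param * overhead_factor, 1)

# A 70B model in FP16 needs roughly 70 * 2 * 1.2 = 168 GB, i.e. more than
# two 80 GB H100s; 4-bit quantization brings it down to ~42 GB.
print(estimate_inference_vram_gb(70))                       # 168.0
print(estimate_inference_vram_gb(70, bytes_per_param=0.5))  # 42.0
```

This estimate covers inference only; training adds optimizer state and gradients, roughly quadrupling the per-parameter footprint.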
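The Training Costs page gives per-model estimates. A back-of-the-envelope version of such an estimate can be sketched from the widely used ~6 x N x D training-FLOPs approximation; every default below (H100 BF16 dense peak, MFU, blended $/GPU-hour) is an illustrative assumption, not a figure from the index:

```python
def estimate_training_cost_usd(params_billion: float,
                               tokens_trillion: float,
                               gpu_peak_tflops: float = 989.0,
                               mfu: float = 0.40,
                               price_per_gpu_hour: float = 2.50) -> float:
    """Rough training cost: total FLOPs / achieved throughput, priced per hour.

    params_billion:     model parameters N, in billions.
    tokens_trillion:    training tokens D, in trillions.
    gpu_peak_tflops:    per-GPU peak (989 TFLOPS ~ H100 BF16 dense, assumed).
    mfu:                model FLOPs utilization actually achieved (assumed 40%).
    price_per_gpu_hour: blended rate; the index lists $1.87-$6.15 for H100.
    """
    total_flops = 6 * (params_billion * 1e9) * (tokens_trillion * 1e12)
    gpu_seconds = total_flops / (gpu_peak_tflops * 1e12 * mfu)
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * price_per_gpu_hour

# Example: a 70B model trained on 15T tokens lands on the order of $11M
# under these assumed defaults.
print(f"${estimate_training_cost_usd(70, 15):,.0f}")
```

Real costs diverge from this sketch through cluster failures, restarts, data pipeline overhead, and ablation runs, so treat it as a lower bound on order of magnitude.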