AI LLMOps Index

Open-source dataset tracking LLMOps platforms, LLM inference engines, orchestration frameworks, prompt management tools, and observability solutions. Compare 80+ tools. Updated weekly.


Key Stats

80+ Tools Tracked
12 Tool Categories
15+ Cloud Providers
4,500+ Data Points
Weekly Update Frequency
CSV/JSON Data Formats

Sample Data

| Tool | Category | Throughput (tok/s) | License | Deployment |
|------|----------|--------------------|---------|------------|
| vLLM | Inference Engine | 3,200 | Apache 2.0 | Self-hosted |
| TGI (HuggingFace) | Inference Server | 2,800 | Apache 2.0 | Self-hosted/Cloud |
| LangChain | Orchestration | N/A | MIT | Any |
| Langfuse | Observability | N/A | MIT | SaaS/Self-hosted |
| Ollama | Local Inference | 450 | MIT | Local |
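The sample rows above can be modeled as plain records and queried directly. A minimal sketch in Python, assuming illustrative field names (`tool`, `throughput_tok_s`, etc.) that may differ from the dataset's actual schema:

```python
# Sample rows from the table above as Python dicts.
# Field names are illustrative; the real dataset schema may differ.
tools = [
    {"tool": "vLLM", "category": "Inference Engine",
     "throughput_tok_s": 3200, "license": "Apache 2.0", "deployment": "Self-hosted"},
    {"tool": "TGI (HuggingFace)", "category": "Inference Server",
     "throughput_tok_s": 2800, "license": "Apache 2.0", "deployment": "Self-hosted/Cloud"},
    {"tool": "LangChain", "category": "Orchestration",
     "throughput_tok_s": None, "license": "MIT", "deployment": "Any"},
    {"tool": "Langfuse", "category": "Observability",
     "throughput_tok_s": None, "license": "MIT", "deployment": "SaaS/Self-hosted"},
    {"tool": "Ollama", "category": "Local Inference",
     "throughput_tok_s": 450, "license": "MIT", "deployment": "Local"},
]

# Highest-throughput tool among those that report a benchmark number
fastest = max((t for t in tools if t["throughput_tok_s"] is not None),
              key=lambda t: t["throughput_tok_s"])
print(fastest["tool"])  # vLLM
```

Orchestration and observability tools report `N/A` for throughput because they sit around the inference engine rather than serving tokens themselves, so the filter above skips them.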

Use This Data in Your Next Project

Available in CSV and JSON. MIT licensed. Perfect for LLMOps tool selection, cost benchmarking, and inference optimization.

Download Dataset
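Since the dataset ships as CSV, it can be consumed with nothing beyond the standard library. A hedged sketch using an inline excerpt mirroring the sample table; the real file name and exact column headers are assumptions:

```python
import csv
import io

# Hypothetical CSV excerpt; in practice you would open the downloaded
# dataset file instead of this inline sample.
sample_csv = """tool,category,throughput_tok_s,license,deployment
vLLM,Inference Engine,3200,Apache 2.0,Self-hosted
Ollama,Local Inference,450,MIT,Local
"""

# DictReader yields one dict per row, keyed by the header line
rows = list(csv.DictReader(io.StringIO(sample_csv)))
mit_tools = [r["tool"] for r in rows if r["license"] == "MIT"]
print(mit_tools)  # ['Ollama']
```

The JSON variant of the dataset would yield the same record shape via `json.load`, so downstream filtering code can stay format-agnostic.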

GPU Cloud for LLM Inference

RunPod

Deploy vLLM, TGI, and Ollama on RunPod GPUs. H100 and A100 instances for high-throughput LLM serving.

Try RunPod →

Lambda Cloud

On-demand and reserved H100 instances optimized for LLM training and inference workloads.

Try Lambda →

Vast.ai

GPU marketplace for cost-effective LLM inference. Find the best price-to-performance ratio.

Try Vast.ai →

Related Indexes