AI Red Teaming Index

An open-source dataset tracking AI red teaming tools, adversarial testing frameworks, LLM jailbreak benchmarks, prompt injection datasets, and model safety evaluation resources. Updated weekly.


Key Stats

- **50+** tools tracked
- **9** attack categories
- **30+** benchmark datasets
- **3,200+** data points
- **Weekly** update frequency
- **CSV/JSON** data formats

Sample Data

| Tool/Dataset | Category | Target | ASR (%) | License |
|---|---|---|---|---|
| Garak | LLM Probing | Any LLM | N/A | Apache 2.0 |
| PyRIT | Red Team Framework | GPT-4/Claude | N/A | MIT |
| JailbreakBench | Jailbreak Benchmark | LLMs | 12-68 | MIT |
| HarmBench | Safety Benchmark | LLMs | 5-92 | MIT |
| PromptBench | Adversarial Prompts | LLMs | 8-45 | MIT |
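Note that the ASR column mixes `N/A` with percentage ranges like `12-68%`, so it needs light parsing before numeric analysis. A minimal sketch, assuming field names drawn from the sample table (the dataset's actual schema may differ):

```python
# Hypothetical sketch: normalizing the ASR column, which mixes "N/A"
# with percentage ranges such as "12-68%". Field names are assumptions
# based on the sample table, not the dataset's documented schema.

def parse_asr(value: str):
    """Return (low, high) attack-success-rate bounds, or None for N/A."""
    if value.strip().upper() == "N/A":
        return None
    low, high = value.rstrip("%").split("-")
    return float(low), float(high)

row = {"tool": "JailbreakBench", "category": "Jailbreak Benchmark",
       "target": "LLMs", "asr": "12-68%", "license": "MIT"}

print(parse_asr(row["asr"]))  # -> (12.0, 68.0)
print(parse_asr("N/A"))       # -> None
```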

Use This Data in Your Next Project

Available in CSV and JSON. MIT licensed. Ideal for AI safety research, red team exercises, and building safer AI systems.
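The CSV export can be consumed with nothing beyond the standard library. A minimal sketch of loading and filtering rows, assuming column names that mirror the sample table above (the real header may differ), with a small inline sample standing in for the downloaded file:

```python
import csv
import io

# Inline stand-in for the downloaded CSV; column names are assumptions
# based on the sample table, not the dataset's documented schema.
SAMPLE_CSV = """tool,category,target,asr,license
Garak,LLM Probing,Any LLM,N/A,Apache 2.0
JailbreakBench,Jailbreak Benchmark,LLMs,12-68%,MIT
HarmBench,Safety Benchmark,LLMs,5-92%,MIT
"""

def mit_licensed(rows):
    """Keep only MIT-licensed entries."""
    return [r for r in rows if r["license"] == "MIT"]

rows = list(csv.DictReader(io.StringIO(SAMPLE_CSV)))
print([r["tool"] for r in mit_licensed(rows)])  # -> ['JailbreakBench', 'HarmBench']
```

For the real file, replace `io.StringIO(SAMPLE_CSV)` with `open("dataset.csv", newline="")` (path illustrative).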

Download Dataset

GPU Cloud for Red Teaming Research

RunPod

Spin up on-demand H100 and A100 GPUs for large-scale adversarial testing and jailbreak experiments.

Try RunPod →

Lambda Cloud

Secure, isolated GPU instances for sensitive AI safety research and red team model evaluations.

Try Lambda →

Vast.ai

Low-cost GPU access for running open-source red teaming tools like Garak and PyRIT at scale.

Try Vast.ai →

Related Indexes