100% detection. 100% protection. CrowdStrike excels in MITRE's most demanding platform evaluation yet. Learn more

Taxonomy of Prompt Injection Methods

Taxonomy of Prompt Injection Methods Poster cover

Understand and defend against the #1 OWASP risk for GenAI apps

Prompt injection (PI), the leading OWASP security risk for generative AI (GenAI) applications, is a type of attack where attacker instructions manipulate models, causing unwanted behavior that results in sensitive data leaks, bypassed safety controls, unauthorized access, or actions.

This taxonomy diagram:

  • Catalogs 185+ named techniques across direct and indirect injection paths and attacker prompting methods
  • Provides a structured hierarchy showing the full risks of fast-moving GenAI threats
  • Maps the rapidly evolving landscape of PI techniques

Don’t defend your large language models (LLMs) blindly.

Download the latest digital version of the Taxonomy of Prompt Injection Methods today.