Introducing "AI Unlocked: Decoding Prompt Injection," a New Interactive Challenge

A new hands-on AI security challenge, designed to improve understanding of the prompt injection attack landscape, is now available worldwide.

CrowdStrike is excited to announce AI Unlocked: Decoding Prompt Injection, a new online challenge offered via CrowdStrike Falcon Encounter hands-on labs.

This immersive simulation is designed to help security teams better understand the prompt injection threat landscape by putting them in the attacker’s seat. Players are challenged to progress through three virtual rooms by using prompt injection techniques to convince the room’s supervisor, SAIGE, to reveal secret phrases that allow them to move forward. Along the way, they can score points for the efficiency of their prompt injections: the fewer tokens (words) they use, the higher their score.

AI Unlocked: Decoding Prompt Injection begins in the Command Center room, where players have to use basic prompt injection tactics. Players move forward to the Data Gateway room, which raises the difficulty with advanced filtering mechanisms. The final challenge is in the Nexus room, where an active AI manager monitors and blocks suspicious queries in real time. Each room presents a higher level of difficulty. Players who succeed in navigating all three achieve a greater understanding of prompt injection techniques.  

AI Unlocked: Decoding Prompt Injection: SAIGE is not letting their guard down and has not given the user the secret phrase. AI Unlocked: Decoding Prompt Injection: SAIGE is not letting their guard down and has not given the user the secret phrase.

The Threat of Prompt Injection  

Prompt injection attacks embed adversarial instructions directly in prompts or indirectly via data (e.g., PDFs) consumed by large language models (LLMs) — the latter known as indirect prompt injection attacks. Using these methods, an adversary can manipulate an LLM or AI agent to ignore instructions, exfiltrate data, take unintended actions, or bypass policy.

The need to educate security, developer, and AI teams on prompt injection attacks is critical as more powerful AI agents emerge. Recent widespread security concerns surrounding open-source AI agent OpenClaw underscore the need to understand and defend against this threat. When agentic software like OpenClaw has potentially expansive access to sensitive files and systems, prompt injection attacks can enable an adversary to leak data or hijack reachable tools and data stores to ultimately assume their powers.  

As organizations rapidly deploy AI systems at scale, prompt injection attacks are now being actively deployed by adversaries to hijack and abuse these systems. CrowdStrike’s research team tracks and maintains the industry’s most comprehensive taxonomy of prompt injection methods, spanning hundreds of techniques (available for download).  

This new online prompt injection challenge transforms abstract AI security concepts into practical knowledge that educates players on the nature of the threat and the need to implement strong defensive guardrails to secure their own AI workloads from this new attack vector. 

Get Started Today

Ready to test your skills and deepen your understanding of AI security? Visit this link to register for AI Unlocked: Decoding Prompt Injection and test your prompt injection skills while learning to think like an attacker to build better defenses.

When it comes to AI security, strong defense starts with understanding the attack.

Next Steps