AI Security Hub
Understand the threats facing AI environments and how to defend against them.
AI Security Essentials
Research and hands-on learning for securing AI.
AI Research
What Security Teams Need to Know About OpenClaw
OpenClaw, an open-source AI agent, has gone viral. It offers significant potential for automation and productivity, but also introduces a new, high-risk attack surface that security teams can’t ignore. Learn what you need to know in this on-demand webinar.
Interactive Challenge
AI Unlocked: Decoding Prompt Injection
You can’t secure AI until you know how to break it. Use your prompt injection skills to unlock secret phrases by outsmarting SAIGE, the AI chatbot guarding the system. The fewer tokens (words) you use, the higher your score.
AI Research
Taxonomy of Prompt Injection Methods
Prompt injection (PI), the top risk in the OWASP Top 10 for LLM applications, occurs when attacker-supplied instructions cause a model to take unintended or malicious actions. CrowdStrike researchers track emerging PI methods, mapping how attacker input reaches LLMs and the techniques used to manipulate them.