Break AI systems.
Then defend them.
Original research on prompt injection, red teaming, agent hijacking, and AI compliance. Written for practitioners — not consultants.
- AI Red Teaming: What It Is and Why It Matters in 2026
AI red teaming is rapidly becoming one of the most critical security practices for any organization deploying LLMs and AI agents. Here is what it actually involves, why the traditional security playbook falls short, and where the field is headed.
- Prompt Injection in 2026: Direct, Indirect, and Why Your Guardrails Won't Save You
Most deployed LLM applications have guardrails against direct prompt injection, but almost none have meaningful defenses against indirect injection. Here is why that gap is dangerous.
- The EU AI Act's Security Requirements: What Practitioners Actually Need to Do
The EU AI Act is in force. Here is what the security and robustness requirements actually mean for teams building and deploying AI systems in 2026.
- Scanning LLMs for Vulnerabilities with Garak: A Practical Walkthrough
Garak is an open-source LLM vulnerability scanner that automates probing for prompt injection, jailbreaks, hallucination, and more. Here is how to use it and what to make of the results.