<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Obfuscated — AI Security Research</title>
    <description>AI security research, red teaming, and compliance. Exploring how to break AI systems, and how to defend them.</description>
    <link>https://obfuscated.site/</link>
    <language>en-us</language>
    <item>
      <title>AI Red Teaming: What It Is and Why It Matters in 2026</title>
      <link>https://obfuscated.site/blog/ai-red-teaming-what-it-is-and-why-it-matters/</link>
      <guid isPermaLink="true">https://obfuscated.site/blog/ai-red-teaming-what-it-is-and-why-it-matters/</guid>
      <description>AI red teaming is rapidly becoming one of the most critical security practices for any organization deploying LLMs and AI agents. Here is what it actually involves, why the traditional security playbook falls short, and where the field is headed.</description>
      <pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Prompt Injection in 2026: Direct, Indirect, and Why Your Guardrails Won&apos;t Save You</title>
      <link>https://obfuscated.site/blog/prompt-injection-direct-indirect-guardrails/</link>
      <guid isPermaLink="true">https://obfuscated.site/blog/prompt-injection-direct-indirect-guardrails/</guid>
      <description>Most deployed LLM applications have guardrails against direct prompt injection, but almost none have meaningful defenses against indirect injection. Here is why that gap is dangerous.</description>
      <pubDate>Thu, 09 Apr 2026 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>The EU AI Act&apos;s Security Requirements: What Practitioners Actually Need to Do</title>
      <link>https://obfuscated.site/blog/eu-ai-act-security-requirements/</link>
      <guid isPermaLink="true">https://obfuscated.site/blog/eu-ai-act-security-requirements/</guid>
      <description>The EU AI Act is in force. Here is what its security and robustness requirements actually mean for teams building and deploying AI systems in 2026.</description>
      <pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Scanning LLMs for Vulnerabilities with Garak: A Practical Walkthrough</title>
      <link>https://obfuscated.site/blog/scanning-llms-with-garak/</link>
      <guid isPermaLink="true">https://obfuscated.site/blog/scanning-llms-with-garak/</guid>
      <description>Garak is an open-source LLM vulnerability scanner that automates probing for prompt injection, jailbreaks, hallucination, and more. Here is how to use it and what to make of the results.</description>
      <pubDate>Sun, 05 Apr 2026 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>