The EU AI Act is no longer a future problem. Enforcement began in phases starting August 2024, with the highest-risk prohibitions active immediately and obligations for high-risk AI systems fully applicable as of August 2026. If your organization deploys AI systems that touch EU residents — or if you’re building platforms that others use to do so — you’re in scope.

Most of the compliance conversation has focused on governance and documentation. That’s necessary but incomplete. The Act has specific security and robustness requirements that are technically substantive, and most engineering teams I’ve spoken with haven’t mapped them to their actual systems. This post covers what those requirements actually mean in practice.

The Risk Tier System

The Act divides AI systems into four tiers based on risk. Getting the classification right is the first job.

Unacceptable risk covers outright prohibited uses: social scoring by governments, real-time biometric surveillance in public spaces, manipulative AI that exploits psychological vulnerabilities. If you’re building any of these, you have bigger problems than compliance.

High-risk is where most of the security obligations live. This tier covers AI systems used in: critical infrastructure, educational assessment, employment decisions (screening, performance monitoring), essential services (credit, insurance, healthcare), law enforcement, border control, and administration of justice. The list is broader than most teams realize. An AI hiring tool used to screen resumes for EU applicants is high-risk. An AI system that helps set insurance premiums is high-risk. A document classification model used in a legal context may be high-risk.

Limited risk covers systems with specific transparency obligations — mainly chatbots and deepfake-generating systems that must disclose they’re AI-generated. Most consumer LLM apps land here unless they’re making consequential decisions.

Minimal risk is everything else — spam filters, AI-powered features in games, recommendation engines for entertainment content. No specific Act obligations.

The practical question is: does your system influence consequential decisions about people in the categories above? If yes, start building your high-risk compliance case.

What High-Risk Systems Actually Have to Do

For security practitioners, the most relevant obligations fall under Articles 9 through 15. Here’s what they require in terms that map to actual engineering work.

Risk management system (Article 9). You need a documented, continuous risk management process covering the full AI system lifecycle. This isn’t a one-time risk assessment — it’s an ongoing process that identifies and mitigates known and foreseeable risks, including risks arising from reasonably foreseeable misuse. The security implication: your threat model needs to be formal and documented, not tribal knowledge.

Data governance (Article 10). Training, validation, and test data must meet quality criteria and be examined for potential biases. From a security standpoint, this means you need to document your training data provenance and have a process for detecting and mitigating data poisoning. If you’re using a fine-tuned model, you need to be able to demonstrate what data went into it and what quality controls were applied.

Technical documentation (Article 11). Extensive documentation is required before market placement — system architecture, design choices, training methodology, performance metrics across relevant subgroups, limitations, and more. Annex IV of the Act specifies the contents. Security teams: this documentation needs to include your adversarial testing methodology and results.

Logging and audit trail (Article 12). High-risk systems must automatically log events to a level that allows post-hoc reconstruction of what the system did and why. Logs must be kept for a defined period (varies by sector). This has direct infrastructure implications — you need structured, tamper-evident logs of model inputs, outputs, and the decisions the system made, not just application-level request logs.

Robustness, accuracy, and cybersecurity (Article 15). This is the most directly security-relevant requirement. High-risk AI systems must be “resilient against attempts by unauthorized third parties to alter their use, outputs, or performance.” The Act explicitly calls out robustness against adversarial inputs designed to alter outputs. Cybersecurity measures must be “commensurate with the circumstances.” This is as close as the Act gets to requiring adversarial testing — it doesn’t mandate red teaming by name, but demonstrating robustness against adversarial inputs is hard to do credibly without it.

GPAI: The Rules for Teams Building on Foundation Models

General Purpose AI (GPAI) models — think GPT-4, Claude, Gemini, Llama — have their own obligations under Title VIII of the Act. If you’re an organization deploying these models (not the original developer), most GPAI obligations fall on the model provider, not you. But there are important nuances.

If you’re building a product that uses a GPAI model as a component, you’re considered a “deployer” and you inherit some obligations. Specifically: if your downstream application is high-risk, you still bear the high-risk obligations even if the underlying model is provided by a third party. You can’t outsource compliance to your API provider.

What this means practically: if you’re building a high-risk application on top of OpenAI or Anthropic’s API, you need a contract with your provider that gives you the technical information you need to fulfill your own compliance obligations — including what safety testing the provider has done, what the model’s known limitations are, and what data was used in training. The Act requires GPAI providers to make this information available to downstream deployers. Start asking your providers for their Model Cards and AI Safety Disclosures if you haven’t already.

The Compliance Gap Most Orgs Have Right Now

Based on what I’ve seen, the gap isn’t in awareness — most teams know the EU AI Act exists. The gap is in translating legal text into technical requirements.

The robustness requirements are a good example. Article 15 requires resilience against adversarial inputs, but the Act doesn’t specify what testing methodology is sufficient to demonstrate this. Right now, the Commission is still developing harmonized standards (through CEN/CENELEC) that will provide safe harbor guidance. Until those standards are finalized, organizations need to make defensible choices about what “sufficient” adversarial testing looks like for their specific system and risk tier.

The logging requirements are another gap. Most teams have application logs. Few have the structured, model-level audit trails the Act envisions — logs that capture inputs, intermediate reasoning steps where visible, final outputs, and confidence scores, in a format that supports post-hoc investigation.

A Practical Checklist for Security-Conscious Teams

If you’re responsible for a system that may be high-risk under the Act, here’s where to focus:

1. Classify your systems now. Map every AI system your organization operates or deploys against the high-risk categories in Annex III. Don’t assume you’re not high-risk — the categories are broader than they initially appear.

2. Formalize your threat model. Document the known attack vectors against your system — prompt injection, data poisoning, adversarial inputs, model inversion — and document your mitigations. This is both a compliance requirement and a forcing function for finding gaps.

3. Stand up structured logging. Ensure you have input/output logging for every model inference in scope, with sufficient metadata to reconstruct decisions. Verify that logs are tamper-evident and that your retention policy meets sector requirements.

4. Run adversarial testing before deployment. The Act requires you to demonstrate robustness. Build adversarial testing into your pre-deployment checklist using tools like Promptfoo or Garak. Document methodology and results. This documentation will be requested by notified bodies during conformity assessment.

5. Get technical documentation from your model providers. If you’re using a foundation model, obtain and review their available documentation — system cards, safety evaluations, known limitations. Build this into your own technical documentation package.

6. Establish a post-market monitoring process. The Act requires ongoing monitoring after deployment. Define what metrics you’re tracking, what thresholds trigger re-evaluation, and who is responsible. This doesn’t need to be elaborate, but it needs to exist and be documented.

Where This Is Headed

The harmonized standards are the missing piece. Once CEN/CENELEC publishes the technical standards the Commission is developing, organizations that comply will have a clear safe harbor path. Until then, the state of compliance is “make defensible choices and document your reasoning.”

The organizations that will be best positioned when enforcement intensifies are the ones treating this as a genuine engineering problem now — not a checkbox exercise, not a legal team’s responsibility, but a set of concrete technical requirements that need to be designed into systems from the ground up.

The security requirements in the EU AI Act aren’t onerous if you’re already doing competent AI security work. The gap, for most organizations, is that they aren’t.