The EU AI Act entered into force in August 2024, and its obligations are now rolling out in phases. By August 2026, most high-risk AI systems must comply with full conformity assessment requirements. For organizations using AI in HR, credit scoring, healthcare, law enforcement, and critical infrastructure, the question is no longer whether to comply — it's how.
## What the AI Act Actually Requires
The Act classifies AI systems into four risk tiers:
- Unacceptable risk — banned outright (e.g. social scoring, real-time remote biometric identification in publicly accessible spaces)
- High risk — heavy obligations: conformity assessments, transparency, human oversight, data governance
- Limited risk — lighter transparency duties (chatbots must disclose they're AI)
- Minimal risk — no specific obligations
Most enterprise LLM deployments fall somewhere in the high-risk or limited-risk buckets. If your AI system influences decisions about employment, credit, healthcare, or education, you're likely in the high-risk category.
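As a rough first-pass triage (not legal advice), the tiering logic above can be sketched as a simple lookup. The domain names and the mapping below are illustrative assumptions — the Act's actual Annex III categories are more detailed and context-dependent:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping only -- real classification requires
# reading the use case against Annex III of the Act.
HIGH_RISK_DOMAINS = {
    "employment", "credit", "healthcare", "education",
    "law_enforcement", "critical_infrastructure",
}

def classify(domain: str, user_facing_chatbot: bool = False) -> RiskTier:
    """Rough triage of an AI use case into the Act's four tiers."""
    if domain == "social_scoring":
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if user_facing_chatbot:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A triage function like this is useful as an inventory tool when auditing dozens of internal AI use cases, even though the final classification call belongs to legal counsel.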
## Why Third-Party APIs Create Compliance Headaches
When you send queries to a US-based LLM API, several AI Act requirements become structurally difficult to satisfy:
Transparency and logging. High-risk AI systems must maintain logs "to the extent necessary to ensure that the system can be monitored." With a black-box third-party API, you have limited visibility into what's actually happening.
Data governance. The Act requires that training data be "relevant, sufficiently representative, and to the best extent possible, free of errors." Using a general-purpose cloud model means accepting whatever training data the vendor used — often with no AI Act compliance card in sight.
Human oversight. Systems must be designed to allow humans to "effectively oversee" them. This is harder when you don't control the inference stack.
GPAI model obligations. General-purpose AI models with systemic risk (roughly, those trained with more than 10^25 FLOPs of compute) face specific obligations including adversarial testing and incident reporting. If you build products on top of these, you inherit some of that risk.
## The Sovereign Infrastructure Advantage
Running your own LLM infrastructure within the EU solves many of these problems structurally:
- Audit trails are your audit trails. EULLM Engine includes built-in logging designed for compliance use cases — you control what gets recorded, stored, and reported.
- Data never leaves your perimeter. No cross-border data transfer means no GDPR Article 46 mechanisms to worry about.
- You choose the model. EULLM Hub provides AI Act compliance cards for every model — so you know exactly what you're deploying and can justify it to a supervisory authority.
## Timelines to Watch
| Date | Obligation |
|------|------------|
| Feb 2025 | Prohibited AI practices ban applies |
| Aug 2025 | GPAI model obligations apply |
| Aug 2026 | High-risk AI system requirements fully apply |
| Aug 2027 | Certain legacy AI systems must comply |
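To keep these dates actionable in planning, a few lines of Python can turn the table into a countdown. The milestone dates below use the commonly cited application dates (the 2nd of each month, counting from the Act's entry into force); treat them as an assumption and verify against the Official Journal text:

```python
from datetime import date

# Commonly cited application dates -- verify against the official text.
MILESTONES = {
    "Prohibited practices ban": date(2025, 2, 2),
    "GPAI model obligations": date(2025, 8, 2),
    "High-risk requirements": date(2026, 8, 2),
    "Legacy system compliance": date(2027, 8, 2),
}

def days_until(milestone: str, today: date) -> int:
    """Days remaining until a milestone (negative once it has passed)."""
    return (MILESTONES[milestone] - today).days

for name in MILESTONES:
    print(f"{name}: {days_until(name, date.today())} days")
```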
If you're reading this in 2026, the August deadline is close. Now is the time to audit your AI stack and move toward infrastructure you actually control.
EULLM is an open-source platform for deploying sovereign, GDPR-compliant AI within the EU. View on GitHub.
