The EU AI Act is the first comprehensive law regulating artificial intelligence. It will change how companies develop, buy, and deploy AI systems. It also reshapes how compliance teams operate: more proactive, more thorough, better documented.
The core ask is simple. Companies must show that AI systems are safe, fair, and accountable, with controls in place to prevent harm. This post covers what the law requires, why it matters, what to do now, and what to expect in the next few years.
What the AI Act Requires
The AI Act classifies AI systems by risk and assigns rules to each tier. It applies to organizations based in the EU and to any company placing AI systems on the EU market or offering AI services to EU users. Global companies must pay attention even without an EU office. Penalties reach up to €35 million or 7% of global annual revenue for serious violations.
Timeline:
- Entered into force in August 2024; some provisions, including the bans on prohibited practices, already apply.
- Most major obligations apply from August 2026.
- Standards, technical guidelines, and conformity frameworks continue rolling out through 2025 and beyond.
Key Requirements of the Law
The AI Act uses a risk-based approach with four tiers:
- Prohibited AI systems. AI that manipulates behavior to cause harm or enables social scoring is banned outright.
- High-risk systems. Systems that affect critical areas like hiring, lending, biometric identification, healthcare, and transport. They require risk management programs, detailed technical documentation, human oversight, data governance, pre-market conformity checks, and post-market monitoring. The FDA's parallel AI/ML draft guidance applies a similar risk-based credibility framework for pharma and medical device AI.
- Limited-risk systems. Systems that interact with users (chatbots, deepfakes) must disclose that AI is being used.
- Minimal-risk systems. Most AI tools, including simple automation, face only minimal new obligations.
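The four tiers above lend themselves to a first-pass triage during an AI inventory exercise. The sketch below is a simplified illustration, not a legal determination: the keyword sets (`PROHIBITED_USES`, `HIGH_RISK_DOMAINS`) are hypothetical placeholders, and real classification requires counsel reviewing the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical keyword sets for illustration only; actual scoping
# must follow the Act's Article 5 prohibitions and Annex III list.
PROHIBITED_USES = {"social_scoring", "behavioral_manipulation"}
HIGH_RISK_DOMAINS = {"hiring", "lending", "biometric_id", "healthcare", "transport"}

def classify(use_case: str, interacts_with_users: bool = False) -> RiskTier:
    """Rough first-pass triage of an AI use case into a risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_users:
        # Chatbots, deepfakes, and similar user-facing systems
        # trigger transparency obligations.
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A triage script like this is useful for flagging systems that need legal review, not for deciding their final classification.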
Compliance Building Blocks
To meet AI Act requirements, companies must implement:
- Full AI lifecycle risk management
- Detailed technical files documenting AI development and usage
- Data governance policies to keep training data accurate and representative
- Clear human oversight protocols
- Pre-market conformity assessments for high-risk systems
- Post-market monitoring and reporting
- Record keeping for audits
Why Compliance Matters Now
Global AI adoption is accelerating. In 2025, over 78% of organizations reported using AI in at least one function, a significant increase from 2023 and early 2024. Regulators will watch risk management practices closely. For LLM-based systems in regulated contexts, this scrutiny is already intensifying.
Estimated compliance costs: A single high-risk AI system can cost around €52,000 per year in governance and certification. Companies managing multiple high-risk systems should expect higher totals for internal resources, third-party audits, and ongoing monitoring.
Practical Implications for Product Teams
Integrate compliance into the design phase:
- Shift risk management left in the development process.
- Continuously test models for bias and errors.
- Maintain logs and documentation for regulatory audits.
- Design AI systems for transparency and explainability.
Procurement and Vendor Management
When buying AI tools, request:
- Evidence of compliance and conformity checks
- Documentation of testing and risk management
- Contracts defining responsibilities for compliance and audit cooperation
Day-to-Day Compliance Actions
Compliance teams should:
- Map all AI systems and classify them by risk level.
- Set up AI governance committees (legal, privacy, security, product).
- Use standard templates for technical files, risk registers, and monitoring reports.
- Conduct regular audits and practice inspections.
- Train staff on AI risks and reporting requirements.
Using Technology to Simplify Compliance
RegTech tools can:
- Discover AI models in use across the organization
- Generate technical documentation
- Monitor models for drift or risk
- Create audit trails and reports
Automation doesn't replace human oversight, but it reduces manual time and effort at scale.
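Drift monitoring, one of the RegTech capabilities listed above, can start simpler than a vendor purchase. One common screen is the Population Stability Index (PSI), which compares the distribution of a model input or score at deployment against live data; the sketch below is a basic illustration (the binning strategy and the conventional alert thresholds of roughly 0.1 and 0.25 are practitioner rules of thumb, not AI Act requirements):

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample.

    Values near 0 mean the distributions match; by common convention,
    > 0.1 warrants investigation and > 0.25 indicates significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bin_fractions(data: list[float]) -> list[float]:
        counts = [0] * bins
        for x in data:
            i = min(max(int((x - lo) / width), 0), bins - 1)  # clamp outliers
            counts[i] += 1
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / len(data), 1e-6) for c in counts]

    e, o = bin_fractions(expected), bin_fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))
```

A scheduled job computing PSI on key model inputs, with alerts feeding the post-market monitoring file, covers a meaningful slice of the monitoring obligation with very little code.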
Example: AI Hiring Tool
Consider an AI tool screening resumes across multiple EU countries. Because it impacts fundamental rights, it's high risk. Compliance steps include:
- Documenting training data sources
- Testing for bias and fairness
- Implementing human oversight
- Maintaining logs of decisions
- Informing candidates about automated decision-making
- Monitoring performance and fairness over time
If the tool is purchased from a vendor, the company must obtain supplier documentation and proof of conformity.
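The bias-testing step in the example above needs a concrete metric. One widely used adverse-impact screen is the four-fifths rule, which flags a group whose selection rate falls below 80% of the best-performing group's. This is a sketch of that screen, not a metric the AI Act itself mandates; the function names and data shape are assumptions.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected_count, total_applicants)."""
    return {group: sel / tot for group, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> bool:
    """True if every group's selection rate is at least 80% of the
    highest group's rate (the 'four-fifths rule' adverse-impact screen)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())
```

Running a check like this on every model release, and archiving the results in the technical file, turns "testing for bias and fairness" from a one-off exercise into documented, repeatable evidence.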
Interaction with Other Laws
The AI Act works alongside GDPR and sector-specific rules. Data governance and privacy by design stay critical. Compliance teams must meet both GDPR and AI Act requirements while adapting to national and sectoral regulations.
Board-Level Concerns and Insurance
Boards need clear metrics on AI risk, potential incidents, and response plans. Insurers may adjust AI-related coverage based on the strength of compliance programs. Review policies to confirm they cover AI system risks.
Audits and Regulatory Inspections
Regulators will focus on:
- Risk management and evidence of AI oversight
- Technical documentation for high-risk systems
- Human oversight and transparency measures
- Post-market monitoring and incident reporting
- Third-party vendor agreements with compliance clauses
A Multi-Year Compliance Roadmap
- Short term (3 to 6 months). Map AI systems, classify by risk, close high-impact gaps.
- Medium term (6 to 18 months). Implement governance programs, update vendor contracts, finalize documentation.
- Long term (2026 and beyond). Continuous monitoring, conformity assessments, AI lifecycle updates.
A Note on Regulatory Intelligence
Atlas is built for FDA and international regulatory inspection and enforcement intelligence (FDA, MHRA, Health Canada, PMDA, CDSCO, Swissmedic). It's not an EU AI Act compliance platform. For quality teams in pharma and medical devices, Atlas supports inspection readiness, supplier monitoring, and enforcement tracking. EU AI Act compliance requires dedicated legal advice and product-specific conformity work.
Frequently Asked Questions
Who does the AI Act apply to?
Any AI system used in the EU or offered to EU users is subject to the Act, even if the provider is based outside Europe.

Written by
Atlas Team
The Atlas team brings together expertise in FDA regulatory intelligence, pharmaceutical quality systems, and inspection data analytics.