AI is now embedded across business functions, and regulators have taken notice: cumulative GDPR fines and enforcement actions reached several billion euros by early 2025. AI governance is a legal and operational priority, not a tech project.
This post covers the governance basics that hold up in audit, GDPR-focused actions for AI projects, how to monitor people legally, how to control hallucinations, and what to put in place this month.
Simple Definitions
- AI governance. The rules, roles, and processes a company uses to build, deploy, and monitor AI systems.
- GDPR. The EU data protection law that governs processing of personal data. It affects how you train models, what data you use, and how you inform people.
- Monitoring. Tracking people or systems: employee monitoring tools, CCTV with analytics, productivity scoring. Regulators require careful legal checks before large-scale monitoring is deployed.
- Hallucinations. Confident but false output from generative AI: invented facts, fake citations, fabricated details. They cause reputational damage, legal risk, and operational errors.
The Regulatory Backdrop
- EU AI Act. Classifies certain systems as high-risk and requires stricter processes, documentation, and monitoring for them. High-risk categories include safety components, hiring, medical, and law enforcement use cases.
- GDPR. Central to any processing of personal data. Processing must have a legal basis, be transparent, and respect rights like access and deletion. Regulators kept fining companies through 2024 and 2025 for poor consent and transparency.
- National data protection authorities. Issued guidance and fines on scraping, cookies, and workplace monitoring. Regulators focus on both consumer privacy and employee rights.
Step 1. Governance Basics That Actually Work
Create an AI governance board
Include legal, privacy, security, HR, and business owners. This group reviews new AI projects, approves high-risk models, and signs off on monitoring programs. Shared ownership prevents single teams from making risky choices.
Map AI use cases
Catalog where AI is used, who uses it, what data feeds it, and whether outputs affect people's rights. Prioritize high-risk use cases for immediate controls.
Inventory data and model sources
Record training data, origin, whether it includes personal data, and retention period. For third-party models, document vendor, model version, and known limitations.
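As a sketch, an inventory entry can live in code as well as in a register, so it can be queried and audited. The Python dataclass below is illustrative only; field names like affects_individual_rights are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    """One row in an AI use-case inventory (illustrative fields only)."""
    use_case: str                  # what the model is used for
    owner: str                     # accountable Model Owner
    vendor: str | None             # third-party provider, if any
    model_version: str
    training_data_sources: list[str] = field(default_factory=list)
    contains_personal_data: bool = False
    retention_period_days: int | None = None
    affects_individual_rights: bool = False  # flags the entry for DPIA review

entry = ModelInventoryEntry(
    use_case="CV screening assistant",
    owner="hr-analytics",
    vendor="ExampleVendor",
    model_version="2.1.0",
    training_data_sources=["internal-applications-2023"],
    contains_personal_data=True,
    retention_period_days=365,
    affects_individual_rights=True,
)
```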
Define roles, not just tools
Appoint a Model Owner, Data Steward, Privacy Officer, and an Auditor who can independently review models and monitoring setups.
Step 2. GDPR-Focused Actions for AI Projects
Legal basis and purpose
Document a lawful basis for processing personal data. If you rely on consent, make it specific and auditable. If you rely on legitimate interest, perform and record a balancing test. Avoid vague, broad purposes.
Data minimization and purpose limitation
Use only the personal data you need. Prefer aggregated or pseudonymized data for training wherever possible.
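One common pseudonymization technique is a keyed hash, which replaces direct identifiers with stable tokens before data enters a training pipeline. A minimal sketch follows; the pseudonymize helper and key handling are illustrative. Note that pseudonymized data still counts as personal data under GDPR.

```python
import hashlib
import hmac
import os

# In production, load the key from a secrets manager and keep it out of the
# training pipeline; without the key, reversing the pseudonyms is impractical.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "replace-with-managed-secret").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "tenure_months": 14}
training_record = {
    "subject_id": pseudonymize(record["email"]),  # stable join key, no raw email
    "tenure_months": record["tenure_months"],
}
```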
DPIAs
Run a Data Protection Impact Assessment for high-risk systems: automated decisions about people, large-scale monitoring, or anything that profiles employees or customers. A DPIA documents risks, mitigations, and residual risk. GDPR requires one whenever processing is likely to create high risk for individuals.
Transparency and user rights
Publish clear privacy notices about AI use and give data subjects simple ways to exercise rights (access, correction, deletion). For automated decisions, provide meaningful information about the logic and main factors used.
Vendor contracts
Require vendors to commit to data protection standards, allow audits, and provide model provenance details.
Record keeping and version logs
Retain logs of model versions, training data snapshots, and decisions made by the model for a reasonable retention period.
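A minimal sketch of what a structured decision log might look like; the field names and the log_model_decision helper are illustrative, not a standard.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_audit")

def log_model_decision(model_name: str, model_version: str,
                       input_ref: str, output_summary: str) -> None:
    """Append one structured, timestamped record per model decision."""
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "input_ref": input_ref,   # a reference or hash, not raw personal data
        "output": output_summary,
    }))

log_model_decision("credit-triage", "1.4.2", "case-8831", "routed_to_manual_review")
```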
Step 3. Monitoring People Legally and Ethically
Assess necessity and proportionality
Ask whether monitoring is needed and whether less invasive options exist. Keep the scope narrow.
Inform and consult employees
Tell staff what you monitor, why, how long data is kept, and who sees it. For significant programs, consult workers or unions.
Limit access and use
Give access only to people who need it. Don't use monitoring data for unrelated performance judgments without clear policies.
Anonymize where possible
Use aggregated metrics for management and keep identifiable data separate and protected.
Document justification
Keep a short written justification that proves monitoring is proportionate and lawful.
Step 4. Tackling Hallucinations
Classify outputs by risk
Decide where a wrong answer is minor and where it could cause harm. For high-risk outputs, require human verification.
Human in the loop, not human on standby
For critical tasks, ensure a trained human reviews and approves outputs before they affect customers or decisions.
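A rough sketch of how risk tiering plus a human gate can be enforced in code rather than policy alone; the Risk enum and release_output helper are hypothetical names, not a known library.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"    # a wrong answer is a minor inconvenience
    HIGH = "high"  # a wrong answer affects rights, money, or safety

def release_output(output: str, risk: Risk, reviewed_by: str | None) -> str:
    """Release high-risk outputs only after a named human has approved them."""
    if risk is Risk.HIGH and reviewed_by is None:
        raise PermissionError("High-risk output requires human sign-off")
    return output

# An internal dashboard summary can ship directly...
release_output("Weekly ticket summary", Risk.LOW, reviewed_by=None)
# ...a customer-facing decision cannot.
release_output("Application declined", Risk.HIGH, reviewed_by="j.doe")
```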
Provenance and citations
Use models or wrappers that trace the source of factual claims. Have the system provide citations and confidence levels when possible.
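One lightweight wrapper-side check, assuming the system emits bracketed [n] citation markers: flag any sentence without one for human verification. The sentence splitting below is deliberately naive; this is a sketch, not a parser.

```python
import re

CITATION = re.compile(r"\[\d+\]")  # e.g. "... as stated in the filing [2]."

def uncited_sentences(answer: str) -> list[str]:
    """Return sentences with no [n] citation marker, for human verification."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return [s for s in sentences if not CITATION.search(s)]

answer = "Revenue rose 12% in Q3 [1]. The CEO resigned in October."
print(uncited_sentences(answer))  # ['The CEO resigned in October']
```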
Testing and red teaming
Test models frequently on real scenarios to find hallucination patterns. Use red teams to probe weaknesses.
Fallback processes
If an AI can't be relied upon for a task, keep a manual process and a clear escalation path.
Step 5. Practical Tech Controls
Model cards and datasheets
Publish a short summary for each model that lists purpose, training data sources, limitations, and known bias issues.
Access controls and secrets management
Lock down keys and model endpoints. Log who calls the model and why.
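A sketch of an audit wrapper that refuses anonymous model calls; generate stands in for a real endpoint call, and the audited decorator is a hypothetical name.

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_access")

def audited(model_call):
    """Refuse model calls that do not state who is calling and why."""
    @functools.wraps(model_call)
    def wrapper(prompt: str, *, caller: str, purpose: str):
        audit_log.info("caller=%s purpose=%s", caller, purpose)
        return model_call(prompt, caller=caller, purpose=purpose)
    return wrapper

@audited
def generate(prompt: str, *, caller: str, purpose: str) -> str:
    return f"response to: {prompt}"  # placeholder for the real endpoint call

generate("Summarize contract X", caller="a.smith", purpose="legal-review")
```

The keyword-only arguments make caller identity and purpose mandatory at every call site, so the audit trail cannot be skipped silently.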
Monitoring and observability
Track model outputs, performance, data drift, and unusual patterns. Set alerts for anomalous outputs.
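As one example of a drift check, a two-sample Kolmogorov-Smirnov test from SciPy can compare a recent window of a feature or score against a baseline; the data and threshold below are toy values for illustration.

```python
from scipy.stats import ks_2samp

def drifted(baseline: list[float], recent: list[float], alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test: flag drift when recent values
    are unlikely to come from the baseline distribution."""
    return ks_2samp(baseline, recent).pvalue < alpha

baseline_scores = [0.20, 0.30, 0.25, 0.40, 0.35, 0.30, 0.28]
recent_scores = [0.60, 0.70, 0.65, 0.80, 0.75, 0.70, 0.72]

if drifted(baseline_scores, recent_scores):
    print("ALERT: score distribution has shifted; trigger model review")
```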
Explainability tools
Use explainability libraries to show why a model gave a result, especially for people-facing decisions.
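Permutation importance is one model-agnostic way to show which inputs actually drive a decision. A sketch using scikit-learn on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a people-facing scoring model
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure how much performance drops: features
# whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```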
Bias and fairness tests
Run regular tests to detect disparate impacts and correct them.
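A common screening metric is the disparate impact ratio between groups' selection rates. The four-fifths rule (ratio below 0.8) is a screening heuristic, not a legal test; the data below is a toy example.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Share of positive outcomes (1 = selected or approved)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: list[int], reference: list[int]) -> float:
    """Ratio of selection rates between a protected group and a reference group."""
    return selection_rate(protected) / selection_rate(reference)

ratio = disparate_impact_ratio(protected=[1, 0, 0, 1, 0], reference=[1, 1, 0, 1, 1])
if ratio < 0.8:  # four-fifths rule as a screening threshold
    print(f"Potential disparate impact (ratio={ratio:.2f}); investigate the model")
```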
Safety layers and guardrails
Add filters to block clearly harmful outputs, and use verification services for factual claims.
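A minimal guardrail can be as simple as pattern checks run on every output before release; the patterns below are illustrative placeholders for a real, layered policy.

```python
import re

# Illustrative placeholders; production guardrails layer several checks
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like strings leaking into output
    re.compile(r"(?i)internal use only"),   # confidential-document markers
]

def guard(output: str) -> str:
    """Withhold any output that matches a blocked pattern."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(output):
            return "[withheld: failed safety checks; escalated for review]"
    return output

print(guard("The claimant's SSN is 123-45-6789"))  # withheld
```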
Step 6. People and Culture
Trainers, not just users
Train the people who will review and act on AI outputs. Teach them model limits and red flags.
Clear escalation rules
Staff must know whom to contact and how to log issues when a model produces risky output.
Incentives for reporting problems
Encourage staff to report model failures without fear. Use those reports to improve models.
Privacy by default
Make privacy the default for any product or feature that uses AI.
Step 7. Regulatory Watch
- Expect stricter rules for high-risk systems under the EU AI Act and national regulators. Plan for mandatory documentation, audits, and sandboxes.
- Data protection enforcement remains active: cumulative fines ran into the billions of euros across 2024 and 2025, driven by national decisions on consent, cookies, scraping, and workplace monitoring.
- Generative AI adoption continues to grow. Enterprise investment in safety tooling and governance will rise through 2026.
Quick Checklist for This Month
- Build an AI use case inventory.
- Run a DPIA for any AI that handles personal data or impacts people's rights.
- Require vendor contracts to include data protection and audit rights.
- Add human review steps where decisions materially affect people.
- Launch simple monitoring of model outputs with logging and alerts.
- Publish a short internal model card for each production model.
Common Pitfalls
- Treating AI as just a developer tool, not a legal and operational asset.
- Relying on vendor claims without audit clauses.
- Skipping DPIAs for new systems because they seem internal.
- Using monitoring data for unrelated disciplinary action without documented policy.
- Assuming a model is always correct and skipping output validation.
Frequently asked questions
What is a DPIA, and when do I need one?
A DPIA is a Data Protection Impact Assessment. Run one when your AI processes personal data at scale, profiles individuals, or makes automated decisions with legal or similarly significant effects. Document risks, mitigations, and residual risk.
