On January 6, 2025, the FDA released a draft guidance titled "Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products." It outlines the information the FDA may require when evaluating AI use in drug development, with a focus on establishing model credibility in contexts that affect patient safety, drug quality, or clinical reliability.
This post breaks down the guidance's risk framework, required disclosures, and innovation opportunities, and explains how quality and regulatory teams can stay aligned with the FDA's expectations.
Defining the Question of Interest
The first step in the FDA's framework is to identify the question of interest that the AI model is intended to address. Examples include:
- Selecting participants for clinical trials (e.g., inclusion and exclusion criteria).
- Classifying risks associated with trial participants.
- Analyzing clinical outcomes.
- Improving quality control in drug manufacturing processes.
By clearly defining this question, stakeholders can align their AI models with specific regulatory and operational goals.
Contexts of Use
The guidance introduces contexts of use, which define the scope and role of AI in addressing the identified question. Examples include:
- Clinical trial design and management.
- Evaluating patients during trials.
- Analyzing clinical trial data.
- Ensuring pharmaceutical manufacturing quality.
- Using digital health technologies in drug development.
- Generating real-world evidence (RWE).
- Monitoring drug life cycles.
These contexts determine the potential risks associated with the AI model and the level of credibility required.
Key Considerations:
- AI models with greater influence on decision-making require more comprehensive evaluations.
- Human oversight in processes involving AI can reduce risks and disclosure burdens.
Risk Framework for Information Disclosure
The FDA's framework evaluates risks based on:
- Model Influence Risk: How significantly the AI model affects decision-making.
- Decision Consequence Risk: The potential impact of those decisions, particularly on patient safety.
Disclosure Requirements:
- High-Risk Models: Comprehensive information about architecture, training data, validation methods, and performance metrics is required.
- Low-Risk Models: Less detailed information suffices.
For instance, AI models managing clinical trial participants' safety are considered high-risk and must meet stringent transparency requirements. Conversely, models supporting non-clinical activities may have reduced disclosure demands.
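The two risk dimensions above can be pictured as a simple matrix that maps a model to a disclosure tier. The sketch below is illustrative only: the FDA guidance does not prescribe a scoring formula, and the tier labels and combination rule are assumptions made for this example.

```python
# Illustrative only: the FDA guidance does not define a numeric scoring
# scheme. The level mappings and thresholds below are assumptions.

MODEL_INFLUENCE = {"low": 1, "medium": 2, "high": 3}
DECISION_CONSEQUENCE = {"low": 1, "medium": 2, "high": 3}

def risk_tier(model_influence: str, decision_consequence: str) -> str:
    """Combine the two risk dimensions into a hypothetical disclosure tier."""
    score = MODEL_INFLUENCE[model_influence] * DECISION_CONSEQUENCE[decision_consequence]
    if score >= 6:
        return "high"    # comprehensive disclosure: architecture, data, validation
    if score >= 3:
        return "medium"  # intermediate disclosure
    return "low"         # reduced disclosure

# A model that directly manages participant safety: high influence, high consequence.
print(risk_tier("high", "high"))   # -> high
print(risk_tier("low", "medium"))  # -> low
```

The point of the matrix is that neither dimension alone determines the disclosure burden: a highly influential model used for a low-consequence decision may still warrant only modest documentation.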
Establishing AI Model Credibility
To establish credibility, the FDA recommends providing details on:
- Model Description: Including architecture and algorithms.
- Training Data: Addressing data sources, quality, and fitness for purpose.
- Validation Processes: Demonstrating accuracy, reliability, and bias detection.
- Life Cycle Maintenance: Ensuring model outputs remain credible as inputs or conditions change.
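The four disclosure areas above lend themselves to a structured record that teams can audit for completeness. This is a minimal sketch; the field names and the record format are assumptions for illustration, not a schema the guidance defines.

```python
from dataclasses import dataclass

@dataclass
class CredibilityRecord:
    """Hypothetical disclosure record covering the four areas the FDA lists.
    The structure is illustrative, not a format the guidance prescribes."""
    model_description: str  # architecture and algorithms
    training_data: str      # sources, quality, fitness for purpose
    validation: str         # accuracy, reliability, bias detection
    lifecycle_plan: str     # how credibility is maintained over time

    def missing_sections(self) -> list[str]:
        """Return any disclosure areas left empty."""
        return [name for name, value in vars(self).items() if not value.strip()]

record = CredibilityRecord(
    model_description="Gradient-boosted classifier for participant risk scoring",
    training_data="De-identified EHR data, 2018-2023; quality checks documented",
    validation="Held-out test set; subgroup performance reviewed for bias",
    lifecycle_plan="",
)
print(record.missing_sections())  # -> ['lifecycle_plan']
```

A completeness check like this is useful precisely because the life-cycle section is the easiest to neglect once a model is deployed.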
Intellectual Property (IP) Considerations
The guidance's transparency requirements create challenges for maintaining trade secrets. Stakeholders are advised to:
- Patent AI Innovations: Protect architectures, training methods, and validation processes.
- Use Trade Secrets Strategically: For AI models unrelated to patient safety or drug quality, consider withholding proprietary details from public disclosures.
Opportunities for Innovation
The FDA's rigorous requirements open doors for technological advancements, including:
- Explainable AI (XAI): Models that clarify decision-making processes.
- Bias Detection Systems: Tools for identifying and mitigating bias in training data.
- Automated Monitoring: Systems to track and validate AI performance over time.
- Real-World Data Integration: Methods to enhance AI models using diverse datasets.
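Automated monitoring, the third opportunity above, can start from something as simple as comparing recent model outputs against a validated baseline. The sketch below uses an absolute-mean check as a deliberately simple stand-in; a real monitoring program would use formal drift statistics (e.g., a population stability index or a Kolmogorov-Smirnov test), and the threshold shown is an arbitrary assumption.

```python
import statistics

def drift_alert(baseline: list[float], recent: list[float], threshold: float = 0.1) -> bool:
    """Flag when the mean model output shifts beyond a tolerance.
    The 0.1 threshold is an arbitrary illustration, not a recommended value."""
    return abs(statistics.mean(recent) - statistics.mean(baseline)) > threshold

# Hypothetical risk scores from validation time vs. current production data.
baseline_scores = [0.42, 0.45, 0.40, 0.44, 0.43]
recent_scores = [0.61, 0.58, 0.63, 0.60, 0.59]
print(drift_alert(baseline_scores, recent_scores))  # -> True
```

Checks like this tie directly back to the guidance's life-cycle maintenance expectation: a model's credibility case is only as durable as the evidence that its inputs and outputs still behave as they did at validation.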

Written by
Atlas Team
The Atlas team brings together expertise in FDA regulatory intelligence, pharmaceutical quality systems, and inspection data analytics.