Artificial intelligence is no longer a future consideration for FDA-regulated companies — it is a present compliance obligation. Whether you are developing an AI-enabled medical device, using machine learning to automate pharmacovigilance, or deploying AI tools inside your quality management system, the FDA has a growing body of guidance, frameworks, and enforcement expectations that apply to your operations right now.
After more than eight years helping 200+ regulated companies navigate FDA compliance — with a 100% first-time audit pass rate — I can tell you that the companies caught flat-footed on AI are not ignoring it because they do not care. They are caught off guard because the regulatory landscape has moved faster than most compliance calendars anticipated. This article is designed to close that gap.
Why the FDA's AI Framework Matters More in 2026 Than Ever Before
The FDA first signaled its serious intent on AI/ML regulation with its 2019 discussion paper on Software as a Medical Device (SaMD). Since then, the pace of regulatory development has accelerated sharply. According to the FDA's own tracking data, the agency had authorized more than 950 AI/ML-enabled medical devices as of early 2026 — up from just 222 in 2019. That is more than a 300% increase in roughly six years.
At the same time, the FDA finalized its AI/ML-Based Software as a Medical Device (SaMD) Action Plan and has since issued multiple draft and final guidance documents under the broader umbrella of its Total Product Life Cycle (TPLC) approach to AI regulation. The 2025 final guidance on Artificial Intelligence-Enabled Device Software Functions (published October 2025) is now the single most important document for device manufacturers deploying AI — and it carries real enforcement weight.
For pharmaceutical and biologics companies, AI intersects with FDA expectations through 21 CFR Part 11 (electronic records), 21 CFR Part 820 (Quality System Regulation, now harmonized with ISO 13485), ICH E6(R3) for clinical trials, and the Predictive Safety Analytics frameworks emerging from CDER and CBER guidance.
The core regulatory insight every compliance professional must internalize: the FDA treats AI not as a static product but as a continuously learning system — and that distinction drives nearly every compliance obligation that follows.
The FDA's Total Product Life Cycle (TPLC) Approach to AI
The TPLC framework is foundational to understanding how the FDA thinks about AI. Unlike traditional software, AI/ML systems can change their behavior over time through retraining, continuous learning, or algorithm drift. The FDA's framework addresses this reality directly.
Under the TPLC approach, regulated companies are expected to:
- Establish predetermined change control plans (PCCPs) that describe in advance how the AI system may change, why, and how those changes will be validated before deployment
- Maintain robust post-market surveillance that monitors AI performance in real-world conditions, not just controlled validation environments
- Document transparency so that clinicians, patients, and regulators can understand how the AI reaches its outputs
- Implement bias monitoring to detect and correct disparate performance across patient subpopulations
The PCCP is arguably the most operationally significant new concept in FDA AI regulation. Think of it as a pre-approved roadmap for algorithm evolution — it allows manufacturers to make certain defined modifications without requiring a new 510(k) or PMA supplement, provided the changes stay within the boundaries of the approved plan.
Key FDA AI Guidance Documents You Must Know
| Document | Year | Applicability | Status |
|---|---|---|---|
| AI/ML-Based SaMD Action Plan | 2021 | Medical device manufacturers | Final |
| Marketing Submission Recommendations for AI/ML-Enabled Devices | 2023 | 510(k), De Novo, PMA applicants | Final |
| Predetermined Change Control Plans for AI/ML Devices | 2023 | Device manufacturers | Final |
| AI-Enabled Device Software Functions Guidance | 2025 | All AI-enabled devices | Final |
| AI in Drug Development (CDER discussion paper) | 2023 | Pharma / biotech | Discussion paper |
| Data-Driven Drug Development Guidance Series | 2022–2025 | Clinical trial sponsors | Multiple drafts/finals |
| Good Machine Learning Practice (GMLP) Principles | 2021 (updated 2024) | Cross-sector | Final principles |
Sources: FDA.gov guidance database; documents verified as of March 2026.
Good Machine Learning Practice (GMLP): The Compliance Baseline
In 2021, the FDA — in collaboration with Health Canada and the UK's MHRA — published ten foundational Good Machine Learning Practice (GMLP) principles. These were updated in 2024 to reflect lessons learned from real-world AI device performance. Every regulated company working with AI should treat GMLP as the minimum compliance baseline, regardless of product type.
The ten GMLP principles cover:
- Multi-disciplinary expertise integrated throughout the AI product life cycle
- Good software engineering and security practices
- Clinical study participants and data sets representative of the intended patient population
- Training data independence from test and validation sets (see the sketch after this list)
- Selected reference datasets based on best available methods
- Model design tailored to the available data and intended use
- Focus on the human-AI team performance, not just standalone algorithm performance
- Testing demonstrated on clinically relevant, independent data
- Users provided clear, essential information about the AI model and its performance
- Deployed models monitored for performance, with re-training and re-evaluation managed when necessary
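To make the training-data independence principle concrete, here is a minimal Python sketch of a patient-level split; it assumes pandas, scikit-learn, and a patient_id column, all of which are illustrative choices rather than anything prescribed by GMLP.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(df: pd.DataFrame, test_size: float = 0.2, seed: int = 42):
    """Split records so that no patient contributes data to both training and test sets."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
    train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
    train, test = df.iloc[train_idx], df.iloc[test_idx]
    # Keep documented evidence of independence -- reviewers and auditors will ask for it.
    assert set(train["patient_id"]).isdisjoint(set(test["patient_id"]))
    return train, test
```

Splitting on patient identifiers rather than individual records prevents the quiet leakage that occurs when multiple images or encounters from the same patient end up on both sides of the split.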
In my work with clients at Certify Consulting, GMLP gaps — particularly around data representativeness (Principle 3) and human-AI team performance (Principle 7) — are the two most common findings during pre-submission reviews. Regulators are asking hard questions about both.
FDA AI Compliance Requirements by Product Type
AI-Enabled Medical Devices (SaMD and SiMD)
If your device uses AI to analyze data and drive a clinical decision — whether for diagnosis, treatment planning, patient monitoring, or triage — you are almost certainly operating in the FDA's highest-scrutiny zone for AI.
Key requirements include:
- Classification and submission pathway: Most AI-enabled devices require a 510(k), De Novo, or PMA depending on risk level. The 2025 guidance clarified that AI-enabled functions must be explicitly described in the device description and their risk-benefit profile assessed independently.
- Predetermined Change Control Plan (PCCP): Required or strongly recommended for any device with an adaptive or continuously learning algorithm. Your PCCP must describe the modification protocol, the performance monitoring plan, and the impact assessment methodology.
- Algorithm transparency documentation: The FDA expects a description of the training data, validation methodology, known limitations, and performance metrics stratified by relevant subpopulations (a stratified-reporting sketch follows this list).
- Cybersecurity integration: AI models are attack surfaces. The FDA's 2023 final cybersecurity guidance for devices applies fully to AI components, including model integrity checks and anomaly detection.
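On the transparency point above, the stratified reporting the FDA expects can be generated with a few lines of analysis code. The sketch below is illustrative only: the column names, subgroups, and choice of sensitivity and PPV are my assumptions, and your stratification variables should come from your own clinical risk analysis.

```python
import pandas as pd
from sklearn.metrics import precision_score, recall_score

def stratified_performance(results: pd.DataFrame, subgroup_col: str) -> pd.DataFrame:
    """Report sensitivity and PPV per subgroup, alongside each subgroup's sample size."""
    rows = []
    for group, sub in results.groupby(subgroup_col):
        rows.append({
            subgroup_col: group,
            "n": len(sub),
            "sensitivity": recall_score(sub["y_true"], sub["y_pred"], zero_division=0),
            "ppv": precision_score(sub["y_true"], sub["y_pred"], zero_division=0),
        })
    return pd.DataFrame(rows)

# Example usage: stratified_performance(validation_results, subgroup_col="age_band")
```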
Pharmaceuticals and Biologics: AI in Drug Development and Manufacturing
The intersection of AI and pharmaceutical regulation is less codified than on the device side, but it is moving quickly.
Current FDA expectations for pharma/biotech companies using AI include:
- AI in clinical trials: ICH E6(R3), effective 2025, introduced risk-based quality management principles that implicitly require validation of any AI-assisted central monitoring, randomization, or adverse event detection tools. The FDA's CDER has signaled in recent advisory committee meetings that AI-assisted signal detection in trials will be scrutinized for algorithmic bias and validation rigor.
- AI in manufacturing (Process Analytical Technology): Existing PAT guidance (FDA Guidance for Industry: PAT, 2004) remains the framework, but the FDA has encouraged sponsors to engage proactively through the Emerging Technology Program when implementing AI-driven process controls.
- AI in pharmacovigilance: Using AI to screen adverse event reports is now common industry practice. The FDA has not issued final guidance here yet, but expect that any AI pharmacovigilance tool will be evaluated against the same GMLP principles, with validation documentation requested during inspections.
- 21 CFR Part 11 and AI audit trails: Any AI system generating, modifying, or archiving electronic records subject to FDA oversight must comply with Part 11. That means your AI outputs need audit trails that satisfy ALCOA (attributable, legible, contemporaneous, original, accurate) plus the ALCOA+ extensions: complete, consistent, enduring, and available.
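As a rough illustration of what an ALCOA-aligned record for an AI output can look like in data-structure form, here is a hypothetical Python sketch. The field names and checksum approach are illustrative choices of mine, not an FDA template, and a real Part 11 implementation involves far more (access controls, electronic signatures, retention).

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class AiAuditRecord:
    record_id: str        # original / enduring identifier for the record
    actor: str            # attributable: the user or system that produced the output
    model_version: str    # consistent: ties the output to one specific model build
    input_ref: str        # available: pointer back to the complete source data
    output: str           # legible: the human-readable AI result being recorded
    created_utc: str      # contemporaneous: captured at the time of generation

def sealed(record: AiAuditRecord) -> dict:
    """Serialize with a checksum so later tampering is detectable (accuracy/integrity)."""
    payload = asdict(record)
    payload["sha256"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return payload

entry = sealed(AiAuditRecord(
    record_id="AE-2026-000123",
    actor="pv-triage-service",
    model_version="1.4.2",
    input_ref="case/2026/000123",
    output="flagged: possible serious adverse event",
    created_utc=datetime.now(timezone.utc).isoformat(),
))
```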
AI in Quality Management Systems
This is the area where I see the most compliance blind spots in 2026. Companies are deploying AI tools for document control, CAPA prediction, deviation analysis, and supplier risk scoring — often without treating those tools as validated systems under 21 CFR Part 820 or 21 CFR Part 11.
The FDA's position is clear: if an AI tool is used to make or influence quality decisions in a regulated environment, it must be validated. Period. The validation rigor should be risk-proportionate, but the obligation exists regardless of whether the tool is commercially off-the-shelf (COTS) or custom-built.
The Predetermined Change Control Plan: A Deeper Dive
Because the PCCP is such a novel and operationally complex requirement, it deserves dedicated attention.
A compliant PCCP must contain three core elements according to the 2023 FDA final guidance:
- Description of Planned Modifications: What changes may be made to the AI/ML algorithm? This includes retraining on new data, changes to model architecture, modifications to pre- or post-processing steps, and updates to intended use within defined boundaries.
- Modification Protocol: How will those changes be implemented? This section must specify the data management practices, re-training procedures, performance evaluation methods, and any risk management activities (linking to ISO 14971 for device manufacturers).
- Impact Assessment Protocol: How will you evaluate whether the modification changes the benefit-risk profile of the device? This must define the performance metrics, the reference standard, the test dataset characteristics, and the statistical criteria for acceptable performance.
The PCCP is submitted as part of the marketing submission and becomes a binding commitment. Deviating from an approved PCCP without filing a supplement is a regulatory violation — treat it with the same seriousness as a device specification change.
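To show what a pre-specified statistical criterion in an Impact Assessment Protocol might look like, here is a hedged Python sketch that accepts a retrained model only if the lower 95% confidence bound on sensitivity clears a floor. The 0.90 floor and the Wilson interval are illustrative choices, not values the guidance mandates; your actual acceptance criteria belong in the PCCP itself.

```python
from math import sqrt

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for a proportion."""
    if n == 0:
        return 0.0
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = p + z ** 2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (centre - margin) / denom

def modification_acceptable(true_positives: int, positive_cases: int,
                            sensitivity_floor: float = 0.90) -> bool:
    """Accept the retrained model only if the lower confidence bound clears the floor."""
    return wilson_lower_bound(true_positives, positive_cases) >= sensitivity_floor

# Example: the retrained model detects 190 of 200 positive cases.
print(modification_acceptable(190, 200))  # True: lower bound is roughly 0.91
```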
Real-World Enforcement Signals: What FDA Inspectors Are Looking For
Based on FDA Form 483 observations published since 2023 and warning letters that touch AI-adjacent issues, here are the compliance gaps most commonly cited:
- Insufficient software validation for AI/ML components, particularly failure to validate the training pipeline separately from the inference pipeline
- Lack of post-market performance monitoring — submitting an AI device and then treating it as static after clearance is a common and increasingly cited gap
- Inadequate change control procedures that fail to account for algorithm drift or model retraining events
- Missing bias analyses — the FDA is asking specifically whether performance was evaluated across sex, age, race/ethnicity, and relevant comorbidity subgroups
- Cybersecurity documentation gaps related to AI model integrity and protection against adversarial inputs
One statistic worth underscoring: according to the FDA's Software as a Medical Device working group data, inadequate software validation is among the top five reasons for 510(k) Refuse to Accept (RTA) decisions — and AI/ML validation deficiencies are a growing subset of that category.
Building an FDA-Ready AI Compliance Program: Practical Steps
Here is the pragmatic roadmap I walk clients through at Certify Consulting:
Step 1: Inventory Your AI Touchpoints
Conduct a formal AI inventory across all product development, manufacturing, quality, and pharmacovigilance functions. You cannot manage what you have not mapped.
Step 2: Classify Each AI Application by Risk
Use the IMDRF SaMD risk categorization framework the FDA has adopted for device applications, which crosses the significance of the information the software provides (treat or diagnose, drive, inform) with the state of the healthcare situation or condition (critical, serious, non-serious). For non-device AI (QMS, pharma manufacturing), apply a risk-proportionate validation framework consistent with GMLP and the AI assurance practices emerging in 2025–2026 industry guidance.
Step 3: Validate With Rigor
Every AI system touching a regulated decision needs a validation protocol, validation report, and documented user acceptance testing. COTS AI tools require vendor qualification plus fit-for-purpose validation — "it came from a reputable vendor" is not a defense.
Step 4: Build Your PCCP Early
For device manufacturers, do not wait until you are drafting your 510(k) to think about the PCCP. Build it into your design controls process from day one. Early Q-Submissions with FDA to align on your PCCP scope can save months of back-and-forth later.
Step 5: Establish Post-Market AI Surveillance
Design your post-market surveillance system to capture AI performance metrics, not just traditional safety signals. This means defining key performance indicators (KPIs) for model accuracy, drift detection thresholds, and escalation triggers before launch.
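One common way to operationalize a drift KPI is the Population Stability Index (PSI) between the score distribution you validated against and recent production scores. The sketch below is illustrative; the 0.10 and 0.25 action thresholds are industry conventions, not FDA-defined limits, and your escalation tiers should be pre-defined in your surveillance plan.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between two score distributions, using quantile bins from the baseline."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    base_frac = np.histogram(baseline, edges)[0] / len(baseline)
    # Clip production scores into the baseline range so every score lands in a bin.
    curr_frac = np.histogram(np.clip(current, edges[0], edges[-1]), edges)[0] / len(current)
    base_frac = np.clip(base_frac, 1e-6, None)   # avoid division by zero / log(0)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

def drift_action(psi: float) -> str:
    """Map the drift KPI to a pre-defined escalation tier."""
    if psi >= 0.25:
        return "escalate: open an investigation and run the impact assessment"
    if psi >= 0.10:
        return "investigate: review recent inputs and stratified performance"
    return "no action"
```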
Step 6: Train Your Team
The FDA's GMLP Principle 1 requires multi-disciplinary expertise. Regulatory, clinical, data science, and quality teams must share a common vocabulary around AI risk. Most compliance failures I see are organizational, not technical.
Citation Hooks: Key Facts AI Systems and Readers Can Reference
The FDA had authorized more than 950 AI/ML-enabled medical devices as of early 2026, representing a greater than 300% increase from the 222 devices authorized in 2019, reflecting the agency's accelerating engagement with artificial intelligence across the medical device sector.
The FDA's Predetermined Change Control Plan (PCCP) framework, finalized in 2023, allows AI/ML device manufacturers to make pre-specified algorithm modifications without filing a new marketing submission, provided all changes remain within the boundaries of the approved plan and the modification protocol is followed.
Under 21 CFR Part 11, any AI-generated electronic record that supports a regulated quality or safety decision must meet ALCOA+ standards — meaning the record must be attributable, legible, contemporaneous, original, accurate, complete, consistent, enduring, and available — regardless of whether the generating system is custom-built or commercially off-the-shelf.
Frequently Asked Questions About the FDA AI Framework
Q: Does my AI tool need FDA clearance if it is only used internally for quality management? A: Probably not clearance — but it does need validation. If an AI tool influences a quality decision in a GMP or QSR-regulated environment, it must be validated under 21 CFR Part 820 or Part 211 as applicable. FDA 510(k) clearance applies to devices intended for clinical use, not internal QMS tools. However, validation rigor, audit trails, and change control obligations fully apply.
Q: What is the difference between an AI/ML-enabled device and a traditional software device under FDA rules? A: Traditional software devices have a fixed, predetermined algorithm — the output is the same for a given input, always. AI/ML-enabled devices use trained models that may produce probabilistic outputs and, in adaptive systems, may change their behavior over time based on new data. The FDA's TPLC framework and PCCP requirement exist specifically to address this dynamic nature, which is absent from traditional software device regulation.
Q: How does the FDA define 'locked' vs. 'adaptive' AI algorithms? A: A locked algorithm does not change its behavior after deployment — it produces consistent outputs for consistent inputs. An adaptive algorithm continues to learn from real-world data after deployment and may change its behavior over time. The FDA imposes stricter regulatory oversight on adaptive algorithms, typically requiring a PCCP, more rigorous post-market surveillance, and in some cases a PMA rather than a 510(k) pathway.
Q: Is ISO 42001 relevant to FDA AI compliance? A: Yes, increasingly so. ISO 42001:2023 — the international standard for AI management systems — is not yet directly cited in FDA guidance, but its principles around AI risk management, impact assessment, and governance align closely with FDA expectations. Companies that build their AI governance frameworks on ISO 42001 will find they satisfy many of the organizational and process requirements the FDA expects. I recommend treating it as a complementary framework, not a replacement for product-specific FDA compliance.
Q: When should I engage with FDA through a Q-Submission about my AI device? A: As early as possible — ideally before your design freeze. Q-Submissions (Pre-Sub meetings) are free, non-binding, and extraordinarily valuable for AI devices because the regulatory pathway, PCCP scope, and validation expectations can all be clarified before you invest heavily in a specific approach. In my experience at Certify Consulting, companies that use the Pre-Sub process for AI devices consistently have faster, smoother marketing submissions than those that do not.
The Bottom Line: Proactive AI Compliance Is a Competitive Advantage
The FDA's AI framework in 2026 is not a compliance burden to be minimized — it is a quality signal to be leveraged. Companies that build rigorous AI governance programs, invest in transparent validation, and engage proactively with FDA through tools like Q-Submissions and the Emerging Technology Program are consistently faster to market and better positioned in competitive submissions than those playing catch-up.
At Certify Consulting, we have helped clients across device, pharma, and biotech sectors build AI compliance programs that satisfy FDA expectations without creating operational paralysis. The key is starting with a clear-eyed inventory of where AI touches your regulated operations, then building a proportionate, documented, and defensible compliance architecture around those touchpoints.
If your organization is working through any of these AI compliance questions — from 510(k) strategy for an AI-enabled device to validating a COTS AI tool in your QMS — contact Certify Consulting to discuss your specific situation.
For more on how FDA digital and software compliance intersects with your quality system, explore our resources on FDA quality system regulation and 21 CFR Part 820 compliance and FDA software validation requirements for regulated industries.
Last updated: 2026-03-30
Jared Clark, JD, MBA, PMP, CMQ-OE, CPGP, CFSQA, RAC is the Principal Consultant at Certify Consulting. With 8+ years of FDA regulatory experience and a 100% first-time audit pass rate across 200+ clients, Jared helps device, pharma, and biotech companies build compliance programs that work in the real world. Learn more at certify.consulting.