
FDA AI-Enabled Device Software Functions: Understanding the January 2025 TPLC Draft Guidance for Lifecycle Management, Bias Assessment, and Marketing Submissions

A deep analysis of FDA's January 2025 draft guidance on AI-Enabled Device Software Functions — total product lifecycle approach, data management requirements, bias mitigation strategies, transparency expectations, marketing submission content, and practical steps manufacturers should take before the guidance is finalized.

Ran Chen
Global MedTech Expert | 10× MedTech Global Access
2026-05-02 · 14 min read

Why This Guidance Matters

On January 7, 2025, the FDA published its most comprehensive document yet on artificial intelligence in medical devices: the draft guidance "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations." This is the first FDA guidance to address the total product life cycle (TPLC) of AI-enabled devices from end to end — from initial design through post-market monitoring — with specific, actionable recommendations for what to include in marketing submissions.

The guidance had been anticipated for over a year. It appeared on FDA's "A list" of priority guidances for fiscal year 2024, was transferred to the FY2025 list when that deadline was missed, and was finally published in January 2025. When finalized, it will fundamentally change how manufacturers develop, document, submit, and monitor AI-enabled medical devices.

This guide explains the guidance's key recommendations, how they differ from existing FDA frameworks, and what manufacturers should do now to prepare — even while the guidance remains in draft form.

How This Guidance Relates to Existing FDA AI Frameworks

The FDA has been building its AI regulatory framework incrementally since 2019. Understanding where this guidance fits within the broader landscape is essential.

| Document | Date | Focus |
| --- | --- | --- |
| Proposed regulatory framework for AI/ML SaMD | April 2019 | Initial discussion paper on AI device regulation |
| Good Machine Learning Practice (GMLP) guiding principles | October 2021 | 10 principles for ML device development |
| Draft PCCP guidance | April 2023 | Pre-approved change control plans for AI devices |
| PCCP guiding principles (international) | October 2023 | Global principles for predetermined change control |
| Transparency guiding principles for ML-enabled devices | June 2024 | Transparency expectations for ML devices |
| TPLC draft guidance (this article) | January 2025 | Full lifecycle management and submission recommendations |
| Final PCCP guidance | August 2025 | Finalized change control plan framework |

The TPLC draft guidance is the umbrella document that ties together all prior work. It does not replace the PCCP guidance — instead, it provides the broader context within which PCCPs operate. Where the PCCP guidance addresses how to manage future algorithm changes, the TPLC guidance addresses how to design, develop, validate, document, and monitor the AI system throughout its entire life.

Key Definitions

The guidance introduces important terminology that manufacturers must understand:

AI-Enabled Device: A device that includes one or more AI-enabled device software functions (AI-DSFs). The AI component is part of the device's mechanism of action.

AI-DSF (AI-Enabled Device Software Function): A device software function that implements artificial intelligence. This includes machine learning models, deep learning systems, and other AI techniques embedded in the device.

Model: The algorithm and the data used to train it. FDA considers the model part of the device's mechanism of action, not just its implementation.

Data Management: The practices for collecting, curating, labeling, and managing data used in model training, validation, and testing.


The Total Product Life Cycle Approach

The FDA's TPLC approach requires manufacturers to manage AI device risks from initial concept through decommissioning. The guidance organizes its recommendations across these lifecycle stages:

1. Design and Development

The guidance recommends that manufacturers integrate AI-specific considerations into design controls from the earliest stages, including:

  • Intended use clarity: Explicitly state what the AI component does, how it contributes to the device's overall function, and its role in the clinical workflow
  • Model architecture selection: Document why a particular AI approach was chosen and its limitations
  • Data strategy: Define the data requirements (quantity, quality, diversity, representativeness) needed to train and validate the model for the intended use population
  • Risk management: Address AI-specific risks such as model overfitting, data drift, adversarial inputs, and automation bias

2. Data Management

This is the most detailed section of the guidance and reflects FDA's core concern: data quality determines AI device safety. The guidance states that "reviewers will evaluate the quality, diversity, and quantity of data used to test an AI-DSF to evaluate the safety and effectiveness of the AI-enabled device."

Key recommendations:

Data Collection and Curation

  • Document the source, collection methodology, and curation process for all training, validation, and test datasets
  • Demonstrate alignment between collected data and the device's intended use and target population
  • Maintain sufficient segregation between training and validation datasets to prevent data leakage (a patient-level split sketch follows this list)
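
A minimal sketch of patient-level dataset segregation, assuming a pandas DataFrame with hypothetical "patient_id" and "label" columns. Splitting by patient rather than by individual record keeps any one patient's data out of more than one partition, which is how leakage typically arises in medical imaging datasets:

```python
# Patient-level train/validation/test split -- a minimal sketch, assuming a
# DataFrame `df` with hypothetical columns "patient_id" and "label".
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(df: pd.DataFrame, seed: int = 42):
    # Hold out 20% of patients (not 20% of rows) as the test set.
    outer = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    dev_idx, test_idx = next(outer.split(df, groups=df["patient_id"]))
    dev, test = df.iloc[dev_idx], df.iloc[test_idx]

    # Split the remaining patients into training (75%) and validation (25%).
    inner = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=seed)
    train_idx, val_idx = next(inner.split(dev, groups=dev["patient_id"]))
    train, val = dev.iloc[train_idx], dev.iloc[val_idx]

    # Sanity check: no patient may appear in more than one partition.
    assert not set(train["patient_id"]) & set(val["patient_id"])
    assert not set(dev["patient_id"]) & set(test["patient_id"])
    return train, val, test
```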

Data Representativeness

  • Ensure datasets represent the diversity of the intended use population across relevant demographic dimensions (race, ethnicity, sex, age, geography)
  • Document and justify any gaps in demographic representation
  • Identify potential confounders — for example, if all diseased cases come from one scanner type or clinical site, the model may learn scanner artifacts rather than disease features (a confounder-check sketch follows this list)
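
One way to screen for the scanner- and site-confounding risk described above is a contingency check between the disease label and each acquisition variable. A minimal sketch, assuming hypothetical "label" and "scanner_model" columns:

```python
# Confounder screen -- a minimal sketch. A strong association between the
# label and an acquisition variable warns that the model could learn the
# confounder (e.g., scanner artifacts) instead of disease features.
import pandas as pd
from scipy.stats import chi2_contingency

def confounder_check(df: pd.DataFrame, label_col: str, site_col: str) -> float:
    table = pd.crosstab(df[label_col], df[site_col])
    chi2, p_value, dof, _ = chi2_contingency(table)
    print(table)      # inspect how labels distribute across sites/scanners
    return p_value    # a very small p-value suggests confounding
```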

Data Quality

  • Implement data quality controls throughout the collection and annotation process
  • Document data cleaning, preprocessing, and augmentation steps
  • Maintain traceability from raw data to processed inputs (a lineage-log sketch follows this list)
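
One way to implement the traceability recommendation is an append-only lineage log in which each preprocessing step records content hashes of its input and output files. A minimal sketch; the log layout and field names are illustrative, not prescribed by the guidance:

```python
# Raw-to-processed lineage log -- a minimal sketch using content hashes.
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_step(step_name: str, src: Path, dst: Path, log: Path) -> None:
    entry = {
        "step": step_name,                 # e.g., "resample", "normalize"
        "input": str(src), "input_sha256": sha256(src),
        "output": str(dst), "output_sha256": sha256(dst),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")  # append-only audit trail
```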

3. Bias Mitigation

The guidance treats bias mitigation as a safety requirement, not an aspirational goal. AI bias is defined as "a potential tendency to produce incorrect results in a systematic, but sometimes unforeseeable, way due to limitations in the training data or erroneous assumptions in the machine learning process."

FDA's bias framework requires manufacturers to:

  • Identify sources of bias in data collection, labeling, model architecture, and clinical deployment
  • Mitigate bias through representative data, diverse development teams, and bias testing protocols
  • Evaluate bias by measuring model performance across relevant subgroups (demographics, clinical conditions, device settings); a subgroup evaluation sketch follows this list
  • Monitor bias post-market through ongoing subgroup performance tracking
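
A minimal sketch of the subgroup evaluation step, assuming a results table with hypothetical "y_true", "y_pred", and "subgroup" columns. Reporting sensitivity and specificity per subgroup surfaces disparities that an aggregate metric would hide:

```python
# Per-subgroup performance report -- a minimal sketch for a binary classifier.
import pandas as pd
from sklearn.metrics import confusion_matrix

def subgroup_performance(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby("subgroup"):
        tn, fp, fn, tp = confusion_matrix(
            sub["y_true"], sub["y_pred"], labels=[0, 1]
        ).ravel()
        rows.append({
            "subgroup": group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)
```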

The guidance provides specific examples of bias pathways:

| Bias Source | Example | Risk |
| --- | --- | --- |
| Training data composition | All diseased cases from one scanner type | Model learns scanner artifacts, not disease |
| Geographic data bias | Training data collected outside the US (OUS) | Model performs poorly on US population due to demographic differences |
| Label inconsistency | Different annotators using different criteria | Model learns inconsistent patterns |
| Underrepresentation | Certain demographics absent from training data | Model fails for underrepresented groups |
| Overfitting to training site | Data from few clinical sites | Model doesn't generalize to new settings |

4. Transparency

Transparency is presented as both a design principle and a regulatory requirement. The guidance recommends:

For Healthcare Providers

  • Clear explanation of what the AI does and does not do
  • Description of the clinical evidence supporting the AI function
  • Disclosure of known limitations and failure modes
  • Information about how the AI output should be used in clinical decision-making
  • Description of how the model was validated across patient subgroups

For Patients

  • Plain-language description of the AI's role in their care
  • Explanation of how the AI output may affect treatment decisions
  • Information about data privacy and security practices

Model Cards

The guidance encourages the use of model cards — structured summaries that document a model's intended use, performance metrics, limitations, and evaluation results. Model cards provide a standardized format for organizing transparency information.
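
A minimal sketch of a model card as a structured record. The schema below is illustrative only; the guidance encourages model cards but does not prescribe field names:

```python
# Model card as a structured record -- a minimal, hypothetical sketch.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    device_name: str
    intended_use: str
    model_class: str                     # e.g., "convolutional neural network"
    training_data_summary: str
    performance_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    device_name="ExampleCAD",            # placeholder device name
    intended_use="Adjunctive detection of <finding> on chest CT",
    model_class="convolutional neural network",
    training_data_summary="Placeholder: sites, scanner models, sample size",
    performance_metrics={"AUC": "see validation report"},
    known_limitations=["Not validated for pediatric patients (placeholder)"],
)
print(json.dumps(asdict(card), indent=2))   # export for labeling and review
```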

5. Marketing Submission Content

This section provides specific recommendations for what to include in 510(k), De Novo, PMA, HDE, and BLA submissions for AI-enabled devices.

Device Description

  • Declaration that AI is used in the device and its role in the intended use
  • Description of the AI model class (e.g., convolutional neural network, transformer)
  • Comparison of training, validation, and test datasets
  • Statistical confidence levels for predictions, including uncertainty metrics

Software Documentation

  • Software description including architecture, inputs, outputs, and decision logic
  • Model development and training methodology
  • Data management practices
  • Performance validation results
  • Risk analysis addressing AI-specific hazards

Performance Testing

  • Validation study design and statistical methods
  • Performance metrics (sensitivity, specificity, PPV, NPV, AUC, etc.); a metric-panel sketch follows this list
  • Subgroup performance analysis across demographics
  • Failure mode analysis and error characterization
  • Comparison to clinical standard of care
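
A minimal sketch of the metric panel listed above, assuming arrays of ground-truth labels, binary predictions, and continuous model scores, with both classes present in the test set:

```python
# Performance metric panel -- a minimal sketch for a binary classifier.
from sklearn.metrics import confusion_matrix, roc_auc_score

def metric_panel(y_true, y_pred, y_score) -> dict:
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "auc": roc_auc_score(y_true, y_score),
    }
```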

Post-Market Monitoring Plan

  • Performance monitoring strategy
  • Data collection and analysis plan
  • Thresholds for triggering corrective action
  • PCCP (if applicable) for managing future model updates

6. Post-Market Performance Monitoring

The guidance recommends that manufacturers describe in their marketing submissions how they will monitor AI device performance after deployment. This includes:

  • Performance drift detection: Monitoring whether the model's accuracy changes over time as the patient population or clinical practice evolves (a drift-monitor sketch follows this list)
  • Subgroup performance tracking: Ongoing evaluation of performance across demographic groups to detect emerging disparities
  • Real-world data collection: Gathering data on how the device performs in routine clinical use, beyond the controlled study setting
  • Corrective action protocols: Predefined thresholds and procedures for when monitoring reveals performance degradation
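
A minimal sketch of rolling-window drift detection. The window size and sensitivity floor are placeholders; real thresholds belong in the monitoring plan and, where applicable, the PCCP modification protocol:

```python
# Rolling-window drift monitor -- a minimal sketch for a binary classifier.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 500, sensitivity_floor: float = 0.85):
        self.cases = deque(maxlen=window)           # recent post-market cases
        self.sensitivity_floor = sensitivity_floor  # placeholder threshold

    def add_case(self, y_true: int, y_pred: int) -> None:
        self.cases.append((y_true, y_pred))

    def check(self) -> bool:
        positives = [(t, p) for t, p in self.cases if t == 1]
        if len(positives) < 30:    # too few confirmed positives to estimate
            return False
        sensitivity = sum(p for _, p in positives) / len(positives)
        # True means the corrective-action protocol should be triggered.
        return sensitivity < self.sensitivity_floor
```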

The guidance encourages manufacturers to use PCCPs to manage post-market model updates, linking the monitoring plan to the PCCP modification protocol.

Human Factors Considerations for AI Devices

The guidance includes recommendations for human factors engineering specific to AI-enabled devices, which a February 2026 Emergo by UL analysis expands on:

Automation Bias

Manufacturers must address the risk that users may over-rely on AI recommendations — accepting outputs without appropriate critical evaluation. The guidance recommends:

  • Designing user interfaces that present AI outputs as decision support, not directives
  • Including confidence levels and uncertainty indicators alongside AI recommendations (a display sketch follows this list)
  • Training materials that explain the AI's limitations and appropriate use
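
A minimal sketch of presenting an AI output as decision support rather than a directive; the thresholds and wording are hypothetical, not drawn from the guidance:

```python
# Decision-support display string -- a minimal, hypothetical sketch.
def format_ai_output(prediction: str, confidence: float,
                     in_validated_range: bool) -> str:
    msg = f"AI suggestion: {prediction} (confidence {confidence:.0%})"
    if confidence < 0.70:              # placeholder low-confidence threshold
        msg += " | Low confidence: independent review advised"
    if not in_validated_range:
        msg += " | Warning: input outside validated parameters"
    return msg + " | Decision support only; clinical judgment required."
```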

User Interface Design

  • Display relevant context for AI outputs (what data was used, how the output was generated)
  • Provide clear indication when the AI is operating outside its validated parameters
  • Design override mechanisms that are intuitive and accessible

Validation Testing

  • Human factors validation should include scenarios where the AI produces incorrect or uncertain outputs
  • Test participants should include the range of intended users (specialists, general practitioners, nurses)
  • Evaluate whether users correctly identify and respond to AI errors

What This Guidance Does NOT Cover

Understanding the boundaries is important:

  • It does not replace existing software guidance. Manufacturers must still comply with IEC 62304, FDA's software guidance, and device-specific requirements.
  • It does not establish legally binding requirements. As a draft guidance, it represents FDA's current thinking but is not for implementation until finalized.
  • It does not address AI used in drug development. FDA issued separate draft guidance on AI in drug and biological product development.
  • It does not provide specific threshold standards for subgroup performance or bias metrics. The guidance encourages monitoring but does not prescribe pass/fail criteria.
  • It does not comprehensively address continuously learning systems. The guidance acknowledges adaptive AI but leaves ambiguity for models that update autonomously post-approval.

How This Differs From the PCCP Guidance

| Dimension | PCCP Guidance (Final, August 2025) | TPLC Draft Guidance (January 2025) |
| --- | --- | --- |
| Scope | Pre-approved future modifications to AI models | Full device lifecycle from design to decommissioning |
| Focus | Change control — what changes can be made without resubmission | Documentation, bias, transparency, performance, monitoring |
| Submission type | Specific component within a marketing submission | Comprehensive framework covering the entire submission |
| Bias | Not specifically addressed | Central theme with detailed recommendations |
| Transparency | Referenced but not detailed | Major section with specific recommendations for HCPs and patients |
| Data management | Not addressed in detail | Most detailed section with specific recommendations |
| Post-market | PCCP modification protocol | Broader performance monitoring framework |

The two guidances are complementary. The TPLC guidance describes what your AI device should look like at every lifecycle stage; the PCCP guidance describes how to manage approved changes to it.

Organizational Capability Maturity

MD+DI's analysis of the TPLC guidance proposes a maturity framework for companies building AI device compliance capabilities:

| Capability Domain | Foundational (2026) | Scaling (2027–2028) | Leading (2029–2030) |
| --- | --- | --- | --- |
| AI Governance / QMS | GMLP embedded in design controls; cross-functional AI review board chartered | AI risk tiering integrated into design history files; PCCP library for major product families | Automated GMLP compliance checks in CI/CD pipelines; AI governance metrics in management review |
| PCCP Engineering | First PCCP authored and cleared via Q-Sub feedback; labeling template updated | PCCP portfolio covers 80%+ of planned model updates; change impact library maintained | PCCP-as-a-service: reusable framework enables rapid iteration for new product lines |
| Bias & Equity | Subgroup performance baseline established for first AI device | Demographic representativeness tracked across all AI product lines | Continuous fairness monitoring integrated into post-market surveillance dashboards |
| Data Management | Training data lineage documented for first submission | Standardized data curation pipeline with automated quality checks | Synthetic data generation and augmentation for rare demographic subgroups |

The most impactful organizational change in 2026 is establishing a cross-functional AI Review Board comprising Regulatory, Quality, R&D, Clinical, and IT Security leaders with authority over PCCP authorization, TPLC compliance, bias assessment, and incident escalation.

Practical Steps for Manufacturers

1. Align Development Practices Now

Even though the guidance is in draft, FDA has stated that it reflects the agency's current thinking. Manufacturers who align their development practices with these recommendations will be better positioned when the guidance is finalized — and may receive fewer clarification requests on current submissions.

2. Implement Data Management Frameworks

Invest in data governance infrastructure that enables you to document:

  • Data provenance (where data came from, how it was collected)
  • Data processing pipeline (cleaning, augmentation, preprocessing)
  • Dataset composition (demographics, clinical settings, device types)
  • Dataset segregation (training, validation, test splits)
  • Bias assessment results

3. Conduct Bias Audits

Before submitting, evaluate your model's performance across relevant subgroups. Document results transparently, including any disparities found and mitigation steps taken.

4. Develop Transparency Materials

Create model cards and user-facing documentation that clearly explain:

  • What the AI does and its role in clinical decision-making
  • The evidence supporting the AI function
  • Known limitations and failure modes
  • How to interpret AI outputs appropriately

5. Plan for Post-Market Monitoring

Design performance monitoring systems that can detect drift, track subgroup performance, and trigger corrective action when thresholds are exceeded. Consider integrating this with a PCCP for managing future updates.

6. Engage with FDA Early

The guidance explicitly encourages early engagement — through pre-submissions, Q-submissions, and direct interaction with the Digital Health Center of Excellence. Early feedback on your AI development strategy can prevent costly misalignment later.


What Industry Is Saying

The draft guidance has generated significant industry discussion. Key themes from public commentary:

  • Transparency requirements are welcomed but need more specificity on format and content
  • Bias mitigation expectations are ambitious and may be challenging for smaller companies with limited access to diverse datasets
  • Post-market monitoring recommendations lack specific thresholds and enforcement mechanisms
  • Continuously learning systems remain inadequately addressed — the guidance does not clearly define how autonomous model updating would be regulated
  • Patient-facing transparency is underdeveloped — the guidance focuses on HCP transparency but provides less detail on how patients should be informed

Timeline and Next Steps

| Date | Event |
| --- | --- |
| January 7, 2025 | Draft guidance published |
| Through April 7, 2025 | Public comments accepted (Docket FDA-2024-D-4488) |
| TBD | Comment review and revision |
| TBD | Final guidance published (expected 2026–2027) |

The guidance is not expected to be finalized before late 2026 or 2027. However, as the CenterWatch analysis noted: "To reduce the risk of costly delays, it is important to align to these FDA expectations before submission and before marketing an AI-enabled software device."

Key Takeaways

  1. This is the FDA's most detailed AI device document to date. It covers the entire product lifecycle with specific, actionable recommendations.

  2. Data management is the centerpiece. FDA will evaluate the quality, diversity, and quantity of your training and testing data as part of its safety and effectiveness assessment.

  3. Bias is a safety issue, not a diversity initiative: it is a regulatory requirement. Manufacturers must identify, mitigate, evaluate, and monitor bias throughout the device lifecycle.

  4. Transparency is required for both HCPs and patients. Model cards, clear labeling, and plain-language documentation are recommended.

  5. The guidance complements, not replaces, the PCCP framework. Use the TPLC guidance for overall lifecycle management and the PCCP guidance for managing approved future changes.

  6. Start aligning now. Even in draft form, this guidance reflects FDA's current expectations. Companies that wait for finalization risk significant rework on their AI device submissions.