MedDeviceGuide

EU AI Act for Medical Devices: Compliance Guide for MedTech (2026)

Comprehensive guide to the EU AI Act requirements for AI-enabled medical devices — covering high-risk classification, conformity assessment, the dual compliance model with MDR/IVDR, data governance, transparency, human oversight, and the August 2026 enforcement deadline.

Ran Chen
Global MedTech Expert | 10× MedTech Global Access
2026-04-07 · 14 min read

The EU AI Act and Medical Devices: A New Compliance Layer

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. For medical device manufacturers, it adds a significant new compliance layer on top of the existing Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR).

AI-enabled medical devices — including software as a medical device (SaMD) that incorporates AI or machine learning — are classified as high-risk AI systems under the AI Act. This classification triggers extensive requirements around data governance, transparency, human oversight, risk management, and post-market monitoring.

The deadlines have been evolving. The original regulation set August 2, 2026 as the date when most high-risk AI system obligations become enforceable. However, in early 2026 the European Commission proposed a "Digital Omnibus" package that would extend these deadlines. As of April 2026, both the Council and Parliament have signaled support for pushing the deadlines to December 2, 2027 for standalone high-risk systems and August 2, 2028 for AI embedded in regulated products (including medical devices). Manufacturers should monitor the final outcome but plan for the possibility of either timeline.

Which Medical Devices Are Affected?

High-Risk Classification Under the AI Act

AI-enabled medical devices qualify as high-risk AI systems if they meet two conditions:

  1. The AI system is a medical device in itself, or a safety component of a product, covered by EU harmonization legislation (MDR or IVDR)
  2. The device requires third-party conformity assessment by a Notified Body under MDR or IVDR

This means the following device classes are generally captured:

Regulation | Classes Typically Classified as High-Risk
MDR | Class IIa, IIb, and III devices incorporating AI
IVDR | Class B, C, and D IVDs incorporating AI (plus Class A IVDs that require Notified Body involvement, such as those placed on the market in sterile condition)

MDR Class I devices (and IVDR Class A devices) that do not require Notified Body involvement for conformity assessment are generally not classified as high-risk under the AI Act, though other provisions, such as the AI literacy obligation, may still apply.

What Counts as "AI" Under the Act

The AI Act defines AI broadly. It covers:

  • Machine learning models (supervised, unsupervised, reinforcement learning)
  • Deep learning and neural networks
  • Natural language processing
  • Computer vision systems
  • Expert and logic-based systems
  • Bayesian estimation approaches
  • Optimization methods

If your medical device uses any of these techniques to generate outputs (predictions, recommendations, classifications, or decisions) that influence patient care, it likely falls within scope.

The Dual Compliance Model: MDR/IVDR + AI Act

The AI Act does not replace MDR or IVDR. It adds a complementary layer focused specifically on AI system integrity and governance. Manufacturers must comply with both frameworks simultaneously.

How the Two Frameworks Interact

Aspect | MDR/IVDR | AI Act | Combined Approach
Risk management | ISO 14971 | AI-specific risk management (Art. 9) | Integrate AI risks into the ISO 14971 analysis
Clinical evaluation | Clinical evaluation report (CER) | Data governance requirements (Art. 10) | Address data quality, bias, and representativeness in the CER
Quality management | ISO 13485 | AI QMS requirements (Art. 17) | Extend the QMS to cover AI-specific processes
Post-market surveillance | PMS plan, PMCF | Post-market monitoring for AI (Art. 72) | Integrate AI performance monitoring into the PMS plan
Technical documentation | Technical file per MDR Annex II/III | Technical documentation per AI Act Annex IV | Create unified documentation addressing both
Conformity assessment | NB assessment per MDR/IVDR | May leverage the existing NB assessment | Combined assessment where possible
Labeling | CE mark, UDI | CE mark with AI-specific information | Update the IFU to include AI transparency information

Key principle: The AI Act allows integration of AI-specific testing, reporting, and documentation into existing MDR/IVDR procedures to avoid unnecessary duplication (Article 8). The European Commission's MDCG has published guidance (MDCG 2025-6) on the practical interplay between MDR/IVDR and the AI Act, which manufacturers should consult for detailed implementation direction.

Core Requirements for High-Risk AI Medical Devices (Articles 8–15)

1. Risk Management System (Article 9)

You must establish a continuous, iterative risk management system specific to AI that:

  • Identifies and analyzes known and foreseeable risks
  • Estimates risks arising when the system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse
  • Evaluates risks based on probability of occurrence and severity of harm
  • Applies appropriate risk mitigation measures
  • Provides information to users about residual risks

Integration with ISO 14971: Extend your existing ISO 14971 risk management file to include AI-specific hazards such as data drift, model degradation, adversarial attacks, and algorithmic bias. Document how AI risks interact with device safety risks.

2. Data Governance (Article 10)

This is one of the most significant new requirements. Training, validation, and testing datasets must meet stringent quality criteria:

  • Relevance: Data must be sufficiently representative of the intended patient population
  • Bias mitigation: Identify, measure, and mitigate biases in datasets
  • Quality assurance: Implement data governance practices covering design choices, data collection, data preparation, assumptions, and assessment of data quality
  • Data provenance: Document the origin, curation processes, and labeling methodology
  • Statistical properties: Examine and document statistical properties of the data

Practical implications for clinical data: If your AI was trained on data from one demographic, you may need to supplement with data from underrepresented populations to demonstrate generalizability across the EU market.
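The representativeness and bias checks described above can be operationalized as routine dataset audits. Below is a minimal sketch in Python; the subgroup labels, the sensitivity metric, and the 0.80 threshold are illustrative assumptions for this example, not values prescribed by the Act:

```python
# Sketch: per-subgroup performance audit for bias detection (Art. 10).
# Subgroup names and the 0.80 threshold are illustrative, not regulatory values.
from collections import defaultdict

def subgroup_sensitivity(records, min_sensitivity=0.80):
    """Compute sensitivity (true-positive rate) per demographic subgroup
    and flag subgroups falling below an illustrative threshold.

    Each record is (subgroup, true_label, predicted_label), with 1 = positive.
    """
    tp = defaultdict(int)  # true positives per subgroup
    fn = defaultdict(int)  # false negatives per subgroup
    for subgroup, truth, pred in records:
        if truth == 1:
            if pred == 1:
                tp[subgroup] += 1
            else:
                fn[subgroup] += 1
    report = {}
    for g in set(tp) | set(fn):
        positives = tp[g] + fn[g]
        sens = tp[g] / positives if positives else None
        report[g] = {
            "sensitivity": sens,
            "flagged": sens is not None and sens < min_sensitivity,
        }
    return report

# Toy validation set: subgroup "B" is underserved by the model
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
report = subgroup_sensitivity(records)
```

A flagged subgroup would then trigger the mitigation steps listed above, for example supplementing the training set with data from that population.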

3. Technical Documentation (Article 11 and Annex IV)

The AI Act requires comprehensive technical documentation including:

  • General description of the AI system
  • Detailed description of system design, development, and evaluation
  • Design specifications (development process, training methodology, design choices)
  • Data governance documentation (datasets, bias testing, representativeness)
  • Risk management documentation specific to AI
  • Information on testing, validation, and performance metrics
  • Human oversight measures
  • Accuracy, robustness, and cybersecurity specifications
  • Post-market monitoring plan

Integration with MDR technical file: Much of this can be integrated into your existing MDR technical file structure, but the AI-specific elements (data governance, bias testing, model performance monitoring) are additive requirements.

4. Record-Keeping and Logging (Article 12)

High-risk AI systems must automatically log events:

  • Record activity logs for the lifetime of the system
  • Ensure logging is at a level sufficient to facilitate post-market monitoring and traceability
  • Store logs in a format that allows for retrieval and analysis

For medical devices, this aligns with MDR post-market surveillance requirements but adds specific expectations around algorithmic traceability and audit trails.
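The logging expectations above can be sketched as a structured, append-only event record per inference. This is an illustrative Python sketch; the field names and the input-hashing design are assumptions, not a format mandated by the Act:

```python
# Sketch: append-only inference event log for traceability (Art. 12).
# Field names and hashing approach are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def log_inference(log, model_version, input_payload, output, confidence):
    """Record one inference event as a JSON-serializable entry.

    Hashing the input payload yields an audit trail linking output to input
    without storing raw patient data in the log itself.
    """
    payload_bytes = json.dumps(input_payload, sort_keys=True).encode()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(payload_bytes).hexdigest(),
        "output": output,
        "confidence": confidence,
    }
    log.append(entry)  # in production: durable, tamper-evident storage
    return entry

audit_log = []
entry = log_inference(
    audit_log, "seg-model-2.1.0", {"study_id": "X123"}, "lesion_detected", 0.94
)
```

Retaining the model version per event is what makes post-market trend analysis possible after model updates.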

5. Transparency and Information to Users (Article 13)

AI-enabled devices must provide clear information to users:

  • Intended purpose and capabilities of the AI system
  • Level of accuracy and performance metrics
  • Known limitations and risks
  • How to interpret outputs
  • How to provide human oversight (see below)
  • Expected lifetime and maintenance requirements

Integration with labeling: Update your Instructions for Use (IFU) to include AI-specific transparency information. Users must understand what the AI does, how confident they should be in its outputs, and when to override its recommendations.

6. Human Oversight (Article 14)

Devices must be designed to allow effective human oversight:

  • Users must be able to understand AI outputs and detect automation bias
  • Users must be able to intervene, override, or stop the system
  • The system must flag outputs that may be incorrect or unreliable
  • Tools must enable interpretation of AI outputs

Design implication: "Black box" AI models that cannot explain their outputs will face significant compliance challenges. Consider explainable AI (XAI) techniques and ensure your user interface supports clinician understanding of AI recommendations.
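One common design pattern for the oversight requirements above is confidence-gated routing: outputs below a threshold are explicitly flagged for clinician review rather than presented as bare answers. A minimal sketch, where the 0.90 threshold and routing labels are illustrative assumptions:

```python
# Sketch: confidence-gated output flagging to support human oversight (Art. 14).
# The 0.90 review threshold is an illustrative assumption, to be justified
# clinically and documented in the technical file.
def route_output(prediction, confidence, review_threshold=0.90):
    """Return the AI output together with an oversight flag.

    Low-confidence outputs are marked for mandatory clinician review so the
    user interface can surface uncertainty instead of hiding it.
    """
    return {
        "prediction": prediction,
        "confidence": confidence,
        "requires_review": confidence < review_threshold,
    }

high = route_output("no_finding", 0.97)
low = route_output("possible_nodule", 0.62)
```

The flag would drive the UI (e.g. visual warnings, forced acknowledgment), which also helps counter automation bias.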

7. Accuracy, Robustness, and Cybersecurity (Article 15)

AI systems must meet documented performance levels and be resilient against:

  • Errors, faults, and inconsistencies in input data
  • Adversarial attacks and attempts to manipulate training data
  • Cybersecurity threats targeting the AI model specifically
  • Model degradation over time (data drift, concept drift)

Integration with MDR cybersecurity: The AI Act's cybersecurity requirements complement the MDR's general safety requirements. For connected medical devices, this means addressing both device-level cybersecurity (MDR) and model-level adversarial robustness (AI Act).
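Data drift, one of the degradation modes listed above, is often monitored with the Population Stability Index (PSI) over binned input features. A minimal sketch; the bin counts are toy data, and the 0.2 alert threshold is a common industry rule of thumb rather than a requirement of the Act:

```python
# Sketch: Population Stability Index (PSI) for input-data drift monitoring
# (Art. 15). The 0.2 threshold is a conventional rule of thumb, not a
# regulatory value.
import math

def psi(expected_counts, observed_counts, eps=1e-6):
    """PSI between a baseline (e.g. validation-time) distribution and live
    inputs, both given as per-bin counts over identical bin edges.

    PSI > 0.2 is conventionally read as significant drift.
    """
    e_total = sum(expected_counts)
    o_total = sum(observed_counts)
    score = 0.0
    for e, o in zip(expected_counts, observed_counts):
        p = max(e / e_total, eps)  # eps guards against empty bins
        q = max(o / o_total, eps)
        score += (p - q) * math.log(p / q)
    return score

baseline = [100, 300, 400, 200]  # e.g. binned intensity histogram at validation
stable   = [105, 290, 410, 195]  # live data resembling the baseline
shifted  = [400, 300, 200, 100]  # live data after a scanner/protocol change
```

A PSI alert would feed the post-market monitoring process and, where needed, trigger revalidation of the model.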

Conformity Assessment Pathways

For Most AI Medical Devices: Leverage MDR/IVDR Assessment

The good news for medical device manufacturers is that if your device already requires Notified Body assessment under MDR or IVDR, the AI Act conformity assessment can generally be integrated into that existing process. You do not need a separate, additional assessment for the AI component in most cases.

The conformity assessment procedure involves:

  1. Quality management system review — Extend your ISO 13485 QMS to cover AI-specific requirements (data governance, bias monitoring, model lifecycle management)
  2. Technical documentation assessment — Notified Body reviews your AI technical documentation as part of the MDR/IVDR technical file review
  3. EU Declaration of Conformity — Issue a single declaration covering both MDR/IVDR and AI Act compliance
  4. CE marking — Affix the CE mark as usual

Self-Assessment (Limited Cases)

For some AI systems that do not require third-party conformity assessment under MDR/IVDR (e.g., certain Class I devices), self-assessment may be possible. However, most AI-enabled medical devices fall into classes requiring NB involvement, making this pathway rare in practice.

Quality Management System Requirements (Article 17)

Your QMS must be extended to cover:

  • Regulatory compliance strategy for AI systems
  • Design, development, and testing procedures for AI
  • Data management procedures (collection, labeling, validation, governance)
  • AI-specific risk management
  • Post-market monitoring for AI performance
  • Incident reporting procedures for AI-related issues
  • Communication with authorities and users about AI behavior
  • Resource management and personnel training on AI

Practical integration: Extend your ISO 13485 QMS with AI-specific procedures. Key additions include a data governance SOP, model lifecycle management procedure, bias monitoring protocol, and AI-specific complaint handling process.

Timeline and Key Deadlines

Date | Milestone | What It Means for MedTech
February 2, 2025 | Prohibited AI practices take effect | AI practices deemed unacceptable are banned
February 2, 2025 | AI literacy obligation begins | Organizations must ensure staff have adequate AI literacy
August 2, 2025 | GPAI rules and governance structures | General-purpose AI model obligations apply; Notified Body applications open
August 2, 2026 | Original high-risk AI obligations date | Originally the date when most high-risk AI obligations apply; now subject to the proposed extension (see below)
December 2, 2027 | Proposed new deadline (standalone high-risk AI) | Digital Omnibus proposal: standalone Annex III high-risk systems move to this date
August 2, 2028 | Proposed new deadline (AI in regulated products) | Digital Omnibus proposal: AI embedded in Annex I products (including medical devices under MDR/IVDR) moves to this date

Critical point — Digital Omnibus developments (as of April 2026): In February 2026, the European Commission proposed a "Digital Omnibus" package that would extend the AI Act's high-risk compliance deadlines. On March 13, 2026, the Council of the EU agreed to push the deadlines: standalone high-risk AI systems (Annex III) would move to December 2, 2027, and high-risk AI embedded in regulated products including medical devices (Annex I) would move to August 2, 2028. The European Parliament's committees voted in support on March 18, 2026. As of April 2026, the final legislation is being negotiated in trilogue. The AI literacy obligation (Article 4) remains unchanged with a compliance deadline of August 2, 2026, regardless of the Omnibus outcome. Manufacturers should treat the original dates as the binding baseline until the Omnibus is formally adopted, while preparing for the likely extended timeline.

What Manufacturers Should Do Now

Immediate Actions (Before Compliance Deadlines)

  1. Conduct an AI compliance gap analysis — Assess current processes against AI Act requirements: data governance, bias assessment, documentation depth, and monitoring processes

  2. Inventory your AI-enabled products — Identify all devices that incorporate AI/ML and map them against the AI Act's high-risk classification criteria

  3. Integrate AI requirements into your QMS — Extend ISO 13485 procedures to cover data governance, model lifecycle management, bias monitoring, and AI-specific risk management

  4. Update technical documentation — Add AI-specific content to your MDR technical files per Annex IV of the AI Act

  5. Engage your Notified Body early — Discuss how AI Act compliance will be integrated into your next MDR/IVDR conformity assessment. NB capacity for AI assessment is limited and will be in high demand

  6. Plan for post-market monitoring — Design monitoring systems that track AI performance metrics (accuracy, drift, bias) as part of your PMS plan
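Action 6 above, tracking AI performance as part of PMS, can be prototyped as a rolling-window monitor that compares AI output against confirmed ground truth. A minimal sketch; the window size and the 0.90 accuracy floor are illustrative assumptions that would in practice come from the performance claims in your technical documentation:

```python
# Sketch: rolling-window accuracy monitor for post-market surveillance.
# Window size and the 0.90 floor are illustrative assumptions, to be set
# from the documented performance claims of the device.
from collections import deque

class PerformanceMonitor:
    """Track agreement between AI output and confirmed ground truth
    (e.g. clinician adjudication) over a sliding window, alerting when
    accuracy drops below the documented claim."""

    def __init__(self, window=100, min_accuracy=0.90):
        self.results = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction, ground_truth):
        self.results.append(prediction == ground_truth)

    def accuracy(self):
        return sum(self.results) / len(self.results) if self.results else None

    def alert(self):
        # Alert only once the window is full, to avoid noise on few samples
        return (len(self.results) == self.results.maxlen
                and self.accuracy() < self.min_accuracy)

monitor = PerformanceMonitor(window=10, min_accuracy=0.90)
for _ in range(8):
    monitor.record("positive", "positive")  # 8 agreements
for _ in range(2):
    monitor.record("positive", "negative")  # 2 disagreements
```

An alert from such a monitor would feed the vigilance and corrective-action processes already required under MDR PMS.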

Resource Planning

The AI Act creates significant new documentation and governance requirements. Budget for:

  • Internal resources: Data governance specialists, AI ethics/quality personnel, expanded QMS documentation
  • External consulting: AI regulatory specialists, data governance auditors, Notified Body fees for expanded assessment scope
  • Tooling: AI monitoring platforms, bias detection tools, logging infrastructure, explainability solutions

Penalties for Non-Compliance

The AI Act includes substantial penalties:

Violation | Maximum Penalty (whichever is higher)
Prohibited AI practices | €35 million or 7% of global annual turnover
High-risk AI system violations | €15 million or 3% of global annual turnover
Supplying incorrect information to authorities | €7.5 million or 1% of global annual turnover
GPAI model violations | €15 million or 3% of global annual turnover

For medical device manufacturers, non-compliance could also result in market withdrawal under MDR enforcement mechanisms, compounding the impact.

Frequently Asked Questions

Does the AI Act apply to my medical device?

If your device incorporates AI/ML and requires Notified Body assessment under MDR (Class IIa and above) or IVDR (Class B and above, or Class A devices requiring Notified Body involvement), it is likely classified as a high-risk AI system under the AI Act.

Do I need a separate conformity assessment for the AI Act?

In most cases, no. If your device already undergoes Notified Body assessment under MDR or IVDR, the AI Act compliance can be integrated into that existing process. You issue a single EU Declaration of Conformity and CE mark.

When do I need to comply?

The original regulation set August 2, 2026 as the date for high-risk AI obligations. However, the 2026 Digital Omnibus package proposes extending this: AI-enabled medical devices (classified under Annex I) would move to August 2, 2028, while standalone high-risk systems would move to December 2, 2027. As of April 2026, these extensions are being finalized through the EU legislative process. The AI literacy obligation (Article 4) remains unchanged and applies from August 2, 2026.

What if my AI model is a "black box"?

The AI Act's transparency and human oversight requirements make fully opaque AI models difficult to comply with. Consider implementing explainable AI (XAI) techniques that allow users to understand why the model made a specific recommendation.

How does this interact with the EU MDR transition timeline?

The AI Act and MDR operate as parallel compliance frameworks. If you are still transitioning a device from the MDD to the MDR, you must also plan for AI Act compliance if the device incorporates AI. The AI Act compliance window, whether the original August 2026 date or the proposed extended deadlines, overlaps with a period of significant MDR transition activity.

What about AI devices already on the market?

AI systems already placed on the EU market before the applicable compliance date benefit from transitional provisions. Under Article 111 of the original regulation, pre-existing high-risk AI systems generally fall within scope only if their design undergoes significant change after the application date; the proposed Digital Omnibus extensions would shift the relevant reference dates accordingly. These transitional provisions apply only to systems that were already placed on the market, not to new devices or significantly modified ones.

Does the FDA have similar requirements?

The FDA has taken a different approach, integrating AI oversight into existing medical device frameworks rather than creating standalone AI legislation. The FDA's focus has been on AI/ML-based Software as a Medical Device (SaMD) through guidance documents, Predetermined Change Control Plans (PCCPs), and the Digital Health Center of Excellence. There is no US equivalent to the EU AI Act's broad regulatory framework at this time.