MedDeviceGuide

EU AI Act + MDR Single Evidence Matrix: How to Build One Combined Technical File Without Duplicating Work

A field-by-field evidence matrix mapping MDR Annex II/III technical documentation, ISO 14971 risk management, PMS/PMCF, cybersecurity, data governance, human oversight, and QMS records to EU AI Act high-risk obligations — for manufacturers who must comply with both frameworks simultaneously.

Ran Chen
Global MedTech Expert | 10× MedTech Global Access
2026-05-05 · 17 min read

What This Article Covers / Does Not Cover

This article covers one operational task: how to build a single, combined evidence matrix that satisfies both EU MDR (Regulation 2017/745) technical documentation requirements (Annex II/III) and EU AI Act (Regulation 2024/1689) high-risk AI system requirements (Articles 8–15, Annex IV) for AI-enabled medical devices — without duplicating effort.

It provides a field-by-field evidence matrix, a RACI table for dual-compliance documentation owners, a decision tree for determining which obligations are shared vs. AI-Act-additional, and a section mapping every AI Act Annex IV paragraph to the nearest MDR equivalent.

This article does not cover AI Act classification basics, MDR classification rules, general AI governance strategy, or the EU IVDR. For a broad overview of the AI Act's impact on medical devices, see EU AI Act Medical Devices Compliance Guide. For cybersecurity-specific premarket evidence, see FDA Cybersecurity Unresolved Anomalies Table. For EU cybersecurity obligations, see EU Cyber Resilience Act NIS2 Medical Devices.


Why a Single Evidence Matrix Matters

Since 19 June 2025, when the MDCG and Joint AI Board published MDCG 2025-6 / AIB 2025-1, the regulatory expectation is clear: manufacturers of Medical Device AI (MDAI) systems must comply with both MDR and the AI Act. Article 11(2) of the AI Act explicitly permits a single technical documentation file that combines MDR Annex II/III content with AI Act Annex IV content.

The key compliance dates:

Date | Obligation
2 February 2025 | AI literacy obligations (Article 4), per Article 113(a)
2 August 2025 | General-purpose AI (GPAI) transparency obligations
2 August 2027 | High-risk AI obligations apply to MDAI (AI Act Article 6(1), per Article 113(c))
Proposed (Digital Omnibus, April 2026) | Could extend the Annex I high-risk deadline to 2 August 2028; trilogue ongoing

Regardless of the Omnibus outcome, Notified Bodies are already incorporating AI Act considerations into MDR conformity assessments from mid-2026 onward. Manufacturers who wait until the final deadline will face NB capacity bottlenecks.


Decision Tree: Shared Obligation vs. AI-Act-Additional

The following decision tree determines whether a given evidence item can be satisfied by existing MDR documentation alone, or requires additional AI Act content:

START: Evidence item required by AI Act Articles 8-15 or Annex IV
│
├─► Does MDR Annex II/III already require substantially the same content?
│    ├─ YES: Is the MDR content scoped to the AI subsystem?
│    │    ├─ YES → SINGLE DOCUMENT: Extend MDR section with AI-Act paragraph reference
│    │    └─ NO → DUAL DOCUMENT: Extract AI-specific sub-section, cross-reference MDR parent
│    └─ NO: Is this a purely AI-Act obligation (e.g., bias monitoring, data governance)?
│         ├─ YES → NEW SECTION: Create AI-Act-specific section, link to QMS/tech file
│         └─ NO → Review MDCG 2025-6 FAQ #12 for integration guidance
│
└─► END: Map to evidence matrix row
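The branching logic above can be sketched as a small routine. This is illustrative only; the function name and the returned labels are this article's shorthand, not terminology from MDCG 2025-6.

```python
def classify_evidence_item(mdr_requires_same_content: bool,
                           mdr_scoped_to_ai_subsystem: bool,
                           purely_ai_act_obligation: bool) -> str:
    """Map one AI Act evidence item (Articles 8-15 / Annex IV) to an
    integration strategy, following the decision tree above."""
    if mdr_requires_same_content:
        if mdr_scoped_to_ai_subsystem:
            return "SINGLE DOCUMENT: extend MDR section with AI-Act paragraph reference"
        return "DUAL DOCUMENT: extract AI-specific sub-section, cross-reference MDR parent"
    if purely_ai_act_obligation:
        return "NEW SECTION: create AI-Act-specific section, link to QMS/tech file"
    return "REVIEW: consult MDCG 2025-6 FAQ #12 for integration guidance"

# Example: data governance (Article 10) has no MDR equivalent and is purely AI-Act.
print(classify_evidence_item(False, False, True))
```

Running every evidence item through a function like this (rather than deciding ad hoc per document) keeps the matrix internally consistent.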


Single Evidence Matrix: MDR + AI Act Field-by-Field

This section is the core of the article. Each row represents one evidence requirement: the "MDR Source" column identifies the MDR provision, the "AI Act Source" column identifies the AI Act provision, and the "Integration Strategy" column tells you how to handle both in a single file.

Part A: Risk Management

# | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner
1 | Risk management file / report (iterative, continuous) | Annex II(4); Annex I GSPR 1-3 | Article 9 (Risk Management System) | Single ISO 14971 file. Add AI-specific hazard categories: data drift, model degradation, adversarial inputs, bias amplification, automation complacency. Document AI-specific risk control measures in same file. | RA / Risk Manager
2 | Risk analysis scope for AI subsystem | MDR does not explicitly require AI-subsystem scoping | Article 9(2)(a) — identify/analyze known/foreseeable risks | Add a section within the ISO 14971 hazard identification that scopes the AI model as a separate subsystem with its own FMEA | ML Engineer + RA
3 | Residual risk acceptability for AI-specific hazards | Annex I GSPR 1-2 | Article 9(5) — residual risk evaluation | Use existing benefit-risk analysis framework. Add AI-specific severity definitions (e.g., erroneous clinical recommendation, delayed alert, demographic bias). | Clinical + RA
4 | Post-market risk monitoring for AI | Annex II(4)(c); Articles 86-87 | Article 9(7) — post-market monitoring for AI | Integrate into existing PMS plan. Add AI-specific monitoring endpoints: model accuracy, false positive/negative rates by demographic subgroup, data drift indicators. | PMS Lead

Part B: Data Governance

# | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner
5 | Data governance practices documentation | MDR does not explicitly require data governance documentation | Article 10 (Data and Data Governance) | New section in tech file. Document: data collection methodology, data provenance, labeling protocols, bias assessment, representativeness analysis, data cleaning procedures, train/validation/test split rationale. | Data Science Lead
6 | Training data bias assessment | General clinical evaluation requirements (Annex XIV) require robust data but not bias analysis per se | Article 10(2)(f) — examine possible biases | New sub-section. Statistical bias audit across protected characteristics. Include: demographic breakdown, geographic distribution, device/acquisition heterogeneity, label noise assessment. | Data Science + Clinical
7 | Data quality assurance (relevance, representativeness, accuracy) | Annex XIV clinical data quality (general) | Article 10(2)(a)-(d) — data governance practices | Extend clinical data quality section. Add AI-specific data quality dimensions: feature completeness, temporal consistency, annotation inter-rater agreement, class balance. | Data Science Lead
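Row 6's statistical bias audit can start from something as simple as accuracy per subgroup with a worst-case gap. The sketch below is minimal and the data is invented for illustration; a real audit would add confidence intervals and clinically meaningful metrics (sensitivity, specificity) per subgroup.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, y_true, y_pred) tuples.
    Returns per-subgroup accuracy and the largest accuracy gap."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    acc = {g: hits[g] / totals[g] for g in totals}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy example: demographic groups "A" and "B" with binary labels/predictions.
data = [("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
        ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1)]
acc, gap = subgroup_accuracy(data)
print(acc)   # {'A': 0.75, 'B': 0.5}
print(gap)   # 0.25
```

The "actual numbers" your NB will expect are outputs like these, computed on the real held-out test set and reported in the data governance section.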

Part C: Technical Documentation

# | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner
8 | General description of AI system | Annex II(1) — general device description | Annex IV(1) — AI system description | Single section. Include: intended purpose, model architecture, version history, hardware/software interaction map, interaction with other AI systems. Cross-label both Annex II and Annex IV paragraph numbers. | RA
9 | Detailed design and development specifications | Annex II(2) — design/development | Annex IV(2) — detailed description of design/development | Single section. Add AI-specific content: training methodology, feature engineering, hyperparameter selection rationale, loss function design, computational resources, validation strategy. | ML Engineer
10 | System architecture and interaction diagram | Annex II(2) — software architecture | Annex IV(1)(b) — interaction with hardware/software | Extend existing software architecture diagram. Add: data flow from input to AI inference to output, trust boundaries, human-in-the-loop vs. autonomous paths, fallback mechanisms. | Systems Architect
11 | Performance metrics and evaluation | Annex II(2)(c) — verification evidence | Annex IV(3) — monitoring, functioning, control | Single section. Include: primary performance metrics, subgroup performance breakdown, calibration curves, AUC/ROC, confusion matrices, comparison against clinical baseline. | ML Engineer + Clinical
12 | Harmonised standards applied | Annex II(4)(e) | Annex IV(7) | Single list. Add AI-specific standards: ISO/IEC 42001, ISO/IEC 23894 (AI risk management), IEC 62304 (software lifecycle). For harmonised standards under MDR, see EU MDR Harmonised Standards. | RA

Part D: Transparency and Human Oversight

# | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner
13 | Information provided to users / deployers | Annex I GSPR 23 (IFU/labeling) | Article 13 (Transparency) | Extend existing IFU. Add: AI system capabilities and limitations, expected accuracy per population subgroup, interpretation guidance, known failure modes, instructions for human override. | Labeling / RA
14 | Human oversight measures | MDR does not explicitly require human oversight documentation | Article 14 (Human Oversight) | New section. Document: human-in-the-loop design, override mechanisms, alert thresholds, user training requirements, minimum competency for operators, time constraints for human review. | UX + RA
15 | Explainability and interpretability provisions | Not explicitly required under MDR | Article 13(3)(b)-(d) | New section. Document: feature importance methods (SHAP, LIME), saliency maps, confidence scores, uncertainty quantification, decision boundary visualization (where feasible). | ML Engineer
16 | Instructions for deployers on use, interpretation, and limitations | IFU requirements (Annex I Chapter III) | Article 13(2)-(3) | Extend IFU. Include AI-specific instructions: input data requirements, output interpretation guidance, conditions under which AI output should not be relied upon, escalation procedures. | Labeling / RA

Part E: Accuracy, Robustness, and Cybersecurity

# | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner
17 | Accuracy specifications and testing | Annex II(2)(c) — design verification | Article 15(1) — accuracy metrics | Single section. Extend verification protocols to include AI-specific accuracy testing: held-out test set performance, subgroup analysis, adversarial robustness testing, temporal stability testing. | V&V Lead
18 | Robustness testing | General safety requirements | Article 15(2) — robustness and cybersecurity | Extend V&V. Add: adversarial input testing, edge-case testing, input perturbation testing, failure mode testing under degraded input quality. | V&V + ML Engineer
19 | Cybersecurity documentation | Not explicitly detailed in MDR (Annex I GSPR 17.2 and 17.4 address software security; see MDCG 2019-16) | Article 15(2)-(3) | Cross-reference existing cybersecurity documentation. For premarket cybersecurity package structure, see FDA Cybersecurity Unresolved Anomalies Table and SBOM-to-VEX Vulnerability Triage Workflow. | Cybersecurity Lead
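Row 18's input perturbation testing has a simple core: perturb each input, re-run inference, and count how often the prediction flips. A sketch under stated assumptions: `predict` is a stand-in threshold classifier, not a real device model, and the noise level and trial count are illustrative.

```python
import random

def predict(x, threshold=0.5):
    """Stand-in for the AI model: a hypothetical threshold classifier."""
    return int(x >= threshold)

def flip_rate(inputs, noise=0.05, trials=100, seed=42):
    """Fraction of perturbed predictions that disagree with the clean
    prediction, a simple input-perturbation robustness metric."""
    rng = random.Random(seed)  # fixed seed so the V&V run is reproducible
    flips, total = 0, 0
    for x in inputs:
        clean = predict(x)
        for _ in range(trials):
            noisy = predict(x + rng.uniform(-noise, noise))
            flips += int(noisy != clean)
            total += 1
    return flips / total

# Inputs near the decision boundary (0.48, 0.52) dominate the flips.
print(flip_rate([0.1, 0.48, 0.52, 0.9]))
```

The same pattern generalizes to image or signal inputs: define a clinically plausible perturbation, fix the seed, and report the flip rate per input region in the V&V report.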

Part F: Post-Market Monitoring and PMS

# | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner
20 | Post-market monitoring plan for AI | Articles 84-86 (PMS plan/report) | Article 72 (Post-market monitoring for AI) | Extend existing PMS plan. Add AI-specific monitoring: model performance drift, data distribution shift, adverse outcome rates by demographic subgroup, complaint categories specific to AI behavior. See Post-Market Surveillance Guide. | PMS Lead
21 | Logging and record-keeping | General record requirements | Article 12 (Record-Keeping) | New section or extend QMS. Specify: automatic logging of AI inference events, input data logs, output confidence scores, human override events, model version in production. Retention period must meet both MDR (minimum 10 years) and AI Act requirements. | IT / Quality
22 | Post-market performance monitoring (drift) | PMCF requirements (Annex XIV Part B) | Article 72(3) — continuous monitoring | Integrate into PMCF plan. Add drift detection endpoints: statistical process control on model outputs, population shift monitoring, feature distribution monitoring. For PMCF survey methods, see PMCF Survey Design. | Clinical + Data Science
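Row 22's drift-detection endpoints often start with the population stability index (PSI) over a binned feature or output distribution. A minimal sketch; the histograms are invented, and the 0.2 alert threshold mentioned in the comment is a common industry rule of thumb, not an AI Act figure.

```python
import math

def population_stability_index(expected, actual):
    """PSI between a baseline (training) histogram and a live (production)
    histogram over the same bins. A common rule of thumb treats
    PSI > 0.2 as significant drift (illustrative threshold only)."""
    e_total, a_total = sum(expected), sum(actual)
    psi = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, 1e-6)   # floor avoids log(0) on empty bins
        a_pct = max(a / a_total, 1e-6)
        psi += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return psi

baseline = [100, 300, 400, 200]   # feature histogram from the training set
live     = [150, 250, 350, 250]   # same feature observed in production
print(round(population_stability_index(baseline, live), 3))  # → 0.047
```

Computed on a schedule (e.g., per PSUR cycle or monthly), PSI per input feature and per output score gives the PMCF plan a concrete, auditable drift endpoint with a predefined retraining trigger.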

Part G: Quality Management System

# | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner
23 | QMS documentation covering AI governance | Article 10(9) — QMS | Article 17 (Quality Management System) | Extend ISO 13485 QMS. Add: AI model governance procedure, data governance SOP, bias monitoring SOP, model retraining/change control procedure, AI risk management integration. See ISO 13485 Implementation Guide. | Quality Director
24 | Corrective actions for AI-related issues | PMS/CAPA obligations | Article 20 (Corrective Actions) | Integrate into existing CAPA process. Add AI-specific root cause categories: data quality issue, model degradation, training-serving skew, edge case not in training distribution. See CAPA Guide. | Quality + ML Engineer
25 | EU Declaration of Conformity (dual) | Annex IV (EU DoC) | Article 47; Annex IV(8) | Single EU DoC that references compliance with both MDR and AI Act. List both regulation numbers. | RA

RACI Table: Dual-Compliance Documentation Owners

Role | MDR Technical File | AI Act Annex IV | Integrated QMS | PMS/PMCF | Data Governance
Regulatory Affairs Lead | R (Responsible) | R | A (Accountable) | R | C (Consulted)
ML / Data Science Lead | C | R | C | C | R
Quality Director | A | A | R | A | A
Clinical Affairs Lead | R | C | C | R | C
Cybersecurity Lead | C | C | C | I (Informed) | C
Systems Architect | C | R | C | I | C
PMS Lead | C | C | C | R | C
Labeling / Technical Writer | R | R | C | C | I
Notified Body Contact | I (Informed) | I | I | I | I

Source-to-Evidence Traceability Table

Use this table in your technical file header to map every AI Act obligation to its evidence location:

AI Act Article | AI Act Obligation | MDR Equivalent | Tech File Section | Document Name | Last Updated
Art. 9 | Risk management system | Annex II(4); GSPR 1-3 | Section 4 | RM-File-[Device]-v[X.X] | [Date]
Art. 10 | Data and data governance | Annex XIV (clinical data) | Section 6.3 (new) | DG-Report-[Device]-v[X.X] | [Date]
Art. 11 | Technical documentation | Annex II/III | Full tech file | TF-[Device]-v[X.X] | [Date]
Art. 12 | Record-keeping | MDR record requirements | QMS Section 8 | Logging-Spec-[Device]-v[X.X] | [Date]
Art. 13 | Transparency | GSPR 23 (IFU) | Section 3 (IFU) + Section 7 (new) | IFU-[Device]-v[X.X] | [Date]
Art. 14 | Human oversight | No direct MDR equivalent | Section 7.2 (new) | HO-Design-[Device]-v[X.X] | [Date]
Art. 15 | Accuracy, robustness, cybersecurity | Annex II(2)(c); GSPR 11-12 | Section 5 (V&V) | VVR-[Device]-v[X.X] | [Date]
Art. 17 | Quality management system | Art. 10(9) (QMS) | QMS | QMS-Manual-v[X.X] | [Date]
Art. 72 | Post-market monitoring | Arts. 84-86 (PMS) | PMS section | PMS-Plan-[Device]-v[X.X] | [Date]
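A traceability table like this one rots quickly unless it is checked mechanically. A few lines in a build or QMS pipeline can flag unmapped obligations; in this sketch, the article list and document names are placeholders mirroring the table, not a prescribed naming scheme.

```python
# Hypothetical traceability map: AI Act provision -> controlled document.
trace = {
    "Art. 9":  "RM-File-Device-v1.0",
    "Art. 10": "DG-Report-Device-v1.0",
    "Art. 11": "TF-Device-v1.0",
    "Art. 12": "Logging-Spec-Device-v1.0",
    "Art. 13": "IFU-Device-v1.0",
    "Art. 14": "HO-Design-Device-v1.0",
    "Art. 15": "VVR-Device-v1.0",
    "Art. 17": "QMS-Manual-v1.0",
    "Art. 72": "PMS-Plan-Device-v1.0",
}

# The obligations the tech file header must cover.
required = ["Art. 9", "Art. 10", "Art. 11", "Art. 12",
            "Art. 13", "Art. 14", "Art. 15", "Art. 17", "Art. 72"]

# Any required article with no (or an empty) document entry is a gap.
missing = [a for a in required if not trace.get(a)]
print("traceability gaps:", missing or "none")
```

Failing the build when `missing` is non-empty turns the traceability table from a static artifact into a living control.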


Common Failure Modes and How to Remediate

# | Failure Mode | Why It Happens | Consequence | Remediation
1 | Duplicate tech files for MDR and AI Act | Teams treat the AI Act as a separate compliance stream | Inconsistent content, wasted effort, NB confusion during audit | Use Article 11(2) to maintain a single file. Use the evidence matrix above to map every AI Act requirement to an MDR section or a new section.
2 | Data governance section is a placeholder | Data science team not integrated into RA documentation workflow | NB deficiency on Annex IV(2)(d); non-conformity | Assign the data science lead as R for the data governance section. Include a statistical bias audit with actual numbers.
3 | Human oversight section copied from IFU | AI Act requires design-level human oversight documentation, not just user instructions | NB will flag as incomplete per Article 14 | Document the technical design: override mechanisms, alert thresholds, minimum decision time, competency requirements. Include usability test evidence.
4 | Bias assessment limited to training data demographics | AI Act Article 10(2)(f) requires examination of biases "that are likely to affect ... health and safety" | NB deficiency; potential fundamental rights concern | Extend the bias analysis to: output disparities across subgroups, geographic bias, device-acquisition bias, temporal bias, severity of consequence of biased output.
5 | PMS plan unchanged from non-AI version | AI-specific post-market monitoring not added | Non-compliance with Article 72; missed drift signals | Add AI-specific PMS endpoints: model performance metrics, subgroup-specific adverse event rates, data drift indicators, retraining triggers.
6 | Logging specification does not cover AI events | IT/QMS teams unaware of Article 12 requirements | Cannot demonstrate record-keeping compliance during NB audit | Define: what is logged (input, output, confidence, model version), retention period, access controls, integrity assurance. Map to ISO 13485 document control.
7 | EU DoC references only MDR | AI Act compliance statement omitted | Invalid CE marking; potential market withdrawal | Update the EU DoC to reference both Regulation (EU) 2017/745 and Regulation (EU) 2024/1689. List applicable Articles.
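Failure mode 6 comes down to defining the logged fields up front. One possible shape for an AI inference event record is sketched below; the field names are this article's assumptions, not fields prescribed by Article 12, and a real specification would also cover retention, access control, and integrity.

```python
import datetime
import json

def log_inference(model_version, input_ref, output, confidence,
                  human_override=False):
    """Serialize one AI inference event as a JSON log record covering the
    fields discussed above: model version, input reference, output,
    confidence, and human override. Illustrative schema only."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,   # pointer to the input data, not the data itself
        "output": output,
        "confidence": confidence,
        "human_override": human_override,
    }
    return json.dumps(record)

entry = log_inference("2.3.1", "study-0001/series-4", "finding: nodule", 0.91)
print(entry)
```

Storing a reference to the input rather than the input itself keeps the log lean and sidesteps duplicating patient data outside its controlled store; the pointer must stay resolvable for the full retention period.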

Pre-Submission Checklist: AI Act + MDR Tech File Readiness

Use this checklist before submitting to your Notified Body:

  • Risk management file: AI-specific hazard categories added; residual risk acceptability includes AI-specific severity definitions
  • Data governance documentation: New section created; includes bias audit, data provenance, representativeness analysis, data cleaning procedures
  • Technical documentation: Single file structure maps both MDR Annex II/III and AI Act Annex IV; cross-references labeled with both regulation paragraph numbers
  • Performance metrics: Subgroup-specific accuracy reported; calibration analysis included; comparison against clinical baseline documented
  • Human oversight: Technical design documented (not just IFU text); override mechanisms specified; usability test evidence included
  • Transparency to deployers: IFU extended with AI capabilities, limitations, interpretation guidance, and failure mode descriptions
  • Logging specification: AI inference events logged; retention period meets both MDR (10-year minimum) and AI Act requirements; integrity controls documented
  • Cybersecurity: AI-specific attack vectors addressed (adversarial inputs, model extraction, data poisoning); cross-reference cybersecurity documentation
  • PMS plan: AI-specific monitoring endpoints added; drift detection methods specified; retraining triggers defined
  • QMS: AI governance SOPs added (data governance, model change control, bias monitoring, retraining); integrated into ISO 13485 structure
  • EU Declaration of Conformity: References both MDR and AI Act; lists applicable Articles and harmonised standards
  • Source traceability table: Every AI Act Article 8-15 obligation mapped to a tech file section with document name and version

Timeline: When to Build This Matrix

Milestone | Action | Owner
Design input phase | Identify AI subsystem boundaries; start risk management file with AI hazards | RA + ML Lead
Design development | Build data governance documentation; document training methodology per Annex IV(2) | Data Science Lead
Design verification | Include AI-specific V&V (subgroup performance, adversarial robustness, edge cases) | V&V Lead
Pre-submission | Complete evidence matrix; run checklist above; cross-reference all AI Act paragraphs | RA Lead
NB submission | Submit single tech file with Annex II/III + Annex IV content; expect AI-specific questions | RA Lead
Post-CE | Activate AI-specific PMS endpoints; begin drift monitoring; log AI inference events | PMS + Data Science


Key Regulatory References

Reference | Description
Regulation (EU) 2024/1689 | EU Artificial Intelligence Act
Regulation (EU) 2017/745 | EU Medical Device Regulation (MDR)
MDCG 2025-6 / AIB 2025-1 | FAQ on interplay between MDR/IVDR and AI Act (June 2025)
AI Act Article 6(1) | Classification as high-risk when an AI system is a safety component of, or is itself, a product subject to third-party conformity assessment
AI Act Article 11(2) | Legal basis for combined MDR + AI Act technical documentation
AI Act Articles 8-15 | Requirements for high-risk AI systems
AI Act Annex IV | Technical documentation content requirements
MDR Annex II | Technical documentation — device description and design information
MDR Annex III | Technical documentation — post-market surveillance
ISO 14971:2019 | Risk management for medical devices
ISO/IEC 42001:2023 | AI management system standard
ISO/IEC 23894:2023 | AI risk management guidance
ISO 13485:2016 | Quality management systems for medical devices