EU AI Act + MDR Single Evidence Matrix: How to Build One Combined Technical File Without Duplicating Work
A field-by-field evidence matrix mapping MDR Annex II/III technical documentation, ISO 14971 risk management, PMS/PMCF, cybersecurity, data governance, human oversight, and QMS records to EU AI Act high-risk obligations — for manufacturers who must comply with both frameworks simultaneously.
This article covers one operational task: how to build a single, combined evidence matrix that satisfies both EU MDR (Regulation 2017/745) technical documentation requirements (Annex II/III) and EU AI Act (Regulation 2024/1689) high-risk AI system requirements (Articles 8–15, Annex IV) for AI-enabled medical devices — without duplicating effort.
It provides a field-by-field evidence matrix, a RACI table for dual-compliance documentation owners, a decision tree for determining which obligations are shared vs. AI-Act-additional, and a section mapping every AI Act Annex IV paragraph to the nearest MDR equivalent.
Since 19 June 2025, when the MDCG and the AI Board jointly published MDCG 2025-6 / AIB 2025-1, the regulatory expectation has been clear: manufacturers of Medical Device AI (MDAI) systems must comply with both the MDR and the AI Act. Article 11(2) of the AI Act explicitly permits a single technical documentation file that combines MDR Annex II/III content with AI Act Annex IV content.
The key compliance dates:
| Date | Obligation |
| --- | --- |
| 2 August 2026 | General application date of the AI Act (AI literacy obligations under Article 4 have applied since 2 February 2025; GPAI transparency obligations since 2 August 2025) |
| 2 August 2027 | High-risk AI obligations apply to MDAI (AI Act Article 6(1), per Article 113(c)) |
| Proposed (Digital Omnibus, April 2026) | Could extend the Annex I high-risk deadline to 2 August 2028; trilogue ongoing |
Regardless of the Omnibus outcome, Notified Bodies are already incorporating AI Act considerations into MDR conformity assessments, and will do so systematically from mid-2026 onward. Manufacturers who wait until the final deadline will face NB capacity bottlenecks.
Decision Tree: Shared Obligation vs. AI-Act-Additional
The following decision tree determines whether a given evidence item can be satisfied by existing MDR documentation alone, or requires additional AI Act content:
START: Evidence item required by AI Act Articles 8-15 or Annex IV
│
├─► Does MDR Annex II/III already require substantially the same content?
│ ├─ YES: Is the MDR content scoped to the AI subsystem?
│ │ ├─ YES → SINGLE DOCUMENT: Extend MDR section with AI-Act paragraph reference
│ │ └─ NO → DUAL DOCUMENT: Extract AI-specific sub-section, cross-reference MDR parent
│ └─ NO: Is this a purely AI-Act obligation (e.g., bias monitoring, data governance)?
│ ├─ YES → NEW SECTION: Create AI-Act-specific section, link to QMS/tech file
│ └─ NO → Review MDCG 2025-6 FAQ #12 for integration guidance
│
└─► END: Map to evidence matrix row
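For teams that track many evidence items, the tree above can be encoded as a reusable helper so every matrix row is classified the same way. A minimal sketch; the enum names and boolean inputs are illustrative, not taken from either regulation.

```python
from enum import Enum

class Strategy(Enum):
    SINGLE_DOCUMENT = "extend MDR section, add AI Act paragraph reference"
    DUAL_DOCUMENT = "extract AI-specific sub-section, cross-reference MDR parent"
    NEW_SECTION = "create AI-Act-specific section, link to QMS/tech file"
    REVIEW_GUIDANCE = "review MDCG 2025-6 FAQ #12 for integration guidance"

def classify(mdr_requires_same_content: bool,
             mdr_scoped_to_ai_subsystem: bool,
             purely_ai_act_obligation: bool) -> Strategy:
    """Apply the decision tree to one evidence item."""
    if mdr_requires_same_content:
        if mdr_scoped_to_ai_subsystem:
            return Strategy.SINGLE_DOCUMENT
        return Strategy.DUAL_DOCUMENT
    if purely_ai_act_obligation:
        return Strategy.NEW_SECTION
    return Strategy.REVIEW_GUIDANCE

# Example: bias monitoring has no MDR equivalent and is purely an AI Act obligation
print(classify(False, False, True))  # Strategy.NEW_SECTION
```

The output of `classify` then maps directly onto the "Integration Strategy" column of the evidence matrix.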
Single Evidence Matrix: MDR + AI Act Field-by-Field
The core of this article. Each row represents one evidence requirement. The "MDR Source" column identifies the MDR provision. The "AI Act Source" column identifies the AI Act provision. The "Integration Strategy" column tells you how to handle it in one file.
Part A: Risk Management and Post-Market Surveillance

| # | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner |
| --- | --- | --- | --- | --- | --- |
| 1 | Risk management file | ISO 14971 (per MDR Annex I, GSPR 3) | Article 9 (Risk Management System) | Single ISO 14971 file. Add AI-specific hazard categories: data drift, model degradation, adversarial inputs, bias amplification, automation complacency. Document AI-specific risk control measures in the same file. | RA / Risk Manager |
| 2 | Risk analysis scope for AI subsystem | MDR does not explicitly require AI-subsystem scoping | Article 9 | | |
| | PMS plan with AI-specific endpoints | PMS requirements (Articles 83-84) | Article 72 | Integrate into existing PMS plan. Add AI-specific monitoring endpoints: model accuracy, false positive/negative rates by demographic subgroup, data drift indicators. | PMS Lead |
Part B: Data Governance
| # | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner |
| --- | --- | --- | --- | --- | --- |
| 5 | Data governance practices documentation | MDR does not explicitly require data governance documentation | Article 10 (Data and Data Governance) | New section in tech file. Document: data collection methodology, data provenance, labeling protocols, bias assessment, representativeness analysis, data cleaning procedures, train/validation/test split rationale. | Data Science Lead |
| 6 | Training data bias assessment | General clinical evaluation requirements (Annex XIV) require robust data but not bias analysis per se | Article 10(2)(f) (examine possible biases) | New sub-section. Statistical bias audit across protected characteristics. Include: demographic breakdown, geographic distribution, device/acquisition heterogeneity, label noise assessment. | Data Science + Clinical |
| 7 | Data quality assurance (relevance, representativeness, accuracy) | Annex XIV clinical data quality (general) | Article 10(2)(a)-(d) (data governance practices) | Extend clinical data quality section. Add AI-specific data quality dimensions: feature completeness, temporal consistency, annotation inter-rater agreement, class balance. | Data Science Lead |
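Row 6's statistical bias audit usually starts with a representativeness check against the intended-use population. A minimal sketch in pure Python; the subgroup labels, reference shares, and the 50% tolerance are illustrative assumptions, not prescribed thresholds.

```python
from collections import Counter

def representativeness_report(subgroups, reference_shares, tolerance=0.5):
    """Compare training-set subgroup shares against a reference population.

    subgroups: iterable of subgroup labels, one per training sample.
    reference_shares: dict label -> expected share in the intended-use population.
    Flags any subgroup whose observed share falls below tolerance * expected share.
    """
    counts = Counter(subgroups)
    total = sum(counts.values())
    flags = {}
    for label, expected in reference_shares.items():
        observed = counts.get(label, 0) / total
        if observed < tolerance * expected:
            flags[label] = (observed, expected)
    return flags

# Illustrative: female patients under-represented vs. intended-use population
data = ["M"] * 900 + ["F"] * 100
print(representativeness_report(data, {"M": 0.5, "F": 0.5}))  # {'F': (0.1, 0.5)}
```

A flagged subgroup feeds directly into the bias sub-section as a documented finding, together with the mitigation chosen (resampling, targeted data collection, or a labeled limitation in the IFU).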
Part C: Technical Documentation
| # | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner |
| --- | --- | --- | --- | --- | --- |
| 8 | General description of AI system | Annex II(1) (general device description) | Annex IV(1) (AI system description) | Single section. Include: intended purpose, model architecture, version history, hardware/software interaction map, interaction with other AI systems. Cross-label both Annex II and Annex IV paragraph numbers. | RA |
| 9 | Detailed design and development specifications | Annex II(2) (design/development) | Annex IV(2) (detailed description of design/development) | Single section. Add AI-specific content: training methodology, feature engineering, hyperparameter selection rationale, loss function design, computational resources, validation strategy. | ML Engineer |
| 10 | System architecture and interaction diagram | Annex II(2) (software architecture) | Annex IV(1)(b) (interaction with hardware/software) | Extend existing software architecture diagram. Add: data flow from input to AI inference to output, trust boundaries, human-in-the-loop vs. autonomous paths, fallback mechanisms. | Systems Architect |
| 11 | Performance metrics and evaluation | Annex II(2)(c) (verification evidence) | Annex IV(3) (monitoring, functioning, control) | Single section. Include: primary performance metrics, subgroup performance breakdown, calibration curves, AUC/ROC, confusion matrices, comparison against clinical baseline. | ML Engineer + Clinical |
| 12 | Harmonised standards applied | Annex II(4)(e) | Annex IV(7) | Single list. Add AI-specific standards: ISO/IEC 42001, ISO/IEC 23894 (AI risk management), IEC 62304 (software lifecycle). For harmonised standards under MDR, see EU MDR Harmonised Standards. | RA |
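Row 11's subgroup performance breakdown can be produced from a flat list of predictions. A minimal sketch assuming (subgroup, y_true, y_pred) triples; the record layout is an illustrative assumption.

```python
def subgroup_metrics(records):
    """Per-subgroup sensitivity/specificity from (subgroup, y_true, y_pred) triples."""
    by_group = {}
    for group, y_true, y_pred in records:
        counts = by_group.setdefault(group, {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
        if y_true:
            counts["tp" if y_pred else "fn"] += 1
        else:
            counts["fp" if y_pred else "tn"] += 1
    report = {}
    for group, c in by_group.items():
        pos, neg = c["tp"] + c["fn"], c["tn"] + c["fp"]
        report[group] = {
            "sensitivity": c["tp"] / pos if pos else None,
            "specificity": c["tn"] / neg if neg else None,
            "n": pos + neg,  # subgroup sample size, needed to judge statistical power
        }
    return report

# Illustrative call with hypothetical subgroups "A" and "B"
report = subgroup_metrics([("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
                           ("A", 0, 0), ("B", 1, 1), ("B", 0, 1)])
```

Reporting `n` per subgroup matters as much as the metrics themselves: a Notified Body will ask whether the subgroup sample is large enough to support the claimed accuracy.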
Part D: Transparency and Human Oversight
| # | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner |
| --- | --- | --- | --- | --- | --- |
| 13 | Information provided to users / deployers | Annex I GSPR 23 (IFU/labeling) | Article 13 (Transparency) | Extend existing IFU. Add: AI system capabilities and limitations, expected accuracy per population subgroup, interpretation guidance, known failure modes, instructions for human override. | Labeling / RA |
| 14 | Human oversight measures | MDR does not explicitly require human oversight documentation | Article 14 (Human Oversight) | New section. Document: human-in-the-loop design, override mechanisms, alert thresholds, user training requirements, minimum competency for operators, time constraints for human review. | |
| 15 | Instructions for deployers on use, interpretation, and limitations | IFU requirements (Annex I Chapter III) | Article 13(2)-(3) | Extend IFU. Include AI-specific instructions: input data requirements, output interpretation guidance, conditions under which AI output should not be relied upon, escalation procedures. | Labeling / RA |
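Article 14 leaves the concrete oversight mechanism to the manufacturer. A minimal sketch of how design-level parameters from row 14 (alert thresholds, review time constraints, release paths) might be expressed as a testable policy object; all names and threshold values are illustrative assumptions, not regulatory requirements.

```python
from dataclasses import dataclass

@dataclass
class OversightPolicy:
    """Design-level human oversight parameters (illustrative placeholder values)."""
    confidence_floor: float = 0.90   # below this, output is withheld pending review
    review_deadline_s: int = 300     # max time an output may wait for human review
    allow_autonomous: bool = False   # whether any output may bypass prior review

def route_output(confidence: float, policy: OversightPolicy) -> str:
    """Decide whether one AI output is released, reviewed, or withheld."""
    if policy.allow_autonomous and confidence >= policy.confidence_floor:
        return "release-with-notification"   # human can still override post hoc
    if confidence >= policy.confidence_floor:
        return "human-review-required"
    return "withhold-pending-review"

policy = OversightPolicy()
print(route_output(0.95, policy))  # human-review-required
```

Keeping the policy as explicit, versioned parameters (rather than prose only) makes the oversight section verifiable in design reviews and usability testing.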
Part E: Accuracy, Robustness, and Cybersecurity
| # | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner |
| --- | --- | --- | --- | --- | --- |
| 17 | Accuracy specifications and testing | Annex II(2)(c) (design verification) | Article 15(1) (accuracy metrics) | Single section. Extend verification protocols to include AI-specific accuracy testing: held-out test set performance, subgroup analysis, adversarial robustness testing, temporal stability testing. | |
| | PMS plan extension (AI-specific monitoring) | | | Extend existing PMS plan. Add AI-specific monitoring: model performance drift, data distribution shift, adverse outcome rates by demographic subgroup, complaint categories specific to AI behavior. See Post-Market Surveillance Guide. | PMS Lead |
| 21 | Logging and record-keeping | General record requirements | Article 12 (Record-Keeping) | New section or extend QMS. Specify: automatic logging of AI inference events, input data logs, output confidence scores, human override events, model version in production. Retention period must meet both MDR (minimum 10 years) and AI Act requirements. | IT / Quality |
| 22 | Post-market performance monitoring (drift) | PMCF requirements (Annex XIV Part B) | Article 72(3) (continuous monitoring) | Integrate into PMCF plan. Add drift detection endpoints: statistical process control on model outputs, population shift monitoring, feature distribution monitoring. For PMCF survey methods, see PMCF Survey Design. | Clinical + Data Science |
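Rows 21-22 call for drift detection endpoints in the PMCF plan. A minimal pure-Python sketch of one common drift statistic, the population stability index (PSI); the binning scheme and the conventional 0.1/0.25 alert bands are industry rules of thumb, not AI Act thresholds.

```python
import math

def population_stability_index(expected, observed, bins=10):
    """PSI between a training-time feature sample and a production sample.

    Common rule of thumb (an assumption, not a regulatory threshold):
    PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 investigate/retrain.
    """
    lo, hi = min(expected), max(expected)
    # Bin edges are fixed from the training-time ("expected") sample
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def shares(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        # floor at a small value to avoid log(0) on empty bins
        return [max(c / n, 1e-4) for c in counts]

    e, o = shares(expected), shares(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

# Identical distributions give PSI of zero
baseline = [i / 100 for i in range(100)]
print(population_stability_index(baseline, baseline))  # 0.0
```

In a PMCF plan, each monitored feature (and the model output score itself) would get a PSI endpoint, a sampling cadence, and a documented action when the investigate threshold is crossed.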
Part G: Quality Management System
| # | Evidence Item | MDR Source | AI Act Source | Integration Strategy | Document Owner |
| --- | --- | --- | --- | --- | --- |
| 23 | QMS documentation covering AI governance | Article 10(9) (QMS) | Article 17 (Quality Management System) | Extend ISO 13485 QMS. Add: AI model governance procedure, data governance SOP, bias monitoring SOP, model retraining/change control procedure, AI risk management integration. See ISO 13485 Implementation Guide. | Quality Director |
| 24 | Corrective actions for AI-related issues | PMS/CAPA obligations | Article 20 (Corrective Actions) | Integrate into existing CAPA process. Add AI-specific root cause categories: data quality issue, model degradation, training-serving skew, edge case not in training distribution. See CAPA Guide. | Quality + ML Engineer |
| 25 | EU Declaration of Conformity (dual) | Annex IV (EU DoC) | Article 47; Annex IV(8) | Single EU DoC that references compliance with both MDR and AI Act. List both regulation numbers. | RA |
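To keep the matrix auditable, some teams maintain it in machine-readable form alongside the tech file. A minimal sketch, assuming a hypothetical `EvidenceRow` schema whose field names mirror the matrix columns above; the completeness check flags rows that cannot yet be defended in an NB audit.

```python
from dataclasses import dataclass

@dataclass
class EvidenceRow:
    """One row of the combined MDR + AI Act evidence matrix (illustrative schema)."""
    item: str
    mdr_source: str              # "" when MDR has no equivalent provision
    ai_act_source: str
    strategy: str                # e.g. "single section", "new section"
    owner: str
    evidence_location: str = ""  # document ID or path in the QMS

def completeness_gaps(matrix):
    """Return (item, missing fields) for every row lacking mandatory content.

    mdr_source is deliberately excluded: an empty MDR cell is a valid state
    for AI-Act-only obligations such as data governance.
    """
    gaps = []
    for row in matrix:
        missing = [f for f in ("ai_act_source", "strategy", "owner", "evidence_location")
                   if not getattr(row, f)]
        if missing:
            gaps.append((row.item, missing))
    return gaps

matrix = [
    EvidenceRow("Data governance practices", "", "Article 10",
                "new section", "Data Science Lead"),
]
print(completeness_gaps(matrix))  # [('Data governance practices', ['evidence_location'])]
```

Run against the full matrix, this turns "is the tech file ready?" into a mechanical check rather than a manual review.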
RACI Table: Dual-Compliance Documentation Owners
| Role | MDR Technical File | AI Act Annex IV | Integrated QMS | PMS/PMCF | Data Governance |
| --- | --- | --- | --- | --- | --- |
| Regulatory Affairs Lead | R (Responsible) | R | A (Accountable) | R | C (Consulted) |
| ML / Data Science Lead | C | R | C | C | R |
| Quality Director | A | A | R | A | A |
| Clinical Affairs Lead | R | C | C | R | C |
| Cybersecurity Lead | C | C | C | I (Informed) | C |
| Systems Architect | C | R | C | I | C |
| PMS Lead | C | C | C | R | C |
| Labeling / Technical Writer | R | R | C | C | I |
| Notified Body Contact | A | A | A | A | A |
Source-to-Evidence Traceability Table
Use this table in your technical file header to map every AI Act obligation to its evidence location (row numbers refer to the evidence matrix above):

| AI Act Provision | Evidence Location |
| --- | --- |
| Article 9 (Risk Management) | Rows 1-2 (ISO 14971 risk management file) |
| Article 10 (Data and Data Governance) | Rows 5-7 (data governance section) |
| Article 12 (Record-Keeping) | Row 21 (logging specification) |
| Article 13 (Transparency) | Rows 13, 15 (extended IFU) |
| Article 14 (Human Oversight) | Row 14 (human oversight section) |
| Article 15 (Accuracy, Robustness, Cybersecurity) | Row 17 (accuracy specifications and testing) |
| Article 17 (Quality Management System) | Row 23 (ISO 13485 QMS extensions) |
| Article 20 (Corrective Actions) | Row 24 (CAPA process) |
| Article 47 (EU Declaration of Conformity) | Row 25 (dual EU DoC) |
| Article 72 (Post-Market Monitoring) | Row 22 (PMCF plan with drift endpoints) |
| Annex IV(1)-(3) | Rows 8-11 (technical documentation sections) |
| Annex IV(7) | Row 12 (harmonised standards list) |
Common Pitfalls

| # | Pitfall | Root Cause | Consequence | Fix |
| --- | --- | --- | --- | --- |
| 1 | Teams treat the AI Act as a separate compliance stream | | Inconsistent content, wasted effort, NB confusion during audit | Use Article 11(2) to maintain a single file. Use the evidence matrix above to map every AI Act requirement to an MDR section or a new section. |
| 2 | Data governance section is a placeholder | Data science team not integrated into the RA documentation workflow | NB deficiency on Annex IV(2)(d); non-conformity | Assign the data science lead as R for the data governance section. Include a statistical bias audit with actual numbers. |
| 3 | Human oversight section copied from the IFU | AI Act requires design-level human oversight documentation, not just user instructions | NB will flag as incomplete per Article 14 | Document the technical design: override mechanisms, alert thresholds, minimum decision time, competency requirements. Include usability test evidence. |
| 4 | Bias assessment limited to training data demographics | AI Act Article 10(2)(f) requires examination of biases "that are likely to affect ... health and safety" | NB deficiency; potential fundamental rights concern | Extend the bias analysis to: output disparities across subgroups, geographic bias, device-acquisition bias, temporal bias, severity of consequence of biased output. |
| 5 | PMS plan unchanged from the non-AI version | AI-specific post-market monitoring not added | Non-compliance with Article 72; missed drift signals | Add AI-specific PMS endpoints: model performance metrics, subgroup-specific adverse event rates, data drift indicators, retraining triggers. |
| 6 | Logging specification does not cover AI events | IT/QMS teams unaware of Article 12 requirements | Cannot demonstrate record-keeping compliance during NB audit | Define: what is logged (input, output, confidence, model version), retention period, access controls, integrity assurance. Map to ISO 13485 document control. |
| 7 | EU DoC references only the MDR | AI Act compliance statement omitted | Invalid CE marking; potential market withdrawal | Update the EU DoC to reference both Regulation (EU) 2017/745 and Regulation (EU) 2024/1689. List the applicable Articles. |
Pre-Submission Checklist: AI Act + MDR Tech File Readiness
Use this checklist before submitting to your Notified Body:
- [ ] Data governance documentation: new section created; includes bias audit, data provenance, representativeness analysis, data cleaning procedures
- [ ] Technical documentation: single file structure maps both MDR Annex II/III and AI Act Annex IV; cross-references labeled with both regulations' paragraph numbers
- [ ] Human oversight: technical design documented (not just IFU text); override mechanisms specified; usability test evidence included
- [ ] Transparency to deployers: IFU extended with AI capabilities, limitations, interpretation guidance, and failure mode descriptions
- [ ] Logging specification: AI inference events logged; retention period meets both MDR (10-year minimum) and AI Act requirements; integrity controls documented
- [ ] Cybersecurity: AI-specific attack vectors addressed (adversarial inputs, model extraction, data poisoning); cybersecurity documentation cross-referenced