MedDeviceGuide

Medical Device Cybersecurity Patch Management: Regulated Update Deployment Under EU MDR, FDA Section 524B, and the Cyber Resilience Act (2026)

How to deploy cybersecurity patches to fielded medical devices while maintaining MDR conformity, FDA Section 524B postmarket obligations, and Cyber Resilience Act vulnerability handling timelines — covering risk-based triage, change control classification, coordinated disclosure integration with PSIRT, and the operational QMS workflow from vulnerability detection to verified field deployment.

Ran Chen
Global MedTech Expert | 10× MedTech Global Access
2026-05-13 · 31 min read

What This Article Covers

This article addresses a single operational problem: how to deploy cybersecurity patches to medical devices already in the field while staying compliant with every applicable regulatory framework. It covers the full patch lifecycle from vulnerability detection through risk triage, change control classification, validation, regulatory assessment, deployment, and post-deployment verification.

The regulatory landscape in 2026 creates overlapping obligations. A manufacturer selling a connected medical device in both the US and EU must simultaneously satisfy FDA Section 524B postmarket cybersecurity requirements, EU MDR change control and vigilance obligations, the EU Cyber Resilience Act vulnerability handling timelines, and any applicable NIS2 requirements. Each framework has different definitions, different timelines, and different documentation expectations. This article maps them together into a single operational workflow.

What This Article Does NOT Cover

This is not a general introduction to medical device cybersecurity, SBOM generation, threat modeling, or premarket submission packaging. For those topics, see the Medical Device Cybersecurity Guide, the SBOM for Medical Devices Guide, and the FDA Cybersecurity Premarket Deficiencies Guide. For the regulatory change control decision tree for FDA submissions specifically, see Special 510(k) for Software Changes.


The Patch Management Problem in 2026

Connected medical devices run complex software stacks: operating systems, middleware, libraries, communication protocols, and application code. Every component has vulnerabilities. The National Vulnerability Database (NVD) cataloged nearly 50,000 new CVEs in 2025. A typical connected infusion pump or imaging system contains 50 to 200 third-party open-source components, each with its own vulnerability stream.

The operational reality is that a manufacturer will receive vulnerability intelligence continuously -- from NVD feeds, component vendor advisories, coordinated vulnerability disclosures from external researchers, and internal code analysis. The regulatory question is never whether to patch. The question is: what regulatory pathway does this specific patch require, how fast must it deploy, and what documentation must accompany it?

Getting this wrong carries real consequences. In the EU, deploying a patch that changes device safety or performance characteristics without proper conformity assessment can invalidate the CE mark. In the US, failing to address known cybersecurity vulnerabilities in a timely manner violates Section 524B's postmarket monitoring requirements. Under the Cyber Resilience Act, which entered into force in December 2024 with reporting obligations phasing in from September 2026 and full requirements applying from December 2027, failing to handle vulnerabilities within mandated timelines exposes the manufacturer to direct regulatory enforcement.


Regulatory Framework Mapping: Four Overlapping Obligations

A manufacturer deploying a cybersecurity patch to a fielded device in 2026 faces at least four regulatory frameworks. Here is what each one demands for postmarket patch management.

FDA Section 524B (United States)

Section 524B of the FD&C Act, effective since March 29, 2023, requires manufacturers of "cyber devices" to:

  1. Monitor for and identify postmarket cybersecurity vulnerabilities on a reasonably justified timeline.
  2. Deploy patches and updates to address vulnerabilities on a reasonably justified timeline, informed by the clinically relevant risk the vulnerability presents.
  3. Maintain processes for coordinated vulnerability disclosure (CVD).
  4. Report corrections and removals under 21 CFR Part 806 when a fix is initiated to reduce a risk to health posed by the device, and file medical device reports under 21 CFR Part 803 when a cybersecurity vulnerability causes or could cause a death or serious injury.

FDA's February 3, 2026 final guidance on cybersecurity in premarket submissions reinforces that the postmarket cybersecurity plan (required in every premarket submission for a cyber device) must describe the manufacturer's processes for vulnerability monitoring, risk assessment, patch development, and deployment. The plan is not aspirational. FDA reviewers assess its adequacy during premarket review, and FDA can issue postmarket information requests to verify compliance.

Critical detail: Section 524B does not require a new 510(k) for every cybersecurity patch. The guidance explicitly acknowledges that many security patches can be deployed under the manufacturer's existing quality system change control process without a new premarket submission. The threshold question is whether the patch "could significantly affect the safety or effectiveness of the device."

EU MDR 2017/745 (European Union)

Under the MDR, a cybersecurity patch is a change to a CE-marked device. The manufacturer must determine:

  1. Does this change affect the device's conformity with the essential safety and performance requirements? If yes, the manufacturer must reassess conformity before placing the modified device on the market.
  2. Does this change require Notified Body involvement? This depends on the nature of the change, the risk classification of the device, and the conditions in the manufacturer's EU type examination certificate or quality system certificate.
  3. Is this change reportable under vigilance obligations? If the vulnerability being patched has already manifested in the field as a cybersecurity incident that caused or could cause death or serious injury, it triggers MDR Article 87 incident reporting obligations.

MDR Annex IX requires the manufacturer to inform the Notified Body of any plan for substantial changes to the quality management system or the device range covered. For Class IIa, IIb, and III devices, the Notified Body must assess significant changes before implementation. Where a change is needed to eliminate an immediate danger, the manufacturer can act through the vigilance system as a field safety corrective action, but this requires prompt notification to the competent authorities and the Notified Body rather than prior approval.

The MDCG 2020-3 guidance on significant changes provides the classification framework. It was written for legacy devices under the MDR transitional provisions, but its flowcharts are widely used by analogy for MDR change classification. For software changes specifically, the software change flowchart lists criteria for when a change is "significant" and requires Notified Body involvement versus when it can be managed under the manufacturer's own quality system.

EU Cyber Resilience Act (Regulation 2024/2847)

The Cyber Resilience Act (CRA) entered into force on December 10, 2024. Its obligations apply in phases:

| Date | Obligation |
| --- | --- |
| December 10, 2024 | Entry into force |
| September 11, 2026 | Reporting of actively exploited vulnerabilities and severe incidents to ENISA/CSIRTs begins (Article 14) |
| December 11, 2027 | All remaining obligations apply, including the Article 13 vulnerability handling and security update requirements |

For medical devices, the CRA applies alongside the MDR. Article 13 requires manufacturers to:

  1. Identify and document vulnerabilities in products with digital elements.
  2. Handle vulnerabilities on a risk-based timeline.
  3. Deliver security updates to address vulnerabilities free of charge, within a timeline commensurate with the risk.
  4. Provide a Software Bill of Materials (SBOM) for each product.
  5. Notify relevant authorities of actively exploited vulnerabilities within 24 hours and provide a remediation plan within 72 hours.

The CRA's 24-hour notification requirement for actively exploited vulnerabilities is significantly faster than any MDR vigilance timeline. This creates a direct operational tension: the CRA demands rapid notification and remediation, while the MDR change control process for Class IIb/III devices requires Notified Body assessment before deploying changes to the field.

NIS2 Directive

For manufacturers classified as essential or important entities under NIS2 (typically larger manufacturers or those operating in critical infrastructure), NIS2 Article 23 requires:

  1. Initial notification of significant incidents to the competent authority or CSIRT within 24 hours.
  2. An intermediate report within 72 hours.
  3. A final report within one month.

A cybersecurity vulnerability that is actively exploited in a medical device may constitute a "significant incident" under NIS2, triggering these reporting timelines. NIS2 applies to the entity (the manufacturer), not the product, so it operates independently of the MDR device classification.



The Patch Classification Decision Tree

Not all cybersecurity patches require the same regulatory pathway. The first operational task is classifying the patch. Here is the decision framework.

Step 1: Vulnerability Risk Triage

When a vulnerability is identified in a component used in your device, the first question is the clinically relevant risk level. Use the CVSS v4.0 base score as the starting point, but adjust based on the specific medical device context.

| CVSS Score Range | Clinical Risk Level | Expected Patch Timeline | Regulatory Pathway |
| --- | --- | --- | --- |
| 9.0 -- 10.0 (Critical) | Immediate patient safety risk if exploited | 24 -- 72 hours for mitigation; full patch in 30 days | Emergency change control; may trigger 806 report (US) and FSCA (EU) |
| 7.0 -- 8.9 (High) | Significant safety concern; exploit feasible | 30 -- 90 days | Accelerated change control; assess for 510(k) need (US) and Notified Body notification (EU) |
| 4.0 -- 6.9 (Medium) | Moderate concern; exploit requires specific conditions | 90 -- 180 days | Standard change control under QMS |
| 0.1 -- 3.9 (Low) | Theoretical concern; exploit unlikely | Next scheduled release | Standard change control under QMS |

The key regulatory principle: the patch timeline is driven by the clinically relevant risk of the vulnerability, not the severity of the fix itself. A one-line code change addressing a critical vulnerability carries more regulatory urgency than a major refactoring that addresses a low-risk issue.
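
The Step 1 triage table can be encoded directly as a first-cut automation, with the clinical-context adjustment left as a manual follow-up. A minimal Python sketch under that assumption (the names `TriageResult` and `triage` are illustrative, not from any standard tooling):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TriageResult:
    risk_level: str
    patch_window_days: Optional[int]  # None = fold into the next scheduled release
    pathway: str

def triage(cvss_base: float) -> TriageResult:
    """First-cut mapping of a CVSS v4.0 base score to the triage tiers above.

    Clinical-context adjustment (exploitability in the deployment environment,
    patient population, compensating controls) still happens manually afterward.
    """
    if not 0.0 < cvss_base <= 10.0:
        raise ValueError(f"CVSS base score out of range: {cvss_base}")
    if cvss_base >= 9.0:
        return TriageResult("Critical", 30, "Emergency change control")
    if cvss_base >= 7.0:
        return TriageResult("High", 90, "Accelerated change control")
    if cvss_base >= 4.0:
        return TriageResult("Medium", 180, "Standard change control under QMS")
    return TriageResult("Low", None, "Standard change control under QMS")
```

The point of encoding the table is consistency and auditability: every triage decision starts from the same documented mapping, and deviations from it are deliberate and recorded.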

Step 2: Change Control Classification

Once the vulnerability risk level is established, classify the change itself.

Category A: Like-for-like patch. The patch replaces a vulnerable library version with a patched version of the same library, with no change to the API surface, behavior, or configuration of the device. This is the most common type of cybersecurity patch. Example: updating OpenSSL from 3.1.4 to 3.1.5 to address a specific CVE.

Category B: Compensating control. The vulnerability cannot be directly patched (no patch available, or patching is technically infeasible), so a compensating control is implemented: a firewall rule, a configuration change, a network segmentation update. The device software itself is not changed, but the deployment environment is.

Category C: Architectural change. The patch requires a change to the device's security architecture: replacing a deprecated cryptographic algorithm, changing an authentication mechanism, modifying the communication protocol. This affects the device's security functionality and potentially its safety characteristics.

Category D: Functional change. The vulnerability remediation requires disabling or modifying a device function. Example: disabling a legacy communication protocol that is the attack vector for a critical vulnerability.

Step 3: Regulatory Pathway Determination

Combine the vulnerability risk level and the change category to determine the regulatory pathway.

| Change Category | US Regulatory Pathway | EU Regulatory Pathway |
| --- | --- | --- |
| Category A (Low/Medium CVSS) | QMS change control; no new 510(k) | QMS change control; no Notified Body notification |
| Category A (High/Critical CVSS) | QMS change control; assess for 806 report; document why no new 510(k) is needed | Accelerated change control; notify Notified Body per MDCG 2020-3; assess for FSCA |
| Category B | QMS change control; update cybersecurity risk analysis; document compensating control rationale | QMS change control; update risk management file; assess for Notified Body notification |
| Category C | QMS change control; likely requires new 510(k) assessment; update cybersecurity risk analysis | Significant change per MDCG 2020-3; Notified Body assessment required for Class IIa and above |
| Category D | QMS change control; new 510(k) likely required; 806 report assessment | Significant change; Notified Body assessment required; FSCA assessment for fielded devices |
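
The two tables combine into a single lookup: change category plus (for Category A) the CVSS severity band. A hedged Python sketch, with the pathway wording abbreviated from the tables above and the function name illustrative:

```python
# (US pathway, EU pathway) keyed by change category; Category A additionally
# splits on the CVSS severity band, mirroring the table above.
PATHWAYS = {
    ("A", "low_medium"): (
        "QMS change control; no new 510(k)",
        "QMS change control; no Notified Body notification"),
    ("A", "high_critical"): (
        "QMS change control; assess for 806 report; document why no new 510(k) is needed",
        "Accelerated change control; notify Notified Body; assess for FSCA"),
    ("B", "any"): (
        "QMS change control; update cybersecurity risk analysis; document compensating control rationale",
        "QMS change control; update risk management file; assess for Notified Body notification"),
    ("C", "any"): (
        "QMS change control; likely new 510(k) assessment; update cybersecurity risk analysis",
        "Significant change; Notified Body assessment for Class IIa and above"),
    ("D", "any"): (
        "QMS change control; new 510(k) likely required; 806 report assessment",
        "Significant change; Notified Body assessment; FSCA assessment for fielded devices"),
}

def regulatory_pathway(category: str, cvss_base: float) -> tuple:
    """Return the (US, EU) pathway for a change category and CVSS base score."""
    if category == "A":
        band = "high_critical" if cvss_base >= 7.0 else "low_medium"
        return PATHWAYS[(category, band)]
    return PATHWAYS[(category, "any")]
```

The lookup result is a starting position, not a conclusion: the documented assessment (510(k) decision memo, MDCG 2020-3 classification rationale) still has to be written for each patch.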

The Operational Workflow: From Vulnerability Detection to Verified Deployment

Here is the complete operational workflow for a cybersecurity patch, integrating all four regulatory frameworks into a single process.

Phase 1: Vulnerability Detection and Intake

Inputs: NVD feed, component vendor advisories, internal vulnerability scans, external researcher reports (via your CVD program), customer reports, threat intelligence feeds.

Actions:

  1. Log every vulnerability in the vulnerability tracking system (part of your PSIRT infrastructure). Assign a unique tracking ID.
  2. Triage against your SBOM to determine which products and versions are affected. If you do not have a complete, accurate SBOM, you cannot perform this step. This is why the CRA and FDA both mandate SBOM maintenance.
  3. Assess whether the vulnerability is actively being exploited. This determination drives the CRA's 24-hour reporting clock.
  4. Initial classification of the vulnerability risk level per the table above.

Regulatory clock starts:

  • CRA: If actively exploited, the 24-hour notification clock starts at detection.
  • NIS2: If the vulnerability constitutes a significant incident, the 24-hour notification clock starts.
  • FDA Section 524B: The "reasonably justified timeline" for deploying a fix begins. Document your justification.
  • MDR: No specific clock for vulnerability detection, but if the vulnerability has caused or could cause a serious incident, the MDR Article 87(1) reporting clock (immediately, but no later than 15 days after awareness) may apply.
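
The clocks above can be computed mechanically once detection is timestamped. A sketch using Python's `datetime` (deadline labels are illustrative; NIS2's "one month" final report is approximated as 30 days, and the open-ended FDA "reasonably justified timeline" is deliberately omitted):

```python
from datetime import datetime, timedelta

def notification_deadlines(detected_at: datetime,
                           actively_exploited: bool,
                           nis2_significant_incident: bool,
                           mdr_serious_incident: bool) -> dict:
    """Hard notification deadlines whose clocks start at detection."""
    deadlines = {}
    if actively_exploited:
        deadlines["CRA initial notification"] = detected_at + timedelta(hours=24)
        deadlines["CRA remediation plan"] = detected_at + timedelta(hours=72)
    if nis2_significant_incident:
        deadlines["NIS2 initial notification"] = detected_at + timedelta(hours=24)
        deadlines["NIS2 intermediate report"] = detected_at + timedelta(hours=72)
        deadlines["NIS2 final report"] = detected_at + timedelta(days=30)  # "one month"
    if mdr_serious_incident:
        # MDR Article 87(1): immediately, but no later than 15 days after awareness
        deadlines["MDR Article 87 report (outer bound)"] = detected_at + timedelta(days=15)
    return deadlines
```

Feeding this from the vulnerability intake record gives the PSIRT a single deadline list per tracking ID instead of four separate regulatory calendars.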

Phase 2: Risk Analysis and Patch Planning

Actions:

  1. Perform a cybersecurity-specific risk analysis using ISO 14971 methodology, informed by the threat model from your premarket submission. The risk analysis must consider:

    • The attack vector and complexity
    • The clinical impact if the vulnerability is exploited (not just the IT security impact)
    • The exploitability in the specific deployment environment
    • Whether existing compensating controls reduce the risk
    • The patient population and clinical context
  2. Determine the patch strategy:

    • Can the vulnerability be patched directly (Category A)?
    • Is a compensating control appropriate as a temporary measure while a patch is developed (Category B)?
    • Is an architectural change required (Category C)?
    • Must a device function be disabled or modified (Category D)?
  3. Classify the regulatory pathway using the decision tree above.

  4. Develop the patch timeline based on the clinically relevant risk level. Document the rationale for the timeline.

Key documentation: Updated cybersecurity risk analysis, patch plan, regulatory pathway assessment.

Phase 3: Patch Development and Verification

Actions:

  1. Develop the patch. For Category A patches, this is typically straightforward: update the component version, rebuild, and test. For Category C and D patches, follow full software development lifecycle per IEC 62304.

  2. Perform regression testing. At minimum:

    • Verify that the patch resolves the specific vulnerability (penetration test or vulnerability scan confirmation)
    • Verify that existing device functions are not adversely affected (regression testing suite)
    • Verify that the patch does not introduce new vulnerabilities (patch-specific security testing)
    • Verify performance in the target deployment environment
  3. Update the risk management file. Document the residual risk after the patch is applied.

  4. Update the SBOM. The patched device has a new SBOM. This must be available to customers and regulators.

  5. Prepare deployment instructions. For fielded devices, document exactly how the patch will be deployed: download mechanism, installation procedure, rollback plan, verification steps.

Key documentation: Updated software design history, test results, updated risk management file, updated SBOM, deployment instructions.
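
For the SBOM update in step 4, one practical check is to diff the pre-patch and post-patch SBOMs and attach the delta to the change record. A sketch assuming CycloneDX-style dictionaries with a `components` list of `{"name", "version"}` entries (the function name is illustrative):

```python
def sbom_delta(old: dict, new: dict) -> dict:
    """Summarize component changes between the pre-patch and post-patch SBOMs."""
    old_c = {c["name"]: c["version"] for c in old.get("components", [])}
    new_c = {c["name"]: c["version"] for c in new.get("components", [])}
    return {
        "added": sorted(set(new_c) - set(old_c)),
        "removed": sorted(set(old_c) - set(new_c)),
        "updated": sorted(n for n in old_c.keys() & new_c.keys()
                          if old_c[n] != new_c[n]),
    }
```

For a Category A patch such as the OpenSSL 3.1.4 to 3.1.5 example, the delta should show exactly one updated component and nothing added or removed; anything else is a signal that the change is broader than its classification.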

Phase 4: Regulatory Assessment and Clearance

This is where the patch management process intersects with the regulatory change control system.

For the US (FDA):

  1. Assess whether a new 510(k) is required. Use the decision criteria from FDA's guidance Deciding When to Submit a 510(k) for a Software Change to an Existing Device. For cybersecurity patches specifically:

    • If the patch modifies the device's intended use, a new 510(k) is required.
    • If the patch changes the device's technology, engineering, or performance characteristics in a way that could significantly affect safety or effectiveness, a new 510(k) is required.
    • If the patch is a pure security fix with no impact on device function or safety (Category A), no new 510(k) is typically required.
    • Document the assessment and the rationale for the decision.
  2. Assess whether an 806 report is required. If the vulnerability being patched has caused or could cause a death or serious injury, or if you are initiating a fix (the patch) without first notifying FDA, an 806 report may be required.

  3. Update the postmarket cybersecurity plan if the patch changes the plan's content (e.g., new monitoring frequency, new compensating controls).

For the EU (MDR):

  1. Classify the change per MDCG 2020-3. Determine whether the patch is a "significant" or "non-significant" change.

    • For Category A patches addressing low/medium CVSS vulnerabilities in Class I devices: non-significant change, manage under QMS.
    • For Category A patches addressing high/critical CVSS vulnerabilities: assess against the MDCG 2020-3 software change criteria. The risk-based nature of the vulnerability may elevate the change classification.
    • For Category C and D patches: almost certainly significant changes requiring Notified Body assessment for Class IIa and above.
  2. Submit to Notified Body if required. Use the Notified Body's change notification process. Typical turnaround for non-urgent changes: 4 to 12 weeks. For emergency changes (critical vulnerability with active exploitation), most Notified Bodies have expedited processes, but you must contact them proactively.

  3. Assess for FSCA. If the vulnerability has manifested in the field and required corrective action (the patch), determine whether a Field Safety Corrective Action (FSCA) is required per MDR Article 89. If so, submit the FSCA notification through the national competent authority before or simultaneously with deploying the patch.

For the EU (CRA):

  1. Submit vulnerability notifications per Article 14 if the vulnerability is actively exploited:

    • Initial notification within 24 hours of becoming aware of the actively exploited vulnerability.
    • Remediation plan within 72 hours.
    • These notifications go to ENISA and the relevant national CSIRT, not to the Notified Body.
  2. Document the vulnerability handling process per Article 13. The CRA requires that your vulnerability handling process is documented, traceable, and risk-based. This overlaps with your PSIRT process.

Phase 5: Deployment

Actions:

  1. Deploy the patch per the deployment plan. For connected devices, this may be an over-the-air (OTA) update. For non-connected devices, this may require a field service visit or customer-initiated download.

  2. Track deployment coverage. Know what percentage of fielded devices have received the patch. FDA expects manufacturers to track this. The CRA requires that updates are made available to all users of the product.

  3. Deploy compensating controls for devices that cannot be immediately patched. For example, if a hospital cannot take an imaging system offline for patching, deploy a network segmentation rule as a temporary compensating control and document the residual risk.

  4. Communicate with customers. Provide clear instructions, timelines, and risk information. For high/critical vulnerabilities, customer communication should happen before or simultaneously with patch availability.

Key documentation: Deployment records, customer communications, deployment coverage reports.
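
The coverage tracking in step 2 reduces to set arithmetic over the installed base. A minimal sketch (names and record shape are illustrative):

```python
def coverage_report(fleet, patched_serials) -> dict:
    """Coverage snapshot: `fleet` is every fielded device serial number;
    `patched_serials` are serials with confirmed patch installation."""
    fleet = set(fleet)
    patched = set(patched_serials) & fleet
    pct = 100.0 * len(patched) / len(fleet) if fleet else 100.0
    return {
        "total": len(fleet),
        "patched": len(patched),
        "unpatched": sorted(fleet - patched),  # the follow-up work list
        "coverage_pct": round(pct, 1),
    }
```

The `unpatched` list is the operationally important output: those are the devices that need compensating controls, escalation to the customer, or a documented residual-risk acceptance.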

Phase 6: Post-Deployment Verification and Closure

Actions:

  1. Verify deployment effectiveness. Confirm that the patch resolves the vulnerability in the field, not just in the lab. Methods include remote vulnerability scanning (for connected devices), customer confirmation, or field service verification.

  2. Update the vulnerability tracking record. Close the tracking ID with documentation of the resolution.

  3. Update the cybersecurity risk analysis to reflect the post-patch risk state.

  4. Archive all documentation in the technical file / design history file.

  5. Assess for PSUR inclusion. If the patch was deployed during a PSUR reporting period, include it in the PSUR as a software change and/or corrective action.


Resolving the Timeline Conflict Between CRA and MDR

The most difficult operational challenge in 2026 is the timeline mismatch between the CRA and the MDR.

The CRA says: Deliver security updates within a risk-based timeline, and for actively exploited vulnerabilities, notify authorities within 24 hours with a remediation plan within 72 hours.

The MDR says: For Class IIa/IIb/III devices, significant changes require Notified Body assessment before the modified device can be placed on the market. Notified Body assessment takes 4 to 12 weeks under normal circumstances.

These timelines are fundamentally incompatible for critical vulnerabilities. Here is how to resolve the conflict operationally.

Strategy 1: Pre-Agreed Patch Protocols with Your Notified Body

Work with your Notified Body during the initial conformity assessment (or at the next surveillance audit) to establish a pre-agreed protocol for cybersecurity patches. The protocol should define:

  1. Categories of patches that the manufacturer can deploy without prior Notified Body approval. Typically: Category A patches (like-for-like library updates) that do not affect device safety or performance.
  2. The notification timeline for the Notified Body after deploying an emergency patch. Typically: within 48 to 72 hours.
  3. The retrospective assessment process for patches deployed under the emergency protocol. The Notified Body reviews the patch documentation at the next surveillance assessment or through a dedicated review.

This approach reflects established Notified Body practice: changes classified as non-significant are implemented under the manufacturer's quality system and reviewed at the next surveillance assessment.

Strategy 2: Compensating Controls as an Interim Measure

When a critical vulnerability requires a Category C or D fix that needs Notified Body assessment, deploy compensating controls immediately to reduce risk while the Notified Body reviews the patch:

  1. Deploy network-level compensating controls (firewall rules, network segmentation, port restrictions) within 24 to 72 hours of vulnerability detection. These do not change the device and do not require Notified Body approval.
  2. Notify the CRA authorities that compensating controls have been deployed and that a permanent software fix is under development with Notified Body assessment.
  3. Submit the software patch to the Notified Body through the change notification process with a request for expedited review.
  4. Deploy the software patch after Notified Body approval and remove compensating controls.

This strategy satisfies the CRA's requirement for timely action, the MDR's requirement for conformity assessment before market placement, and the patient safety imperative of reducing risk as quickly as possible.

Strategy 3: Emergency Corrective Action Through the Vigilance System

When a critical cybersecurity vulnerability presents an immediate danger to patients, the manufacturer can deploy a fix as a field safety corrective action under the MDR vigilance system (Articles 87 to 89), notifying the competent authorities and the Notified Body promptly rather than waiting for prior change assessment. MDCG 2020-3 likewise treats corrective actions assessed and accepted by a competent authority as non-significant changes.

The key limitation: this is intended for genuine emergencies where delay would result in patient harm. It is not a routine pathway for deploying cybersecurity patches. Document the justification carefully, including the specific patient safety risk that warranted emergency action.



Integrating Patch Management into Your QMS

Patch management is not a standalone process. It must be integrated into the quality management system. Here are the specific QMS elements that must address cybersecurity patch management.

Design Control (21 CFR 820.30 / ISO 13485:2016 Clause 7.3)

For Category A patches (like-for-like component updates), the design control requirement is typically satisfied by:

  • Documenting the change in the design change request
  • Performing verification testing (regression + security testing)
  • Updating the design output documents (software version, SBOM)
  • Closing the design change through the design transfer process

For Category C and D patches, full design control activities apply: design input review, design verification, design validation (including clinical evaluation assessment), design review, and design transfer.

With QMSR (effective February 2, 2026), FDA has modernized the design control requirements. The terminology has shifted, but the substance remains: any change to a device's design must follow the established design control process, and the depth of the process must be proportionate to the risk and complexity of the change.

Risk Management (ISO 14971:2019)

Every cybersecurity patch must be reflected in the risk management file. Specifically:

  1. Update the risk analysis to include the vulnerability as a hazardous situation. The sequence is: vulnerability exists --> threat actor exploits vulnerability --> device function is compromised --> patient harm occurs.
  2. Document the risk control measure (the patch) and verify its implementation.
  3. Evaluate residual risk after the patch is applied.
  4. Update the risk/benefit determination if the vulnerability or the patch affects the overall risk/benefit profile of the device.

Corrective and Preventive Action (CAPA)

Not every cybersecurity patch requires a CAPA. A CAPA is appropriate when:

  • The vulnerability was introduced by a failure in the software development or supplier management process.
  • The vulnerability represents a systemic quality issue (e.g., using a known-vulnerable component across multiple products).
  • The vulnerability recurred despite previous corrective actions.

For routine third-party library updates addressing newly discovered CVEs, a CAPA is typically not required unless there is an underlying process failure.

Supplier Management

Most vulnerabilities originate in third-party components. Your supplier management process must include:

  1. Component selection criteria that include security maturity assessment.
  2. Ongoing monitoring of component vulnerability status (automated SBOM monitoring tools).
  3. Contractual requirements for component vendors to provide timely security advisories and patches. The CRA reinforces this: manufacturers must exercise due diligence when integrating third-party components, which in practice means contractual commitments on vulnerability handling and security support.

Document Control and Traceability

Every patch must be traceable through the QMS. The traceability chain must connect:

  • The vulnerability (CVE ID, internal tracking ID)
  • The risk analysis (risk ID)
  • The change control record (change request ID)
  • The verification and validation records (test protocol and report IDs)
  • The regulatory assessment (510(k) decision memo, Notified Body notification reference)
  • The deployment record (device serial numbers or lot numbers, deployment dates)
  • The post-deployment verification record

This level of traceability is required by both FDA (21 CFR 820, and QMSR from February 2026) and MDR (Annex II technical documentation requirements). It is also the primary evidence that auditors will request during a Notified Body surveillance audit or FDA inspection.
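
One way to make this chain enforceable rather than aspirational is to model it as a record that cannot close until every link is populated. A hedged sketch (field names are illustrative, not drawn from any regulation or standard):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PatchTraceRecord:
    """One record per patch, linking every artifact an auditor will ask for."""
    cve_id: str                   # public vulnerability identifier
    tracking_id: str              # internal PSIRT tracking ID
    risk_ids: list = field(default_factory=list)         # risk management file entries
    change_request_id: Optional[str] = None              # QMS change control record
    test_report_ids: list = field(default_factory=list)  # V&V protocol/report IDs
    regulatory_refs: dict = field(default_factory=dict)  # 510(k) memo, NB notification ref
    deployment_lot_ids: list = field(default_factory=list)
    post_deployment_verification_id: Optional[str] = None

    def open_gaps(self) -> list:
        """Links still missing before the record can be closed."""
        gaps = []
        if not self.risk_ids: gaps.append("risk analysis")
        if not self.change_request_id: gaps.append("change control record")
        if not self.test_report_ids: gaps.append("verification records")
        if not self.deployment_lot_ids: gaps.append("deployment record")
        if not self.post_deployment_verification_id: gaps.append("post-deployment verification")
        return gaps
```

Gating closure of the vulnerability tracking ID on an empty `open_gaps()` list turns the traceability requirement into a mechanical check instead of an audit-time scramble.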


The SBOM as the Operational Backbone

The SBOM is not just a premarket submission artifact. It is the operational backbone of your patch management process.

Maintaining a Living SBOM

Your SBOM must be a living document that is updated every time a component is added, removed, or updated in your device software. This means:

  1. Generate the SBOM automatically as part of your build process. Use tools like Syft, CycloneDX plugins for your build system, or vendor-provided SBOM generation tools.
  2. Store the SBOM in a central repository linked to the specific software version and hardware configuration.
  3. Monitor the SBOM against vulnerability databases continuously. Automated tools like Grype, Trivy, or commercial SBOM monitoring platforms can cross-reference your SBOM components against NVD and other vulnerability sources in near-real-time.
  4. Version the SBOM with every software release, including patches. Each SBOM version must correspond to a specific software version that is traceable to a specific device configuration.
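
The continuous matching in step 3 is what Grype and Trivy perform against NVD; at its core it is a join between SBOM components and advisories. A deliberately simplified sketch with a hand-fed advisory list (real matching also needs version-range logic and CPE/purl identifier handling; the CVE ID in the comments is a placeholder):

```python
def match_advisories(sbom: dict, advisories: list) -> list:
    """Return the advisories that hit a component in this SBOM.

    sbom: CycloneDX-style dict with a 'components' list of {'name', 'version'}.
    advisories: list of {'cve', 'component', 'affected_versions'} dicts.
    """
    installed = {(c["name"], c["version"]) for c in sbom.get("components", [])}
    hits = []
    for adv in advisories:
        for name, version in installed:
            if name == adv["component"] and version in adv["affected_versions"]:
                hits.append({"cve": adv["cve"], "component": name, "version": version})
    return hits
```

Each hit from this join is what opens a tracking ID in Phase 1 and starts the triage process; without an accurate, versioned SBOM the join simply cannot be computed.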

SBOM Formats and Interoperability

FDA accepts SBOMs in SPDX, CycloneDX, or SWID formats. The CRA does not specify a format but requires that the SBOM is "sufficiently detailed" to allow vulnerability matching. CycloneDX is the most widely adopted format in the medical device industry because it has specific extensions for medical device metadata and vulnerability tracking.

Distributing the SBOM

Under the CRA, manufacturers must draw up an SBOM as part of the technical documentation and make it available to market surveillance authorities on request. Under FDA guidance, the SBOM must be included in the premarket submission. Operationally, maintain three distribution channels:

  1. Regulatory submissions: SBOM included in 510(k)/De Novo/PMA submissions and MDR technical documentation.
  2. Customer distribution: SBOM provided to device purchasers (hospitals, health systems), increasingly a procurement requirement.
  3. Internal operations: SBOM used by your PSIRT and patch management teams for vulnerability matching.

Special Scenarios

Multi-Version Deployments

Medical devices often have multiple software versions in the field simultaneously. A vulnerability may affect some versions but not others. Your patch management process must:

  1. Map the vulnerability to specific affected versions using the SBOM.
  2. Determine the patch strategy for each affected version. In some cases, the patch can be applied to all affected versions. In others, older versions may require a different approach (e.g., end-of-life components that cannot be patched).
  3. Prioritize patching based on the installed base size and clinical risk of each version.
  4. Document version-specific risk acceptance for versions that cannot be patched, with compensating controls.
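
Steps 1 and 3 above can be combined: use each version's SBOM to find the affected software versions, then rank them by installed base. A sketch under the same CycloneDX-style assumptions as earlier (names and record shapes illustrative):

```python
def patch_priority(version_sboms: dict, installed_base: dict,
                   component: str, affected_versions: list) -> list:
    """version_sboms: {software_version: CycloneDX-style SBOM dict};
    installed_base: {software_version: fielded device count}.
    Returns affected software versions, largest installed base first."""
    hits = [
        sw for sw, sbom in version_sboms.items()
        if any(c["name"] == component and c["version"] in affected_versions
               for c in sbom.get("components", []))
    ]
    return sorted(hits, key=lambda sw: -installed_base.get(sw, 0))
```

Versions that appear in the ranked list but cannot be patched (end-of-life components, unreachable devices) are the ones that need the documented risk acceptance and compensating controls from step 4.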

End-of-Life Components

When a vulnerable component has reached end-of-life and no patch is available from the vendor:

  1. Assess the feasibility of replacing the component with an actively maintained alternative. This is a Category C change requiring full design control and likely Notified Body assessment.
  2. If replacement is not feasible in the short term, implement compensating controls and document the residual risk in the risk management file.
  3. Include the component in your ongoing vulnerability monitoring with heightened scrutiny.
  4. Plan a longer-term remediation (component replacement or device retirement) and track it as a corrective action.

Cloud-Connected and AI-Enabled Devices

For devices that connect to cloud services or incorporate AI/ML models, patch management extends beyond the device firmware:

  1. Cloud service patches can typically be deployed more rapidly because they do not require field device updates. However, changes to the cloud service that affect device behavior may still require regulatory assessment.
  2. AI/ML model updates have their own regulatory pathway (FDA's predetermined change control plan, or PCCP). A cybersecurity vulnerability in an AI model (e.g., adversarial attack susceptibility) is both a cybersecurity issue and an AI performance issue, requiring coordination between the cybersecurity and AI/ML teams.
  3. OTA update mechanisms are themselves attack surfaces. The patch delivery mechanism must be secure (signed updates, secure boot chain, rollback protection). FDA's cybersecurity guidance requires that the update mechanism itself be described and secured.
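The update-acceptance logic in point 3 can be sketched as below. This is deliberately simplified: real devices verify asymmetric signatures against a key anchored in the secure boot chain, whereas the HMAC here is only a stand-in to keep the example self-contained. The key and image bytes are invented for illustration.

```python
# Simplified sketch of secure update acceptance: signature check plus
# a monotonic anti-rollback version check. HMAC with a shared key is a
# stand-in for the asymmetric signature a real bootloader would verify.
import hashlib
import hmac

DEVICE_KEY = b"shared-secret-for-illustration-only"

def sign(image: bytes, version: int) -> bytes:
    # Sign version || image so the version number cannot be tampered with.
    msg = version.to_bytes(4, "big") + image
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def accept_update(image: bytes, version: int, sig: bytes,
                  installed_version: int) -> bool:
    ok = hmac.compare_digest(sign(image, version), sig)
    # Rollback protection: refuse any image not newer than the installed one.
    return ok and version > installed_version

img = b"\x7fELFpatched-firmware"
good = accept_update(img, 12, sign(img, 12), installed_version=11)      # True
rollback = accept_update(img, 10, sign(img, 10), installed_version=11)  # False
```

Binding the version number into the signed payload is the important detail: without it, an attacker could replay a validly signed but older, vulnerable image.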

Vulnerability Disclosure from External Researchers

When an external security researcher reports a vulnerability through your CVD program:

  1. Acknowledge receipt within 48 hours. This is a common industry expectation and is codified in the CRA.
  2. Begin the vulnerability intake and triage process (Phase 1 above).
  3. Negotiate a coordinated disclosure timeline with the researcher. Typical timelines: 90 days for high/critical vulnerabilities, 180 days for medium/low. The CRA's timelines may require faster action.
  4. Deploy the patch before the public disclosure date. If this is not possible, request an extension from the researcher and deploy compensating controls.
  5. Credit the researcher in your security advisory (with their consent).
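The timeline arithmetic in steps 1 and 3 is easy to get wrong under pressure, so it is worth automating at intake. The sketch below uses the 48-hour acknowledgment and 90/180-day disclosure windows from the text; the severity labels and the actively-exploited flag triggering the CRA early-warning clock are a simplified model of the triage described above.

```python
# Sketch of disclosure-deadline tracking at CVD intake, using the
# 48-hour acknowledgment and 90/180-day timelines from the text.
from datetime import datetime, timedelta

DISCLOSURE_DAYS = {"critical": 90, "high": 90, "medium": 180, "low": 180}

def cvd_deadlines(reported: datetime, severity: str,
                  actively_exploited: bool) -> dict:
    deadlines = {
        "acknowledge_by": reported + timedelta(hours=48),
        "public_disclosure": reported + timedelta(days=DISCLOSURE_DAYS[severity]),
    }
    if actively_exploited:
        # CRA early-warning notification clock starts immediately (24 h).
        deadlines["cra_early_warning_by"] = reported + timedelta(hours=24)
    return deadlines

t0 = datetime(2026, 5, 13, 9, 0)
plan = cvd_deadlines(t0, "critical", actively_exploited=True)
# plan["cra_early_warning_by"] is 24 h after intake, well before the
# 90-day coordinated disclosure date.
```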

For a detailed guide on setting up and operating a CVD program and PSIRT, see the Coordinated Vulnerability Disclosure and PSIRT Guide.



Common Failures and How to Avoid Them

Based on FDA warning letters, Notified Body nonconformities, and field safety corrective action analyses, these are the most common patch management failures:

Failure 1: Patching Without Updating the Risk Management File

The patch is deployed, but the risk management file is not updated to reflect the new risk control measure and the revised residual risk. This creates a disconnect between the actual device state and the documented risk state. Notified Bodies frequently cite this during surveillance audits.

Fix: Make updating the risk management file a mandatory gate in the patch deployment process. No patch is deployed until the risk analysis has been updated and approved.

Failure 2: Deploying a Patch That Requires Notified Body Approval Without Notified Body Notification

The manufacturer classifies a patch as non-significant and deploys it without notifying the Notified Body. The Notified Body later determines it was a significant change. This can result in a nonconformity, suspension of the quality system certificate, or withdrawal of the CE mark.

Fix: When in doubt, notify. The cost of an unnecessary notification is far lower than the cost of a nonconformity. Use the pre-agreed patch protocol (Strategy 1 above) to establish clear boundaries with your Notified Body.

Failure 3: Ignoring the CRA Timeline

The manufacturer is focused on MDR and FDA obligations and misses the CRA's 24-hour notification requirement for actively exploited vulnerabilities. The CRA has its own enforcement mechanisms separate from the MDR, including fines of up to EUR 15 million or 2.5% of global annual turnover.

Fix: Include CRA timeline monitoring in your PSIRT process. When triaging a vulnerability, the first question after "is it actively exploited?" should be "does the CRA reporting clock start now?"

Failure 4: Incomplete SBOM Leading to Incomplete Vulnerability Coverage

The manufacturer's SBOM is incomplete or outdated, so a vulnerability affecting a component in the device is not detected during routine monitoring. The vulnerability is later publicly disclosed and exploited.

Fix: Automate SBOM generation as part of the build process. Validate SBOM completeness periodically (quarterly at minimum, and after every release). Use dependency scanning tools that can detect transitive dependencies that may not appear in a manually maintained SBOM.
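The completeness check reduces to a set diff between the SBOM and the dependency set resolved from the actual build. The sketch below is illustrative: the component names are invented, and a real check would parse the SBOM document and the build system's lockfile or resolved dependency graph.

```python
# Sketch: validate SBOM completeness by diffing it against the
# dependency set resolved from the actual build. Names are invented;
# a real check parses the SBOM file and the build's lockfile.
def sbom_gaps(sbom_components: set[str], build_deps: set[str]) -> dict:
    return {
        # In the build but absent from the SBOM: a coverage gap to fix.
        "missing_from_sbom": sorted(build_deps - sbom_components),
        # In the SBOM but no longer in the build: stale entries.
        "stale_in_sbom": sorted(sbom_components - build_deps),
        "coverage": len(build_deps & sbom_components) / len(build_deps),
    }

sbom = {"openssl", "zlib", "sqlite"}
lockfile = {"openssl", "zlib", "sqlite", "libcurl"}  # libcurl is transitive
report = sbom_gaps(sbom, lockfile)
# report flags libcurl as missing and coverage as 0.75 (3 of 4 deps)
```

Any nonzero `missing_from_sbom` list is release-blocking: a component invisible to the SBOM is invisible to vulnerability monitoring, which is exactly the failure mode described above.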

Failure 5: No Deployment Verification

The patch is released, but the manufacturer does not verify that it was actually deployed to fielded devices. FDA has cited manufacturers for failing to track and verify postmarket cybersecurity patch deployment.

Fix: Implement deployment tracking mechanisms. For connected devices, use telemetry to confirm patch installation. For non-connected devices, track customer acknowledgment and field service completion. Report deployment coverage to management review.
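Both confirmation paths can feed one coverage number. In this hypothetical sketch, the record fields (`telemetry_patches`, `service_confirmed`) are invented: connected devices confirm via telemetry and non-connected devices via field service or customer acknowledgment records.

```python
# Sketch of deployment-coverage tracking across connected and
# non-connected devices; the record field names are hypothetical.
def coverage(fleet: list[dict], patch_id: str) -> float:
    """Fraction of the fleet with a confirmed installation of patch_id."""
    confirmed = sum(
        1 for dev in fleet
        if patch_id in dev.get("telemetry_patches", [])  # OTA telemetry
        or patch_id in dev.get("service_confirmed", [])  # field service record
    )
    return confirmed / len(fleet)

fleet = [
    {"id": "A1", "telemetry_patches": ["PATCH-7"]},
    {"id": "A2", "service_confirmed": ["PATCH-7"]},
    {"id": "A3"},  # no confirmation yet: follow up with the customer
]
print(f"PATCH-7 coverage: {coverage(fleet, 'PATCH-7'):.0%}")  # prints 67%
```

Device A3 is the actionable output: unconfirmed devices, not the coverage percentage itself, are what the follow-up list for management review should contain.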

Failure 6: No Rollback Plan

A patch is deployed that causes an unexpected issue (e.g., compatibility problem with a hospital IT system), and there is no mechanism to roll back to the previous version. The device is non-functional until a new patch is developed.

Fix: Every patch deployment plan must include a rollback procedure. Test the rollback procedure during verification. For OTA updates, implement a secure rollback mechanism in the bootloader.


Building a Patch Management SOP: Template Structure

For manufacturers who need to establish or improve their patch management process, here is a recommended SOP structure:

  1. Purpose and Scope: Define that this procedure covers cybersecurity vulnerability identification, assessment, patching, and deployment for all marketed devices containing software.
  2. Definitions: CVSS, CVE, SBOM, PSIRT, CVD, compensating control, significant change, like-for-like update, OTA, CRA, etc.
  3. Roles and Responsibilities: Define who owns the PSIRT, who performs vulnerability triage, who approves patches, who manages regulatory assessment, who manages deployment, who verifies deployment.
  4. Vulnerability Intake: Sources, logging, initial triage criteria.
  5. Risk Assessment: Methodology for clinical risk assessment of cybersecurity vulnerabilities, linkage to ISO 14971 risk management process.
  6. Patch Classification: The Category A/B/C/D framework, decision criteria, documentation requirements for each category.
  7. Regulatory Pathway Assessment: FDA 510(k) decision process, MDR Notified Body notification requirements, CRA timeline compliance, NIS2 incident reporting.
  8. Patch Development and Verification: Development requirements per IEC 62304 (scaled by category), testing requirements, SBOM update.
  9. Deployment: Deployment mechanisms, customer communication, deployment tracking, rollback procedures.
  10. Post-Deployment: Verification, documentation closure, PSUR inclusion, management review reporting.
  11. Emergency Procedures: Expedited process for critical vulnerabilities with active exploitation.
  12. Metrics and Reporting: Key performance indicators (mean time to detect, mean time to patch, deployment coverage percentage, number of open vulnerabilities by risk level).

Key Metrics to Track

Regulators and auditors increasingly expect manufacturers to demonstrate that their patch management process is effective, not just documented. Track and report these metrics:

| Metric | Target | Regulatory Driver |
| --- | --- | --- |
| Mean Time to Detect (MTTD) for vulnerabilities in device components | Less than 48 hours from CVE publication | FDA Section 524B, CRA Article 13 |
| Mean Time to Patch (MTTP) for critical/high vulnerabilities | Less than 30 days for critical, less than 90 days for high | FDA Section 524B, CRA Article 13 |
| Deployment coverage for critical patches | Greater than 95% within 30 days of patch availability | FDA Section 524B |
| SBOM accuracy rate | Greater than 99% component coverage vs. actual build | CRA Article 13, FDA guidance |
| CRA notification compliance | 100% on-time notifications | CRA Article 14 |
| Percentage of patches requiring new 510(k) | Track trend; unexpected spikes indicate classification issues | FDA |
| Percentage of patches requiring Notified Body approval | Track trend | MDR |
| Number of open vulnerabilities by risk level | Dashboard for management review | All frameworks |
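The MTTD and MTTP figures fall out of three dates per vulnerability record: CVE publication, internal detection, and verified deployment. A minimal computation, with a hypothetical record schema and invented dates, might look like this:

```python
# Sketch computing MTTD/MTTP from vulnerability records; the record
# schema is hypothetical and the dates are invented for illustration.
from datetime import date
from statistics import mean

RECORDS = [
    {"severity": "critical",
     "published": date(2026, 1, 5), "detected": date(2026, 1, 6),
     "patched": date(2026, 1, 28)},
    {"severity": "high",
     "published": date(2026, 2, 1), "detected": date(2026, 2, 3),
     "patched": date(2026, 3, 20)},
]

def mttd_days(records) -> float:
    """Mean days from CVE publication to internal detection."""
    return mean((r["detected"] - r["published"]).days for r in records)

def mttp_days(records, severity) -> float:
    """Mean days from detection to verified patch, for one severity."""
    sel = [r for r in records if r["severity"] == severity]
    return mean((r["patched"] - r["detected"]).days for r in sel)

print(mttd_days(RECORDS))              # 1.5 days (target: < 2)
print(mttp_days(RECORDS, "critical"))  # 22 days (target: < 30)
```

Measuring MTTP to verified deployment, rather than to patch release, keeps the metric aligned with the deployment-coverage obligation above.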


The Path Forward

Cybersecurity patch management for medical devices in 2026 requires operating within four overlapping regulatory frameworks simultaneously. The operational key is a single, risk-based process that satisfies the most stringent requirement in each area:

  • For timelines: The CRA's 24-hour notification clock is the fastest. Design your process to meet it.
  • For change control: The MDR's Notified Body assessment requirement is the most restrictive. Use pre-agreed protocols and compensating controls to manage the timeline conflict.
  • For documentation: Both FDA and MDR require full traceability from vulnerability detection through patch deployment and verification. Design your documentation system to support this.
  • For risk management: ISO 14971 provides the methodology. Apply it consistently to cybersecurity vulnerabilities, treating them as hazardous situations with specific threat actors as causes.

The manufacturers who handle this well are not those with the largest cybersecurity teams. They are the ones who have integrated patch management into their existing QMS processes, established clear protocols with their Notified Bodies, automated their SBOM management, and built a vulnerability triage process that can classify a vulnerability's regulatory pathway within hours of detection.

