MedDeviceGuide

IEC 62304 Medical Device Software Lifecycle: The Complete Implementation Guide

How to implement IEC 62304 for medical device software development — safety classification, lifecycle processes, SOUP management, documentation requirements, and practical tips for agile teams.

Ran Chen
2026-03-17 · Updated 2026-03-24 · 79 min read

What Is IEC 62304?

IEC 62304:2006+AMD1:2015 is the international standard that defines the lifecycle requirements for medical device software. It specifies the processes, activities, and tasks that a software development organization must follow when building software that is itself a medical device or is embedded in a medical device. Published by the International Electrotechnical Commission (IEC) jointly with ISO (as ISO/IEC 62304), it is the single most referenced standard when regulators, Notified Bodies, and auditors evaluate whether your software development process is adequate.

The standard does not tell you how to write code. It does not prescribe a programming language, an architecture pattern, or a development methodology. What it does is define a framework of lifecycle processes — planning, requirements, design, implementation, testing, release, maintenance, and problem resolution — and then scale the rigor of those processes based on how much harm the software could cause if it fails. That scaling mechanism, called software safety classification, is the conceptual backbone of the entire standard.

If you are building software that runs on, in, or alongside a medical device — whether it is firmware for an infusion pump, a cloud-based diagnostic algorithm, a mobile app that controls a therapeutic device, or standalone SaMD (Software as a Medical Device) — IEC 62304 applies to you. There is no shortcut around it.

Why IEC 62304 Matters

IEC 62304 is recognized or required by virtually every major medical device regulatory framework:

| Regulatory Framework | How IEC 62304 Is Used |
| --- | --- |
| EU MDR (2017/745) | Harmonized standard. Conformity with IEC 62304 creates a presumption of conformity with the MDR's General Safety and Performance Requirements (GSPR) related to software lifecycle |
| FDA (United States) | Recognized consensus standard. FDA expects software lifecycle processes consistent with IEC 62304; referenced in FDA software guidance documents |
| Health Canada | Recognized standard under MDSAP. Expected for software-containing devices |
| PMDA (Japan) | Referenced in Japanese medical device regulations for software lifecycle |
| TGA (Australia) | Recognized via MDSAP participation |
| NMPA (China) | National equivalent standards exist; IEC 62304 principles are reflected in Chinese guidance |

In the EU specifically, IEC 62304 has the status of a harmonized standard under the MDR. This means that if your software development process conforms to IEC 62304, you benefit from a "presumption of conformity" with the relevant MDR requirements. Notified Bodies audit against this standard directly. If you deviate from it, you bear the burden of proving that your alternative approach provides an equivalent level of safety.

For FDA-regulated products, IEC 62304 is listed as a recognized consensus standard. While the FDA does not mandate compliance with any specific standard, FDA reviewers and investigators expect to see software lifecycle processes that are consistent with IEC 62304. The FDA's own guidance documents — including General Principles of Software Validation (2002), Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (2005), and more recent premarket guidance — align closely with IEC 62304's process framework.

Scope and Applicability

IEC 62304 applies to:

  • Software that is a medical device (standalone software, SaMD)
  • Software that is embedded in a medical device (firmware, operating system components, application software within a device)
  • Software used in the manufacture or maintenance of a medical device — but only to the extent the manufacturer determines it affects device safety

The standard explicitly does not apply to:

  • Software used in the business operations of the manufacturer (ERP, CRM) unless it directly affects device quality or safety
  • Validation of software used as a production tool (covered by other standards and regulations such as 21 CFR 820 / QMSR)

Important clarification from Amendment 1:2015: The original 2006 edition was sometimes interpreted as applying only to software that was already classified as a medical device. Amendment 1 clarified that the standard applies to software that is part of a medical device system, regardless of whether the software component itself is independently classified as a medical device. This closed a loophole that some manufacturers had used to argue that embedded software modules did not need to follow IEC 62304 processes.

Software Safety Classification

The software safety classification system is the single most consequential decision you will make under IEC 62304. It determines how much documentation you need, how detailed your design must be, how rigorous your testing must be, and how much process overhead your team will carry. Get the classification wrong — especially if you under-classify — and you will face audit findings, regulatory delays, and potentially a complete rework of your development artifacts.

The Three Classes

IEC 62304 defines three software safety classes:

| Class | Definition | Consequence of Failure | Example |
| --- | --- | --- | --- |
| Class A | No injury or damage to health is possible | Software failure cannot contribute to a hazardous situation | Software that displays non-critical administrative data; software where all hazardous outputs are controlled by independent hardware safety mechanisms |
| Class B | Non-serious injury is possible | Software failure can contribute to a hazardous situation, but the resulting harm is non-serious injury | Software that controls a low-energy therapeutic device with hardware safety limits; software that presents wellness data with clinical context |
| Class C | Death or serious injury is possible | Software failure can contribute to a hazardous situation that could result in death or serious injury | Software that controls drug delivery dosing; diagnostic AI that influences treatment decisions for life-threatening conditions; software controlling high-energy radiation therapy |

How Classification Works

Classification is not a gut feeling. It is the output of a risk analysis process integrated with ISO 14971 (risk management for medical devices). The classification is based on the severity of harm that could result from a hazardous situation to which the software contributes, after considering the system-level risk control measures that are external to the software.

This last point is critical and frequently misunderstood. If your medical device has independent hardware-based safety mechanisms that prevent harm even when the software fails, those mechanisms can reduce the software safety classification. For example:

  • A motorized surgical tool where the software controls motor speed, but an independent hardware current limiter prevents the motor from exceeding a safe speed regardless of what the software commands. The hardware limiter is external to the software and is an independent risk control measure. This could justify classifying the software as Class A or B instead of Class C, even though the motor could theoretically cause serious injury.

However, there are strict conditions for this argument to hold:

  1. The hardware risk control must be independent of the software — it cannot share a processor, power supply, or any failure mode with the software
  2. The hardware risk control must be adequate — it must fully mitigate the hazard to the claimed severity level
  3. The risk analysis must document this reasoning explicitly
  4. The hardware risk control itself must be verified and validated

Practical warning: Auditors are highly skeptical of classification reductions based on external risk controls. If you claim Class A for software that controls a Class III medical device, expect pointed questions. The burden of proof is entirely on you, and the risk analysis must be airtight. When in doubt, classify higher. The cost of additional process rigor is always less than the cost of a failed audit or a recall.
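The reasoning above can be condensed into a tiny lookup. This is an illustrative sketch, not the normative decision tree from the standard; the function name `classify` and the severity labels are ours.

```python
# Minimal sketch of the classification reasoning above; the function name
# and severity labels are ours, not defined terms from the standard.
def classify(severity_after_external_controls: str) -> str:
    """Map the worst credible harm from a software failure, assessed AFTER
    any independent, adequate, verified external risk controls, to a class."""
    table = {
        "none": "Class A",         # failure cannot contribute to harm
        "non-serious": "Class B",  # non-serious injury possible
        "serious": "Class C",      # death or serious injury possible
    }
    return table[severity_after_external_controls]

# Motorized tool example: serious harm is possible in principle, but an
# independent hardware current limiter caps the credible harm.
print(classify("non-serious"))  # Class B
```

The single input makes the key point explicit: severity is judged after external risk controls, and the entire burden of the classification argument sits in how that input was derived.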

Classification at the Software System vs. Software Item Level

Amendment 1:2015 introduced a crucial refinement: you can classify at the software item level, not just the software system level. This means that within a single software system classified as Class C, individual software items (modules, components, subsystems) can be classified as Class A or B if the architecture properly segregates them and the risk analysis supports the lower classification.

This is enormously practical. A typical medical device software system might include:

  • A Class C module that calculates drug dosing
  • A Class B module that displays patient vitals
  • A Class A module that handles user authentication and logging

By classifying at the item level, you can apply lighter-weight processes to the Class A and B modules while reserving the full rigor of Class C processes for the modules where it matters most. This requires a software architecture that enforces isolation between items of different classes — a topic covered in the architectural design section below.

Documentation Requirements by Safety Class

The amount of required documentation and process rigor varies dramatically by class. This table summarizes the key lifecycle activities and which classes require them:

| Lifecycle Activity | Class A | Class B | Class C |
| --- | --- | --- | --- |
| Software development planning | Required | Required | Required |
| Software requirements analysis | Required | Required | Required |
| Software architectural design | Not required | Required | Required |
| Software detailed design | Not required | Not required | Required |
| Software unit implementation | Required | Required | Required |
| Software unit verification | Not required | Required | Required |
| Software integration and integration testing | Not required | Required | Required |
| Software system testing | Required | Required | Required |
| Software release | Required | Required | Required |
| Software maintenance planning | Required | Required | Required |
| Software risk management | Required | Required | Required |
| Software configuration management | Required | Required | Required |
| Software problem resolution | Required | Required | Required |
| Traceability (requirements to tests) | Required | Required | Required |
| Traceability (requirements to architecture) | Not required | Required | Required |
| Traceability (architecture to detailed design) | Not required | Not required | Required |
| Traceability (detailed design to implementation) | Not required | Not required | Required |

The implications are stark. A Class A software project requires planning, requirements, system testing, release, maintenance, risk management, configuration management, and problem resolution — but can skip formal architectural design, detailed design, unit verification, and integration testing. A Class C project must do everything.

This is why classification matters so much. Misclassifying a Class B system as Class C could double your documentation workload. Misclassifying a Class C system as Class A could mean you ship software without the design documentation and unit testing needed to ensure safety — and you will not survive a Notified Body audit.
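The matrix above has a simple shape: each activity becomes required at some class and stays required at every stricter class. A sketch of that encoding (activity names abbreviated, structure ours):

```python
# Encoding of the activity matrix above as a lookup: each activity maps to
# the least rigorous class at which it becomes required. Structure is ours.
LOWEST_CLASS_REQUIRING = {
    "development planning": "A",
    "requirements analysis": "A",
    "architectural design": "B",
    "detailed design": "C",
    "unit implementation": "A",
    "unit verification": "B",
    "integration testing": "B",
    "system testing": "A",
    "release": "A",
    "maintenance planning": "A",
    "risk management": "A",
    "configuration management": "A",
    "problem resolution": "A",
}

def is_required(activity: str, safety_class: str) -> bool:
    """Class C inherits everything required of B, which inherits A."""
    order = "ABC"
    return order.index(safety_class) >= order.index(LOWEST_CLASS_REQUIRING[activity])

assert is_required("detailed design", "C")
assert not is_required("detailed design", "B")
assert is_required("architectural design", "B")
```

A table like this is handy as the backbone of a project checklist generator: given the declared class, it emits the deliverable list your plan must cover.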

Software Development Planning

Every IEC 62304 project begins with a software development plan. This is not a project management schedule — it is a document (or set of documents) that defines how you will execute each lifecycle process, what deliverables will be produced, and what standards, methods, and tools you will use.

Required Content

The software development plan must address:

  • The software development lifecycle model — waterfall, V-model, iterative, agile, or hybrid. The standard does not prescribe a model, but you must define the one you are using.
  • Deliverables for each lifecycle activity — what documents, artifacts, and records will be produced
  • Traceability between lifecycle activities — how you will maintain traceability from requirements through design, implementation, and testing
  • Software development standards, methods, and tools — coding standards, design notation, review methods, static analysis tools, compilers, IDEs
  • Software integration and integration testing plan — how software items will be integrated and how integration will be verified (Class B and C)
  • Software verification plan — the approach to verifying software items (Class B and C) and the software system (all classes)
  • Software risk management plan — how risk management per ISO 14971 will be integrated into the software development lifecycle
  • Documentation plan — what documents will be produced, their format, and how they will be controlled
  • Software configuration management plan — how software items, documents, tools, and SOUP will be identified, versioned, and controlled
  • Supporting tool qualification — if a tool is relied upon for verification or could introduce errors, how will the tool be qualified?

Practical tip: Many organizations maintain a single "Software Development Plan" template that covers all of the above for a product line, with project-specific appendices or configuration for individual products. This avoids re-creating the plan from scratch for each project while still allowing project-specific tailoring. The plan should be a living document, updated as the project evolves — IEC 62304 explicitly allows and expects plan updates.

Requirements Analysis

Software requirements analysis is required for all safety classes. This is where you define what the software must do — its functional requirements, performance requirements, interface requirements, and software-specific safety requirements derived from the system-level risk analysis.

What Must Be Documented

The standard requires that software requirements include:

  • Functional and capability requirements — what the software does
  • Inputs and outputs of the software system — data interfaces, user interfaces, hardware interfaces
  • Interfaces between software systems and components — including interfaces with external systems, SOUP, and hardware
  • Software-driven alarms, warnings, and operator messages
  • Security requirements — protection against unauthorized access (increasingly important with cybersecurity guidance from FDA and EU)
  • Usability engineering requirements — derived from IEC 62366 (usability engineering for medical devices)
  • Data definition and database requirements
  • Installation and acceptance requirements
  • Requirements related to methods of operation and maintenance
  • User documentation requirements
  • Regulatory requirements — any requirements derived from applicable regulations

Risk-Derived Requirements

This is where IEC 62304 integrates tightly with ISO 14971. The system-level risk analysis identifies hazardous situations. For each hazardous situation where software contributes to the cause or where software implements a risk control measure, you must derive specific software requirements. These are sometimes called "software safety requirements" and must be traceable back to the risk analysis.

For example, if the risk analysis identifies "overdose due to incorrect flow rate calculation" as a hazardous situation, the software requirements must include specific requirements such as:

  • The flow rate calculation algorithm shall use a specific formula with defined precision
  • The software shall validate user-entered flow rate values against a defined range
  • The software shall display a warning if the calculated dose exceeds a threshold
  • The software shall implement an independent check of the flow rate command before sending it to the pump actuator

Each of these requirements traces back to the hazardous situation in the risk analysis. This traceability is auditable and auditors will check it.
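A toy model of that trace chain makes the audit check mechanical: every hazardous situation must be covered by at least one derived requirement. The IDs and record shapes below are illustrative, not a prescribed format.

```python
# Toy traceability model from hazardous situation to requirement; IDs and
# record shapes are illustrative, not a prescribed format.
hazards = {"HAZ-012": "Overdose due to incorrect flow rate calculation"}

requirements = {
    "SRS-101": {"text": "Validate entered flow rate against the allowed range",
                "traces_to": "HAZ-012"},
    "SRS-102": {"text": "Warn when the calculated dose exceeds the threshold",
                "traces_to": "HAZ-012"},
}

def untraced_hazards(hazards: dict, requirements: dict) -> list:
    """Hazardous situations with no derived software safety requirement --
    exactly the gaps an auditor will look for."""
    covered = {r["traces_to"] for r in requirements.values()}
    return sorted(h for h in hazards if h not in covered)

print(untraced_hazards(hazards, requirements))  # []
```

Running a check like this in CI turns "traceability is auditable" from a periodic documentation scramble into a continuously enforced invariant.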

Requirements Verification

You must verify that the software requirements are:

  • Not contradictory
  • Testable (you can write a test case against each requirement)
  • Traceable to system requirements or risk analysis
  • Complete (no gaps in expected behavior, boundary conditions, or failure modes)
  • Unambiguous

For Class B and C software, this verification must be documented. In practice, most organizations accomplish this through a formal requirements review (peer review or design review) with documented minutes, action items, and sign-off.
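The testability and traceability checks lend themselves to automation as a pre-review gate. A minimal sketch (data shapes are ours) that flags requirements no test case claims to verify:

```python
# Quick automated pre-check before the formal requirements review: every
# requirement needs at least one planned test case. Shapes are ours.
def requirements_without_tests(requirement_ids, test_cases) -> list:
    """Return requirement IDs that no test case claims to verify."""
    verified = {t["verifies"] for t in test_cases}
    return sorted(r for r in requirement_ids if r not in verified)

tests = [
    {"id": "TC-001", "verifies": "SRS-101"},
    {"id": "TC-002", "verifies": "SRS-102"},
]
print(requirements_without_tests(["SRS-101", "SRS-102", "SRS-103"], tests))
# ['SRS-103']
```

Automation does not replace the human review (ambiguity and contradiction still need eyes), but it keeps the mechanical gaps out of the review meeting.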

Architectural Design

Software architectural design is required for Class B and Class C software. For Class A software, it is optional but often still good practice.

Purpose

The architectural design describes how the software system is decomposed into software items (modules, components, services) and how those items interact. It serves three critical purposes under IEC 62304:

  1. Enables item-level classification — If you want to classify individual software items at different safety classes (e.g., a Class C calculation module and a Class A logging module within the same system), the architecture must demonstrate that the items are properly segregated.
  2. Supports integration planning — The architecture defines the integration order and identifies the interfaces that must be tested during integration testing.
  3. Supports risk analysis — The architecture identifies potential failure modes at the interface level and supports the allocation of safety requirements to specific software items.

Required Content

For Class B and C software, the architectural design must document:

| Element | Description |
| --- | --- |
| Software items | Identification of the major components/modules/services that make up the software system |
| Interfaces between software items | Data flows, control flows, APIs, message formats between internal components |
| Interfaces with external systems | How the software interacts with hardware, external software, networks, and users |
| SOUP identification | Which SOUP (Software of Unknown Provenance) components are used, their versions, and where they fit in the architecture |
| Functional properties of software items | What each item does in the context of the system |
| Segregation of software items of different classes | How items of different safety classes are isolated so that a failure in a lower-class item cannot propagate to a higher-class item |

Segregation

Segregation is the architectural mechanism that prevents a failure in a low-integrity software item from causing a failure in a high-integrity software item. Common segregation techniques include:

  • Separate processes with OS-enforced memory protection
  • Separate hardware (different processors, different boards)
  • Communication through well-defined, validated interfaces with input checking
  • Watchdog timers and heartbeat monitoring between items
  • Separate partitions in a real-time operating system with partition scheduling

If your architecture claims that a Class A module cannot affect a Class C module, you must demonstrate the segregation mechanism and verify that it works. This is an area where auditors dig deep.
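One of the cheapest segregation techniques listed above, "communication through well-defined, validated interfaces with input checking," amounts to the higher-class item re-validating everything that crosses its boundary rather than trusting the lower-class caller. A sketch, with names and limits that are ours:

```python
# Sketch of the "validated interface with input checking" technique: the
# higher-class item re-validates everything crossing its boundary rather
# than trusting a lower-class caller. Names and limits are illustrative.
class InterfaceError(ValueError):
    """Raised when data crossing a segregation boundary fails validation."""

def receive_flow_rate(raw) -> float:
    # Type check first (bool is excluded explicitly: it passes isinstance
    # checks against int in Python), then range check.
    if not isinstance(raw, (int, float)) or isinstance(raw, bool):
        raise InterfaceError(f"wrong type: {type(raw).__name__}")
    if not (0.1 <= raw <= 99.9):
        raise InterfaceError(f"out of range: {raw!r}")
    return float(raw)

assert receive_flow_rate(5) == 5.0
for bad in ("5", None, 1000.0, True):
    try:
        receive_flow_rate(bad)
    except InterfaceError:
        pass
    else:
        raise AssertionError(f"accepted invalid input {bad!r}")
```

Note that interface checking alone is usually not sufficient segregation for item-level classification; it complements, rather than replaces, memory protection or hardware separation.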

Detailed Design

Detailed design is required only for Class C software. It takes the architectural design and refines each software item (or unit) to a level of detail sufficient for implementation and verification.

What Detailed Design Looks Like

The detailed design must document each software unit's:

  • Interfaces — function signatures, data types, protocols, error codes
  • Internal data structures and their constraints
  • Algorithms — including mathematical formulas, state machines, decision logic
  • Error detection and handling — how the unit detects errors in inputs, internal state, and outputs, and what it does about them

For Class C software, the detailed design must be detailed enough that a developer can implement the unit from the design document alone, and a reviewer can verify the implementation against the design. This is the level of documentation that many agile teams find most burdensome — and where creative approaches (code-as-design, auto-generated documentation, formal specification tools) can help reduce friction without sacrificing compliance.

Practical tip: For Class C software, many organizations use annotated header files, interface definition documents, or formal modeling tools (e.g., MATLAB/Simulink models, UML state diagrams) as their detailed design documentation. The standard does not prescribe a format. What matters is that the design is documented, reviewable, and traceable to the implementation.
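As a concrete (and hypothetical) example of the code-as-design approach, an annotated interface can carry all four detailed-design elements — interface, data constraints, algorithm, error handling — in a form that is directly reviewable against the implementation. All identifiers and limits below are illustrative.

```python
# "Code-as-design" sketch: an annotated interface carries the detailed-design
# content (interface, data constraints, algorithm, error handling) in a
# reviewable form. All identifiers and limits are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class DoseRequest:
    """Internal data structure.
    Constraints: 0 < weight_kg <= 300; 0 < concentration_mg_ml <= 100."""
    weight_kg: float
    concentration_mg_ml: float

def dose_volume_ml(req: DoseRequest, dose_mg_per_kg: float) -> float:
    """Algorithm: volume [mL] = weight [kg] * dose [mg/kg] / concentration [mg/mL].
    Error handling: rejects out-of-constraint inputs with ValueError."""
    if not 0 < req.weight_kg <= 300:
        raise ValueError("weight_kg outside design constraints")
    if not 0 < req.concentration_mg_ml <= 100:
        raise ValueError("concentration_mg_ml outside design constraints")
    return req.weight_kg * dose_mg_per_kg / req.concentration_mg_ml

print(dose_volume_ml(DoseRequest(70.0, 10.0), 1.0))  # 7.0
```

The design content lives in version control next to the code it describes, which makes the design-to-implementation trace (required for Class C) nearly free.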

Implementation and Verification

Unit Implementation

For all safety classes, you must implement the software according to the plan, using the coding standards and tools defined in the software development plan. The standard requires that the implementation is verifiable — meaning someone other than the original developer can review it and confirm it does what the requirements and design say it should.

Unit Verification (Class B and C)

For Class B and C software, you must verify each software unit. Verification methods include:

| Method | Description | Common Use |
| --- | --- | --- |
| Code review | Manual inspection of source code against requirements, design, and coding standards | All classes where unit verification applies |
| Static analysis | Automated analysis of source code for coding standard violations, potential defects, security vulnerabilities | Highly recommended for Class B and C |
| Unit testing | Executing individual software units with defined inputs and checking outputs against expected results | The primary verification method for most organizations |
| Formal methods | Mathematical proof of correctness | Rare; used for safety-critical algorithms in some Class C applications |

The standard requires that unit verification demonstrates:

  • The software unit implements the detailed design (Class C) or architectural design (Class B) correctly
  • The software unit meets its requirements
  • The software unit does not contain unintended functions (defense against undocumented code paths)
  • The software unit complies with the coding standards defined in the development plan

Common audit finding: Insufficient code coverage in unit tests. While IEC 62304 does not mandate a specific code coverage metric, auditors expect to see evidence that unit tests exercise the critical paths through the code, including error-handling paths. For Class C software, statement coverage or branch coverage metrics should be measured and documented, even if the standard does not specify a numeric threshold. A reasonable target for Class C is at least 80% branch coverage, with justification for any uncovered branches.
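What "exercising the error-handling paths" looks like in a unit test is worth spelling out. In this sketch the unit under test, its name, and the 0.1-99.9 mL/h range are all assumptions for illustration:

```python
# Illustrative unit under test and its unit tests; names and the 0.1-99.9
# mL/h range are assumptions for this sketch.
def checked_flow_rate(value: float) -> float:
    """Range check for a pump flow rate; raises on out-of-range input."""
    if value < 0.1 or value > 99.9:
        raise ValueError("flow rate out of range")
    return value

def test_nominal_path():
    assert checked_flow_rate(5.0) == 5.0

def test_boundary_values():
    assert checked_flow_rate(0.1) == 0.1
    assert checked_flow_rate(99.9) == 99.9

def test_error_paths():
    # The error-handling branches count toward branch coverage too
    for bad in (0.05, 150.0):
        try:
            checked_flow_rate(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"expected rejection of {bad}")

for t in (test_nominal_path, test_boundary_values, test_error_paths):
    t()  # every branch of checked_flow_rate is now exercised
```

A suite that only covered `test_nominal_path` might report a passing run while leaving both error branches untested, which is precisely the gap the audit finding describes.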

Integration and Integration Testing

Integration testing is required for Class B and C software. It verifies that software items, when combined, work together correctly.

Integration Strategy

The software development plan should define the integration strategy:

  • Bottom-up: Integrate lowest-level items first, building up to the full system
  • Top-down: Start with the highest-level item, using stubs for lower-level items, and progressively replace stubs with real implementations
  • Continuous integration: Integrate items incrementally as they are completed, running integration tests with each integration — this is the approach most compatible with agile development

What Must Be Tested

Integration testing must verify:

  • Software items are correctly integrated and their interfaces work as specified in the architectural design
  • The integrated software items meet the software requirements allocated to those items
  • Data flows between items are correct — correct data types, ranges, formats, timing
  • Error handling at interfaces works — what happens when one item sends invalid data to another?

Regression Testing

The standard requires that when changes are made to software items that have already been integrated and tested, you evaluate the impact on previously passed integration tests and re-run affected tests. This is regression testing. Automated test suites are effectively mandatory for any non-trivial system if you want to maintain compliance without crushing your team under manual test execution.

Integration Testing Documentation

For each integration test, you must document:

  • The test case (what is being tested, input data, expected output)
  • The test procedure (how to execute the test)
  • The test result (pass/fail, actual output, date, tester)
  • The version of the software under test (tied to configuration management)
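The four documentation elements above can be captured as a structured record so that every result automatically carries the tie-in to configuration management. Field names and values below are ours:

```python
# Sketch of an integration test record capturing the elements above,
# including the software-version tie-in; field names and values are ours.
from dataclasses import dataclass

@dataclass
class IntegrationTestRecord:
    test_id: str           # test case identifier
    description: str       # what is being tested
    procedure: str         # how to execute the test
    expected: str
    actual: str
    passed: bool
    software_version: str  # links the result to configuration management
    executed_on: str
    tester: str

rec = IntegrationTestRecord(
    test_id="IT-042",
    description="Pump-control item rejects malformed dose message from UI item",
    procedure="Inject a message with a negative dose field at the item interface",
    expected="Message rejected, error logged, pump state unchanged",
    actual="Message rejected, error logged, pump state unchanged",
    passed=True,
    software_version="2.3.1+build.8f3e2a1",
    executed_on="2026-03-10",
    tester="integration test engineer",
)
assert rec.passed and rec.software_version.startswith("2.3.1")
```

Making `software_version` a mandatory field is the cheap way to guarantee no test result ever floats free of the build it was run against.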

System Testing

System testing is required for all safety classes — including Class A. It verifies that the complete, integrated software system meets all software requirements.

Relationship to Other Testing Levels

System testing is distinct from integration testing and from design validation:

| Testing Level | What It Verifies | Standard |
| --- | --- | --- |
| Unit verification | Individual software units meet their specifications | IEC 62304 (Class B, C) |
| Integration testing | Software items work together correctly at their interfaces | IEC 62304 (Class B, C) |
| System testing | The complete software system meets all software requirements | IEC 62304 (all classes) |
| Design validation | The finished medical device meets user needs and intended use under actual or simulated use conditions | ISO 13485 / 21 CFR 820 |

System testing is software verification at the system level. It is not the same as design validation (which encompasses the entire device, not just the software, and tests against user needs rather than specifications). However, system test results often provide evidence that supports design validation.

System Test Requirements

You must test the software system against:

  • Every software requirement (functional, performance, interface, security, usability)
  • Software-related risk control measures identified in the risk analysis
  • Known anomalies and their resolution (or rationale for non-resolution)

The system test plan should define the test environment, test equipment, test data, pass/fail criteria, and the roles responsible for test execution and review.

Anomaly Resolution Before Release

Any anomalies (failures, unexpected behaviors) found during system testing must be documented and evaluated. For each anomaly, you must determine:

  • Whether it represents a hazard (feeds back into risk management)
  • Whether it must be resolved before release, or can be accepted as a known anomaly with justification
  • The impact on previously completed testing (regression analysis)

Software Release

The software release process ensures that the software is complete, tested, and ready for deployment. It is required for all safety classes.

Release Requirements

Before releasing software, you must:

  1. Verify that all lifecycle activities are complete — all planned requirements, design, implementation, and testing activities have been performed and documented
  2. Evaluate known residual anomalies — every known bug or deficiency must be documented, risk-assessed, and accepted or resolved
  3. Ensure documentation is complete — the software development record (or design history file equivalent) contains all required deliverables
  4. Verify that the released version is the tested version — configuration management must tie the release package to the specific build that was tested
  5. Complete the risk management report — the overall residual risk is acceptable per ISO 14971
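The five steps above are a natural fit for an automated release gate that refuses to proceed while any item is open. This is an illustrative sketch; the checklist item names are ours, not wording from the standard.

```python
# Illustrative release gate derived from the five steps above; item names
# are ours, not wording from the standard.
RELEASE_CHECKLIST = (
    "lifecycle_activities_complete",
    "residual_anomalies_evaluated",
    "documentation_complete",
    "release_matches_tested_build",
    "risk_management_report_complete",
)

def release_blockers(status: dict) -> list:
    """Items still open; release is only permitted when this list is empty."""
    return [item for item in RELEASE_CHECKLIST if not status.get(item, False)]

status = dict.fromkeys(RELEASE_CHECKLIST, True)
status["residual_anomalies_evaluated"] = False
print(release_blockers(status))  # ['residual_anomalies_evaluated']
```

The gate is deliberately dumb: each flag should be set by a human sign-off or an upstream check, never defaulted to true, so the absence of evidence blocks release rather than the presence of evidence permitting it.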

Release Documentation

The release documentation typically includes:

  • Release notes (version number, date, changes from previous version, known anomalies)
  • The released software build (binary, installer, image) identified by version
  • A statement that the software was developed in accordance with the software development plan
  • Reference to or inclusion of the software verification and validation summary
  • Reference to the risk management report

Software Maintenance

Once software is released and in use, IEC 62304 requires a maintenance process. This is not optional — even for Class A software. Software maintenance covers changes to the software after release, whether those changes are bug fixes, enhancements, adaptations to new platforms, or responses to field problems.

Maintenance Planning

You must establish a software maintenance plan that defines:

  • How feedback from the field (complaints, problem reports, vigilance reports) enters the maintenance process
  • How change requests are evaluated for impact (including impact on safety classification, risk analysis, and previously verified/validated functionality)
  • How changes are implemented — using the same lifecycle processes as original development, scaled to the scope and safety impact of the change
  • How modified software is re-verified and re-tested (regression analysis and testing)
  • How modified software is re-released

The Key Principle: Changes Follow the Same Process

This is the most important concept in IEC 62304 maintenance. When you modify released software, the modification must follow the applicable lifecycle processes for the safety class of the affected software items. If you are changing a Class C software item, you must update the detailed design, implement the change, perform unit verification, integration testing, system testing, and update the risk analysis — just as you would for new development.

The scope of re-verification and re-testing can be tailored based on a documented impact analysis. You do not necessarily have to re-run every test in your suite. But you must demonstrate that you analyzed the impact, identified affected tests, re-ran those tests, and documented the results.
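Trace links make that impact analysis mechanical: every test that touches a changed software item is in scope for re-execution. A sketch, with data shapes and IDs that are ours:

```python
# Sketch of impact-analysis-driven regression scoping: trace links decide
# which tests must re-run after a change. Data shapes and IDs are ours.
TEST_TRACE = {  # test id -> software items the test exercises
    "ST-001": {"dosing"},
    "ST-002": {"dosing", "ui"},
    "ST-003": {"logging"},
}

def affected_tests(changed_items: set, trace: dict = TEST_TRACE) -> list:
    """Every test touching a changed item is in scope for re-execution;
    the selection itself becomes part of the documented impact analysis."""
    return sorted(t for t, items in trace.items() if items & changed_items)

print(affected_tests({"dosing"}))  # ['ST-001', 'ST-002']
```

The output of the selection is itself an auditable artifact: archive it with the change record as evidence that the regression scope was analyzed, not guessed.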

Software Problem Resolution

IEC 62304 requires a software problem resolution process that covers the entire lifecycle — from development through maintenance. This process manages the evaluation and resolution of problems discovered in the software, whether found during development, testing, or post-market use.

Process Requirements

The problem resolution process must:

  • Accept and document problem reports from any source (testing, customer complaints, internal reviews, post-market surveillance, regulatory agencies)
  • Investigate and evaluate each problem to determine its cause, impact on safety, and whether it affects released software
  • Prioritize and track resolution — with priority based on safety impact
  • Implement and verify corrections — changes follow the lifecycle processes appropriate to the safety class
  • Notify relevant stakeholders — if a problem affects safety or regulatory compliance, the appropriate regulatory and quality processes must be triggered (CAPA, field safety corrective action, vigilance reporting)
  • Feed problems back into the risk management process — newly discovered software failure modes may require updates to the risk analysis

Relationship to CAPA

The software problem resolution process interfaces with but is distinct from the CAPA (Corrective and Preventive Action) process required by ISO 13485 and 21 CFR 820. Not every software problem report triggers a CAPA — but any software problem that reveals a systematic issue, a safety concern, or a non-conformity with requirements should be evaluated for CAPA initiation. The two processes must have a defined interface, and auditors will check that safety-relevant software problems flow into CAPA and post-market surveillance processes.

Configuration Management

Software configuration management is required for all safety classes. It ensures that you can identify, track, and control every version of every software item, document, and tool in your project.

What Must Be Under Configuration Management

| Item | Description |
| --- | --- |
| Source code | All source files, build scripts, makefiles, configuration files |
| Software items and the software system | Every identified software item and the integrated system, with version identifiers |
| SOUP/OTS components | All third-party software, libraries, frameworks, and operating systems, with version numbers |
| Development tools | Compilers, linkers, IDEs, test frameworks, static analysis tools — with versions |
| Documentation | Software development plan, requirements, design documents, test plans, test results, risk analysis |
| Build environment | The complete environment needed to reproduce a build (can be documented or captured as a container/VM image) |

Change Control

Changes to configuration items must be controlled. This means:

  • Each change must be requested, evaluated (including impact on safety and previously completed activities), approved, implemented, and verified
  • The history of changes must be maintained (revision history)
  • You must be able to recreate any previous build from the configuration items in place at that time

In practice, modern version control systems (Git), CI/CD pipelines, and artifact repositories provide the infrastructure for configuration management. But the process — especially the change evaluation and approval steps — must be documented and followed. A Git commit history alone is not sufficient; you need evidence that changes were reviewed and approved before being integrated.
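As one illustration of "evidence that changes were reviewed and approved," a pipeline check might scan commit messages for a review trailer. The `Reviewed-by:` trailer here is a hypothetical team convention, not an IEC 62304 requirement; the check is a sketch, not a complete change-control process:

```python
import re

def unreviewed_commits(commit_messages):
    """Return the commit messages that lack a documented review approval.

    Assumes the team records approval as a 'Reviewed-by:' trailer when
    merging pull requests (a team convention, not a standard requirement).
    """
    trailer = re.compile(r"^Reviewed-by: .+<.+@.+>", re.MULTILINE)
    return [m for m in commit_messages if not trailer.search(m)]

history = [
    "Add bolus limit check\n\nReviewed-by: A. Author <a@example.com>",
    "Hotfix: silence warning",   # no review trailer -> flagged
]
flagged = unreviewed_commits(history)
```

Run against `git log` output in CI, a check like this turns the review-before-integration rule from a policy statement into enforced, auditable evidence.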

Risk Management Integration with ISO 14971

IEC 62304 does not stand alone. It is designed to work in conjunction with ISO 14971, the standard for risk management of medical devices. The interaction between the two standards is not a loose coupling — it is a tight integration that runs throughout the entire software lifecycle.

How the Standards Interact

Lifecycle Phase IEC 62304 Activity ISO 14971 Integration
Planning Define risk management approach in software development plan Align with the overall risk management plan per ISO 14971
Requirements Derive software safety requirements from risk analysis System-level risk analysis identifies hazardous situations; software requirements implement risk control measures
Architecture Design segregation, identify SOUP risks Architectural design decisions are risk control measures; SOUP hazard analysis feeds the risk analysis
Implementation Apply coding standards, implement safety requirements Coding standards are risk control measures against implementation errors
Verification Verify risk control measures are implemented Verification of software safety requirements is verification of risk control measure implementation
System testing Test risk control measures System tests include tests of risk control measures; results feed back into residual risk evaluation
Release Evaluate residual risk Risk management report confirms overall residual risk is acceptable
Maintenance Evaluate change impact on risk Changes to software may introduce new hazards or affect existing risk controls; risk analysis must be updated

Software Hazard Analysis

In addition to the system-level risk analysis performed under ISO 14971, IEC 62304 requires that you perform risk analysis activities specific to software:

  • Identify potential causes of the software contributing to a hazardous situation — what software failure modes (incorrect output, delayed output, no output, unexpected behavior) could cause or contribute to hazardous situations identified in the system-level risk analysis?
  • Evaluate SOUP risks — what are the known anomalies and potential failure modes of SOUP components, and how could they propagate through the system?
  • Document risk control measures implemented in software — these become software safety requirements
  • Verify risk control measures — through code review, testing, and analysis

This is not a separate risk analysis from ISO 14971. It is the software-specific contribution to the overall ISO 14971 risk management process. Some organizations maintain a single risk management file with both system-level and software-level analyses; others maintain separate but cross-referenced documents. Either approach is acceptable as long as traceability is maintained.

SOUP and OTS Software Management

SOUP — Software of Unknown Provenance — is one of the most practically challenging aspects of IEC 62304 compliance. SOUP refers to any software that was not developed under the control of your IEC 62304-compliant lifecycle process. In practice, this includes:

  • Open-source libraries (e.g., OpenSSL, Boost, React, TensorFlow)
  • Commercial off-the-shelf (COTS) software (e.g., Windows, Linux, RTOS kernels, database engines)
  • Third-party SDKs and APIs
  • Previously developed software that was not created under IEC 62304 processes

Nearly every modern medical device software system uses SOUP. The question is not whether you use it — it is how you manage it.

SOUP Requirements Under IEC 62304

Requirement Class A Class B Class C
Identify SOUP items and their versions Required Required Required
Define functional and performance requirements for each SOUP item Not required Required Required
Define hardware/software compatibility requirements for SOUP Not required Required Required
Evaluate known anomalies in SOUP (review bug databases, release notes, errata) Not required Required Required
Evaluate SOUP for potential contribution to hazardous situations Required Required Required
Specify SOUP verification (if publicly available anomaly lists are insufficient) Not required Required Required

Practical SOUP Management

For each SOUP item, you need a SOUP management record that typically includes:

  • Name and manufacturer/community (e.g., OpenSSL, by the OpenSSL Project)
  • Version in use
  • Purpose — why this SOUP is used in your software
  • Functional and performance requirements — what you need the SOUP to do and how well
  • Known anomaly evaluation — review of the SOUP's bug tracker, CVE database, release notes for known issues that could affect your use case
  • Risk evaluation — could a failure of this SOUP contribute to a hazardous situation? If so, what risk controls are in place?
  • Verification approach — how are you verifying that the SOUP performs as needed in your context? (System testing that exercises the SOUP functionality, specific integration tests, review of SOUP test results provided by the manufacturer)

Common audit finding: Incomplete SOUP lists. Auditors expect to see every third-party component listed, versioned, and risk-evaluated — including transitive dependencies. If your Node.js application pulls in 400 npm packages, you need a strategy for managing that list. Many organizations use software composition analysis (SCA) tools to auto-generate SOUP inventories and monitor for newly discovered vulnerabilities. This is not just a regulatory checkbox — it is essential for cybersecurity posture.
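A minimal sketch of auto-generating a SOUP inventory skeleton, here from a pinned pip requirements file; a production setup would use an SCA tool and also cover transitive dependencies. The record fields mirror the list above and are illustrative:

```python
def soup_inventory(requirements_text):
    """Build one minimal SOUP record per pinned dependency.

    Parses 'name==version' lines from a pip requirements file; the
    evaluation fields are left for manual (or SCA-tool-assisted)
    completion. Field names are illustrative.
    """
    records = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()   # drop comments and blanks
        if not line:
            continue
        if "==" not in line:
            # SOUP must be identified by exact version
            raise ValueError(f"unpinned dependency: {line!r}")
        name, version = line.split("==", 1)
        records.append({
            "name": name.strip(),
            "version": version.strip(),
            "purpose": None,                   # to be filled in by the team
            "known_anomalies_reviewed": False,
            "risk_evaluated": False,
        })
    return records

inventory = soup_inventory("numpy==1.26.4  # array math\nrequests==2.32.3\n")
```

Rejecting unpinned dependencies outright is a deliberate design choice: an unversioned SOUP item cannot be anomaly-reviewed or reproduced, so the inventory build should fail loudly rather than record it.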

SOUP vs. OTS Terminology

IEC 62304 uses the term "SOUP." You will also encounter "OTS" (Off-The-Shelf) in FDA guidance documents. The FDA's Guidance for Industry and FDA Staff: Off-The-Shelf Software Use in Medical Devices (1999) addresses similar concerns but uses different terminology and a somewhat different framework. For practical purposes, if you comply with IEC 62304's SOUP requirements, you will also satisfy the FDA's OTS expectations — but be aware that FDA reviewers may use OTS terminology in their questions, and your submission documents should use the terminology appropriate to the regulatory context.

Relationship to IEC 82304-1: Health Software

IEC 82304-1:2016 is a related but distinct standard that applies to health software products — standalone software that is intended to be used in healthcare but is not necessarily part of a medical device. Where IEC 62304 focuses on the software lifecycle process, IEC 82304-1 focuses on the product-level requirements for health software, including:

  • Requirements for a health software product (labeling, instructions for use, security, data integrity)
  • Requirements on the manufacturer (quality management, risk management, post-market surveillance)
  • A direct reference to IEC 62304 for the software lifecycle process

Think of IEC 82304-1 as a wrapper around IEC 62304. If your standalone health software product is also a medical device (SaMD), you will need to comply with both: IEC 62304 for the lifecycle process and IEC 82304-1 for the product-level requirements.

Aspect IEC 62304 IEC 82304-1
Focus Software lifecycle processes Health software product requirements
Applies to Medical device software (embedded or standalone) Standalone health software products
Covers Development, maintenance, risk management, CM, problem resolution Product labeling, security, safety, post-market aspects
Relationship Referenced by IEC 82304-1 for lifecycle References IEC 62304 for lifecycle
Safety classification Classes A, B, C References IEC 62304 classification

For SaMD manufacturers, the practical implication is straightforward: implement IEC 62304 for your development process, then layer IEC 82304-1 product requirements on top. The standards are designed to work together without duplication.

Agile Development Under IEC 62304

One of the most persistent myths in medical device software is that IEC 62304 requires waterfall development. It does not. The standard is lifecycle-model-agnostic. The 2006 edition was sometimes interpreted as favoring a sequential (waterfall or V-model) approach because it presents the lifecycle activities in a linear order, but nothing in the standard mandates that these activities be performed in a strict sequence for the entire project.

Amendment 1:2015 reinforced this by explicitly acknowledging iterative and incremental development approaches. The amendment states that the software development plan should identify the lifecycle model and that the lifecycle activities can be applied iteratively — provided that the outputs of each activity are produced and documented as required.

Making Agile Work with IEC 62304

The keys to successfully running agile development under IEC 62304:

1. Map IEC 62304 activities to agile artifacts and ceremonies.

IEC 62304 Activity Agile Equivalent
Software development planning Living product/sprint backlog + software development plan that describes the agile process
Requirements analysis User stories with acceptance criteria, documented and traceable to system requirements and risk analysis
Architectural design Architecture documentation maintained iteratively; updated as architecture evolves
Detailed design (Class C) Design documentation for each unit, created during the sprint in which the unit is implemented
Unit verification Automated unit tests, code reviews as part of Definition of Done
Integration testing Automated integration tests run in CI/CD pipeline
System testing Automated system tests run against each build; manual system tests as needed
Software release Sprint release or product release with release checklist

2. Define a compliant Definition of Done.

Your team's Definition of Done must include the IEC 62304 deliverables. A story is not "done" until:

  • Requirements are documented and traceable
  • Design documentation is updated (Class B/C)
  • Code is reviewed against coding standards
  • Unit tests pass and meet coverage targets (Class B/C)
  • Integration tests pass
  • Risk analysis is updated if the change affects safety
  • Traceability matrix is updated
  • All artifacts are under configuration management
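A Definition of Done like this can be enforced mechanically before a story is closed; a sketch, assuming story status is exported from the team's tracker as booleans (the keys are illustrative):

```python
def story_done(story):
    """Check a story against a Definition of Done checklist.

    `story` is a dict of booleans exported from the team's tracker;
    the keys below are illustrative, not prescribed by the standard.
    """
    required = [
        "requirements_traced",
        "design_doc_updated",        # Class B/C
        "code_reviewed",
        "unit_tests_pass",           # Class B/C, with coverage target
        "integration_tests_pass",
        "risk_analysis_reviewed",
        "traceability_updated",
        "under_configuration_mgmt",
    ]
    missing = [k for k in required if not story.get(k)]
    return (len(missing) == 0), missing

done, missing = story_done({
    "requirements_traced": True, "design_doc_updated": True,
    "code_reviewed": True, "unit_tests_pass": True,
    "integration_tests_pass": True, "risk_analysis_reviewed": False,
    "traceability_updated": True, "under_configuration_mgmt": True,
})
```

Wired into the tracker's workflow, a gate like this prevents the most common agile failure mode: stories marked "done" with the regulatory deliverables quietly deferred.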

3. Maintain continuous traceability.

In a waterfall model, traceability is typically established retrospectively — you write all requirements, then trace them to design, then to tests. In agile, traceability must be maintained incrementally, with each sprint adding new traces. Tools like Jama Connect, Polarion, Siemens Teamcenter, or even well-structured JIRA/Confluence configurations can manage this, but the discipline of updating traceability in real-time must be embedded in the team's workflow.
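Incremental traceability also lends itself to automated gap checks; a sketch, assuming requirement and test-case IDs can be exported from the team's tooling (the IDs below are invented):

```python
def traceability_gaps(requirements, trace_links):
    """Find requirements with no verifying test, and trace links that
    point at unknown requirements.

    `trace_links` maps requirement IDs to the test-case IDs that verify
    them, e.g. a per-sprint export from the requirements tool.
    """
    known = set(requirements)
    untested = [r for r in requirements if not trace_links.get(r)]
    dangling = [r for r in trace_links if r not in known]
    return untested, dangling

reqs = ["SRS-001", "SRS-002", "SRS-003"]
links = {"SRS-001": ["TC-010"], "SRS-002": [], "SRS-099": ["TC-042"]}
untested, dangling = traceability_gaps(reqs, links)
```

Run on every merge, a check like this catches traceability drift sprint by sprint instead of discovering it during audit preparation.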

4. Handle plan changes explicitly.

Agile teams expect plans to change. IEC 62304 requires a software development plan. These are not in conflict — but you must manage plan changes as configuration-controlled changes. If you change your integration strategy, your test approach, or your coding standards, update the plan and version it. Auditors will compare the plan to the actual process and look for consistency.

5. Document design decisions, not just code.

Agile teams sometimes rely on "the code is the documentation." IEC 62304 does not accept this for Class B and C software. The architectural design and (for Class C) detailed design must be documented in a form that is reviewable independently of the code. However, the documentation can be generated from code (e.g., auto-generated API documentation from code comments), maintained in the repository alongside the code (architecture decision records, design documents in version control), or captured in modeling tools. The key requirement is that the documentation exists, is version-controlled, and is traceable.

Practical tip for agile teams: The biggest friction point is usually detailed design for Class C software. One effective pattern is to write design documents as pull request descriptions or linked design documents before implementation begins, review them as part of the PR process, and store them in the repository. This integrates design documentation into the developer workflow rather than treating it as a separate burden.

IEC 62304 Amendment 1:2015 — Key Changes

The 2015 amendment to IEC 62304 was not a minor editorial revision. It introduced several substantive changes that affect how organizations implement the standard. If your processes are still based on the original 2006 edition, you need to update them.

Major Changes

Change Impact
Software item-level classification You can now classify individual software items (not just the entire software system), allowing mixed-class architectures with lighter processes for lower-risk components
Legacy software provisions New guidance on how to handle software that was developed before IEC 62304 was applied — evaluate it, classify it, and determine what additional activities are needed based on the gap
Clarification of SOUP management Expanded requirements for evaluating SOUP, particularly around known anomaly lists and risk evaluation
Iterative development acknowledgment Explicit recognition that iterative and incremental lifecycle models are acceptable
Clarification of scope Software that is part of a medical device system (not just standalone medical device software) is clearly in scope
Software system test record Clarified that test records must include the version of the software tested, the test environment, and test results
Risk management alignment Better alignment with ISO 14971:2007/2019, clarifying how software risk activities feed into the overall risk management process

Legacy Software

The amendment's legacy software provisions deserve special attention. If you have software that is already on the market and was not developed under IEC 62304, you do not necessarily need to retroactively create all the documentation that would have been required for new development. Instead, Amendment 1 provides a path:

  1. Classify the legacy software using the same safety classification process
  2. Perform a gap analysis — compare the existing development documentation and evidence against IEC 62304 requirements for the determined safety class
  3. Determine and implement additional activities to close the gaps — this might include additional testing, documentation of the existing architecture, risk analysis, or SOUP evaluation
  4. Document the rationale for the level of additional activities performed

This is pragmatic. Requiring full retroactive IEC 62304 compliance for legacy software that has years of field experience and a strong post-market safety record would be disproportionate. But the gap analysis and remediation must be genuine — not a rubber stamp.

Documentation Requirements Summary

The following table consolidates the primary documentation deliverables expected under IEC 62304, organized by safety class. This is the checklist your quality team and auditors will use.

Document / Artifact Class A Class B Class C Notes
Software development plan Required Required Required Living document; updated as project evolves
Software requirements specification Required Required Required Must include safety requirements derived from risk analysis
Software architecture document -- Required Required Includes SOUP identification, item classification, segregation rationale
Software detailed design -- -- Required Must support implementation and verification of individual units
Traceability matrix (requirements to tests) Required Required Required Must cover all software requirements
Traceability matrix (requirements to architecture) -- Required Required Maps requirements to architectural components
Traceability matrix (architecture to detailed design) -- -- Required Maps architectural items to detailed design units
Unit verification records -- Required Required Code review records, unit test results, static analysis results
Integration test plan and results -- Required Required Documents integration strategy, test cases, results
System test plan and results Required Required Required Tests against all software requirements
Risk management file (software contributions) Required Required Required Software hazard analysis, SOUP risk evaluation, risk control verification
SOUP list and evaluation Required Required Required Identification, versioning, anomaly evaluation, risk evaluation
Configuration management records Required Required Required Version history, change control records, build records
Software release documentation Required Required Required Release notes, version identification, residual anomaly evaluation
Software maintenance plan Required Required Required Defines how post-release changes will be managed
Problem resolution records Required Required Required Problem reports, investigation records, resolution verification

Relationship to FDA Software Guidance

The FDA has its own ecosystem of software guidance documents that overlaps with but is not identical to IEC 62304. Understanding the relationship between the two is essential for companies that need to satisfy both FDA and IEC 62304 requirements — which is most companies selling globally.

Key FDA Guidance Documents and IEC 62304 Alignment

FDA Guidance IEC 62304 Alignment
General Principles of Software Validation (2002) Covers similar lifecycle concepts but uses different terminology; broader in scope (covers production software, automated equipment)
Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices (2005) Defines what FDA expects to see in 510(k)/PMA submissions for software; documentation requirements align well with IEC 62304 outputs
Deciding When to Submit a 510(k) for a Software Change (2017) Addresses when software modifications require a new 510(k); complements IEC 62304 maintenance process
Clinical Decision Support Software (2022) Defines which CDS functions are devices; once determined to be a device, IEC 62304 applies
Predetermined Change Control Plans (2023) Allows manufacturers to pre-specify anticipated software changes in submissions; IEC 62304 maintenance process supports this concept
Cybersecurity guidance (2023) Cybersecurity requirements that complement IEC 62304's security-related requirements; expects SBOM aligned with SOUP management

Key Differences

  • Software Level of Concern: The FDA historically used "Level of Concern" (Minor, Moderate, Major) as its software risk classification. This was replaced by the 2023 premarket guidance's approach, which aligns more closely with IEC 62304's safety classification but is not identical. FDA now focuses on documentation expectations based on device risk (not just software risk).
  • Software Documentation Level: FDA guidance defines what documentation to include in a premarket submission. IEC 62304 defines what documentation to maintain in your design history file. The submission is a subset of the DHF. Complying with IEC 62304 produces the documentation needed for FDA submissions, but you still need to package it according to FDA formatting expectations.
  • SBOM and cybersecurity: FDA's cybersecurity guidance requires a Software Bill of Materials (SBOM) in premarket submissions. The SBOM is closely related to but not identical to IEC 62304's SOUP list. The SOUP list focuses on risk evaluation; the SBOM focuses on component transparency. In practice, maintaining a comprehensive SOUP list under IEC 62304 provides most of the data needed for an SBOM, with some additional fields.
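To illustrate the overlap, a SOUP record can seed an SBOM entry. The sketch below emits a minimal CycloneDX-style component list; a submission-ready SBOM must follow the full CycloneDX (or SPDX) schema and include supplier, hash, and license data, none of which is shown here:

```python
import json

def soup_to_sbom(soup_records):
    """Map SOUP records onto a minimal CycloneDX-style component list.

    Sketch only: real SBOMs carry many more fields per component
    (supplier, hashes, licenses) as required by the chosen schema.
    """
    return json.dumps({
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": "library", "name": r["name"], "version": r["version"]}
            for r in soup_records
        ],
    }, indent=2)

sbom = soup_to_sbom([{"name": "openssl", "version": "3.0.13",
                      "risk_evaluated": True}])
```

Note the direction of the mapping: the SOUP list is the richer record (it carries the risk evaluation), so generating the SBOM from it, rather than maintaining two parallel inventories, keeps the two artifacts from drifting apart.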

Practical approach: If you build your software lifecycle process to comply with IEC 62304, you will satisfy approximately 90% of what FDA expects for software documentation. The remaining 10% consists of FDA-specific formatting, submission structure, and cybersecurity-specific deliverables (SBOM format, threat modeling, vulnerability management plan) that are best addressed as supplements to your IEC 62304 deliverables rather than as a separate parallel process.

Relationship to EU MDR

Under the EU MDR, IEC 62304 is a harmonized standard. This means compliance with IEC 62304 creates a presumption of conformity with the following General Safety and Performance Requirements (Annex I of the MDR):

  • GSPR 17.1: Software devices shall be developed and manufactured in accordance with the state of the art taking into account the principles of development lifecycle, risk management, including information security, verification and validation.
  • GSPR 17.2: Software intended to be used in combination with mobile computing platforms shall be designed and manufactured taking into account the specific features of the mobile platform.
  • GSPR 17.3: Manufacturers shall set out minimum requirements concerning hardware, IT networks characteristics and IT security measures.
  • GSPR 17.4: Manufacturers shall address cybersecurity requirements (information security).

The presumption of conformity is powerful — it means that if you demonstrate compliance with IEC 62304, a Notified Body should accept that as evidence of conformity with GSPR 17 unless there is a specific reason to question it. However, IEC 62304 alone may not fully cover GSPR 17.2, 17.3, and 17.4 (mobile platform, IT network, and cybersecurity requirements), which may require additional evidence from standards like IEC 81001-5-1 (health software and health IT systems security) or MDCG guidance on cybersecurity.

For EU MDR compliance, your software documentation must also feed into the technical documentation required by Annex II of the MDR. The IEC 62304 deliverables (development plan, requirements, design, verification records, risk management file) map directly to the technical documentation sections covering design and manufacturing information.

Practical Tips for Startups

If you are a startup building your first medical device software product, IEC 62304 can feel overwhelming. Here is how to approach it without drowning in process.

Start with Classification

Your safety classification determines your documentation burden. Invest the time upfront to get the classification right, with a properly documented risk analysis. If you can legitimately achieve Class A or B, your path is dramatically simpler.

Build Process Into Your Tools, Not Around Them

Do not create a separate "regulatory process" that lives outside your development workflow. Instead:

  • Use your version control system (Git) as the backbone of configuration management
  • Use your issue tracker (JIRA, Linear, GitHub Issues) to manage requirements and problem reports — with custom fields or labels for traceability
  • Use your CI/CD pipeline to enforce coding standards, run static analysis, execute unit and integration tests, and record results
  • Use pull request reviews as your code review process — with documented approval
  • Generate documentation from your tools wherever possible (auto-generated SOUP lists from dependency files, auto-generated API docs from code comments, test reports from CI)

Template Your Documents

Create templates for every required document: software development plan, requirements specification, architectural design, test plans, release documentation. Use the same templates for every project. Customize the content, not the structure. This reduces the cognitive load on your engineers and ensures consistency that auditors value.

Do Not Over-Document Class A Software

If your software is legitimately Class A, you need planning, requirements, system testing, release documentation, risk management, configuration management, and problem resolution. You do not need formal architectural design, detailed design, or unit verification. Do not create documentation that is not required — it increases your maintenance burden and creates artifacts that auditors will review (and potentially find non-conformities in).

Engage a Regulatory Consultant Early — But Not Forever

A consultant with IEC 62304 experience can review your software development plan, validate your safety classification, and spot gaps before your first audit. This is worth the investment. But do not outsource your entire regulatory process to a consultant. Your team needs to own the process and understand it. The consultant's job is to get you started and review your work, not to do the work for you.

Plan for SOUP From Day One

Every library and framework you add to your project is a SOUP item that requires evaluation and ongoing monitoring. Make SOUP management part of your technology selection process, not an afterthought. Prefer well-maintained SOUP with good documentation, active security response, and published anomaly lists. A trendy library with no CVE tracking and sporadic maintenance is a compliance headache waiting to happen.

Common Audit Findings

Understanding what auditors look for — and what they most commonly find lacking — can help you prioritize your compliance efforts.

Top IEC 62304 Audit Findings

Finding Description How to Avoid
Incomplete or missing traceability Requirements cannot be traced to design, design cannot be traced to tests, or gaps exist in the traceability chain Maintain a traceability matrix as a living artifact; automate trace links where possible
Inadequate SOUP management SOUP items not identified, not versioned, or not risk-evaluated; transitive dependencies missing Automate SOUP inventory generation; establish a regular SOUP review cadence
Safety classification not justified Classification is stated but not supported by documented risk analysis; classification reductions based on external risk controls not substantiated Ensure classification reasoning is explicitly documented in the risk management file; get independent review
Software development plan not followed Plan says one thing, evidence shows another; plan not updated when process changed Treat the plan as a living document; audit your own compliance to the plan before the external audit
Insufficient unit/integration testing Tests do not cover critical paths, error-handling code, or safety-related functionality; no code coverage metrics Define coverage expectations; measure and report coverage; include negative and boundary tests
Anomaly evaluation at release incomplete Known bugs at release not risk-evaluated; no documented justification for deferring resolution Maintain a running anomaly list with risk evaluation for each item; review the list formally at release
Risk management not integrated with software lifecycle Risk analysis exists but is disconnected from software requirements and testing; software-specific hazards not analyzed Use bidirectional traceability between risk controls and software requirements/tests
Configuration management gaps Build environment not documented; released version cannot be reproduced; change history incomplete Containerize build environments; tag releases in version control; enforce change control process
Coding standards not defined or not enforced No coding standard referenced in the plan, or coding standard exists but static analysis / code reviews do not check compliance Define coding standards in the plan; enforce with automated tools (linters, static analyzers) in CI
Maintenance process not defined Post-release changes made without following the lifecycle process; regression testing not performed Define maintenance in the software development plan; apply the same rigor to changes as to new development

Unit Testing and Verification Requirements by Safety Class

While the earlier sections of this guide cover unit verification at a high level, the practical differences in what IEC 62304 expects at each safety class deserve detailed treatment. This is one of the areas where teams most frequently under-deliver — and where auditors focus their scrutiny.

Class A: No Unit Verification Required

For Class A software, IEC 62304 does not require unit verification at all. There is no requirement for unit tests, code reviews at the unit level, or static analysis. System testing is sufficient. This does not mean unit testing is discouraged — it simply means the standard does not mandate it, and auditors will not issue findings for its absence.

Class B: Unit Verification Required (Clause 5.5.2, 5.5.3, 5.5.5)

For Class B software, you must establish a unit verification process, define acceptance criteria, and execute verification against those criteria. The standard requires that verification demonstrates:

  • The software unit correctly implements the software architecture
  • The software unit meets its allocated requirements
  • The software unit does not contain unintended functionality
  • The software unit complies with the coding standards defined in the development plan

Acceptable verification methods include code review, static analysis, and unit testing. Most organizations use a combination: automated unit tests for functional correctness, static analysis for coding standard compliance and defect detection, and code review for design intent and undocumented behavior.

Coverage expectations for Class B: While IEC 62304 does not mandate a specific coverage metric, auditors expect to see evidence that tests cover the functional paths through the code. Statement coverage is generally considered a minimum for Class B. A reasonable target is 70-80% statement coverage, with documented justification for uncovered code (e.g., defensive code that is difficult to trigger, platform-specific branches).
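A coverage gate of this kind is straightforward to script in CI; a sketch, with the 75% threshold and the exclusion count as illustrative team policy rather than values taken from the standard:

```python
def coverage_gate(covered, total, threshold=0.75, justified_exclusions=0):
    """Statement-coverage gate for a CI pipeline.

    `justified_exclusions` is the number of statements documented as
    untestable (e.g. defensive code); the 75% default sits in the
    70-80% range discussed above. All numbers are illustrative.
    """
    effective_total = total - justified_exclusions
    if effective_total <= 0:
        raise ValueError("no measurable statements")
    ratio = covered / effective_total
    return ratio >= threshold, round(ratio, 3)

ok, ratio = coverage_gate(covered=760, total=1040, justified_exclusions=40)
```

The exclusion parameter matters for audit purposes: it forces the team to count, and therefore to document, exactly which statements are being left uncovered and why, instead of letting the justification live only in someone's head.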

Class C: Unit Verification with Additional Acceptance Criteria (Clause 5.5.2, 5.5.3, 5.5.4, 5.5.5)

For Class C software, the requirements include everything from Class B plus additional acceptance criteria specified in Clause 5.5.4. These additional criteria require that, where present in the design, verification addresses:

Acceptance Criterion What It Means in Practice
Proper event sequence Verify that state machines, event handlers, and interrupt-driven code execute events in the correct order
Data and control flow Verify that data flows through the unit correctly — correct transformations, no unintended side effects, control paths match design
Planned resource allocation Verify that the unit uses memory, CPU, file handles, and other resources as designed — no resource leaks
Fault handling Verify error definition, isolation, and recovery — what happens when inputs are invalid, resources are unavailable, or downstream components fail
Initialization of variables Verify that all variables are properly initialized before use — a common source of non-deterministic behavior
Self-diagnostics Verify built-in test and monitoring functions that detect the unit's own failures
Memory management and memory overflows Verify correct dynamic memory allocation/deallocation, buffer bounds checking, and detection of overflow conditions
Boundary conditions Verify behavior at the edges of input ranges, array bounds, counter limits, and timing constraints

Coverage expectations for Class C: Branch coverage (also called decision coverage) is considered the minimum expectation. The Johner Institute notes that 100% branch coverage "is considered to be a minimum level of coverage for most software products, but decision coverage alone is insufficient for high-integrity applications." In practice, target at least 80% branch coverage for Class C, with higher coverage for safety-critical algorithms. For the most critical software units — those implementing risk control measures — consider MC/DC (Modified Condition/Decision Coverage), which is the standard in avionics (DO-178C Level A) and provides the strongest evidence of thorough testing.
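Several of the Clause 5.5.4 criteria translate directly into unit tests. A sketch in plain Python (a real suite would use a framework such as pytest); the `set_flow_rate` unit and its limits are hypothetical, invented for illustration:

```python
LOW, HIGH = 0.1, 999.9   # mL/h, hypothetical pump limits

def set_flow_rate(rate_ml_h):
    """Hypothetical Class C unit under test: validate a commanded flow rate."""
    if not isinstance(rate_ml_h, (int, float)):
        raise TypeError("rate must be numeric")          # fault handling
    if not (LOW <= rate_ml_h <= HIGH):
        raise ValueError("rate outside safe range")      # boundary enforcement
    return float(rate_ml_h)

# Boundary conditions (Clause 5.5.4): exercise the edges and just beyond them
def test_boundaries():
    assert set_flow_rate(LOW) == LOW
    assert set_flow_rate(HIGH) == HIGH
    for bad in (LOW - 0.01, HIGH + 0.01):
        try:
            set_flow_rate(bad)
            assert False, "out-of-range rate accepted"
        except ValueError:
            pass   # fault correctly detected and isolated

# Fault handling: invalid input must be rejected, not silently propagated
def test_fault_handling():
    try:
        set_flow_rate("fast")
        assert False, "non-numeric rate accepted"
    except TypeError:
        pass

test_boundaries()
test_fault_handling()
```

Note that both negative tests assert on the rejection itself: a test that only exercises valid inputs would leave the fault-handling branches uncovered, which is exactly the gap branch-coverage measurement is meant to expose.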

Recommended Tools for Unit Verification

The standard does not prescribe tools, but the following categories are widely used in IEC 62304-compliant development:

Tool Category Examples Purpose
Unit test frameworks Google Test, CppUnit (C/C++), JUnit (Java), NUnit (.NET), pytest (Python), Jest (JavaScript) Automated execution of unit tests with structured assertions and reporting
Code coverage gcov/lcov (C/C++), Testwell CTC++ (C/C++/Java/C#), JaCoCo (Java), Coverage.py (Python), Istanbul/nyc (JavaScript) Measure statement, branch, and MC/DC coverage
Static analysis Parasoft C/C++test (TUV SUD certified for IEC 62304), CodeSonar (used by FDA for infusion pump analysis), Coverity, SonarQube, ESLint, RuboCop Detect coding standard violations, potential defects, security vulnerabilities, dead code
Complexity analysis Testwell CMT++, SonarQube, Lizard Measure cyclomatic complexity to identify units that need refactoring or additional testing

Practical tip: The FDA has used GrammaTech CodeSonar to analyze recalled medical device software — specifically investigating a series of infusion pump software failures. This demonstrates that regulators take static analysis seriously and expect manufacturers to use comparable tools. If a regulatory agency is using static analysis on your software post-recall, you should be using it proactively during development.

Common Unit Testing Mistakes in Medical Device Software

  1. Testing at the wrong granularity — Writing hundreds of tests against internal helper functions while leaving the actual software units (as defined in the architecture) untested. Unit tests should target the software units identified in your architectural design, not arbitrary code fragments.
  2. No negative testing — Testing only the happy path while ignoring error conditions, boundary values, and invalid inputs. Clause 5.5.4 explicitly requires fault handling and boundary condition testing for Class C.
  3. No regression automation — Running unit tests manually or only before release. Unit tests must be automated and run frequently (ideally on every commit) to catch regressions early. This feeds directly into the CI/CD integration discussed below.
  4. Coverage metrics without analysis — Achieving a coverage number without analyzing what the uncovered code does. A 90% coverage metric is meaningless if the uncovered 10% contains all the error-handling and safety-critical paths.
  5. Test independence failures — Unit tests that depend on execution order, shared state, or external services. Flaky tests erode confidence and create audit evidence that is unreliable.
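The negative-testing and boundary-condition points above can be illustrated with a small sketch. The `clamp_dose` unit, its limits, and its error behavior are assumptions for illustration, not taken from the standard:

```python
def clamp_dose(requested_ml: float, max_ml: float) -> float:
    """Clamp an infusion dose to the configured maximum.

    Invalid inputs raise ValueError rather than returning a silent
    default, reflecting the Clause 5.5.4 fault-handling expectation.
    """
    if max_ml <= 0:
        raise ValueError("max_ml must be positive")
    if requested_ml < 0:
        raise ValueError("requested_ml must be non-negative")
    return min(requested_ml, max_ml)

# Boundary conditions: exactly at, just below, and just above the limit.
assert clamp_dose(10.0, 10.0) == 10.0
assert clamp_dose(9.999, 10.0) == 9.999
assert clamp_dose(10.001, 10.0) == 10.0

# Negative testing: invalid input must raise, not fail silently.
try:
    clamp_dose(-1.0, 10.0)
    raised = False
except ValueError:
    raised = True
assert raised
```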

CI/CD Pipeline Integration for IEC 62304 Compliance

Modern medical device software teams increasingly adopt Continuous Integration and Continuous Delivery (CI/CD) practices. Far from being incompatible with IEC 62304, a well-designed CI/CD pipeline can automate much of the evidence generation that the standard requires — reducing manual burden while improving compliance consistency.

Why CI/CD Matters for Regulated Development

The traditional approach to IEC 62304 compliance — creating documentation artifacts manually at the end of a development phase — is fragile. Documentation drifts from reality, traceability gaps appear, and teams spend weeks preparing for audits. CI/CD addresses this by producing compliance evidence as a natural byproduct of the engineering pipeline.

Parasoft's analysis of medical device development found that integrating static analysis, unit testing, and structural code coverage into the CI pipeline "greatly reduces labor and delivery schedules and increases test efficiency" while maintaining IEC 62304 compliance. The key insight is that the pipeline becomes your evidence pipeline.

Mapping IEC 62304 Activities to CI/CD Pipeline Stages

Pipeline Stage IEC 62304 Activity Automated Evidence Produced
Pre-commit hooks Coding standards enforcement (5.1) Linter/formatter results
Pull request validation Unit verification (5.5), code review (5.5) Unit test results, coverage reports, static analysis findings, reviewer approval records
Build Configuration management (8) Build metadata (commit hash, tool versions, environment, timestamps)
Integration test stage Integration testing (5.6) Integration test results, interface verification reports
System test stage System testing (5.7) System test results traced to requirements
Artifact archival Release documentation (5.8), CM records (8) Versioned test reports, SOUP inventory, build artifacts with checksums
Deployment gate Release review (5.8) Release checklist verification, anomaly evaluation, residual risk sign-off

Implementing a Regulated CI/CD Pipeline

Step 1: Enforce traceability at the PR level.

Define typed identifiers for requirements (SRS-###), risk controls (RC-###), design items (SDS-###), and test cases (TST-###). Require every pull request to reference the requirements and risk controls it implements. Configure CI to block merges if trace links are missing. This turns traceability from a retrospective documentation exercise into an enforceable engineering contract.
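A minimal CI check for the trace-link rule might look like the following. The identifier patterns follow the SRS-###/RC-### scheme above; the script shape itself is an assumption, not a prescribed tool:

```python
import re

# Typed identifier patterns from the traceability scheme above.
TRACE_PATTERNS = {
    "requirement (SRS-###)": re.compile(r"\bSRS-\d{3}\b"),
    "risk control (RC-###)": re.compile(r"\bRC-\d{3}\b"),
}

def missing_trace_links(pr_description):
    """Return the trace identifier types absent from a pull request
    description; CI blocks the merge when this list is non-empty."""
    return [name for name, pattern in TRACE_PATTERNS.items()
            if not pattern.search(pr_description)]

# A PR citing both a requirement and a risk control passes the gate.
assert missing_trace_links("Implements SRS-042; mitigates RC-007") == []
# A PR with no trace links would be blocked.
assert missing_trace_links("fix typo") != []
```

In a real pipeline the function would run against the PR body fetched from your hosting platform's API, with a non-zero exit code failing the check.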

Step 2: Automate verification at every commit.

Configure your CI pipeline to run on every commit or pull request:

  • Static analysis against your defined coding standards (MISRA C, CERT C, custom rules)
  • Unit tests with coverage measurement
  • Integration tests for affected interfaces
  • SOUP/dependency checks (see SOUP gate below)

Archive all results as versioned, timestamped artifacts tied to the specific commit hash.

Step 3: Implement a SOUP gate.

Configure the build to fail if a new dependency is introduced without:

  • A recorded purpose and owner in the SOUP registry
  • A pinned version (no floating version ranges)
  • A risk evaluation note
  • A review of known anomalies (CVE database check)

This prevents uncontrolled SOUP proliferation — one of the most common audit findings. Use software composition analysis (SCA) tools like Snyk, Dependabot, Black Duck, or OWASP Dependency-Check to automate vulnerability scanning of all dependencies, including transitive dependencies.
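A sketch of such a gate, assuming a simple dict-based SOUP registry (the field names and registry shape are illustrative):

```python
def soup_gate_violations(declared, registry):
    """Check declared dependencies (name -> version specifier) against
    the SOUP registry; return violation messages, empty when the gate
    passes."""
    required_fields = ("purpose", "owner", "risk_evaluation", "anomaly_review")
    problems = []
    for name, spec in declared.items():
        record = registry.get(name)
        if record is None:
            problems.append(f"{name}: not in SOUP registry")
            continue
        # Reject floating version ranges; only exact pins are allowed.
        if spec[:1] in ("^", "~", ">", "<", "*"):
            problems.append(f"{name}: floating version range {spec!r}")
        problems.extend(f"{name}: missing {field}"
                        for field in required_fields if not record.get(field))
    return problems

registry = {"requests": {"purpose": "HTTP client", "owner": "platform team",
                         "risk_evaluation": "RE-031", "anomaly_review": "2026-01"}}
assert soup_gate_violations({"requests": "2.31.0"}, registry) == []
assert soup_gate_violations({"leftpad": "1.0.0"}, registry) == \
    ["leftpad: not in SOUP registry"]
```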

Step 4: Generate audit-ready artifacts automatically.

Configure CI to export after every build:

  • Test summary reports (JUnit XML, HTML reports) with pass/fail status
  • Code coverage reports with line-by-line and branch-level detail
  • Static analysis reports with findings categorized by severity
  • Build metadata (commit hash, branch, build environment, tool versions, timestamps)
  • SOUP/SBOM inventory (auto-generated from dependency manifests)

Every build becomes a reproducible "mini release package" — when an auditor asks "show me the evidence for version 2.3.1," you can retrieve the complete artifact set in seconds.
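Build metadata collection can be sketched as follows; the exact field set is an assumption to be adapted to your pipeline:

```python
import json
import platform
import subprocess
import sys
from datetime import datetime, timezone

def build_metadata():
    """Collect reproducibility metadata for the current build as a dict
    ready to be archived as a JSON artifact."""
    try:
        commit = subprocess.run(["git", "rev-parse", "HEAD"],
                                capture_output=True, text=True,
                                check=False).stdout.strip()
    except OSError:  # git not available in this environment
        commit = ""
    return {
        "commit": commit or "UNKNOWN",
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

artifact = json.dumps(build_metadata(), indent=2)
```

Tool versions for compilers, test frameworks, and analyzers would be appended the same way, so the archived JSON fully identifies the build environment.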

Step 5: Automate change impact analysis.

Map touched source files to affected modules, then to required test suites. If a change touches a Class C module, enforce mandatory independent code review (not just peer review). If a change touches a SOUP dependency, trigger a SOUP evaluation workflow. Block merging if the required reviewers have not approved.
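A minimal sketch of the file-to-module mapping step; the paths and safety classes below are hypothetical:

```python
# Hypothetical mapping from path prefix to (module, safety class).
MODULE_MAP = {
    "src/dosing/": ("dosing-engine", "C"),
    "src/ui/": ("display", "B"),
    "src/logging/": ("event-log", "A"),
}

def review_requirements(touched_files):
    """Derive review obligations for a change from the files it touches."""
    modules, classes = set(), set()
    for path in touched_files:
        for prefix, (module, safety_class) in MODULE_MAP.items():
            if path.startswith(prefix):
                modules.add(module)
                classes.add(safety_class)
    return {
        "modules": sorted(modules),
        # Class C changes require independent (not just peer) review.
        "independent_review_required": "C" in classes,
    }

req = review_requirements(["src/dosing/pump.c", "src/ui/screen.c"])
assert req["independent_review_required"] is True
```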

Step 6: Implement release dossier generation.

For formal releases, automate the generation of:

  • Complete traceability report (requirements to design to implementation to tests to results)
  • SBOM/SOUP inventory with version pins and risk evaluations
  • Test execution summary across all verification levels
  • Known anomaly list with risk assessments
  • Release metadata (version, date, build hash, environment)

Store release dossiers with immutable retention — signed artifacts in a controlled artifact repository.

Change Control Through Git Workflow

Your Git workflow can serve as your change control system when structured correctly:

  • Every pull request is a change request — it includes what changed, why, which requirements and risk controls are affected, and what regression testing is needed
  • Branch protection rules enforce approval gates — no direct commits to the release branch without review and CI passing
  • Merge history provides an auditable change log — every change is traceable to a PR, a reviewer, a set of test results, and a rationale
  • Tags and releases provide configuration baselines — each release tag corresponds to a tested, approved configuration

Practical tip: Many teams worry that CI/CD is incompatible with the "formal" change control that IEC 62304 requires. It is not. The standard requires that changes are evaluated, approved, implemented, and verified. A well-configured Git workflow with branch protection, required reviewers, and automated CI checks satisfies these requirements — and produces better evidence than manual change request forms because the evidence is generated automatically and cannot be fabricated retroactively.

Cloud-Hosted SaMD Considerations

The rise of Software as a Medical Device (SaMD) deployed on cloud infrastructure — AWS, Azure, Google Cloud — introduces unique challenges for IEC 62304 compliance. Cloud-hosted SaMD is fundamentally different from embedded firmware or on-premises software because the manufacturer does not control the full execution environment. Understanding these differences is essential for maintaining compliance while leveraging cloud benefits.

Cloud Infrastructure as SOUP

A critical realization for cloud-hosted SaMD: the cloud platform itself is SOUP. Every cloud service your software depends on — compute instances, databases, storage, networking, managed AI/ML services, identity providers — was not developed under your IEC 62304 lifecycle process. This means:

  • Each cloud service must be identified in your SOUP inventory with its version or service tier
  • Functional and performance requirements must be defined for each service (Class B/C)
  • Known anomalies (service outage history, known limitations) must be evaluated
  • Risk evaluation must assess what happens to patient safety if the service fails, degrades, or changes behavior

For example, if your SaMD runs a diagnostic algorithm on AWS Lambda and stores patient data in Amazon RDS, your SOUP list must include AWS Lambda (with runtime version), Amazon RDS (with engine version), the AWS SDK version, and every other cloud service in the dependency chain.

The Shared Responsibility Model

Cloud providers operate under a shared responsibility model that directly impacts your IEC 62304 obligations:

Responsibility IaaS (e.g., EC2) PaaS (e.g., App Service) SaaS (e.g., managed API)
Application code Manufacturer Manufacturer Provider
Runtime and middleware Manufacturer Provider Provider
Operating system Manufacturer Provider Provider
Infrastructure Provider Provider Provider
Physical security Provider Provider Provider

As you move from IaaS to SaaS, you transfer more operational responsibility to the provider — but you never transfer regulatory responsibility. You remain accountable for demonstrating that every layer affecting your software's safety is adequately controlled, regardless of who operates it.

Change Control Challenges in Cloud Environments

Cloud environments introduce a fundamental change control challenge: the provider can change the infrastructure beneath your software without your prior approval. This is qualitatively different from traditional SOUP (like an open-source library whose version you pin).

Key questions your change control process must address:

  • Notification: Does the cloud provider notify you of infrastructure changes (hardware, OS, library updates) before implementation? Can you control timing?
  • Impact assessment: How will you assess the impact of provider-initiated changes on your validated software? What automated regression testing can detect behavioral changes?
  • Rollback capability: If a provider change breaks your software, can you roll back? What is your contractual right to reject changes?
  • Version consistency: Cloud providers may roll out changes across regions at different times. How will you handle devices or users operating on different infrastructure versions simultaneously?

Practical tip: Include specific change notification and approval requirements in your cloud provider Service Level Agreement (SLA). Require technical documentation (release notes) for all changes, advance notice for breaking changes, and the ability to defer non-security updates. Major cloud providers (AWS, Azure, GCP) offer enterprise support tiers that include some of these provisions, but you must negotiate them explicitly.

Validation and Continuous Monitoring

Unlike traditional software where you validate once and re-validate on change, cloud-hosted SaMD requires continuous validation because the execution environment can change independently of your software. Your validation approach should include:

  1. Initial validation: Deploy your SaMD, execute your full verification suite against the cloud environment, and document results as your baseline
  2. Automated regression testing: Run automated test suites at regular intervals (daily or weekly) against the production environment to detect behavioral changes caused by infrastructure updates
  3. Application performance monitoring: Implement continuous monitoring for response times, error rates, resource utilization, and data integrity — anomalies may indicate an infrastructure change affecting your software
  4. Service availability monitoring: Track cloud service uptime against SLA commitments; document and investigate any outages that could affect patient safety
  5. Annual service review: Formally review the cloud provider's performance against SLA terms, security certifications, and any service changes that occurred during the year
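The drift detection in item 2 can be sketched as a baseline comparison; the probe names and tolerance are assumptions:

```python
def detect_drift(baseline, current, tolerance=1e-3):
    """Compare current probe results against the validated baseline and
    return the names of probes whose output drifted; numeric values are
    compared within tolerance, everything else by equality."""
    drifted = []
    for name, expected in baseline.items():
        observed = current.get(name)
        if isinstance(expected, float) and isinstance(observed, float):
            if abs(expected - observed) > tolerance:
                drifted.append(name)
        elif observed != expected:
            drifted.append(name)
    return drifted

baseline = {"classification_score": 0.9731, "api_status": "ok"}
assert detect_drift(baseline, {"classification_score": 0.9731,
                               "api_status": "ok"}) == []
assert detect_drift(baseline, {"classification_score": 0.9402,
                               "api_status": "ok"}) == ["classification_score"]
```

Any non-empty result would open an investigation: an unexplained drift in a fixed-input probe is prima facie evidence of a provider-initiated infrastructure change.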

Data Integrity, Availability, and Sovereignty

Cloud-hosted SaMD must address data concerns that do not arise with on-premises software:

  • Data in transit: All data must be encrypted in transit (TLS 1.2+). Verify this for every service-to-service communication, not just the user-facing endpoint.
  • Data at rest: All patient data and safety-critical data must be encrypted at rest. Verify encryption extends to backups and temporary storage.
  • Availability: What happens to patient safety if the cloud service is unavailable? Your risk analysis must address this. If the SaMD provides critical clinical information, you may need offline fallback capabilities or redundant deployment across availability zones/regions.
  • Data sovereignty: Some jurisdictions require patient data to remain within national borders. Verify that your cloud deployment complies with data residency requirements for every market where your SaMD is used.
  • Backup and recovery: Define backup frequency, retention period, and recovery time objectives. Test restoration procedures periodically and document results.

Regulatory Landscape for Cloud-Hosted SaMD

Regulatory guidance for cloud-hosted medical devices is evolving:

  • FDA: Has not yet published specific guidance on cloud computing for medical devices but has addressed SaMD, mobile medical applications, and AI/ML-based SaMD. The FDA's OTS guidance principles apply to cloud services. Expect cloud-specific guidance to emerge as more SaMD products adopt cloud architectures.
  • EU MDR: GSPR 17.3 requires manufacturers to set out minimum requirements concerning IT networks, which includes cloud infrastructure. GSPR 17.4 covers cybersecurity, which has significant cloud implications.
  • AAMI: Approved a consensus report project on compliant use of cloud computing for quality systems and medical devices, signaling that formal industry guidance is coming.
  • 21 CFR 820.50 / QMSR: Purchasing controls apply to cloud service procurement. You must treat your cloud provider as a supplier and apply appropriate supplier qualification and monitoring.

Practical warning: Cloud back-end services that were initially considered supporting infrastructure may evolve into components that directly affect device safety — for example, a cloud-based algorithm service that was initially used for non-critical analytics might later be used for clinical decision support. When this happens, the service may need to be reclassified and brought under more rigorous IEC 62304 lifecycle processes. Design your architecture with clear boundaries so that safety-critical and non-safety-critical cloud components are segregated and independently classifiable.

Cybersecurity Integration: IEC 81001-5-1

Cybersecurity is no longer a peripheral concern for medical device software — it is a core safety requirement. IEC 81001-5-1:2021, titled "Health software and health IT systems safety, effectiveness and security — Part 5-1: Security — Activities in the product life cycle," is the standard that defines how cybersecurity must be integrated into the software lifecycle. It is designed to work as a direct supplement to IEC 62304, following the same lifecycle structure but adding security-specific activities at each phase.

Why IEC 81001-5-1 Matters Now

IEC 81001-5-1 fills a critical gap. While IEC 62304 addresses software safety and ISO 14971 addresses risk management, neither standard adequately addresses the cybersecurity threat landscape that modern connected medical devices face. Before IEC 81001-5-1, manufacturers had to cobble together cybersecurity processes from guidance documents (MDCG 2019-16, FDA premarket cybersecurity guidance) and standards from other industries (IEC 62443-4-1, originally developed for industrial control systems). IEC 81001-5-1 provides a single, healthcare-specific standard.

Regulatory status:

  • IEC 81001-5-1 is an FDA-recognized consensus standard (recognized November 2023 as ANSI/AAMI SW96:2023)
  • It is expected to be harmonized under the EU MDR, with a current target implementation date of May 2028
  • Even before formal harmonization, it represents the "state of the art" for cybersecurity in medical device software, which means Notified Bodies and regulators may reference it during audits
  • The standard applies to all "health software" — a broader category than medical devices, including wellness apps, care planning tools, and health IT systems

Relationship Between IEC 62304 and IEC 81001-5-1

IEC 81001-5-1 assumes that you already have a software lifecycle process conforming to IEC 62304. It then specifies additional security activities at each lifecycle phase. Think of it as a security overlay on your existing lifecycle:

IEC 62304 Lifecycle Phase IEC 81001-5-1 Additions
Software development planning Security requirements planning, threat modeling methodology selection, security testing strategy, secure development environment requirements
Software requirements analysis Security requirements derived from threat modeling (authentication, authorization, encryption, audit logging, input validation, session management)
Software architectural design Security architecture review, attack surface analysis, defense-in-depth design, secure communication design, security boundary definition
Software detailed design Secure coding guidelines, cryptographic design specifications, secure data storage design, secure API design
Software unit implementation Secure coding practices, avoidance of known-vulnerable patterns, security-focused code review
Software unit verification Security-focused static analysis (SAST), checking for CWE/CERT/OWASP violations
Software integration testing Security integration testing, interface security verification, authentication/authorization testing
Software system testing Penetration testing, vulnerability scanning, fuzz testing, security regression testing
Software release Security release notes, vulnerability disclosure information, security-relevant configuration guidance
Software maintenance Continuous vulnerability monitoring, coordinated vulnerability disclosure, security patch management, incident response

Key Differences from IEC 62304

Several aspects of IEC 81001-5-1 diverge from IEC 62304's approach:

  1. No safety-class-based scaling for security activities: Unlike IEC 62304, where Class A software has significantly reduced requirements, IEC 81001-5-1 applies cybersecurity requirements regardless of safety classification. A Class A software system that handles patient data still requires full cybersecurity treatment. This reflects the reality that cybersecurity risks (data breaches, ransomware, unauthorized access) exist independently of functional safety risks.

  2. Independence of security testing: IEC 81001-5-1 requires objectivity in security testing, mandating that it be performed by individuals or teams independent of the developers who wrote the code. This can be achieved through a separate internal security team or an external third-party penetration testing provider.

  3. Shared responsibility with healthcare delivery organizations: The standard explicitly recognizes that cybersecurity is a shared responsibility between manufacturers and healthcare delivery organizations (HDOs). Manufacturers must define the "intended product security context" — the assumptions about the deployment environment — and HDOs must implement their part of the security controls.

  4. Post-market security obligations are more extensive: Beyond IEC 62304's maintenance process, IEC 81001-5-1 requires continuous vulnerability monitoring, coordinated vulnerability disclosure processes, and bilateral communication with healthcare organizations about security issues.

Threat Modeling

IEC 81001-5-1 requires threat modeling as a foundational security activity. While the standard does not prescribe a specific methodology, commonly used approaches include:

  • STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) — developed by Microsoft, widely used for software threat analysis
  • Attack trees — hierarchical decomposition of attack goals into sub-goals and attack steps
  • LINDDUN — specifically designed for privacy threat modeling, useful for SaMD handling personal health data

The threat model must document:

  • The intended security context (deployment environment assumptions)
  • System boundaries and trust boundaries
  • Data flows and data stores containing sensitive information
  • Identified threats and their potential impact
  • Security controls that mitigate each threat (these become security requirements)

The threat model feeds directly into the risk management process under ISO 14971. Security risks are evaluated alongside safety risks, and security controls are managed as risk control measures with the same traceability and verification requirements.
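As a sketch, a STRIDE worksheet can be generated mechanically from the data flows that cross a trust boundary (the flows shown are hypothetical); each row is then dispositioned in the threat-modeling workshop into a security requirement or a documented non-threat:

```python
STRIDE = ("Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege")

def stride_worksheet(boundary_crossing_flows):
    """Expand each data flow into one row per STRIDE category; the
    mitigation field is filled in during threat-model review and becomes
    a traceable security requirement."""
    return [{"flow": flow, "category": category, "mitigation": None}
            for flow in boundary_crossing_flows for category in STRIDE]

rows = stride_worksheet(["mobile app -> cloud API", "cloud API -> database"])
assert len(rows) == 2 * len(STRIDE)
```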

Practical Implementation: Integrating Security into Your IEC 62304 Process

If you already have an IEC 62304-compliant lifecycle, integrating IEC 81001-5-1 requires:

  1. Extend your software development plan to include security activities, threat modeling approach, security testing strategy, and secure coding standards
  2. Add security requirements to your requirements specification — derived from threat modeling, not just from functional analysis
  3. Include security review in your architectural design review — attack surface analysis, authentication/authorization design, encryption design
  4. Add security-focused static analysis rules to your CI pipeline — CWE Top 25, OWASP Top 10, CERT secure coding rules
  5. Include penetration testing and vulnerability scanning in your system testing — performed by independent testers
  6. Extend your SOUP management to include cybersecurity vulnerability monitoring for all SOUP components — subscribe to CVE feeds, use automated vulnerability scanning
  7. Define a coordinated vulnerability disclosure process — how will you receive, evaluate, and communicate about security vulnerabilities discovered in your product after release?
  8. Extend your maintenance process to include security patch management — how quickly will you release security updates? How will you communicate urgency to users?

Practical warning: Do not treat cybersecurity as an add-on that you address after development is complete. Bolt-on security is expensive, incomplete, and creates audit findings. Integrate security activities from the planning phase, just as you integrate safety activities. The cost of adding security controls during design is a fraction of the cost of retrofitting them after system testing reveals vulnerabilities.

Specific SOUP Examples and Management Workflows

The earlier section on SOUP management covers the regulatory framework. This section provides concrete examples of SOUP components commonly found in medical device software, along with practical management workflows.

Common SOUP Categories and Examples

SOUP Category Specific Examples Typical Risk Considerations
Operating systems Linux kernel, Windows Embedded, Android AOSP, FreeRTOS, QNX, VxWorks Broad attack surface; frequent security patches required; kernel vulnerabilities can affect all software running on the device
Programming language runtimes .NET Runtime, Java JRE/JDK, Python interpreter, Node.js runtime Runtime bugs can affect all application code; garbage collection behavior can cause timing issues in real-time applications
Web frameworks React, Angular, Django, Rails, ASP.NET Core, Spring Boot XSS and injection vulnerabilities; frequent updates; large transitive dependency trees
Database engines PostgreSQL, MySQL, SQLite, MongoDB, Amazon RDS Data integrity risks; SQL injection if used improperly; backup/recovery behavior affects data availability
Cryptographic libraries OpenSSL, BoringSSL, libsodium, Bouncy Castle, Windows CNG Vulnerabilities directly affect data confidentiality and integrity; incorrect use can nullify security; must track CVEs closely (e.g., Heartbleed in OpenSSL was CVE-2014-0160)
Communication stacks TCP/IP stacks, Bluetooth stacks, MQTT libraries, HL7/FHIR libraries Protocol-level vulnerabilities; timing-dependent behavior; interoperability issues between versions
Machine learning frameworks TensorFlow, PyTorch, ONNX Runtime, scikit-learn Model inference correctness depends on framework version; numerical precision differences between versions can change clinical outputs
Cloud SDKs and APIs AWS SDK, Azure SDK, Google Cloud client libraries, REST API clients Cloud service behavior changes can propagate through the SDK; SDK version updates may change retry logic, timeout defaults, or error handling
Image processing libraries OpenCV, ITK, DCMTK (DICOM toolkit), Pillow Pixel-level accuracy is critical for diagnostic imaging; memory safety vulnerabilities in C/C++ libraries; format parsing bugs
Build and test tools GCC, Clang, CMake, Gradle, npm, pip, Docker Compiler bugs can introduce silent code generation errors; package managers may resolve transitive dependencies differently across versions; Docker base images change

SOUP Management Workflow: A Concrete Example

Consider a SaMD product — a cloud-based diagnostic imaging application — that uses the following SOUP stack:

Scenario: React (frontend), Django (backend), PostgreSQL (database), TensorFlow Serving (ML inference), OpenSSL (TLS), Docker (containerization), deployed on AWS.

Step 1: Create the SOUP inventory.

For each component, document:

Field React Example TensorFlow Serving Example
Name React TensorFlow Serving
Manufacturer/Community Meta (Facebook) Google
Version 18.2.0 2.14.0
Purpose User interface rendering for diagnostic image viewer Inference serving for diagnostic classification model
Safety class Class B (displays diagnostic images) Class C (inference output influences clinical decision)

Step 2: Define functional and performance requirements (Class B/C).

For TensorFlow Serving (Class C): "The inference server shall return classification results within 2 seconds for images up to 50 MB. The inference server shall produce identical results for identical inputs across restarts. The inference server shall return an error code (not a silent failure) if inference fails."

For React (Class B): "The UI framework shall correctly render DICOM image overlays without pixel-level distortion. The UI framework shall not introduce cross-site scripting vulnerabilities in the rendered output."
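The two machine-testable requirements in Step 2 for the inference server (latency bound, run-to-run determinism) can be checked with a small harness; `fake_infer` below is a stand-in for a real TensorFlow Serving client, not its actual API:

```python
import time

def verify_inference_requirements(infer, image, max_seconds=2.0):
    """Check two SOUP requirements against an inference callable:
    the result is returned within the latency bound, and identical
    inputs produce identical results."""
    start = time.monotonic()
    first = infer(image)
    elapsed = time.monotonic() - start
    second = infer(image)
    return {
        "within_latency": elapsed <= max_seconds,
        "deterministic": first == second,
    }

def fake_infer(image):
    # Stand-in for the real serving client (hypothetical).
    return {"label": "benign", "score": 0.97}

result = verify_inference_requirements(fake_infer, b"\x00" * 64)
assert result == {"within_latency": True, "deterministic": True}
```

The error-code requirement ("an error code, not a silent failure") would be verified the same way with deliberately malformed inputs.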

Step 3: Evaluate known anomalies.

  • Review TensorFlow's GitHub Issues and CVE database for known bugs affecting inference correctness, memory leaks under sustained load, or security vulnerabilities in the serving API
  • Review React's changelog and security advisories for XSS vulnerabilities or rendering bugs
  • Review OpenSSL's security advisories — OpenSSL has had significant vulnerabilities (Heartbleed, CVE-2014-0160; CCS Injection, CVE-2014-0224) that required immediate patching
  • Document each finding with a risk assessment: does this anomaly affect our specific use case?

Step 4: Pin versions and implement lock files.

  • Use package-lock.json (npm) or yarn.lock for JavaScript dependencies
  • Use requirements.txt with exact versions or Pipfile.lock (Python) for backend dependencies
  • Use specific Docker base image digests (not latest tags)
  • Never use floating version ranges (e.g., ^18.0.0) in production — always pin exact versions

Step 5: Automate ongoing monitoring.

  • Configure automated vulnerability scanning (Snyk, Dependabot, OWASP Dependency-Check) to run in CI and alert on new CVEs for any SOUP component
  • Subscribe to security mailing lists for critical SOUP components (e.g., OpenSSL security announcements)
  • Schedule quarterly SOUP review meetings to evaluate new versions, new vulnerabilities, and end-of-life announcements
  • Document all SOUP updates as change-controlled activities with impact assessment and regression testing

Managing Transitive Dependencies

Modern software projects — especially those using npm, pip, or Maven — pull in large numbers of transitive dependencies. A single direct dependency can bring in dozens or hundreds of indirect dependencies, each of which is also SOUP.

The problem: Auditors expect every SOUP component to be identified and risk-evaluated. If your React application has 400 npm packages in node_modules, you need a strategy.

The practical solution:

  1. Use SCA tools to auto-generate the full dependency tree — tools like Snyk, Black Duck, WhiteSource (Mend), or OWASP Dependency-Check can enumerate all direct and transitive dependencies with version numbers
  2. Risk-stratify your SOUP list — not every transitive dependency carries the same risk. A utility library that formats dates is different from a cryptographic library. Focus detailed risk evaluation on SOUP components that are in the data path, handle security-sensitive operations, or could affect safety-critical functionality
  3. Monitor the full tree for vulnerabilities — even low-risk transitive dependencies can have critical CVEs. Automated scanning catches these.
  4. Document your SOUP management strategy — explain in your software development plan how you identify, evaluate, and monitor transitive dependencies. Auditors want to see that you have a systematic approach, not that you have manually evaluated 400 packages.
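As a local sketch of what an SCA tool enumerates, Python's own packaging metadata can list every installed distribution, direct and transitive alike:

```python
from importlib import metadata

def dependency_inventory():
    """Enumerate every installed distribution with its version. The
    result includes transitive dependencies, since anything installed
    in the environment appears here; an SCA tool adds vulnerability
    data and license information on top of this listing."""
    inventory = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name:
            inventory[name] = dist.version
    return inventory
```

Dumping this inventory to a versioned artifact on every build gives you a minimal SBOM seed that the risk-stratification step above can then annotate.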

Real-World Implementation Examples and Lessons Learned

Understanding how IEC 62304 requirements play out in practice — including what happens when they are not followed — provides valuable context for implementation decisions.

Historical Software Failures That Shaped the Standard

Therac-25 (1985-1987): The Therac-25 radiation therapy machine delivered lethal radiation doses to patients due to software race conditions and the removal of hardware safety interlocks that had been present in earlier models. The software was the sole safety mechanism, and it failed. This case is foundational to medical device software regulation. Key lessons that are directly reflected in IEC 62304:

  • The principle that software safety classification must account for the presence or absence of independent hardware risk controls (Clause 4.3)
  • The requirement for architectural segregation between software items of different safety classes (Clause 5.3)
  • The requirement that risk control measures implemented in software are independently verified (Clause 7)

Infusion pump software failures (2005-2015): The FDA analyzed a series of infusion pump recalls and found that software defects were a leading cause. The FDA used GrammaTech CodeSonar (a static analysis tool) to analyze recalled pump software and identified critical defects including buffer overflows, null pointer dereferences, and uninitialized variables — defects that Clause 5.5.4's additional acceptance criteria (memory management, initialization of variables, boundary conditions) are specifically designed to catch. This led to the FDA's Infusion Pump Improvement Initiative and reinforced the importance of static analysis and unit verification for Class C software.

Abbott FreeStyle Libre 3 continuous glucose monitor (2023): Abbott recalled certain FreeStyle Libre 3 sensors linked to reported injuries where the device provided inaccurate glucose readings. Software-related issues in glucose monitoring can lead to incorrect insulin dosing — a direct patient safety hazard. This illustrates why diagnostic SaMD that influences treatment decisions is typically classified as Class C, and why system testing must include accuracy and reliability testing under real-world conditions.

Implementation Case Studies

Case Study 1: IEC 62304 Class C firmware for a European medical device manufacturer.

A European manufacturer engaged a firmware development team to build IEC 62304 Class C-compliant software for a medical device. The architecture comprised a Hardware Abstraction Layer (HAL), middleware, an application layer, and a communication protocol stack, each documented as a separate software item. The firmware was developed against Class C requirements (full detailed design, unit verification with additional acceptance criteria, integration testing, system testing). The project was completed within a strict certification deadline, passed Notified Body audit, and entered mass production. Key success factor: investing in architectural documentation and unit verification from the start rather than retrofitting documentation before the audit.

Case Study 2: Closed-loop insulin therapy system (Class C SaMD).

A medical device company developed a next-generation closed-loop insulin therapy system — one of the most safety-critical SaMD applications — requiring compliance with IEC 62304, ISO 13485, ISO 14971, and IEC 62366 (usability). Development included automated testing and continuous integration from the first sprint. The resulting device delivered therapy, provided an intuitive UI translated into 28 languages, and ran for two weeks on a single AA battery. Key success factor: integrating automated testing and CI into the development process from the beginning, producing continuous compliance evidence rather than retrospective documentation.

Case Study 3: Autonomous wheelchair controller (proactive Class C compliance).

A technology company developing a headset-controlled wheelchair proactively adopted IEC 62304 Class C processes even though early versions of the controller did not require functional safety certification. The rationale: future autonomous wheelchair capabilities would clearly require safety certification, and retrofitting compliance is far more expensive than building it in. They combined IEC 62304 processes with MISRA C coding standards, using static analysis and code coverage tools to enforce compliance. Key success factor: adopting regulated development practices early, before they were legally required, to avoid costly retrofitting.

Lessons from Audits

Common patterns from organizations that have successfully passed Notified Body audits for IEC 62304 compliance:

  1. Process consistency matters more than process perfection. Auditors look for evidence that you follow your own plan consistently. A simple but consistently followed process produces fewer findings than an elaborate process that is followed sporadically.

  2. Traceability is the first thing auditors check. The ability to trace a requirement from the risk analysis through the software requirements, architecture, implementation, and test results is the single most audited aspect of IEC 62304. If your traceability matrix has gaps, expect findings.

  3. SOUP management is the second thing auditors check. Incomplete SOUP lists, missing version numbers, and absent risk evaluations are among the most common findings across all audits. The organizations that pass cleanly are the ones with automated SOUP inventory generation and systematic anomaly evaluation.

  4. Show your work on classification. Stating "our software is Class B" without showing the risk analysis, the hazardous situations considered, and the reasoning for the classification will result in a finding. The classification rationale must be documented in the risk management file and must reference specific hazardous situations and their severity after risk controls.

  5. Maintenance evidence is often missing. Teams focus on development documentation and forget that post-release changes must follow the same lifecycle processes. Auditors will ask to see change impact analyses, regression test results, and updated risk analyses for post-release modifications.
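The traceability point above lends itself to automation. Below is a minimal sketch, assuming hypothetical `SRS-XXX` requirement IDs and `TC-XXX` test-case IDs, of a check that every software requirement is linked to at least one verification test. Run in CI, this turns traceability gaps into build failures instead of audit findings.

```python
def find_trace_gaps(requirements, trace_links):
    """Return requirement IDs that have no linked verification evidence.

    requirements: iterable of requirement IDs from the SRS
    trace_links:  mapping of test-case ID -> list of requirement IDs it verifies
    """
    covered = {req for reqs in trace_links.values() for req in reqs}
    return sorted(set(requirements) - covered)

# Hypothetical IDs for illustration only.
srs = ["SRS-001", "SRS-002", "SRS-003"]
links = {"TC-010": ["SRS-001"], "TC-011": ["SRS-001", "SRS-003"]}
print(find_trace_gaps(srs, links))  # prints ['SRS-002'] — the gap an auditor would flag
```

In practice the inputs would be exported from your requirements management and test management tools rather than hard-coded; the same set-difference check also works in the reverse direction to catch orphan tests that verify no documented requirement.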

Putting It All Together: A Practical Implementation Roadmap

For organizations implementing IEC 62304 for the first time, here is a phased approach.

Phase 1: Foundation (Weeks 1-4)

  • Classify your software — perform the risk analysis per ISO 14971 and determine the safety classification
  • Write the software development plan — define your lifecycle model, deliverables, tools, standards, and processes
  • Establish configuration management — set up version control, branching strategy, change control process, and build pipeline
  • Create document templates — requirements specification, design documents, test plans, release documentation, SOUP list

Phase 2: Development Process (Weeks 4-12)

  • Document software requirements — including safety requirements derived from risk analysis
  • Document architectural design (Class B/C) — including SOUP identification, item classification, and segregation
  • Document detailed design (Class C) — for each software unit
  • Implement and verify — write code, write tests, review code, run static analysis
  • Maintain traceability — continuously update the traceability matrix as requirements, design, and tests evolve
  • Integrate and test (Class B/C) — integration testing per the plan

Phase 3: Verification and Release (Weeks 12-16)

  • Execute system testing — test against all software requirements, including safety requirements
  • Resolve anomalies — evaluate, fix or accept, re-test
  • Complete the risk management report — confirm residual risk is acceptable
  • Compile release documentation — release notes, version identification, residual anomaly list
  • Conduct a release review — verify all lifecycle activities are complete

Phase 4: Sustain (Ongoing)

  • Operate the maintenance process — handle field feedback, change requests, and problem reports through the defined lifecycle
  • Monitor SOUP — watch for new vulnerabilities, updates, and end-of-life announcements for all SOUP components
  • Conduct internal audits — periodically audit your own compliance to the software development plan and IEC 62304 requirements
  • Update processes — as you learn from audits, field experience, and team feedback, improve your processes and update your plan
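The SOUP-monitoring item above can be scripted against the public OSV.dev vulnerability database. This is a minimal sketch, assuming npm-ecosystem components with pinned versions; a production setup would iterate over the full SOUP inventory on a schedule and route any hits into the problem resolution process.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # public OSV.dev query endpoint

def osv_payload(name, version, ecosystem="npm"):
    """Build an OSV.dev /v1/query request body for one pinned SOUP component."""
    return {"version": version, "package": {"name": name, "ecosystem": ecosystem}}

def check_component(name, version, ecosystem="npm"):
    """Return the list of known vulnerabilities for one SOUP component version."""
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=json.dumps(osv_payload(name, version, ecosystem)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])
```

A nightly job calling `check_component` for every entry in the SOUP list, failing when new vulnerabilities appear, gives auditors concrete evidence that post-market SOUP monitoring actually runs rather than existing only as a procedure document.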

Conclusion

IEC 62304 is not a bureaucratic obstacle. It is a framework that, when implemented thoughtfully, produces software that is safer, more maintainable, and more defensible to regulators and auditors. The standard's safety classification system ensures that process rigor scales with actual risk — you are not forced to apply Class C processes to a Class A logging module. The integration with ISO 14971 ensures that your development activities are driven by real hazards, not generic checklists. And the standard's lifecycle-model-agnostic design means you can implement it within an agile workflow that your engineering team will actually follow.

The organizations that struggle with IEC 62304 are usually the ones that treat it as a documentation exercise — creating artifacts to satisfy an auditor rather than building a process that produces safe software. The organizations that succeed are the ones that embed IEC 62304's requirements into their daily engineering workflow: traceability maintained in real time, risk analysis updated as the design evolves, testing automated and integrated into CI/CD, SOUP monitored continuously, and configuration management handled by the same tools their developers already use.

Start with classification. Build process into your tools. Document what the standard requires for your safety class — nothing more, nothing less. And when in doubt, classify higher. The cost of rigor is always less than the cost of a recall.