MedDeviceGuide

Design Verification vs. Design Validation for Medical Devices: What You Actually Need to Know

A comprehensive guide to design verification and design validation under FDA design controls and ISO 13485. Covers the V-model, verification methods, validation approaches, documentation, common 483 findings, and practical examples across device types.

Ran Chen
2026-03-17 · Updated 2026-03-24 · 58 min read

The Confusion That Costs Companies Months

If you have spent any time in the medical device industry, you have heard the classic one-liner: "Verification asks 'did we build the thing right?' and validation asks 'did we build the right thing?'" That sentence is repeated in every training deck and regulatory textbook. It is also, on its own, almost useless for actually planning and executing your design controls.

The real confusion shows up when a design engineer sits down to write a verification protocol and cannot figure out where verification ends and validation begins. Or when an FDA investigator writes a 483 observation because your "validation" was actually just verification testing with the wrong title page. Or when a startup burns three months re-running studies because nobody mapped the V&V plan to the design inputs and user needs properly.

This article is the resource I wish existed when I was first navigating design controls. It covers the regulatory basis, the practical execution, the documentation, the pitfalls, and the real-world nuances that separate companies that get this right from companies that get a warning letter.

The Fundamental Difference — Stated Plainly

Design verification confirms that each design output satisfies its corresponding design input requirement. You are testing the device (or a component, subsystem, or software module) against the specifications you wrote.

Design validation confirms that the final device meets the actual user needs and intended use that motivated the whole project. You are testing whether the device works for the people who will use it, in conditions that reflect real-world use.

The difference is about the reference point:

| | Design Verification | Design Validation |
| --- | --- | --- |
| Reference point | Design input specifications | User needs and intended use |
| Question answered | Does the output meet the spec? | Does the device work for the user in real conditions? |
| When performed | Throughout development, as outputs are produced | On the final or near-final device, before design transfer |
| Test conditions | Lab conditions, bench testing, analysis | Simulated or actual use conditions |
| Performed on | Components, subsystems, software builds, prototypes | Production-equivalent or production units |
| Typical methods | Testing, inspection, analysis, demonstration | Simulated-use testing, clinical evaluation, usability studies |

The House Analogy

A more useful analogy than the standard one-liner: imagine you are building a house.

  • Design inputs are the specifications drawn from the homeowner's needs. "The kitchen must have at least 200 square feet of floor space. The load-bearing walls must support 40 psf live load. The electrical system must deliver 200-amp service."
  • Design outputs are the blueprints, material selections, and construction details that respond to those inputs.
  • Design verification is measuring the kitchen floor area against the blueprint spec, running structural calculations on the beams, and testing the electrical panel capacity. You are checking that the house-as-built matches the blueprint-as-designed.
  • Design validation is having the homeowner's family move in and cook Thanksgiving dinner. Can the family actually use the kitchen for its intended purpose? Is the electrical system adequate when someone runs the oven, the dishwasher, and the microwave at the same time? Does the space work in practice, not just on paper?

Verification can find that every specification was met and the house can still fail validation — because the specifications were incomplete or wrong. Maybe the 200-square-foot kitchen meets the written spec, but the layout makes it impossible for two people to cook at the same time, which was the actual user need.

This is exactly why FDA requires both.

The FDA's Own Definitions — From the Design Control Guidance

The FDA's Design Control Guidance for Medical Device Manufacturers (1997) provides the official definitions that every medical device professional should know verbatim:

  • Verification: "Confirmation by examination and provision of objective evidence that specified requirements have been fulfilled." (21 CFR 820.3(aa))
  • Validation: "Confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use can be consistently fulfilled." (21 CFR 820.3(z))

The Guidance also provides an illustrative example that makes the distinction concrete. Consider an infusion pump where the user need is that the pump must function in an operating room environment:

  • Design input: The pump must function uninterrupted when used with other products that generate an electromagnetic field.
  • Design outputs: PCB with filtering circuit, pump EMI shield, software signal filtering code and error handling code.
  • Design verification: Simulated EMI testing on hardware and software, dimensional verification of the shield, verification of system error handling due to EMI.
  • Design validation: EMC testing to industry standards, simulated EMI testing in a high-EMI environment, risk analysis concerning EMI, software validation for filtering code.

This example from the FDA itself demonstrates how a single user need cascades through the design control process and requires both verification (do the individual outputs meet their specs?) and validation (does the finished system actually work in the OR environment?).

Key insight from the 1997 Guidance: The FDA explicitly notes that verification only requires a single unit to be assessed (though the nature of that unit varies — one batch of material, one machined part, one package sample). Validation, by contrast, requires studies on multiple devices to ensure the finished device is safe and effective on a consistent basis.

Regulatory Basis

FDA Design Controls (21 CFR 820.30)

The QSR (and the new QMSR) requires design controls for Class II and Class III devices, and for certain Class I devices that are listed in 21 CFR 820.30(a)(2). The relevant subsections:

  • 21 CFR 820.30(f) — Design Verification: "Each manufacturer shall establish and maintain procedures for verifying the device design. Design verification shall confirm that the design output meets the design input requirements. The results of the design verification, including identification of the design, method(s), the date, and the individual(s) performing the verification, shall be documented in the DHF."

  • 21 CFR 820.30(g) — Design Validation: "Each manufacturer shall establish and maintain procedures for validating the device design. Design validation shall be performed under defined operating conditions on initial production units, lots, or batches, or their equivalents. Design validation shall ensure that devices conform to defined user needs and intended uses and shall include testing of production units under actual or simulated use conditions."

Key phrases that matter:

  1. "Initial production units, lots, or batches, or their equivalents" — Validation must be on production-representative devices. Prototypes made by a different process than your production process generally do not qualify. This trips up companies constantly.

  2. "Actual or simulated use conditions" — Lab bench conditions where an engineer operates the device in a controlled environment usually do not meet this bar. You need conditions that reflect how the device will actually be used.

  3. "Defined user needs and intended uses" — Validation traces back to user needs, not design input specs. This is the critical distinction.

ISO 13485:2016

  • Clause 7.3.5 — Design and Development Verification: "Design and development verification shall be performed in accordance with planned and documented arrangements to ensure that the design and development outputs have met the design and development input requirements."

  • Clause 7.3.6 — Design and Development Validation: "Design and development validation shall be performed in accordance with planned and documented arrangements. ... Validation shall be completed prior to the delivery or implementation of the medical device. ... Where the intended use requires the medical device to be connected to, or to interface with, other medical device(s), validation shall include confirmation that the user needs and intended use have been met for the specified combination(s)."

The ISO language is largely consistent with FDA's. One nuance worth noting: ISO 13485 Clause 7.3.6 explicitly calls out system-level validation for devices that interface with other devices. This is particularly relevant for connected health devices, modular systems, and IVD analyzers that work with specific reagent kits.

EU MDR (Regulation 2017/745)

The EU MDR does not use the terms "verification" and "validation" in the same structured way that FDA and ISO 13485 do. Instead, the MDR addresses V&V through:

  • Annex II (Technical Documentation), Section 6.1: Requires evidence of verification and validation testing as part of the technical documentation, including bench testing, biocompatibility evaluation, electrical safety and EMC testing, software verification and validation, usability evaluation, and clinical evaluation.

  • Annex XIV (Clinical Evaluation): The clinical evaluation report can serve as a component of design validation, particularly the assessment of whether clinical data demonstrate conformity with relevant general safety and performance requirements.

Under the MDR, your V&V evidence feeds directly into the technical documentation and is assessed by the Notified Body. The terminology may differ, but the substance — demonstrating that your device meets its specifications and fulfills its intended purpose — is the same.

The QMSR Transition (Effective February 2, 2026)

As of February 2, 2026, the FDA's Quality System Regulation (QSR) has been replaced by the Quality Management System Regulation (QMSR), which incorporates ISO 13485:2016 by reference. For V&V purposes, the key implications are:

  • Design controls are substantially equivalent under the QMSR. The verification and validation requirements from ISO 13485 Clauses 7.3.5 and 7.3.6 now serve as the enforceable requirements in the US, replacing the language of the old 820.30(f) and (g). The substance is the same, but the clause numbering and some terminology have changed.
  • Terminology shifts: The Design History File (DHF) concept is now aligned with ISO 13485's "Design and Development File." The Device Master Record (DMR) is replaced by the "Medical Device File" concept. Your V&V protocols and reports still belong in the design and development file.
  • Risk management emphasis is broader: Under the QMSR, risk-based thinking must be embedded throughout the entire QMS — not just in design validation. FDA inspectors will expect to see risk-informed decisions in V&V scope, sample sizing, and acceptance criteria.
  • Expanded inspection authority: FDA can now inspect management reviews, internal audits, and supplier audit reports. V&V-related supplier qualifications (such as test lab accreditation and calibration records) may receive greater scrutiny.
  • IDE devices are not exempt: FDA explicitly states that devices manufactured or tested under an Investigational Device Exemption (IDE) must still meet QMSR design control requirements via ISO 13485 Clause 7.

For companies already compliant with both FDA QSR and ISO 13485, the practical impact on V&V execution is minimal. But companies that had separate US and international quality systems should ensure their V&V procedures now reference ISO 13485 clause numbers and terminology.

How V&V Fits Into the V-Model

The V-model (or verification and validation model) is the standard visualization for how design controls flow. If you have not seen it before, here is how it maps:

The left side of the V is the decomposition (planning and specification) phase:

  1. User needs — The top-left starting point
  2. Design inputs — Requirements derived from user needs, risk analysis, regulatory requirements, and standards
  3. System architecture / high-level design — How the device is structured into subsystems
  4. Detailed design — Component-level specifications, software module specs, drawings

The bottom of the V is implementation — where you actually build the device.

The right side of the V is the recomposition (testing) phase, and each level on the right maps to its corresponding level on the left:

  1. Unit / component testing — Verifies detailed design outputs
  2. Integration testing — Verifies that subsystems work together per the system architecture
  3. System verification — Verifies that the complete design outputs meet design input requirements
  4. Design validation — Validates that the finished device meets user needs and intended use

The critical insight of the V-model is the traceability. Every test on the right side should trace directly to a requirement on the left side. If you have a design input that does not trace to at least one verification test, you have a gap. If you have a user need that does not trace to at least one validation activity, you have a gap.

The V-Model vs. the FDA Waterfall Diagram

You will encounter two common visualizations in practice. The V-model described above is the industry-standard engineering diagram. The FDA's Design Control Guidance (1997) instead uses a waterfall diagram (originally adapted from Health Canada) that shows design controls overlaid on a sequential development process: user needs flow into design inputs, then to design outputs, then to medical device production, with verification linking outputs back to inputs and validation linking the finished device back to user needs.

The two models are entirely compatible — they illustrate the same relationships from different perspectives. The waterfall diagram emphasizes the sequential flow and the feedback loops inherent in iterative design. The V-model emphasizes the hierarchical decomposition (left side) and recomposition through testing (right side), making the traceability requirements more visually explicit.

The FDA does not mandate either visualization. What matters is that your design control process establishes clear, documented traceability regardless of which model you use or whether your development follows a traditional waterfall, agile, or hybrid methodology.

Practical tip: Build your requirements traceability matrix (RTM) at the same time you are writing design inputs. Do not wait until verification planning to discover that your inputs are untestable. If you cannot figure out how you would verify a requirement, the requirement is probably not written well enough.
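
The traceability rule is simple enough to check programmatically. Here is a minimal sketch of an RTM gap check — the requirement IDs, test IDs, and dictionary layout are hypothetical, invented for illustration; the point is the rule itself: every design input must be covered by at least one verification test (and, by the same logic, every user need by at least one validation activity).

```python
# Hypothetical requirements traceability matrix (RTM) gap check.
# IDs and data layout are illustrative, not from any standard.

design_inputs = {
    "DI-001": "Pump functions uninterrupted in EMI environment",
    "DI-002": "Patient leakage current within IEC 60601-1 limits",
    "DI-003": "Battery runtime >= 8 hours at nominal flow rate",
}

verification_tests = {
    "VT-010": ["DI-001"],   # simulated EMI testing
    "VT-011": ["DI-002"],   # leakage current measurement
    # note: no test yet covers DI-003
}

def untraced(requirements, tests):
    """Return requirement IDs with no covering test -- each one is a gap."""
    covered = {req for reqs in tests.values() for req in reqs}
    return sorted(set(requirements) - covered)

print(untraced(design_inputs, verification_tests))  # ['DI-003']
```

Running the same check with user needs on one side and validation activities on the other closes the loop on the right leg of the V.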

Verification Methods in Detail

Design verification is not limited to running physical tests. FDA and ISO 13485 recognize four primary verification methods:

1. Testing

Physical or functional testing of the device or component against a specification. This is what most people think of when they hear "verification."

Examples:

  • Tensile strength testing of a catheter shaft per the dimensional and mechanical requirements
  • Electrical leakage current measurement per IEC 60601-1 requirements
  • Software unit testing against module-level specifications
  • Hermeticity testing of an implantable device enclosure
  • Shelf-life / accelerated aging studies against packaging integrity requirements

2. Inspection

Visual or dimensional examination to confirm conformance to a specification. No functional testing is involved — you are looking and measuring.

Examples:

  • Dimensional inspection of machined components against engineering drawings
  • Visual inspection of PCB assemblies for solder quality per IPC-A-610 workmanship standards
  • Label content review against labeling specifications
  • Visual examination of surface finish on a surgical instrument

3. Analysis

Using mathematical models, simulations, or calculations to demonstrate that a design output meets an input requirement. Analysis is legitimate verification when physical testing is impractical, destructive, or unnecessary.

Examples:

  • Finite element analysis (FEA) of a hip implant under worst-case loading conditions
  • Thermal analysis of an electronic enclosure to verify operating temperature stays within component ratings
  • Worst-case circuit analysis to verify voltage and current specifications are met across component tolerances
  • Computational fluid dynamics (CFD) analysis of flow through an infusion set
  • Software static analysis (code coverage, cyclomatic complexity, MISRA-C compliance)

Important caveat: Analysis alone is not always sufficient. FDA expects that analysis-based verification is supported by evidence that the model is valid — meaning the model assumptions and boundary conditions have been justified, and ideally the model has been correlated against physical test data at some point.

4. Demonstration

Operating or exercising the device in a controlled setting to show that a specification is met. Demonstration is somewhere between testing and inspection — you are operating the device but not necessarily quantifying a measurement against a precise acceptance criterion.

Examples:

  • Demonstrating that a surgical instrument can be fully disassembled and reassembled per the cleaning instructions
  • Demonstrating that a software interface displays all required fields per the UI specification
  • Demonstrating that an alarm activates when a threshold condition is simulated

Choosing the Right Method

The choice depends on the requirement, the risk, and practicality:

| Method | Best For | Limitations |
| --- | --- | --- |
| Testing | Performance requirements, safety-critical parameters, material properties | Can be time-consuming, expensive, may require destructive testing |
| Inspection | Dimensional requirements, workmanship, labeling accuracy | Limited to observable attributes; cannot assess functional performance |
| Analysis | Structural integrity, thermal behavior, worst-case electrical, early design phases | Requires model validation; regulators may require confirmatory testing for high-risk claims |
| Demonstration | Functional capabilities, UI behavior, assembly/disassembly procedures | Less rigorous than testing; not suitable for quantitative performance claims |

For safety-critical requirements, testing is almost always expected. You can supplement with analysis, but do not plan to verify a critical safety specification with analysis alone unless you have strong justification.

IEC 60601-1 Testing as Verification: A Closer Look

For electromechanical medical devices, IEC 60601-1 (and its collateral and particular standards) represents a major block of verification testing. Understanding the structure of this standard family helps you plan verification scope:

The standard hierarchy:

  • IEC 60601-1 (base standard): General requirements for basic safety and essential performance — electrical safety, mechanical safety, temperature limits, leakage currents, dielectric strength, protective earthing, labeling, and risk management integration
  • IEC 60601-1-2 (collateral standard): Electromagnetic compatibility (EMC) — immunity and emissions testing
  • IEC 60601-1-6 (collateral standard): Usability — references IEC 62366-1
  • IEC 60601-1-8 (collateral standard): Alarm systems requirements
  • IEC 60601-2-XX (particular standards): Device-specific requirements (e.g., 60601-2-24 for infusion pumps, 60601-2-27 for ECG monitors, 60601-2-4 for defibrillators)

Key verification tests under IEC 60601-1:

| Test Category | What It Verifies | Example Acceptance Criteria |
| --- | --- | --- |
| Leakage current measurement | Earth leakage, touch current, patient leakage current under normal and single-fault conditions | Patient leakage current for Type BF applied parts must not exceed 100 uA in normal condition, 500 uA in single-fault condition |
| Dielectric strength (hipot) | Insulation integrity between circuits | No breakdown at specified test voltage (e.g., 4000 VAC for 2xMOPP) |
| Protective earth continuity | Integrity of the ground connection | Resistance must not exceed 0.1 ohm at 25A test current |
| Temperature testing | Surface temperatures under normal and abnormal conditions | Touchable surfaces must not exceed limits (e.g., 41 degrees C for metal parts touched in normal use) |
| Mechanical strength | Impact resistance, drop resistance, stability of enclosure | Device withstands specified impact energy without loss of basic safety |
| Ingress protection (IP rating) | Protection against water and particulates | IPX1 minimum for general equipment; higher ratings for specific use environments |
| Applied part classification | Degree of patient protection (Type B, BF, CF) | Leakage current limits differ by classification; Type CF (cardiac) is most restrictive |
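
To make the acceptance-criteria logic concrete, here is an illustrative pass/fail check that encodes the Type BF patient leakage limits quoted above (100 uA normal condition, 500 uA single fault). The function name and structure are our own invention — a real 60601-1 test report covers many more measurements, conditions, and applied-part types.

```python
# Illustrative acceptance check for patient leakage current against the
# Type BF limits quoted in the table (values in microamps).
# Structure is hypothetical; not a substitute for the standard.

BF_LIMITS_UA = {"normal": 100, "single_fault": 500}

def bf_leakage_pass(measured_ua: float, condition: str) -> bool:
    """True if a measured patient leakage current meets the Type BF limit."""
    return measured_ua <= BF_LIMITS_UA[condition]

print(bf_leakage_pass(85.0, "normal"))         # True: within 100 uA limit
print(bf_leakage_pass(120.0, "normal"))        # False: exceeds normal-condition limit
print(bf_leakage_pass(120.0, "single_fault"))  # True: within 500 uA fault limit
```

Note how the same 120 uA reading passes or fails depending on the test condition — which is exactly why verification records must capture the condition alongside the measurement.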

All of these tests are design verification — they confirm that specific design outputs (circuit design, enclosure design, insulation barriers) meet input requirements derived from the standard. The test results become part of the verification evidence in your DHF and are typically included in premarket submissions.

Practical tip: IEC 60601-1 testing is expensive and time-consuming. Pre-test in-house whenever possible — if you already own a temperature/humidity chamber or hipot tester, run preliminary tests to gain confidence before committing to a certified test house. A failed test at the test house can cost tens of thousands of dollars in retest fees and project delays. Many companies find that 60601 testing takes 8-12 weeks at the test house, so factor this into your verification timeline.

Validation Approaches in Detail

Design validation is about the user, the use environment, and the intended purpose. Here are the primary validation approaches:

Simulated-Use Testing

The most common form of design validation. You create conditions that replicate real-world use and have representative users (or trained operators acting as surrogates) operate the device.

Key elements:

  • Production-equivalent devices: The units tested must be manufactured using the same processes, materials, and specifications as production devices
  • Simulated-use conditions: Environmental conditions (temperature, humidity, lighting), use scenarios (emergency vs. routine), contamination, and interference that reflect actual use
  • Representative users: Where the user interface is part of what is being validated, you need actual representative users — not the design engineers who built the device

Examples:

  • Testing a defibrillator with paramedics in a simulated emergency scenario, using production units, measuring time-to-shock and successful shock delivery rate
  • Running an IVD analyzer through a full clinical workflow using production reagents and patient-like samples, comparing results to a reference method
  • Testing a surgical robot through complete surgical procedures on cadaveric tissue, using production hardware and software

Clinical Evaluation

For some devices, clinical data is required as part of validation. This can take the form of:

  • Clinical investigations (clinical trials): Prospective studies conducted under an IDE (US) or Clinical Investigation Plan (EU). Required for most Class III devices and many novel Class II devices.
  • Clinical evaluation based on literature: For well-established technologies where clinical data already exists in published literature. Under the EU MDR, this route is significantly more restricted — Annex XIV sets high bars for demonstrating equivalence.
  • Post-market clinical follow-up (PMCF): Under the EU MDR, PMCF is ongoing validation that continues after market placement.

Clinical evaluation is the most resource-intensive validation approach and typically has the longest timeline. Plan for it early.

Usability Validation (Summative Usability Testing)

Required per IEC 62366-1, usability validation (also called summative or human factors validation testing) is specifically focused on the user interface. The goal is to demonstrate that the device can be used safely and effectively by the intended users without serious use errors.

This is a specific type of design validation — it validates the user interface design, not the device's functional performance.

Key requirements:

  • Conducted with representative users from each intended user population
  • Uses production-equivalent devices
  • Simulates realistic use scenarios including reasonably foreseeable misuse
  • Focuses on critical tasks identified through use-related risk analysis
  • Evaluates whether use errors or difficulties could lead to harm

Common mistake: Companies often try to combine functional validation testing with usability validation in a single protocol. While there can be some overlap, the objectives are different. Functional validation confirms the device performs to specification in use conditions. Usability validation confirms that the user interface does not introduce unacceptable use-related risk. Keep the objectives clear, even if some testing is combined for efficiency.

Biocompatibility Evaluation

For devices that contact the body, biocompatibility evaluation per ISO 10993-1 is a form of design validation. You are validating that the device materials are safe for the intended contact duration and type (surface contact, externally communicating, implant).

Biocompatibility evaluation can include biological testing (cytotoxicity, sensitization, irritation, systemic toxicity, etc.) or a biological evaluation based on material characterization and existing data.

Software Validation

For software-driven devices and SaMD, software validation (per IEC 62304 and FDA's guidance on software validation) includes:

  • System-level testing that exercises the software in its intended operating environment
  • Integration testing with hardware components (for software in a device)
  • Performance testing under worst-case data loads and edge conditions
  • Cybersecurity testing in the intended network environment
  • Interoperability testing with connected systems

Software V&V Under IEC 62304: Safety Classifications and Testing Requirements

IEC 62304 is the internationally recognized standard for medical device software lifecycle processes. A critical concept within IEC 62304 is the software safety classification, which determines the rigor of verification and documentation required:

| Classification | Definition | V&V Requirements |
| --- | --- | --- |
| Class A | No injury or damage to health is possible | Basic verification; documentation requirements are minimal |
| Class B | Non-serious injury is possible | Requires verification at unit, integration, and system levels; documented test protocols and reports with expected results, observed results, and pass/fail determination |
| Class C | Death or serious injury is possible | Full verification at all levels with enhanced documentation; all unit and integration test protocols and reports must be provided, including expected results derived from requirements and design, actual results observed and recorded, and objective pass/fail determination |

How software V&V maps to the V-model under IEC 62304:

  • Software unit verification (Class B and C): Each software unit is verified against its detailed design specification. This typically includes code review, unit testing, and static analysis. For Class C software, every unit must have documented evidence of verification.
  • Software integration testing (Class B and C): Verifies that software items work together correctly per the architectural design. Integration tests confirm data flows, interface behavior, and error handling between modules.
  • Software system testing: The complete software system is tested against the software requirements specification. This is verification — it confirms the software does what the SRS says it should do. System testing should cover all software requirements.
  • Software validation: The software is tested in its intended operating environment to confirm it meets user needs. For software in a device, this means running on the target hardware in conditions reflecting actual clinical use. For SaMD, this means testing in the intended computing environment with representative clinical data.

Key distinction: Software system testing (IEC 62304 Clause 5.7) is verification — it checks the software against its requirements specification. Software validation is a separate activity that checks the software against user needs in the intended use environment. These are complementary, not interchangeable.
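
The expected/observed/pass-fail documentation that IEC 62304 expects for Class B and C unit verification can be sketched in a few lines. The unit under test here (a hypothetical occlusion-alarm threshold check) and the record format are illustrative — real projects generate these records from their test framework and file them in the design and development file.

```python
# Sketch of an IEC 62304-style unit verification record: expected result
# derived from requirements, observed result recorded, objective pass/fail.
# The unit under test and the 300 mmHg threshold are hypothetical.

def alarm_active(pressure_mmhg: float, limit_mmhg: float = 300.0) -> bool:
    """Unit under test: occlusion alarm activates at or above the limit."""
    return pressure_mmhg >= limit_mmhg

def run_case(case_id, inputs, expected):
    observed = alarm_active(*inputs)
    return {
        "case": case_id,
        "inputs": inputs,
        "expected": expected,   # derived from requirements and design
        "observed": observed,   # actual result, recorded
        "result": "PASS" if observed == expected else "FAIL",
    }

report = [
    run_case("UT-01", (299.9,), False),  # just below threshold: no alarm
    run_case("UT-02", (300.0,), True),   # at threshold: alarm
    run_case("UT-03", (450.0,), True),   # above threshold: alarm
]
print(all(r["result"] == "PASS" for r in report))  # True
```

All of this is verification evidence against the detailed design; validating the alarm still requires exercising it on target hardware under use-representative conditions.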

FDA's General Principles of Software Validation guidance (2002) adds further expectations:

  • Software validation should be conducted as an integral part of device design validation
  • The rigor of validation should be commensurate with the risk associated with the software's use
  • Automated testing tools used in V&V must themselves be validated for their intended use
  • Off-the-shelf (OTS) software incorporated into a medical device must be validated within the context of the device

Accelerated Aging as Verification

Accelerated aging studies per ASTM F1980 deserve special attention because they sit at the intersection of verification and validation, and their proper classification depends on what you are testing.

The science: Accelerated aging uses elevated temperature to simulate the passage of time on materials. The underlying principle is the Arrhenius relationship between temperature and reaction rate; as a rule of thumb, a 10 degrees C increase in temperature approximately doubles the rate of chemical reactions. ASTM F1980 captures this with a Q10 factor (typically Q10 = 2) used to calculate the accelerated aging time:

Accelerated Aging Time = Desired Real Time / Q10^((TAA - TRT) / 10)

Where TAA is the accelerated aging temperature and TRT is the real-time storage temperature. For example, at a standard ambient storage temperature of 23 degrees C and an accelerated aging temperature of 55 degrees C, 40 days of accelerated aging simulates approximately one year of real-time aging.
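
The formula above translates directly into a small calculator. This is a sketch with our own variable names; the Q10 = 2 default and the 23 degrees C / 55 degrees C example mirror the text.

```python
# ASTM F1980-style accelerated aging time, per the formula above:
#   AA time = real time / Q10^((T_AA - T_RT) / 10)
# Variable names are ours; Q10 = 2 is the typical conservative default.

def accelerated_aging_days(real_time_days: float,
                           t_aa_c: float,
                           t_rt_c: float = 23.0,
                           q10: float = 2.0) -> float:
    """Chamber time needed to simulate real_time_days of shelf life."""
    aaf = q10 ** ((t_aa_c - t_rt_c) / 10.0)  # accelerated aging factor
    return real_time_days / aaf

# One year of 23 C storage, simulated at 55 C:
print(round(accelerated_aging_days(365, 55.0)))  # 40 (days)
```

Raising the chamber temperature shortens the study but pushes materials closer to transition temperatures where the Arrhenius assumption breaks down — which is one reason 55-60 degrees C is a common practical ceiling for polymers.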

When accelerated aging is verification: If you are testing the device or packaging system against a shelf-life design input specification (e.g., "the sterile barrier system shall maintain integrity for 5 years at 23 degrees C / 50% RH"), accelerated aging is verification — you are confirming a design output against a design input. This is typical for packaging integrity testing per ISO 11607.

When accelerated aging is validation: If you are demonstrating that the complete packaged device maintains its fitness for intended use over the claimed shelf life — including that the device still functions correctly after aging — this is closer to validation, because you are confirming the device meets user needs (a device that has degraded past the point of safe use fails the user).

Important: FDA accepts accelerated aging data to support initial shelf-life claims and product launch, but real-time aging studies must be conducted in parallel to confirm the accelerated aging results. The accelerated aging result is considered a conservative estimate. If real-time data later contradicts the accelerated aging results, the shelf-life claim must be revised.

Biocompatibility: Where It Falls in the V&V Spectrum

Biocompatibility evaluation per ISO 10993-1 is frequently categorized as design validation, but the reality is more nuanced:

Biocompatibility testing as verification: Individual biological tests (cytotoxicity per ISO 10993-5, sensitization per ISO 10993-10, irritation per ISO 10993-23) can serve as verification when they are testing specific material properties against design input requirements. For example, if your design input states "fluid-path materials shall be non-cytotoxic per ISO 10993-5," then the cytotoxicity test is verifying that specific output against that specific input.

Biocompatibility evaluation as validation: The overall biological evaluation — the comprehensive assessment documented in the Biological Evaluation Report per ISO 10993-1 — is a validation activity. It answers the question: "Are the materials in this device safe for their intended contact with the human body?" This is a user-need-level question, not a specification-level question.

The practical distinction matters because:

  • Individual biocompatibility tests can be performed on materials, components, or extracts earlier in development (verification timing)
  • The overall biological evaluation, which integrates material characterization, literature review, biological testing data, and a toxicological risk assessment, is typically finalized on production-representative materials (validation timing)
  • Material changes during development may require re-testing specific endpoints (re-verification) without necessarily repeating the entire biological evaluation (re-validation)

Sterilization Validation

For sterile medical devices, sterilization validation is a critical validation activity that bridges design validation and process validation:

  • EtO (ethylene oxide) sterilization validation per ISO 11135 — demonstrates that the sterilization process consistently achieves the required Sterility Assurance Level (SAL), typically SAL 10^-6
  • Radiation sterilization validation (gamma or e-beam) per ISO 11137 — establishes the minimum dose required to achieve the SAL
  • Moist heat sterilization validation per ISO 17665 — for steam-sterilizable devices and reusable instruments

Sterilization validation is performed on production-representative devices in production-representative packaging, using the production sterilization process. It is design validation because it demonstrates that the complete product-packaging-process system achieves the intended user need of delivering a sterile device. It is also process validation because it demonstrates the manufacturing process consistently produces a conforming output.

Packaging Validation

Packaging validation per ISO 11607 (Parts 1 and 2) is frequently overlooked as a design validation activity, but it is one the FDA specifically expects to see. Your medical device is not just the hardware — it includes the label, instructions for use, packaging, and everything inside the package.

Packaging validation must demonstrate:

  • Seal integrity: The seals on sterile barrier packaging maintain integrity through the distribution environment
  • Distribution simulation: The package system protects the device through transportation, handling, and storage (per ASTM D4169 or ISTA protocols)
  • Shelf-life: The sterile barrier system maintains integrity for the claimed shelf life (accelerated aging per ASTM F1980, confirmed by real-time aging)
  • Labeling legibility and adhesion: Labels remain legible and attached throughout the shelf life and distribution conditions

Common oversight: Companies often validate the device but forget to validate the packaging system as part of design validation. This is a 483 finding. The packaging is a design output, and its performance against user needs (delivering a sterile, intact, correctly labeled device to the point of use) must be validated.

V&V Examples by Device Type

The abstract concepts are easier to grasp with concrete examples. Here is how V&V typically plays out for different device categories:

Electromechanical Device (e.g., Infusion Pump)

| Activity | Verification or Validation | What It Covers |
| --- | --- | --- |
| Flow rate accuracy testing against pump spec (mL/hr +/- tolerance) | Verification | Design output vs. design input |
| Electrical safety testing per IEC 60601-1 | Verification | Output vs. standard-derived input requirements |
| EMC testing per IEC 60601-1-2 | Verification | Output vs. standard-derived input requirements |
| Alarm system testing (occlusion, air-in-line, low battery) against alarm specs | Verification | Output vs. design input |
| Software unit and integration testing | Verification | Software output vs. software requirements |
| Worst-case circuit analysis for power supply | Verification (analysis) | Output vs. design input |
| Simulated-use study: nurses programming and administering infusions using production units in a simulated clinical environment | Validation | Device vs. user needs and intended use |
| Summative usability test: representative nurses performing critical tasks (programming, responding to alarms, changing IV sets) | Validation (usability) | User interface vs. safe and effective use |
| Biocompatibility evaluation of fluid path materials | Validation | Materials vs. patient safety needs |

Software as a Medical Device (SaMD) (e.g., Diagnostic Algorithm)

| Activity | Verification or Validation | What It Covers |
| --- | --- | --- |
| Unit testing of algorithm modules against software requirements | Verification | Software output vs. software requirement spec |
| Integration testing of data pipeline components | Verification | System output vs. architecture spec |
| Performance testing: throughput, latency, memory usage against specifications | Verification | Output vs. non-functional requirements |
| Static analysis and code review per coding standards | Verification (analysis/inspection) | Code vs. coding standard requirements |
| Cybersecurity testing: penetration testing, vulnerability scanning | Verification | Output vs. security requirements |
| Clinical performance study: algorithm accuracy (sensitivity, specificity, AUC) against clinical ground truth using production software on representative patient datasets | Validation | Device vs. intended clinical use |
| Usability validation: clinicians interpreting algorithm output in simulated clinical decision-making scenarios | Validation (usability) | User interface vs. safe and effective clinical use |
| Interoperability testing with hospital PACS/EHR systems | Validation | Device vs. intended use environment |

In Vitro Diagnostic (IVD) Device (e.g., Immunoassay Analyzer)

| Activity | Verification or Validation | What It Covers |
| --- | --- | --- |
| Analytical performance testing: precision, accuracy, linearity, LoD, LoQ against assay specs | Verification | Output vs. design input specifications |
| Reagent stability testing against shelf-life specs | Verification | Output vs. design input |
| Environmental testing (temperature, humidity, vibration) against operating condition specs | Verification | Output vs. design input |
| EMC and electrical safety testing | Verification | Output vs. standard-derived input requirements |
| Method comparison study against predicate/reference method using clinical samples | Validation | Device vs. clinical intended use |
| Clinical performance study: sensitivity, specificity, PPV, NPV using patient specimens | Validation | Device vs. clinical diagnostic claims |
| Usability validation: lab technicians performing complete clinical workflow | Validation (usability) | User interface vs. safe and effective use |
| Reagent-instrument system validation on production units with production reagent lots | Validation | Complete system vs. intended use |

IVD-Specific V&V: Analytical Performance vs. Clinical Performance

IVD devices warrant additional discussion because their V&V framework uses terminology unique to the diagnostics industry, and the EU IVDR (Regulation 2017/746) has formalized these distinctions:

Analytical performance (verification) evaluates the ability of the device to correctly detect or measure a particular analyte. Key analytical performance parameters include:

  • Precision (repeatability, within-laboratory precision, reproducibility)
  • Trueness / accuracy (bias relative to reference materials or methods)
  • Analytical sensitivity (Limit of Detection, Limit of Quantitation)
  • Analytical specificity (cross-reactivity, interference)
  • Linearity and measuring range
  • Hook effect (for immunoassays)
  • Reagent stability (open-vial, on-board, and shelf-life stability)

These are all verification activities — they test the analytical output against the design input specifications for the assay.

Clinical performance (validation) evaluates the ability of the device to yield results correlated with a particular clinical condition in the target population. Key clinical performance parameters include:

  • Clinical sensitivity (positive percent agreement with a clinical reference)
  • Clinical specificity (negative percent agreement)
  • Positive predictive value (PPV) and negative predictive value (NPV)
  • Diagnostic accuracy (overall agreement)
  • Clinical utility (impact on patient management decisions)

Clinical performance evaluation is validation — it tests the complete IVD system against its intended clinical use, using patient specimens representative of the target population. Under the IVDR, clinical performance studies must follow ISO 20916:2019.
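For concreteness, these clinical performance parameters are simple ratios over a 2x2 table of test results versus the clinical reference. The counts below are hypothetical:

```python
# Hypothetical 2x2 contingency table: device result vs. clinical reference.
tp, fp, fn, tn = 90, 5, 10, 95  # illustrative counts only

sensitivity = tp / (tp + fn)                            # 0.90
specificity = tn / (tn + fp)                            # 0.95
ppv = tp / (tp + fp)                                    # ~0.947
npv = tn / (tn + fn)                                    # ~0.905
diagnostic_accuracy = (tp + tn) / (tp + fp + fn + tn)   # 0.925
```

Note that PPV and NPV depend on disease prevalence in the study population, which is one reason the specimens must be representative of the intended-use population.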

Key IVD nuance: The IVDR introduces a third element — scientific validity — which refers to the established association between the analyte and the clinical condition. Scientific validity is typically demonstrated through literature review and is a prerequisite for the analytical and clinical performance evaluation, but it is not itself a V&V activity.

Implantable Device (e.g., Orthopedic Implant)

| Activity | Verification or Validation | What It Covers |
| --- | --- | --- |
| Mechanical testing: fatigue, static load, wear per ASTM/ISO standards | Verification | Output vs. design input (performance specs) |
| Dimensional inspection of implant components against engineering drawings | Verification (inspection) | Output vs. dimensional specs |
| FEA of implant under worst-case physiological loading | Verification (analysis) | Output vs. structural requirements |
| Corrosion testing per ASTM F2129 | Verification | Output vs. corrosion resistance specs |
| Biocompatibility testing per ISO 10993 | Validation | Materials vs. patient safety requirements |
| Cadaveric or synthetic bone bench testing: implant fits, instrument set works, procedure is feasible | Validation | Device system vs. intended surgical use |
| Clinical investigation: patient outcomes, device survival, functional scores | Validation | Device vs. clinical intended use |
| Packaging validation per ASTM F2095 / ISO 11607 | Validation | Sterile barrier system vs. maintaining sterility through distribution |

Single-Use Disposable Device (e.g., Blood Collection Set)

| Activity | Verification or Validation | What It Covers |
| --- | --- | --- |
| Dimensional verification of tubing, needle gauge, connector fit | Verification (inspection/testing) | Output vs. dimensional specs |
| Flow rate testing against specifications | Verification | Output vs. performance specs |
| Tensile and bond strength testing of joints | Verification | Output vs. mechanical specs |
| Accelerated aging and real-time aging for shelf life | Verification | Output vs. shelf-life claims |
| Biocompatibility testing per ISO 10993 | Validation | Materials vs. patient safety |
| Simulated-use testing: phlebotomists performing venipuncture on arm models using production units | Validation | Device vs. intended clinical use |
| Package integrity validation per ISO 11607 | Validation | Sterile barrier vs. maintaining sterility |
| Sterilization validation (EtO, gamma, e-beam) | Validation | Process vs. sterility assurance level |

Documentation Requirements

Verification Protocol and Report

A verification protocol should include:

  1. Purpose and scope — What is being verified and why
  2. Requirements being verified — Explicit list of design input requirements, with traceability IDs
  3. Verification method — Testing, inspection, analysis, or demonstration
  4. Test samples — Description of units under test, including design revision, lot/serial numbers, and how they were produced (prototype, pilot, production)
  5. Test equipment and calibration — Equipment used, calibration status, measurement uncertainty where relevant
  6. Test procedure — Step-by-step instructions, detailed enough that someone else could reproduce the testing
  7. Acceptance criteria — Quantitative criteria linked directly to the design input requirements. "Pass/fail" without a measurable criterion is insufficient.
  8. Sample size and statistical rationale — If using sampling, justify the sample size (confidence level, reliability level, or reference to an appropriate sampling standard like ANSI/ASQ Z1.4)
  9. Environmental conditions — Temperature, humidity, and other conditions under which testing will be performed

A verification report should include:

  1. Reference to the protocol — Protocol number, revision
  2. Test results — Raw data or summarized data, presented clearly against acceptance criteria
  3. Pass/fail determination — For each requirement verified
  4. Deviations — Any deviations from the protocol and their impact assessment
  5. Conclusion — Summary of verification results
  6. Signatures and dates — Author, reviewer, approver

Validation Protocol and Report

A validation protocol has similar structure but with additional requirements:

  1. User needs and intended use — The validation must reference the user needs document or intended use statement, not just design input specs
  2. Test units — Must be production-equivalent. Document how the units were produced and why they are representative of production.
  3. Use conditions — Describe how actual or simulated use conditions are achieved. Justify that the simulation is adequate.
  4. User population — If representative users are involved, define the user population and recruitment criteria
  5. Clinical endpoints or performance criteria — For clinical validations, define primary and secondary endpoints
  6. Statistical analysis plan — Particularly for clinical studies and analytical performance studies

Requirements Traceability Matrix (RTM)

The RTM is the connective tissue that holds V&V together. It should map:

  • User needs to design inputs
  • Design inputs to design outputs
  • Design inputs to verification activities
  • User needs to validation activities
  • Risk control measures to verification/validation activities

Every user need should trace forward to at least one validation activity. Every design input should trace forward to at least one verification activity. Every risk control measure identified in your risk management file should trace to a verification or validation activity that confirms the control is effective.

FDA expectation: The RTM is one of the first things an FDA investigator will review. A clean, complete, up-to-date traceability matrix signals a well-controlled design process. Gaps in the matrix are 483 findings waiting to happen.

RTM Worked Example: Infusion Pump

To make the traceability concept concrete, here is a simplified excerpt from a traceability matrix for an infusion pump. In practice, a full RTM would have hundreds of rows, but the structure is the same:

| User Need | Design Input | Design Output | Verification Activity | Validation Activity |
| --- | --- | --- | --- | --- |
| UN-001: Clinician must be able to set flow rate accurately | DI-012: Flow rate accuracy shall be within +/- 5% of set rate from 1-999 mL/hr | DO-012: Pump mechanism design, stepper motor spec, firmware flow control algorithm | VER-018: Flow rate accuracy test per protocol TP-012 (bench test, 59 units, C=0 at 95/95) | VAL-003: Simulated-use study — nurses program and administer infusions in simulated ICU |
| UN-003: Device must be safe when used near other electronic equipment | DI-025: Device shall maintain essential performance per IEC 60601-1-2 EMC requirements | DO-025: PCB with EMI filtering circuit, shielded enclosure | VER-031: EMC testing at accredited test lab per IEC 60601-1-2 | VAL-003: Simulated-use study includes testing in simulated clinical environment with other equipment present |
| UN-007: Device must alert clinician to occlusion | DI-041: Occlusion alarm shall activate within 30 seconds at pressures exceeding 15 psi | DO-041: Pressure sensor spec, alarm firmware module | VER-045: Occlusion alarm response time test (bench test, 15 units) | VAL-003: Simulated-use study includes occlusion scenarios; VAL-005: Usability validation — nurses respond to alarms |
| Risk control RC-008: Software interlock prevents flow rate > 1000 mL/hr | DI-055: Software shall reject flow rate entries exceeding 1000 mL/hr | DO-055: Input validation code module, error handling code | VER-060: Software integration test — boundary value testing of flow rate input | VAL-005: Usability validation — nurses attempt to enter out-of-range values during critical task scenarios |

The key features of a well-constructed RTM:

  • Every user need traces forward to at least one validation activity
  • Every design input traces forward to at least one verification activity
  • Risk control measures from the risk management file trace to specific V&V activities
  • Bidirectional traceability is maintained — you can trace forward (need to test) and backward (test result to need)
  • The matrix uses unique identifiers for every element, enabling cross-referencing across documents
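These forward-traceability rules lend themselves to automated checking. Below is a minimal sketch; the row schema and IDs are hypothetical, and a real RTM would typically live in a requirements management tool:

```python
# Hypothetical two-row RTM excerpt; ID style mirrors the worked example above.
rtm = [
    {"need": "UN-001", "input": "DI-012", "ver": "VER-018", "val": "VAL-003"},
    {"need": "UN-003", "input": "DI-025", "ver": "VER-031", "val": None},
]

def trace_gaps(rows):
    """Flag user needs lacking a validation activity and design inputs
    lacking a verification activity (forward traceability only)."""
    gaps = []
    for row in rows:
        if not row.get("val"):
            gaps.append(f"{row['need']}: no validation activity")
        if not row.get("ver"):
            gaps.append(f"{row['input']}: no verification activity")
    return gaps

print(trace_gaps(rtm))  # ['UN-003: no validation activity']
```

Running a check like this before each design review turns traceability gaps into findings you catch yourself, rather than findings an investigator catches for you.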

Statistical Methods for Sample Sizing in V&V

One of the most common 483 findings is inadequate statistical justification for sample size. "We tested three and they all passed" is not a statistically defensible sample size rationale. Here is a practical guide to the most commonly used methods:

Attribute Testing (Pass/Fail Data)

For requirements tested on a pass/fail basis, the most widely used approach in the medical device industry is the C=0 sampling plan — also called the success-run theorem or zero-acceptance-number sampling.

The principle: you define a confidence level (how sure you want to be) and a reliability level (what proportion of the population must meet the requirement), and the formula tells you how many units to test with zero failures allowed.

The formula (zero failures allowed):

n = ln(1 - C) / ln(R)

Where n = sample size, C = confidence level, R = reliability level, and ln = natural logarithm.

Common confidence/reliability combinations and their sample sizes (C=0):

| Confidence | Reliability | Sample Size (C=0) |
| --- | --- | --- |
| 90% | 90% | 22 |
| 90% | 95% | 45 |
| 95% | 90% | 29 |
| 95% | 95% | 59 |
| 95% | 99% | 299 |
| 99% | 90% | 44 |
| 99% | 95% | 90 |
| 99% | 99% | 459 |
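The table can be reproduced directly from the formula; here is a short sketch, rounding up because sample sizes must be whole units:

```python
import math

def c0_sample_size(confidence, reliability):
    """Success-run / C=0 sample size: n = ln(1 - C) / ln(R), rounded up."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(c0_sample_size(0.95, 0.95))  # 59
print(c0_sample_size(0.99, 0.99))  # 459
```

Note the steep cost of higher reliability: moving from 95%/95% to 95%/99% quintuples the sample size, which is why the risk-based selection below matters.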

Selecting the right confidence/reliability level: Use your risk analysis to guide the selection. A common industry practice:

| Risk Level | Confidence/Reliability | Rationale |
| --- | --- | --- |
| High risk (could cause death or serious injury) | 95%/99% or 99%/99% | Maximum assurance required; aligns with critical safety requirements |
| Moderate risk (could cause non-serious injury) | 95%/95% | Standard for most performance-critical requirements |
| Low risk (unlikely to cause injury) | 90%/90% or 95%/90% | Appropriate for non-critical characteristics; reduces sample cost |

Why C=0? The C=0 approach is preferred in medical devices because it provides maximum consumer protection — if even one unit fails, the lot is rejected. This is especially important when health and human welfare are involved. C=0 plans were derived from ANSI/ASQ Z1.4 by Nicholas Squeglia and are widely accepted by FDA and Notified Bodies.

Variable Testing (Continuous/Measured Data)

For requirements tested by measuring a continuous variable (e.g., flow rate in mL/hr, tensile strength in N), the approach depends on whether you are comparing to a specification limit or comparing two populations:

  • Tolerance interval approach: Calculate a statistical tolerance interval that captures a specified proportion of the population with a given confidence. For example, "with 95% confidence, at least 99% of units will meet the specification." The sample size depends on the required confidence and reliability, similar to the attribute case but using tolerance interval factors (k-values).
  • Power analysis: When testing whether a population mean meets a specification (one-sample t-test) or whether two populations differ (two-sample t-test), power analysis determines the sample size needed to detect a meaningful difference with sufficient statistical power (typically 80% or 90% power at alpha = 0.05).

Sample Sizing for Specific Test Types

Some test types have sample size conventions driven by standards or industry practice:

| Test Type | Typical Sample Size Approach | Reference |
| --- | --- | --- |
| Fatigue testing | Minimum 3-6 specimens for finite-life testing; statistical methods per standard | ASTM E739, ISO 14801 (dental implants) |
| Biocompatibility testing | Defined by the test standard (e.g., 3 replicates per ISO 10993-5 for cytotoxicity) | ISO 10993 series |
| Sterile barrier testing | Typically 95/95 (n=59 at C=0) for seal strength; may use smaller samples for destructive tests with statistical justification | ISO 11607, ASTM F2095 |
| EMC and electrical safety | Typically 1-3 units (type test); justified by the standard itself | IEC 60601-1, IEC 60601-1-2 |
| Packaging distribution testing | Per ASTM D4169 or ISTA protocol-defined quantities | ASTM D4169, ISTA 2A/3A |
| Clinical studies | Power analysis based on clinically meaningful difference, expected variability, and acceptable error rates | ISO 14155, FDA guidance |

Document the rationale: Regardless of method, the statistical rationale must be documented in the protocol before testing begins. Stating "n=10 was selected based on engineering judgment" is not sufficient for FDA. State the method (C=0, power analysis, tolerance interval), the parameters (confidence, reliability, effect size), and the resulting sample size. If the sample size is constrained by practical limitations (e.g., limited production units available), document this constraint and any additional mitigations (e.g., additional analysis, worst-case testing conditions).

Common FDA 483 Observations Related to V&V

Understanding what goes wrong helps you get it right. The numbers tell a stark story:

  • Design control deficiencies accounted for approximately 14% of all QSR-related 483 observations in the FDA's CY2014 data (515 out of 3,740 total observations), making design controls the third-most-cited QS subsystem after Production & Process Controls and CAPA.
  • Design validation (21 CFR 820.30(g)) was cited 38 times in FY2020 — far exceeding any other individual design control clause. Observations ranged from not having procedures, to not performing risk analysis, to not using production-equivalent devices.
  • In FY2021, design control violations represented more than 33% of all device 483 observations, with 64 out of 191 medical device 483s related to design controls.
  • Design validation has been the number one citation in FDA warning letters for design controls from 2011 through 2015, according to analysis of FDA enforcement data.
  • More than half of all FDA device recalls — and the majority of Class I recalls — trace to design control gaps that better risk analysis, verification, and validation would have caught.

Here are the most frequent V&V-related observations from FDA inspections:

1. Validation Not Performed on Production-Equivalent Units

The observation: "Design validation was performed on prototype units that were not manufactured using production processes."

Why it matters: If the units used for validation were hand-built, machined with different equipment, or assembled using different methods than production, the validation results may not be representative.

How to avoid it: Plan your validation timing so that production-equivalent units are available. If you must use pre-production units, document exactly how they were made and justify why they are equivalent to production units in all respects that could affect performance or safety.

2. Validation Did Not Include Simulated or Actual Use Conditions

The observation: "Design validation testing was conducted under laboratory conditions that do not represent actual or simulated use conditions."

Why it matters: A device that performs perfectly on a bench may fail in the field. Environmental conditions, user variability, interference from other equipment, and the stress of real clinical workflows all affect device performance.

How to avoid it: Identify the use conditions during design planning. Consider temperature ranges, humidity, altitude, electrical interference, lighting conditions, patient populations (pediatric, obese, elderly), user expertise levels, and workflow pressures.

3. Incomplete Traceability Between User Needs and Validation Activities

The observation: "The design validation did not address all user needs and intended uses."

Why it matters: If a user need is not traced to a validation activity, there is no evidence that the device meets that need. This is a systemic design control failure.

How to avoid it: Maintain a complete RTM. Review it during design reviews. Conduct a formal gap assessment before beginning validation.

4. Acceptance Criteria Not Established Before Testing

The observation: "Acceptance criteria for design verification/validation were established after testing was completed."

Why it matters: Setting criteria after you see the data is not verification or validation — it is rationalization. Pre-established criteria are fundamental to objective evidence of conformity.

How to avoid it: Always define acceptance criteria in approved protocols before testing begins. If you need to revise criteria, do so through your change control process with documented rationale, and ensure the revision is justified by something other than "the data did not meet the original criteria."

5. Inadequate Statistical Justification for Sample Size

The observation: "The sample size used for design verification/validation was not justified."

Why it matters: Testing three units with no statistical rationale does not demonstrate reliability. FDA expects that sample sizes are based on statistical reasoning appropriate to the test and the risk.

How to avoid it: Use established statistical methods. For attribute testing, consider ANSI/ASQ Z1.4 or C=0 sampling plans with defined confidence and reliability levels. For variable testing, use power analysis to determine adequate sample sizes. For fatigue testing, reference the relevant standard (e.g., ASTM E739 for S-N fatigue data). Document the rationale.

6. Design Verification Confused with Design Validation

The observation: "The firm's design validation activities were verification activities; they confirmed design outputs met design inputs but did not confirm the device meets user needs under actual or simulated use conditions."

Why it matters: Calling a verification test "validation" does not make it validation. If your "validation" only tests against design input specifications under controlled lab conditions with engineers as operators, it is verification regardless of what you titled the protocol.

How to avoid it: Be honest about what your test actually demonstrates. If it tests an output against an input spec, it is verification. If it tests the finished device against user needs under use conditions, it is validation. Some tests can serve both purposes if properly planned, but the protocol must clearly identify which requirements are being verified and which user needs are being validated.

Planning V&V During the Design Input Phase

The biggest mistake companies make with V&V is treating it as something that happens at the end of the design process. By the time you are writing verification and validation protocols, most of the decisions that determine whether V&V will go smoothly have already been made.

Write Testable Requirements

A design input that cannot be verified is a bad requirement. Every design input should be:

  • Specific: "The device shall weigh no more than 2.5 kg" not "the device shall be lightweight"
  • Measurable: A test method and acceptance criterion must be definable
  • Achievable: The requirement must be technically feasible
  • Traceable: It must trace to a user need, regulatory requirement, risk control measure, or standard

When writing design inputs, simultaneously draft the verification method and acceptance criterion. If you cannot articulate how you would test the requirement, rewrite the requirement.
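One lightweight way to enforce this habit is to capture the requirement and its verification method in the same record, so an untestable requirement is visible immediately. A sketch follows; the schema and field names are invented for illustration:

```python
# Hypothetical design-input record: requirement, verification method, and a
# measurable acceptance criterion are drafted together, not sequentially.
design_input = {
    "id": "DI-030",                    # illustrative identifier
    "requirement": "The device shall weigh no more than 2.5 kg",
    "traces_to": "UN-010",             # hypothetical parent user need
    "verification_method": "Test",     # test / inspection / analysis / demonstration
    "test_method": "Weigh on calibrated scale per work instruction",
    "acceptance_criterion_kg": 2.5,
}

def is_testable(req):
    # Testable here means: a verification method is named AND at least one
    # quantitative acceptance criterion field is present.
    return bool(req.get("verification_method")) and any(
        key.startswith("acceptance_criterion") for key in req
    )

print(is_testable(design_input))  # True
```

If you cannot fill in the method and criterion fields, the requirement itself needs rewriting.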

Identify Validation Strategy Early

During design planning, determine:

  1. What validation activities are needed? Simulated-use testing, clinical study, usability validation, biocompatibility evaluation, sterility validation, etc.
  2. What production-equivalent units are needed? How many, from what manufacturing process, at what design revision?
  3. What is the timeline? Clinical studies take 6-24 months to plan, execute, and report. Simulated-use studies require protocol development, user recruitment, and site preparation. Factor these timelines into your design plan.
  4. What are the regulatory expectations? Does your device classification or regulatory pathway require clinical data? Consult the relevant FDA guidance documents, classification regulations, and predicate device 510(k) summaries.

Use Risk Management to Drive V&V Scope

Your risk management process (ISO 14971) directly informs V&V planning:

  • Risk control measures identified in your risk analysis must be verified as effective. If you add a software interlock to prevent an overdose, you must verify that the interlock functions correctly under all specified conditions.
  • Residual risks that are accepted must be evaluated — validation should confirm that the device can be used safely despite known residual risks.
  • Use-related risks identified in your use-related risk analysis (IEC 62366-1) drive the scope of usability validation.

The relationship flows both ways: V&V results feed back into risk management. If verification testing reveals a failure mode not previously identified, the risk file must be updated.

The Relationship Between V&V and Usability Engineering

Usability engineering per IEC 62366-1 has its own lifecycle that parallels and intersects with design controls. Here is how they map:

| Usability Engineering Activity | Design Control Phase | V&V Relevance |
| --- | --- | --- |
| Use specification (intended users, use environments, user interface) | Design input | Feeds user needs and design inputs |
| Use-related risk analysis | Risk management (ongoing) | Drives scope of usability validation |
| Formative usability evaluation | Design verification / design review | Iterative evaluation during development — not formal V&V, but informs design decisions |
| Summative usability evaluation (usability validation) | Design validation | Formal validation of user interface safety and effectiveness |

A critical distinction: formative evaluations are iterative, informal assessments done during development to identify usability issues and refine the design. They are part of your development process but are not design validation. Summative evaluation is the formal, final assessment conducted on the production-equivalent device with representative users — this is design validation.

Do not skip formative evaluations in the hope of "getting it right" in the summative study. The summative study is not the place to discover fundamental usability problems. By the time you reach summative evaluation, the design should be stable and you should have high confidence it will pass.

Design Transfer Verification

Design transfer — the process of translating the device design into production specifications — is required by 21 CFR 820.30(h) and ISO 13485 Clause 7.3.8. Design transfer verification is a critical but often overlooked component of V&V:

What design transfer verification covers:

  • Confirming that all design outputs (drawings, specifications, BOMs, manufacturing procedures, inspection criteria) are complete, accurate, and sufficient for production
  • Verifying that production processes can consistently reproduce the device as designed
  • Confirming that production personnel can manufacture the device using the documented procedures
  • Verifying that inspection and test methods used in production yield results consistent with design verification testing

How design transfer verification relates to other V&V activities:

  • Design transfer verification typically happens after design verification is complete but during or before design validation
  • First-article inspection of production units is a design transfer verification activity — it confirms that the production process is producing devices that match the design outputs
  • Process validation (IQ/OQ/PQ) is closely related to design transfer — it demonstrates that production processes consistently produce conforming output

Process Validation vs. Design Validation

A common source of confusion: process validation and design validation are distinct activities that serve different purposes, though they are related.

| | Design Validation | Process Validation |
| --- | --- | --- |
| Question | Does the device meet user needs? | Does the manufacturing process consistently produce conforming product? |
| Regulatory basis | 21 CFR 820.30(g), ISO 13485 Cl. 7.3.6 | 21 CFR 820.75, ISO 13485 Cl. 7.5.6 |
| When required | For every device design | For processes whose output cannot be fully verified by subsequent inspection and test |
| Performed on | Production-equivalent finished devices | The manufacturing process itself (IQ/OQ/PQ) |
| Focus | Device performance in use conditions | Process capability and consistency |

The two activities are complementary. Design validation confirms you designed the right device. Process validation confirms you can make that device consistently. Both are required before commercial distribution.

Practical tip: If your design validation uses production units (as required), and those units pass validation, this provides supporting evidence that the production process is capable — but it does not replace formal process validation. Conversely, a validated process that consistently produces conforming product does not by itself prove the device meets user needs. Both must be independently demonstrated.

Handling Design Changes and Re-Verification / Re-Validation

Design changes after initial V&V is complete are inevitable. The question is: what do you need to re-do?

The Assessment Framework

For every design change, conduct a change impact assessment that addresses:

  1. Which design inputs are affected? If a design input changes, all verification activities linked to that input must be re-evaluated.
  2. Which design outputs are affected? If an output changes (drawing revision, software update, material change), re-verification of affected requirements is typically needed.
  3. Does the change affect user needs or intended use? If yes, re-validation may be needed.
  4. Does the change affect risk? If the change introduces new hazards, modifies risk controls, or changes residual risk levels, the risk file must be updated and relevant V&V must be repeated.
  5. Does the change affect manufacturing? If a process change could affect device performance, validation on units produced by the new process may be needed.
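The five questions above can be captured as a simple checklist record so that every change order produces a documented, reviewable impact decision. The sketch below is a minimal illustration, not a prescribed format; the class name, field names, and the wording of the resulting actions are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeImpact:
    """Hypothetical change impact assessment record.

    Field names are illustrative only; map them to your own
    change control procedure's terminology."""
    change_id: str
    affected_inputs: set = field(default_factory=set)    # design input IDs
    affected_outputs: set = field(default_factory=set)   # design output IDs
    affects_user_needs: bool = False
    affects_risk_file: bool = False
    affects_manufacturing: bool = False

    def required_actions(self) -> list:
        """Derive the re-V&V actions implied by the five assessment questions."""
        actions = []
        if self.affected_inputs or self.affected_outputs:
            actions.append("re-verify affected requirements")
        if self.affects_user_needs:
            actions.append("re-validate against user needs")
        if self.affects_risk_file:
            actions.append("update risk management file and repeat relevant V&V")
        if self.affects_manufacturing:
            actions.append("assess need for process re-validation")
        if not actions:
            # The assessment itself is the required evidence, even when
            # the conclusion is that no re-V&V is needed.
            actions.append("document 'no re-V&V needed' rationale")
        return actions
```

Note that the "no action" branch still produces an output: as discussed under documentation of the decision, the recorded rationale is itself the required evidence.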

Re-Verification

Re-verification is required when:

  • A design input requirement changes
  • A design output changes in a way that could affect conformance to a previously verified requirement
  • The manufacturing process changes and could affect a verified characteristic

Re-verification can be limited in scope. If a software update only changes the alarm display format, you do not need to re-run flow rate accuracy testing. But you must justify the scope limitation through the change impact assessment.

Re-Validation

Re-validation is required when:

  • The intended use or user needs change
  • The device design changes in ways that could affect safety or effectiveness in use conditions
  • The user interface changes (triggering usability re-validation)
  • Manufacturing changes could produce a device that performs differently in use

Re-validation is more burdensome than re-verification. This is one of the strongest arguments for getting the design right before validation — and for making design inputs complete and correct early in the process.

Practical tip: For software-intensive devices, establish a regression testing strategy early. Every software change triggers a question about what needs to be re-verified and re-validated. A well-designed regression test suite can dramatically reduce the time and cost of re-verification. For re-validation, consider whether the change affects clinical performance, usability, or interoperability — not all software changes require full re-validation.
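A regression strategy like the one described above depends on a traceability map from design inputs to verification test cases: when a requirement changes, the map tells you the minimal set of tests to re-run, and the scope limitation is justified by the map itself. This is a toy sketch under assumed data; the requirement IDs and test names are invented for illustration.

```python
# Hypothetical traceability map: design input ID -> verification test cases.
# In practice this would live in a requirements management tool, not code.
TRACE = {
    "REQ-ALARM-01": ["test_alarm_display", "test_alarm_latency"],
    "REQ-FLOW-03": ["test_flow_rate_accuracy"],
    "REQ-UI-07": ["test_alarm_display"],
}

def regression_scope(changed_requirements):
    """Return the minimal set of verification tests to re-run
    for the given changed requirement IDs."""
    tests = set()
    for req in changed_requirements:
        tests.update(TRACE.get(req, []))
    return sorted(tests)
```

Mirroring the alarm-display example in the text: `regression_scope(["REQ-ALARM-01"])` pulls in the two alarm tests but not `test_flow_rate_accuracy`, and the traceability map is the documented justification for that scope.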

Documentation of the Decision

Whether you decide to re-verify, re-validate, or neither, document the decision and its rationale. A common 483 finding is that design changes were implemented without any assessment of their impact on previous V&V results. The assessment itself — even when the conclusion is "no re-verification or re-validation needed" — is the required evidence.

V&V in Agile and Iterative Development

A frequently asked question: "Can we do V&V in an agile development environment?" The answer is yes, but it requires deliberate planning.

The FDA does not mandate a waterfall development methodology. The 1997 Design Control Guidance explicitly acknowledges the iterative nature of product development. The V-model and the waterfall diagram are conceptual frameworks for understanding traceability — they are not mandated development methodologies.

How V&V adapts to agile/iterative development:

  • Verification can (and should) happen continuously throughout development. Each sprint or iteration that produces a testable design output should include verification of that output. Unit tests, integration tests, and component-level testing are all verification activities that fit naturally into agile workflows.
  • Validation remains a gate activity that occurs on production-equivalent devices. You cannot validate a feature in isolation during a sprint — validation requires the complete, finished device under use conditions. However, you can plan for validation throughout development and use formative evaluations to build confidence.
  • Requirements traceability must be maintained regardless of development methodology. Agile teams sometimes struggle with formal traceability because requirements evolve during development. Use tools that maintain live traceability links between user stories (or requirements), design outputs, and test cases.
  • Design reviews serve as integration points where the iterative development work is assessed against design control requirements. In agile, these may align with sprint reviews, milestone reviews, or phase-gate reviews.

Common pitfall: Some agile teams treat all testing as "V&V" without distinguishing between developer testing (which may not be formal verification) and formal design verification and validation. Code that passes a CI/CD pipeline test may be informally verified, but it is not formally verified unless the test is documented in a verification protocol with pre-defined acceptance criteria, traceable to a design input, and the results are recorded in the DHF.
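To make the distinction concrete: the same assertion can be informal developer testing or formal verification depending on what surrounds it. A minimal sketch, with hypothetical protocol and requirement IDs, of what a CI test needs to carry before it can count as formal verification:

```python
# A CI unit test becomes formal design verification only when it encodes
# the protocol's pre-defined acceptance criterion, traces to a design
# input, and its result is recorded in the DHF. IDs are hypothetical.

def test_flow_rate_accuracy():
    """Protocol VER-014, step 3. Traces to design input DI-042:
    delivered flow shall be within +/-5% of setpoint at 100 mL/h."""
    setpoint = 100.0
    measured = 103.2  # in a real run, this comes from the test fixture
    # Pre-defined acceptance criterion from the approved protocol:
    assert abs(measured - setpoint) / setpoint <= 0.05
```

Without the protocol reference, the traceability link, and the recorded result, the same assertion passing in a CI/CD pipeline is useful developer testing but not objective evidence of verification.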

Summary: V&V Quick Reference

| Aspect | Design Verification | Design Validation |
| --- | --- | --- |
| Regulatory basis | 21 CFR 820.30(f), ISO 13485 Cl. 7.3.5 | 21 CFR 820.30(g), ISO 13485 Cl. 7.3.6 |
| Reference point | Design input requirements | User needs and intended use |
| When | Throughout development | Before design transfer, on production-equivalent units |
| Methods | Testing, inspection, analysis, demonstration | Simulated-use testing, clinical evaluation, usability testing, biocompatibility evaluation |
| Test units | Can be prototypes, subassemblies, or components at any stage | Must be production-equivalent (initial production units or equivalents) |
| Test conditions | Lab/controlled conditions acceptable | Must reflect actual or simulated use conditions |
| Operators | Engineers and technicians acceptable | Representative users for usability; clinical users for clinical studies |
| Output | Protocols, reports, data in DHF | Protocols, reports, data in DHF; clinical evaluation reports |
| Traceability | Design input to verification activity | User need to validation activity |
| Re-execution trigger | Design output or input changes affecting verified characteristics | Changes affecting safety, effectiveness, intended use, or user interface |

Frequently Asked Questions

Can a single test serve as both verification and validation?

Yes, in some cases. A test can serve as both verification and validation if it (1) tests a design output against a design input specification (verification) and (2) simultaneously tests the finished device against a user need under simulated or actual use conditions (validation). For example, EMC testing per IEC 60601-1-2 may verify conformance to the standard-derived design input requirement while also validating that the device works in its intended electromagnetic environment. The key is that the protocol must clearly identify which requirements are being verified and which user needs are being validated, and the test conditions must satisfy both sets of criteria (lab conditions for verification, use conditions for validation).

How do you handle V&V for design changes to marketed devices?

The same change impact assessment framework applies. Evaluate which design inputs, outputs, user needs, risk controls, and manufacturing processes are affected. The scope of re-verification and re-validation should be proportional to the scope and risk of the change. Minor changes (e.g., a cosmetic label update) may require only re-verification of the label against the labeling specification. Major changes (e.g., a new material in a patient-contacting component) may require re-verification of material properties, re-validation of biocompatibility, and potentially a new regulatory submission. Always document the rationale for the scope decision, even when the conclusion is "no re-V&V needed."

Is design validation required for Class I devices?

Design controls, including design validation, are required for Class II and Class III devices and for certain Class I devices listed in 21 CFR 820.30(a)(2). Most Class I devices are exempt from design controls. However, even for exempt devices, design validation is considered a best practice, and companies that skip it entirely may face quality issues and recalls.

Can verification be performed on prototypes?

Yes. Unlike design validation (which requires production-equivalent units), design verification can be performed on prototypes, subassemblies, components, or software builds at any stage of development. The key is that the item being verified must be representative of the design output being evaluated. If you are verifying a dimensional specification, the prototype must be produced using a process that yields dimensionally representative results.

What is the difference between design validation and process validation?

Design validation confirms that the finished device meets user needs and intended use (21 CFR 820.30(g)). Process validation confirms that a manufacturing process consistently produces product meeting its predetermined specifications (21 CFR 820.75). Design validation focuses on the device; process validation focuses on the process. Both are required, and they are complementary — a validated design produced by an unvalidated process is not acceptable, and a validated process producing a device with unvalidated design is equally problematic.

How much does V&V typically cost?

V&V costs vary enormously by device type and regulatory pathway. As a rough guide: IEC 60601-1 electrical safety and EMC testing alone can cost $50,000-$150,000 at a certified test lab. A clinical study for a Class III device can cost $500,000 to several million dollars. A summative usability study with 15-25 representative users typically costs $50,000-$200,000. The most effective way to control V&V costs is to write clear requirements, plan V&V early, and pre-test in-house before committing to expensive external testing. The cost of V&V failure (re-testing, design changes, project delays, 483 observations) almost always exceeds the cost of doing it right the first time.

Do I need to validate off-the-shelf (OTS) software?

If you incorporate OTS software into your medical device (e.g., an operating system, database, DICOM library), the OTS software must be validated within the context of your device. You cannot simply rely on the OTS vendor's testing. FDA's Guidance for Off-The-Shelf Software Use in Medical Devices (1999) requires that you assess the level of concern associated with the OTS software and perform validation activities proportional to that risk. This includes verifying that the OTS software performs correctly within your system and validating the complete device including the OTS components.

Final Thoughts

Design verification and validation are not paperwork exercises. They are the mechanism by which you generate objective evidence that your device is safe and effective. Every shortcut in V&V planning, every ambiguous requirement, every untested user need — these are the gaps that show up as field failures, recalls, 483 observations, and ultimately patient harm.

The companies that do V&V well share common traits: they write testable requirements from day one, they plan validation before they start building, they maintain clean traceability, and they invest in understanding how their device will actually be used. None of this requires exotic tools or massive budgets. It requires discipline, planning, and a genuine understanding of what verification and validation are actually asking you to demonstrate.

Get the fundamentals right, and V&V becomes a straightforward execution problem rather than a scramble at the end of your design project.