MedDeviceGuide

Content of Human Factors Information in FDA Medical Device Marketing Submissions: What to Include in Your 510(k), De Novo, or PMA Human Factors Package

How to determine what human factors information to include in FDA medical device marketing submissions — three risk-based HF submission categories, critical task identification, use-related risk analysis, validation testing requirements, and documentation structure under the FDA's draft guidance on content of human factors information.

Ran Chen
Global MedTech Expert | 10× MedTech Global Access
2026-05-12 · 13 min read

Why the FDA's Human Factors Submission Guidance Matters

The FDA's December 2022 draft guidance "Content of Human Factors Information in Medical Device Marketing Submissions" introduces a risk-based categorization system that determines exactly what human factors documentation you need to include in your 510(k), De Novo, PMA, or HDE submission. This guidance is on the FDA's FY2026 B-list for finalization, meaning the FDA intends to finalize it in the current fiscal year. It replaces the previous approach where the FDA published a "List of Highest Priority Devices for Human Factors Review" — a prescriptive list of device types that required HF data — with a more generalizable, process-based framework that applies to all devices.

The practical impact is significant. Under the old approach, many manufacturers could determine whether HF data was needed by checking whether their device type appeared on the priority list. Under the new framework, every manufacturer must conduct a use-related risk analysis to determine their HF submission category, and even devices with no critical tasks must include some level of HF documentation in their submission. Industry observers have noted that this approach will dramatically expand the number of submissions that include HF data, because the threshold is set low: if even one critical task is identified for a new or modified device, human factors data must be addressed in the submission.

This guide explains the three HF submission categories, how to determine which category applies to your device, what documentation each category requires, how critical tasks are identified, and how the FDA's framework aligns with IEC 62366-1 usability engineering requirements.

The Three HF Submission Categories

The guidance introduces a flowchart-based decision process that routes every submission into one of three categories based on two factors: whether the device is new or modified, and whether critical tasks exist or are impacted by the change.

HF Submission Category 1: Minimal Documentation

Category 1 applies to:

  • Modified devices where there are no changes to the user interface, intended device users, intended device uses, intended use environments, training, or labeling

For Category 1, you provide:

  • A conclusion and high-level summary stating that the modification does not affect human factors considerations
  • Rationale for why no HF data is needed

Category 1 is the narrowest category. It applies only to modified devices where the change is truly unrelated to how users interact with the device — for example, a backend software update that does not change the user interface, or a material change in an internal component that has no effect on user interactions.

HF Submission Category 2: Moderate Documentation

Category 2 applies to:

  • New devices that do not have any critical tasks
  • Modified devices where changes affect use-related hazards but do not add or impact critical tasks (for example, labeling changes that do not affect any critical tasks)

For Category 2, you provide:

  • A conclusion and high-level summary
  • Descriptions of intended device users, uses, use environments, and training
  • Description of the device-user interface
  • Summary of known use problems
  • Rationale for why there are no critical tasks (new devices) or why no critical tasks are added or impacted (modified devices)

Category 2 requires the "front end" of the HFE report — the contextual information and preliminary analyses — but does not require validation testing data because no critical tasks are identified or impacted.

HF Submission Category 3: Full HFE Report

Category 3 applies to:

  • New devices with critical tasks
  • Modified devices where the change introduces new critical tasks or impacts existing critical tasks

For Category 3, you provide a comprehensive human factors engineering report including:

  • All elements required for Category 2
  • Summary of preliminary analyses and evaluations
  • Use-related risk analysis
  • Identification and description of critical tasks
  • Details of human factors validation testing of the final design

Category 3 is the most documentation-intensive category. It requires the full HFE report with validation testing data that demonstrates intended users can perform critical tasks safely and effectively under realistic use conditions.

How to Determine Your HF Submission Category

The decision flowchart follows these steps:

For new devices:

  1. Is this a new device? → Yes
  2. Based on the use-related risk analysis, are there critical tasks? → If No → Category 2; If Yes → Category 3

A new device submission will never fall under Category 1. If the device has no critical tasks, it falls into Category 2. If any critical tasks exist, it falls into Category 3.

For modified devices:

  1. Is this a modified device? → Yes
  2. Is there a change to any of the following: user interface, intended device users, intended device uses, intended use environments, training, or labeling? → If No → Category 1; If Yes → Continue
  3. Based on the use-related risk analysis, are new critical tasks introduced or are existing critical tasks impacted? → If No → Category 2; If Yes → Category 3

For modified devices, the analysis is more nuanced. Even if the change affects the user interface or labeling, if it does not create or impact critical tasks, it falls into Category 2. But if the change affects any critical task, a full HFE report with validation testing is required.
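The two flowcharts above collapse into a small decision function. The sketch below is an illustration of the routing logic only — the field names (`is_modified`, `ui_or_context_changed`, `critical_tasks_present`) are hypothetical labels for the flowchart's questions, and the answers themselves still come from your use-related risk analysis, not from code.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    is_modified: bool                    # False for a new device
    ui_or_context_changed: bool = True   # UI, users, uses, environments, training, or labeling
    critical_tasks_present: bool = True  # new device: any critical tasks exist;
                                         # modified device: new or impacted critical tasks

def hf_category(sub: Submission) -> int:
    """Route a submission to HF Category 1, 2, or 3 per the draft guidance flowchart."""
    if sub.is_modified and not sub.ui_or_context_changed:
        return 1  # the change cannot affect how users interact with the device
    # New devices, or modified devices whose change touches the use context:
    return 3 if sub.critical_tasks_present else 2
```

Note how the structure enforces the rule stated above: a new device (`is_modified=False`) can never reach Category 1 — it lands in Category 2 or 3 depending on critical tasks.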


Identifying Critical Tasks

The concept of a "critical task" is central to the categorization framework. The guidance defines a critical task as a user task which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user, where harm is defined to include compromised medical care.

What Counts as Serious Harm

The guidance defines harm to include compromised medical care and treatment. This is broader than physical injury alone. A use error that leads to a missed diagnosis, delayed treatment, or incorrect medication dose could constitute serious harm even if no immediate physical injury occurs. This broad definition means more tasks may qualify as critical than manufacturers initially expect.

The Critical Task Identification Process

Critical tasks are identified through the use-related risk analysis, which is part of the broader risk management process under ISO 14971. The process involves:

  1. Task analysis. Decompose all user interactions with the device into individual tasks and sub-tasks, from initial setup through use, maintenance, and disposal.

  2. Use-related risk analysis. For each task, analyze what could go wrong if the user performs it incorrectly or fails to perform it. Assess the severity of potential harm. Tasks where use errors could lead to serious harm are designated as critical.

  3. Severity-based categorization. The guidance instructs manufacturers to provide a table of severity levels with descriptions of each level. Tasks that meet the threshold for serious harm are classified as critical.

The list of critical tasks is dynamic. As the device design evolves and preliminary analyses progress, additional critical tasks may be identified. The final list of critical tasks is used to structure the human factors validation test to ensure it covers the tasks that relate to device use safety and effectiveness.
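The three steps above can be sketched as a severity-based filter over the task analysis. This is a minimal illustration, not a substitute for a documented use-related risk analysis: the severity scale and threshold are placeholder values you would define in your own severity table, and the task and use-error names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative threshold: the guidance asks manufacturers to define and
# describe their own severity levels in a table; serious harm includes
# compromised medical care, not only physical injury.
SERIOUS_HARM_THRESHOLD = 3

@dataclass
class UseError:
    description: str
    severity: int  # per the manufacturer-defined severity table

@dataclass
class Task:
    name: str
    use_errors: list  # potential use errors identified for this task

def critical_tasks(tasks):
    """A task is critical if any associated use error could cause serious harm.
    Per the guidance, frequency of occurrence is NOT part of this estimate —
    only severity of the potential harm is considered."""
    return [t for t in tasks
            if any(e.severity >= SERIOUS_HARM_THRESHOLD for e in t.use_errors)]
```

For example, a dose-setting task with a use error "selects 10× the intended dose" (severity 4) would be flagged as critical, while "tears the carton label" (severity 1) would not.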

What the FDA Expects in Your Critical Task Analysis

For your submission, you should:

  • Explain the process used to identify critical tasks
  • List and describe all critical tasks
  • For Category 3 submissions, provide a separate table highlighting any new critical tasks and rationale for why any of those tasks do not warrant new HF validation test data to support safe use
  • Describe each use scenario included in the validation testing and list the critical and non-critical tasks within each use scenario

The Human Factors Engineering Report Structure

For Category 3 submissions, the comprehensive HFE report should follow the structure outlined in the guidance. While the FDA does not mandate a rigid template, comprehensive submissions typically include the following sections.

Section 1: Conclusion and High-Level Summary

A brief executive summary of the HF evaluation findings and the overall conclusion about device use safety. This should state whether the validation testing demonstrated that the critical tasks can be performed safely and effectively.

Section 2: Intended Device Users, Uses, Use Environments, and Training

Describe who will use the device, what they will use it for, where they will use it, and what training they will receive. User groups should be defined by their relevant characteristics (professional training, experience level, physical capabilities, cognitive abilities). The FDA has been explicit that training alone is not an adequate risk control for critical tasks, and that controls relying on labeling or warnings must be shown to be effective through validation testing.

Section 3: Device-User Interface

Describe the user interface elements — controls, displays, alarms, software interfaces, physical form factors, labeling. The description should be detailed enough for the reviewer to understand how users interact with the device.

Section 4: Summary of Known Use Problems

Summarize known use problems from the predicate device, similar devices, complaint data, and published literature. This section demonstrates awareness of existing use-related risks and how the current device design addresses them.

Section 5: Summary of Preliminary Analyses and Evaluations

Describe the formative evaluation methods used — contextual inquiry, cognitive walkthrough, expert review, formative usability testing — and summarize the key findings. Describe any design modifications made in response to formative evaluation findings.

Section 6: Use-Related Risk Analysis

Present the use-related risk analysis for the device. This analysis should identify use errors that could occur with each critical task, the potential consequences, and the risk mitigation measures implemented through design, labeling, or training. For modified devices where critical tasks are impacted but existing risk control measures remain acceptable, provide the rationale for why new validation data is not needed.

A key point from the guidance: the FDA's use-related risk analysis does not include frequency of occurrence in the estimation of use-related risk. This is a departure from some traditional risk matrices that combine severity and probability. For HF purposes, the focus is on the severity of potential harm from use errors, not how frequently those errors are expected to occur.

Section 7: Identification and Description of Critical Tasks

List and describe all critical tasks identified through the use-related risk analysis. For each critical task, explain why it meets the threshold for serious harm. For modified devices, highlight new critical tasks in a separate table and provide rationale for why any new tasks do or do not warrant new validation testing.

Section 8: Details of HF Validation Testing

This section describes the summative (validation) usability test, which demonstrates that intended users can perform critical tasks safely and effectively under realistic use conditions. The FDA recommends:

  • Participants: A minimum of 15 completed usability testing sessions per user group, with additional participants scheduled to account for cancellations (generally 18-20 per group)
  • Use scenarios: All critical tasks should be included in the test, organized into realistic use scenarios that represent natural workflow
  • Realistic conditions: The test environment should simulate actual conditions of use as closely as possible
  • Final design: The device tested should represent the final design that would be marketed
  • Knowledge tasks: Some tasks may be assessed through knowledge questions rather than direct observation (e.g., "What would you do if the alarm sounds?")

If residual risks are found to be unacceptable during validation testing, the guidance sets the expectation that additional risk control and mitigation measures must be implemented and re-evaluated. This may require design changes followed by additional testing.
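The participant-count recommendation above is simple enough to plan mechanically. The sketch below assumes a 25% overage rate to absorb cancellations — that rate is an illustrative planning assumption, not an FDA figure; the guidance recommends at least 15 completed sessions per distinct user group, and scheduling 18–20 is a common rule of thumb.

```python
import math

def scheduled_participants(user_groups, min_completed=15, overage_rate=0.25):
    """Plan validation-test recruitment: schedule extra sessions per user group
    so that at least `min_completed` sessions finish despite cancellations.
    Returns a dict mapping each user group to the number of sessions to book."""
    return {g: math.ceil(min_completed * (1 + overage_rate)) for g in user_groups}
```

For a device with two distinct user groups, say healthcare professionals and lay users, `scheduled_participants(["HCPs", "lay users"])` books 19 sessions per group — inside the 18–20 range cited above.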

For Modified Devices: When Is New HF Data Required

The guidance provides a structured approach for modified devices that is more nuanced than for new devices.

No changes to user interface, users, uses, environments, training, or labeling: Category 1. Provide a brief conclusion and summary.

Changes to user interface, users, uses, environments, training, or labeling, but no new or impacted critical tasks: Category 2. Provide contextual information, preliminary analyses, and rationale for why no critical tasks are impacted.

Changes that introduce new critical tasks or impact existing critical tasks: Category 3. Provide the full HFE report including validation testing. The validation test may be limited to assessment of those aspects of user interactions and tasks that were affected by the design modifications — you do not need to re-validate the entire device if the change is localized.

An important consideration: even if a modified device previously did not require HF data for clearance (for example, if it was cleared under the old priority list framework and its device type was not on the list), the new guidance applies to all subsequent submissions. If the modification impacts critical tasks, HF validation data is now required regardless of whether it was required for the original clearance.


Alignment with IEC 62366-1

The FDA's HF submission guidance operates alongside IEC 62366-1:2015+A1:2020, the international standard for usability engineering of medical devices. IEC 62366-1 is recognized by the FDA as a consensus standard, meaning a declaration of conformity can satisfy part of the FDA's premarket review requirements.

Some terminology differences exist that manufacturers should be aware of:

  • The FDA uses "human factors validation testing" while IEC 62366-1 uses "summative evaluation" — the terms are synonymous
  • The FDA's "critical task" concept aligns with IEC 62366-1's concept of tasks related to use-related risks that could result in serious harm
  • The FDA's use-related risk analysis is similar to the use-related risk assessment in IEC 62366-1

For EU MDR submissions, the Notified Body will expect a Usability Engineering File (UEF) under IEC 62366-1 as part of the conformity assessment. The documentation you prepare for the FDA's HF submission categories can largely be used to populate the UEF, though the EU expects the full usability engineering process documentation rather than the FDA's submission-focused summary.

Practical Recommendations

  1. Start your use-related risk analysis early. The HF submission category is determined by the presence or absence of critical tasks. If you identify critical tasks late in the development process, you may need to conduct validation testing on a compressed timeline.

  2. Conduct formative testing throughout development. Formative evaluations help identify use-related risks early, allowing you to make design changes before the final validation test. They also provide the preliminary analysis documentation the FDA expects in Categories 2 and 3.

  3. Plan for Category 3 as the default for new devices. Most new devices with any user interaction will have at least one critical task. Budget and plan for a full HFE report with validation testing unless you have a strong rationale for why no critical tasks exist.

  4. Document everything. The FDA's reviewers will evaluate the quality and completeness of your HF documentation. Incomplete use-related risk analyses, missing critical task identification, or insufficient validation testing data are common reasons for additional information requests.

  5. Consider global requirements simultaneously. If you are pursuing both FDA and EU MDR clearance, design your usability engineering process to satisfy both IEC 62366-1 and the FDA's HF submission guidance from the start. The underlying process is the same; the submission documentation differs in format and emphasis.