
Design Controls for Medical Device Software: A Practical Guide

Author
Bartosz Michalak
Published
January 10, 2026
Last update
March 13, 2026

Key Takeaways

  1. Design controls (ISO 13485 Clause 7.3, FDA 21 CFR 820.30) apply to all medical device software, including SaMD. Skipping them results in failed certification audits and delayed market access.
  2. The design control process has eight stages: planning, inputs, outputs, reviews, verification, validation, transfer, and changes. Each stage produces documented evidence that auditors will inspect.
  3. A requirements traceability matrix connecting user needs to verification tests is the single most audited artifact in software design controls.
  4. Agile development is compatible with design controls. Sprint ceremonies produce the same evidence trail as waterfall phases, just in smaller increments; the documentation requirements do not shrink.
  5. Software companies most often fail audits because they retrofit requirements after coding, skip validation after verification passes, or treat informal code reviews as formal design reviews.


Introduction

Design controls are the systematic process for controlling the design of a medical device from concept through production and maintenance. Defined in ISO 13485 Clause 7.3 and FDA 21 CFR 820.30, they originated in the world of physical devices: implants, surgical instruments, diagnostic equipment. The assumption was a linear hardware development process with distinct manufacturing stages.

Software does not follow that model. Code changes in hours, not months. Deployment happens through pipelines, not factory floors. Releases ship incrementally, not as finished physical units. These differences create friction when software teams encounter design controls for the first time.

But the friction is a tooling problem, not a conceptual one. Design controls exist to answer a straightforward question: can you prove that what you built matches what you intended to build, and that what you intended to build is safe for its intended use? For a pacemaker, this prevents a faulty circuit from reaching a patient. For a clinical decision support application, this prevents a miscalculated risk score from influencing a diagnosis. The stakes are the same. The implementation differs. Software teams that ignore design controls fail certification audits. Software teams that apply hardware-era controls rigidly burn months on documentation that adds no safety value. The goal is finding the right adaptation.

This article explains how design controls work for medical device software, drawing from Momentum's experience as an ISO 13485-certified software development company. We have applied design controls across 100+ healthcare deployments, and we have seen where the process breaks down for software teams and where it works well.

What Design Controls Are (and Why Software Needs Them)

The Purpose of Design Controls

Design controls establish a documented, verifiable connection between what users need, what was designed, what was built, and what was tested. Each step generates evidence that regulators and auditors can inspect. Without this evidence chain, a company cannot demonstrate that its product is safe and effective.

For software, the failure modes differ from hardware but the consequences are equivalent. A broken data pipeline could feed stale lab values to a clinical algorithm. An unvalidated update to a dosage calculation could produce outputs outside safe therapeutic ranges. Design controls provide the structure to catch these issues before deployment.

When Design Controls Apply to Software

The FDA classifies software as a medical device (Software as a Medical Device, or SaMD) when it performs a medical function independent of hardware: diagnosing conditions, monitoring patient status, calculating treatment parameters, or guiding clinical decisions. If your software falls into any of these categories, design controls apply under both FDA 21 CFR 820.30 and ISO 13485 Clause 7.3.

Software that only stores, transfers, or displays data without interpreting it typically does not qualify as SaMD. But the boundary is narrow. A patient portal that displays lab results is not SaMD. The same portal with a feature that flags abnormal results for clinician review may qualify.

The V-Model Framework

The traditional design control framework follows a V-model: user needs feed into design inputs, which feed into design outputs and implementation on the left side. On the right side, verification confirms that implementation matches design outputs, and validation confirms that the final product meets user needs. Modern software teams rarely follow the V-model as a strict sequence, but the relationships it describes remain valid: every requirement needs a corresponding test, and there is a difference between testing whether the code works correctly (verification) and testing whether the product serves its intended users (validation).

The Design Control Process for Software

Design Planning (Clause 7.3.2)

Before development begins, create a design and development plan that defines the scope of work, team responsibilities, review stages, and verification and validation strategies. This is not a sprint plan or a product roadmap. Design planning establishes the quality framework within which development operates.

For software projects, a design plan typically includes: the software safety classification per IEC 62304 (Class A, B, or C), the planned development milestones with associated review gates, the tools and methods for requirements management and traceability, the verification strategy (test types, coverage expectations, automation approach), and the validation strategy (who performs UAT, under what conditions, with what acceptance criteria).

Sprint planning does not replace design planning. Sprints execute work within the framework that design planning establishes.

Design Inputs (Clause 7.3.3)

Design inputs are the requirements that define what the product must do: functional requirements (what the software does), non-functional requirements (performance thresholds, security controls, availability targets), regulatory requirements (HIPAA, GDPR, cybersecurity mandates), interface requirements (integrations with EHR systems, lab instruments, other devices), and risk-derived requirements from ISO 14971 risk analysis.

Each design input must be reviewed, unambiguous, and approved before design work begins. A requirement like "the system should be fast" is not a valid design input. "The system shall return diagnostic results within 3 seconds for 95% of queries under peak load" is. Design inputs must also address intended use and intended user, since a clinical decision support tool used by radiologists has different requirements than the same algorithm embedded in a primary care workflow.
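One way to keep design inputs unambiguous and reviewable is to capture each one as a structured record rather than free text. A minimal Python sketch, with field names and the validity rule as illustrative assumptions (neither ISO 13485 nor 21 CFR 820.30 mandates a particular format):

```python
from dataclasses import dataclass

@dataclass
class DesignInput:
    """A single design input requirement (illustrative structure)."""
    req_id: str             # unique identifier, e.g. "REQ-014"
    statement: str          # testable "shall" statement
    category: str           # functional, performance, regulatory, interface, risk
    intended_use: str       # clinical context the requirement applies to
    approved: bool = False  # must be reviewed and approved before design begins

    def is_valid(self) -> bool:
        # A usable design input is approved and phrased as a testable
        # "shall" statement, not a vague aspiration.
        return self.approved and "shall" in self.statement.lower()

req = DesignInput(
    req_id="REQ-014",
    statement="The system shall return diagnostic results within 3 seconds "
              "for 95% of queries under peak load.",
    category="performance",
    intended_use="clinical decision support for radiologists",
    approved=True,
)
```

A vague input like "the system should be fast" fails the same check, which makes the ambiguity visible before design work begins rather than during an audit.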

Design Outputs (Clause 7.3.4)

Design outputs are the results of the design process: architecture documents, detailed design specifications, database schemas, API specifications, and the source code itself. Outputs must be verifiable against inputs, meaning each output must include or reference acceptance criteria that map to specific input requirements. The depth of documentation scales with risk classification per IEC 62304. A Class C (high-risk) module requires more detailed design documentation than a Class A (low-risk) module.

Design Reviews (Clause 7.3.5)

Design reviews are formal evaluations at defined stages of development. A design review is not a standup or a casual code walkthrough. ISO 13485 requires that design reviews include participants independent of the work being reviewed, evaluate outputs against inputs, document participants and their roles, and record all decisions and action items.

Practical review gates for software projects include: after requirements finalization, after architecture design, after implementation of major components, and before each release. The record must identify who attended, what was reviewed, what issues were found, and who is responsible for each action item. Meeting notes from a Slack thread do not satisfy this requirement.

Design Verification (Clause 7.3.6)

Verification confirms that design outputs meet design inputs. In software terms: does the code do what the requirements say it should do?

Verification evidence for software includes unit test results, integration test results, system test results, security test results, and performance test results. Automated test suites serve as verification evidence when each test maps to a specific requirement (via traceability), records a pass/fail result with timestamp, identifies the software version under test, and is executed in a controlled test environment.

A test suite with 2,000 passing tests provides no verification value if those tests cannot be linked to specific requirements. Conversely, a traceability matrix showing 100% requirement coverage with corresponding test results is strong verification evidence, even if the total test count is modest.
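The requirement-to-test link can be carried directly in the test code. A minimal sketch in plain Python, where the `verifies` decorator, the requirement IDs, and the `calculate_dose` function are all illustrative assumptions (test frameworks like pytest offer markers that serve the same purpose):

```python
# Tag each test with the requirement it verifies, so pass/fail results
# can be reported per requirement. All names here are illustrative.

TRACE = {}  # requirement ID -> list of test function names

def verifies(req_id):
    """Decorator linking a test function to the requirement it verifies."""
    def wrap(fn):
        TRACE.setdefault(req_id, []).append(fn.__name__)
        fn.req_id = req_id
        return fn
    return wrap

def calculate_dose(weight_kg, mg_per_kg):
    # Hypothetical function under test.
    return round(weight_kg * mg_per_kg, 1)

@verifies("REQ-021")
def test_dose_is_weight_proportional():
    assert calculate_dose(70, 0.5) == 35.0

@verifies("REQ-022")
def test_dose_handles_fractional_rate():
    assert calculate_dose(80, 0.25) == 20.0

# Executing the tagged tests produces verification evidence keyed by
# requirement; a CI run would add timestamps and the version under test.
for req_id, tests in TRACE.items():
    for name in tests:
        globals()[name]()  # raises AssertionError on failure
```

The point is the mapping, not the mechanism: any runner that records which requirement each passing test covers produces usable verification evidence.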

Design Validation (Clause 7.3.7)

Validation confirms that the product meets user needs and intended use. Verification asks "did we build it right?" Validation asks "did we build the right thing?"

Validation must occur on the finished product (or a production-equivalent build), not on a development prototype. It typically involves user acceptance testing with representative users performing representative tasks under representative conditions. For clinical software, this may include clinical validation studies where the software is evaluated against known clinical outcomes.

Validation cannot be skipped because verification passed. A software module might pass all unit and integration tests while still failing to meet user needs due to workflow mismatches, confusing interface design, or performance issues that only appear with production data volumes.

Design Transfer (Clause 7.3.8)

Design transfer moves the product from development to production. For hardware, this means manufacturing transfer. For software, it means deployment: the release pipeline, production environment configuration, infrastructure provisioning, monitoring setup, and rollback procedures.

The transfer process must ensure that the exact software build that was verified and validated is the one deployed to production. This requires version control, build reproducibility, and deployment audit trails. A CI/CD pipeline with tagged releases, automated builds from specific commits, and deployment logs satisfies this requirement when properly documented.
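The "verified build equals deployed build" requirement reduces to comparing artifact fingerprints. A minimal sketch, assuming a checksum is recorded when verification and validation complete (the payloads and version strings are placeholders):

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """SHA-256 fingerprint of a release artifact."""
    return hashlib.sha256(data).hexdigest()

# Recorded at the moment verification and validation passed (illustrative).
verified_digest = artifact_digest(b"release-1.4.2 build payload")

def gate_deployment(candidate: bytes, expected_digest: str) -> bool:
    # Deployment gate: refuse any artifact that is not byte-identical
    # to the build that was verified and validated.
    return artifact_digest(candidate) == expected_digest

# The validated build passes the gate; any other build is rejected.
ok = gate_deployment(b"release-1.4.2 build payload", verified_digest)
tampered = gate_deployment(b"release-1.4.3 build payload", verified_digest)
```

In practice this check lives inside the CI/CD pipeline, with the digest stored alongside the tagged release so the deployment log itself becomes the audit trail.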

Design Changes (Clause 7.3.9)

Every change to an approved design must be reviewed, verified, and validated before implementation. Bug fixes, feature additions, dependency updates, and infrastructure modifications all fall under change control.

The level of review scales with the risk of the change. A cosmetic UI adjustment requires less review than a modification to a dosage calculation algorithm. But both require documented review and approval. Pull request workflows with documented approvals, linked requirements, and automated test execution provide a practical change control mechanism for software teams.

The Requirements Traceability Matrix

The requirements traceability matrix (RTM) is the single most important artifact in medical device software design controls. It links: User Need to Requirement to Design Specification to Implementation to Verification Test to Validation Test. Every requirement traces forward to at least one test, and every test traces backward to at least one requirement. Gaps in either direction are audit findings.

The RTM does not require specialized tooling. Dedicated platforms (Jama, Polarion, IBM DOORS) provide built-in traceability features. Smaller teams achieve equivalent results with structured spreadsheets, Jira with custom fields, or GitHub Issues with labels. The tool matters less than the discipline: the matrix must be complete, current, and maintained throughout the project lifecycle.

A practical approach: assign each requirement a unique identifier (e.g., REQ-001), reference it in design documents, link it to implementation artifacts (commits, pull requests), and tag test cases with the requirement ID. This creates the bidirectional traceability that auditors expect without a separate document that goes stale.
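The bidirectional check itself is simple enough to automate, whatever tool holds the data. A minimal sketch, where the requirement IDs, test case IDs, and data layout are illustrative assumptions:

```python
# Traceability check: every requirement must trace forward to at least
# one test, and every test must trace backward to a known requirement.
# Gaps in either direction are audit findings.

requirements = {"REQ-001", "REQ-002", "REQ-003", "REQ-004"}

# test case ID -> requirement IDs it covers (e.g. parsed from test tags)
test_coverage = {
    "TC-101": {"REQ-001"},
    "TC-102": {"REQ-002", "REQ-003"},
    "TC-103": {"REQ-009"},  # cites a requirement that no longer exists
}

covered = set().union(*test_coverage.values())
untested_reqs = requirements - covered  # forward-trace gaps
orphan_refs = covered - requirements    # backward-trace gaps

print("Requirements without tests:", sorted(untested_reqs))
print("Tests citing unknown requirements:", sorted(orphan_refs))
```

Run as a CI step, a check like this keeps the matrix current instead of letting a separate document go stale.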

Design Controls in Agile Development

Design controls specify what evidence must exist, not the process for generating it. Agile ceremonies produce the same evidence that waterfall phases produce, organized differently. Here is how they map:

  • Sprint planning refines the design plan with specific work items and acceptance criteria for each iteration.
  • User stories with acceptance criteria serve as design inputs when they are traceable, reviewed, and approved.
  • Architecture documents, code, and configuration produced during sprints constitute design outputs, traceable to the requirements that prompted them.
  • Automated test suites in CI pipelines provide design verification evidence with timestamped pass/fail records per software version.
  • Sprint demos with clinical stakeholders serve as interim design reviews when they include documented attendees, decisions, and action items.
  • Release-level UAT constitutes design validation. Each release candidate undergoes testing with representative users before deployment.
  • CI/CD pipelines with approval gates implement design transfer with tagged releases and deployment audit trails.
  • Pull request workflows with documented approvals implement change control, linking each PR to requirements and test results.

The documentation requirement does not decrease with agile. Auditors do not care whether you used two-week sprints or six-month phases. They examine whether the evidence trail from requirement to verified, validated output is complete and consistent.

Common Design Control Mistakes in Software Companies

Retrofitting Requirements After Coding

The most frequent failure: building the software first, then writing requirements to match what was built. This inverts the purpose of design controls. Design inputs are supposed to guide design, not describe it after the fact. Auditors recognize retrofitted requirements because they perfectly match the implementation without evidence of review, revision, or rejected alternatives.

Conflating Code Reviews with Design Reviews

Code reviews evaluate implementation quality: code style, error handling, test coverage, performance. Design reviews evaluate whether the design satisfies requirements and whether development should proceed to the next stage. A code review does not assess whether the architectural approach meets the clinical workflow requirements. Both are necessary, and they serve different purposes.

Missing Traceability for Non-Functional Requirements

Functional requirements (what the system does) typically have good traceability in software teams. Non-functional requirements (performance, security, availability, scalability) often lack formal verification. If the requirements specify a 99.9% uptime SLA or a maximum 200ms API response time, those requirements need corresponding tests with documented results.
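A non-functional requirement like the 200 ms response-time example above can be verified the same way as a functional one. A hedged sketch against a stand-in handler (the handler, the run count, and the threshold are illustrative assumptions, not a load-testing methodology):

```python
import time

MAX_RESPONSE_MS = 200  # from the (illustrative) non-functional requirement

def handle_request():
    """Stand-in for the API endpoint under test."""
    time.sleep(0.005)  # simulate 5 ms of work
    return {"status": "ok"}

def worst_latency_ms(fn, runs=20):
    """Worst-case latency in milliseconds over several runs."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        worst = max(worst, (time.perf_counter() - start) * 1000)
    return worst

worst_ms = worst_latency_ms(handle_request)
assert worst_ms < MAX_RESPONSE_MS, f"latency {worst_ms:.1f} ms exceeds requirement"
```

Recording the measured value alongside the requirement ID turns this from an ad hoc check into verification evidence with documented results.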

Skipping Validation Because Verification Passed

Verification proves the code works as specified. Validation proves it works for users in the intended context. A clinician workflow that passes all automated tests may still be unusable in a busy emergency department because the interaction model does not fit the clinical pace. Validation with representative users catches issues that verification cannot.

Ignoring Change Control for Minor Fixes

A one-line change to a clinical algorithm is a design change. A dependency update that patches a security vulnerability is a design change. An infrastructure configuration change that affects data processing latency is a design change. The scope of re-verification and re-validation scales with risk, but the change control process (documented review, approval, rationale) applies to all changes.

How Momentum Implements Design Controls

Momentum is ISO 13485 certified and applies design controls across all medical device software projects. Our approach integrates design controls into the development workflow rather than layering them on as a separate compliance exercise.

Requirements traceability is maintained from project initiation through deployment and maintenance. Each requirement links to design artifacts, implementation, verification tests, and validation evidence. We use IEC 62304 for software lifecycle management and ISO 14971 for risk management, with design controls providing the structure that connects both.

This approach has supported regulatory submissions across FDA, EU MDR, and other global markets. The same design control evidence that satisfies ISO 13485 auditors feeds into the technical documentation required for regulatory clearance.

Building medical device software and need help with design controls?

Momentum is ISO 13485 certified with 100+ healthcare deployments. Learn about our compliance capabilities or explore our ISO 13485 guide for software companies.

For teams preparing for audits, our ISO 13485 internal audit checklist covers what to review before your certifying body arrives.

Frequently Asked Questions

Are design controls required for all medical device software?
Yes, if the software qualifies as SaMD. The FDA requires design controls under 21 CFR 820.30 for all Class II and Class III devices, and recommends them for Class I devices involving software. ISO 13485 requires design controls for all medical devices, including software. If your software diagnoses, monitors, calculates treatment parameters, or guides clinical decisions, design controls apply.

Written by Bartosz Michalak

Director of Engineering
He drives healthcare open-source development at the company, translating strategic vision into practical solutions. With hands-on experience in EHR integrations, FHIR standards, and wearable data ecosystems, he builds bridges between healthcare systems and emerging technologies.

