How Does a Small Medical Device Company Afford the Requirements Rigor That IEC 62304 and FDA Expect?
The question came in from the head of engineering at a 25-person medical device startup preparing for their first FDA 510(k) submission. She put it plainly: “We have eight engineers, two of whom know what IEC 62304 is. We don’t have a full-time RA person. How do we actually do this without the whole company grinding to a halt?”
It’s the right question, and it deserves a straight answer rather than a consultant’s sales pitch.
Here is the honest baseline: FDA and IEC 62304 do not have a small-company exemption. The regulatory expectation is risk-proportional—scaled to the safety class of your software—not size-proportional. A 25-person startup shipping a Class B medical device software system is held to the same documentation and traceability standard as a 2,500-person division of a large medtech company. The difference is you have fewer people to do it.
That said, the rigor required for Class B is achievable without a dedicated regulatory team, provided you understand what the standard actually asks for and build your process around what reviewers actually check.
What IEC 62304 Actually Requires at Class B
IEC 62304 classifies medical device software into three safety classes:
- Class A: No injury or damage to health possible.
- Class B: Non-serious injury possible.
- Class C: Death or serious injury possible.
Most connected diagnostic tools, monitoring apps, and software-controlled device interfaces land at Class B. Class C adds detailed design documentation for each software unit and stricter architectural segregation requirements—important, but out of scope here.
For Class B, the standard requires:
- Software development planning — A documented plan that identifies your lifecycle process, the standards you’re following, and how you’ll handle risk.
- Software requirements analysis — Documented requirements that are testable, traceable to the system-level design, and linked to your hazard analysis.
- Software architectural design — A description of the software items and their interfaces.
- Software unit implementation and verification — Evidence that individual units were tested, not just that the product “works.”
- Software integration and integration testing — Evidence that components work together as specified.
- Software system testing — Test cases linked to requirements, with pass/fail records.
- Software release — A documented release process including a SOUP list, known anomalies, and version identification.
That’s a real process. But notice what it does not require at Class B: formal proof of correctness, exhaustive static analysis, or the complete architectural verification burden of Class C. The workload is real, but it’s calibrated.
What a Credible SOUP List Actually Looks Like
Software of Unknown Provenance (SOUP) is any software component your team did not develop in-house that contributes to device functionality. This includes open-source libraries, commercial SDKs, operating system components, and cloud service dependencies.
FDA reviewers and notified bodies look at SOUP lists carefully, because SOUP is where teams take shortcuts and where supply chain risk lives.
A credible SOUP list is not a package manifest. A requirements.txt file dumped into your DHF (Design History File) is not a SOUP list. Here is what a reviewable SOUP entry requires:
For each SOUP item:
- Title and version — Not a version range. A specific pinned version. “numpy ≥ 1.21” is not acceptable. “numpy 1.26.4” is.
- Manufacturer/source — PyPI, GitHub repository with commit hash, commercial vendor.
- Intended use within the device — One or two sentences explaining what this component does in your system.
- Functional and performance requirements — What you are relying on this component to do correctly. This is what you test against.
- Anomaly list — The component’s known bugs and CVEs, plus evidence that you review them at defined intervals under a documented monitoring process.
- Risk contribution — A brief assessment tied to your hazard analysis. If this component fails or behaves unexpectedly, what is the patient safety impact?
FDA 510(k) reviewers do not read every line of your SOUP list, but they do spot-check it. A list with pinned versions, anomaly monitoring records, and risk assessments passes. A flat list of library names does not.
Practically, teams should maintain their SOUP list in the same system that holds their requirements—not in a separate document. When a SOUP version changes, that change should trigger a review of the affected requirements and tests automatically, not require a separate manual notification process.
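As a concrete illustration, a SOUP entry can be captured as structured data rather than a flat name list. Here is a minimal sketch in Python; the field names and the 90-day review window are illustrative choices, not something IEC 62304 prescribes:

```python
import re
from dataclasses import dataclass
from datetime import date

# Illustrative structure for one SOUP entry. Field names are not mandated
# by IEC 62304 -- they mirror the information the standard asks for.
@dataclass
class SoupEntry:
    title: str
    version: str                        # must be an exact pinned version
    source: str                         # e.g. PyPI name, repo URL + commit hash
    intended_use: str                   # what the component does in the device
    functional_requirements: list[str]  # IDs of requirements it must satisfy
    risk_assessment: str                # summary tied to the hazard analysis
    last_anomaly_review: date           # when CVEs/known bugs were last reviewed

PINNED = re.compile(r"^\d+(\.\d+)+$")   # rejects ranges like ">=1.21"

def validate(entry: SoupEntry) -> list[str]:
    """Return the problems that would make this entry non-reviewable."""
    problems = []
    if not PINNED.match(entry.version):
        problems.append(f"{entry.title}: version '{entry.version}' is not pinned")
    if not entry.functional_requirements:
        problems.append(f"{entry.title}: no linked requirements to test against")
    if (date.today() - entry.last_anomaly_review).days > 90:
        problems.append(f"{entry.title}: anomaly review older than 90 days")
    return problems
```

An entry like `SoupEntry("numpy", ">=1.21", "PyPI", ...)` fails validation on the version field alone, which is exactly the check a reviewer performs by eye.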
What Traceability Records FDA Reviewers Actually Check
The traceability chain FDA expects for software runs in both directions:
Forward traceability: User need → System requirement → Software requirement → Test case → Test result.
Backward traceability: Test result → Test case → Software requirement → System requirement → User need.
The purpose of this chain is not paperwork—it is evidence that everything you said you would build, you built, and that everything you tested connects to something a user actually needs. Gaps in the chain are exactly what generates FDA Form 483 observations.
In practice, reviewers look for:
- Orphaned requirements — Software requirements with no corresponding test case. These appear when requirements are added late or tests are added without being linked back.
- Untested user needs — User needs that never get decomposed into software requirements. This often happens at the boundary between the system specification and the software specification.
- Changed requirements with no re-test evidence — A requirement was modified after testing. Did you re-test? Is there documented evidence that you did?
- SOUP behavior relied upon but not tested — You depend on a library for a safety-relevant calculation. Is there a test that verifies that library produces correct output with your specific inputs?
The traceability matrix (often called an RTM—Requirements Traceability Matrix) is the artifact that makes these checks possible. For a Class B submission, your RTM needs to show complete forward and backward coverage for every software requirement in the system.
This does not mean you need a formal tool. A well-maintained spreadsheet with disciplined link management can satisfy the standard. But spreadsheet RTMs break down quickly when requirements change—links go stale, rows get deleted, and coverage gaps appear without anyone noticing.
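The first two reviewer checks above can be automated with very little machinery, even without a formal tool. A minimal sketch, assuming requirements and their links are kept as plain dictionaries (the IDs and data shapes are invented for illustration):

```python
# Minimal RTM coverage audit over illustrative data.
requirements = {"SRS-1", "SRS-2", "SRS-3"}
user_needs = {"UN-1": {"SRS-1", "SRS-2"}, "UN-2": set()}  # need -> decomposed SRS
tests = {"TC-1": "SRS-1", "TC-2": "SRS-1"}                # test case -> requirement

def orphaned_requirements():
    """Software requirements with no test case linked to them."""
    covered = set(tests.values())
    return sorted(requirements - covered)

def untested_user_needs():
    """User needs never decomposed into software requirements."""
    return sorted(need for need, srs in user_needs.items() if not srs)

print(orphaned_requirements())  # -> ['SRS-2', 'SRS-3']
print(untested_user_needs())    # -> ['UN-2']
```

The point is not this particular script; it is that coverage gaps are mechanically detectable, so they should never be discovered for the first time by a reviewer.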
How Small Teams Get Buried—and How They Don’t
The teams that drown in IEC 62304 compliance treat documentation as a separate job that runs parallel to engineering. Engineers write code, then someone—often whoever drew the short straw—goes back and writes down what was built, adds it to a document, and tries to reconstruct the traceability chain from memory.
This approach is unworkable at any size, but it is fatal at 25 people.
The teams that stay afloat treat traceability as a byproduct of normal engineering work. Requirements are written before code, not after. Tests are written against those requirements, with explicit links. When a requirement changes, the system surfaces which tests need to be re-run. When a SOUP component is updated, the change is recorded and reviewed against the affected requirements automatically.
The discipline required is real. But the overhead is contained when the tooling enforces the structure rather than leaving it to individual engineers to maintain manually.
How Flow Engineering Fits Into a Small Medical Device Team’s Process
In the second half of the lifecycle—after you have defined your process and understand what records you need—the bottleneck becomes maintenance. Requirements change. SOUP updates happen. Test coverage drifts. And without a dedicated regulatory affairs engineer watching the whole system, gaps accumulate quietly until a submission or an audit surfaces them.
This is the specific context where small medical device teams have found Flow Engineering useful.
Flow Engineering is built on a graph-based model of system requirements rather than a document-based one. Every requirement is a node. Every link between a user need, a system requirement, a software requirement, a test case, and a SOUP dependency is a typed edge in that graph. When any node changes, the system can immediately surface which downstream nodes are affected and which traceability links may no longer be valid.
For a Class B software team, this means a few concrete things:
Change control without a change control bureaucracy. When a software requirement is modified, Flow Engineering flags all downstream test cases that link to it and prompts for re-verification evidence before the requirement can be marked as resolved. This is the behavior IEC 62304 requires for software change management—and it happens automatically rather than depending on an engineer remembering to check the RTM.
SOUP integration into the requirements graph. SOUP dependencies are not a separate document—they are nodes in the same graph as your software requirements. When you update a SOUP version, the affected requirements are surfaced immediately. This closes the loop that most teams leave open: the SOUP list exists, the requirements exist, but the link between them is informal.
Traceability coverage reporting on demand. Rather than manually auditing the RTM before a submission, teams can generate a coverage report that shows exactly which requirements have complete forward traceability to test cases and which have gaps. For a first 510(k) submission, this kind of report is what separates a clean submission from one that generates a deficiency letter.
Flow Engineering’s focus is narrow by design—it is built for systems and hardware engineering traceability, not for broader QMS functions like CAPA management, complaint handling, or design validation protocols. Teams using it for IEC 62304 compliance will still need separate processes or tools for those functions. That is a deliberate trade-off: a tool that does requirements traceability and change control exceptionally well rather than one that tries to be a complete QMS.
For a 25-person team, this specialization is often the right fit. You do not need one tool that does everything poorly. You need a few tools that each do their job well and connect cleanly.
Practical Starting Points for Class B Compliance
If you are at the beginning of this process, here is a workable sequence:
1. Establish your safety class first. Walk through your risk analysis and confirm whether your software is Class A, B, or C. The rest of your process scales from that decision.

2. Write your software development plan before writing requirements. The plan does not need to be long, but it needs to commit to a lifecycle process, identify the standards you’re following, and describe how you’ll handle SOUP and change control.

3. Build your SOUP list in the same system as your requirements, not in a separate document. Version-pin everything. Establish a review cadence for anomaly lists.

4. Write requirements before code. This is the single discipline that prevents the most common compliance failures. Requirements written after the fact are reconstructions—and they read like reconstructions to reviewers.

5. Link every test case to a requirement before executing the test. Unlinked test cases do not contribute to traceability coverage, no matter how thorough the testing was.

6. Run a traceability coverage check before submission, not after deficiencies arrive. The gaps you find are easier to close before a reviewer flags them.
The regulatory expectation is not going to get smaller. But a well-structured process, maintained with the right tooling, is achievable without a team of regulatory specialists. The companies that navigate their first submissions successfully are not the ones with the largest compliance departments—they are the ones that built traceability into their engineering workflow from the start.