The Hidden Systems Engineering Crisis in Consumer Medical Device Programs
A continuous glucose monitor that updates its hypoglycemia alert algorithm every six weeks. A wearable cardiac patch that adds a new arrhythmia detection model via firmware push. A digital inhaler that refines its inhalation technique coaching based on population-level usage data. Each of these is a real product category shipping today. Each of them is, by FDA definition, a regulated medical device. And each of them is quietly breaking the systems engineering practices that the medical device industry spent thirty years building.
This is not a compliance scare article. The companies building these products are not reckless. Many of them have world-class engineering teams. The problem is structural: the regulatory frameworks, quality systems, and systems engineering processes that govern medical device development were designed for products that do not change after they leave the factory. Consumer medical devices — connected, software-intensive, continuously updated — are fundamentally different products. The mismatch between what these products are and how they are being engineered and regulated is the hidden crisis that most program managers in the space are not yet talking about openly.
What the Traditional Framework Assumed
The FDA’s 510(k) substantial equivalence pathway and the De Novo classification process both assume, at their core, that a predicate device or a newly classified device can be characterized at a point in time. The Design History File — the master record required under 21 CFR Part 820 — is essentially a snapshot: here is what we designed, here is why we designed it this way, here is the evidence that it is safe and effective. The software lifecycle documentation requirements under IEC 62304, similarly, describe a development process oriented toward defined releases. Even the IEC 62443 cybersecurity requirements, newer and more operationally aware, still think in terms of product versions.
These frameworks produced good outcomes for decades because the products they governed were, in fact, relatively static. An infusion pump gets a firmware update once a year, if that. A diagnostic imaging system ships with a software version that is supported for five to seven years. The rhythm of change was slow enough that point-in-time documentation practices were a reasonable approximation of reality.
Consumer medical devices operate on a completely different temporal cadence. The development teams building continuous glucose monitors or cardiac wearables are shipping app updates every few weeks, firmware updates every few months, and cloud-side algorithm changes on an even faster cycle. Some of those changes are cosmetic. Some of them materially affect device behavior in ways that the FDA would — and should — care about.
The 510(k) Tension
The 510(k) pathway requires a manufacturer to demonstrate substantial equivalence to a predicate device. Once cleared, the manufacturer can market the device. But 510(k) does not grant permission to change the device indefinitely without additional submissions. The FDA has long maintained guidance on when a change to a cleared device requires a new 510(k) versus when it can be implemented under a manufacturer’s own change control procedures. That guidance, most recently updated in 2017, asks manufacturers to evaluate whether a change could significantly affect safety or effectiveness.
For traditional hardware devices, this evaluation is manageable. The change surface is bounded. For a software-intensive consumer medical device, the change surface is enormous and often poorly mapped. A machine learning model update that shifts the sensitivity of a hypoglycemia alert by a clinically meaningful margin is obviously a significant change. But what about a firmware change that modifies the power management logic, which affects how often the sensor polls for glucose readings, which slightly changes the temporal resolution of the data? Is that a significant change? Most device teams answering this question honestly will admit they are making judgment calls with incomplete information — because their requirements traceability is not granular enough to tell them what downstream effects a given change will produce.
The FDA is aware of this problem. The agency’s 2021 AI/ML Action Plan, and the subsequent work toward a regulatory framework for AI-enabled devices, introduced the concept of the Predetermined Change Control Plan. A PCCP allows a manufacturer to specify, at the time of initial clearance, a defined envelope of changes — including algorithm updates — that can be implemented without a new submission, provided they stay within the bounds described in the plan. This is a meaningful regulatory innovation. It is also, in practice, extraordinarily difficult to implement without a requirements and traceability architecture that most device companies do not currently have.
A PCCP is essentially a contract between the manufacturer and the FDA: we will change our device in these specific ways, constrained by these specific performance boundaries, validated by these specific methods. Operationalizing that contract requires knowing, at any given moment, exactly which requirements are affected by a proposed change, what the validation evidence for those requirements looks like, and how the change traces through the system architecture. That is a graph problem, not a document problem.
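One way to make that contract operational is to encode the PCCP’s performance boundaries as machine-checkable bounds that every proposed change is evaluated against. The sketch below is a minimal, hypothetical illustration: the metric names and limits are invented for this example, not drawn from any real submission.

```python
# Hypothetical sketch: a PCCP envelope expressed as machine-checkable
# performance bounds. Metric names and limits are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Bound:
    metric: str
    minimum: float   # lowest acceptable value after the change

# The change envelope agreed with the regulator at clearance time.
PCCP_ENVELOPE = [
    Bound("hypo_alert_sensitivity", 0.95),
    Bound("hypo_alert_specificity", 0.90),
]

def change_within_envelope(validated_metrics: dict[str, float]) -> list[str]:
    """Return the bounds a proposed algorithm update violates.

    An empty list means the update stays inside the predetermined
    envelope and can proceed under internal change control; any
    violation means a new submission is needed. A metric missing
    from the re-validation results counts as a violation.
    """
    return [
        b.metric
        for b in PCCP_ENVELOPE
        if validated_metrics.get(b.metric, float("-inf")) < b.minimum
    ]

# Example: a model update whose re-validation shows specificity slipped.
violations = change_within_envelope(
    {"hypo_alert_sensitivity": 0.97, "hypo_alert_specificity": 0.88}
)
print(violations)  # ['hypo_alert_specificity']
```

The point of the sketch is not the arithmetic but the posture: when the envelope is executable rather than prose in a submission PDF, the “is this change in scope?” question becomes a release-gate check instead of a meeting.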
The Design History File Under Continuous Delivery
The DHF crisis is where the systems engineering failure becomes most visible at the program level. Under the FDA’s quality system requirements — now the Quality Management System Regulation, which amends 21 CFR Part 820 to incorporate ISO 13485:2016 — the Design History File must contain or reference the records necessary to demonstrate that the design was developed in accordance with the approved design plan. For a traditional device, this is painful but tractable: you maintain a document set that captures design inputs, design outputs, verification and validation records, and design reviews.
For a device on a continuous delivery cycle, “the DHF” is no longer a coherent concept if you are maintaining it as a document set. You need a DHF that is effectively a living, version-linked knowledge graph: every requirement tied to its rationale, its verification evidence, its implementation artifact, and its version history. Every change needs to be traceable not just to a change order, but to the specific requirements it modifies, the verification activities it invalidates, and the new evidence that restores confidence in safety and effectiveness.
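To make the “living, version-linked” idea concrete, here is a deliberately minimal sketch of what one node in such a DHF graph might carry. The schema, field names, and identifiers are assumptions for illustration; the key behavior is that a controlled change bumps the version, preserves history, and invalidates the old verification evidence so re-verification is forced rather than forgotten.

```python
# Minimal sketch (assumed schema) of a DHF requirement as a versioned
# record rather than a paragraph in a document: rationale, verification
# evidence, implementation artifacts, and version history travel together.

from dataclasses import dataclass, field

@dataclass
class RequirementRecord:
    req_id: str
    text: str
    rationale: str
    verification_evidence: list[str]      # e.g. test-report identifiers
    implementation_artifacts: list[str]   # e.g. source module @ commit
    version: int = 1
    history: list[str] = field(default_factory=list)

    def revise(self, new_text: str, change_order: str) -> None:
        """Record a controlled change: keep the prior text in history,
        bump the version, and drop stale evidence so the coverage gap
        is visible until new verification restores it."""
        self.history.append(f"v{self.version}: {self.text} ({change_order})")
        self.text = new_text
        self.version += 1
        self.verification_evidence = []   # stale evidence no longer counts

# Illustrative record; all identifiers here are invented.
req = RequirementRecord(
    req_id="SYS-042",
    text="Alert within 5 min of glucose < 70 mg/dL",
    rationale="Hypoglycemia risk analysis HA-7",
    verification_evidence=["TR-1103"],
    implementation_artifacts=["fw/alerts.c@a1b2c3"],
)
req.revise("Alert within 3 min of glucose < 70 mg/dL", change_order="CO-219")
print(req.version, req.verification_evidence)  # 2 []
```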
Most device companies today are handling this one of two ways. The first is denial: they maintain the traditional document-snapshot DHF and hope that their change control procedures are tight enough to keep it roughly accurate. This approach fails quietly, and the failure becomes visible only during an FDA inspection or an adverse event investigation. The second is heroic manual effort: dedicated regulatory affairs and quality engineers spend enormous time manually updating requirement documents, traceability matrices, and verification summaries after each release. This approach is expensive, slow, and still produces a DHF that lags reality by weeks or months.
Neither approach is sustainable for programs targeting two-week sprint cycles.
What Software Teams Don’t Know
Engineers coming to medical device programs from pure software backgrounds — from consumer electronics, from mobile health apps, from enterprise software — consistently underestimate two things.
The first is the requirements burden. In most software contexts, requirements are lightweight artifacts: user stories, acceptance criteria, maybe a product requirements document that the team treats as living guidance. In a regulated medical device context, requirements are legally significant records. Design inputs must be formally captured, reviewed, and approved before design work begins. Changes to design inputs must go through controlled processes. The relationship between design inputs and design outputs must be demonstrable. This is not bureaucratic theater — it exists because patients can be harmed by devices that were not designed to meet the right requirements. But for engineers who have never worked in this context, the friction is jarring.
The second is the change control discipline required. Software teams are accustomed to reverting changes, experimenting in production, and treating continuous deployment as a normal operating mode. In a medical device context, every change to software of safety concern (as classified under IEC 62304’s software safety classification framework) requires documented justification, impact assessment, verification, and record-keeping. “We’ll fix it in the next release” is not a viable posture when the device is measuring blood glucose or detecting atrial fibrillation.
What Quality Teams Don’t Know
The failure goes the other direction too. Quality and regulatory affairs professionals who came up through traditional device programs often underestimate the velocity and complexity of software change in a connected device program.
The most common failure mode is treating software as if it behaves like hardware. Hardware changes are rare and expensive, so document-centric, slow-cycle quality processes are a reasonable fit. Software changes are frequent and cheap — the marginal cost of a firmware update, once the infrastructure exists, is near zero. Quality processes that add two weeks of manual documentation work to every software change will either create a massive backlog or, worse, will be quietly bypassed by teams under delivery pressure.
Quality teams also frequently underestimate the architectural complexity of software-intensive devices. A traditional quality engineer reviewing a change to a physical component can evaluate that change with reference to a bill of materials and a set of drawings. A quality engineer reviewing a change to a machine learning inference pipeline needs to understand data inputs, model architecture, threshold logic, integration points with device firmware, and the statistical validation framework used to characterize model performance. This is a different knowledge domain, and the tools quality teams have been trained to use — Microsoft Word for requirement documents, Excel for traceability matrices, document management systems for DHF maintenance — are not equipped to represent this complexity.
How Leading Programs Are Adapting
The companies doing this well share a few common characteristics.
They have invested in modular system architectures with explicit, documented interfaces between hardware, firmware, app software, and cloud services. This modularity is not just good engineering — it is the foundation of a defensible change control strategy. If you can demonstrate that a change to the cloud inference layer does not affect the firmware validation boundary, you have meaningfully bounded the scope of your re-verification effort.
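The bounding argument above can be reduced to a trivially small mechanical check once the component-to-boundary map is explicit. The component and boundary names below are invented for illustration; the design point is that the map itself is a controlled design output, so the answer it gives is defensible in an audit.

```python
# Illustrative sketch: a declared component-to-validation-boundary map
# used to bound re-verification. Names are assumptions for this example.

VALIDATION_BOUNDARY = {
    "cloud_inference": "cloud",
    "mobile_app_ui": "app",
    "firmware_sampling": "firmware",
    "firmware_ble": "firmware",
}

def boundaries_affected(touched_components: set[str]) -> set[str]:
    """Map a change set onto validation boundaries. A change confined
    to the cloud layer demonstrably leaves the firmware boundary alone,
    which bounds the scope of re-verification."""
    return {VALIDATION_BOUNDARY[c] for c in touched_components}

print(boundaries_affected({"cloud_inference"}))  # {'cloud'}
print(boundaries_affected({"cloud_inference", "firmware_ble"}))
```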
They have moved toward graph-based requirements management rather than document-based approaches. The requirement hierarchy for a connected medical device — from regulatory design inputs through system requirements, software requirements, hardware requirements, and down to test cases — is inherently relational. Every requirement has multiple parents, multiple children, multiple verification artifacts, and a version history that must be preserved as the product evolves. Platforms that model this as a graph rather than a document hierarchy can answer questions like “what is the impact of changing this requirement?” in minutes rather than weeks.
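The impact question is, at bottom, a reachability query over the traceability graph. The sketch below shows the shape of that query; the requirement and test-case identifiers and the edges are invented for illustration, and a real program would run the equivalent query against its requirements platform rather than a hand-built dictionary.

```python
# Hypothetical sketch of "what is the impact of changing this
# requirement?" as a graph traversal. IDs and edges are invented.

from collections import deque

# Directed traceability edges: requirement -> children and test cases.
TRACE_GRAPH = {
    "DI-3": ["SYS-042"],              # design input -> system requirement
    "SYS-042": ["SW-117", "TC-510"],  # system req -> software req, test case
    "SW-117": ["TC-733", "TC-734"],   # software req -> test cases
}

def impact_of(req_id: str) -> set[str]:
    """Breadth-first walk of everything downstream of a changed
    requirement: descendant requirements plus the test cases whose
    evidence the change invalidates."""
    seen: set[str] = set()
    queue = deque([req_id])
    while queue:
        node = queue.popleft()
        for child in TRACE_GRAPH.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(impact_of("SYS-042")))  # ['SW-117', 'TC-510', 'TC-733', 'TC-734']
```

In a document-based process, answering the same question means a human reading traceability matrices; in a graph-based one, it is a query, which is what makes the "minutes rather than weeks" claim credible.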
Flow Engineering, built specifically for hardware and systems engineering programs, exemplifies this shift. Its graph-based traceability model makes it practical to maintain living requirement-to-verification coverage across multiple concurrent releases — which is the operational reality of a consumer medical device program running continuous delivery alongside a traditional regulated release track. For quality teams wrestling with how to maintain DHF coherence across OTA update cycles, the ability to query requirement coverage by version, by component, and by change event is not a nice-to-have feature. It is the enabling capability for sustainable compliance.
They have also built cross-functional processes — not just cross-functional teams. Having software engineers and quality engineers in the same room is necessary but not sufficient. The processes that govern how requirements are captured, how changes are proposed and evaluated, how verification evidence is linked to requirements, and how the DHF is maintained must be co-designed by people who understand both the regulatory framework and the software delivery model. Companies that try to apply traditional quality processes to software teams, or that try to exempt software teams from quality processes, fail in predictable ways.
The Regulatory Horizon
The FDA is moving. The December 2024 final guidance on predetermined change control plans for AI-enabled device software functions, the follow-on draft guidance on lifecycle management and marketing submissions for AI-enabled devices, and the agency’s increasing engagement with the Software as a Medical Device (SaMD) community all signal that the regulatory environment for connected medical devices will continue to evolve. The direction is toward more explicit upfront specification of change envelopes, more rigorous performance monitoring post-market, and more sophisticated validation frameworks for AI/ML components.
This regulatory trajectory is actually good news for companies that are willing to build the underlying infrastructure now. The PCCP framework, in particular, rewards companies that can demonstrate rigorous traceability between their initial clearance requirements and their proposed change envelope. Companies that have invested in graph-based requirements management and living DHF practices will find PCCP submissions tractable. Companies maintaining document-snapshot DHFs will find them nearly impossible.
Honest Assessment
The consumer medical device industry is not facing a compliance crisis in the sense that products are being recalled en masse or that the FDA is issuing warning letters at an accelerating rate. The crisis is quieter and slower-moving: it is the accumulating technical debt in quality systems and requirements management that makes each new product generation harder to manage, each regulatory submission more labor-intensive, and each post-market change control decision more uncertain.
The companies that will be competitive in connected medical devices over the next five years are the ones investing now in the systems engineering infrastructure — graph-based requirements management, modular architectures with clean interface definitions, integrated verification and validation workflows, and quality processes designed for software velocity rather than hardware cadence. The companies that are trying to apply 1990s document management practices to 2026 continuous delivery pipelines will spend an increasing fraction of their engineering capacity on compliance maintenance rather than product development.
The regulatory frameworks will catch up. The question is whether the engineering organizations will.