Requirements Management for AI/ML Medical Devices: How FDA’s Evolving Framework Is Forcing a Tooling Rethink
Predetermined change control plans and adaptive algorithms are breaking traditional requirements workflows — here’s what’s actually changing on the ground.
The FDA’s approach to AI/ML-based Software as a Medical Device (SaMD) has moved from the 2019 discussion paper to active regulatory expectation faster than most device teams anticipated. The agency’s 2021 AI/ML Action Plan, the 2023 draft guidance on predetermined change control plans in marketing submissions for AI-enabled device software functions, and the final PCCP guidance issued in 2024 have collectively redefined what “maintaining a device” means for software-intensive products.
For systems engineers and regulatory teams at medical device companies, the immediate practical consequence is this: the requirements management practices that worked for a fixed-function device do not work for a device whose core decision-making logic is expected to update in production. The tooling implications are significant, and most organizations are only beginning to reckon with them.
The Current State: A Framework Built for Static Artifacts Meets Adaptive Software
Traditional medical device development runs on a V-model. Requirements flow down. Verification evidence flows back up. You freeze the design, compile the Design History File, and the product is what it is until a formal design change triggers a new cycle.
That model was already straining under the weight of software-intensive devices. For AI/ML-based SaMD — devices where the algorithm itself learns, adapts, or is periodically retrained on new real-world data — the V-model’s assumption of a fixed design baseline is not just inconvenient. It is architecturally incompatible with how the technology actually works.
The FDA’s guidance documents acknowledge this directly. The agency has explicitly stated that it does not expect the traditional premarket submission model to scale to adaptive AI/ML. Instead, it has proposed a framework in which device developers specify, upfront, the conditions under which the algorithm may change without triggering a new 510(k) or PMA supplement. That specification is the Predetermined Change Control Plan.
What PCCPs Actually Require from a Requirements Perspective
A PCCP is not simply a change management procedure. It is a formal, structured artifact that must include three components:
Description of Anticipated Modifications. What types of changes to the algorithm or software are anticipated over the device lifecycle? These are bounded: you must define the scope of permissible change precisely enough that FDA can evaluate whether future changes stay within it.
Modification Protocol. How will each class of anticipated change be developed, validated, and verified? This includes performance thresholds the modified algorithm must meet, data requirements for retraining, and the test procedures that will confirm the modification is safe and effective.
Impact Assessment. For each anticipated modification type, a structured analysis of how the change could affect device safety, effectiveness, and the validity of existing labeling.
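The three components above can be sketched as a structured data model. This is a minimal illustration in Python; the class and field names are assumptions for the sketch, not an FDA-mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class AnticipatedModification:
    """One bounded, pre-specified change type (Description of Modifications)."""
    mod_id: str
    description: str   # e.g. "retrain classifier on additional site data"
    scope_bounds: str  # what stays fixed: intended use, inputs, outputs

@dataclass
class ModificationProtocol:
    """How a modification of this type is developed, verified, and validated."""
    mod_id: str
    data_requirements: str        # acceptable retraining-data characteristics
    performance_thresholds: dict  # metric name -> minimum acceptable value
    test_procedures: list         # references to V&V procedures

@dataclass
class ImpactAssessment:
    """Structured analysis of safety, effectiveness, and labeling impact."""
    mod_id: str
    safety_impact: str
    effectiveness_impact: str
    labeling_impact: str

@dataclass
class PCCP:
    modifications: list = field(default_factory=list)
    protocols: dict = field(default_factory=dict)    # mod_id -> protocol
    assessments: dict = field(default_factory=dict)  # mod_id -> assessment

    def is_complete(self) -> bool:
        """Every anticipated modification needs a protocol and an assessment."""
        ids = {m.mod_id for m in self.modifications}
        return ids <= self.protocols.keys() and ids <= self.assessments.keys()
```

The point of the sketch is the completeness check at the bottom: a PCCP in which an anticipated modification lacks a matching protocol or impact assessment is structurally incomplete, and that is a property tooling can verify mechanically.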
From a requirements engineering standpoint, this means teams must do something genuinely new: they must write requirements not just for the current state of the device, but for the envelope of permissible future states. A requirement that says “the algorithm shall achieve sensitivity ≥ 92% on the validation dataset” is no longer sufficient. The PCCP requires you to also specify what retraining data is acceptable, what performance floor triggers a mandatory review, and how the modification protocol connects back to the original clinical use case.
This is a graph problem, not a document problem. The relationships between clinical intent, algorithm performance requirements, change boundaries, validation protocols, and risk controls are not linear. They are deeply interconnected, and a change to any one node has implications for the others.
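The graph framing can be made concrete with a short sketch. The node names below are hypothetical examples of PCCP-related artifacts, and the traversal is a plain breadth-first search, not any particular tool's implementation.

```python
from collections import deque

# Directed edges: "X impacts Y", meaning a change to X must be
# re-evaluated at Y. Node names are illustrative, not prescribed.
impacts = {
    "clinical-intent":         ["perf-req-sensitivity"],
    "perf-req-sensitivity":    ["change-boundary-retrain", "risk-ctrl-false-neg"],
    "change-boundary-retrain": ["validation-protocol-vp1"],
    "risk-ctrl-false-neg":     ["validation-protocol-vp1"],
    "validation-protocol-vp1": [],
}

def downstream(node, graph):
    """Breadth-first traversal: everything affected by a change to `node`."""
    seen, queue = set(), deque([node])
    while queue:
        n = queue.popleft()
        for succ in graph.get(n, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

# Changing the sensitivity requirement touches the change boundary,
# a risk control, and the validation protocol behind both:
print(sorted(downstream("perf-req-sensitivity", impacts)))
# → ['change-boundary-retrain', 'risk-ctrl-false-neg', 'validation-protocol-vp1']
```

In a flat requirements matrix, answering the same question means manually scanning every row that mentions the changed requirement; in a graph model it is a single traversal.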
What’s Actually Happening vs. the Hype
There is a version of this story that gets told in conference presentations and vendor webinars: AI is transforming medical device development, regulations are catching up, the future is here. That framing is mostly optimistic noise.
The reality on the ground at device companies in 2026 looks more like this:
Large established device makers with mature regulatory functions are actively working through PCCP submissions. Many have the internal regulatory expertise to draft the documents. Their challenge is tooling: their requirements live in IBM DOORS or Jama Connect, their risk management is in a separate system, and their validation evidence is in yet another repository. Constructing the cross-references required for a coherent PCCP means manually pulling threads from three or four disconnected systems. The compliance burden is real, and audit readiness is a recurring problem.
Mid-size SaMD companies building AI-first diagnostic or monitoring products are often more technically current on the algorithm side — they know how to retrain a model, instrument a data pipeline, and monitor drift. Their gap is on the regulatory engineering side. They frequently underestimate how much structured requirements work a PCCP demands. Their typical tooling setup — Confluence, Notion, Google Docs, maybe a spreadsheet RTM — provides no path to the bidirectional traceability that a PCCP submission requires.
Early-stage AI medical device startups are largely not thinking about this yet, and will run into it hard at their first pre-submission meeting with FDA.
None of this is hype. It is where the industry actually is.
What Requirements Tooling Needs to Support
Working backward from what a PCCP submission demands, requirements tooling for AI medical device development needs to support at least the following:
Graph-based traceability, not flat RTMs. The relationships in a PCCP — from clinical indication, through performance requirements, through change boundaries, through validation protocols, through risk controls — are not a matrix. They are a directed graph. Tooling that represents requirements as rows in a spreadsheet, or as paragraphs in a Word document with manual links, cannot maintain this structure reliably at scale.
Version-aware requirement management. When an algorithm is retrained and a new version is deployed under the PCCP, the requirements state at the time of that deployment must be precisely recoverable. This is a baseline for audit readiness. Legacy tools that rely on manual version tagging or exported snapshots introduce audit risk.
Change impact analysis that is computationally assisted. If a performance threshold is adjusted, the engineer needs to immediately see which downstream requirements, risk controls, and validation obligations are affected. This is exactly the kind of traversal that graph-based models with AI-assisted analysis handle well. Document-based tools require a human to manually trace the impact, which is slow and error-prone under the timelines that PCCP modification cycles will impose.
Support for requirements that describe envelopes, not just states. Traditional requirements tools are built around the assumption that a requirement specifies a single condition. PCCPs require requirements that specify acceptable ranges of future conditions. Tooling needs to support this semantically, not as a workaround using free-text fields.
Integration with clinical and performance data. PCCP modification protocols reference algorithm performance metrics measured in production. Requirements tooling that cannot connect to or reference that evidence layer leaves a gap in the traceability chain.
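Two of the capabilities above, envelope-style requirements and version-aware baselines, can be illustrated in a few lines. The field names are invented for the sketch; real tooling would need richer semantics.

```python
from dataclasses import dataclass
import copy

@dataclass(frozen=True)
class EnvelopeRequirement:
    """A requirement over a range of permissible future states,
    not a single fixed condition."""
    req_id: str
    metric: str
    floor: float   # performance floor: below this, mandatory review
    target: float  # currently verified performance level

    def within_envelope(self, observed: float) -> bool:
        return observed >= self.floor

reqs = {
    "REQ-042": EnvelopeRequirement("REQ-042", "sensitivity",
                                   floor=0.92, target=0.95),
}

# Version-aware management: each deployment under the PCCP freezes a
# recoverable snapshot of the requirements state at that algorithm version.
baselines = {}
baselines["algo-2.3"] = copy.deepcopy(reqs)

# A retrained model at 93% sensitivity stays inside the envelope;
# one at 90% would trigger the mandatory review path.
assert reqs["REQ-042"].within_envelope(0.93)
assert not reqs["REQ-042"].within_envelope(0.90)
```

The contrast with a conventional requirement is the `floor`/`target` pair: the requirement no longer describes one verified state, it describes the boundary of acceptable future states, which is exactly what a PCCP modification protocol must reference.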
Most legacy tools — IBM DOORS, Jama Connect, Polarion, Codebeamer — were designed well before these requirements existed. Some have added features over time to address parts of this list. But their foundational data models are document-centric or matrix-centric, and fundamental architectural limitations constrain how far add-on features can take them.
How Modern AI-Native Tools Are Addressing This
This is where the tooling landscape is genuinely splitting. A set of newer, AI-native systems engineering tools have been built around graph models from the start, with the assumption that requirements are nodes in a connected model rather than paragraphs in a controlled document.
Flow Engineering, built specifically for hardware and systems engineering teams, represents this architectural approach. Its model treats requirements, design decisions, risk items, and verification evidence as nodes in a persistent graph, with typed relationships between them. For PCCP work specifically, this matters because the “change boundary” construct — the specification of what is and is not a permissible modification — can be represented as a relationship type in the graph, linked directly to the requirements it constrains and the validation evidence it references.
The practical implication is that when a regulatory engineer is drafting a PCCP modification protocol, they are not manually assembling a document from disconnected data sources. The relationships are already modeled; the protocol is a traversal of existing structure. Change impact analysis — “if we modify this performance threshold, what else is affected?” — becomes a graph query rather than a manual audit.
Flow Engineering is focused on hardware and systems engineering teams rather than on medical device regulatory compliance as a standalone use case, which means teams will need to apply their own regulatory knowledge to configure it appropriately for FDA submission workflows. It is a modeling and traceability tool, not a regulatory content generator. But for teams that have the regulatory expertise internally and need tooling infrastructure that can carry the structural complexity of PCCP requirements, its graph-native architecture is a meaningful fit.
Practical Implications for Device Teams Right Now
If you are a systems engineer or regulatory lead at a medical device company building or planning an AI/ML device, here is what the framework demands of your current practice:
Audit your current traceability model. If your requirements live in a document-centric tool with manual links, map out concretely what it would take to produce the traceability artifacts a PCCP submission requires. Do that analysis before you are six months from a pre-submission meeting.
Define your change taxonomy early. The PCCP requires you to categorize anticipated modifications. That taxonomy needs to be developed with input from your algorithm team, your clinical team, and your regulatory team simultaneously — not handed off sequentially. Your requirements tooling needs to be able to represent that taxonomy structurally.
Treat the modification protocol as a living requirements artifact. The performance thresholds and validation procedures in your modification protocol are requirements. They should live in your requirements system, with traceability to the clinical intent they protect and the risk controls they support. If they live only in a PDF submitted to FDA, you have a compliance and maintenance problem.
Plan for version alignment across the algorithm, the device requirements, and the PCCP. When algorithm version 2.3 is deployed under a PCCP modification, your requirements system needs to reflect the state of requirements at that version. This is version control and configuration management at a level most teams have not had to implement for software-intensive devices.
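A toy sketch of the change taxonomy and version-alignment record described above. The category names and baseline identifier are illustrative assumptions, not terms from the guidance.

```python
from enum import Enum
from dataclasses import dataclass

class ModCategory(Enum):
    """Hypothetical taxonomy of anticipated modifications, agreed jointly
    by the algorithm, clinical, and regulatory teams."""
    RETRAIN_SAME_ARCH = "retraining on new data, architecture unchanged"
    THRESHOLD_TUNE = "decision-threshold adjustment within stated bounds"
    INPUT_EXPANSION = "new input source within the cleared intended use"

@dataclass(frozen=True)
class DeploymentRecord:
    """Aligns an algorithm version with the requirements baseline and the
    PCCP modification category it was deployed under."""
    algo_version: str
    requirements_baseline: str
    category: ModCategory

deployments = [
    DeploymentRecord("2.3", "req-baseline-2025-11-04",
                     ModCategory.RETRAIN_SAME_ARCH),
]

# For an audit: recover the exact requirements state for algorithm 2.3.
rec = next(d for d in deployments if d.algo_version == "2.3")
print(rec.requirements_baseline)  # → req-baseline-2025-11-04
```

However it is implemented, the essential property is the three-way link: no algorithm version ships without a recorded requirements baseline and a taxonomy category, or the audit trail has a hole in it.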
Honest Assessment
The FDA’s framework for AI/ML medical devices is both more coherent and more demanding than its critics acknowledge. The PCCP concept — specifying the envelope of change upfront — is a reasonable regulatory response to the problem of adaptive software in safety-critical contexts. It gives device developers a path to iterative improvement without a new submission for every model update, in exchange for rigorous upfront specification of what “within scope” means.
The tooling industry has not yet fully caught up. Most of the established requirements management platforms were not designed for this problem and are adapting incrementally. The AI-native tools have better architectural foundations for it but have shorter regulatory track records.
Device teams that are serious about AI/ML development need to treat requirements tooling selection as a regulatory strategy decision, not an IT procurement decision. The choice of data model — graph vs. document, connected vs. manual — will determine whether your PCCP workflows are sustainable or become a recurring source of regulatory and operational friction.
The framework is not going to get simpler. The tooling needs to catch up.