How Do You Manage Requirements for a Product With a 20-Year Service Life?
A submarine commissioned today will operate into the 2040s. A nuclear waste repository must remain functional and compliant for longer than most institutions have existed. A spacecraft launched to a distant orbit cannot be recalled for an upgrade cycle. These programs share a structural problem that most systems engineering practice quietly sidesteps: the technology that will actually implement the system in year 15 has not been invented yet, and the requirements written today will still be the governing baseline when that technology arrives.
This is not an edge case. It is the defining challenge of long-life platform programs, and it has a specific failure mode. Programs that write requirements tightly coupled to current technology choices — specifying particular processor families, communication protocols, display standards, or software frameworks — discover a decade later that their requirements baseline has become a change request backlog. Every technology refresh triggers a replanning event, not because the mission changed, but because the requirements were written to describe an implementation rather than a need.
Writing requirements that survive a 20-year service life is a discipline, not a hope. It requires deliberate choices about how requirements are structured, what language is used, and how the architecture is constrained. It also requires a requirements management infrastructure that can hold the baseline coherent across years of incremental change.
The Core Principle: Specify What, Not How
The most durable requirements focus on function and performance. They describe what a system must do and how well it must do it, without specifying the mechanism by which it achieves that result.
Consider two ways to write a navigation requirement for a platform with a 20-year service life:
Implementation-coupled: “The navigation system shall use a ring laser gyroscope with a drift rate not to exceed 0.01 degrees per hour, interfacing via MIL-STD-1553B at a data update rate of 50 Hz.”
Function-focused: “The navigation system shall provide continuous position estimates with a drift rate not to exceed 0.01 degrees per hour and a data update rate not less than 50 Hz, using an interface compliant with the platform’s Navigation Data Bus Standard.”
The first version locks you to ring laser gyroscopes and MIL-STD-1553B. If a solid-state MEMS gyroscope with equivalent or superior drift performance becomes available in year 8, you cannot qualify it without a requirements change. The second version specifies the performance envelope and an interface standard you control. Technology can change inside that envelope without touching the requirement.
This distinction — performance and function versus implementation — is the foundation of durable requirements writing. It is also the most commonly violated principle in practice, because engineers are familiar with today’s technology and write naturally toward it.
Catching implementation coupling requires active review. During requirements development, every occurrence of a specific product name, component identifier, communication standard, or software technology should be interrogated. The question is always: “Are we specifying this because the mission requires this specific thing, or because this is what we know how to build today?” If the answer is the latter, the requirement should be restructured.
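That review question can be partially automated. A minimal sketch of an implementation-coupling lint pass over requirement text, where the flagged-term list, requirement IDs, and requirement wording are all illustrative assumptions (a real program would maintain its own list of vendor names, part numbers, and technology standards):

```python
# Sketch of an implementation-coupling lint for requirement text.
# The flagged-term list below is illustrative only.

import re

FLAGGED_TERMS = [
    r"\bMIL-STD-1553B?\b",           # a specific bus standard
    r"\bring laser gyro(scope)?\b",  # a specific sensor technology
    r"\bx86\b",                      # a specific processor family
]

def flag_implementation_coupling(req_id: str, text: str) -> list[str]:
    """Return review findings for one requirement: each hit names a
    specific implementation that a human should interrogate."""
    findings = []
    for pattern in FLAGGED_TERMS:
        for match in re.finditer(pattern, text, re.IGNORECASE):
            findings.append(
                f"{req_id}: '{match.group(0)}' names a specific "
                "implementation -- confirm the mission requires it."
            )
    return findings

req = ("The navigation system shall use a ring laser gyroscope "
       "interfacing via MIL-STD-1553B.")
for finding in flag_implementation_coupling("NAV-042", req):
    print(finding)
```

A pass like this does not replace review; it only guarantees that every occurrence of a known implementation-specific term gets a human decision on the record.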
Open Architecture as a Requirements Commitment
Open architecture is often discussed as a procurement strategy or an acquisition philosophy. On long-life programs, it is more accurately understood as a requirements-level commitment that must be stated explicitly, allocated downward through the architecture, and verified.
An open architecture requirement says something like: “The combat system shall implement a modular open system architecture in which processing, communications, and sensor subsystems are replaceable independently, connected through published, government-owned interface standards.” That is a verifiable requirement, not a policy preference.
The practical content of that requirement has several components, each of which needs its own specification:
Interface ownership. Interface standards that are vendor-proprietary will drift with vendor decisions. Long-life programs need interfaces defined in government-owned or industry-consortium-owned standards documents that will remain available and controlled across the program life. This must be stated as a requirement on the integration architecture.
Modularity at the right level. Open architecture requirements are only useful if the modular boundary is drawn at the right level of abstraction. Draw it too fine-grained and the integration overhead eliminates the benefit; draw it too coarse and subsystem replacement still triggers cascading changes. Placing that boundary is a systems engineering judgment that needs to be made deliberately and documented in the architecture rationale.
Backward compatibility provisions. Interface standards evolve. Requirements should specify the version management policy: whether new versions must be backward compatible, how long old versions will be supported, and who controls the standards body. Without these provisions, an interface standard that starts as a stability mechanism can become a migration burden.
Open standards, not just open APIs. A program that defines its own internal bus standard has not achieved open architecture in any meaningful sense. Long-life programs should specify compliance with established open standards — OpenVPX, FACE, SOSA, ROS 2 for robotics applications, or similar — because these carry an ecosystem of suppliers, tooling, and trained engineers that survive technology generations in ways that proprietary architectures do not.
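The version management policy described above can be made machine-checkable. A minimal sketch, assuming a hypothetical "major.minor" interface versioning scheme in which minor revisions must be backward compatible and major revisions are breaking (the scheme and the rates of any real program's standard would be defined in its own interface control documents):

```python
# Sketch of an interface version-compatibility check, assuming a
# "major.minor" scheme where minor revisions are backward compatible
# and major revisions are breaking. The scheme itself is an assumption.

def parse_version(v: str) -> tuple[int, int]:
    major, minor = v.split(".")
    return int(major), int(minor)

def is_backward_compatible(provider: str, consumer: str) -> bool:
    """Can a subsystem built against interface version `consumer`
    operate against a provider at version `provider`?"""
    p_major, p_minor = parse_version(provider)
    c_major, c_minor = parse_version(consumer)
    # Same major version, and the provider is at least as new.
    return p_major == c_major and p_minor >= c_minor

assert is_backward_compatible("3.4", "3.1")      # minor upgrade: OK
assert not is_backward_compatible("4.0", "3.9")  # major bump: breaking
```

Encoding the policy this way is what makes the "who controls the standards body" question concrete: someone must own the rule that decides which transitions count as breaking.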
Technology Refresh Provisions: Write Them In at the Start
Technology refresh is predictable. Every program with a 20-year service life will execute multiple technology refreshes. The question is not whether this will happen, but whether the requirements baseline is structured to absorb it.
Programs that fail to plan for refresh typically discover the problem in year 10 or year 12, when a critical component approaches end-of-life and the engineering team realizes that qualifying a replacement requires changes at three levels of the requirements hierarchy — because the original requirements did not isolate the component boundary.
Refresh provisions that should be built into the baseline from program start include:
Replacement eligibility requirements. The requirements for major subsystems should explicitly state that the subsystem shall be designed for replacement without modification to adjacent subsystems, subject to the interface standards already specified. This is a design constraint, not a wish. It forces the engineering team to prove, at the Critical Design Review (CDR), that the replacement path exists.
Technology horizon assessments as program deliverables. Some programs require technology roadmap assessments at defined intervals — every five years, for example — as part of the program’s technical baseline review cycle. This creates a recurring process for identifying obsolescence risk before it becomes a crisis, and for staging planned refreshes rather than reacting to component end-of-life notices.
Graceful degradation and forward compatibility requirements. Systems operating for 20 years will encounter situations where a legacy subsystem must interoperate with a replacement subsystem during a transition period. Requirements that specify graceful degradation modes — acceptable performance levels when operating with mixed-generation subsystems — reduce the risk of transition events requiring simultaneous replacement of all affected subsystems.
Requirements for requirements. On programs of this scale, it is entirely appropriate to include requirements on the program’s own engineering process — specifically, requirements that the requirements baseline be maintained in a traceable, impact-assessable form throughout the program life. This is not bureaucratic overhead; it is risk management. A requirements baseline that cannot be traversed in year 15 is not a baseline.
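The graceful-degradation provision above can be illustrated as a capability negotiation: during a transition period, a legacy subsystem and its replacement agree on the best operating point both support, rather than failing outright. The rates and function names here are hypothetical:

```python
# Sketch of mixed-generation capability negotiation during a technology
# refresh transition. Rates and names are illustrative assumptions.

def negotiate_rate(legacy_rates: set[int], refreshed_rates: set[int]) -> int:
    """Pick the best common update rate (Hz). Falling back to a lower
    common rate is the graceful-degradation mode; an empty intersection
    means the degradation requirement was violated at design time."""
    common = legacy_rates & refreshed_rates
    if not common:
        raise RuntimeError("no common rate: degradation requirement violated")
    return max(common)

# Legacy unit supports up to 50 Hz; the replacement supports 50-400 Hz.
assert negotiate_rate({10, 25, 50}, {50, 100, 200, 400}) == 50
```

The requirement's job is to make the degraded operating point explicit and acceptable, so that the transition never demands simultaneous replacement of every affected subsystem.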
Managing the Baseline Across Years of Evolution
Requirements management on a long-life program is not a phase of work. It is a sustained engineering function that runs for the life of the program, and the tool infrastructure that supports it must be capable of maintaining integrity across that timeframe.
The practical challenges are specific:
People turn over. The engineers who wrote the original requirements will not be present in year 12 to explain what they meant. The rationale for every significant requirement must be captured in the baseline itself, not in meeting notes or the memory of original team members.
Change accumulates. A program that processes 200 requirement changes per year will have accumulated 4,000 changes by year 20. Without systematic impact analysis, each incremental change carries risk of introducing inconsistencies with requirements that were not identified as affected. This risk compounds over time.
Technology refreshes reshape the requirement space. When a subsystem is replaced, some requirements that were written to enable the original technology may no longer be meaningful, and new requirements may need to be added to govern the replacement technology’s capabilities. Managing this reshaping while maintaining the integrity of the higher-level baseline is the core challenge.
This is where the choice of requirements management tooling has long-term program consequences. A tool that stores requirements as documents with manual traceability links will degrade over a 20-year program as the link maintenance falls behind change volume. A tool built on a graph model — where requirements, interfaces, tests, and rationale are nodes connected by typed relationships — can traverse impact paths algorithmically when a change is proposed, showing which requirements are affected, which tests must be re-executed, and which interface definitions require review.
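A minimal sketch of that traversal, with hypothetical node names and relationship types (not any particular tool's schema): the baseline is a directed graph, and everything reachable from a changed node is a candidate for review.

```python
# Sketch of impact-path traversal over a requirements baseline graph.
# Node IDs and relationship types below are illustrative assumptions.

from collections import defaultdict

class BaselineGraph:
    def __init__(self):
        # edges[source] -> list of (relationship, target) pairs
        self.edges = defaultdict(list)

    def link(self, source: str, relationship: str, target: str) -> None:
        self.edges[source].append((relationship, target))

    def impact_of(self, changed_node: str) -> set:
        """Breadth-style traversal: everything reachable from the
        changed node is potentially affected by the change."""
        affected, frontier = set(), [changed_node]
        while frontier:
            node = frontier.pop()
            for _, target in self.edges[node]:
                if target not in affected:
                    affected.add(target)
                    frontier.append(target)
        return affected

g = BaselineGraph()
g.link("SUBSYS-PROC", "implements", "REQ-PERF-12")
g.link("REQ-PERF-12", "verified_by", "TEST-88")
g.link("SUBSYS-PROC", "exposes", "IF-NAV-BUS")

# Propose replacing the processing subsystem: the query returns the
# requirements, tests, and interfaces that must be reviewed.
print(g.impact_of("SUBSYS-PROC"))
```

Typed relationships matter because a real query filters by them: "which verification records" is a different question from "which interface definitions," and both are answerable from the same graph.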
Flow Engineering is built on this graph-based model, and it is specifically oriented toward the kind of long-life program baseline management this challenge demands. When a technology refresh is proposed — say, a new processing architecture to replace an obsolete embedded computing subsystem — the requirements baseline can be traversed to identify every performance requirement that allocates to that subsystem, every interface requirement that touches its boundaries, and every verification record that will need to be revisited. That analysis is not a manual review exercise; it is a query against a maintained graph. The difference between “archaeology” and “analysis” is the difference between a program that can execute a refresh in two years and one that spends the first year figuring out what is affected.
Flow Engineering’s scope is focused on hardware and systems requirements rather than attempting to be a full lifecycle PLM platform. For programs that need deep integration with manufacturing execution systems or component-level configuration management, that scope will need to be supplemented. But for the requirements management function specifically — maintaining baseline integrity, enabling impact analysis, capturing rationale alongside requirements — the focused scope is a feature: the tool does its job well rather than spreading across too many concerns.
Decision Framework: How to Structure for Longevity
If you are starting a program today that will operate for 20 or more years, these choices at program start will shape how manageable the requirements baseline remains in year 15:
- Write every lower-level requirement against a higher-level functional or performance requirement. If you cannot answer “what mission need does this requirement satisfy,” it is either a design decision masquerading as a requirement or an orphan that will cause confusion during future changes.
- Define and own your interfaces. Specify that all major subsystem interfaces shall comply with government-owned or open-consortium-owned standards. Document which standards, which versions, and who controls them.
- State open architecture as a verifiable requirement, not a principle. Write it down, allocate it, verify it at CDR.
- Include technology refresh planning in your requirements baseline now. Write replacement eligibility requirements for major subsystems, and include program-level requirements for periodic technology horizon assessments.
- Choose requirements tooling that will still be viable in year 15. SaaS tools on actively maintained platforms have survivability advantages over client-installed tools that depend on server infrastructure you must maintain. Graph-based tools have analytical advantages over document-based tools that compound over time as the baseline grows.
The Honest Summary
Managing requirements across a 20-year program life is hard. There is no methodology that eliminates the difficulty. Technology will change in ways that current requirements did not anticipate, interfaces will evolve, suppliers will exit markets, and the program will accumulate change requests that create drift between the baseline and the implemented system if not actively managed.
What good practice can do is reduce the cost of adaptation — ensuring that when technology changes, the requirements baseline can absorb the change with bounded impact rather than triggering a full replanning cycle. That reduction comes from requirements written to function rather than implementation, from open architecture stated as commitment rather than aspiration, from technology refresh provisions built into the baseline at program start, and from tooling that maintains traceability integrity across years of evolution rather than degrading under accumulated change volume.
Programs that do this work at program start will still face hard problems in year 15. But they will face them as engineering problems — bounded, analyzable, and solvable — rather than as institutional crises driven by a requirements baseline no one fully understands anymore.