How Do You Manage Requirements on a Program Where the Customer Keeps Changing the Mission Concept?

Let’s be direct: this is one of the most common problems in defense and space programs, and the standard advice — “baseline early, control changes rigorously” — is not wrong, but it is incomplete. Applied without nuance, it produces a program that is technically compliant and operationally irrelevant, or one buried under waiver requests before it reaches PDR.

The honest answer involves several things simultaneously: understanding why ConOps volatility happens and where it actually sits in the requirements hierarchy, making deliberate architectural decisions about which requirements can afford to float and which cannot, and using tooling that can keep pace with iteration without collapsing your traceability model. None of these are easy. All of them are tractable.

Why the ConOps Keeps Changing (and Why That’s Not Always the Customer’s Fault)

On most defense programs, the Concept of Operations is authored before the system is well understood. This is not negligence — it is the nature of early-phase acquisition. The customer is trying to close a capability gap against a threat environment that is itself evolving. For space programs, particularly commercial LEO constellations and responsive-launch vehicles, the mission concept may be tied to a business model that is still being validated against market reality.

The technical mistake is treating the ConOps as stable input when it is actually an evolving hypothesis. When that hypothesis changes — and it will — the damage propagates downward through the requirements hierarchy if the hierarchy was not designed to absorb it.

In practice, that propagation looks like this: the customer updates their operational scenario, which changes a capability need, which invalidates a derived system requirement, which puts a subsystem interface under review, which triggers an engineering change proposal against hardware that is already in procurement. At that point, the choice is a waiver, a redesign, or a scope negotiation — none of which are free.
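That cascade can be made concrete with a toy trace model. The requirement IDs, links, and the ECP below are illustrative, not from any real program:

```python
# Toy downstream-trace model: each item lists the artifacts it drives.
# All IDs and links here are hypothetical, for illustration only.
TRACES = {
    "ConOps-Scenario-3": ["CAP-07"],           # operational scenario -> capability need
    "CAP-07": ["SYS-112"],                     # capability -> derived system requirement
    "SYS-112": ["IF-PAYLOAD-04"],              # system requirement -> subsystem interface
    "IF-PAYLOAD-04": ["ECP-0042 (hardware)"],  # interface under review -> change proposal
}

def downstream(item, traces=TRACES):
    """Return everything reachable from `item` via trace links (breadth-first)."""
    impacted, frontier = [], [item]
    while frontier:
        current = frontier.pop(0)
        for child in traces.get(current, []):
            if child not in impacted:
                impacted.append(child)
                frontier.append(child)
    return impacted

print(downstream("ConOps-Scenario-3"))
```

A single scenario update reaches hardware that is already in procurement, which is exactly why the hierarchy has to be designed to absorb the change higher up.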

The Structural Solution: Layered Requirements Architecture

The standard systems engineering handbook tells you that requirements decompose from stakeholder to system to subsystem. What it does not always emphasize is that this layering is also your volatility management architecture. Each layer should be designed with a different tolerance for change.

Stakeholder Requirements (StRS) are where mission-concept volatility belongs. These requirements describe what the mission must accomplish, expressed in operational terms. They should be written at a level of abstraction that accommodates reasonable scenario variation. “The system shall provide persistent surveillance of a 200 km × 200 km area under specified lighting and weather conditions” is a stakeholder requirement. The specific orbital regime that achieves it is a derived decision.

When the ConOps changes, your first question should be: does this change the operational outcome required, or does it change the operational approach? If only the approach changes, the StRS may not need to change at all. If the outcome changes, you revise the StRS — and that revision triggers a structured impact assessment downward, not an uncontrolled cascade.

System Requirements (SyRS) allocate the operational outcome to a system architecture. These are more stable than StRS because they are derived from the outcomes, not the scenarios. But they are still abstract enough to accommodate some architectural variation. “The payload shall achieve a ground resolution of no worse than 0.5m in panchromatic mode at nadir” is a system requirement. Which focal plane achieves it is a lower-level decision.

Subsystem and component requirements are where hardware-driving specifications live. These must be stabilized earliest, because they control lead time decisions — detector procurement, structural design, thermal architecture. These should not change unless the system requirement that drove them changes. If you have done your architecture correctly, most ConOps evolutions will not reach this level.
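One way to make the layering explicit is to attach a change-tolerance attribute and a freeze milestone to each layer. The specific values below are illustrative assumptions; real programs tie these to their own review gates:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    name: str
    change_tolerance: str  # how much ConOps volatility this layer is expected to absorb
    freeze_milestone: str  # when this layer's requirements are expected to stabilize

# Illustrative values only; not a standard.
HIERARCHY = [
    Layer("Stakeholder (StRS)",  change_tolerance="high",   freeze_milestone="ConOps validation"),
    Layer("System (SyRS)",       change_tolerance="medium", freeze_milestone="SRR"),
    Layer("Subsystem/Component", change_tolerance="low",    freeze_milestone="PDR / long-lead procurement"),
]

def freeze_order(hierarchy=HIERARCHY):
    """Layers with the lowest change tolerance must be stabilized first."""
    rank = {"low": 0, "medium": 1, "high": 2}
    return sorted(hierarchy, key=lambda layer: rank[layer.change_tolerance])

print([layer.name for layer in freeze_order()])
```

The point of writing it down this way is that the freeze order falls out of the tolerance attribute rather than being renegotiated change by change.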

Baselining Strategies for Programs with Immature ConOps

Three approaches have real-world track records on defense and space programs with genuine ConOps immaturity.

Provisional Baselines with Structured Exit Criteria

Rather than delaying all baselining until the ConOps stabilizes, establish a provisional baseline for each requirements layer with explicit exit criteria that trigger a formal re-baseline. For example: “This SyRS is baselined for purposes of enabling subsystem design activities. It will be formally re-baselined at SRR following ConOps validation by the customer. Changes prior to that point are managed at the program level without formal ECP routing.”

This gives the hardware teams something to design to. It gives the customer a defined window for ConOps revision without catastrophic cost impact. It makes the cost of a late ConOps change visible — and visibility creates customer discipline.

Tiered Change Authority

Not all requirements changes cost the same, and not all of them need the same level of review. A tiered change authority structure assigns change approval authority based on the impact zone of the change. Stakeholder requirements changes that do not ripple below the system level go through a lighter-weight process — program manager approval, rapid documentation update, no ECP. Changes that touch hardware-driving requirements go through the full board. The critical enabler is a reliable impact-tracing capability, so you know quickly and accurately which tier a proposed change falls into.
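The routing rule can be sketched as a function of the deepest requirements layer a proposed change touches. The tier names and thresholds here are assumptions for illustration, not a mandated structure:

```python
# Illustrative tiering rule: route a proposed change by the deepest
# requirements layer its impact trace reaches. Tier names are hypothetical.
LAYER_DEPTH = {"stakeholder": 0, "system": 1, "subsystem": 2, "component": 3}

def change_tier(impacted_layers):
    """Return the review tier for a change, given the layers its trace touches."""
    deepest = max(LAYER_DEPTH[layer] for layer in impacted_layers)
    if deepest == 0:
        return "Tier 1: program manager approval, documentation update, no ECP"
    if deepest == 1:
        return "Tier 2: chief engineer review, expedited board"
    return "Tier 3: full change control board, formal ECP"

print(change_tier({"stakeholder"}))
print(change_tier({"stakeholder", "system", "subsystem"}))
```

The function is trivial; the hard part it depends on is the input, which is why the impact-tracing capability is named as the critical enabler.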

Requirements Segmentation by Hardware Dependency

Explicitly tag requirements by their hardware dependency status: “hardware-driving,” “software-driving,” or “operationally-defined.” Hardware-driving requirements are frozen earliest and protected most aggressively. Software-driving requirements can remain fluid longer because software is cheaper to change than hardware. Operationally-defined requirements — things like operating procedures, crew interfaces, data products — can be deferred almost entirely.
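The tagging itself can be as simple as an attribute on each requirement, queried when deciding what must freeze before long-lead procurement. In practice these tags live in the requirements tool; the IDs and requirement text below are invented for illustration:

```python
# Illustrative requirement set with hardware-dependency tags.
REQUIREMENTS = [
    {"id": "SYS-101", "tag": "hardware-driving",      "text": "Ground resolution <= 0.5 m at nadir"},
    {"id": "SYS-140", "tag": "software-driving",      "text": "Retask within 90 s of command receipt"},
    {"id": "OPS-012", "tag": "operationally-defined", "text": "Operator shift-handover procedure"},
]

def must_freeze_before_procurement(reqs=REQUIREMENTS):
    """Hardware-driving requirements are frozen first and protected hardest."""
    return [r["id"] for r in reqs if r["tag"] == "hardware-driving"]

def can_stay_fluid(reqs=REQUIREMENTS):
    """Software-driving and operationally-defined requirements can change later."""
    return [r["id"] for r in reqs if r["tag"] != "hardware-driving"]

print(must_freeze_before_procurement())
print(can_stay_fluid())
```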

On a recent defense ISR program, this segmentation strategy allowed the program to absorb three significant ConOps updates during EMD without any hardware redesign, because the updates affected operational tempo and data exploitation workflow, neither of which drove hardware specifications.

The Tradeoff: Locking Early vs. Staying Flexible

This tradeoff is real, and anyone who tells you there is a clean solution is selling something. The honest version looks like this:

Locking hardware-driving requirements early enables procurement, drives down lead time risk, and gives you a defensible baseline for cost and schedule estimation. The cost is that if a hardware-driving requirement turns out to be wrong — because the ConOps changed in a way you did not anticipate — you are looking at a waiver, a deviation, or a redesign. On defense programs, waivers are not free. They consume engineering hours, program office attention, and sometimes contractual standing.

Staying flexible preserves the ability to get the system right. The cost is that hardware procurement cannot start until requirements are stable, which compresses schedule, increases risk, and sometimes forces a parallel-path procurement strategy that is expensive by design.

The practical resolution is not to choose one or the other, but to be explicit about which requirements are locked and why, and to make the cost of a change to those locked requirements visible and attributable. When the customer asks for a ConOps change, the program’s job is to immediately and accurately answer: “Here is what that change touches, here is what it would cost, and here is when you need to decide.” That answer requires traceability infrastructure that is current, readable, and fast.

How Modern Tools Handle Rapid Requirements Iteration

This is where tooling matters operationally, not just as a compliance artifact.

Traditional requirements management tools — IBM DOORS, DOORS Next, Polarion — were built for a world where requirements change was relatively infrequent and controlled. They have deep formal compliance capabilities, strong change history, and proven track records on large programs. Their architecture assumes that you are primarily managing a document, with change as an exceptional event to be recorded. When the ConOps is genuinely volatile, that architecture creates friction: update cycles are slow, impact traces require manual maintenance, and the cognitive overhead of navigating a deeply nested module structure makes rapid iteration painful.

Flow Engineering (flowengineering.com) was built specifically for the kind of iterative requirements work that volatile programs require. Its data model is graph-based rather than document-based, which means requirements relationships — traces, derivations, allocations, conflicts — are first-class objects in the data model rather than annotation layers on a document. When a stakeholder requirement changes, the system surfaces its downstream dependents immediately and accurately, without requiring a manual trace audit.
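The difference between a document annotation and a first-class relationship can be sketched with typed edges. This is a generic illustration of a graph-based trace model, not Flow Engineering's actual schema; all IDs and edge types are assumptions:

```python
# Generic graph-of-requirements sketch: edges carry a relationship type,
# so an impact query is a traversal rather than a manual document audit.
EDGES = [
    ("StRS-04", "derives",   "SYS-112"),
    ("SYS-112", "allocates", "SUB-PAY-07"),
    ("SYS-112", "allocates", "SUB-COMM-02"),
    ("SYS-118", "conflicts", "SYS-112"),
]

def downstream_dependents(req_id, edge_types=("derives", "allocates"), edges=EDGES):
    """Everything reachable from req_id over the given relationship types."""
    found, frontier = [], [req_id]
    while frontier:
        node = frontier.pop(0)
        for src, kind, dst in edges:
            if src == node and kind in edge_types and dst not in found:
                found.append(dst)
                frontier.append(dst)
    return found

print(downstream_dependents("StRS-04"))
```

Because relationship type is part of the data, the same graph answers different questions (derivation impact, allocation coverage, conflict detection) without re-auditing the document.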

For programs managing an evolving ConOps, this matters because the daily workflow is not “write requirements and freeze them” — it is “test a requirement formulation against the current mission understanding, assess its downstream impact, revise, and do it again.” Flow Engineering’s iteration model is designed for that workflow. Its AI-assisted requirement generation and review capabilities can also accelerate the process of translating ConOps language — which is typically operational and scenario-based — into formal system requirements, which is one of the most time-consuming and error-prone steps in the process.

The deliberate trade-off Flow Engineering makes is giving up some of the depth of formal process compliance built for large, mature programs with established change control bureaucracies. If your program requires DOORS-native format exports for a prime contractor data submission or has a contractually mandated tool chain, that is a real integration consideration. For programs earlier in their lifecycle, or for teams that need to move fast on an immature ConOps, that trade is usually worth making.

Honest Summary

Managing requirements on a program with an evolving ConOps is not a failure of discipline — it is a systems engineering challenge that requires deliberate architectural decisions. Structure your requirements hierarchy so that volatility is absorbed at the stakeholder level. Baseline hardware-driving requirements explicitly and early, and make the cost of changing them immediately visible to the customer. Use tiered change authority and provisional baselines to keep the program moving without freezing prematurely.

The programs that do this well have two things in common: they treat requirements architecture as a first-class design decision, and they have tooling that can maintain accurate traceability through rapid iteration. The programs that struggle are the ones that inherit a flat requirement set, a legacy document tool, and a customer who is still figuring out the mission — and try to manage through it with manual discipline alone.

That approach runs out of discipline before the program runs out of ConOps changes.