Requirements Management in Space: What’s Actually Being Used in 2026

The space industry runs on requirements. Every NASA contract deliverable, every ESA milestone review, every FAA launch license involves a requirements baseline that someone has to write, trace, verify, and maintain. That process has not fundamentally changed in decades — and in 2026, that fact is becoming a serious operational liability.

The divergence between how heritage prime contractors and new space companies approach requirements management has never been sharper. Heritage programs treat requirements as controlled documentation artifacts. New space programs treat them as living system data. Both are responding to real pressures. The results are very different.

What the Heritage Primes Are Actually Running

Lockheed Martin, Northrop Grumman, Boeing, and Airbus Defence and Space are, in most cases, running IBM DOORS or DOORS Next as their primary requirements management infrastructure. This is not surprising. These tools were specified into many contracts explicitly — some government program offices still list DOORS compliance as a proposal requirement — and replacing a requirements backbone across a multi-billion-dollar program carries risks that no program manager wants to own.

DOORS in its classic client-server form remains in active use on programs that began before 2010 and have not completed. The tool handles deep module hierarchies and attribute-heavy requirements structures effectively. For stable, contractually frozen requirements — the kind that define an interface between a satellite bus and a payload on a 15-year mission — it performs the function it was built for.

DOORS Next (part of IBM ELM) is the migration target for programs that have modernized. It introduces a web interface, better cross-tool integration through Open Services for Lifecycle Collaboration (OSLC), and improved baseline management. Adoption has been uneven. Programs that migrated report mixed results: the web interface is more accessible, but teams that depended on complex DOORS module scripting have faced real transition friction.

Jama Connect has taken share at mid-tier primes and at defense contractors doing space-adjacent work — missile systems, launch vehicles with significant defense heritage. Its strengths are review management, with real-time collaboration on requirement reviews and a cleaner interface than legacy DOORS. It does not match DOORS’ depth on large module hierarchies, but for programs below a certain size, that trade is acceptable. Polarion and Codebeamer appear in similar niches, particularly in European programs where DOORS licensing costs and legacy procurement patterns differ from U.S. norms.

The honest picture at heritage primes: requirements are largely controlled, but the infrastructure is showing its age. Traceability matrices are often maintained as separate exports. Impact analysis — understanding what changes when a requirement changes — requires significant manual effort or custom tooling layered on top of the baseline RM platform. Integration with model-based systems engineering (MBSE) environments like Cameo or Rhapsody is possible but requires deliberate investment to make it work well.

What New Space Is Actually Doing

SpaceX does not publish its internal tooling stack. What is observable from job descriptions, engineering publications, and people who have worked there is that requirements traceability at SpaceX operates differently from aerospace norms. Requirements exist, but they are embedded in a rapid iteration process where the fastest path from requirement to test to flight is the organizing principle. The tooling follows that process, not the other way around.

Rocket Lab, Planet, Relativity Space (before its pivot), and the broader commercial launch and satellite manufacturing sector have shown consistent behavior: they start with something lightweight — Confluence, Notion, JIRA with custom schemas — and eventually hit a wall when they need to demonstrate traceability for a government contract, a launch license, or an insurance requirement. The wall usually arrives around Series B or when they win their first cost-plus government program.

That collision between startup-speed tooling and aerospace-rigor requirements is where a lot of the real action in requirements management tooling is happening in 2026.

The Traceability Gap Is Getting Expensive

The most consistent failure mode in both segments is traceability — specifically, the inability to quickly answer: which requirements are affected by this change, and what tests verify those requirements?

In heritage programs, this manifests as sprawl in the Requirements Traceability Matrix (RTM). Matrices maintained in Excel or exported from DOORS fall out of sync with the actual requirement baseline. Engineers spend days reconstructing traces for milestone reviews. When a requirement changes late in a program, the impact assessment is a manual exercise that takes weeks.
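The drift itself is mechanically easy to detect, which is part of what makes the manual reconstruction so frustrating. A minimal sketch, with illustrative data structures rather than any specific tool's export format, of checking an exported RTM against the live requirement baseline:

```python
# Sketch: detecting drift between an exported RTM and the live requirement
# baseline. The Requirement class, revision scheme, and RTM row shape are
# illustrative assumptions, not any particular tool's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    req_id: str
    revision: int          # increments with every approved change
    text: str

def find_stale_traces(baseline: dict[str, Requirement],
                      rtm_rows: list[dict]) -> list[str]:
    """Return a message for each RTM row whose requirement is missing or outdated."""
    problems = []
    for row in rtm_rows:
        req = baseline.get(row["req_id"])
        if req is None:
            problems.append(f"{row['req_id']}: no longer in baseline")
        elif req.revision != row["revision"]:
            problems.append(
                f"{row['req_id']}: RTM has rev {row['revision']}, "
                f"baseline is at rev {req.revision}")
    return problems

baseline = {
    "SYS-001": Requirement("SYS-001", 3, "Bus shall supply 28 V +/- 4 V."),
    "SYS-002": Requirement("SYS-002", 1, "Payload mass shall not exceed 150 kg."),
}
rtm = [
    {"req_id": "SYS-001", "revision": 2, "verified_by": "TST-114"},
    {"req_id": "SYS-003", "revision": 1, "verified_by": "TST-090"},
]
print(find_stale_traces(baseline, rtm))
# ['SYS-001: RTM has rev 2, baseline is at rev 3', 'SYS-003: no longer in baseline']
```

The point is not that this check is hard to write; it is that document-centric workflows rarely run it continuously, so the drift surfaces only at review time.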

In new space programs, the failure mode is different but equally costly. Moving fast with document-based or wiki-based requirements management works until it doesn’t. When a customer demands a compliance matrix, when a launch anomaly requires root cause traced back to a system requirement, or when a key engineer leaves and their requirement rationale leaves with them — the cost of lightweight tooling becomes visible.

The underlying problem in both cases is the same: requirements are stored as text in documents, not as nodes in a connected system model. Traceability is added on top as a separate activity, rather than being a natural property of how requirements relate to each other and to the system architecture.
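To make the contrast concrete: when requirements are nodes with typed links rather than rows in a document, "what verifies this requirement" becomes a query rather than a maintained artifact. A minimal sketch, with link types and identifiers that are illustrative assumptions:

```python
# Sketch: requirements and their relationships as a small directed graph.
# Link types ("verified_by", "derives_from") and IDs are illustrative.

from collections import defaultdict

class TraceGraph:
    def __init__(self):
        # node -> set of (link_type, target) pairs
        self.edges = defaultdict(set)

    def link(self, src: str, link_type: str, dst: str) -> None:
        self.edges[src].add((link_type, dst))

    def verified_by(self, req_id: str) -> list[str]:
        """Tests linked to this requirement -- a query, not a separate matrix."""
        return sorted(t for lt, t in self.edges[req_id] if lt == "verified_by")

g = TraceGraph()
g.link("SYS-010", "verified_by", "TST-201")
g.link("SYS-010", "verified_by", "TST-202")
g.link("SUB-044", "derives_from", "SYS-010")
print(g.verified_by("SYS-010"))   # ['TST-201', 'TST-202']
```

In a model like this, traceability cannot go stale independently of the requirements, because the links are the same data structure the requirements live in.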

What Agile Space Development Actually Demands

The term “agile” gets applied loosely in aerospace. What it usually means in practice for new space programs is: shorter development cycles, more frequent hardware builds, and a willingness to accept that not every requirement will survive contact with the first prototype. This is a legitimate engineering philosophy. It is also poorly served by requirements management practices designed for waterfall programs with 10-year development timelines.

Agile space development does not reduce the need for requirements. If anything, it increases it. When you are iterating quickly, you need to know precisely what each version of the system is supposed to do, which requirements changed between iterations, and what the test coverage looks like at any given moment. A document that was accurate in January is not necessarily accurate in March if the design has evolved.

What agile space needs from requirements management: continuous traceability (not point-in-time snapshots), easy baseline comparison, and the ability to propagate requirement changes into downstream work — test cases, verification plans, interface control documents — without a separate manual update cycle. Most current tools, including DOORS and Jama, were not designed for this workflow. They can be configured to support it, but the configuration effort is significant.
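Baseline comparison, in particular, is conceptually just a set difference over requirement identifiers and content. A minimal sketch, with hypothetical baseline snapshots standing in for whatever a real tool would export:

```python
# Sketch: comparing two requirement baselines so each iteration can report
# exactly what changed. Baselines are modeled as simple id -> text mappings;
# a real tool would carry revisions and attributes as well.

def diff_baselines(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    return {
        "added":   sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "changed": sorted(r for r in old.keys() & new.keys() if old[r] != new[r]),
    }

jan = {"SYS-001": "Thrust >= 25 kN", "SYS-002": "Dry mass <= 900 kg"}
mar = {"SYS-001": "Thrust >= 27 kN", "SYS-003": "Engine reusable for 10 flights"}
print(diff_baselines(jan, mar))
# {'added': ['SYS-003'], 'removed': ['SYS-002'], 'changed': ['SYS-001']}
```

The hard part is not the diff; it is ensuring the baselines exist as structured data at every iteration boundary, so the January-versus-March question has a machine-checkable answer.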

Where the New Tooling Is Gaining Ground

The past 18 months have seen a cohort of AI-native and graph-native requirements tools establish footholds in programs where the legacy tools are not a contractual requirement. Flow Engineering has been gaining traction specifically in new space programs where teams are running fast development cycles and need requirements traceability to be continuous rather than periodic.

The distinction that matters here is architectural. Flow Engineering represents requirements as nodes in a connected graph rather than as rows in a document or a database table. Traceability between requirements, system functions, verification methods, and test evidence is a structural property of the model, not a manually maintained overlay. When a requirement changes, the impact on connected requirements and downstream artifacts is immediately visible — not after someone runs an export and updates a matrix.
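The mechanism behind "immediately visible" impact is ordinary graph traversal: everything reachable from the changed requirement over trace links is the candidate impact set. A minimal sketch, with a hypothetical link structure (the IDs and dependency shape are invented for illustration):

```python
# Sketch: impact analysis as a breadth-first traversal over trace links.
# The link table below is an illustrative assumption, not real program data.

from collections import deque

links = {   # src -> artifacts that depend on it
    "SYS-020": ["SUB-101", "SUB-102"],
    "SUB-101": ["TST-310", "ICD-007"],
    "SUB-102": ["TST-311"],
}

def impact_set(changed: str) -> set[str]:
    """Everything reachable downstream of a changed requirement."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in links.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(impact_set("SYS-020")))
# ['ICD-007', 'SUB-101', 'SUB-102', 'TST-310', 'TST-311']
```

Computed on demand like this, the impact set is always current with the link data, which is the structural advantage over an export-and-update matrix workflow.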

For a small satellite program running monthly hardware builds, that difference is operationally significant. The alternative — manual RTM maintenance across a fast-moving development cycle — creates a traceability lag that compounds quickly. By the time a formal review arrives, reconciling the requirements baseline to what was actually built and tested is a significant program risk in itself.

AI-native tooling also changes how requirements are generated and refined. Natural language processing applied to customer statements of work, interface control documents, and heritage requirements databases can accelerate the front-end work of requirements decomposition. That is genuinely useful in new space programs where the requirements engineering staff is small relative to the program scope.

The Honest Assessment

The space industry’s requirements management tooling is bifurcated in a way that reflects a genuine philosophical disagreement about how aerospace engineering should work in 2026, not just a technology adoption lag.

Heritage primes are not wrong to run DOORS. Their programs are large, their contracts have established tooling requirements, and the cost of migration exceeds the operational benefit for many programs that are already in execution. Where they should invest is in the integration layer: connecting DOORS to MBSE tools, automating traceability verification, and modernizing the impact analysis workflow.

New space companies that are still running requirements in Confluence are not wrong about wanting to move fast. They are wrong to assume that requirements rigor can be deferred until a formal review forces the issue. The cost of retrofitting traceability into a mature design is almost always higher than building it in from the start.

The tools that will win in new space are the ones that make rigor feel like a natural property of the development process rather than a compliance overhead added on top of it. Graph-based, AI-assisted, continuously traced requirements management is not a future state — it is in use now on programs where speed and accountability have to coexist.

The gap between what the best current tooling can do and what most programs are actually doing is significant. Closing it is less a technology problem than a practice problem. The tools exist. The workflow changes required to use them well are the harder part.