What Hypersonic Commercialization Is Doing to Systems Engineering

Hypersonic vehicles — sustained flight above Mach 5 — have been a defense research priority for decades. What changed in the last four years is who is building them and how fast. Hermeus, Venus Aerospace, Stratolaunch, and a growing list of defense-funded startups are now fielding engineering teams measured in the hundreds, not thousands, and are being asked to deliver first flights on timelines that would have been considered reckless under any prior acquisition paradigm. The defense primes — Raytheon, Northrop Grumman, Lockheed’s Skunk Works — are simultaneously running parallel programs under classified contracts with different rules and different risk tolerances.

What all of these programs share is a requirements management problem that the industry is not fully prepared to solve. The physics of hypersonic flight create coupling between vehicle systems that makes the standard hierarchical requirements decomposition — system to subsystem to component — actively misleading. And the urgency driving these programs means organizations are frequently writing requirements and conducting tests in the same breath, with no stable baseline between them.

This article is not about whether hypersonic systems will work. It is about what building them is teaching the engineering community — and what that teaching reveals about the limits of tools and processes inherited from an earlier era.

What Hypersonic Vehicles Actually Demand of Systems Engineers

The engineering challenges of sustained hypersonic flight are well-documented in the technical literature. The practical challenge for systems engineers is that these problems are not separable.

At Mach 5 and above in the atmosphere, aerodynamic heating is not a thermal problem with structural implications. It is a thermal-structural-aerodynamic-propulsive problem that cannot be assigned to a single engineering discipline. The leading edges of a hypersonic vehicle heat to temperatures that change their geometry, which changes the aerodynamic pressure distribution, which changes the structural loads, which changes what thermal protection material survives. The propulsion system — whether a scramjet, a rotating detonation engine, or a combined-cycle powerplant — is integrated into the airframe to a degree that has no analog in subsonic or even supersonic aircraft. The inlet geometry is the vehicle forebody. The nozzle is the vehicle afterbody. There is no clean interface where propulsion ends and airframe begins.

Guidance in contested electromagnetic environments adds another layer. Hypersonic vehicles generate plasma sheaths that interrupt conventional RF communication and GPS reception. Navigation and guidance must function through periods of communication blackout, which means onboard autonomy requirements are both safety-critical and classified, and they interact with vehicle geometry through antenna placement and plasma boundary conditions.

For a systems engineer trained in the MIL-STD-499 tradition — requirements flow down, interfaces are documented, disciplines talk at specified review gates — this is a structurally different problem. The disciplines are not loosely coupled through interfaces. They are tightly coupled through physics. A requirement that specifies a maximum surface temperature at a specific flight condition is not stable: it is a function of geometry, material selection, trajectory, and propulsion state simultaneously. Change any one input and the temperature requirement recalculates.
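The instability of such a requirement can be made concrete with a toy model. The sketch below treats the allowable surface temperature as a computed function of the coupled inputs rather than a fixed number; the values and the heating proxy are purely illustrative, not real aerothermal correlations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlightState:
    mach: float
    altitude_km: float
    leading_edge_radius_mm: float  # geometry input
    material_limit_K: float        # material selection input

def allowable_surface_temp_K(state: FlightState, margin: float = 0.85) -> float:
    """Derived requirement: the material limit, de-rated by a margin
    that tightens as heating severity grows. The severity proxy is
    illustrative only -- it captures the qualitative trends (heating
    rises steeply with Mach, falls with altitude and bluntness)."""
    severity = (state.mach ** 3) / (state.altitude_km * state.leading_edge_radius_mm)
    return state.material_limit_K * margin / (1.0 + 0.01 * severity)

baseline = FlightState(mach=6.0, altitude_km=25.0,
                       leading_edge_radius_mm=5.0, material_limit_K=2200.0)
sharper = FlightState(mach=6.0, altitude_km=25.0,
                      leading_edge_radius_mm=2.0, material_limit_K=2200.0)

# A geometry change alone moves the requirement value:
print(allowable_surface_temp_K(baseline))
print(allowable_surface_temp_K(sharper))
```

The point is not the formula, which is invented, but the shape of the problem: the "requirement" is an output of the coupled system, so freezing it as a number in a document freezes an assumption about every other input.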

How Hermeus and Stratolaunch Are Organizing to Deal With This

Hermeus, which is developing the Quarterhorse hypersonic aircraft and the Darkhorse vehicle under USAF contracts, has been public about its organizational approach: small, integrated teams where aerodynamicists, structural engineers, and propulsion engineers sit together and share computational models. The company explicitly avoids the traditional IPT (Integrated Product Team) structure where disciplines synchronize at scheduled reviews. Instead, they use shared model environments where changes propagate across disciplines in near-real time.

The staffing profile reflects this. Hermeus hired heavily from SpaceX, Blue Origin, and defense contractors — people accustomed to compressed schedules and model-centric workflows. What they are learning, by multiple accounts from engineers who have spoken publicly at AIAA and other conferences, is that even model-centric organizations hit a requirements management wall. A shared CFD or FEA model tells you what the physics does. It does not automatically generate a requirements baseline that a supplier, an integrator, or a government customer can verify against.

Stratolaunch, which operates the Roc carrier aircraft and is developing the Talon-A hypersonic testbed, faces a different version of the same problem. As a test services company, Stratolaunch’s product is data — flight envelope data at Mach 5 and above for customers who are designing their own hypersonic systems. Their requirements management challenge is defining what a test point means for a customer whose own requirements baseline does not yet exist. You cannot trace a test result to a requirement that has not been written.

The defense primes approach this differently. Lockheed’s Skunk Works, Northrop’s advanced programs division, and Raytheon’s hypersonic programs operate with larger teams, more infrastructure, and — critically — more institutional tolerance for long-cycle requirements processes. They also have decades of classified program experience that shapes how they handle the interface between what can be documented and shared and what cannot.

The Test Heritage Problem

Standard aerospace systems engineering practice assumes a body of test heritage on which requirements are partly grounded. When you specify a thermal protection material, you can point to ground test data, prior flight data, and material characterization databases built over decades. Hypersonic programs at the leading edge are operating at conditions — specific Mach numbers, specific altitude bands, specific combined aero-heating and structural load environments — where that heritage is thin or nonexistent.

The consequence is that simulation is not a verification tool in these programs. It is a requirements-generating tool. Wind tunnel tests, computational fluid dynamics runs, and coupled multi-physics simulations are being used to establish what the requirements should be, not to verify that requirements have been met. This inversion is poorly supported by most requirements management frameworks, which assume requirements are defined before testing begins.

When a Mach 7 arc jet test on a candidate TPS (Thermal Protection System) material reveals that the material fails in a mode that was not predicted by the pre-test simulation, the program has three options: update the material requirement, update the simulation model, or accept the test result as a one-off anomaly. Each choice has downstream consequences for every requirement that depends on the TPS performance. In a document-centric requirements system, each of those consequences must be manually traced and updated. In programs where the test cadence is measured in months and the development timeline is measured in years, this manual reconciliation process becomes the limiting constraint on program velocity.
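The reconciliation burden scales with the transitive dependency set, which is straightforward to illustrate. A minimal sketch (the requirement IDs and links below are invented for illustration) of the question "which requirements must be re-examined when TPS performance changes?":

```python
from collections import deque

# Hypothetical downstream links: "if X changes, re-examine Y".
depends_on = {
    "REQ-TPS-01": ["REQ-STRUCT-04", "REQ-AERO-02"],
    "REQ-STRUCT-04": ["REQ-ACT-07", "REQ-MASS-01"],
    "REQ-AERO-02": ["REQ-TRAJ-03"],
    "REQ-TRAJ-03": ["REQ-PROP-05"],
}

def downstream(changed: str, links: dict) -> set:
    """Breadth-first walk: every requirement transitively affected
    by a change to `changed`."""
    affected, queue = set(), deque([changed])
    while queue:
        for child in links.get(queue.popleft(), []):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected

print(sorted(downstream("REQ-TPS-01", depends_on)))
```

In a document-centric system this walk is performed by engineers reading link tables, one hop at a time; the reconciliation cost is the cost of running this traversal by hand after every test.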

The Classified/Unclassified Interface

Every hypersonic program of any scale in the United States today has a classified component. For defense-funded commercial programs, this creates an architectural challenge that is rarely discussed in open forums: the requirements management system splits at the classification boundary.

The unclassified program elements — airframe geometry details that have been publicly disclosed, basic propulsion cycle information, non-sensitive test results — can be managed in commercial SaaS tools accessible to the broader engineering team. The classified elements — seeker performance, specific flight trajectories, electronic warfare interactions, warhead integration — must be managed in government-approved classified information systems, typically air-gapped networks inside cleared facilities.

The interface between these two worlds is where integration failures hide. An unclassified requirement for vehicle control authority at a given flight condition may depend on a classified requirement for terminal guidance accuracy that the structural team does not have access to. The structural engineer specifying actuator load limits does not know why those limits are what they are. The systems engineer responsible for vehicle-level closure cannot hold a single integrated model that contains all the dependencies.

Programs manage this through cleared systems engineers who can see both sides of the classification boundary and act as translation layers. This works, but it creates single points of failure in the requirements chain and makes automated traceability — knowing which downstream requirements change when an upstream requirement changes — effectively impossible across the classification boundary.

The government customer is aware of this problem. The DoD’s hypersonic development programs, including DARPA’s various hypersonic initiatives and the Air Force Research Laboratory’s programs, have invested in classified requirements management infrastructure. But the commercial startups operating on OTA (Other Transaction Authority) contracts often do not have the cleared facility infrastructure to mirror that investment, and the requirements management gap is real.

What Hypersonics Is Borrowing and What It Is Inventing

Hypersonic programs are drawing heavily from three predecessor domains: reentry vehicle development (the Atlas, Titan, and other ICBM programs of the 1950s–70s), the Space Shuttle thermal protection system program, and recent commercial space development practice.

From reentry vehicle work, they are borrowing ablative TPS design methodology and the institutional knowledge about high-enthalpy flow characterization that lives in national labs and government test centers — Sandia, AEDC, NASA Langley. From the Shuttle program, they are borrowing material databases and some structural analysis methods for carbon-carbon composites and ceramic tiles. From commercial space, they are borrowing organizational models: small teams, model-based workflows, rapid iteration, tolerance for anomalies as learning events rather than program-stopping failures.

What they are inventing, or attempting to invent, is the requirements methodology for systems with these coupling characteristics and this uncertainty profile. The closest existing framework is probably Model-Based Systems Engineering (MBSE), which replaces document-centric requirements with interconnected model elements that can represent dependencies explicitly. But MBSE as practiced in most defense programs is still heavily document-like — SysML diagrams that are essentially structured documents rather than live computational models.

The genuine innovation being attempted in programs like Hermeus and at AFRL is tighter integration between the physics models (CFD, FEA, trajectory simulation) and the requirements model — so that when the physics model changes, the requirements implications are computed rather than manually inferred. This is model-based systems engineering in a more literal sense: the system model and the requirements model are the same artifact, or at minimum tightly coupled.
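One way to read "the requirements implications are computed" is that each requirement carries an executable check against the current physics model output, so re-running the model re-verifies the baseline. A minimal sketch of that idea, with invented outputs and thresholds standing in for real coupled simulation results:

```python
from typing import Callable

# Each requirement is a named predicate over the physics model's
# outputs, not a sentence in a document.
requirements = [
    ("REQ-THERM-01: leading-edge temp within material limit",
     lambda out: out["le_temp_K"] <= out["material_limit_K"]),
    ("REQ-STRUCT-02: wing root load within allowable",
     lambda out: out["root_load_kN"] <= 480.0),
]

def verify(model_output: dict) -> list:
    """Return the requirements violated by the current model output."""
    return [name for name, check in requirements if not check(model_output)]

# Pretend two coupled aero-thermal-structural runs produced these outputs:
run_a = {"le_temp_K": 1750.0, "material_limit_K": 1900.0, "root_load_kN": 455.0}
run_b = {"le_temp_K": 1960.0, "material_limit_K": 1900.0, "root_load_kN": 455.0}

print(verify(run_a))  # no violations
print(verify(run_b))  # thermal requirement flagged
```

Real programs need far more than predicates — margins, uncertainty bands, configuration control — but the inversion is the same: the requirement is evaluated against the model rather than reconciled with it by hand.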

What Modern Tooling Offers

The requirements tools most widely deployed in defense programs — IBM DOORS, DOORS Next, Jama Connect, Polarion — were architected around documents and attributes. They manage requirements as text objects with links between them. They are effective for programs where the requirements hierarchy is relatively stable and the interdependencies are manageable through manual review. They are poorly suited to programs where requirements are generated by simulation, where the coupling between requirements is dense and computational rather than logical, and where the baseline changes faster than manual link maintenance can track.

Some hypersonic program teams are adopting graph-native requirements tools that represent the requirements model as a network of connected elements rather than a hierarchy of documents. Flow Engineering, which is built specifically for hardware and systems engineering teams working on complex interdependent systems, takes this approach — representing requirements, design parameters, test events, and interface definitions as nodes in a connected graph where the relationships carry semantics, not just links. For programs where a single material selection change needs to propagate through thermal, structural, aerodynamic, and propulsion requirements automatically, that graph structure is functionally necessary, not a convenience.
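The difference between semantic relationships and plain links is that the edge type determines what a change means downstream. A toy sketch of that distinction (the node names and edge types below are invented for illustration, not any tool's actual data model):

```python
# Edges carry a type: "derives" means a change invalidates the target;
# "verifies" means a change only makes test evidence stale.
edges = [
    ("MAT-CC-01", "derives", "REQ-THERM-03"),   # material -> thermal req
    ("REQ-THERM-03", "derives", "REQ-STRUCT-06"),
    ("REQ-STRUCT-06", "derives", "REQ-PROP-02"),
    ("TEST-ARC-09", "verifies", "REQ-THERM-03"),
]

def invalidated_by(node: str) -> set:
    """Everything whose derivation chain passes through `node`."""
    out, frontier = set(), [node]
    while frontier:
        src = frontier.pop()
        for a, kind, b in edges:
            if a == src and kind == "derives" and b not in out:
                out.add(b)
                frontier.append(b)
    return out

def reopened_tests(reqs: set) -> set:
    """Test events whose evidence is stale once a requirement changes."""
    return {a for a, kind, b in edges if kind == "verifies" and b in reqs}

stale = invalidated_by("MAT-CC-01")
print(sorted(stale))          # downstream requirements to re-baseline
print(reopened_tests(stale))  # test evidence to re-run
```

With untyped links, both questions collapse into one undifferentiated "something connected changed" signal; the semantics are what let the traversal distinguish requirements that must be re-derived from evidence that must be re-collected.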

The honest limitation is that no commercial requirements tool currently solves the classified/unclassified split. That problem lives at the intersection of information security policy and systems architecture, and it requires organizational solutions — cleared requirements managers, carefully defined interface documents at the classification boundary — not just tool solutions.

The Honest Assessment

Hypersonic commercialization is accelerating the exposure of a gap in aerospace systems engineering practice that long predates Hermeus or Stratolaunch: the gap between how requirements are managed (in documents, hierarchically, by discipline) and how the actual physics of complex vehicles behaves (continuously, coupled, across disciplines).

Programs with sufficient resources and institutional patience — the Skunk Works programs, the DARPA flagship efforts — can absorb this gap through expensive human coordination. Lean startups on compressed timelines cannot. They are being forced to invent requirements management practice faster than the standards community can ratify it.

The organizations learning fastest are the ones treating this not as a process compliance problem — how do we satisfy MIL-STD-882 and DO-178C? — but as an information architecture problem. What does a requirements model need to look like so that it accurately represents how this vehicle actually behaves? The answer, emerging from practice rather than from standards committees, is: more like a graph and less like a document. More connected to physics models and less separated from them. More explicit about uncertainty and less committed to false precision in requirements text.

Whether the timelines being imposed by defense urgency will allow that learning to mature into reliable practice before first flights is the question that systems engineers working on these programs are losing sleep over. The physics does not care about the program schedule. The requirements process needs to be honest about that, even when the program is not.