How Defense Primes Are Restructuring Systems Engineering for Multi-Domain Operations
The phrase “multi-domain operations” has been in defense planning documents long enough that it risks becoming wallpaper. But beneath the doctrine, something structurally significant is happening inside the engineering organizations at Lockheed Martin, Raytheon Technologies (now RTX), Northrop Grumman, and Boeing Defense. These companies are reorganizing how systems engineering is practiced—not just at the program level, but at the enterprise level—and the requirements management implications are consequential enough to warrant a clear-eyed look at what’s actually changing versus what’s still aspiration.
What Multi-Domain Operations Actually Demand from Systems Engineering
Joint All-Domain Command and Control (JADC2) isn’t a product. It’s a capability concept that requires systems designed across the Air Force, Army, Navy, Space Force, and cyber operators to share sensor data, targeting solutions, and effects in near-real time. What makes this hard for systems engineers isn’t the vision—it’s the structural mismatches between the domains being connected.
Air systems operate on latency tolerances measured in milliseconds. Space-based assets have orbital mechanics constraints that no amount of software can override. Naval platforms carry legacy systems with communication stacks that predate IP networking. Cyber operations span classification levels that change based on what’s happening in the environment, not just what’s stored on a system. Land forces deal with denied, degraded, intermittent, and low-bandwidth (DDIL) environments as a baseline condition.
Designing a system that must interoperate across all of these simultaneously means requirements are no longer the property of a single program office. They exist in a web of dependencies that cuts across programs, services, classification levels, and operational timelines. That’s a fundamentally different engineering problem than designing an aircraft.
How the Four Primes Are Reorganizing
Each of the major primes has responded differently, reflecting their portfolio mix, legacy organizational structures, and where their biggest MDO contract exposure sits.
Lockheed Martin has moved furthest toward a formal enterprise systems engineering function that operates above the program level. Their 21st Century Security initiative, which has been evolving over the past several years, was explicitly about building a “system of systems” layer that could span F-35, LRASM, Space Fence, and next-generation programs under a unified integration architecture. The organizational consequence is a growing enterprise architecture group that sits inside the chief technology office rather than under any single business area. Program SEs now work in a matrix relationship with that enterprise group, which owns the cross-domain interface specifications and manages the top-level CONOPS-to-requirements decomposition.
The tradeoff is tension. Program SEs are measured on program performance. Enterprise architects are measured on integration coherence. When those objectives conflict—and they regularly do, around interface specification ownership and requirements change authority—the resolution process is still maturing.
RTX (formed by the 2020 merger of Raytheon Company and United Technologies, which brought Collins Aerospace and Pratt & Whitney into the combined portfolio) faces a different structural challenge. Their multi-domain exposure comes primarily through Raytheon Intelligence & Space (now part of the reorganized RTX structure) and through Pratt’s propulsion role in platforms that participate in MDO concepts. The organizational response has been less a centralized enterprise SE function and more a push toward common model-based systems engineering (MBSE) standards that can connect across business units. Their investment in MBSE tooling and in training systems engineers on SysML-based architectures is real and well-documented. The coherence mechanism is the model, not the org chart.
The weakness of this approach is that models only connect things if teams actually use them consistently. RTX’s business units have significant independence, and the adoption of enterprise modeling standards across Collins and Raytheon legacy teams has been uneven.
Northrop Grumman has the most vertically integrated MDO portfolio of any prime, spanning B-21, the Ground-Based Strategic Deterrent (GBSD/Sentinel), space systems, and cyber. Their organizational response has been to build deep technical authority into a small number of chief systems engineers who own the cross-program integration requirements. This is a people-centric model rather than a process or tool-centric model. The B-21 program, which Northrop manages with unusually tight information control, appears to have a highly centralized SE governance structure that gives the chief architect genuine authority over interface requirements across subcontractors.
The limitation is scalability. Chief systems engineer authority works when you have exceptional people in those roles and manageable program complexity. As MDO architectures expand to include more participants and more domains, the cognitive load on those individuals becomes a single point of failure.
Boeing Defense has had the most publicly difficult transition. Their Starliner problems, T-7 Red Hawk production challenges, and MQ-25 delays have all had a systems engineering dimension. Within the defense and space segment, Boeing has been investing in what they internally describe as a “digital thread” initiative—the idea that a continuous, traceable connection from requirements through design, manufacturing, and test should exist in a machine-readable form. The MDO-specific implication is that cross-program interfaces should be part of that thread, not managed separately.
The gap between the stated architecture and current practice remains wide. Boeing’s legacy of program-isolated engineering organizations is deep, and the cultural work required to make the digital thread real—rather than a parallel documentation exercise—is ongoing.
The Open Architecture Problem Is a Requirements Problem
The Department of Defense’s Modular Open Systems Approach (MOSA) mandate, codified in the 2017 NDAA and reinforced in subsequent policy, requires that major defense acquisition programs be designed for competitive refresh of components. The engineering intent is sound: if you can swap a radar processing module without re-engineering the platform, you extend service life and introduce competition into sustainment.
But MOSA is, at its core, a requirements management problem. To achieve genuine modularity, you have to decompose requirements to a level of granularity that makes module boundaries explicit. You have to define those boundaries in terms of interface requirements, not just functional requirements. And you have to maintain those interface requirements across the life of the program as the system evolves.
This is where most programs stumble. Interface requirements are treated as derived outputs of the architecture, documented after the design decisions are made, and then maintained poorly as changes propagate. When a platform upgrade touches one module, the downstream interface requirement impacts may not be fully traced, and the nominal “open architecture” becomes an integration problem waiting to happen.
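To make the stumbling block concrete: if interface requirements are first-class objects that record which module provides an interface and which modules consume it, then the impact of a module swap can be computed rather than rediscovered at integration. The sketch below is illustrative only—the requirement IDs, module names, and latency figures are invented for the example, not drawn from any real program.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InterfaceRequirement:
    """An explicit interface contract between modules (illustrative IDs)."""
    req_id: str
    provider: str       # module that implements the interface
    consumers: tuple    # modules that depend on it
    text: str

# Hypothetical interface requirements around a radar processing module.
IFACE_REQS = [
    InterfaceRequirement("IF-101", "radar_processor", ("mission_computer",),
                         "Track reports shall be published within 50 ms."),
    InterfaceRequirement("IF-102", "radar_processor", ("mission_computer", "datalink"),
                         "Track messages shall conform to the published schema."),
]

def impacted_by_swap(module, reqs):
    """Return every interface requirement that must be re-verified when
    `module` is replaced, plus the downstream consumers in scope."""
    hits = [r for r in reqs if r.provider == module or module in r.consumers]
    consumers = sorted({c for r in hits for c in r.consumers})
    return hits, consumers

hits, consumers = impacted_by_swap("radar_processor", IFACE_REQS)
print([r.req_id for r in hits])  # contracts needing re-verification
print(consumers)                 # downstream systems in scope
```

The point of the sketch is the data model, not the ten lines of code: when interface boundaries are recorded explicitly, a component swap produces a computable verification scope instead of an after-the-fact integration surprise.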
The primes are aware of this. RTX’s push toward MBSE standards is partly motivated by the need to make interface requirements machine-readable and automatically propagated. Lockheed’s enterprise architecture group has explicit responsibility for maintaining interface control documents at the system-of-systems level. But the tooling to do this at scale—across hundreds of interface requirements, across multiple programs, with version control and change impact analysis—remains a genuine gap for most organizations.
Classification Boundaries: The Unsolved Problem
Designing systems that must operate across classification levels is one of the hardest systems engineering challenges in MDO, and it’s one where honest observation requires acknowledging that no one has fully solved it.
The core problem is this: a system that aggregates sensor data from a SECRET sensor and a TOP SECRET/SCI sensor and produces a targeting solution creates a data product whose classification is governed by the most sensitive input. The system architecture must enforce that policy automatically, without relying on operator behavior. And the requirements that govern that enforcement must be traceable from the policy (often classified) down to the software function that implements it.
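The aggregation rule described above—a fused product inherits the classification of its most sensitive input—is sometimes called a high-water-mark policy, and the architectural point is that the system must enforce it mechanically. A minimal sketch, with the caveat that real policy also involves compartments, caveats, and aggregation rules that a simple ordering cannot capture:

```python
from enum import IntEnum

class Level(IntEnum):
    """Simplified, strictly ordered classification levels (illustrative)."""
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET_SCI = 3

def derived_level(*input_levels: Level) -> Level:
    """High-water-mark rule: a fused product is classified at least as
    high as its most sensitive input."""
    return max(input_levels)

def releasable_to(product: Level, enclave: Level) -> bool:
    """The architecture, not the operator, must enforce this check
    before data flows to a lower enclave."""
    return product <= enclave

# A targeting solution fusing a SECRET sensor with a TS/SCI sensor:
fused = derived_level(Level.SECRET, Level.TOP_SECRET_SCI)
print(fused.name)                           # TOP_SECRET_SCI
print(releasable_to(fused, Level.SECRET))   # False: cannot flow down
```

The hard engineering is not this comparison; it is tracing the requirement that mandates it from a (often classified) policy document down to the specific software function that performs the check.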
Managing those requirements in a standard requirements management tool is problematic. The tool itself may not be accredited to hold the content of the classified requirement. The traceability between classified and unclassified requirements may need to exist in separate accredited environments with manual synchronization. And when requirements change—which they do, regularly, as policy evolves—the change management process across classification levels can take weeks.
Most large defense programs handle this through a combination of classified program offices with separate tooling environments and manual coordination between classified and unclassified teams. This is operationally expensive and introduces latency in requirements change propagation. It also creates the conditions for divergence: the classified requirement and the unclassified implementation requirement quietly drift apart, and the discrepancy isn’t discovered until integration test.
The four primes are all investing in classified infrastructure for MBSE and requirements management—dedicated cloud environments, accredited collaboration platforms, and increasingly, AI-assisted classification marking and cross-domain solution management. But the solutions are program-specific and proprietary, not industry-standard.
Enterprise Requirements Management: A Different Practice
Single-program requirements management is hard enough. Enterprise-level requirements management—where the same requirement may allocate to multiple programs, where changes in one program’s requirements have contractual implications for another, and where the customer is a government program executive office that spans service boundaries—is a structurally different practice.
The key differences:
Allocation is multi-hop and non-exclusive. A top-level JADC2 capability requirement may allocate to a space-based sensor, a ground processing node, a tactical data link, and an effector platform simultaneously. None of those systems owns the requirement exclusively. When any of them changes, the allocation logic changes with it.
Change authority is distributed. On a single program, the chief systems engineer or program office has change authority over requirements. At enterprise scale, no single authority owns the requirements that sit at the interfaces between programs. The DoD’s acquisition structure doesn’t naturally create that authority, which means it either emerges informally (and inconsistently) or requires explicit cross-program governance structures.
Tool interoperability is mandatory, not optional. If Lockheed is the lead integrator and Northrop is a subcontractor delivering a subsystem, their requirements management environments must exchange requirements with traceability intact. This is technically solvable—standards like ReqIF exist for this purpose—but the operational discipline to make it work at program scale is frequently absent.
Modern requirements tools built with graph-based data models handle multi-hop allocation more naturally than document-centric platforms. Flow Engineering, which was built specifically for hardware and systems engineering teams, represents the architectural direction that enterprise MDO requirements management is heading—where requirements exist as nodes in a connected model rather than rows in a document, and where the impact of a change can be traced automatically across the graph. The contrast with legacy platforms like IBM DOORS, which treat requirements as documents with links bolted on, is significant at enterprise scale. Jama Connect and Polarion both offer more connected models than DOORS, but were designed primarily for single-program or single-product-line use cases, and their performance at true enterprise scale—spanning dozens of programs with thousands of cross-program traces—remains a practical constraint.
Workforce Investments: What’s Real
All four primes are investing in MBSE training, and all four are competing for a relatively small pool of engineers who understand both systems engineering and model-based methods. The competition is genuine: a systems engineer with SysML proficiency, domain knowledge in one of the MDO-relevant domains, and experience with government acquisition is in high demand and short supply.
Beyond MBSE, there is growing investment in AI-augmented requirements management—tools that can identify inconsistencies in large requirements sets, flag potential allocation gaps, or suggest derived requirements from a given capability statement. The maturity of these capabilities varies widely. Most commercial tools in this space are demonstrating capability on curated datasets; performance on the messy, jargon-heavy, partially classified requirements sets that defense programs actually use is still being established.
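Much of what is marketed as AI-augmented requirements analysis begins with deterministic quality checks of the kind below—flagging ambiguity terms that make a requirement unverifiable. This is a generic sketch, not a depiction of any particular commercial tool, and the term list and requirement texts are invented for illustration.

```python
import re

# Ambiguity terms commonly flagged by requirements-quality checkers
# (the list and requirement texts here are illustrative).
WEAK_TERMS = re.compile(
    r"\b(as appropriate|as required|adequate|user-friendly|TBD)\b",
    re.IGNORECASE,
)

REQS = {
    "SYS-001": "The system shall distribute track data to all nodes within 2 seconds.",
    "SYS-002": "The operator interface shall be user-friendly and respond as appropriate.",
    "SYS-003": "Message formats are TBD pending interface control document approval.",
}

def lint(reqs: dict) -> dict:
    """Map requirement ID -> list of ambiguous terms found in its text."""
    findings = {}
    for rid, text in reqs.items():
        hits = [m.group(0) for m in WEAK_TERMS.finditer(text)]
        if hits:
            findings[rid] = hits
    return findings

print(lint(REQS))  # SYS-002 and SYS-003 get flagged; SYS-001 is clean
```

The harder, genuinely model-driven capabilities—allocation gap detection, derived-requirement suggestion—sit on top of checks like this, and it is those layers whose performance on real defense requirements sets is still being established.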
The workforce restructuring at Boeing, Northrop, Lockheed, and RTX also reflects a recognition that systems engineering was historically underinvested relative to design engineering. MDO has made visible what systems engineers have argued for decades: that requirements management, interface control, and integration planning are not overhead activities. They are the work. Programs that treated SE as a documentation function are paying the cost of that decision in integration failures and schedule overruns.
Honest Assessment
The restructuring is real. The organizational and tooling investments are real. The pressure from DoD customers and from MOSA policy is real. But the gap between current state and what MDO actually requires of systems engineering practice is also real, and it’s wider than most program reviews will acknowledge.
Classification boundary management remains largely a workaround. Enterprise-level requirements governance does not yet have the authority structures or tooling to match the ambition. The primes with the most vertical integration (Northrop) have an organizational advantage that the more federated primes (RTX, Boeing) will struggle to replicate through tooling alone.
The next five years will test whether the investments in MBSE, digital thread, and AI-augmented requirements management translate into measurable improvement in MDO integration outcomes—or whether they become a new layer of documentation complexity on top of the old one.