How Do Defense Contractors Handle Requirements for Classified Programs Where the Toolchain Must Also Be Classified?

Most discussions of modern requirements management assume a baseline: your team uses a SaaS platform, your requirements are in the cloud, your traceability links update in real time, and your stakeholders can log in from anywhere. That baseline does not apply to a significant portion of defense engineering work.

When a program is classified — whether at the Secret, Top Secret, or TS/SCI level — the software tools used to manage that program’s requirements must operate on networks cleared to handle that information. That constraint sounds simple. Its implications are not.

The Accreditation Boundary Is the Starting Problem

Every network that processes classified information in the U.S. defense ecosystem operates under an Authorization to Operate (ATO). An ATO is not a general security certification; it is a specific authorization for a specific system to process information up to a specific classification level on a specific network configuration. A tool that has an ATO on a Secret-level enclave does not automatically have authorization to run on a TS/SCI enclave. The tool may need to go through the accreditation process again — and that process can take twelve to thirty-six months, depending on the agency, the network owner, and the tool’s architecture.

This creates an immediate and practical problem: most modern requirements management platforms, built as cloud-native SaaS products, have never pursued ATOs because their typical customers don’t need them. IBM DOORS Next, Jama Connect, Polarion, and Codebeamer all have varying degrees of deployment flexibility and government customer bases, but the specific combination of “runs on a government-controlled network” plus “has a valid ATO at the required classification level” plus “supports the traceability workflows a modern systems engineering team needs” is not a large intersection.

The result: many classified programs default to tools that do have ATOs — often legacy versions of DOORS (the thick-client variant), custom-built internal tools, or, frankly, Microsoft Word and Excel in a controlled folder structure. None of these support modern, graph-based, live-linked requirements traceability. They support documents.

Air-Gapped Environments: The Traceability Problem Gets Worse

Programs at Secret and above frequently operate on physically isolated networks: no internet connectivity, no connection to corporate networks, no SaaS. Every software package must be approved, tested, and installed by network administrators who operate under strict change-control procedures. Updates to tools can be slow or blocked entirely if re-accreditation is required.

In this environment, the engineering team’s requirements management workflow often looks like this: requirements are authored in a classified system using whatever tool has been approved for that enclave. Traceability from requirements to design, test, and verification is maintained — if it is maintained at all — through manually updated spreadsheets or static HTML exports from DOORS. When a requirement changes, someone manually updates the downstream traceability artifacts. When those artifacts are reviewed, the review process happens in a conference room on a classified workstation, not through a shared browser session.
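To make the cost of that manual upkeep concrete, the sketch below shows the kind of small audit script a program team might write against a spreadsheet or CSV traceability export. It is purely illustrative; the column names (REQ_ID, LINKED_TESTS, LAST_MODIFIED) and the export format are assumptions, since every program's export looks different.

```python
# Illustrative only: a minimal audit of a manually maintained traceability
# export. Column names are hypothetical; real DOORS or spreadsheet exports
# vary by program and enclave.
import csv
from datetime import datetime, timedelta

def find_stale_or_unverified(export_path, max_age_days=90):
    """Flag requirements with no downstream test link, or whose traceability
    row has not been reviewed recently enough to be trusted."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    findings = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            req_id = row["REQ_ID"]
            tests = [t for t in row.get("LINKED_TESTS", "").split(";") if t.strip()]
            touched = datetime.strptime(row["LAST_MODIFIED"], "%Y-%m-%d")
            if not tests:
                findings.append((req_id, "no downstream verification link"))
            elif touched < cutoff:
                findings.append((req_id, f"traceability row not reviewed since {touched:%Y-%m-%d}"))
    return findings

if __name__ == "__main__":
    for req_id, issue in find_stale_or_unverified("traceability_export.csv"):
        print(f"{req_id}: {issue}")
```

Even a script this small is "custom integration work" in the sense that matters here: someone on the classified network has to write it, get it approved for the enclave, and keep it aligned with whatever export format the approved tool produces.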

The gap between what modern systems engineering practice recommends and what is actually achievable in this environment is significant. INCOSE’s Systems Engineering Handbook and the DAU’s Adaptive Acquisition Framework both emphasize live traceability, model-based systems engineering (MBSE), and continuous verification. Achieving any of that on an air-gapped network with a legacy tool requires custom integration work that most program offices do not have the budget or staff to maintain.

The Parallel Hierarchy Problem

One workaround that has become common, particularly on large programs where the prime contractor has both cleared and uncleared engineering teams, is maintaining two parallel requirement hierarchies. One lives on the classified network and contains the full requirement set, including any sensitive parameters. The other lives on an unclassified network and contains a sanitized or redacted version that can be shared with unclassified subcontractors, test teams, or customer program offices that don't have access to the classified enclave.

This approach is operationally necessary. It is also a synchronization nightmare. When the classified hierarchy is updated — say, a performance threshold changes — someone must manually evaluate which elements of the unclassified hierarchy need to be updated, redact appropriately, and push the change. There is no automated link between the two hierarchies because the two networks cannot communicate. Human judgment and discipline maintain the synchronization. Human error breaks it.
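A structured way to apply that human judgment is to generate the review worklist on the classified side, comparing the current classified baseline against a copy of the last sanitized baseline that was approved for release. The sketch below illustrates the idea; the shared requirement IDs, revision fields, and baseline structure are assumptions rather than any particular program's process, and the actual redaction and transfer still happen under the program's security procedures.

```python
# Illustrative sketch of a requirement-level sync review run on the classified
# side. Field names and the shared-ID convention are hypothetical.
def sync_worklist(classified_baseline, sanitized_baseline):
    """Return (req_id, reason) pairs that need human review and redaction."""
    worklist = []
    for req_id, classified in classified_baseline.items():
        sanitized = sanitized_baseline.get(req_id)
        if sanitized is None:
            worklist.append((req_id, "no unclassified counterpart; confirm intentional"))
        elif classified["revision"] > sanitized["source_revision"]:
            worklist.append((req_id, f"classified rev {classified['revision']} newer than "
                                     f"rev {sanitized['source_revision']} used for sanitization"))
    for req_id in sanitized_baseline.keys() - classified_baseline.keys():
        worklist.append((req_id, "exists only in sanitized hierarchy; candidate for removal"))
    return worklist

classified = {
    "SYS-001": {"revision": 4, "text": "<classified threshold>"},
    "SYS-002": {"revision": 2, "text": "<classified threshold>"},
}
sanitized = {
    "SYS-001": {"source_revision": 3, "text": "Sanitized statement of SYS-001"},
    "SYS-003": {"source_revision": 1, "text": "Orphaned sanitized requirement"},
}
for req_id, reason in sync_worklist(classified, sanitized):
    print(f"{req_id}: {reason}")
```

The point of the sketch is not automation across the boundary, which remains prohibited; it is that the review happens requirement by requirement, with an explicit record of which classified revision each sanitized statement was derived from.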

The consequences of synchronization failures are not theoretical. A subcontractor building a subsystem to an outdated unclassified requirement may deliver hardware that does not meet the classified threshold. Test procedures written against the sanitized hierarchy may fail to verify the actual classified requirement. These are not exotic failure modes; they are documented sources of program cost and schedule growth on complex defense programs.

The Industry Response: Government Cloud Enclaves

The Department of Defense has been working to address this problem through accredited cloud environments that can host classified data without requiring every contractor to run their own physically isolated infrastructure. The Impact Level (IL) framework — particularly IL4 (Controlled Unclassified Information), IL5 (higher-sensitivity CUI and unclassified National Security Systems data), and IL6 (Secret) — is the primary vehicle for this.

Commercial cloud providers have pursued these authorizations aggressively. AWS GovCloud and Microsoft Azure Government have IL2 through IL5 authorizations broadly available, with IL6 availability through dedicated classified regions. Google Public Sector is pursuing similar authorizations. The Defense Information Systems Agency (DISA) grants the provisional authorizations that allow cloud offerings to host Secret-level DoD workloads at IL6.

For requirements management tools, the opening created by government cloud enclaves is real but incomplete. A tool that can be deployed as a managed instance within an IL5 or IL6 cloud environment — rather than requiring contractors to build and maintain their own classified infrastructure — dramatically reduces the accreditation burden and the operational burden of running a classified requirements tool. Instead of each program office maintaining its own DOORS installation on its own classified network, a shared, accredited cloud instance can serve multiple programs with centralized administration.

The challenge is that most commercial requirements management tools were built assuming public cloud deployment with a single-tenant or multi-tenant SaaS model. Running them in an air-gapped or government-cloud-constrained environment requires either a containerized deployment model the vendor supports, or significant integration work by the government or contractor. Tools built on monolithic architectures that assume outbound internet access for license validation, telemetry, or update functions may not function in isolated environments at all without modification.
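The architectural pattern that avoids this failure mode is straightforward in principle: anything that assumes outbound connectivity has to be an explicit, switchable deployment concern rather than a hard-coded dependency. The sketch below is a generic illustration of that pattern, not any vendor's actual implementation; the mode names and feature flags are invented for the example.

```python
# Illustrative pattern: features that assume outbound internet access
# (license checks, telemetry, update polling) are gated behind an explicit
# deployment mode so the application still runs on a network with no route out.
from enum import Enum

class DeploymentMode(Enum):
    SAAS = "saas"              # vendor-hosted, outbound access assumed
    GOV_CLOUD = "gov_cloud"    # IL5/IL6 enclave, restricted egress
    AIR_GAPPED = "air_gapped"  # no outbound connectivity at all

def configure_outbound_features(mode: DeploymentMode) -> dict:
    """Decide which network-dependent features are enabled for this deployment."""
    connected = mode is DeploymentMode.SAAS
    return {
        "online_license_validation": connected,  # fall back to a file-based license otherwise
        "telemetry": connected,
        "update_polling": connected,
        "offline_license_file": not connected,
    }

print(configure_outbound_features(DeploymentMode.AIR_GAPPED))
```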

Where the Tooling Gaps Remain

As of mid-2026, the gaps in this space are real and largely unresolved:

Full TS/SCI tooling. IL6 covers Secret. There is no broadly accredited commercial cloud environment for TS/SCI. Programs at that level remain almost entirely dependent on contractor-operated SCIFs with locally installed tools. MBSE and modern requirements management at TS/SCI are largely aspirational for most program offices.

Real-time collaboration at classification boundaries. Even where accredited cloud environments exist, collaboration between cleared personnel at different classification levels — or between classified and unclassified teams — requires manual transfer processes. The live, multi-stakeholder collaboration model of modern SaaS tools doesn’t translate cleanly across classification boundaries.

AI features in classified environments. Modern requirements management tools increasingly incorporate AI-assisted requirement generation, gap analysis, and traceability suggestion. These features typically depend on large language model APIs that are not available in air-gapped or classified environments. The AI-native requirements management capabilities available on unclassified systems are not yet available to classified programs at scale.

How Deployment Flexibility Addresses the Problem

This is where the architecture of a requirements tool matters as much as its features. A tool built exclusively as a SaaS product — where the vendor hosts everything and the customer accesses it via browser — cannot serve a classified program without either the vendor operating a classified instance (expensive, operationally complex, requires the vendor to have cleared personnel and cleared infrastructure) or the customer accepting a capability gap.

A tool built with deployment flexibility — meaning the same software stack can run as a vendor-hosted SaaS instance for unclassified work, as a customer-controlled on-premises deployment on a classified network, or as a containerized application in a government cloud enclave — can address the accreditation problem without requiring different tools at different classification levels.

Flow Engineering is designed with this deployment model in mind. For programs with elevated security requirements, Flow Engineering can be deployed as a self-hosted instance on customer-controlled infrastructure, including air-gapped networks, rather than requiring internet connectivity back to a vendor-operated cloud. The graph-based requirement and traceability model — which is the core of how Flow Engineering structures requirements — doesn’t degrade in a disconnected deployment; the live-link traceability model works the same way whether the instance is in a government cloud enclave or on a local server in a SCIF.
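For readers who have only worked with document-centric tools, a minimal sketch may help show what "graph-based" means in practice: requirements, design elements, and verification activities are nodes, links carry an explicit type, and coverage questions become graph traversals instead of document comparisons. The example below is a generic illustration under those assumptions, not Flow Engineering's actual data model or API.

```python
# Generic illustration of a graph-based traceability model. Node IDs and
# link types are invented for the example.
from collections import defaultdict

class TraceGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # source_id -> [(link_type, target_id)]

    def link(self, source_id, link_type, target_id):
        self.edges[source_id].append((link_type, target_id))

    def unverified(self, requirement_ids):
        """Requirements with no path to a 'verified_by' link, however indirect."""
        def has_verification(node, seen):
            if node in seen:
                return False
            seen.add(node)
            for link_type, target in self.edges.get(node, []):
                if link_type == "verified_by" or has_verification(target, seen):
                    return True
            return False
        return [r for r in requirement_ids if not has_verification(r, set())]

g = TraceGraph()
g.link("SYS-001", "satisfied_by", "DES-010")
g.link("DES-010", "verified_by", "TST-100")
g.link("SYS-002", "satisfied_by", "DES-011")  # no test linked yet
print(g.unverified(["SYS-001", "SYS-002"]))   # -> ['SYS-002']
```

Nothing in a model like this depends on a connection to the outside world, which is why the same traversal works whether the instance runs in a cloud enclave or on a server in a SCIF.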

For the parallel hierarchy problem specifically, Flow Engineering’s approach to requirement hierarchies and traceability graphs means that managing a sanitized unclassified instance alongside a full classified instance is structurally tractable: both instances use the same data model, the same link types, and the same traceability structure, so the human judgment required to synchronize them can be applied at the requirement level rather than at the document level. That’s not automated synchronization across classification boundaries — which would be a security violation — but it is a structured process rather than a document-comparison exercise.

The deliberate trade-off in Flow Engineering's approach is that it does not offer the decades of legacy integration that IBM DOORS has accumulated with mil-spec toolchains, integrations that some program offices have built their workflows around. For programs deeply embedded in a DOORS ecosystem, the migration cost is real. Flow Engineering is the better fit for programs that are standing up new classified requirements management infrastructure and want to do it without inheriting the document-centric architecture that has made the parallel hierarchy problem so persistent.

An Honest Assessment

The classified requirements management problem is not going to be solved by any single tool. It is a structural problem created by the intersection of information security requirements, acquisition timelines, and the pace of commercial software development. Government cloud enclaves are a genuine improvement, but full TS/SCI coverage with modern tooling remains years away for most programs. Parallel hierarchies will continue to exist and will continue to create synchronization risk.

What has changed is that the tooling decision at the beginning of a program now matters more than it used to. Choosing a tool with a flexible deployment architecture — one that can follow the program through its classification lifecycle rather than forcing a migration when the classification level changes — is a defensible engineering decision. Choosing a legacy document-based tool because it already has an ATO on the local network is understandable, but it locks the program into the traceability workflows of the 1990s for potentially the next decade.

The engineers who understand this problem — and they do understand it, even if they don’t discuss it publicly — are the ones pushing for deployment-flexible, graph-based tooling in classified environments. The institutional inertia running against them is considerable. But the cost of getting it wrong, measured in programs that cannot demonstrate verification traceability to their classified requirements, is measurable and has been measured.