Aerospace has a reputation for conservative technology adoption, and requirements management has historically been the most conservative layer. DOORS has been the dominant tool for 30 years. Process change requires regulatory acceptance. The cost of getting it wrong is high.
And yet, over the last 18 months, the pace of AI adoption in aerospace requirements management has accelerated noticeably. Not across the board — but at a meaningful set of programs and primes.
Here’s what’s actually happening.
The Use Cases That Are Being Deployed
Contrary to what vendor marketing might suggest, aerospace teams are not deploying AI to generate requirements autonomously. The actual deployments are more modest and more thoughtful:
Requirements quality analysis is the most widely deployed capability. Tools that analyze requirements text for ambiguity (undefined terms and vague words like “adequate,” “sufficient,” and “user-friendly”), measurability gaps (requirements with no quantitative criteria), and structural issues (compound requirements that should be decomposed, requirements that mix “what” with “how”) are being used at scale.
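To make the shape of this check concrete, here is a minimal sketch of a first-pass requirements linter. The term lists, patterns, and thresholds are illustrative assumptions, not drawn from any particular tool, and deployed products typically use language models rather than pattern rules; the sketch shows what kind of finding gets flagged, not how production tools detect it.

```python
import re

# Illustrative term list; real tools use much larger, calibrated vocabularies.
VAGUE_TERMS = {"adequate", "sufficient", "user-friendly", "as appropriate"}
# Crude proxy for a quantitative criterion: a number or a bounded phrase.
QUANT_PATTERN = re.compile(r"\d|\bshall not exceed\b|\bwithin\b", re.IGNORECASE)

def lint_requirement(text: str) -> list[str]:
    """Return first-pass quality findings for a single requirement statement."""
    findings = []
    lowered = text.lower()
    for term in VAGUE_TERMS:
        if term in lowered:
            findings.append(f"ambiguous term: '{term}'")
    if not QUANT_PATTERN.search(text):
        findings.append("no quantitative criterion (measurability gap)")
    # Crude compound-requirement heuristic: more than one 'shall' clause.
    if lowered.count("shall") > 1:
        findings.append("compound requirement: consider decomposing")
    return findings

print(lint_requirement("The system shall provide adequate cooling and shall log faults."))
# Flags 'adequate', the missing quantitative criterion, and the compound 'shall' clauses.
```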
This is high-value work that has historically been done manually in requirements reviews, taking hours of senior engineer time. Automating the first-pass quality analysis — flagging issues before they reach formal review — is demonstrably reducing cycle time in the programs using it. One prime reported a 30% reduction in requirements review cycle time after deploying automated quality checking.
Change impact analysis is the second major deployment area. When a requirement changes — and in complex aerospace programs, requirements change constantly — understanding the downstream impact requires traversing the traceability model. AI-assisted tools that not only identify the immediate downstream artifacts but also provide natural language summaries of the impact and flag high-risk dependency chains are being deployed as productivity tools for systems engineers doing change control.
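Mechanically, the core of impact analysis is a reachability query on a directed traceability graph. The sketch below makes that concrete under an assumed, simplified trace structure; the IDs and the dict representation are illustrative, and the AI layer described above sits on top of this deterministic traversal, adding the natural language summaries and risk flags.

```python
from collections import deque

# Hypothetical traceability edges: requirement ID -> downstream artifacts
# (child requirements, design elements, verification cases).
TRACE = {
    "SYS-41": ["SW-12", "SW-13"],
    "SW-12": ["TC-201", "DES-7"],
    "SW-13": ["TC-202"],
}

def impact_set(changed_req: str) -> list[str]:
    """Breadth-first traversal collecting every artifact downstream of a change."""
    impacted, queue, seen = [], deque([changed_req]), {changed_req}
    while queue:
        for node in TRACE.get(queue.popleft(), []):
            if node not in seen:
                seen.add(node)
                impacted.append(node)
                queue.append(node)
    return impacted

print(impact_set("SYS-41"))  # ['SW-12', 'SW-13', 'TC-201', 'DES-7', 'TC-202']
```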
First-draft generation for child requirements is being used selectively. Given a verified parent requirement and system context, AI assistance that generates candidate child requirements for review by systems engineers is being used to accelerate decomposition on programs where requirements volume is high and qualified systems engineers are the bottleneck. The output is always reviewed before acceptance — the AI is drafting, not deciding.
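The workflow here is draft-with-mandatory-review, and that gate can be enforced in tooling rather than left to discipline. A skeletal sketch of one way to do it follows; generate_candidates is a stand-in for whatever model call a given tool makes, and none of the names reflect a specific product’s API.

```python
from dataclasses import dataclass

@dataclass
class DraftRequirement:
    text: str
    status: str = "DRAFT"            # DRAFT -> ACCEPTED | REJECTED, set only by a reviewer
    reviewer: str | None = None

def generate_candidates(parent_text: str, context: str) -> list[DraftRequirement]:
    """Stand-in for the AI drafting step; a real tool would call a language model."""
    return [DraftRequirement(text=f"[candidate derived from: {parent_text}]")]

def disposition(draft: DraftRequirement, reviewer: str, accept: bool) -> None:
    """Only this path moves a draft out of DRAFT status; the AI never decides."""
    draft.status = "ACCEPTED" if accept else "REJECTED"
    draft.reviewer = reviewer

def baseline(drafts: list[DraftRequirement]) -> list[DraftRequirement]:
    """Only reviewer-accepted candidates ever enter the requirements baseline."""
    return [d for d in drafts if d.status == "ACCEPTED" and d.reviewer is not None]
```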
Coverage gap identification — surfacing requirements that have no downstream allocation, no verification method assignment, or no child requirements where children are expected — is being deployed as an ongoing monitoring tool rather than a point-in-time audit function.
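Coverage monitoring of this kind reduces to a handful of standing queries over the requirements data. A minimal sketch, with an assumed record shape (the field names are illustrative, not any tool’s schema); run continuously, for example on every baseline update, this is the monitoring function rather than the point-in-time audit.

```python
# Illustrative requirement records; real tools query a requirements graph or database.
REQUIREMENTS = [
    {"id": "SYS-41", "children": ["SW-12"], "verification": "test"},
    {"id": "SYS-42", "children": [], "verification": None},   # two gaps
    {"id": "SW-12", "children": [], "verification": "analysis"},
]

LEAF_LEVEL = {"SW-12"}  # leaf requirements are not expected to have children

def coverage_gaps(reqs: list[dict]) -> list[str]:
    """Flag requirements missing downstream allocation or a verification method."""
    gaps = []
    for r in reqs:
        if not r["children"] and r["id"] not in LEAF_LEVEL:
            gaps.append(f"{r['id']}: no child requirements / downstream allocation")
        if r["verification"] is None:
            gaps.append(f"{r['id']}: no verification method assigned")
    return gaps

print(coverage_gaps(REQUIREMENTS))
# ['SYS-42: no child requirements / downstream allocation',
#  'SYS-42: no verification method assigned']
```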
The Regulatory Question
The honest answer about regulatory acceptance is: it’s unclear, and it’s being navigated case-by-case.
The FAA has not published guidance specifically addressing AI-assisted requirements management. The EASA AI roadmap addresses AI in certified systems, not AI tools used to develop certified systems. The DO-178C supplements in development address machine learning components in airborne software, not the development tools used to create conventional software.
What this means in practice: programs that are using AI assistance in requirements management are responsible for demonstrating that their process still meets the objectives of the applicable standards. If DO-178C requires that requirements be reviewed and found to meet quality criteria by qualified engineers, using an AI tool to perform the first-pass quality check doesn’t satisfy that objective if a qualified engineer doesn’t review and approve the output.
Most programs are treating AI assistance as a first-pass capability that informs but doesn’t replace the required human review activities. The AI flags issues; the qualified engineer reviews and dispositions them. This preserves the process integrity that certification requires while capturing the productivity benefit.
Programs that are trying to use AI assistance to reduce the human review activities — rather than to make those activities more productive — are taking on regulatory risk that most DERs and airworthiness authorities will not accept.
The Supply Chain Gap
A pattern that aerospace primes are beginning to notice: the capability gap between prime contractors and their Tier 2/3 suppliers is widening as primes invest in digital engineering tools that suppliers haven’t adopted.
Primes on major programs — particularly those under DoD Digital Engineering mandates — have invested in graph-based requirements tools with AI assistance, connected to MBSE environments and digital thread infrastructure. They can perform automated impact analysis when a requirement changes, run coverage queries against the full requirements baseline, and generate traceability reports without manual assembly.
Their suppliers, many of whom are still using DOORS or spreadsheets, can’t participate in that connected environment at the same fidelity. When a prime’s requirements tool generates a structured change notification, the supplier processes it by hand and updates their document-based requirements manually. The prime’s digital engineering investment doesn’t propagate through the supply chain.
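For illustration, a structured change notification is just machine-readable data the receiving tool can act on. The shape below is hypothetical (programs typically exchange requirements data via formats like ReqIF); the point is what is lost when a supplier re-keys it by hand instead of ingesting it.

```python
# Hypothetical change notification a prime's tool might emit; field names are
# illustrative assumptions, not any product's schema.
change_notice = {
    "requirement_id": "SYS-41",
    "revision": {"from": "C", "to": "D"},
    "changed_fields": ["text", "verification_method"],
    "impacted_downstream": ["SW-12", "SW-13", "TC-201"],
}

def apply_to_local_baseline(notice: dict) -> None:
    """A supplier tool that parses this can update trace links automatically;
    a document-based supplier re-keys each field by hand instead."""
    for artifact in notice["impacted_downstream"]:
        print(f"flag {artifact} for review against "
              f"{notice['requirement_id']} rev {notice['revision']['to']}")

apply_to_local_baseline(change_notice)
```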
This is creating a new RFP factor: prime contractors for new program captures are evaluating whether suppliers have the digital engineering infrastructure to interoperate with their requirements management environment. Suppliers that can’t demonstrate this capability are being disadvantaged relative to those that can.
What the Cautious Adopters Are Waiting For
Not every program is moving fast, and the caution is not unreasonable. What the holdouts are waiting for:
Regulatory clarity. Until the FAA or EASA publishes explicit guidance on AI-assisted development tools, programs with conservative DER relationships are in a difficult position: they have to explain why a process change is acceptable without a regulatory framework to point to.
Track record. Several programs are waiting to see whether the early adopters have certification problems attributable to AI assistance in requirements management. If the early programs certify cleanly, that reduces perceived risk for followers.
Tool maturity. Early-generation AI assistance in requirements management had meaningful error rates: false positives that flagged requirements that were fine, and false negatives that missed real issues. As the tools mature and calibration improves, so does the signal-to-noise ratio. Some programs are waiting for the tools to stabilize before committing to process changes.
Workforce readiness. Systems engineers who have worked in DOORS-based environments for 20 years need training and change management to adopt different tools effectively. Programs with limited schedule slack are reluctant to take on tooling transitions mid-development.
The Competitive Angle
The programs that are moving fastest are doing it for competitive reasons, not compliance reasons.
Requirements churn is one of the most expensive sources of schedule and cost growth in aerospace programs. Every late requirement change that propagates undiscovered through the design costs an order of magnitude more to fix than a change caught during requirements definition. Programs that can reduce requirements cycle time, improve decomposition quality, and catch traceability gaps before they become integration surprises have a structural cost and schedule advantage.
In program capture, this shows up in bids. A team that can demonstrate requirements management infrastructure that reduces the historical pattern of requirements-driven rework is bidding a different risk profile than a team using the same DOORS-based process that’s caused cost growth on previous programs.
The AI assistance in requirements management that’s being deployed today isn’t primarily about compliance — it’s about building a more competitive way to execute programs. The compliance dimension will follow as regulatory clarity develops.