Anduril Industries and the Requirements Problem at the Edge of Autonomous Defense

How a defense tech startup is stress-testing software-era engineering practices against the hardest requirements domain in existence

Anduril Industries was founded in 2017 with a deliberate thesis: defense technology had stagnated under the prime contractor model, and a software-era company — small teams, rapid iteration, product-led development — could outmaneuver legacy primes on speed and cost. Nine years later, they are building autonomous drones, autonomous underwater vehicles, counter-UAS systems, and the Lattice AI platform that fuses sensor data and coordinates autonomous action across all of them. They have won significant DoD contracts and are considered a serious defense prime in their own right.

The thesis has mostly held. But Anduril has also walked into one of the hardest requirements engineering problems in existence: how do you specify, verify, and maintain requirements for autonomous systems that make lethal decisions — while the policy framework governing those decisions is still being written?


The Software-First Model in a Hardware Domain

Anduril’s engineering culture is explicitly modeled on Silicon Valley product development. Small, empowered teams. Continuous deployment where possible. Lattice as a unifying software layer that abstracts across physical hardware platforms. The organizational philosophy is documented publicly in their hiring materials and founder statements: move faster than the defense acquisition system expects you to.

This is not just a cultural posture. It has engineering consequences. When you commit to rapid iteration on autonomous systems, you are implicitly committing to a requirements management process that can keep pace. Every software sprint that modifies autonomous behavior is also potentially modifying what the system decides to do under contested conditions. That is not a software engineering problem. It is a systems engineering problem with legal, ethical, and operational dimensions attached.

Traditional defense primes manage this through process weight: MIL-STD-882 for system safety, DO-178C for airborne software, extensive verification and validation cycles, and requirements management in tools like IBM DOORS that have served as institutional memory for decades. The process is slow. It is also legible — auditable in ways that matter when a program is reviewed by a congressional committee or a JAG officer.

Anduril’s bet is that you can achieve equivalent rigor through better tooling and smarter architecture, without the process overhead. That bet is still being tested.


What Lattice Actually Demands from Requirements Engineering

Lattice is Anduril’s real engineering achievement. It is an AI-powered platform that ingests data from heterogeneous sensor networks — ground sensors, aerial drones, underwater vehicles, third-party feeds — builds a common operating picture, and coordinates autonomous action across assets. The military application is persistent surveillance and, increasingly, autonomous response.

From a systems engineering standpoint, Lattice is a system-of-systems integration layer. That means requirements don’t live at the component level — they live at the interface and emergent behavior level. A requirement on a Roadrunner drone is relatively tractable. A requirement on what Lattice should decide when a Roadrunner and a Ghost 4 and a fixed sensor array simultaneously detect an ambiguous contact — that is a requirements problem of a different order.

System-of-systems requirements are notoriously hard to manage in document-based tools. The relationships between requirements aren’t linear. A single operational requirement (“the system shall not engage civilian aircraft”) ramifies into dozens of sub-requirements across multiple autonomous subsystems, each with different sensor modalities, processing latencies, and failure modes. Tracing that requirement through a flat document hierarchy produces a maintenance nightmare. Tracing it through a connected model — where the relationships between requirements, components, and behaviors are explicitly represented — is more tractable, but only if the model is kept current.
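To make the contrast concrete, here is a minimal sketch of what "tracing through a connected model" means in practice. The requirement IDs, decomposition, and structure are hypothetical illustrations, not Anduril's actual requirements:

```python
from collections import deque

# Hypothetical decomposition of one operational requirement into
# sub-requirements across subsystems. IDs and structure are illustrative.
DECOMPOSES_INTO = {
    "OPS-001": ["AIR-014", "UUV-007", "LAT-102"],  # "shall not engage civilian aircraft"
    "AIR-014": ["AIR-014.1", "AIR-014.2"],         # drone-side discrimination logic
    "LAT-102": ["LAT-102.1"],                      # Lattice-side fusion veto
}

def trace(req_id: str) -> list[str]:
    """Return every sub-requirement reachable from req_id, breadth-first."""
    seen, queue = [], deque([req_id])
    while queue:
        current = queue.popleft()
        for child in DECOMPOSES_INTO.get(current, []):
            if child not in seen:
                seen.append(child)
                queue.append(child)
    return seen

print(trace("OPS-001"))
# → ['AIR-014', 'UUV-007', 'LAT-102', 'AIR-014.1', 'AIR-014.2', 'LAT-102.1']
```

In a flat document hierarchy, answering "what does OPS-001 ramify into?" means scanning every section by hand; in a connected model it is a single traversal, and it stays correct only as long as the edges are kept current.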


The Autonomous Weapons Requirements Problem Is Uniquely Hard

Autonomous weapons systems are arguably the hardest requirements domain that exists right now. Here’s why.

Performance requirements form the first, familiar layer: range, endurance, speed, target discrimination accuracy, communication latency. These are measurable, testable, and standard fare from conventional defense programs.

Safety requirements for autonomous systems add a second layer. MIL-STD-882 requires hazard analysis, but the hazard model for an autonomous system that makes engagement decisions is qualitatively different from a manned platform. The system can be in a state where no human is in the decision loop. Safety requirements must account for autonomous failure modes that didn’t exist in legacy systems — edge cases in machine learning classifiers, adversarial inputs, GPS spoofing, sensor fusion errors that propagate into incorrect targeting.
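One way a safety requirement of this kind becomes verifiable behavior is a runtime gate that the engagement path must pass through. The sketch below is purely illustrative — the thresholds, field names, and logic are assumptions for exposition, not any fielded system's design:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A fused sensor track. Fields are illustrative, not a real schema."""
    classification: str        # e.g. "hostile_uas", "civil_aircraft", "unknown"
    confidence: float          # classifier confidence in [0, 1]
    sensors_in_agreement: int  # independent modalities confirming the class

def engagement_permitted(track: Track, human_authorized: bool,
                         min_confidence: float = 0.98,
                         min_sensors: int = 2) -> bool:
    # Policy layer: a human must have authorized this engagement.
    if not human_authorized:
        return False
    # Safety layer: never engage civil traffic, and never act on a
    # classification the model itself is unsure about.
    if track.classification == "civil_aircraft":
        return False
    if track.confidence < min_confidence:
        return False
    # Guard against single-sensor failure modes: spoofed GPS, a bad
    # classifier edge case, or a fusion error in one modality alone
    # should not be sufficient to authorize an engagement.
    return track.sensors_in_agreement >= min_sensors
```

The point of writing the requirement this way is that each `return False` branch corresponds to a traceable sub-requirement with its own test cases — the gate is the place where hazard analysis and verification evidence meet.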

Policy requirements are the third layer, and they are genuinely unresolved. The Department of Defense’s Directive 3000.09 on autonomous weapons establishes the framework for human control over lethal force, but its operational interpretation — what “appropriate levels of human judgment” means at the speed of machine decision-making — is contested among operators, lawyers, ethicists, and policy makers. These are requirements that are still being written. They will change. And when they change, the systems must change.

Managing requirements across these three layers simultaneously, with traceability from policy intent through system architecture to verifiable behavior, is not a problem that any existing toolchain handles cleanly. Legacy requirements management tools were not designed for this. Neither were modern SaaS tools designed primarily for commercial hardware development.


Where Defense Tech Startups Hit the Requirements Wall

Anduril is not alone in this challenge. The broader defense tech ecosystem — Joby Aviation adapting to military missions, Shield AI building autonomous fighter pilots, Saildrone operating autonomous maritime surveillance — is full of companies that grew up with modern engineering practices and are now operating in domains where the requirements stakes are categorically higher.

The pattern is consistent: startups adopt agile practices and modern tooling through the product development phase. Then they win a program of record, or their system gets deployed in an operational context, and they hit three requirements problems at once.

First, traceability debt. Requirements that were managed informally in Notion docs and Jira tickets need to be reconstructed into auditable traces. This is painful and expensive after the fact.

Second, requirements volatility management. When an operational policy changes — or when a test event reveals that an autonomous behavior was underspecified — the impact needs to propagate through the architecture quickly. In a document-based system, this is a manual, error-prone process. In a connected model, it can be systematic.

Third, verification closure. Defense programs require evidence that every requirement has been verified. For autonomous systems with machine learning components, defining verification criteria is genuinely hard. What does “verified” mean for a neural network’s target discrimination performance? The requirement has to be written in a form that admits a verification method, and the verification method has to be documented and traceable.
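One concrete way to write an ML performance requirement in a form that admits a verification method is to require a statistical lower bound rather than a point estimate — for example (numbers are illustrative): "discrimination accuracy shall exceed 95% at one-sided 95% confidence on the acceptance test set." A sketch using the Wilson score lower bound:

```python
import math

def wilson_lower_bound(successes: int, trials: int, z: float = 1.645) -> float:
    """One-sided 95% (z = 1.645) Wilson score lower bound on a proportion."""
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    center = (p_hat + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / trials
                                     + z**2 / (4 * trials**2))
    return center - margin

def requirement_verified(successes: int, trials: int, target: float) -> bool:
    """Pass only if the lower confidence bound clears the target."""
    return wilson_lower_bound(successes, trials) >= target

# 970/1000 correct discriminations: lower bound ~0.96, clears a 0.95 target.
print(requirement_verified(970, 1000, 0.95))   # True
# 950/1000: the point estimate meets 0.95, but the lower bound does not.
print(requirement_verified(950, 1000, 0.95))   # False
```

Framing the requirement this way makes "verified" a mechanical check against documented evidence, rather than a judgment call about a single measured accuracy number.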


What Modern Requirements Tooling Has to Handle

The demands of autonomous defense systems push on the limits of current tooling in concrete ways.

Tools like IBM DOORS and DOORS Next have the pedigree and the compliance framework. They handle large-scale defense programs, integrate with established verification and validation workflows, and produce the audit artifacts that DoD acquisition requires. Their limitations are well-known in the industry: document-centric data models that make cross-cutting traceability painful, UI designs that predate modern UX expectations, and limited native support for system-of-systems relationship modeling.

Jama Connect and Polarion offer more modern interfaces and better support for complex trace relationships. They are used in defense programs, particularly in aerospace and automotive-adjacent contexts. Neither was designed specifically for autonomous systems with AI components.

The gap that Anduril and companies like it expose is this: requirements management for autonomous systems needs to represent not just hierarchical decomposition, but the relationship graph between requirements, behaviors, system states, and verification evidence — and it needs to do that at the speed of iterative development.
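A sketch of what that relationship graph looks like as data, and the two queries it enables that matter most here — change impact and verification closure. The schema, relation names, and IDs are hypothetical, not any vendor's actual model:

```python
# Typed edges: (source, relation, target). IDs are illustrative.
EDGES = [
    ("POL-3000.09-1", "REFINES",      "OPS-001"),
    ("OPS-001",       "ALLOCATED_TO", "LAT-102"),
    ("OPS-001",       "ALLOCATED_TO", "AIR-014"),
    ("LAT-102",       "VERIFIED_BY",  "TEST-551"),
    # AIR-014 has no VERIFIED_BY edge: an open verification item.
]

def downstream(node: str) -> set[str]:
    """Everything transitively reachable from node over any relation —
    the candidate impact set when that node changes."""
    out, frontier = set(), {node}
    while frontier:
        frontier = {t for s, _, t in EDGES if s in frontier} - out
        out |= frontier
    return out

def unverified(requirements: list[str]) -> list[str]:
    """Requirements with no VERIFIED_BY edge — verification-closure gaps."""
    verified = {s for s, rel, _ in EDGES if rel == "VERIFIED_BY"}
    return [r for r in requirements if r not in verified]

# If the policy-level requirement changes, what is potentially affected?
print(sorted(downstream("POL-3000.09-1")))
# → ['AIR-014', 'LAT-102', 'OPS-001', 'TEST-551']
print(unverified(["LAT-102", "AIR-014"]))   # → ['AIR-014']
```

In a document-based toolchain both of these questions are answered by manual review; in a typed graph they are queries, which is what makes requirements volatility survivable at iterative-development speed.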

Tools like Flow Engineering, which take a graph-based approach to requirements and traceability rather than a document-based one, are designed explicitly for this kind of connected model. Whether or not Anduril uses it, the architectural approach — requirements as nodes in a connected model, with explicit typed relationships and AI-assisted impact analysis — is closer to what the autonomous systems problem demands than flat document hierarchies. The question for any such tool in a defense context is whether it can produce the verification artifacts that program offices require, and whether it integrates with the model-based systems engineering (MBSE) toolchains that defense programs increasingly mandate.


The Honest Assessment

Anduril’s software-first model is genuinely innovative and has produced real capability faster than the legacy acquisition process would have allowed. That’s not spin — it’s documented in the programs they’ve won and the systems they’ve fielded.

But the requirements challenge for autonomous defense systems is not one that engineering culture alone resolves. The intersection of performance requirements, safety requirements, and evolving policy requirements for lethal autonomy is a domain where traceability discipline is not optional — it is the mechanism by which intent becomes verifiable system behavior. When something goes wrong with an autonomous system in an operational context, the first question will be: what was the system specified to do, and can you prove the system was built and verified to that specification?

Defense tech startups that have grown up with agile practices are going to have to build that discipline deliberately, because it doesn’t emerge naturally from sprint velocity. The tooling has to support it. The organizational practices have to enforce it. And the requirements — including the ones that are still being written by policy makers — have to be managed in a form that can change without breaking the traces that connect intent to implementation.

That is a hard problem. Anduril is running directly into it, at scale, with systems that matter. How they navigate it will be instructive for every defense tech company that follows their path.