The Industrialization of Space: How Satellite Constellations Are Forcing Manufacturing-Ready Requirements
For most of the history of the space industry, a satellite was a bespoke artifact. Eighteen months of design. Twelve months of integration. A test campaign measured in quarters. A single launch. Requirements documents ran to thousands of pages, lived in PDFs or in IBM DOORS databases that only specialists could navigate, and were verified through manual inspection processes that took weeks per subsystem. The economics supported it. If you’re building one spacecraft, you can afford to treat the requirements review board as a ceremonial institution.
That calculus is now obsolete.
SpaceX has manufactured and launched over 6,000 Starlink satellites. Amazon's Kuiper program has commitments for 3,236 operational spacecraft. OneWeb, now under Eutelsat ownership, operates a constellation numbering in the hundreds. Planet Labs has put more than 500 Earth observation satellites on orbit. These are not one-off programs. They are industrial production runs. And the requirements engineering practices that served the industry for four decades are collapsing under the production pressure.
What “Manufacturing-Ready” Actually Means
The phrase gets used loosely. It is worth being precise.
A manufacturing-ready requirement is one that can be verified at production throughput without specialized human interpretation. That means three things simultaneously:
Testable at line speed. If your production line is completing a satellite integration every 36 hours and your requirement takes a three-day thermal-vacuum chamber cycle to verify, that requirement is not manufacturing-ready. It may be physically necessary — some thermal requirements genuinely require TVAC — but structurally it becomes a gating constraint that must be managed through sampling strategy, pre-qualification lot testing, or acceptance by similarity from a qualification unit. Requirements that cannot be mapped to one of these verification paths create production schedule risk.
Automatable without ambiguity. A requirement that reads “the RF chain shall exhibit acceptable signal quality under nominal operating conditions” cannot be automated. “Acceptable” and “nominal” require human judgment at test time. On a production line at volume, human judgment at test time means operator variance, which means non-reproducible acceptance decisions, which eventually means field failures that cannot be traced to a root cause. Manufacturing-ready requirements are written with quantitative, deterministic acceptance criteria: specific frequencies, specific margins, specific temperatures, specific durations. If a test script cannot be generated directly from the requirement, the requirement is not yet done.
Linked to acceptance test procedures that execute in minutes. The acceptance test procedure is not downstream documentation; it is the executable form of the requirement. In a volume manufacturing context, the ATP is the requirement, practically speaking. If a requirement exists in a requirements management system but has no linked ATP that can execute automatically on production hardware, that requirement does not exist in any operationally meaningful sense. The sketch after this list shows what these three properties can look like in structured form.
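To make those three properties concrete, here is a minimal sketch, in Python, of a requirement expressed as structured data rather than prose. Every identifier, threshold, and field name is hypothetical, invented for illustration rather than drawn from any program's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class VerificationPath(Enum):
    INLINE_TEST = "inline_test"          # runs on every unit at line speed
    LOT_SAMPLING = "lot_sampling"        # periodic sample from production lots
    QUAL_SIMILARITY = "qual_similarity"  # accepted by similarity to a qual unit

@dataclass(frozen=True)
class AcceptanceCriterion:
    parameter: str        # telemetry point or measurement name
    minimum: float
    maximum: float
    units: str
    dwell_seconds: float  # how long the value must hold

@dataclass(frozen=True)
class Requirement:
    req_id: str
    text: str
    verification_path: VerificationPath
    criteria: tuple[AcceptanceCriterion, ...]

# Quantitative and deterministic: no "acceptable", no "nominal".
bus_voltage = Requirement(
    req_id="PWR-0042",
    text="The 28 V bus shall regulate to 28.0 V +/- 0.5 V under a 10 A load.",
    verification_path=VerificationPath.INLINE_TEST,
    criteria=(AcceptanceCriterion("bus_voltage", 27.5, 28.5, "V", 5.0),),
)

def evaluate(criterion: AcceptanceCriterion, measured: float) -> bool:
    """Deterministic pass/fail (dwell-time handling omitted for brevity)."""
    return criterion.minimum <= measured <= criterion.maximum
```

A requirement in this form can be compiled directly into an ATP step; one that cannot be expressed this way is, by the definition above, not yet manufacturing-ready.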
How the Major Constellation Programs Got Here
SpaceX’s approach to Starlink manufacturing requirements emerged from its launch vehicle culture, which had already internalized rapid test iteration. The company did not import traditional aerospace requirements practices and adapt them; it largely discarded them and built from first principles around automated test coverage. Starlink satellites are tested through software-driven functional checkout that covers power systems, communications payloads, attitude control, and thermal management in an integrated sequence that runs in hours. Requirements are embedded in the test logic, not in documents that reference the test logic.
The practical consequence is that SpaceX’s requirements corpus is inseparable from its production test infrastructure. A change to a power bus voltage requirement is a change to a test threshold in an automated script. The latency between requirements change and production implementation is measured in software deployment cycles, not document revision cycles. This is a fundamentally different epistemology of what a requirement is.
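As a schematic illustration of that pattern, and emphatically not SpaceX's actual tooling, consider acceptance limits living as constants in the automated checkout script, so that revising the requirement is a software change:

```python
# Hypothetical sketch: the requirement's thresholds live in the test itself.
# Changing requirement PWR-0042 means changing these constants and redeploying.

BUS_VOLTAGE_MIN_V = 27.5  # lower acceptance limit
BUS_VOLTAGE_MAX_V = 28.5  # upper acceptance limit

def check_pwr_0042_bus_voltage(read_telemetry) -> bool:
    """One step in an integrated functional checkout sequence."""
    measured = read_telemetry("bus_voltage")
    return BUS_VOLTAGE_MIN_V <= measured <= BUS_VOLTAGE_MAX_V

# Stand-in for a test stand's telemetry interface:
print(check_pwr_0042_bus_voltage(lambda point: 28.1))  # -> True
```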
OneWeb's manufacturing program with Airbus, which began on a pilot line in Toulouse before scaling up at a dedicated Florida facility, took a different path but arrived at similar conclusions. OneWeb inherited a more traditional aerospace pedigree (its original technical staff and supply chain came from the established satellite industry) and had to deliberately transform a document-centric requirements practice into something that could support a production cadence of multiple satellites per week. The transformation required not just tooling changes but organizational changes: test engineers who had traditionally been downstream consumers of requirements documentation became upstream participants in requirements authorship, specifically to ensure that acceptance criteria could be automated before a requirement was baselined.
Amazon Kuiper has had the advantage of building from scratch in an era when the lessons of Starlink and OneWeb were visible. Kuiper’s engineering teams have been explicit internally about treating test automation coverage as a requirements completeness metric. A requirement without an automatable acceptance test is, by that standard, incomplete — it has not yet been fully specified.
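That standard reduces to a metric that is straightforward to compute. A sketch, assuming requirement IDs and a hypothetical mapping from each requirement to its linked automated ATPs:

```python
def requirements_completeness(requirement_ids, atp_links) -> float:
    """Fraction of requirements with at least one linked automated ATP.

    `requirement_ids` is an iterable of requirement identifiers; `atp_links`
    maps a requirement ID to the automated ATPs that verify it. Both are
    illustrative structures, not any program's actual data model.
    """
    ids = list(requirement_ids)
    covered = sum(1 for req_id in ids if atp_links.get(req_id))
    return covered / len(ids) if ids else 0.0

print(requirements_completeness(
    ["PWR-0042", "RF-0107"],
    {"PWR-0042": ["ATP-117"]},
))  # -> 0.5
```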
The Tooling Mismatch Is Real and Consequential
Here is where the industry is encountering a structural problem that individual engineering heroics cannot solve.
The dominant requirements management tools in aerospace — IBM DOORS, DOORS Next Generation, Jama Connect, Polarion — were designed for a world of careful, document-centric review. They are excellent at managing large hierarchical requirement sets, capturing stakeholder rationale, maintaining change history, and producing traceability matrices for regulatory submission. These are genuine capabilities that matter. DOORS has decades of deployment in defense and space programs and carries institutional knowledge that should not be casually dismissed.
But their data models are document-oriented. Requirements live in modules. Traceability is expressed through links between text objects. Test procedures exist as separate documents that are manually associated. The workflow assumption baked into these tools is that requirements are authored by systems engineers, reviewed in formal sessions, baselined, and then handed downstream. Verification happens later, in a separate phase, by a separate team, using separate tools.
That assumption is incompatible with constellation manufacturing at volume.
When your production cadence requires that a satellite be accepted or rejected based on automated test results in under four hours, and when those acceptance decisions need to trace to specific requirements, and when a production anomaly needs to immediately trigger a requirements review to determine whether the acceptance criterion is wrong or the hardware is wrong — you need a requirements system that participates in the production workflow in real time. You need a system where the connection between a requirement, its acceptance logic, its test results, and its disposition history is live, queryable, and actionable.
The document-centric tools are not architected for this. The data lives in the right places — requirements, tests, results — but the connections are manual, the queries are slow, and the update latency is measured in review cycles rather than production cycles.
What the New Architecture Looks Like
The requirements management architecture that constellation programs are converging toward has several consistent characteristics.
Graph-based, not document-based. Requirements, components, tests, anomalies, and manufacturing records are nodes in a connected model, not paragraphs in a hierarchy of documents. This enables queries that document systems cannot answer: which requirements have acceptance tests that have never passed on production hardware? Which requirements have the highest failure rates on the line? Which component change requests invalidate which acceptance criteria?
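A toy sketch of the first of those queries, using networkx as a stand-in for a real graph store (node names, attributes, and edge semantics are all invented for illustration):

```python
import networkx as nx

# Requirements, ATPs, and execution records as nodes; links as edges.
g = nx.DiGraph()
g.add_node("PWR-0042", kind="requirement")
g.add_node("ATP-117", kind="atp")
g.add_node("RUN-88341", kind="run", outcome="fail", serial="SN-0412")
g.add_edge("PWR-0042", "ATP-117")   # requirement -> verifying ATP
g.add_edge("ATP-117", "RUN-88341")  # ATP -> production test execution

def never_passed(graph: nx.DiGraph) -> list[str]:
    """Requirements with no passing production run behind any of their ATPs."""
    stale = []
    for node, attrs in graph.nodes(data=True):
        if attrs.get("kind") != "requirement":
            continue
        runs = [run for atp in graph.successors(node)
                for run in graph.successors(atp)
                if graph.nodes[run].get("kind") == "run"]
        if not any(graph.nodes[run].get("outcome") == "pass" for run in runs):
            stale.append(node)
    return stale

print(never_passed(g))  # -> ['PWR-0042']
```

In a document-centric tool, answering the same question means exporting modules and cross-referencing spreadsheets; in a graph model it is a traversal.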
AI-assisted authorship with deterministic output. The bottleneck in generating manufacturing-ready requirements is not understanding the physics — it is translating physics understanding into unambiguous, automatable acceptance criteria at scale across thousands of requirements. AI-assisted authoring, when it works well, does not generate requirements from nothing. It takes an engineer’s intent and produces candidate acceptance criteria that are structured, quantitative, and parseable by test automation. The engineer reviews and adjusts. The throughput gain is significant.
Bidirectional traceability to test execution. Requirements are linked not just to test procedures but to test execution records. A requirement’s verification status is not a manually updated field — it is a computed property derived from test result data. When a production anomaly is logged, the system surfaces the requirements whose acceptance criteria the anomaly potentially challenges.
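A minimal sketch of status as a computed property, assuming execution records arrive as timestamped dicts (the shapes and field names are hypothetical):

```python
from enum import Enum

class Status(Enum):
    UNVERIFIED = "unverified"
    PASSING = "passing"
    FAILING = "failing"

def verification_status(runs: list[dict]) -> Status:
    """Derived from execution records; never hand-edited by anyone."""
    if not runs:
        return Status.UNVERIFIED
    latest = max(runs, key=lambda run: run["timestamp"])
    return Status.PASSING if latest["outcome"] == "pass" else Status.FAILING

print(verification_status([
    {"timestamp": 1, "outcome": "fail"},
    {"timestamp": 2, "outcome": "pass"},
]))  # -> Status.PASSING
```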
Living baselines with production feedback loops. The baseline is not frozen. It is versioned, with production test data informing ongoing requirements refinement. When a requirement’s acceptance threshold is routinely failing hardware that field performance data shows is operating correctly, that is a signal that the requirement needs adjustment — and the system should surface that signal, not bury it in a spreadsheet somewhere.
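One way that signal could be computed; the rate thresholds here are invented for illustration, and a real program would tune them per subsystem:

```python
def threshold_review_candidates(line_fail_rates, field_anomaly_rates,
                                line_fail_min=0.05, field_anomaly_max=0.01):
    """Requirements that fail often on the line but rarely in the field:
    a hint that the acceptance threshold, not the hardware, may be wrong."""
    return [req_id
            for req_id, line_fail in line_fail_rates.items()
            if line_fail >= line_fail_min
            and field_anomaly_rates.get(req_id, 0.0) <= field_anomaly_max]

print(threshold_review_candidates(
    {"PWR-0042": 0.12, "RF-0107": 0.02},
    {"PWR-0042": 0.00, "RF-0107": 0.00},
))  # -> ['PWR-0042']
```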
Tools like Flow Engineering are built around exactly this architecture. Rather than adding AI features to a document-centric foundation, they model requirements as a connected graph from the ground up, with AI assistance in authoring acceptance criteria and native integration paths to test infrastructure. For constellation programs that are standing up requirements practices from scratch or rebuilding them, the architectural fit matters more than feature parity with tools that have been accumulating capabilities for thirty years.
The Honest Assessment
The satellite industry has not solved this problem. It is in the middle of solving it, unevenly.
SpaceX is furthest along, but its approach is tightly coupled to its internal software infrastructure in ways that are not transferable as a model: it is a capability the company built for itself, not a practice other organizations can readily adopt. OneWeb's transformation was hard-won and came with significant organizational friction. Kuiper is early in its production ramp, and the proof will be in whether its requirements architecture holds at full rate.
What is clear is that the competitive pressure is one-directional. Constellation programs that succeed in making requirements manufacturing-ready will have production throughput advantages, lower rework rates, faster anomaly resolution, and more defensible acceptance decisions. Programs that do not will be slower, more expensive, and more exposed to field failures that trace back to acceptance criteria that nobody could automate.
The requirements management tool vendors that serve this market face a genuine choice: architect for the new model, or keep building excellent tools for a shrinking segment of the market, the programs where building one satellite at great expense is still the right answer. Both markets will exist. But the growth is on the constellation manufacturing floor.
For practicing systems engineers entering a constellation program today, the practical implication is direct: if you are writing a requirement and you cannot specify the acceptance test that will verify it at line speed, you are not done writing the requirement. The document is not the deliverable. The automatable, traceable, executable acceptance criterion is the deliverable. Everything else is working notes.