Turion Space and the Systems Engineering of Uncooperative Targets

How a small team is designing spacecraft to interact with objects whose properties they cannot fully predict


There is a narrow category of engineering problems where the target of your system’s primary function is not just unknown but actively resistant to characterization. Military systems deal with adversarial environments. Medical devices operate in biological systems with high patient-to-patient variance. But in orbital mechanics, a new class of problem is emerging: spacecraft designed to physically approach, inspect, and in some cases capture other spacecraft — objects they have never touched, whose operators may not be responsive, and whose physical state has been degrading in an environment that prevents direct inspection.

Turion Space, a small company headquartered in Irvine, California, is building for exactly this category. Their DROID spacecraft program targets proximity operations, orbital debris removal, and satellite life extension services. The engineering challenge is not just that these are hard problems. It is that the requirements you write at the start of a program cannot fully specify the target you will encounter when you operate.

That gap — between what you can know when you write requirements and what you will face when you execute — is one of the most structurally interesting systems engineering challenges in the current commercial space industry.


What Turion Is Actually Building

Turion’s approach is to develop a spacecraft platform capable of multiple in-space services missions: rendezvous and proximity operations (RPO), resident space object (RSO) characterization, physical grapple and tow for debris removal, and potential refueling or hardware servicing for cooperative clients. Their initial customers include government contracts for RSO characterization, which provides a lower-risk entry point before progressing to contact operations.

This sequencing is deliberate and reflects sound systems engineering instinct. Characterization missions — flying near an object and imaging it — stress the rendezvous and proximity operations architecture without requiring contact mechanics. The team learns about their GNC performance, their sensor suite, their operational procedures, and the realities of RSO characterization data quality, all before the stakes of physical contact are introduced. It is a phased validation strategy built into the mission architecture itself.

But the long-term business case requires contact operations. Debris removal at meaningful scale — which is what both regulators and commercial operators increasingly want — requires capture. And capture requires grappling with objects that were never designed to be grappled.


The Requirements Problem for Uncooperative Targets

Traditional interface control in space systems engineering assumes bilateral agreement. Two programs produce an Interface Control Document. Both sides have engineers. Both sides sign. Both sides test to the interface. When a satellite services company approaches a defunct third-party spacecraft, none of that applies.

The target spacecraft may have:

  • Unknown tumble rates and rotational state
  • Degraded surface coatings and thermal properties affecting sensor returns
  • Structural integrity that has not been assessed since launch
  • Propellant remaining in lines at unknown pressure
  • Solar array configurations that may have changed due to mechanism failure
  • No operational team available to query

Writing requirements for a capture system that must work across this envelope demands a different methodology than writing requirements for a known interface. The requirements engineer cannot specify “target rotation rate shall not exceed X deg/s” as a design driver and call it done, because there is no contract mechanism to enforce that constraint on an uncooperative object.

Instead, requirements must be written around the service spacecraft’s capability envelope: what tumble rates the GNC and capture system can handle, what structural load cases the grapple mechanism must survive given the range of target mass estimates, and what minimum sensor return quality the vision-based navigation needs in order to converge. The uncertainty is in the requirements themselves, not just in the implementation.

This is a probabilistic requirements problem. Turion’s engineers must characterize the population of likely targets — using conjunction data, historical launch records, NORAD catalog data, and available imagery — and bound the design space accordingly. Requirements become distributions, not point values. Trade studies must carry those distributions forward through the design.
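The percentile-bounding step can be sketched in a few lines. The lognormal population model and its parameters below are placeholder assumptions for illustration, not real catalog data; a real analysis would draw from an actual debris population model:

```python
import random
import statistics

def derive_tumble_rate_bound(n_samples=100_000, coverage=0.95, seed=7):
    """Derive a capture-system tumble-rate design bound (deg/s) as a
    population percentile rather than asserting a fixed point value.
    The lognormal parameters are placeholders, not real catalog data."""
    rng = random.Random(seed)
    # Hypothetical population model: most defunct objects tumble slowly,
    # with a long tail of fast rotators.
    samples = [rng.lognormvariate(mu=0.5, sigma=0.8) for _ in range(n_samples)]
    samples.sort()
    # The requirement bound is the value covering the chosen fraction
    # of the modeled population, carried forward with its assumptions.
    bound = samples[int(coverage * n_samples) - 1]
    return bound, statistics.median(samples)

bound, median = derive_tumble_rate_bound()
print(f"design bound covering 95% of population: {bound:.1f} deg/s (median {median:.1f})")
```

The point of the sketch is that the derived bound is meaningless without the population model and coverage fraction that produced it, which is exactly the metadata a requirements record needs to carry.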

Small teams doing this for the first time, without the institutional knowledge base of a prime contractor that has operated in this space for decades, face a genuine documentation and traceability challenge. How do you maintain requirement rationale — the why behind a specific performance bound — when the bound was derived from a probabilistic analysis of a target population that itself carries uncertainty? Legacy document-based requirements tools were not designed for this. A matrix of requirements and verification methods does not naturally accommodate the metadata needed to understand that Requirement 4.3.2.1 was set at a particular delta-V tolerance because of a Monte Carlo analysis over a specific debris population model run in Q2 of a given year using specific catalog inputs.
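One way to make that rationale first-class is to attach derivation fields directly to each requirement record, so the "why" travels with the bound. The schema below is a hypothetical illustration, not any particular tool's format:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Requirement:
    """A requirement record that carries its derivation, not just its bound.
    All field names and values here are illustrative."""
    req_id: str
    text: str
    bound: float
    unit: str
    derived_from: str                                  # analysis that produced the bound
    model_inputs: dict = field(default_factory=dict)   # catalog snapshot, run count, etc.
    invalidated_if: str = ""                           # assumption whose failure re-opens the requirement

req = Requirement(
    req_id="4.3.2.1",
    text="Capture system shall null target tumble rates up to the design bound.",
    bound=6.0, unit="deg/s",
    derived_from="MC-2024-Q2-debris-population",       # hypothetical analysis ID
    model_inputs={"catalog": "18th SDS snapshot 2024-04-01", "runs": 100_000},
    invalidated_if="Population model excludes post-breakup fragments above 500 kg",
)
print(req.req_id, req.bound, req.unit, "<-", req.derived_from)
```

A flat requirements matrix has nowhere to put `model_inputs` or `invalidated_if`; that is precisely the metadata that evaporates in a spreadsheet.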


Regulatory Terrain That Moves Under You

Turion operates in a regulatory environment that is, to be direct, still being invented. Active debris removal (ADR) and on-orbit servicing (OOS) touch at least four distinct regulatory domains: FCC spectrum licensing, FAA launch licensing, State Department ITAR controls, and — most ambiguously — the emerging question of who authorizes a spacecraft to make physical contact with another nation’s space object.

The Outer Space Treaty (1967) creates a framework in which the state of registry retains jurisdiction and control over launched objects, and ownership is unaffected by their presence in space. Removing another state’s debris without authorization is legally ambiguous at best. The practical reality for Turion in near-term operations is that their initial debris removal missions are likely to be conducted under government contracts — particularly with the Space Force and with operators who own the debris in question and are contracting for its removal. This sidesteps the authorization problem for now.

But for commercial autonomous debris removal at scale, the regulatory framework does not yet exist. FCC has begun addressing satellite interference aspects of proximity operations. The State Department is engaged on dual-use concerns. No single regulatory body currently has clear jurisdiction over the authorization question for contact operations near foreign-owned objects.

From a systems engineering standpoint, this creates a requirements instability problem. If your spacecraft is designed to meet a regulatory compliance requirement, and that requirement changes mid-program because the regulatory body revised its guidance, you face potential requirement churn on a timeline that may not accommodate graceful redesign. This is not hypothetical — it is an active concern for every company in the OOS and ADR space.

Turion, like its peers Astroscale, ClearSpace, and Northrop’s SpaceLogistics division, must architect its systems with enough flexibility to accommodate regulatory evolution. This argues for modular operations architecture, clear separation between mission-specific and platform-generic capabilities, and requirements traceability that can demonstrate compliance against multiple potential regulatory regimes simultaneously. None of that is easy. All of it is necessary.


Small Team, Full Lifecycle, Compressed Timeline

Turion does not have hundreds of systems engineers. They have a small, technically deep team that must cover requirements, architecture, design, verification, and operations planning simultaneously. This is increasingly common in the new space industry, and it creates a specific pattern of engineering risk.

Large programs have the luxury — and the curse — of specialization. A requirements engineer works requirements. A verification engineer works verification. The handoff between them is the problem, and organizations like NASA have spent decades building standards (NPR 7123.1, for instance) to manage it. Small teams avoid that handoff problem because the same engineer often writes the requirement and owns the verification approach. Context is preserved in human memory, not in process artifacts.

The risk is what happens when that engineer leaves, or when the program scales, or when an external auditor needs to understand requirement rationale two years after the original decision was made. The institutional knowledge that lives in a senior engineer’s head is not available to a new hire, not auditable by a customer, and not recoverable after a gap.

This is not a Turion-specific problem. It is a structural challenge for every fast-moving small-team space company, and it has caused real program failures. SpaceDev, Kistler Aerospace, and others stumbled in part because engineering knowledge that was never properly externalized could not survive organizational stress.

The response that serious small teams are adopting is to invest in tooling that externalizes requirements context — traceability to rationale, not just to verification methods — without imposing the overhead of enterprise tools designed for teams ten times their size. The alternative, managing requirements in a shared spreadsheet or a word processor, actively destroys context even as the team believes it is capturing information.


What Serious Requirements Practice Looks Like at This Scale

For a program like DROID, serious requirements management at small-team scale means several things concretely.

First, requirements must carry rationale. Not just the performance value but the derivation: what analysis produced this number, what assumption set is it contingent on, what would change the requirement if the assumption proved wrong. This is the information that protects the team when they encounter the uncooperative target in orbit and need to rapidly assess whether an anomalous condition falls within their designed envelope.

Second, traceability must connect to the actual architecture, not just to a verification event. If the capture mechanism requirement traces only to “test during ATP,” that traceability is not useful when a customer asks why the design is the way it is. Traceability that connects requirements to the architectural decisions that implement them — and to the analyses that bounded the requirement in the first place — is operationally useful information, not just a compliance artifact.

Third, the requirements model must accommodate uncertainty explicitly. For Turion, writing a requirement against a distribution of target states means the requirement document must carry that distribution as engineering data, not just the derived bound. Graph-based requirements tools, which can represent relationships between requirements, their derivation sources, and the analytical models that produced them, are structurally better suited to this than flat document hierarchies. Tools like Flow Engineering, which are built around connected models rather than document templates, can represent the relationship between a performance requirement and the population analysis that produced it — making that context auditable and maintainable rather than tribal.
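The graph idea can be sketched in plain Python, independent of any specific tool's data model. Node names and relationship types below are hypothetical:

```python
# Nodes are requirements, analyses, inputs, architecture elements, and
# verification events; edges carry the relationship type.
edges = [
    ("REQ-4.3.2.1", "derived_from", "ANA-MC-debris-2024Q2"),
    ("ANA-MC-debris-2024Q2", "uses_input", "CAT-18SDS-2024-04"),
    ("REQ-4.3.2.1", "implemented_by", "ARCH-capture-gnc-loop"),
    ("REQ-4.3.2.1", "verified_by", "VER-hil-tumble-sim"),
]

def rationale(node, depth=0):
    """Walk outgoing edges to collect the full derivation chain for a node."""
    lines = []
    for src, rel, dst in edges:
        if src == node:
            lines.append("  " * depth + f"{rel} -> {dst}")
            lines.extend(rationale(dst, depth + 1))
    return lines

print("REQ-4.3.2.1")
print("\n".join(rationale("REQ-4.3.2.1")))
```

The same query answers both the auditor's question ("why is this bound 6 deg/s?") and the operator's ("which requirements are invalidated if this catalog snapshot turns out to be wrong?") by walking edges in one direction or the other — something a flat matrix cannot do.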

Fourth, requirements must be written with verification in mind from day one. For contact operations with uncooperative targets, some verification methods are inherently limited. You cannot fully test capture on the specific debris object you will encounter. This means your verification approach must be transparent about what it proves and what it leaves as residual risk. That residual risk needs to be formally dispositioned, not quietly ignored.
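One lightweight way to keep residual risk formally dispositioned is to record, alongside each verification activity, what it does not prove. All identifiers and descriptions below are hypothetical:

```python
# Each verification record carries an explicit "residual" field: the scope
# the activity cannot cover, tracked rather than silently dropped.
verifications = [
    {"id": "VER-hil-tumble-sim",
     "covers": "tumble nulling up to design bound",
     "residual": "actual target inertia properties unknown until rendezvous"},
    {"id": "VER-grapple-qual-test",
     "covers": "grapple loads on reference geometry",
     "residual": "degraded target surface condition not representable on ground"},
]

# Non-empty residual entries form the open-risk list that must be
# dispositioned by program authority, not quietly ignored.
open_risks = [v for v in verifications if v["residual"]]
for v in open_risks:
    print(f'{v["id"]}: residual risk -> {v["residual"]}')
```

Making the residual an explicit field forces the team to write down the gap at the moment the verification approach is chosen, when the context is still fresh.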


Honest Assessment

Turion Space is doing technically ambitious work with a lean team in an immature regulatory environment. The engineering challenges they face — probabilistic interface requirements, regulatory instability, uncooperative target dynamics — are genuinely novel and not well served by inherited processes from traditional space primes.

Their sequenced mission approach is thoughtful. Their entry through government characterization contracts is strategically sound. The questions that remain open are the questions that every small team in this space faces: whether the requirements and engineering knowledge being generated now will survive the organizational scaling that success will require, and whether the regulatory environment will stabilize fast enough for the commercial business case to close.

The systems engineering of in-space services is not harder than the orbital mechanics. It may be harder than the GNC. Getting requirements right for systems that must work against objects you cannot fully specify in advance is a first-principles problem that demands first-principles rigor — and that rigor has to be built deliberately, not assumed to emerge from smart people working hard.

Turion appears to understand this. Whether the timeline and resources allow them to execute on that understanding is the open question.