OpenStar Technologies: Systems Engineering at the Frontier of First-of-Kind Physics

When Requirements and Design Co-Evolve, Traditional RE Tools Become the Bottleneck

Most requirements management frameworks assume a basic sequence: requirements are defined, then design begins, then verification confirms the design meets the requirements. Clean, linear, defensible. It is also a model that assumes you already understand the physics of your system before you start building it.

OpenStar Technologies does not have that luxury.

The Vancouver-based company is developing a compact fusion energy device using magnetized target fusion (MTF) — an approach distinct from the tokamaks that have dominated the fusion landscape for decades and from the laser-driven inertial confinement approach pursued by facilities like the National Ignition Facility. MTF occupies a different point in the design space entirely, and with that novelty comes a systems engineering challenge that standard tooling and process frameworks were not designed to handle.


What Magnetized Target Fusion Actually Is

To understand the engineering challenge, you need to understand what OpenStar is building and why it behaves differently from better-known fusion concepts.

In a tokamak, a plasma is magnetically confined in a toroidal chamber and heated to fusion conditions over sustained periods. The engineering challenge is maintaining that confinement stably at extreme temperatures. In laser inertial confinement fusion, a fuel pellet is compressed symmetrically by high-energy lasers in an implosion lasting nanoseconds. Each approach has its characteristic physics, its characteristic failure modes, and its characteristic set of engineering interdependencies.

Magnetized target fusion works differently. A magnetized plasma — a plasma that carries its own embedded magnetic field — is formed inside a conducting liquid metal liner. That liner is then compressed mechanically, at high velocity, collapsing inward and compressing the magnetized plasma to fusion conditions. The liner both drives the compression and provides shielding and tritium breeding in the full reactor concept.

The approach is sometimes described as a hybrid: it uses magnetic fields like a tokamak but relies on dynamic compression like inertial confinement. General Fusion, also a Canadian company, has pursued a related concept. OpenStar’s implementation reflects their own design choices, developed as the team pushes toward an experimental demonstration.

The physics is genuinely first-of-kind in the details. The behavior of a magnetized plasma under that specific compression geometry, at those specific timescales, with those specific boundary conditions, is not fully characterized by prior experimental data. OpenStar is, in part, running experiments to find out what their system actually does.


The Systems Engineering Problem This Creates

In a mature engineering program — a commercial aircraft derivative, an automotive platform update, a satellite built on an established bus — requirements engineering is hard but bounded. The physics of the system is known. The design space is constrained by prior art. Requirements can be written with reasonable confidence that they capture what the system needs to do, and the design team’s job is to show that their implementation satisfies those requirements.

OpenStar is working in a different regime. The physics being validated through experiment is the same physics that drives the requirements on every subsystem. When a plasma experiment returns data that revises the team’s understanding of compression dynamics, that revision propagates — in principle — into requirements on the mechanical systems that drive the liner, the magnetic systems that form and maintain the initial plasma configuration, the diagnostic systems that must observe the implosion at the relevant timescales, and the structural systems that must contain the event.

This is not a matter of requirements changing because a customer changed their mind or a project manager shifted scope. The requirements are changing because the engineering team learned something true about the physical world, and that truth has implications across the whole system simultaneously.

Standard requirements management practice handles change poorly. Document-based systems — spreadsheets, Word-derived RTMs, legacy tools where requirements live as text in hierarchical folders — turn change into a manual and social process. An engineer identifies that a physics update has implications for their subsystem, updates the relevant requirements document, sends an email, and hopes that the downstream effects are tracked. In a system with deep interdependencies across four or five distinct technical domains, that process does not scale. Changes get missed. Verification evidence built against superseded requirements sits in the record unchallenged. The gap between the actual technical baseline and the documented baseline widens over time.

For a first-of-kind physics program like OpenStar’s, that gap is not just an administrative inconvenience. It is a safety and validity risk.


Branching as a First-Class Engineering Practice

One specific discipline that first-of-kind programs demand, and that standard requirements tooling supports poorly, is the ability to maintain parallel design concepts in a structured way.

In an established program, branching is something you do reluctantly, because it is expensive. You converge on a baseline and then protect it. Branching means two teams working on diverging designs, two sets of documents to maintain, two verification programs to run. The overhead is real.

In a program where key physics parameters are still being measured, branching is not overhead — it is the strategy. If the team does not yet know whether compression behavior will land in regime A or regime B, the right engineering response is to develop requirements and design concepts for both, in parallel, until the experimental data resolves the question. Premature convergence in that situation does not save work; it creates rework when the data arrives.

This requires a requirements infrastructure that treats branching as a first-class capability. That means being able to fork a requirements baseline, develop it independently across design variants, track verification status separately for each branch, and then merge or retire branches cleanly when the physics resolves. In document-based systems, this devolves into duplicated documents and manual reconciliation. Graph-based systems built for it can handle it as a routine workflow.
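The fork-develop-merge workflow described above can be sketched in a few lines. This is a minimal illustrative model, not the data model of any real requirements tool; the names (`Requirement`, `Baseline`, the `MECH-001` identifier, and the regime-A/regime-B scenario) are all hypothetical.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Requirement:
    req_id: str
    text: str
    verified: bool = False  # verification status travels with the branch

@dataclass
class Baseline:
    name: str
    requirements: dict  # req_id -> Requirement

    def fork(self, name):
        """Create an independent branch that can diverge freely."""
        return Baseline(name, dict(self.requirements))

    def update(self, req_id, **changes):
        """Revise a requirement on this branch only."""
        self.requirements[req_id] = replace(self.requirements[req_id], **changes)

    def merge_from(self, branch, req_ids):
        """Adopt selected requirements from a resolved branch."""
        for rid in req_ids:
            self.requirements[rid] = branch.requirements[rid]

# Two liner-drive concepts developed in parallel until the physics resolves.
main = Baseline("main", {
    "MECH-001": Requirement("MECH-001", "Liner reaches compression velocity V"),
})
regime_a = main.fork("regime-A")
regime_b = main.fork("regime-B")

regime_a.update("MECH-001", text="Liner reaches velocity V_a", verified=True)
regime_b.update("MECH-001", text="Liner reaches velocity V_b")

# Experimental data selects regime A: merge it into main and retire branch B.
main.merge_from(regime_a, ["MECH-001"])
print(main.requirements["MECH-001"].verified)  # True
```

The key property is that verification status is branch-local: evidence gathered against regime A never silently applies to regime B, and the merge step makes the moment of convergence an explicit, recorded act.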


Deep Interdependencies Across Domains

OpenStar’s system architecture spans at least four technical domains with tight bidirectional coupling.

Plasma physics defines the initial plasma conditions that must be achieved — the magnetic field strength, plasma temperature, density, and geometric configuration — before compression begins. These become requirements on the plasma formation subsystem. They also define the compression ratios and timescales required for ignition, which become requirements on the liner drive system.

Mechanical systems — the liner, the driver that accelerates it, and the structural containment — must achieve precise compression geometry at high velocity. Requirements here flow from plasma physics but also from magnetics: the liner’s conductivity and geometry affect how the embedded magnetic field behaves during compression.

Magnetics drives not just the initial plasma formation but also the behavior of the plasma during compression. The coupling between the liner’s inward motion and the magnetic field topology is a central physics question. Requirements on magnetic field strength, uniformity, and timing are set by plasma physics models and constrained by what the mechanical system can actually achieve.

Diagnostics must observe fusion events that last microseconds or less, at extreme conditions, from outside the compression zone. Requirements on diagnostic timing, sensitivity, and spatial resolution flow directly from the plasma physics predictions — and as those predictions are updated by experimental data, the requirements on diagnostics update accordingly.

Mapped as a requirements graph, these four domains are not cleanly hierarchical. They form a network with cycles. A change in the plasma physics model does not propagate down a tree; it propagates across a graph, reaching mechanical, magnetic, and diagnostic requirements through different paths and with different implications at each node.
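The difference between tree and graph propagation is concrete. The sketch below is illustrative only — the node names and edges are invented for this example, not OpenStar’s actual architecture — but it shows why a cyclic requirements network needs visited-node tracking rather than simple top-down traversal: the magnetics and mechanical nodes constrain each other, forming a cycle.

```python
from collections import deque

# Directed edges from a requirement to the requirements that depend on it.
# Note the cycle: MECH-liner-velocity <-> MAG-field-topology.
edges = {
    "PHYS-compression-model": ["MECH-liner-velocity", "MAG-field-topology", "DIAG-timing"],
    "MECH-liner-velocity":    ["MAG-field-topology", "STRUCT-containment"],
    "MAG-field-topology":     ["MECH-liner-velocity", "DIAG-sensitivity"],
    "DIAG-timing":            [],
    "DIAG-sensitivity":       [],
    "STRUCT-containment":     [],
}

def impacted(start):
    """All requirements reachable from a changed node, breadth-first.

    The visited set ('seen') is what makes this safe on a cyclic graph:
    a naive tree walk would loop forever between MECH and MAG.
    """
    seen, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order

# A revised compression model touches every downstream domain:
print(impacted("PHYS-compression-model"))
# ['MECH-liner-velocity', 'MAG-field-topology', 'DIAG-timing',
#  'STRUCT-containment', 'DIAG-sensitivity']
```

The point is not the algorithm, which is elementary, but what it replaces: in a document-based system, computing this impact set is a human reading documents and remembering links.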


How OpenStar Approaches This With Flow Engineering

Managing this kind of interconnected, fast-changing architecture is exactly the problem that OpenStar uses Flow Engineering to address. Flow Engineering is an AI-native requirements management platform built specifically for hardware and systems engineering teams, with a graph-based data model that maps requirements as nodes in a connected network rather than rows in a document.

For a program like OpenStar’s, the graph model matters in a specific and practical way. When a physics update changes a requirement in the plasma domain, Flow’s connected structure makes it possible to immediately surface which downstream requirements in mechanical, magnetic, and diagnostic domains are linked to that node — not through a manual impact analysis that depends on an engineer’s memory of the system, but through the graph structure itself. The propagation of change becomes a navigable query, not a search operation.

Flow’s support for branching design concepts allows OpenStar’s team to maintain parallel architecture variants with independent verification status during the phases where physics parameters are unresolved. When experimental data resolves a parameter, the relevant branch can be retired and the baseline updated cleanly, with the record of what was superseded and why preserved in the graph.

Flow also applies AI to the requirements work itself — surfacing gaps, flagging potential conflicts between requirements in different domains, and supporting the kind of consistency checking that is otherwise a manual and error-prone task on a system with hundreds of interdependencies. For a small, highly technical team pushing toward a fusion demonstration, that leverage matters.

The deliberate trade-off in Flow’s design is depth of specialization over breadth of enterprise process coverage. It is built for the engineering-intensive, fast-moving, physics-driven phase of hardware development — not for the compliance documentation workflows or program management overlays that larger enterprise platforms support. For OpenStar, at this stage, that trade-off is the right one.


The Broader Challenge of First-of-Kind Programs

OpenStar is not alone in facing this challenge. The fusion sector has expanded significantly, with companies pursuing a range of confinement concepts — Commonwealth Fusion Systems’ high-field tokamak, TAE Technologies’ field-reversed configuration, Helion’s field-reversed compression approach, and others. Each of these programs shares the basic characteristic that the physics being validated is the physics that drives the requirements, and each faces the systems engineering discipline problem that entails.

The challenge extends beyond fusion. Any program developing genuinely novel technology — new propulsion physics, new materials, new sensing modalities — faces a version of the same issue. The V-model and its derivatives assume requirement stability that first-of-kind programs cannot offer. The tools that implement those models were built for that assumption.

The systems engineering community has understood the co-evolution problem theoretically for some time. Model-based systems engineering (MBSE) approaches have been partly motivated by the need to handle tighter coupling between requirements and design. But in practice, many MBSE implementations still rely on document-centric tooling underneath the modeling layer, which reintroduces the brittleness they were meant to address.

What programs like OpenStar’s demonstrate is that the tooling choice is not merely an administrative decision. For a team where requirements and design co-evolve, driven by experimental physics, the difference between a graph-based connected requirements infrastructure and a document-based one is the difference between a process that can absorb rapid change and one that accrues debt every time the physics updates.


Honest Assessment

OpenStar is early stage. Their fusion demonstration is a milestone ahead of them, not behind them. The physics questions they are resolving are real and unresolved, and the engineering challenge is proportional to the novelty of what they are attempting.

What makes them interesting from a systems engineering perspective is not their specific technical approach to fusion — that is a physics and engineering question that experiments will answer — but the discipline they are demonstrating in managing a first-of-kind program where the standard process assumptions do not hold.

The fusion field will almost certainly produce some winners and some programs that do not reach demonstration. The systems engineering practices that let a small team move fast without creating architectural debt may not determine which outcome arrives — but they will affect whether the team can interpret and act on what their experiments tell them, and whether their designs stay coherent as the physics evolves.

That is a problem worth taking seriously, and it is one the industry has not fully solved.