How the Fusion Energy Sector is Building Its Systems Engineering Foundation

Commercial fusion is no longer a physics project. At Commonwealth Fusion Systems, TAE Technologies, Helion Energy, Type One Energy, and a dozen other companies, programs have crossed a threshold: the question is no longer whether plasma confinement is theoretically achievable, but how to engineer a machine that does it reliably, safely, and economically. That transition — from experiment to engineering development program — is one of the most demanding systems engineering challenges in any industry. Fusion companies are now living inside it.

The difficulty is not just technical. It’s structural. These organizations were built by plasma physicists and computational scientists. Their processes were optimized for hypothesis testing and publication cycles, not for configuration control, requirements baselines, or formal verification. As they scale toward first-of-a-kind (FOAK) devices and, eventually, pilot plants, they are building a systems engineering foundation largely from scratch — often simultaneously with the engineering itself.

The Physics-Before-Requirements Problem

Every mature engineering discipline has a clean(ish) separation between design input and design output. You write requirements. You design to them. You verify against them. Fusion collapses this sequence in uncomfortable ways.

Confinement physics uncertainty is the root cause. The performance of a magnetic confinement device — whether tokamak, stellarator, or field-reversed configuration — depends on plasma behavior that is partially characterized, never fully predictable, and inherently nonlinear. Instability modes, disruption energy, fast particle losses: these are known phenomena with incomplete first-principles models. A requirement like “the first wall shall survive 10,000 disruption events at 5 MJ/m²” is technically meaningful, but the 5 MJ/m² figure is derived from plasma models that carry substantial uncertainty bands.

This creates what systems engineers at fusion companies describe informally as the “TBD spiral.” Requirements start with assumptions. Assumptions generate designs. Designs surface new physics questions. Physics questions force requirement revisions. In a traditional program, TBDs (to be determined) are acceptable early in Phase A but are supposed to close as the program matures. In fusion, some TBDs don’t close until the machine runs — which means you’re validating requirements with the system you built to meet them.
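The "make TBDs explicit" discipline can be sketched in code. The following is an illustrative data model, not any program's actual tooling — the class names, IDs, and the 5 MJ/m² figure (echoed from the example above) are all hypothetical:

```python
# Sketch of explicit TBD tracking: every physics-derived requirement carries
# tagged assumptions with a closure plan, so open TBDs are queryable rather
# than buried in requirement text. All names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Assumption:
    id: str
    description: str
    closure_plan: str        # how and when this TBD is expected to close
    closed: bool = False

@dataclass
class Requirement:
    id: str
    text: str
    assumptions: list = field(default_factory=list)

    def open_tbds(self):
        return [a for a in self.assumptions if not a.closed]

disruption_load = Assumption(
    id="A-017",
    description="Peak disruption load of 5 MJ/m^2 from plasma model v3.2",
    closure_plan="Revisit after first-plasma disruption measurements",
)
req = Requirement(
    id="FW-104",
    text="First wall shall survive 10,000 disruption events at 5 MJ/m^2",
    assumptions=[disruption_load],
)
print([a.id for a in req.open_tbds()])  # -> ['A-017']
```

The point of the structure is that "closure planning" becomes a query over the requirement set instead of a manual document review.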

This isn’t a failure of engineering discipline. It’s the honest shape of first-of-a-kind development. The systems engineering response is to make that uncertainty explicit, traceable, and managed — rather than hiding it inside a requirements document that looks more settled than it is.

What Standards Are Fusion Companies Actually Using?

There is no fusion-specific systems engineering standard. That gap is real, and filling it is a genuine engineering challenge. What fusion companies are doing instead is selective borrowing from adjacent industries, with deliberate adaptation.

From nuclear: The U.S. Nuclear Regulatory Commission’s regulatory frameworks — particularly 10 CFR Part 50 (existing light water reactor rules) and the newer 10 CFR Part 53 (advanced reactor licensing framework) — provide the closest regulatory analogs. Fusion companies pursuing NRC licensing (which may be required for tritium-breeding or neutron-producing devices) are mapping their safety analysis and design control requirements against these frameworks. The NRC’s position on fusion is still evolving: in 2023 the Commission decided to regulate near-term fusion energy systems under the byproduct-material framework rather than as utilization facilities, acknowledging fusion’s distinct risk profile. Most fusion companies are not waiting for definitive regulatory clarity — they’re building licensable documentation structures now, with the assumption that requirements for defense-in-depth and formal safety case development will apply.

From aerospace: NASA-STD-7009 (models and simulations) and the broader NASA Systems Engineering Handbook have significant influence on how fusion companies manage their physics models as engineering artifacts. When a plasma simulation is being used to derive a design requirement — say, a thermal load on a divertor target — that simulation needs to be treated with the discipline of a model used to certify flight software. Uncertainty quantification, validation against experimental data, version control, and documentation of assumptions are all practices borrowed from aerospace’s model-based engineering culture.
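What "treating a simulation as an engineering artifact" means in practice can be sketched minimally: pin the model version, fingerprint the exact inputs, and record the validation basis alongside the requirement it feeds. Every field name and value below is hypothetical, not drawn from any real program or code:

```python
# Hedged sketch of a controlled simulation artifact: the model version,
# an input-deck fingerprint, and the validation basis travel together with
# the requirement derived from the run. Names and values are illustrative.
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class SimulationArtifact:
    model: str               # physics code + version used for the run
    inputs_hash: str         # fingerprint of the exact input deck
    validated_against: str   # experimental dataset the model was checked on
    derived_requirement: str # requirement ID this run supports

def fingerprint(inputs: dict) -> str:
    """Deterministic short hash of the input deck (key order irrelevant)."""
    blob = json.dumps(inputs, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

inputs = {"plasma_current_MA": 8.7, "toroidal_field_T": 12.2}  # hypothetical
artifact = SimulationArtifact(
    model="transport-code v4.1",                          # hypothetical
    inputs_hash=fingerprint(inputs),
    validated_against="2022 tokamak heat-flux campaign",  # hypothetical
    derived_requirement="DIV-21: divertor target peak load",
)
```

With records like this, a revised plasma model is a detectable change to the artifact, not a silent drift under a requirement that looks unchanged.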

From defense: MIL-STD-882 (system safety) and MIL-STD-461 (electromagnetic compatibility, for superconducting magnet systems with substantial energy storage) appear in the requirements infrastructure of several fusion programs. The hazard analysis methodology in 882 maps reasonably well onto fusion’s dominant risk categories: magnet quench energy, tritium inventory, cryogen releases, and high-voltage systems.

What fusion companies are generally not doing is adopting any of these frameworks wholesale. A 10 CFR 50 design control program, applied literally to a FOAK fusion device, would generate documentation overhead that small engineering teams cannot sustain. The adaptation is deliberate: extract the underlying principles (design basis documentation, independence of safety functions, controlled configuration changes), implement them with lean tooling, and build up as the program scales.

Plasma-Facing Components: The Requirements Engineering Nightmare

If there is one subsystem that most clearly exposes the limits of traditional requirements management in fusion, it’s plasma-facing components (PFCs): the first wall, divertor, and limiters that are in direct contact with or immediately adjacent to the plasma.

PFC requirements are technically dense. Surface heat flux, neutron fluence, tritium permeation, thermomechanical fatigue, sputtering erosion, and compatibility with vacuum and plasma chemistry all need to be specified, and they interact. A tungsten divertor tile that satisfies peak heat flux requirements may crack under the cyclic thermal stress of daily plasma pulses. A material optimized for low tritium retention may have unacceptable neutron activation characteristics.

The deeper problem is that PFC performance requirements are partially unknowable before operation. Fusion-relevant neutron irradiation data for candidate materials — tungsten, beryllium, reduced-activation ferritic-martensitic steels — is limited. The International Fusion Materials Irradiation Facility (IFMIF) program, whose engineering validation work is hosted at Rokkasho, Japan, is designed to address this, but it won’t generate the comprehensive irradiation database needed for full FOAK device design until well into the 2030s. Fusion programs are writing PFC requirements today using available irradiation data from fission reactors, ion beam experiments, and simulation — all of which are partial proxies for the actual environment.

This forces a requirements strategy of deliberate conservatism combined with formal uncertainty acknowledgment. Requirements are written with explicit safety margins that account for data gaps, and those margins are documented with traceability to the specific uncertainties they cover. When material data improves, the requirements can be updated — but the linkage to original uncertainty must be preserved so engineers understand what changed and why.
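The margin-with-traceability pattern is simple enough to show directly. The sketch below is illustrative only — the factor names, values, and the nominal heat flux are invented for the example, not real design numbers:

```python
# Illustrative sketch: a conservative requirement value derived from a
# nominal physics prediction plus named margin factors, each traceable to
# the specific data gap it covers. All values are hypothetical.
margins = {
    "neutron-data-gap": 1.5,   # no fusion-spectrum irradiation data yet
    "model-uncertainty": 1.3,  # heat-flux model validation band
    "manufacturing": 1.1,      # as-built vs. as-designed tolerance
}

nominal_heat_flux = 10.0  # MW/m^2, from a divertor simulation (assumed)

required_capability = nominal_heat_flux
for reason, factor in margins.items():
    required_capability *= factor  # each factor stays named, not merged

print(f"Required capability: {required_capability:.2f} MW/m^2")
# 10.0 * 1.5 * 1.3 * 1.1 = 21.45
```

Because each factor is named rather than rolled into one opaque multiplier, improved material data can shrink exactly one margin — say, "neutron-data-gap" from 1.5 to 1.2 — while preserving the record of what changed and why.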

Traceability in First-of-a-Kind Systems

Traceability is where many fusion programs are most exposed. In a mature industry with established supply chains and precedent designs, requirements traceability is difficult but tractable. In a FOAK fusion program, it’s structurally harder for three reasons.

Requirements churn is high. Physics understanding improves. Regulatory positions evolve. Design iterations invalidate assumptions. A requirement that was baselined two years ago may have been written under a different plasma model, before a key experiment, or before a regulatory interaction that changed the licensing strategy. If traceability is maintained in static documents — requirements in one spreadsheet, design decisions in another, verification plans in a third — the linkages break silently. Engineers working downstream don’t know that the requirement they’re designing to has been superseded in practice even if not formally revised.

The requirement hierarchy is deep and cross-domain. A top-level requirement like “the fusion device shall achieve net energy gain with Q ≥ 2” flows down through plasma performance requirements, to magnet field requirements, to coil current requirements, to power supply requirements, to electrical safety requirements. Each level crosses a different engineering discipline. Maintaining traceability across that hierarchy — especially when teams are operating partially in silos — requires active tooling, not passive documentation.
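The flow-down hierarchy above is naturally a directed graph, and the "what breaks if this changes?" question is a graph traversal. A minimal sketch, with wholly invented requirement IDs:

```python
# Minimal sketch of cross-domain requirement flow-down as a directed graph,
# with a downstream-impact query. IDs and the hierarchy are illustrative.
from collections import deque

flows_down_to = {
    "SYS-1: net gain Q >= 2":    ["PLAS-4: plasma performance"],
    "PLAS-4: plasma performance": ["MAG-12: field strength"],
    "MAG-12: field strength":     ["COIL-7: coil current"],
    "COIL-7: coil current":       ["PWR-3: power supply",
                                   "ELEC-9: electrical safety"],
}

def downstream_impact(req_id):
    """Breadth-first walk of everything affected if req_id changes."""
    seen, queue = set(), deque([req_id])
    while queue:
        node = queue.popleft()
        for child in flows_down_to.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return sorted(seen)

print(downstream_impact("MAG-12: field strength"))
# -> ['COIL-7: coil current', 'ELEC-9: electrical safety',
#     'PWR-3: power supply']
```

A static spreadsheet encodes the same links but cannot answer this query on change; keeping the graph machine-readable is what makes impact analysis routine instead of heroic.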

Verification is deferred. For many FOAK requirements, formal verification cannot happen until the machine is built and operating. This means the verification plan is a living document for years, and the relationship between requirements and evidence must be actively managed across a long time horizon.

Legacy requirements tools — IBM DOORS and DOORS Next, Jama Connect, Polarion — are capable of maintaining structured requirement hierarchies and link matrices. What fusion teams report, consistently, is that these tools were built for programs with larger systems engineering staffs, more stable requirement sets, and more established verification methods. The overhead of maintaining a DOORS database — attribute management, change control workflows, reporting — is substantial for a team of twenty engineers trying to move fast on a novel device.

How Emerging Tooling Is Helping

This is where purpose-built, AI-native systems engineering tools are beginning to make a genuine difference. Fusion programs are not large enough to staff a dedicated systems engineering organization the way a major aerospace prime would. They need tools that provide structure without requiring a bureaucracy to operate them.

Flow Engineering, built explicitly for hardware and systems engineering teams, represents the category of tooling that fusion companies are evaluating and adopting for this reason. Its graph-based requirement model — where requirements, design elements, analyses, and verification evidence are nodes in a connected network rather than rows in a database — maps naturally onto the interconnected, physics-driven requirement structure that fusion programs actually have. When a plasma model is revised and a thermal load requirement changes, the connected graph surfaces immediately which downstream requirements, design decisions, and verification plans are affected. That impact propagation is exactly what fusion teams need when requirements churn is unavoidable.

The AI-native layer matters for another reason specific to fusion: a significant fraction of requirements are being written by engineers who are domain experts (plasma physicists, magnet engineers, materials scientists) but not requirements management specialists. Tooling that helps those engineers write well-formed, traceable requirements — suggesting linkages, flagging conflicts, identifying TBDs — provides leverage that compensates for the absence of a large systems engineering support staff.

The deliberate focus of tools like Flow Engineering on hardware and systems teams, rather than trying to serve software development, regulatory compliance, and business process management simultaneously, is an intentional trade-off. Fusion programs need depth in systems engineering support, not breadth across adjacent domains.

Practical Starting Points for Fusion Systems Engineering Teams

For fusion programs currently building their systems engineering foundation, a few principles hold across different confinement approaches and organizational scales.

Model your uncertainty explicitly. Don’t let TBDs disappear into requirement text. Every requirement derived from a physics model with significant uncertainty should carry a formal assumption or TBD tag, linked to the specific model output or data gap that drives it. This makes the requirement set honest and makes closure planning tractable.

Borrow frameworks selectively and document the adaptation. Using 10 CFR 53 principles for design control is sound. Claiming full 10 CFR 53 compliance when your program doesn’t yet meet all its provisions is a documentation liability. Be explicit about what you’re implementing and why, and what you’re deferring.

Treat the requirement graph as a living system. Static RTMs (requirements traceability matrices) are inadequate for high-churn programs. The linkages between requirements, models, design artifacts, and verification evidence need to be actively maintained and machine-readable, not just human-readable.

Size your process to your team. A six-person systems engineering team running a full MIL-STD-499-style program structure will fail under process weight. Start with the minimum viable structure that provides real traceability and safety case support, and scale up as the program grows. Build infrastructure you can actually operate.

Honest Assessment

Fusion is doing something genuinely hard: building engineering programs around systems that don’t fully exist yet, for physics that isn’t completely characterized, under regulatory frameworks that are still being written. The systems engineering response — borrowing selectively from nuclear, aerospace, and defense while adapting for the FOAK context — is pragmatic and directionally correct.

The risks are real. Requirements written under physics uncertainty can calcify into design constraints that are harder to revise than they should be. Traceability, if neglected in the early phases when teams are moving fast, is expensive to reconstruct later. And the gap between what fusion programs know today and what they need to know to license and operate a pilot plant is large enough that requirements management failures could surface as licensing obstacles at exactly the wrong moment.

The teams building this foundation now — carefully, with appropriate humility about what they don’t yet know — are doing the unglamorous work that determines whether fusion energy is a physics achievement or an engineering industry.