First-of-a-Kind and Under Pressure: How Fusion Energy Companies Are Building Systems Engineering From Scratch
There is no fusion power plant that has ever operated commercially. There is no operational heritage to mine, no lessons-learned database accumulated over decades, no grandfathered-in design precedents. When Commonwealth Fusion Systems (CFS) engineers derive a requirement for their SPARC tokamak’s superconducting magnet system, they are working from first principles and physics models, not from a record of what worked on the last machine.
This is the defining systems engineering condition of commercial fusion: everything is first-of-a-kind, and everything is on fire simultaneously.
The pressure is not merely technical. Fusion companies are racing against investor timelines, against each other, and against the underlying physics uncertainty that makes every milestone a negotiation between what the models say and what the plasma actually does. They are engaging with regulators — primarily the U.S. Nuclear Regulatory Commission — whose fusion-specific guidance is still being written. They are building organizations where the same engineers who are defining requirements are also discovering the science that generates those requirements.
The systems engineering challenge this creates is genuinely novel. Understanding it requires separating what fusion programs share with prior complex engineering programs from what is categorically different — and what practices are emerging in response.
The Heritage Problem
In conventional nuclear fission development, systems engineering has the benefit of an enormous operational and regulatory corpus. Pressurized and boiling water reactor (PWR and BWR) designs have decades of operational data, NRC regulatory guides calibrated to known failure modes, and a vendor ecosystem that has built these systems before. Even advanced fission concepts like Generation IV designs and small modular reactors can anchor their requirements derivation in the fission knowledge base.
Fusion has none of this. The plasma physics governing a burning plasma at fusion-relevant conditions has never been sustained in an engineering environment. The tritium breeding blanket concepts under development at CFS and others have no operating precedents at scale. And the REBCO high-temperature superconducting magnet technology that CFS is pioneering is itself being invented in parallel with the machine it will power.
This creates what systems engineers call a knowledge-requirement co-evolution problem. Normally, requirements are derived from a relatively stable understanding of the operating environment and then held while the design matures. In fusion programs, the understanding of the operating environment — what the plasma does to first-wall materials, what tritium permeation rates look like in real blanket geometries, what the neutron flux actually means for magnet longevity — is itself being generated by the program. Requirements must be held lightly, updated frequently, and traced rigorously so that downstream design choices can be re-evaluated when upstream physics understanding shifts.
The engineering organizations that handle this well are building requirements management practices that treat volatility as a feature of the domain, not a sign of poor planning. The organizations that handle it poorly are discovering that a requirements document written eighteen months ago has silently become incorrect, and that nobody can trace which design decisions depended on its now-outdated assumptions.
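What does treating volatility as a feature look like in tooling terms? One minimal sketch, assuming a hypothetical schema (none of the identifiers below come from any real program): each requirement records the physics assumptions it was derived from, and a routine check flags any requirement whose assumptions have been revised since the requirement was last reviewed.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    """A physics assumption that requirements may depend on."""
    key: str
    description: str
    last_revised: date

@dataclass
class Requirement:
    """A requirement plus the keys of the assumptions it was derived from."""
    req_id: str
    text: str
    last_reviewed: date
    derived_from: list[str]

def stale_requirements(reqs: list[Requirement],
                       assumptions: dict[str, Assumption]) -> list[tuple[str, str]]:
    """Flag (requirement, assumption) pairs where the assumption was
    revised after the requirement was last reviewed."""
    flags = []
    for req in reqs:
        for key in req.derived_from:
            if assumptions[key].last_revised > req.last_reviewed:
                flags.append((req.req_id, key))
    return flags

# Hypothetical example: a heat-flux scaling assumption revised after the
# divertor requirement's last review gets surfaced instead of rotting silently.
assumptions = {
    "PHYS-012": Assumption("PHYS-012", "Edge heat flux width scaling", date(2024, 9, 1)),
}
reqs = [Requirement("REQ-DIV-003", "Divertor shall tolerate 10 MW/m^2 steady state",
                    date(2024, 3, 15), ["PHYS-012"])]
print(stale_requirements(reqs, assumptions))  # [('REQ-DIV-003', 'PHYS-012')]
```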
Coupled Subsystems and the Cascade Problem
Fusion machines are among the most tightly coupled engineering systems ever attempted. The plasma physics is not separable from the magnet design, which is not separable from the thermal hydraulics, which is not separable from the tritium handling system, which feeds back into the plasma fuel cycle. A change in plasma-facing material selection ripples into tritium inventory calculations, which affects confinement building design, which touches NRC licensing basis documentation.
This coupling creates a cascade problem for requirements management. In a loosely coupled system such as a satellite, where the power subsystem and the attitude control subsystem interact at well-defined interfaces, requirements can be partitioned and managed relatively independently. Interface control documents define those boundaries, and the ICD itself is the primary artifact managing cross-subsystem dependencies.
In a fusion machine, the interfaces are not clean. The magnetic field topology is simultaneously a structural constraint on the vacuum vessel, a driver of plasma stability, a boundary condition for the breeding blanket geometry, and a thermal load on the superconducting coils. There is no interface document that adequately captures that coupling — it is a graph of physical dependencies, not a table.
This is precisely why the most technically sophisticated fusion programs are moving toward graph-structured requirements and architecture models rather than hierarchical document trees. The dependency relationships between requirements are not metadata. They are the engineering knowledge. When you change the reference plasma Q-factor, you need to know immediately which requirements downstream have become inconsistent — and you need to know that in the context of a design review, not a month later when someone reads the updated document.
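To make the contrast concrete, here is a toy version of that graph query: requirements as nodes in a directed dependency graph, and impact assessment as a breadth-first walk outward from the changed item. The node names and the Q-factor chain are illustrative only, not drawn from any actual program model.

```python
from collections import deque

# Toy dependency graph: edges point from an upstream item to the
# requirements derived from it. All names are invented.
depends_on_me = {
    "PLASMA-Q":         ["REQ-MAG-FIELD", "REQ-AUX-HEAT"],
    "REQ-MAG-FIELD":    ["REQ-COIL-STRESS", "REQ-CRYO-LOAD"],
    "REQ-AUX-HEAT":     ["REQ-POWER-SUPPLY"],
    "REQ-COIL-STRESS":  [],
    "REQ-CRYO-LOAD":    [],
    "REQ-POWER-SUPPLY": [],
}

def impacted(changed: str, graph: dict[str, list[str]]) -> set[str]:
    """Breadth-first walk: everything downstream of the changed item
    is potentially inconsistent and needs review."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# Changing the reference plasma Q-factor flags the entire downstream set
# immediately, in review, rather than a month later.
print(sorted(impacted("PLASMA-Q", depends_on_me)))
# ['REQ-AUX-HEAT', 'REQ-COIL-STRESS', 'REQ-CRYO-LOAD', 'REQ-MAG-FIELD', 'REQ-POWER-SUPPLY']
```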
Helion Energy, whose approach targets a pulsed field-reversed configuration rather than a tokamak, faces a version of this problem that is particularly acute because their plasma concept is inherently transient. Requirements for their magnet compression system are coupled to plasma heating requirements, which are coupled to energy recovery efficiency targets, which are coupled to their net electricity demonstration timeline. Tracing that chain — and knowing when a physics test result invalidates an assumption somewhere in it — requires tools that can answer questions across the entire requirements graph, not just within a document section.
Regulation in Progress
The U.S. NRC’s fusion-specific regulatory framework is actively under development. In 2023 the Commission decided to regulate near-term fusion energy systems under its byproduct-material licensing framework rather than under the reactor-oriented 10 CFR Part 53 rulemaking, and the fusion-specific rule language and guidance are still being written. The framework is not settled, and fusion companies are engaging with the NRC in a mode that is closer to co-development than compliance.
This creates a requirements management situation that is unusual even by nuclear standards. The regulatory basis — the set of safety requirements that flow from NRC expectations — is not fixed. Companies must maintain requirements traceability to a licensing basis that is itself evolving through interactions with the regulator. A design feature that is compliant today may require re-justification based on updated NRC guidance issued after a public comment period.
TAE Technologies, whose beam-driven field-reversed configuration ultimately targets hydrogen-boron fuel with a far lower neutron yield than deuterium-tritium approaches, is navigating a regulatory environment that has even less precedent than tokamak concepts. Their licensing strategy is an active part of their systems engineering process: the regulatory engagement generates requirements that flow into their architecture the same way a customer requirement would in a defense program.
The implication for requirements management tooling is significant. Licensing basis documentation must be traceable to design requirements, to verification plans, and ultimately to the physical test evidence that substantiates the safety case. That traceability chain must be maintainable as the NRC guidance evolves and as design choices are made or changed. Organizations managing this in disconnected documents and spreadsheets are accumulating traceability debt that will become very expensive during license application preparation.
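Read operationally, traceability debt is a checkable property. In the invented schema below (real tools carry far more metadata), every licensing-basis item must reach test evidence through an unbroken chain of requirements and verification activities; anything that cannot is debt, surfaced long before license application preparation.

```python
from dataclasses import dataclass, field

@dataclass
class TraceItem:
    """A node in the licensing traceability chain. Schema is invented
    for illustration."""
    item_id: str
    kind: str                      # 'licensing_basis' | 'requirement' | 'verification' | 'evidence'
    traces_to: list[str] = field(default_factory=list)

def traceability_debt(items: dict[str, TraceItem]) -> list[str]:
    """Return licensing-basis items that do not reach 'evidence'
    through the traces_to chain: the gaps that get expensive at
    license application time."""
    def reaches_evidence(item_id: str, seen: set[str]) -> bool:
        if item_id in seen:
            return False
        seen.add(item_id)
        item = items[item_id]
        if item.kind == "evidence":
            return True
        return any(reaches_evidence(t, seen) for t in item.traces_to)

    return [i.item_id for i in items.values()
            if i.kind == "licensing_basis" and not reaches_evidence(i.item_id, set())]

items = {
    "LB-TRIT-01":  TraceItem("LB-TRIT-01", "licensing_basis", ["REQ-CONF-02"]),
    "REQ-CONF-02": TraceItem("REQ-CONF-02", "requirement", ["VER-CONF-02"]),
    "VER-CONF-02": TraceItem("VER-CONF-02", "verification", []),  # no evidence attached yet
}
print(traceability_debt(items))  # ['LB-TRIT-01']
```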
What Traditional Nuclear Systems Engineering Gets Right
The fission industry’s systems engineering discipline, developed through NUREG documents, IEEE standards, and hard-won operational experience, provides real value to fusion programs — and the best fusion engineering organizations are drawing on it deliberately rather than dismissing it as irrelevant.
The rigor of nuclear safety classification is directly applicable. CFS and others are implementing quality assurance programs that borrow from 10 CFR 50 Appendix B frameworks, applying safety significance grading to their systems and components and using that grading to drive verification requirements. The instinct to ask “what is the safety function of this component and what failure modes matter” is exactly the right instinct in a fusion environment, even when the specific failure modes are novel.
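In tooling terms, a graded approach reduces to a mapping the requirements system can enforce: the safety classification of an item sets the minimum verification activities attached to it. The classes and activities below are placeholders, loosely in the spirit of graded QA programs, not any company’s actual grading scheme.

```python
from enum import Enum

class SafetyClass(Enum):
    """Placeholder safety significance grading."""
    SAFETY = 1         # performs a credited safety function
    SAFETY_IMPACT = 2  # failure could degrade a safety function
    NON_SAFETY = 3

# Minimum verification rigor by class; the entries are illustrative.
REQUIRED_VERIFICATION = {
    SafetyClass.SAFETY: ["independent design review", "qualification test",
                         "inspection", "analysis with validated models"],
    SafetyClass.SAFETY_IMPACT: ["design review", "acceptance test"],
    SafetyClass.NON_SAFETY: ["engineering judgment or test"],
}

def missing_verification(item_class: SafetyClass, planned: set[str]) -> list[str]:
    """List verification activities the grading demands but the plan lacks."""
    return [v for v in REQUIRED_VERIFICATION[item_class] if v not in planned]

print(missing_verification(SafetyClass.SAFETY, {"qualification test", "inspection"}))
# ['independent design review', 'analysis with validated models']
```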
Defense-in-depth as an architectural principle is also highly relevant. Fusion machines, particularly those operating with tritium, require multiple independent barriers between the tritium inventory and the environment. That architectural requirement flows into systems engineering in ways that are structurally similar to fission confinement requirements, even if the specific barrier designs differ.
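One way to make that barrier principle checkable rather than aspirational, sketched here with entirely invented pathway and barrier names: enumerate the credible release pathways and verify that each one crosses a minimum number of independent barriers.

```python
# Each release pathway lists the independent barriers between the tritium
# inventory and the environment along that path. Names are invented.
RELEASE_PATHWAYS = {
    "blanket leak":   ["primary loop boundary", "process cell", "confinement building"],
    "glovebox spill": ["glovebox", "room detritiation", "confinement building"],
    "stack release":  ["process piping", "stack filtration"],
}

MIN_INDEPENDENT_BARRIERS = 3

def barrier_violations(pathways: dict[str, list[str]], n: int) -> list[str]:
    """Pathways with fewer than n independent barriers."""
    return [p for p, barriers in pathways.items() if len(set(barriers)) < n]

print(barrier_violations(RELEASE_PATHWAYS, MIN_INDEPENDENT_BARRIERS))
# ['stack release'] -- flagged for architectural attention
```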
Formal hazard analysis — FMEA, HAZOP, fault tree analysis — maps naturally from fission to fusion. The specific failure modes are different, but the discipline of systematically enumerating how systems can fail and how failures propagate is universal. The fusion programs that are building this practice early, rather than treating it as a late-stage pre-licensing activity, are building organizational knowledge that will compound as their designs mature.
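The machinery transfers directly. A fault tree, for instance, is the same AND/OR combination of basic-event probabilities whether the basic events are fission or fusion failure modes. A minimal sketch with made-up numbers, assuming independent basic events:

```python
from dataclasses import dataclass

@dataclass
class Gate:
    """A fault tree node: 'AND' requires all children to fail,
    'OR' fails if any child fails, 'BASIC' carries a probability."""
    kind: str                 # 'AND' | 'OR' | 'BASIC'
    p: float = 0.0            # used only for BASIC events
    children: tuple = ()

def probability(node: Gate) -> float:
    """Top-event probability under the independence assumption."""
    if node.kind == "BASIC":
        return node.p
    child_ps = [probability(c) for c in node.children]
    if node.kind == "AND":
        out = 1.0
        for p in child_ps:
            out *= p
        return out
    # OR gate: 1 minus the product of survival probabilities
    out = 1.0
    for p in child_ps:
        out *= (1.0 - p)
    return 1.0 - out

# Made-up example: a release requires both a process boundary failure
# (itself a weld OR seal failure) and detritiation system unavailability.
tree = Gate("AND", children=(
    Gate("OR", children=(Gate("BASIC", p=1e-3), Gate("BASIC", p=5e-4))),
    Gate("BASIC", p=1e-2),
))
print(f"top event probability ~ {probability(tree):.2e}")  # ~1.50e-05
```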
Where Fusion Is Inventing New Approaches
Where traditional nuclear practice falls short for fusion is primarily in pace and in its tolerance for requirements volatility.
Traditional nuclear systems engineering was optimized for a regulatory environment that rewarded design stability. Once a design was submitted for licensing review, changes were expensive — they required license amendments, NRC re-review, and potential public comment periods. The entire workflow was optimized for a design freeze model where you defined requirements carefully upfront, froze the design, and then built it.
Fusion programs cannot operate this way. Their physics understanding is evolving. Their materials and manufacturing processes are maturing. Their regulatory framework is not settled. They need a requirements process that supports disciplined iteration — where requirements can be updated when the underlying physics models improve, where change impacts can be assessed rapidly across the entire architecture, and where the history of why a requirement was set the way it was remains recoverable years later.
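Keeping "why was this requirement set the way it was" recoverable years later is largely a data-model decision: revisions are appended with their rationale rather than overwritten. A hypothetical sketch, with invented identifiers and values:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Revision:
    """One change to a requirement, with the reasoning preserved."""
    on: date
    text: str
    rationale: str   # e.g. which model run, test result, or NRC interaction drove it

@dataclass
class VersionedRequirement:
    """Append-only history: the current value is just the last revision,
    but every prior value and its rationale stays recoverable."""
    req_id: str
    history: list[Revision] = field(default_factory=list)

    def revise(self, on: date, text: str, rationale: str) -> None:
        self.history.append(Revision(on, text, rationale))

    @property
    def current(self) -> Revision:
        return self.history[-1]

req = VersionedRequirement("REQ-TF-007")
req.revise(date(2023, 5, 2), "TF coil peak field >= 20 T",
           "Initial derivation from 2023 plasma scenario model")
req.revise(date(2024, 11, 9), "TF coil peak field >= 21.5 T",
           "Revised confinement scaling after campaign test data")
print(req.current.text, "|", req.current.rationale)
```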
This is pushing leading fusion programs toward practices borrowed more from advanced aerospace and automotive systems engineering than from traditional nuclear. Model-based systems engineering approaches, where the architecture is a computable model rather than a document collection, allow requirements to be connected to design parameters in ways that enable automated impact assessment. AI-assisted tools that can help engineers query the requirements graph — “what requirements depend on this plasma assumption?”, “which subsystems are affected by this material property change?” — are moving from experimental to operational at several programs.
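The computable-model idea can be shown in miniature: express requirements as machine-checkable constraints over named design parameters, so that a parameter change immediately re-evaluates every requirement that reads it. All names and values here are invented for illustration.

```python
# Design parameters of a hypothetical machine model.
params = {"peak_field_T": 20.0, "coil_temp_K": 20.0, "neutron_fluence": 1.0e22}

# Each requirement names the parameters it reads and a pass/fail check.
requirements = {
    "REQ-MAG-01":  (("peak_field_T",),    lambda p: p["peak_field_T"] >= 20.0),
    "REQ-CRYO-02": (("coil_temp_K",),     lambda p: p["coil_temp_K"] <= 22.0),
    "REQ-LIFE-03": (("neutron_fluence",), lambda p: p["neutron_fluence"] <= 3.0e22),
}

def affected_and_failing(changed_param: str) -> list[str]:
    """Requirements that read the changed parameter and now fail."""
    return [rid for rid, (reads, check) in requirements.items()
            if changed_param in reads and not check(params)]

params["peak_field_T"] = 18.5   # a hypothetical magnet derating
print(affected_and_failing("peak_field_T"))  # ['REQ-MAG-01']
```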
Tools like Flow Engineering, built specifically for hardware and systems engineering teams dealing with complex, interdependent requirements, are finding traction in fusion programs precisely because the graph-based architecture underlying such tools matches the actual structure of fusion engineering problems. When your requirements are genuinely a network of coupled constraints rather than a decomposable hierarchy, the tool needs to reflect that structure. The ability to trace impacts across the graph — and to do it quickly enough to inform a design decision in a review rather than a week later — is not a convenience feature. It is operationally necessary.
The Foundation Being Built Now
What makes this moment critical is not just the technical challenge of building fusion machines. It is that the systems engineering culture and practice being established at CFS, Helion, TAE, and the dozen other serious commercial fusion programs right now will govern how the entire industry operates for the next two decades.
First-of-a-kind machine programs, when they succeed, become the template. The engineering organization that built SPARC will build ARC. The requirements management practices that worked for SPARC — or failed — will be the default for ARC. The tooling choices, the review processes, the traceability conventions: these will propagate forward in time through the people who learned them on the first machine.
This is an argument for taking systems engineering foundations seriously right now, even when the schedule pressure is intense and the temptation is to defer rigor in favor of speed. The programs that build traceable, model-based, impact-aware requirements management into their engineering process at the concept and preliminary design stage will compound that investment as they scale. The programs that accumulate traceability debt and requirements document rot will pay for it — in redesign cost, in licensing delays, in the organizational friction of trying to answer “why did we make this decision?” from a document corpus that cannot answer the question.
The physics is hard. The engineering is harder. The systems engineering discipline to manage it all coherently while the science is still evolving — that is the challenge that will quietly determine which fusion programs succeed and which ones discover their engineering debt at the worst possible time.
The fusion industry has a narrow window to establish that discipline. The organizations building it now are not just doing good engineering. They are making a bet that good engineering practice is itself a competitive advantage — and in first-of-a-kind machine development, that bet has historically proven correct.