Digital Twins for Systems Verification: Where the Practice Actually Stands in 2026
The phrase “digital twin” has been applied to everything from a CAD file that gets updated quarterly to a physics-based, real-time simulation environment that mirrors a flying aircraft. The gap between those two things is not rhetorical — it determines whether your digital twin produces verification evidence that a regulator will accept, or whether it produces a compelling demo that still requires you to run the same physical tests you always ran.
This article is not about the potential of digital twins. It is about the current state of simulation-based verification in production programs: what is actually being accepted as verification evidence, what still requires hardware in the loop or physical article testing, and what the technical and organizational gaps are that prevent broader adoption. It also examines a problem the industry has been slow to confront — how do you maintain traceability between requirements and simulation-based evidence when both the requirements and the models evolve independently?
What Simulation-Based Verification Actually Means
Before discussing what works, it is worth being precise about the activity. Verification confirms that a system or component meets its specified requirements. Traditionally, this means running a test on a physical artifact — a unit test, an integration test, a qualification test — and recording the pass/fail result against a requirement in a traceability matrix.
Simulation-based verification replaces or supplements that physical test with a model. The model simulates the behavior of the system under the specified conditions, and the model’s output becomes the evidence that the requirement is met. The critical distinction is between simulation as analysis (supporting engineering decisions during development) and simulation as verification evidence (formally demonstrating conformance to a requirement). Most programs do a great deal of the former. Relatively few have structured workflows for the latter.
Where Regulators Are Actually Drawing the Line
The standards landscape is more permissive than many engineers assume — but the conditions attached to that permission are significant.
DO-178C and DO-331 (Aerospace Software): DO-331 is the Model-Based Development and Verification supplement to DO-178C. It explicitly permits the use of simulation to verify software behavior, but it also requires that the simulation tool itself be qualified under DO-330 if the tool's output is used as verification evidence without independent review. Tool qualification is not a checkbox: it requires documented tool operational requirements, a tool accomplishment summary, and evidence that the tool produces correct results for the intended use. Many programs underestimate this burden and discover it late.
ARP 4754A and Related FAA/EASA Guidance (System-Level Verification): At the system level, FAA and EASA guidance have progressively acknowledged simulation and iron-bird testing as acceptable means of compliance for certain functional behaviors. Structural qualification, however, remains anchored to physical test requirements under FAR Part 25 for transport aircraft. You cannot certify a wing by simulation alone in 2026. You can use simulation to reduce the number of physical load cases you test, to extrapolate to untested conditions, and to support safety analysis, but the physical ultimate load test is not going away.
ISO 26262 (Automotive Functional Safety): ISO 26262 explicitly lists simulation among recommended verification methods, at the software unit level in Part 6 and for system integration and hardware-software interface testing in Part 4. SIL/HIL (Software-in-the-Loop / Hardware-in-the-Loop) testing is well established and accepted at the unit and integration level. However, for ASIL D functions the standard still expects a combination of methods, and final system validation on the target vehicle hardware is expected. The OEM's functional safety case must justify any simulation-only coverage.
IEC 61508 (Industrial Functional Safety): Similar structure to ISO 26262 — simulation is listed as a technique for verification at lower levels, but the safety case must justify the model’s validity, and physical testing of the final installed configuration is expected for higher SIL levels.
The common thread across all these frameworks: simulation is accepted, tool qualification and model validation are required, and physical testing of the final system configuration is rarely fully replaceable at high safety integrity levels.
Where Simulation Is Genuinely Replacing Physical Test
There are domains where simulation-based verification has measurably reduced physical test burden with regulatory acceptance:
Structural analysis (FEA): Finite element analysis is broadly accepted as verification evidence for non-structural components and secondary structure. For primary structure, simulation supports test matrix reduction: analysis justifies testing fewer load cases by demonstrating similarity to analytically understood cases. The FAA has accepted FEA-based stress analysis as part of the certification basis for decades, provided the analysis methods are validated and the analyst's credentials are documented.
Thermal and fluid dynamics (CFD/thermal): Computational fluid dynamics and thermal simulation are accepted for cooling system verification, airflow analysis, and thermal margin demonstration in electronics. These methods are well-established in mil-aero electronics and in automotive powertrain programs. The validation approach matters — correlated against physical test data from representative articles — but once validated, the simulation can be used to explore a design space that would be impractical to test physically.
EMC pre-screening: Electromagnetic compatibility simulation has become a standard part of automotive and aerospace development workflows, primarily as a pre-screening tool that reduces the number of EMC chamber sessions required. It is not yet used as standalone certification evidence in most jurisdictions, but it is accepted as part of a combined analysis-and-test approach.
Software behavior verification (MIL/SIL/HIL): The progression from Model-in-the-Loop through Software-in-the-Loop to Hardware-in-the-Loop is the clearest example of simulation replacing physical test at lower levels. The automotive industry, in particular, has mature toolchains (MATLAB/Simulink, dSPACE, National Instruments VeriStand) and acceptance by OEMs and Tier 1s that MIL/SIL testing provides valid evidence for software unit verification. HIL testing with production ECUs is broadly accepted as integration verification evidence.
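To make the SIL idea concrete, here is a minimal illustrative sketch in plain Python, deliberately not tied to any of the toolchains above: a stand-in controller runs against a simulated plant, and a unit-level check encodes the requirement being verified. The controller, plant, gains, and requirement ID are all hypothetical; a real program would execute generated or production controller code inside a managed harness.

```python
# Minimal SIL-style unit verification sketch. All names, gains, and the
# requirement ID are hypothetical stand-ins, not a production toolchain.

class PiController:
    """Controller under test: a simple PI law standing in for production code."""
    def __init__(self, kp: float = 2.0, ki: float = 5.0):
        self.kp, self.ki, self.integral = kp, ki, 0.0

    def step(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += self.ki * error * dt
        return self.kp * error + self.integral

def simulate(setpoint: float, t_end: float, dt: float = 0.01) -> list[float]:
    """First-order plant dy/dt = (u - y) / tau driven by the controller under test."""
    controller, tau, y, trace = PiController(), 0.5, 0.0, []
    for _ in range(int(t_end / dt)):
        u = controller.step(setpoint, y, dt)
        y += dt * (u - y) / tau
        trace.append(y)
    return trace

def test_req_4_3_2_settles_within_two_seconds():
    """REQ-4.3.2 (hypothetical): output within 5% of a unit setpoint by t = 2 s."""
    trace = simulate(setpoint=1.0, t_end=2.0)
    assert abs(trace[-1] - 1.0) < 0.05

if __name__ == "__main__":
    test_req_4_3_2_settles_within_two_seconds()
    print("SIL check for REQ-4.3.2 passed")
```

The point of the sketch is the shape of the workflow: the requirement is encoded as an executable check, and the check's result is the evidence artifact that gets linked back to the requirement.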
What Still Requires Physical Test
The list of things that cannot currently be verified by simulation alone is longer than advocates often acknowledge:
- Ultimate structural load testing for primary structure in transport aircraft
- Pyrotechnic and separation system function (flight termination systems, stage separation, crew escape systems)
- Tire and brake system qualification under FAR/CS-25
- Crash testing under automotive regulations and consumer rating protocols (FMVSS, NCAP, Euro NCAP); simulation supports design, but physical tests are mandated
- Crew oxygen system qualification
- Flammability and fire suppression verification
- Seal and leakage testing for pressurized systems
- Production conformity testing (individual article verification)
Some of these are mandated by regulation regardless of simulation fidelity. Others reflect genuine uncertainty about whether models can capture all relevant failure modes under extreme conditions.
The Traceability Problem No One Has Solved Well
Here is the gap that most digital twin discussions skip over: when simulation output replaces or supplements physical test results as verification evidence, the traceability architecture of the program has to accommodate that.
In a traditional physical test program, the traceability chain looks like this: requirement → test procedure → test result → pass/fail closure. The test procedure is a controlled document. The test result is a data record. The link between them is explicit.
When verification evidence comes from a simulation, the chain becomes: requirement → model configuration → simulation run parameters → simulation output → pass/fail interpretation. Every element in that chain can change independently. The model can be updated to fix a bug or incorporate design changes. The requirements can be revised. The simulation parameters can be adjusted. Unless there is a disciplined configuration management system that captures the specific model version, input deck, and parameter set associated with each verification claim, the traceability chain is fragile.
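A minimal sketch of what it takes to pin that chain down, assuming a simple record type; the field names are illustrative, not drawn from any standard or tool:

```python
# Minimal sketch of a simulation-based verification evidence record.
# Every element of the chain is pinned to an explicit version or hash;
# field names are illustrative, not taken from any standard or tool.

from dataclasses import dataclass, field
import datetime

@dataclass(frozen=True)
class SimVerificationRecord:
    requirement_id: str        # e.g. "REQ-4.3.2"
    requirement_rev: str       # revision of the requirement text that was verified
    model_id: str              # which model produced the evidence
    model_version: str         # exact model version, never "latest"
    input_deck_sha256: str     # content hash of the input deck actually run
    run_parameters: dict       # boundary conditions, solver settings, etc.
    output_artifact_uri: str   # where the raw simulation output lives
    verdict: str               # "pass" / "fail" plus interpretation reference
    run_timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )

record = SimVerificationRecord(
    requirement_id="REQ-4.3.2",
    requirement_rev="C",
    model_id="thermal-cabin",
    model_version="3.0",
    input_deck_sha256="9f2c...",   # truncated for display
    run_parameters={"ambient_C": 55, "airflow_kg_s": 0.12},
    output_artifact_uri="plm://sim-results/run-0481",
    verdict="pass",
)
```

If any field in that record cannot be filled in at the moment the verification claim is made, the claim is not reconstructible later.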
This problem is compounded by the fact that simulation models typically live in engineering data management systems (PDM/PLM) while requirements live in requirements management tools — and the two systems are rarely integrated. The result is that when an auditor asks “which version of the model was used to verify Requirement 4.3.2, and what were the boundary conditions?” the answer frequently requires a manual search across multiple disconnected systems.
The programs that are handling this well are treating simulation-based verification artifacts the same way they treat physical test reports: versioned, linked to a specific requirement at a specific revision, and managed in a system that can produce a coherent audit trail. That is a workflow problem as much as a tooling problem.
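In that workflow, the auditor's question from above becomes a lookup rather than a search. A sketch, assuming records shaped like the one above are held in a flat collection (fields hypothetical; a real system would query the PLM and requirements databases):

```python
# Sketch of the audit query: "which model version verified REQ-4.3.2,
# and what were the boundary conditions?" Fields are hypothetical.

def audit_trail(records: list[dict], requirement_id: str) -> list[dict]:
    """Return every evidence record claiming closure of the given requirement."""
    return [
        {
            "requirement_rev": r["requirement_rev"],
            "model_version": f'{r["model_id"]} v{r["model_version"]}',
            "boundary_conditions": r["run_parameters"],
            "evidence": r["output_artifact_uri"],
        }
        for r in records
        if r["requirement_id"] == requirement_id
    ]
```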
How Modern Tools Are Addressing the Gap
Traditional requirements management tools — IBM DOORS, Jama Connect, Polarion — were designed around the document-based, test-result-as-attachment workflow. Linking a simulation artifact to a requirement in these tools typically means attaching a PDF of the analysis report, which captures the output but not the model provenance.
The industry is beginning to see tools that approach this differently. Flow Engineering, built specifically for hardware and systems engineering teams, uses a graph-based data model that can represent the relationship between a requirement, the model that was used to verify it, the specific model version and configuration, and the evidence artifacts produced — as interconnected nodes rather than documents with attachments. That structure makes it possible to query impact: if Requirement 4.3.2 changes, which simulation-based verifications are potentially invalidated? If the thermal model is updated to version 3.1, which requirement closures were based on version 3.0 and may need re-verification?
This is not a theoretical capability — it is the practical problem that programs scaling simulation-based verification are running into. The limitation is that Flow Engineering, like most tools in this space, does not directly execute or manage simulation runs. It manages the requirements, the traceability, and the evidence links. Integration with simulation execution environments (MATLAB/Simulink, ANSYS, dSPACE) requires API connections that each program has to configure for their specific toolchain.
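Tooling aside, the impact queries described above reduce to reachability over a dependency graph. A minimal sketch using a generic directed graph (the networkx library); the schema and node names are hypothetical, and the structure, not the library, is the point:

```python
# Sketch of graph-based impact analysis over requirement/model/evidence nodes.
# Schema and node names are hypothetical.

import networkx as nx

g = nx.DiGraph()
# Edges point from a source of change to the things that depend on it.
g.add_edge("REQ-4.3.2/revC", "verif-run-0481")       # requirement -> verification run
g.add_edge("thermal-cabin/v3.0", "verif-run-0481")   # model version -> verification run
g.add_edge("verif-run-0481", "closure/REQ-4.3.2")    # run -> requirement closure

def impacted_by(graph: nx.DiGraph, changed_node: str) -> set[str]:
    """Everything downstream of a change: runs and closures that may be invalidated."""
    return nx.descendants(graph, changed_node)

# If the requirement is revised, which verifications are potentially invalidated?
print(impacted_by(g, "REQ-4.3.2/revC"))
# -> {'verif-run-0481', 'closure/REQ-4.3.2'}

# If the thermal model moves to v3.1, which closures were based on v3.0?
print(impacted_by(g, "thermal-cabin/v3.0"))
```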
The Organizational Barrier Is Larger Than the Technical One
In programs where simulation-based verification has stalled, the technical fidelity of the models is rarely the primary problem. The barriers are organizational:
Separate functional owners. Simulation capability lives in analysis groups. Verification evidence is owned by systems engineering or V&V teams. These groups often have different tools, different data formats, and different understandings of what constitutes verification evidence versus engineering analysis.
Model validation is treated as a one-time activity. A thermal model validated against test data from a 2023 hardware build is not automatically valid for a redesigned component in 2026. Programs that validated their models once at program start and then continued using them without re-validation are accumulating silent risk.
Configuration management for models is an afterthought. Simulation input decks, mesh files, material property databases, and solver configuration files are often stored on shared drives with informal version control. Reconstructing the exact simulation state that produced a specific result six months later is frequently impossible; a minimal run-manifest approach is sketched below.
Regulatory engagement is deferred. Programs often build an entire simulation-based verification approach and then discover, late in the certification process, that the regulatory authority requires additional physical testing or model qualification evidence they did not plan for. Early engagement with the certifying authority on the means of compliance is consistently the difference between programs that use simulation effectively and programs that run simulation in parallel with physical test without reducing cost.
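On the configuration management barrier above, even a minimal run manifest (content hashes of every input file plus the solver configuration, captured at execution time) makes a result reconstructible months later. A sketch with hypothetical file names:

```python
# Sketch: capture a run manifest at simulation launch so the exact inputs
# behind a result can be reconstructed later. File names are hypothetical.

import datetime
import hashlib
import json
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def write_run_manifest(run_id: str, input_files: list[str], solver_config: dict) -> pathlib.Path:
    manifest = {
        "run_id": run_id,
        "captured_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": {f: sha256_of(pathlib.Path(f)) for f in input_files},
        "solver_config": solver_config,   # solver version, tolerances, mesh settings
    }
    out = pathlib.Path(f"{run_id}.manifest.json")
    out.write_text(json.dumps(manifest, indent=2))
    return out

# Example (hypothetical files), called immediately before launching the solver:
# write_run_manifest("run-0481",
#                    ["cabin_thermal.inp", "materials.db", "mesh_v12.msh"],
#                    {"solver": "example-solver", "version": "2026.1", "tol": 1e-6})
```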
An Honest Assessment
Digital twins for systems verification are real, useful, and increasingly accepted — within bounded domains, with documented model validation, with qualified tools, and with traceability infrastructure that most programs have not yet built. The version of the story where simulation replaces physical test at the system level for safety-critical certification is not the current state and is not likely to be the near-term state for most regulated industries.
The most productive framing is not “simulation versus physical test” but “what is the minimum necessary physical test program, and how does simulation evidence support and justify that minimum?” That framing is achievable now. It requires taking model validation seriously, treating simulation artifacts as first-class verification evidence with full configuration management, and investing in the traceability infrastructure that keeps requirements and models synchronized as both evolve.
The programs that will benefit most from digital twins in the next three years are not the ones building the highest-fidelity physics models. They are the ones building the most disciplined connection between their models and their requirements — and treating that connection as something that has to be maintained throughout the program, not established once and forgotten.