What Does a Good Verification Plan Look Like for a Novel Technology With No Prior Flight Heritage?

A program with flight heritage can point to a previous vehicle and say: this component performed within specification for 47 missions under these environmental conditions. That statement does work. It satisfies regulatory reviewers, shortens test matrices, and justifies similarity-based arguments for derivative designs. Novel technology programs — a tiltrotor eVTOL with a new distributed electric propulsion architecture, a reusable upper stage with a regeneratively cooled aerospike, a crewed spacecraft using composite pressure vessels for the first time — cannot make that statement. They have to build the same credibility from nothing, on a commercial schedule, under certification frameworks that were not originally written with them in mind.

This is not an unsolvable problem. It is a tractable engineering and documentation challenge, and programs that treat it as such tend to close. Programs that treat it as a bureaucratic obstacle to manage around tend to discover their verification gaps late, when changes are expensive and schedules are no longer recoverable.

What “Verification Basis” Actually Means Without Heritage

For a mature program, the verification basis is largely inherited: previously accepted test reports, type certificate data sheets, qualified parts lists, and precedent from similar certified systems. For a novel technology program, the verification basis must be constructed deliberately. It has four components.

Hazard-driven requirements. Before you can verify anything, you need requirements that trace to identified hazards. For eVTOL and commercial space, this means conducting a system-level hazard analysis — typically a Functional Hazard Assessment (FHA) for aviation-adjacent programs and a Preliminary Hazard Analysis (PHA) for space — early enough that the outputs actually drive requirement generation. The verification plan inherits its structure from this analysis. Requirements without hazard traceability are requirements the certifier will ask about.
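To make the traceability gap concrete, here is a minimal sketch of the kind of check a program can run over its requirements set. It assumes a simple in-memory representation; the Requirement class, field names, and IDs are illustrative, not taken from any particular tool or standard.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    hazard_ids: list[str] = field(default_factory=list)  # trace links to FHA/PHA entries

def untraced_requirements(requirements: list[Requirement]) -> list[str]:
    """Return the IDs of requirements with no hazard traceability.

    These are the requirements a certifier will ask about first.
    """
    return [r.req_id for r in requirements if not r.hazard_ids]

# One traced requirement, one orphan.
reqs = [
    Requirement("SYS-014", "No single propulsion unit failure shall ...", ["FHA-HAZ-003"]),
    Requirement("SYS-102", "The battery management system shall ..."),
]
print(untraced_requirements(reqs))  # ['SYS-102']
```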

Explicit assumption registers. Novel technology programs operate on assumptions that heritage programs have already converted to data. Every assumption in your verification plan — about material behavior, about load combinations that have never been tested, about failure mode independence in a new architecture — should be named, owned, and tracked. The assumption register is not a compliance artifact. It is the list of things that could invalidate your verification argument, and it should be reviewed at every major milestone.
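One way to make that concrete is to hold each assumption as a structured record rather than a sentence in a report. The sketch below is a minimal, assumed representation (field names are illustrative) of what "named, owned, and tracked" can look like in practice.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    assumption_id: str
    statement: str             # e.g. an assumed load combination or material behavior
    owner: str                 # a named engineer, not a team
    affected_reqs: list[str]   # requirements whose verification depends on this assumption
    basis: str                 # why the assumption is currently believed
    invalidation_trigger: str  # what evidence or event would overturn it
    review_by: date            # the milestone at which it must be re-examined

def due_for_review(register: list[Assumption], milestone_date: date) -> list[Assumption]:
    """Assumptions that must be re-examined at or before the given milestone."""
    return [a for a in register if a.review_by <= milestone_date]
```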

Method-to-requirement assignment. Each requirement gets a verification method: test (T), analysis (A), inspection (I), or demonstration (D). This assignment is not decoration. It is a commitment that drives procurement, schedule, and cost. For novel technologies, the method selection requires explicit justification when you are not using test — because test is the default that regulators understand.

Closure criteria. A verification activity is not complete when the report is written. It is complete when the evidence satisfies a pre-defined acceptance criterion that is traceable to the requirement. Programs that define closure criteria after the fact spend significant schedule time in negotiations with their certification authority.
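Method assignment and pre-defined closure criteria fit naturally into one record. The sketch below assumes a simplified model (the class names and fields are illustrative): a non-test method is invalid without a written rationale, and closure is a reviewer's finding against the criterion, not the existence of a report.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Method(Enum):
    TEST = "T"
    ANALYSIS = "A"
    INSPECTION = "I"
    DEMONSTRATION = "D"

@dataclass
class VerificationItem:
    req_id: str
    method: Method
    rationale: Optional[str]     # required whenever the method is not TEST
    closure_criterion: str       # written before any evidence exists
    evidence_meets_criterion: Optional[bool] = None  # set at review, not by the analyst

    def assignment_is_valid(self) -> bool:
        """Non-test methods need an explicit written rationale."""
        return self.method is Method.TEST or bool(self.rationale)

    def is_closed(self) -> bool:
        """Closure means the evidence satisfies the pre-defined criterion."""
        return self.evidence_meets_criterion is True
```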

Similarity Arguments: Where They Work and Where They Break

Similarity is a legitimate verification technique. It is also the technique most likely to be applied carelessly, because it feels like a shortcut and is sometimes treated as one.

A valid similarity argument has three parts: a reference item (the thing you are claiming similarity to), a comparison scope (the specific characteristics being compared — geometry, material, operating environment, load spectrum), and a bounded delta (an explicit statement of how the new item differs and why those differences do not invalidate the comparison). The third part is what most programs underinvest in.
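Expressed as a structure, the three parts look roughly like the sketch below. This is an assumed, generic representation rather than a prescribed format; the point is that an argument with an empty or unbounded delta list is not yet an argument.

```python
from dataclasses import dataclass

@dataclass
class Delta:
    characteristic: str        # e.g. "thermal cycling profile"
    difference: str            # how the new item differs from the reference
    bound: str                 # quantitative bound showing the delta is acceptable
    supporting_evidence: str   # targeted test, analysis, or literature behind the bound

@dataclass
class SimilarityArgument:
    reference_item: str          # the previously qualified item being claimed as similar
    comparison_scope: list[str]  # geometry, material, operating environment, load spectrum, ...
    deltas: list[Delta]          # every known difference, each with a bound

    def is_complete(self) -> bool:
        """Reject arguments that name no deltas or leave any delta unbounded."""
        return bool(self.deltas) and all(d.bound and d.supporting_evidence for d in self.deltas)
```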

For eVTOL, a motor controller that uses an IGBT topology with 15 years of automotive qualification data is not automatically similar to an aviation-qualified inverter. The thermal cycling profile is different. The altitude-related derating is different. The failure mode consequences are different. A similarity argument that ignores these deltas will not survive DER or ASA review. One that explicitly analyzes each delta — with quantitative bounds where possible — can be accepted, particularly when supported by targeted environmental testing that confirms behavior at the boundaries.

For commercial space, structural similarity arguments must address load path differences and manufacturing process equivalence. A composite overwrap pressure vessel (COPV) that shares a liner material with a previously qualified design is not similar if the winding angle, proof-to-burst ratio, or acceptance test pressure is different. Columbia and Challenger both carried lessons about the limits of similarity arguments under schedule pressure. Those lessons are embedded in current FAA AST and NASA program requirements, and reviewers know them.

The practical rule: use similarity to bound the test matrix, not to eliminate it. A strong similarity argument reduces the scope of required testing. It rarely eliminates the need for testing entirely on a first-of-kind system.

Modeling and Simulation as Verification: The Conditions That Make It Acceptable

Regulators — the FAA, EASA, and FAA AST — do accept modeling and simulation (M&S) as a verification method. The conditions under which they accept it are specific, and programs that miss those conditions lose significant schedule when the acceptance question comes up at PDR or CDR.

The core requirement is model credibility, established through a documented Verification and Validation (V&V) process for the model itself. This is distinct from the V&V of the product. You are validating the model’s fitness for the specific purpose it is being used for.

Validation scope must match intended use. A CFD model validated against wind tunnel data at sea-level conditions has not been validated for high-altitude relight prediction. A structural FEM validated against coupon-level test data has not been validated for full-assembly dynamics with fastener flexibility. The validation coverage map — what conditions and outputs have been validated, to what accuracy, with what confidence — must be documented and must encompass the conditions in the requirement being verified.
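A coverage map can be checked mechanically. The sketch below assumes a simple representation (the classes and parameter names are illustrative): a verification condition is covered only if every one of its parameters falls inside a validated range.

```python
from dataclasses import dataclass

@dataclass
class ValidatedRange:
    parameter: str     # e.g. "altitude_m", "mach"
    low: float
    high: float
    accuracy: float    # demonstrated model accuracy over this range, in output units

def condition_is_covered(coverage_map: list[ValidatedRange], condition: dict[str, float]) -> bool:
    """True only if every parameter of the verification condition lies inside a validated range.

    A model validated at sea level does not cover a high-altitude condition;
    this check makes that gap explicit instead of implicit.
    """
    by_param = {v.parameter: v for v in coverage_map}
    for name, value in condition.items():
        validated = by_param.get(name)
        if validated is None or not (validated.low <= value <= validated.high):
            return False
    return True

coverage = [ValidatedRange("altitude_m", 0.0, 3000.0, accuracy=0.05)]
print(condition_is_covered(coverage, {"altitude_m": 12000.0}))  # False: outside the validated envelope
```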

Independent validation data. The data used to validate the model cannot be the data the model was tuned to. If your aerodynamic model was calibrated against internal wind tunnel data and you are then using that same data set as validation evidence, reviewers will reject the argument. Independent sources — published literature, government test data, third-party test campaigns — carry more weight.
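The independence rule itself is simple enough to state as a check, assuming calibration and validation data sets are tracked by identifier (the identifiers here are illustrative):

```python
def validation_is_independent(calibration_sets: set[str], validation_sets: set[str]) -> bool:
    """Validation evidence must not overlap the data the model was tuned against."""
    return bool(validation_sets) and validation_sets.isdisjoint(calibration_sets)

# Reusing the calibration campaign as validation evidence fails the check.
print(validation_is_independent({"wt_campaign_2022"}, {"wt_campaign_2022"}))  # False
```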

Sensitivity analysis and uncertainty quantification. For every M&S-based verification finding, the plan should document a sensitivity analysis showing how the output changes with uncertainty in key inputs. If the model output passes the requirement by a margin smaller than the model’s quantified uncertainty, that is not a verification closure. It is an uncertainty management problem that needs to be resolved before closure.
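As a sketch of that rule, assuming a requirement of the form "predicted value must stay below a limit" and a single symmetric uncertainty band (real cases will involve directional and combined uncertainties):

```python
def closure_supported(requirement_limit: float, predicted_value: float,
                      model_uncertainty: float) -> bool:
    """The margin must exceed the quantified model uncertainty, not merely be positive."""
    margin = requirement_limit - predicted_value
    return margin > model_uncertainty

# Passing by 3 units with +/-5 units of model uncertainty is not a closure.
print(closure_supported(requirement_limit=100.0, predicted_value=97.0, model_uncertainty=5.0))  # False
```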

Regulatory pre-coordination. For FAA Part 23 or Part 27/29 equivalence paths, and for FAA AST license applications, M&S-heavy verification plans benefit substantially from early coordination with the certification authority — issue papers for FAA, specific means of compliance for EASA, or equivalent mechanisms. Presenting M&S as a primary verification method for a novel system without prior agreement on the validation standard is a common source of late-program delays.

The Documentation Trail Certification Bodies Actually Need

The failure mode for non-test verification is almost never technical. The aerodynamics team may be entirely correct that their validated CFD model demonstrates compliance. The failure mode is that the documentation chain cannot be reconstructed by a reviewer who was not in the room when the decisions were made.

A complete verification record for a non-test method includes the following (a minimal record structure is sketched after the list):

  • The requirement text and its hazard traceability
  • The verification method selected and the rationale for selecting it over test
  • The model or analysis pedigree: software version, configuration, key inputs, output files
  • The validation evidence: what data, what conditions, what accuracy was demonstrated
  • Sensitivity and uncertainty analysis results
  • The closure criterion and the compliance finding against it
  • Reviewer sign-off, including independent technical review where the certification plan requires it
  • Configuration management — the analysis must be re-runnable from the archived inputs
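Captured as a record structure, the list above looks roughly like the sketch below. The field names are illustrative, not a mandated schema; what matters is that every field is populated and version-controlled before closure is claimed.

```python
from dataclasses import dataclass, field

@dataclass
class NonTestVerificationRecord:
    req_id: str
    req_text: str
    hazard_trace: list[str]
    method: str                       # "analysis", "simulation", "similarity", ...
    method_rationale: str             # why this method instead of test
    tool_and_version: str             # solver or tool name and pinned release
    configuration: str                # model configuration / revision identifier
    input_files: list[str]            # archived, version-controlled inputs
    output_files: list[str]
    validation_evidence: str          # what data, what conditions, what demonstrated accuracy
    sensitivity_and_uncertainty: str
    closure_criterion: str
    compliance_finding: str
    reviewer_signoffs: list[str] = field(default_factory=list)

    def is_reconstructable(self) -> bool:
        """A reviewer must be able to re-run the analysis from the archived inputs."""
        return bool(self.input_files) and bool(self.tool_and_version) and bool(self.reviewer_signoffs)
```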

This chain must be auditable. If a DER, ASA, or other certification authority reviewer asks to reconstruct the verification finding two years after closure, the record must support that reconstruction. Programs that keep this documentation in engineering notebooks, email threads, and shared drives with no version control routinely fail this test.

Cost and Schedule Implications of Method Selection

Verification method selection is a cost and schedule decision with long-term consequences, and programs that treat it as a purely technical decision overlook significant risk exposure.

Hardware test is expensive and schedule-constrained. For novel technologies, test articles must often be built before the design is mature, creating the risk of testing something that does not represent the final configuration. Each test event has a procurement lead time, a facility reservation, a test readiness review, and a data reduction cycle. A complex environmental qualification test campaign for an avionics chassis can take 18 months from requirement to closure.

Analysis and M&S can be faster and cheaper per verification point — but only when the model infrastructure is already in place and validated. Building that infrastructure from scratch for a novel technology is itself a significant program investment. The cost savings from avoiding hardware test accrue to programs that invest early in model development and validation; they do not accrue automatically.

The hidden cost of method selection is late discovery. A test campaign that fails reveals a noncompliance while hardware is still in production and design changes are possible. An analysis-based verification finding that is rejected by the certifier at a late-stage review can require a hardware test campaign on a schedule that no longer has margin. Verification method decisions made without a rigorous risk assessment of the “what if this method is not accepted” scenario are decisions that defer risk without eliminating it.

The practical recommendation: for safety-critical requirements on novel technologies, default to test as the primary method unless you have a documented, pre-coordinated basis for an alternative. Use analysis and M&S to reduce the test matrix — fewer test points, narrower parameter sweeps, targeted rather than comprehensive — but retain test as the primary evidence source for the requirements with the highest consequence of failure.

How Modern Tools Assign, Track, and Close Verification

The operational challenge of a large verification program is not developing the right strategy. It is executing against that strategy at scale — across hundreds or thousands of requirements, with multiple verification methods per requirement, across a team where not everyone has visibility into the overall status.

Flow Engineering (flowengineering.com) is built specifically for this operational challenge in hardware and systems engineering programs. The platform structures requirements in a graph-based model rather than a document, which means verification method assignments live at the requirement level — not in a separate spreadsheet or a linked Word document that drifts out of sync.

For programs without heritage, this matters in a specific way: when you assign a verification method to a requirement and document the rationale for that method, the rationale is linked to the requirement itself. When a requirement changes — because the design evolved, or because a hazard analysis update identified a new failure mode — the method assignment and its rationale are flagged for review. This is the mechanism that prevents a common failure mode in novel technology programs: verification methods that were assigned at the beginning of the program but never updated when the requirements they were assigned to changed.
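The underlying mechanism is simple to illustrate. The sketch below is generic, assumed logic for illustration only, not Flow Engineering's implementation or API: any method assignment made against an older revision of its requirement gets flagged for review.

```python
from dataclasses import dataclass

@dataclass
class MethodAssignment:
    req_id: str
    req_revision: int      # the requirement revision the method and rationale were written against
    method: str
    rationale: str
    needs_review: bool = False

def flag_stale_assignments(current_revisions: dict[str, int],
                           assignments: list[MethodAssignment]) -> list[str]:
    """Flag assignments whose requirement has been revised since the method was assigned."""
    stale = []
    for a in assignments:
        if current_revisions.get(a.req_id, a.req_revision) != a.req_revision:
            a.needs_review = True
            stale.append(a.req_id)
    return stale
```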

Flow Engineering also provides real-time verification status at the program level. At any point in the program, you can query which requirements are verified-by-test with closed test reports, which are verified-by-analysis with approved analysis reports, and which are open — and filter that view by subsystem, criticality level, or verification phase. For commercial space and eVTOL programs preparing for FAA AST license reviews or FAA airworthiness certification reviews, this is the difference between walking into a milestone review with a current, auditable compliance matrix and walking in with a spreadsheet someone updated last week.
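As a generic illustration of that kind of status query (again an assumed sketch, not Flow Engineering's API or data model):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VerificationStatus:
    req_id: str
    subsystem: str
    criticality: str   # e.g. "catastrophic", "hazardous", "major"
    method: str        # "test", "analysis", "inspection", "demonstration"
    closed: bool

def open_items(statuses: list[VerificationStatus],
               subsystem: Optional[str] = None,
               criticality: Optional[str] = None) -> list[str]:
    """Requirements still open, optionally filtered by subsystem or criticality level."""
    return [s.req_id for s in statuses
            if not s.closed
            and (subsystem is None or s.subsystem == subsystem)
            and (criticality is None or s.criticality == criticality)]
```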

The deliberate scope of the platform — requirements, verification, and traceability — means it is not a PLM system or a test management system. Programs with complex test facility scheduling and test data reduction workflows will integrate Flow Engineering with dedicated test management tooling. That integration is a feature of the architecture, not a gap: requirements ownership and verification method assignment live in Flow Engineering, and test execution evidence is referenced from it.

Practical Starting Points for Novel Technology Programs

If you are standing up a verification program for a first-of-kind system, the sequence that produces the most defensible result is:

First, complete your hazard analysis before you finalize requirements. The verification basis you are building traces to hazards. If the hazard analysis is incomplete, the requirements are incomplete, and the verification plan is incomplete by construction.

Second, assign verification methods at the requirement level, with written rationale, as part of the requirements baseline — not as a downstream activity. Method selection affects design (what test articles do you need to build?), schedule (when do you need test facilities?), and budget (how much model validation work is required?). These are program-level decisions that need to be made before PDR, not after.

Third, coordinate non-test verification methods with your certification authority before you commit to them. An issue paper or equivalent agreement is worth more than a perfectly documented analysis that surprises a reviewer at CDR.

Fourth, build your documentation infrastructure before you need it. The audit trail for M&S-based verification has to exist from the beginning. Trying to reconstruct model pedigree and validation evidence after the fact is an avoidable and expensive problem.

Fifth, track verification status in a system that is live, linked to requirements, and visible to program leadership. The programs that close verification on schedule are the ones where gaps are visible early — not the ones where they are discovered in a final compliance audit.

Novel technology programs cannot borrow credibility from history. They have to build it, requirement by requirement, with evidence that a reviewer who did not work on the program can follow. The tools and methods to do that exist. The discipline to apply them consistently, from program inception through certification closure, is what separates programs that succeed from those that do not.