Verification and validation are the two activities that close the loop between what was specified and what was built. The two are frequently conflated. Getting them right is the difference between a system that meets requirements and a system that surprises you during field deployment.
The Distinction That Matters
Verification answers: did we build the system correctly? Did we implement what the requirements said to implement? Verification checks the system against its specifications.
Validation answers: did we build the correct system? Do the requirements themselves correctly capture what the customer or operator needs? Validation checks requirements against operational need.
A system can pass all verification activities — every requirement verified — and still fail validation because the requirements were wrong. This is one of the most expensive failure modes in complex system development: a system that does exactly what was specified but doesn’t do what was needed.
The practical implication: validation activities need to involve operational stakeholders and be conducted against operational scenarios, not just specification documents. Prototype demonstrations, operational testing in representative environments, and stakeholder reviews of system behavior against operational use cases are all validation activities.
The Four Verification Methods
Most systems engineering standards recognize four verification methods:
Test — Exercising the system or component under controlled conditions and comparing actual behavior to required behavior. Testing is the most common verification method and the most directly applicable to behavioral requirements. The cost is that it requires a working implementation — you can’t test before something is built.
Analysis — Using mathematical models, simulations, or analytical calculations to demonstrate that requirements will be met. Analysis is appropriate for requirements where testing is impractical (radiation tolerance, extreme environments), where the requirement can be derived analytically from lower-level verified properties, or where early-phase verification is needed before hardware is available.
Inspection — Visual examination or review of a design artifact to verify it conforms to a requirement. Appropriate for requirements about physical properties, materials, labeling, accessibility, or other attributes that can be directly observed.
Demonstration — Showing that the system operates as required, typically through an operational scenario, without the detailed instrumented measurement of a test. Appropriate for operational requirements where the performance criterion is “it works” rather than a quantitative measurement.
Most requirements are verified by test. But a V&V program that uses only test is likely both missing some requirements — particularly early-phase requirements where analysis is the only practical method — and incurring unnecessary cost where inspection or demonstration would suffice.
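One way to make the method assignment concrete is to carry it on the requirement record itself, so an unassigned method is detectable at authoring time. A minimal Python sketch; all requirement IDs and text below are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class VerificationMethod(Enum):
    TEST = "test"
    ANALYSIS = "analysis"
    INSPECTION = "inspection"
    DEMONSTRATION = "demonstration"

@dataclass
class Requirement:
    req_id: str
    text: str
    method: Optional[VerificationMethod] = None  # assigned when the requirement is written

# Illustrative requirements only -- not from any real baseline.
reqs = [
    Requirement("SYS-001", "The unit shall boot within 5 s.", VerificationMethod.TEST),
    Requirement("SYS-002", "The enclosure shall carry the CE mark.", VerificationMethod.INSPECTION),
    Requirement("SYS-003", "The receiver shall tolerate 10 krad total dose.", VerificationMethod.ANALYSIS),
    Requirement("SYS-004", "The operator shall be able to replace the filter."),  # no method yet
]

# A requirement with no method is an authoring-time red flag, not a late-phase surprise.
unassigned = [r.req_id for r in reqs if r.method is None]
print(unassigned)  # ['SYS-004']
```

The point of the sketch is the `Optional` field: the gap is representable, so it can be queried during requirements reviews instead of being discovered during verification planning.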
Planning V&V During Requirements Authoring
The most common V&V mistake is treating verification as a late-phase activity. Teams write requirements, design the system, build it, and then figure out how to verify it. By then, requirements that can’t be verified have become requirements that won’t be verified — creating coverage gaps that become certification findings or integration surprises.
The discipline is to assign a verification method to every requirement when the requirement is written. This produces two benefits:
It filters bad requirements. If you can’t figure out how to verify a requirement, the requirement probably needs to be rewritten. Vague, immeasurable, or ambiguous requirements fail the “how would we verify this?” test during authoring rather than during verification planning.
It makes verification cost visible. Knowing early that a requirement requires full environmental test qualification, radiation testing, or EMC analysis makes those costs visible during system requirements reviews — when they can still influence architecture decisions — rather than at PDR when the design is committed.
This practice is formalized in some standards (DO-178C requires verification methods defined for each requirement; ISO 26262 requires verification methods in the safety plan) and is best practice regardless.
The V&V Matrix
The V&V matrix (or verification cross-reference matrix, or requirements verification matrix — terminology varies) is the artifact that captures which verification activity covers each requirement. It’s the traceability artifact that connects requirements to verification.
A complete V&V matrix shows:
- Every requirement
- Its assigned verification method
- The specific test, analysis, or inspection activity
- Status (planned, in progress, complete, passed, failed)
Producing this matrix is a DO-178C objective and an ISO 26262 requirement. For most regulated programs, the V&V matrix is a certification deliverable.
The matrix becomes unmanageable at scale when maintained as a spreadsheet. Requirements traceability tools that model the V&V relationship as a typed edge — requirement → verified by → verification record — make the matrix a queryable view rather than a separately maintained artifact.
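The typed-edge idea can be sketched in a few lines: requirements map to verification records, and both the matrix and the coverage-gap list are derived views over that mapping rather than separately maintained artifacts. The IDs, methods, and statuses below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class VerificationRecord:
    activity_id: str
    method: str   # test | analysis | inspection | demonstration
    status: str   # planned | in_progress | complete | passed | failed

# requirement -> "verified by" -> verification records, as a typed adjacency map
verified_by = {
    "SYS-001": [VerificationRecord("TC-014", "test", "passed")],
    "SYS-002": [VerificationRecord("INSP-03", "inspection", "planned")],
    "SYS-003": [],  # coverage gap: no verification activity at all
}

def vv_matrix(edges):
    """Derive the V&V matrix as a queryable view over the graph."""
    rows = []
    for req_id, records in edges.items():
        if not records:
            rows.append((req_id, None, None, "UNCOVERED"))
        for rec in records:
            rows.append((req_id, rec.method, rec.activity_id, rec.status))
    return rows

# The coverage gap is a one-line query, not a reconciliation exercise.
gaps = [req_id for req_id, records in verified_by.items() if not records]
print(gaps)  # ['SYS-003']
```

Because the matrix is computed from the edges, it cannot drift out of date the way a separately maintained spreadsheet does.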
Structural V&V vs. Compliance V&V
Teams perform V&V for two reasons: because it increases confidence that the system will work, and because it satisfies certification requirements. When these two motivations are aligned, you get a V&V program that both improves quality and satisfies auditors.
When they’re misaligned — when teams design verification activities to pass audits rather than to find problems — you get a V&V program that produces green status in the matrix and surprises in the field.
The distinction often shows up in test coverage philosophy. A V&V program optimized for coverage metrics hits every requirement with a test that’s likely to pass. A V&V program optimized for finding problems includes adversarial tests, boundary condition tests, and integration tests designed to stress the system in the ways it’s most likely to fail.
Both are necessary. Coverage-based verification demonstrates that requirements were implemented. Adversarial testing finds the problems that coverage-based testing misses.
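A simple form of adversarial testing is boundary-value analysis: probe a requirement's limits rather than its comfortable interior. The sketch below, with a deliberately buggy toy implementation, shows how a boundary sweep catches an off-by-one that a nominal-value test would pass:

```python
def boundary_cases(lo, hi, eps):
    """Boundary-value test points for a requirement 'accept x iff lo <= x <= hi'."""
    return [lo - eps, lo, lo + eps, hi - eps, hi, hi + eps]

def accepts(x):
    # Toy implementation under test, with an off-by-one at the upper bound.
    return 0 <= x < 100

# Compare actual behavior to the requirement at each boundary point.
failures = [x for x in boundary_cases(0, 100, 1) if accepts(x) != (0 <= x <= 100)]
print(failures)  # [100]
```

A coverage-oriented test at a nominal value like 50 would mark this requirement green; the boundary sweep finds the defect at exactly the edge where the implementation diverges from the specification.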
V&V for AI Components
Classical V&V methods don’t transfer to AI components without modification. Test-based verification of an AI model — running test cases and checking outputs against expected values — doesn’t capture whether the model behaves correctly across the operational distribution, whether it’s robust to distributional shift, or whether it fails safely outside its operational design domain.
V&V for AI components requires:
Statistical testing over the operational design domain. Rather than a fixed set of test cases, testing a representative statistical sample from the ODD — potentially thousands of test inputs per performance dimension.
Adversarial evaluation. Testing with inputs designed to find failure modes — corner cases, boundary conditions, and out-of-distribution inputs that the model hasn’t been optimized for.
Distributional shift analysis. Characterizing how model performance degrades when inputs shift from the training distribution — expected in real operational environments where conditions drift over time.
Coverage of the ODD, not just nominal cases. A model that performs well on nominal inputs but fails at ODD boundaries creates safety risk exactly in the operational scenarios that push the system hardest.
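Statistical testing over the ODD can be sketched as: sample inputs from the operational distribution, score each against its requirement, and report a confidence bound on the true pass rate rather than a raw count. Everything below is a toy — the one-dimensional temperature ODD, the stand-in model, and the thresholds are all assumptions for illustration:

```python
import math
import random

def odd_sample(n, rng):
    """Draw n inputs from a stand-in ODD: ambient temperature in degrees C,
    uniform over an assumed operating range of -20 to 45."""
    return [rng.uniform(-20.0, 45.0) for _ in range(n)]

def model_passes(x):
    """Stand-in for 'the model met its requirement on input x'.
    This toy degrades near the hot edge of the ODD."""
    return x < 42.0

def wilson_lower_bound(passes, n, z=1.96):
    """Lower bound of the 95% Wilson score interval for the true pass rate."""
    if n == 0:
        return 0.0
    p = passes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom

rng = random.Random(0)           # fixed seed for reproducibility
inputs = odd_sample(5000, rng)   # thousands of samples per performance dimension
passes = sum(model_passes(x) for x in inputs)
lb = wilson_lower_bound(passes, len(inputs))
print(f"observed pass rate {passes/len(inputs):.3f}, 95% lower bound {lb:.3f}")
```

The verification claim then takes the form "pass rate exceeds the requirement with 95% confidence across the ODD", which is the statistical analogue of a deterministic pass/fail verdict — and the failures clustered at the hot boundary are exactly the ODD-edge behavior the fourth point above warns about.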
Standards for formally structuring AI V&V are still maturing. The emerging frameworks (the EASA AI Roadmap, FDA AI/ML guidance) all point toward statistical characterization of performance across the ODD rather than deterministic coverage metrics.
Making V&V Status Visible
One of the persistent problems in complex program V&V is visibility: at any given time, what’s the verification status of the requirements baseline? What’s the coverage gap? What’s blocking completion?
In a requirements model where V&V artifacts are native nodes connected to requirements by “is verified by” edges, these questions are queries. Requirements with no verification edge surface immediately; so do requirements whose verification is still in progress. The verification status of any system area is a graph traversal.
In a program where requirements live in documents and V&V status lives in a separate spreadsheet, the status is always slightly out of date and always requires manual reconciliation to be reliable.
The operational benefit of connected V&V — where requirements and their verification artifacts live in the same model — is that program managers and chief engineers can have real-time visibility into verification status rather than waiting for the next status roll-up. This changes the conversation from “what’s our status?” to “what are we doing about the 23 requirements that still have no verification method assigned?”