How Do You Manage Requirements When You’re Certifying Under Both FAA and EASA Simultaneously?
The short answer is: very carefully, and with tooling that was designed for this problem rather than adapted to it.
The longer answer involves understanding how two major aviation authorities structure their certification bases for novel aircraft, how the bilateral agreement between the US and EU affects your workload, and what it means operationally to maintain one product that must legally satisfy two regulatory frameworks that were written independently and do not always agree.
This is not a theoretical problem. Several eVTOL developers — Archer, Joby, Lilium before its restructuring, Wisk, Volocopter — have been running or attempting dual FAA/EASA certification programs. The discipline required is substantial, and the requirements management challenge is one of the least-discussed but most consequential aspects of keeping those programs on track.
How Dual Certification Programs Are Structured
When a manufacturer wants a product certified under both FAA and EASA authority, they do not submit one application and receive two approvals. They submit two applications, run two parallel certification projects, produce compliance demonstrations that satisfy each authority’s basis, and coordinate constantly between those parallel tracks.
In practice, one authority is designated the State of Design (SoD). For US-based eVTOL developers, that is the FAA. The FAA issues the Type Certificate. EASA then conducts a validation of that certificate under its own processes, establishing its own certification basis and making independent findings of compliance.
This structure means the program has two formal certification bases. For the FAA, the certification basis for a novel aircraft like an eVTOL is established through Special Conditions and Issue Papers (IPs). The G-1 Issue Paper defines the overall certification basis, lists applicable airworthiness standards, identifies where special conditions or equivalent safety findings are needed, and sets the stage for every subsequent substantive compliance discussion. Each discrete technical topic — flight control laws, battery failure modes, ditching provisions, pilot interface — typically gets its own numbered Issue Paper.
EASA’s parallel instrument is the Certification Review Item (CRI). The A-01 CRI serves the same function as the FAA’s G-1: it defines the certification basis for the validation. Subsequent CRIs address specific technical topics, often aligned with the FAA Issue Papers but not always with the same structure or the same technical content.
The first discipline the program manager must establish is a living map between Issue Papers and CRIs. Which FAA IP corresponds to which EASA CRI? Are they asking for the same compliance method? Are the acceptable means of compliance (AMC in EASA terminology; AC or MOC in FAA terminology) technically equivalent? Do both authorities accept the same test data?
If this map does not exist as a managed artifact — updated every time either authority issues a revision — the program will drift toward inconsistency without anyone noticing until late in the game.
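As a rough illustration of what "a managed artifact" means in practice, the IP↔CRI map can be kept as structured data rather than a spreadsheet tab, so that unmapped items surface automatically. This is a minimal Python sketch; every identifier and field name is invented for illustration, not drawn from any real program:

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: all IP/CRI identifiers below are invented.
@dataclass
class BasisItem:
    ident: str       # e.g. "IP-FC-01" (FAA Issue Paper) or "CRI-B-05" (EASA CRI)
    authority: str   # "FAA" or "EASA"
    topic: str
    revision: str

@dataclass
class BasisMap:
    """Living map between FAA Issue Papers and EASA CRIs."""
    items: dict = field(default_factory=dict)   # ident -> BasisItem
    links: list = field(default_factory=list)   # (ip_ident, cri_ident) pairs

    def add(self, item: BasisItem) -> None:
        self.items[item.ident] = item

    def link(self, ip_ident: str, cri_ident: str) -> None:
        self.links.append((ip_ident, cri_ident))

    def unmapped(self, authority: str) -> list:
        """Items from one authority with no counterpart -- the drift warning."""
        mapped = {ident for pair in self.links for ident in pair}
        return [i for i, item in self.items.items()
                if item.authority == authority and i not in mapped]

m = BasisMap()
m.add(BasisItem("IP-FC-01", "FAA", "Flight control laws", "Rev A"))
m.add(BasisItem("CRI-B-05", "EASA", "Flight control laws", "Issue 1"))
m.add(BasisItem("CRI-C-02", "EASA", "Ditching provisions", "Issue 1"))
m.link("IP-FC-01", "CRI-B-05")

m.unmapped("EASA")   # -> ["CRI-C-02"]: a CRI with no FAA counterpart yet
```

The point of the sketch is the `unmapped` query: when either authority issues a new item or revision, re-running it tells you immediately where the map has gone stale.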
What the Bilateral Aviation Safety Agreement Actually Does (and Doesn’t Do)
The Bilateral Aviation Safety Agreement (BASA) between the United States and the European Union, along with its Technical Implementation Procedures (TIP), creates a framework for mutual recognition of certain findings between the FAA and EASA. The underlying theory is that both authorities are technically sophisticated, apply rigorous processes, and reach similar conclusions on most conventional airworthiness questions. So rather than having EASA re-examine everything the FAA already found compliant, certain findings can be accepted with reduced validation effort.
In practice, the TIP defines which product categories and which types of findings are eligible for this streamlined treatment. The key limitation for eVTOL programs is that novel aircraft categories, novel technologies, and novel failure modes often fall outside the scope of streamlined treatment. When neither authority has established precedent, both will want to make independent findings. The BASA reduces your workload on conventional avionics, structural analysis methods, and established certification approaches. It does not eliminate the bilateral coordination burden on the things that make your eVTOL novel — which is, unfortunately, most of the things that matter.
The practical implication: plan for active bilateral engagement on every substantive technical topic. Do not assume EASA will simply accept FAA findings on propulsion system safety, fly-by-wire flight control systems without mechanical reversion, or battery energy management. These are exactly the areas where EASA has consistently issued independent CRIs with independent technical content.
The BASA does matter for your program in another way. It establishes the coordination protocol between the two authorities. The FAA and EASA hold regular bilateral technical meetings. When a manufacturer surfaces a technical disagreement between the two bases — when the FAA accepts one analysis approach and EASA does not — the bilateral forum is the appropriate venue to seek alignment. This is slow. Plan for it.
Working with Both Authorities on Issue Papers and CRIs
The practical workflow for a dual certification program looks like this:
Early phase — Certification basis definition. The applicant engages the FAA to draft the G-1 Issue Paper. Simultaneously, the applicant engages EASA to define the CRI A-01. Both documents should be reviewed side-by-side from the first draft. Differences in the proposed basis — different applicable standards, different acceptable means of compliance — should be surfaced immediately, before either authority’s position hardens.
Issue-by-issue coordination. As each substantive IP and CRI is drafted, the applicant’s certification team prepares compliance approach documents that explicitly address both bases. A good practice is to maintain a compliance method matrix that, for each function or design feature, identifies the FAA MOC, the EASA AMC, the test or analysis that satisfies each, and whether a single test or analysis event covers both. Where a single test covers both, document that clearly. Where it does not, you have identified a schedule and cost risk.
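A minimal sketch of such a compliance method matrix, with invented MOC, AMC, and test identifiers, shows how the shared-versus-separate-event distinction can be computed rather than eyeballed:

```python
from dataclasses import dataclass

# Hypothetical sketch; every MOC/AMC/test identifier is invented.
@dataclass
class MatrixRow:
    feature: str
    faa_moc: str      # FAA means of compliance reference
    easa_amc: str     # EASA acceptable means of compliance reference
    faa_event: str    # test/analysis event satisfying the FAA MOC
    easa_event: str   # test/analysis event satisfying the EASA AMC

    @property
    def single_event(self) -> bool:
        # True when one shared test/analysis event covers both bases
        return self.faa_event == self.easa_event

matrix = [
    MatrixRow("Battery thermal runaway containment",
              "MOC-BAT-3", "AMC-BAT-1", "TEST-041", "TEST-041"),
    MatrixRow("Ditching flotation",
              "MOC-DIT-1", "AMC-DIT-2", "TEST-077", "TEST-092"),
]

# Every row needing two separate events is a schedule and cost risk item
risks = [row.feature for row in matrix if not row.single_event]
# risks -> ["Ditching flotation"]
```

Whatever tool holds the matrix, the discipline is the same: the duplicated-event list should fall out of the data, not out of someone's memory during a program review.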
Level of Involvement agreements. Both FAA and EASA will establish Level of Involvement (LOI) for each IP and CRI, indicating how much direct oversight they will apply to the compliance demonstration. High LOI means the authority wants to witness tests, review analysis directly, and approve specific deliverables. Managing LOI commitments across two authorities simultaneously — coordinating test witness schedules, ensuring analysis reports are formatted to each authority’s preferences, tracking open action items by authority — is a significant program management burden.
Divergence resolution. When the two authorities reach different conclusions about what a compliant design requires, the applicant has three options: satisfy the more stringent requirement (which works if both authorities accept a demonstration that exceeds their individual floor), seek an equivalency finding from the less stringent authority (arguing that your approach provides equivalent safety even though it differs from the typical AMC), or escalate to bilateral coordination and accept the schedule impact. There is no shortcut. The applicant who tries to paper over a substantive divergence without resolving it will face findings from one or both authorities during conformity inspection.
Where FAA and EASA Requirements Actually Diverge
The areas where FAA and EASA certification bases most commonly diverge for novel aircraft include:
Exposure to hazardous conditions. FAA and EASA apply different definitions and risk metrics for what constitutes acceptable exposure to Hazardous or Catastrophic failure conditions. The quantitative probability targets are broadly similar on paper (on the order of 10⁻⁹ per flight hour for Catastrophic in EASA’s SC-VTOL Enhanced category), but the targets vary with certification category, and the methods accepted to demonstrate those probabilities — particularly for software-intensive systems without substantial in-service history — differ.
Continued Safe Flight and Landing (CSFL). Both authorities require CSFL as the safety objective for certain failure conditions, but the specific failure scenarios each authority requires the design to address, and the conditions under which CSFL must be achieved, are not identical. At least one eVTOL program has found that EASA required CSFL demonstrations across a wider range of atmospheric conditions than the FAA had specified.
Software and Airborne Electronic Hardware. Both authorities reference DO-178C and DO-254, but EASA has historically applied additional scrutiny through its AMC 20-115 series and has specific requirements for model-based development and tool qualification that the FAA may handle differently on a given program.
Environmental qualification. EASA’s environmental testing standards for electrical systems and avionics differ from FAA requirements in some categories. A test conducted to satisfy one authority’s environmental qualification standard may not be directly accepted by the other without additional analysis or testing.
Pilot interface and Human Factors. FAA AC 25.1302-1 and EASA’s AMC 25.1302 under CS-25 are largely harmonized, but eVTOL-specific interface questions — how to present propulsion system health to a pilot in a degraded state, what automation modes are required — have been resolved differently in some programs.
The Single Product Definition Imperative
The most dangerous failure mode in a dual certification program is requirements bifurcation: the gradual divergence of two parallel requirements databases, one maintained for FAA compliance and one for EASA, that are supposed to describe the same physical product.
This happens because it feels like the path of least resistance. The FAA wants to see compliance demonstrated against its standards, so the team builds out an FAA compliance matrix. EASA wants to see compliance demonstrated against its standards, so a parallel EASA compliance matrix grows alongside it. When a design change happens, the team updates one matrix and intends to update the other later. Later does not always come before the next review.
The result is a product definition that has silently diverged from itself. The FAA certification artifacts describe a design that, in some details, differs from what the EASA artifacts describe. When this is discovered — during a bilateral coordination meeting, during conformity inspection, or during a design change review — it is expensive to untangle.
The correct architecture is a single requirements layer that describes the product, with two certification overlay layers that map each requirement to FAA compliance obligations and EASA compliance obligations separately. The product requirement does not change based on which authority you are talking to. The compliance demonstration — the test, the analysis, the report — may differ, and those differences are tracked in the overlay, not in the underlying requirement.
This architecture requires tooling that can represent the relationship between a single requirement and multiple, parallel compliance structures without duplicating the requirement itself. Flow Engineering is built around exactly this kind of graph-based relationship model. Rather than storing requirements as rows in a document that must be copied to create a second authority’s view, it maintains requirements as nodes in a connected structure, with certification basis mappings as typed relationships that can be queried, filtered, and audited independently. A team using Flow Engineering can ask “show me every requirement that has an EASA CRI mapping but no FAA IP mapping” as a live query — the kind of gap analysis that catches bifurcation before it becomes a crisis.
This matters especially during design changes. When a propulsion system architecture change affects twenty requirements, the platform can surface all affected FAA and EASA compliance obligations simultaneously, not sequentially. The change review is complete rather than partial.
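The overlay architecture can be sketched in a few lines. This is not Flow Engineering’s actual API — the class, relation, and requirement names here are invented — but it illustrates the query pattern described above: one requirement node, typed edges to each authority’s certification basis, and bifurcation gaps found by query rather than by audit:

```python
from collections import defaultdict

# Hypothetical sketch of the single-requirement / dual-overlay model.
# All identifiers and relation names are invented for illustration.
class ReqGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # req_id -> {(relation, target)}

    def map_basis(self, req_id, relation, target):
        # relation is the typed overlay edge: "FAA_IP" or "EASA_CRI"
        self.edges[req_id].add((relation, target))

    def gap(self, has, lacks):
        """Requirements mapped under one certification basis but not the other."""
        return sorted(req for req, es in self.edges.items()
                      if any(rel == has for rel, _ in es)
                      and not any(rel == lacks for rel, _ in es))

g = ReqGraph()
g.map_basis("REQ-1203", "EASA_CRI", "CRI-B-05")
g.map_basis("REQ-1203", "FAA_IP", "IP-FC-01")
g.map_basis("REQ-1207", "EASA_CRI", "CRI-B-05")   # no FAA mapping yet

# "Every requirement with an EASA CRI mapping but no FAA IP mapping"
g.gap(has="EASA_CRI", lacks="FAA_IP")   # -> ["REQ-1207"]
```

The requirement itself exists once; only the edges differ per authority. A design change that touches `REQ-1207` therefore surfaces both overlays in one traversal instead of two sequential matrix reviews.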
When Two Authorities Require Different Testing for the Same Function
This is where dual certification programs generate the most schedule risk, and where the program manager needs the most precise information.
The general rule is: satisfy the more stringent requirement, and document the fact that doing so also satisfies the less stringent one. If EASA requires a ditching demonstration at 40-knot surface winds and the FAA requires the same demonstration at 25-knot winds, you conduct the 40-knot test, generate a report that explicitly references the FAA requirement and explains why the 40-knot test result bounds the 25-knot case, and submit to both authorities.
Where this breaks down is when the requirements are not directly comparable — when each authority specifies a different test methodology, not just a different threshold. In that case, a single test event may not serve both purposes. The applicant must run two test campaigns or seek an equivalency finding.
The program management discipline here is to identify these cases before the test program is planned, not after. Each test event should be annotated with the FAA MOC it satisfies, the EASA AMC it satisfies, and any gap that remains. This is not a one-time analysis. It must be maintained through every design change and every revision to either authority’s certification basis.
Flow Engineering’s approach to this is to maintain compliance method as a first-class attribute of the relationship between a requirement and its verification event, rather than as a comment in a document. When an authority revises its AMC, the affected relationships can be identified instantly, and the test program impact is surfaced before the schedule is disrupted.
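A hedged sketch of that idea, using invented identifiers rather than any real program’s data: store the method reference as an attribute on each requirement-to-verification-event link, and an AMC revision becomes a query over those links instead of a hunt through documents:

```python
# Hypothetical illustration: the compliance method lives on the relationship
# between a requirement and its verification event. All IDs are invented.
links = [
    # (requirement, verification event, authority, method reference)
    ("REQ-0410", "TEST-077", "FAA",  "MOC-DIT-1"),
    ("REQ-0410", "TEST-092", "EASA", "AMC-DIT-2"),
    ("REQ-0533", "TEST-104", "EASA", "AMC-DIT-2"),
]

def impacted_events(revised_method):
    """Test events whose governing compliance method was just revised."""
    return sorted({event for _, event, _, method in links
                   if method == revised_method})

# EASA revises AMC-DIT-2: which planned events need re-evaluation?
impacted_events("AMC-DIT-2")   # -> ["TEST-092", "TEST-104"]
```

Because the method is relationship data rather than a comment, the impact set is exact: the FAA-side event `TEST-077` is untouched, and the schedule conversation starts from the two events that actually changed.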
Honest Assessment
Dual certification is achievable. Multiple programs have done it or are in the process. But the workload is not simply twice the work of a single certification. The coordination overhead, the bilateral engagement, the divergence resolution, and the maintenance of a coherent single product definition across two parallel regulatory frameworks add non-linear complexity.
The programs that manage this well share two characteristics: they invest early in a requirements architecture that separates product definition from certification overlay, and they treat the IP/CRI mapping as a first-class program artifact that is actively maintained rather than periodically reconstructed.
The programs that struggle have typically tried to adapt document-based requirements management processes designed for single-authority programs. Those processes do not bend gracefully to this problem.