The eVTOL Shakeout: What Separates Programs That Will Certify From Those That Won’t
The eVTOL industry entered 2026 the way a lot of overhyped aerospace cycles end: with fewer players than the press releases suggested and a much sharper divide between programs making real progress and programs managing investor optics. FAA and EASA have now processed enough type certificate applications, special conditions packages, and issue paper submissions to reveal a pattern. The programs moving forward share specific engineering and organizational characteristics. The programs stalling or withdrawing share different ones.
This is not a story about which aircraft concept is technically superior. It’s a story about which organizations built the engineering infrastructure to actually certify an aircraft—and which ones confused design ambition with certification readiness.
What the First Wave of Applications Actually Revealed
The first serious wave of eVTOL type certificate applications—spanning roughly 2022 through 2025—exposed a structural problem in how many programs approached certification planning. Companies that had raised hundreds of millions in capital submitted applications that certifying authorities received with significant reservations. Issue paper queues grew. Means-of-compliance negotiations dragged. Several programs publicly announced schedule revisions that were, in practice, acknowledgments that their initial applications had been premature.
The FAA has handled the eVTOL certification challenge by issuing special conditions under Part 21 and, for many vehicles, placing them under Part 23 with special conditions for powered-lift. EASA took a parallel path through its SC-VTOL Special Condition framework, which has been more prescriptive. In both cases, the certifying authorities have been clear that novel risk categories—particularly distributed electric propulsion, battery energy management during failure modes, and degraded-flight-envelope controllability—require rigorous, documented safety arguments that many applicants simply did not have at submission.
The programs that have continued advancing share four specific characteristics. Each is worth examining in detail.
The Safety Case Quality Gap
A safety case is not a safety plan. This distinction is not semantic—it is the single most common source of certification setbacks in eVTOL programs.
A safety plan describes the activities an organization intends to perform to demonstrate safety. A safety case is a structured, evidence-backed argument that the aircraft meets an acceptable level of safety, with explicit links from top-level safety objectives through functional hazard assessments (FHA), to system safety assessments (SSA), to design requirements, to verification evidence. The argument has to close. Every hazard identified in the FHA needs a traceable path to a design feature or operational limitation that mitigates it to an acceptable probability.
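What "the argument has to close" means in practice can be sketched as a closure check over linked artifacts. The hazard IDs, requirement names, and evidence records below are hypothetical, invented purely to illustrate the traceable path from FHA hazard through mitigation to verification evidence:

```python
# Illustrative safety-case closure check. All identifiers are hypothetical.
hazards = {
    "FHA-012": {"effect": "loss of thrust on two adjacent propulsors",
                "mitigations": ["REQ-PROP-044"]},
    "FHA-031": {"effect": "battery thermal event in cruise",
                "mitigations": []},  # no mitigation linked yet: argument open
}
requirements = {
    "REQ-PROP-044": {"verified_by": ["TEST-ENV-101"]},
}
evidence = {"TEST-ENV-101": {"status": "passed"}}

def open_items(hazards, requirements, evidence):
    """Return hazards whose argument does not close down to passed evidence."""
    gaps = []
    for hid, h in hazards.items():
        closed = any(
            evidence.get(t, {}).get("status") == "passed"
            for r in h["mitigations"]
            for t in requirements.get(r, {}).get("verified_by", [])
        )
        if not closed:
            gaps.append(hid)
    return gaps

print(open_items(hazards, requirements, evidence))  # ['FHA-031']
```

A safety plan promises this check will eventually pass; a safety case is the state in which running it returns an empty list, with the links maintained continuously rather than reconstructed at submission.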
Programs with mature safety cases entered CDR with living documents—models, not binders—that their engineers could query. When a design change was proposed, the safety case updated. When a new failure mode was identified in testing, it propagated back to the FHA and forward to the verification matrix. The safety case was a tool, not a deliverable.
Programs in trouble treated safety documentation as a compliance artifact to be assembled before submission. The FHA existed as a spreadsheet. The SSA existed as a PDF. The relationship between them was maintained manually and inconsistently. When certifying authority reviewers asked how a specific design decision addressed a specific hazard category, the programs often could not produce a coherent answer quickly—because the answer required manually tracing through disconnected documents.
This is not a theoretical problem. FAA reviews under the Stage 2 and Stage 3 certification basis engagement processes explicitly test whether the applicant can respond to follow-up questions with traceable evidence. Programs that cannot do this in real time are flagged, and the subsequent remediation is expensive.
Failure Mode Analysis: Where Programs Actually Fail
If safety case quality is the strategic differentiator, failure mode analysis completeness is the tactical one—and it is measurable.
The propulsion architecture of a distributed electric aircraft creates a failure mode space that is categorically larger than a conventional twin-engine design. With six to twelve or more motor-rotor assemblies, multiple battery packs, a power distribution network, motor controllers, and a flight control system that has to manage actuated responses to propulsor failures in real time, the failure mode combinations are not just numerous—they are interactive. A partial battery degradation affects available thrust envelope. A motor controller fault affects the remaining propulsors’ load. The flight control law’s response to any failure mode is itself a potential source of new failure modes.
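To make "categorically larger" concrete, a back-of-envelope count of the failure combinations a screening analysis must at least enumerate, using hypothetical component counts for a distributed-propulsion vehicle:

```python
from math import comb

# Hypothetical line-replaceable-unit counts; real architectures vary.
units = {"propulsor": 12, "battery_pack": 4,
         "motor_controller": 12, "power_bus": 2}
n = sum(units.values())  # 30 units

# Distinct k-simultaneous-failure combinations to screen, by order.
for k in (1, 2, 3):
    print(f"{k}-failure combinations: {comb(n, k)}")
```

A conventional twin-engine design has exactly one dual-engine-failure case; a 30-unit distributed architecture has hundreds of pairwise cases before any of the interactive effects described above are even considered.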
Programs that are advancing spent serious engineering resources on FMEA and FMECA at the propulsion and power distribution levels, specifically covering:
- Single-point failure identification across the power path from battery cell to rotor thrust
- Common-cause failure analysis across propulsors sharing common hardware families, thermal environments, or software baselines
- Dependent failure analysis covering the flight control system’s coupling to propulsion health monitoring
- Functional failure analysis covering degraded flight envelope cases where multiple non-independent failures occur within the same flight phase
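The common-cause analysis in the list above can be sketched as a screening pass: propulsors sharing a hardware batch, software baseline, or thermal environment are grouped, because a single defect in the shared element could take out every member at once. The attribute names and values here are illustrative, not from any real program:

```python
from collections import defaultdict

# Hypothetical propulsor records with shared-exposure attributes.
propulsors = [
    {"id": "P1", "motor_batch": "A", "fw": "1.4", "cooling_zone": "fwd"},
    {"id": "P2", "motor_batch": "A", "fw": "1.4", "cooling_zone": "fwd"},
    {"id": "P3", "motor_batch": "B", "fw": "1.4", "cooling_zone": "aft"},
    {"id": "P4", "motor_batch": "B", "fw": "1.3", "cooling_zone": "aft"},
]

def common_cause_groups(propulsors, attrs):
    """Map each shared attribute value to the propulsors exposed to it."""
    groups = defaultdict(list)
    for p in propulsors:
        for a in attrs:
            groups[(a, p[a])].append(p["id"])
    # Only shared exposures (two or more members) matter for common cause.
    return {k: v for k, v in groups.items() if len(v) > 1}

for (attr, value), members in sorted(common_cause_groups(
        propulsors, ["motor_batch", "fw", "cooling_zone"]).items()):
    print(f"{attr}={value}: {members}")
```

The point of the sketch: independence claims in the safety case are only as good as this screening. Three propulsors on firmware 1.4 are not three independent failure sources against a software fault.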
Programs that are stalling typically completed top-level functional hazard assessments—which are required relatively early—but did not complete detailed FMEA/FMECA work before CDR. Some scheduled this work as a post-CDR activity, treating it as verification work rather than design-informing analysis. This sequencing error is fatal to certification schedules. When detailed failure mode analysis reveals hazard categories not captured in the FHA, and when those categories require design changes, the downstream impact on requirements, verification plans, and supplier agreements is enormous. Several programs are currently living through exactly this problem.
The pattern from certifying authority feedback is consistent: applicants who submitted FMEA work that was both complete and traceable to their FHA received faster means-of-compliance feedback. Applicants who submitted incomplete FMEA work, or FMEA work that was not clearly linked to their FHA and SSA, received requests for additional information that took months to resolve.
Propulsion System Maturity at CDR
Critical Design Review in a novel aircraft program is supposed to be a gate: the design is complete enough that production tooling can proceed, that verification test articles can be built to a stable design baseline, and that the risk of design changes post-CDR is acceptable. For most eVTOL programs, the propulsion system is the hardest thing to mature by CDR—and programs that tried to pass CDR with immature propulsion systems have paid a heavy price.
Propulsion system maturity at CDR, for eVTOL certification purposes, means something specific: the motor, motor controller, and battery system have accumulated sufficient test hours under representative duty cycles to support the failure rate claims in the safety case. For components in safety-critical propulsion chains with failure probability requirements in the range of 10⁻⁷ to 10⁻⁹ per flight hour, the test data required to substantiate those rates is substantial. It cannot be back-loaded.
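A rough sense of why it cannot be back-loaded comes from the textbook zero-failure ("success run") demonstration: under an exponential failure model, showing a constant failure rate λ at confidence C with no observed failures requires cumulative test time T = −ln(1 − C)/λ. This formula is illustrative only; real substantiation combines architecture, analysis, similarity, and test rather than raw hours alone:

```python
import math

def zero_failure_test_hours(lam, confidence):
    """Cumulative test hours needed to demonstrate a constant failure
    rate `lam` (per hour) at the given confidence, with zero observed
    failures, under the standard exponential model."""
    return -math.log(1.0 - confidence) / lam

for lam in (1e-7, 1e-9):
    hours = zero_failure_test_hours(lam, 0.60)
    print(f"lambda={lam:.0e}/hr -> {hours:.2e} test hours at 60% confidence")
```

Even at modest confidence, component-level rates near 10⁻⁷ per hour imply millions of cumulative test hours if test alone carried the argument. That is why the safety case leans on redundancy architecture and analysis alongside hardware testing, and why the hardware testing that does carry weight has to start years before certification, not months.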
Programs that are on credible paths entered CDR with prototype propulsion assemblies that had completed early qualification testing and that had identified—and already resolved—the first generation of failure modes that only appear under representative thermal and electrical stress conditions. Battery management system behavior under partial cell failure and under rapid discharge demand was characterized. Motor controller fault response was tested against the flight control law’s expected inputs. Propulsion health monitoring algorithms had been validated against injected fault conditions.
Programs that are struggling either treated CDR as a paper exercise—a documentation review rather than a hardware maturity gate—or allowed propulsion design changes to continue past CDR without formally re-baselining the certification program. Both errors compound. A propulsion change post-CDR that is not formally controlled invalidates previous test data, which invalidates failure rate substantiation, which reopens safety case arguments that were presumed closed.
The Certifying Authority Relationship
“Working well with the FAA” is often discussed in vague terms. It is actually measurable: issue paper resolution rate and means-of-compliance agreement velocity.
An issue paper is how the FAA documents novel or unique aspects of a design that require specific means of compliance to be established. For a genuinely novel propulsion architecture, an applicant might have twenty to forty active issue papers. The rate at which those papers move from “open” to “agreed means of compliance” is a direct indicator of program health.
Programs that are advancing resolved issue papers consistently. They had dedicated teams whose job was to respond to FAA and EASA technical questions with engineered, traceable answers. They scheduled regular technical working group meetings with certifying authority engineers—not just legal and regulatory staff, but systems engineers and safety engineers who could engage substantively on technical questions. They treated disagreement as a problem to be solved with engineering evidence, not a negotiation to be managed.
Programs that are struggling often structured their certifying authority relationship primarily around regulatory and legal staff, with technical engineers brought in episodically. They treated issue papers as administrative items rather than engineering problems. When certifying authorities asked technical follow-up questions, the response cycle was slow because the technical staff who could answer were not continuously engaged with the certification process.
There is also a transparency dimension. Certifying authorities have been explicit, in post-submission feedback and in public forums, that they respond better to applicants who proactively surface problems. A program that identifies a failure mode in testing and immediately briefs its certifying authority, with a proposed mitigation and updated safety case, maintains credibility. A program that attempts to resolve problems quietly and present a clean picture at the next scheduled review loses credibility when the issues surface anyway—and they do surface.
The Systems Engineering Investment Correlation
Across the programs that are advancing, the systems engineering infrastructure investments are consistent. Requirements are managed in tools that enforce bidirectional traceability—from operational hazard to system requirement to subsystem requirement to test. Design changes propagate automatically to verification coverage analysis. Safety case arguments are maintained as living models, not static documents.
This is not about tool sophistication for its own sake. It is about organizational capability under pressure. Certification programs involve thousands of requirements, hundreds of test cases, dozens of suppliers, and continuous design evolution over years. Programs that managed this with document-based processes—requirements in Word, traceability in Excel, safety models in disconnected PDFs—hit walls when the volume and rate of change exceeded what manual processes could handle accurately.
Tools that implement graph-based traceability models, where requirements, hazards, design artifacts, and test evidence are nodes in a queryable network rather than rows in a spreadsheet, give engineers the ability to answer certifying authority questions in hours rather than weeks. Flow Engineering, which was built specifically for this kind of connected systems work, represents the architectural direction the industry is moving toward: AI-assisted, graph-native, and designed around the assumption that requirements and safety artifacts need to be continuously linked rather than periodically reconciled.
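The difference a graph model makes can be shown with a toy reachability query, independent of any particular tool. Artifacts are nodes, traceability links are edges, and "what test evidence covers this hazard?" becomes a graph traversal rather than a manual trace through documents. The node names and link structure here are invented for illustration:

```python
from collections import deque

# Toy traceability graph: hazard -> requirements -> verification artifacts.
edges = {
    "HAZ-BATT-07": ["REQ-SYS-210"],
    "REQ-SYS-210": ["REQ-BMS-052", "REQ-BMS-053"],
    "REQ-BMS-052": ["TEST-CELL-118"],
    "REQ-BMS-053": ["TEST-PACK-031", "ANALYSIS-TH-009"],
}

def downstream(graph, start, prefix):
    """All nodes reachable from `start` whose name begins with `prefix`."""
    seen, queue, hits = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        if node.startswith(prefix):
            hits.append(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hits

print(downstream(edges, "HAZ-BATT-07", "TEST-"))
# ['TEST-CELL-118', 'TEST-PACK-031']
```

In a document-based process, answering the same question means opening the FHA, the SSA, the requirements baseline, and the test reports, and hoping their cross-references are current. In a graph, the answer is one query, and a broken link is detectable the moment it breaks.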
The programs that treated systems engineering tooling as overhead—something to add later, after the technical work was done—discovered that the technical work and the engineering infrastructure are not separable. You cannot build a credible safety case in a document. You cannot maintain complete failure mode traceability in a spreadsheet. You cannot respond to certifying authority technical questions at speed without queryable traceability infrastructure.
What the Industry Is Learning
The eVTOL shakeout is not primarily a story about which battery chemistry is best or which rotor configuration is most efficient. It is a story about organizational capability and engineering discipline applied to one of the hardest certification problems commercial aviation has attempted.
The first wave of applications taught the industry several things that were not obvious in 2020:
- Novel aircraft types do not get the benefit of existing certification precedent. Every issue paper is a negotiation from scratch, and the applicant’s credibility is the only currency.
- Safety case maturity at CDR is not a documentation exercise. It requires design stability, completed failure mode analysis, and test data that substantiates the probabilistic claims the safety case makes.
- Propulsion system qualification cannot be back-loaded. The failure modes that appear under representative stress conditions are exactly the failure modes that invalidate safety arguments if they are found late.
- Certifying authority relationships are engineering relationships first. The organizations that are advancing treat their certifying authority counterparts as technically sophisticated reviewers whose questions deserve engineered answers, not administrative reviewers to be managed.
- Systems engineering infrastructure investment is not optional overhead. It is the capability that allows an organization to maintain a coherent safety case, respond to design changes without losing traceability, and answer technical questions at the speed certification programs demand.
Honest Assessment
The number of eVTOL programs that will achieve type certificates in the next three years is smaller than the number of programs that claimed to be on three-year schedules two years ago. That is now obvious. What is less obvious is that the differentiating factors are largely knowable in advance—not from the aircraft’s technical specifications, but from an examination of the program’s engineering infrastructure, the completeness of its failure mode analysis, the maturity of its propulsion system at CDR, and the quality of its engagement with its certifying authority.
The programs that will certify invested in those things early, when the investment was expensive and the value was not yet visible. The programs that won’t certify are now discovering that those investments cannot be made retroactively at acceptable cost.
The shakeout is not over. But the outcome, for most programs, is already determined by decisions that were made two and three years ago.