How Does Joby Aviation Keep 1,000 Engineers Aligned During FAA Certification?

Every program manager in aerospace eventually faces a version of the same question: at what scale does your current coordination system break? For most legacy programs, the answer reveals itself gradually — a missed interface requirement here, a test point that nobody owns there, a change request that touches fourteen documents and gets partially implemented in eleven.

For Joby Aviation, that question arrived at a different scale entirely. The company is certifying an aircraft type that has no direct regulatory precedent, under a certification basis negotiated directly with the FAA as a special class of powered-lift aircraft, with a workforce of roughly 1,000 engineers spanning propulsion, avionics, flight controls, structures, and software — all working simultaneously, all touching a design that is still evolving.

This is not a legacy coordination problem. It is a different problem entirely, and the aerospace industry is watching closely to see how Joby answers it.

The Coordination Challenge Is Structural, Not Managerial

Before looking at how Joby approaches the problem, it helps to be precise about what the problem actually is.

Traditional aerospace certification assumes a relatively stable design at the point of formal certification entry. Requirements are captured, allocated, and traced. The design is baselined. Testing proceeds against that baseline. The regulatory artifact — the Type Certificate — reflects the design at a specific moment.

eVTOL certification breaks several of those assumptions at once.

Novel aircraft type, no certification precedent. Joby’s S4 is a winged, all-electric aircraft with six tilting rotors. It is not a helicopter. It is not a fixed-wing aircraft. It is not an ultralight. The FAA has no existing certification standard that maps cleanly onto its failure modes, redundancy architecture, or operating envelope. The certification basis is being developed in parallel with the aircraft, which means requirements themselves have regulatory uncertainty baked into them.

Hundreds of thousands of test points. The sheer volume of verification evidence required for a Part 23 or Part 25 derivative certification — let alone a novel type — creates a tracking burden that spreadsheets and document repositories cannot absorb without structural failure. At this scale, traceability is not an audit convenience. It is how you know whether you are done.

Design evolution during late-stage certification. This is the piece that breaks traditional document-based workflows most decisively. When a structural analysis drives a geometry change, or a software update changes a failure mode boundary, the ripple effects touch requirements across multiple subsystems. In a document-based environment, those ripples have to be manually chased through a folder structure. In a graph-based environment, they propagate automatically and surface as coverage gaps immediately.

Teams that cannot afford information asymmetry. Propulsion, avionics, structures, and software teams at Joby are not working sequentially. They are working concurrently, with shared interfaces that change as each team refines its understanding. When the propulsion team updates a motor controller specification, the avionics team’s integration requirements may need to change the same afternoon. The question is whether those teams find out in hours or in weeks.

What Legacy Requirements Management Gets Wrong at This Scale

The dominant tools in aerospace requirements management — IBM DOORS, DOORS Next, Jama Connect, Polarion, Codebeamer — were built around a document metaphor. Requirements live in modules. Modules are versioned. Traceability is expressed as links between objects in those modules. The model works reasonably well when the design is stable and the team is small enough that a few people can hold the full traceability picture in their heads.

Neither of those conditions holds at Joby’s scale.

Document-based tools create several failure modes that become severe above a few hundred engineers:

Stale requirements that nobody flags. When a design changes, the requirement that described the previous design does not automatically know it is stale. Someone has to notice, flag it, and initiate a change request. At scale, that person is often in a different team from the person who made the design change. The gap between change and flag can span weeks.

Coverage that exists on paper but not in practice. A test point can be traced to a requirement in DOORS while the test procedure is based on an older version of that requirement. The link exists. The coverage does not.
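This failure mode is mechanical enough to sketch. The toy Python below (all names — `Requirement`, `TraceLink`, `find_suspect_links` — are illustrative, not any tool’s actual API) shows why a link alone proves nothing: a suspect link is simply one whose test was written against an older revision than the requirement’s current one.

```python
# Sketch: detecting trace links that point at stale requirement revisions.
# A link can exist while the coverage it claims does not.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    revision: int  # current revision of the requirement text

@dataclass
class TraceLink:
    test_id: str
    req_id: str
    linked_revision: int  # revision the test procedure was written against

def find_suspect_links(requirements, links):
    """Return links whose test predates the requirement's current revision."""
    current = {r.req_id: r.revision for r in requirements}
    return [l for l in links if l.linked_revision < current[l.req_id]]

reqs = [Requirement("PROP-042", revision=3)]
links = [TraceLink("TP-1187", "PROP-042", linked_revision=2)]
# TP-1187 is "traced" but was written against revision 2 of a revision-3
# requirement — the link exists, the coverage does not.
assert [l.test_id for l in find_suspect_links(reqs, links)] == ["TP-1187"]
```

A document-based tool stores the link; it takes a query like this, run continuously, to know whether the link still means anything.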

Change impact that is underestimated by default. When a change request is filed, its impact is estimated by whoever files it, based on what they know about the system. In a siloed tool environment, they rarely know enough. Changes get approved with incomplete impact assessments and then create downstream surprises during verification.

Review cycles that serialize work unnecessarily. In a document-centric workflow, cross-team review happens through document releases. Team A publishes a requirements document. Team B reviews it, comments, and sends it back. By the time Team B’s feedback reaches Team A, Team A has already moved on. The interaction is asynchronous by design, which means interface misalignments accumulate between release cycles.

These are not implementation failures. They are architectural limitations of the document metaphor applied to concurrent, evolving programs.

How Joby Approaches It Differently

Joby Aviation uses Flow Engineering to manage MBSE, requirements, and design updates across its engineering teams. The choice reflects a deliberate architectural decision: treat requirements not as documents but as a live graph of design intent, where relationships between requirements, design artifacts, models, and verification evidence are first-class objects that can be queried, traversed, and automatically updated.

This distinction matters operationally in several ways.

Requirements as nodes in a connected model, not lines in a document. In a graph-based requirements environment, a requirement is connected to the design artifact that satisfies it, the test that verifies it, the interface it constrains, and the parent requirement it decomposes from. When any of those connected artifacts changes, the system can immediately surface which requirements are affected, which coverage claims are now suspect, and which teams need to be notified.

At the scale of Joby’s certification program — where a single design change can touch requirements across propulsion, flight controls, and structural margins simultaneously — this is not a convenience feature. It is how the program prevents the kind of silent coverage degradation that document-based programs discover during final compliance audits.
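The data model behind this is a typed graph. The minimal sketch below (node IDs, relation names, and the `ReqGraph` class are all assumptions for illustration, not Flow Engineering’s schema) shows a requirement connected to the artifact that satisfies it, the test that verifies it, the interface it constrains, and the parent it decomposes from — each relationship a queryable edge rather than a line in a document.

```python
# Minimal sketch of requirements as nodes in a typed traceability graph.
from collections import defaultdict

class ReqGraph:
    def __init__(self):
        # node -> list of (relation, neighbor) edges
        self.edges = defaultdict(list)

    def link(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def connected(self, node, relation=None):
        """Everything one hop from `node`, optionally filtered by relation."""
        return [d for r, d in self.edges[node] if relation in (None, r)]

g = ReqGraph()
g.link("REQ-PROP-042", "satisfied_by", "DESIGN-motor-ctrl-v7")
g.link("REQ-PROP-042", "verified_by", "TEST-TP-1187")
g.link("REQ-PROP-042", "constrains", "IF-prop-avionics")
g.link("REQ-PROP-042", "decomposes_from", "REQ-SYS-007")

# "Which test verifies this requirement?" is a one-hop query, not a
# document search.
assert g.connected("REQ-PROP-042", "verified_by") == ["TEST-TP-1187"]
```

Once relationships are first-class objects like this, “what is affected by this change?” becomes a traversal instead of a manual review.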

Live coverage tracking across hundreds of thousands of test points. Tracking whether you are ready to close certification is, at its core, a coverage problem. You need to know, at any point in time, what fraction of your requirements have verified, compliant test evidence — and what fraction do not. In a document-based system, that answer requires manual aggregation across multiple tools and is typically accurate as of the last time someone ran the aggregation. In a model-based system with live traceability, the answer is current by definition.

For a program manager overseeing FAA certification, the difference is the difference between knowing your coverage state and believing you know it.
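Concretely, “coverage state as a property of the system” means it is a query over live links, not a periodically regenerated report. The sketch below is a toy (statuses and field names are assumptions): a requirement counts as covered only if it has at least one passing test whose evidence is still current against the requirement’s latest revision.

```python
# Sketch: coverage as a live query. Stale or failing evidence does not count.
def coverage_state(requirements, evidence):
    """Return (covered, total) over requirements with passing, current tests."""
    verified = {e["req_id"] for e in evidence
                if e["status"] == "pass" and e["current"]}
    covered = sum(1 for r in requirements if r in verified)
    return covered, len(requirements)

reqs = ["REQ-001", "REQ-002", "REQ-003", "REQ-004"]
evidence = [
    {"req_id": "REQ-001", "status": "pass", "current": True},
    {"req_id": "REQ-002", "status": "pass", "current": False},  # stale evidence
    {"req_id": "REQ-003", "status": "fail", "current": True},   # failing test
]
# Only REQ-001 has passing, current evidence; REQ-004 has none at all.
assert coverage_state(reqs, evidence) == (1, 4)
```

The point is not the arithmetic; it is that the inputs are the live graph itself, so the answer cannot drift out of date between report runs.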

Design evolution without traceability collapse. Joby’s design is still evolving. This is not a sign of poor planning. It is the reality of certifying an aircraft type with no precedent, where early test results inform late design decisions, and where the regulatory basis itself may be refined as the FAA develops familiarity with the failure modes. The question is not whether the design will change — it will — but whether the requirements infrastructure can absorb those changes without losing fidelity.

A graph-based system like Flow Engineering can propagate design changes through the model automatically, surfacing affected requirements and broken traceability links in real time. A document-based system requires a human to trace the same path manually, with all the latency and incompleteness that implies.
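That automatic propagation is, at bottom, a graph traversal. The sketch below (toy graph, illustrative IDs; not any vendor’s implementation) walks outward from a changed design artifact and returns everything downstream that must be re-checked — requirements, dependent requirements, and the tests that verify them.

```python
# Sketch: surfacing change impact by breadth-first traversal of a
# traceability graph. Edges point from an artifact to the artifacts
# that must be re-checked when it changes.
from collections import deque

def impacted(graph, changed):
    """Return every artifact reachable downstream of `changed`."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

graph = {
    "DESIGN-wing-spar-v4": ["REQ-STRUCT-118", "REQ-LOADS-022"],
    "REQ-STRUCT-118": ["TEST-TP-2044", "REQ-FCS-310"],
    "REQ-LOADS-022": ["TEST-TP-2051"],
}
# One geometry change ripples into two requirements, a flight-control
# requirement two hops away, and two test points.
assert impacted(graph, "DESIGN-wing-spar-v4") == {
    "REQ-STRUCT-118", "REQ-LOADS-022",
    "TEST-TP-2044", "REQ-FCS-310", "TEST-TP-2051",
}
```

In a document environment, a human performs this same traversal by hand, folder by folder; the graph just makes the transitive closure cheap and complete.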

Cross-team synchronization at interface boundaries. The interface between Joby’s propulsion architecture and its flight control software is one of the most complex in the vehicle. Changes on either side affect the other. In a shared model environment, both teams are working against the same artifact, with changes visible in real time. The need for formal document releases as synchronization points is reduced, and the latency between a change on one side and awareness on the other collapses from weeks to hours.

What This Looks Like for the Program Manager Asking the Question

If you are a program manager at an aerospace company watching the Joby certification program and wondering what applies to your situation, the relevant questions are not about eVTOL specifically. They are about scale, concurrency, and design maturity.

At what point does your current traceability system become unreliable? Every organization has a scale threshold above which manual traceability maintenance degrades. If your program is below that threshold, document-based tools may be adequate. If you are approaching or above it, the failure modes described above are not hypothetical — they are scheduled.

How long does it take you to answer “what is our current coverage state?” If the answer is “we need to run a report” or “let me check with the verification team,” the latency in that answer is a risk. Coverage state should be a property of the system, not a periodic output.

When your design changes, how do you find out which requirements are affected? If the answer involves a human reading through documents, your change impact assessment is bounded by that human’s knowledge of the system. As your program grows, that bound becomes increasingly consequential.

Are your teams discovering interface misalignments before or after integration? Concurrent engineering creates interface risk. The question is whether your coordination architecture surfaces those risks during design or during verification.

The Broader Pattern in Leading eVTOL Programs

Joby is not the only eVTOL program wrestling with these questions. Archer, Lilium’s successor programs, Wisk, and others are all navigating certification frameworks that were not designed for their aircraft types, with teams that are growing rapidly and design states that are inherently less stable than legacy programs at equivalent certification milestones.

What distinguishes the programs that are managing this well from those that are struggling is not team size, regulatory relationships, or engineering talent in isolation. It is the architecture of their coordination infrastructure. Programs that built on graph-based, model-connected requirements management from the beginning are able to absorb change without losing traceability fidelity. Programs that extended document-based toolchains to meet scale demands they were not designed for are spending significant engineering time on coordination overhead that does not directly advance certification.

The lesson for any aerospace program manager is not that you need to replicate Joby’s specific toolchain. It is that the coordination architecture you choose early determines your options late. A requirements graph that reflects your current design state is an asset that compounds as your program matures. A requirements document archive that reflects your design state as of the last baseline is a liability that grows as your design evolves.

Honest Assessment

Joby’s certification program is not finished. The FAA certification timeline has shifted multiple times, as is typical for a novel aircraft type. The coordination architecture described here is a competitive advantage in managing that complexity — it does not eliminate the complexity.

What Flow Engineering and graph-based MBSE provide is not certainty. They provide visibility. The program manager who knows their current coverage state, can assess change impact in real time, and can surface interface misalignments before integration does not have an easier certification program. They have a more legible one.

In aerospace certification, legibility is the precondition for everything else.


Hardware AI Review covers AI-native tools for hardware and systems engineering. Flow Engineering (flowengineering.com) is a platform for requirements management and MBSE designed for complex hardware programs.