How Hardware Companies Manage Requirements Across Multiple Regulatory Jurisdictions

A program entering both FAA and EASA markets is not managing one regulatory relationship — it is managing two, often with overlapping but non-identical requirements that diverge in specific clauses, documentation formats, and evidence expectations. The same is true for a medical device seeking FDA 510(k) clearance alongside EU MDR certification, or a defense platform pursuing both military and civil airworthiness simultaneously.

Most engineering teams discover this complexity late. They build a requirements baseline for their primary market, win a contract or a distribution deal in a second jurisdiction, and then face a choice: rebuild from scratch, or patch. Neither is clean. Patching produces a requirements structure where compliance logic is scattered across documents, buried in revision histories, and impossible to audit with confidence. Rebuilding is expensive and delays market entry.

The teams that handle this well solve it before the first requirement is written. Here is how they do it.


The Core Structural Problem

Multi-jurisdictional compliance creates three distinct challenges that must be solved separately:

Baseline coverage. Which requirements framework governs the shared product design? If FAA DO-178C and EASA CS-25 diverge on software assurance levels, which one drives your software architecture? If FDA 21 CFR Part 820 and EU MDR Annex IX diverge on design history file structure, which one governs your QMS?

Derived requirement isolation. Each jurisdiction generates requirements that are specific to its regulatory context — local language requirements, jurisdiction-specific safety margins, authority-specific documentation formats — that do not belong in a shared baseline but must still be traceable to it.

Evidence packaging. A single test campaign should produce evidence that can be extracted, formatted, and presented to each regulator without being rerun or recreated. This requires knowing, at the time evidence is generated, which regulatory packages it will eventually serve.

These three problems have different solutions. Conflating them — trying to solve baseline, derived requirements, and evidence management with a single flat document structure — is the root cause of most multi-jurisdictional compliance failures.


Anchoring the Baseline to the Stricter Framework

The decision of which framework governs the shared baseline should be made analytically, not politically. The question is not “which market are we entering first?” but “which framework is stricter across the dimensions that drive design decisions?”

In avionics software, DO-178C (FAA) and ED-12C (EASA) are functionally identical — EASA formally accepts DO-178C compliance. But EASA requires a PSAC (Plan for Software Aspects of Certification) with more detailed independence declarations than FAA typically enforces in practice. If your program will seek both certifications, your planning documents should satisfy EASA’s specificity standard, because FAA will accept it and EASA requires it.

In hardware safety, DO-254 (FAA) and ED-80 (EASA) are similarly harmonized, but EASA’s AMC 20-152A adds specific requirements for COTS component management that go beyond what FAA guidance explicitly mandates. If you are building avionics hardware for both markets, AMC 20-152A governs your COTS management process.

The analysis should be done requirement-by-requirement, not framework-by-framework. Create a crosswalk table — a structured comparison of every requirement clause in both frameworks. For each clause pair, classify the relationship: identical, equivalent, FAA-stricter, or EASA-stricter. The shared baseline absorbs the stricter clause in every case.
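To make the crosswalk concrete, here is a minimal Python sketch of what one entry might look like as structured data rather than prose. The clause references, Verdict categories, and baseline_source helper are illustrative assumptions, not drawn from either framework's actual text:

    from dataclasses import dataclass
    from enum import Enum

    class Verdict(Enum):
        IDENTICAL = "identical"
        EQUIVALENT = "equivalent"
        FAA_STRICTER = "faa-stricter"
        EASA_STRICTER = "easa-stricter"

    @dataclass
    class CrosswalkEntry:
        faa_clause: str   # FAA-side clause reference
        easa_clause: str  # corresponding EASA-side clause reference
        verdict: Verdict
        rationale: str    # why the verdict was assigned; itself reviewable by auditors

    def baseline_source(entry: CrosswalkEntry) -> str:
        """Return the clause the shared baseline absorbs: always the stricter one."""
        if entry.verdict is Verdict.EASA_STRICTER:
            return entry.easa_clause
        return entry.faa_clause  # identical, equivalent, or FAA-stricter

    # Hypothetical clause pair, for illustration only.
    entry = CrosswalkEntry(
        faa_clause="FAA clause A.1",
        easa_clause="EASA clause B.1",
        verdict=Verdict.EASA_STRICTER,
        rationale="EASA guidance adds an explicit independence expectation.",
    )
    print(baseline_source(entry))  # -> "EASA clause B.1"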

For medical devices, this analysis is more complex. FDA 510(k) and EU MDR operate under fundamentally different paradigms — substantial equivalence versus conformity assessment — which means they are not directly comparable clause-by-clause. But at the level of design controls, risk management, and clinical evidence, MDR is generally more demanding: it requires a Clinical Evaluation Report (CER) with higher evidence standards, a Post-Market Clinical Follow-up (PMCF) plan, and a more rigorous technical documentation structure than FDA typically requires for Class II devices. For a combined FDA/MDR program, MDR drives the shared design controls baseline.

The output of this analysis is a requirements baseline explicitly annotated to show which framework each requirement derives from — the shared baseline is not framework-neutral; it is framework-resolved.


Structuring Jurisdiction-Specific Derived Requirements

Once the shared baseline is established, each jurisdiction will generate additional requirements that apply only to that market. These fall into three categories:

Format and documentation requirements. EASA requires a Declaration of Design and Performance (DDP) for certain avionics equipment. FDA requires a Device Master Record (DMR) structure that is not required by MDR. These are requirements about how you document compliance, not about the product itself.

Jurisdiction-specific safety margins or thresholds. Some regulatory frameworks set quantitative limits — exposure limits, reliability targets, interference thresholds — that differ between authorities. A medical device with both FDA and MDR clearance may face different Essential Performance thresholds under IEC 60601-1 as interpreted by each authority.

Process requirements. Audit readiness expectations, change-control notification triggers, and post-market surveillance reporting frequencies often differ between jurisdictions.

The critical structural rule: jurisdiction-specific derived requirements must be traceable to the shared baseline but must not be embedded in it. They belong in a separate, tagged layer that references the shared baseline without modifying it.

This matters operationally because jurisdictions change. EASA updates its AMC materials. FDA issues new guidance. EU MDR implementation continues to evolve as the EUDAMED database matures. When jurisdiction-specific requirements change, you need to update a discrete, bounded set of requirements — not hunt through a monolithic baseline for everything that might be affected.

In practice, this means your requirements model has an explicit hierarchy: shared baseline requirements at one layer, jurisdiction-specific derived requirements at a second, with each derived requirement tagged with the regulatory authority that generated it and traceable to the parent requirement it derives from.
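A minimal sketch of that hierarchy as data, assuming hypothetical field names and requirement IDs (only the CS-25.1309 reference comes from the examples above):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Requirement:
        req_id: str
        text: str
        layer: str                       # "shared-baseline" or "derived"
        authority: Optional[str] = None  # e.g. "FAA" or "EASA"; None for shared baseline
        framework_ref: str = ""          # governing clause, e.g. "CS-25.1309"
        parent_id: Optional[str] = None  # derived requirements reference a baseline parent

    requirements = [
        Requirement("REQ-100", "The system shall ...", layer="shared-baseline",
                    framework_ref="CS-25.1309"),
        Requirement("REQ-100-EASA-1", "The DDP shall document ...", layer="derived",
                    authority="EASA", parent_id="REQ-100"),
    ]

    # When EASA guidance changes, the affected set is discrete and bounded:
    easa_derived = [r for r in requirements
                    if r.layer == "derived" and r.authority == "EASA"]

The point of the structure is the last line: a jurisdictional update touches a filtered, tagged layer, not the baseline itself.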


Organizing Verification Evidence for Extraction

This is where most programs fail, even when they get the baseline structure right. A test is run once, but its output — the test report, the test procedure, the test results data — must eventually appear in multiple regulatory submissions in different formats, with different surrounding documentation, reviewed by different authority engineers who have different documentation expectations.

The solution is not to run tests twice or maintain two separate test programs. The solution is to capture evidence in a regulatory-neutral format and attach regulatory tags at the point of generation, not at the point of submission.

At the point of test execution, every verification activity should be tagged with: the requirement(s) it verifies (by ID), the regulatory authority package(s) it will serve, and the framework-specific acceptance criteria it was evaluated against. If a test satisfies DO-178C MC/DC coverage requirements and also satisfies ED-12C structural coverage requirements, both tags are applied at execution time.

At the point of packaging, a filtered view of the evidence database is generated per regulatory authority. FAA gets a package that contains all evidence tagged for FAA, organized per DO-178C data item conventions. EASA gets a package that contains all evidence tagged for EASA, reorganized per EASA’s documentation conventions — but drawing from the same underlying evidence corpus.
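As a sketch of tag-at-generation, filter-at-packaging — reusing the hypothetical requirement IDs from the earlier sketch, with artifact IDs and field names that are illustrative only:

    from dataclasses import dataclass

    @dataclass
    class Evidence:
        artifact_id: str
        verifies: list[str]          # requirement IDs this artifact verifies
        packages: set[str]           # authority packages it serves, tagged at execution
        acceptance_refs: list[str]   # framework-specific acceptance criteria evaluated

    corpus = [
        Evidence("TR-042", verifies=["REQ-100"], packages={"FAA", "EASA"},
                 acceptance_refs=["DO-178C MC/DC coverage",
                                  "ED-12C structural coverage"]),
        Evidence("TR-043", verifies=["REQ-100-EASA-1"], packages={"EASA"},
                 acceptance_refs=["EASA-specific criterion"]),
    ]

    def package_for(authority: str) -> list[Evidence]:
        """Filtered view per regulator: same corpus, different extraction."""
        return [e for e in corpus if authority in e.packages]

    faa_package = package_for("FAA")    # TR-042 only
    easa_package = package_for("EASA")  # TR-042 and TR-043, no test rerun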

This architecture means:

  • No test is run twice because it was not tagged correctly the first time.
  • No evidence is recreated from memory months after a test campaign ends.
  • When an auditor asks “show me all evidence supporting your DAL-B software assurance claim for the EASA submission,” the answer is a filtered view, not a manual search.

For medical devices, the same logic applies. ISO 14971 risk management evidence serves both FDA and MDR submissions. But MDR requires that the risk-benefit analysis be presented within a Clinical Evaluation Report structure, while FDA expects it within the 510(k) summary or De Novo submission. The underlying risk analysis is identical. The packaging is different. If you tagged your risk management evidence at the point of generation, packaging is a formatting exercise, not a reanalysis.


How Modern Tooling Makes This Tractable

The structural approach described above — layered baselines, regulatory tagging, filtered extraction — is theoretically implementable in any requirements management tool. In practice, the tools you use determine whether this remains manageable at scale or collapses under its own complexity.

Document-based tools (Word, PDF, even some legacy DOORS configurations) make multi-jurisdictional compliance nearly unworkable at program scale. Tags become manual annotations in prose. Filtered views require someone to manually sort and copy content. Traceability between the shared baseline and jurisdiction-specific derived requirements is maintained in spreadsheets that diverge from the actual documents.

Graph-based, model-native tools are better suited because the regulatory tag is a first-class attribute of a requirement node, not a formatting convention in a document. A filter on “authority == EASA” returns a coherent subgraph, not a manual selection task.
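To illustrate the subgraph idea, here is a sketch using networkx; the node attributes and IDs are hypothetical and do not represent any particular tool's schema:

    import networkx as nx

    G = nx.DiGraph()
    G.add_node("REQ-100", layer="shared-baseline")
    G.add_node("REQ-100-EASA-1", layer="derived", authority="EASA")
    G.add_node("TR-043", kind="evidence", authority="EASA")
    G.add_edge("REQ-100-EASA-1", "REQ-100", relation="derives-from")
    G.add_edge("TR-043", "REQ-100-EASA-1", relation="verifies")

    # "authority == EASA" selects nodes; pull in the baseline parents they trace to.
    easa_nodes = {n for n, d in G.nodes(data=True) if d.get("authority") == "EASA"}
    easa_nodes |= {p for n in easa_nodes for p in G.successors(n)
                   if G.nodes[p].get("layer") == "shared-baseline"}
    easa_view = G.subgraph(easa_nodes)  # a coherent subgraph, not a manual sort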

Flow Engineering is one tool built specifically for this kind of work. Its requirements model treats regulatory attributes as structured metadata on individual requirements, not as document-level properties. A team can tag each requirement with its governing authority, its framework reference (e.g., CS-25.1309, MDR Annex I Section 8), and its role in the baseline hierarchy (shared, derived, jurisdiction-specific). Filtered views per regulatory authority then become a configuration selection, not a documentation project.

For verification evidence, Flow Engineering’s traceability model links evidence artifacts directly to requirements, carrying the regulatory tags forward. When a submission package needs to be assembled for EASA, the filtered view includes all requirements tagged for EASA and all evidence artifacts traceable to those requirements — without requiring manual reconstruction of the traceability chain.

This is worth naming explicitly as a capability rather than a general promise: the practical barrier to multi-jurisdictional compliance in most programs is not engineering judgment. Engineers generally understand which requirements apply in which market. The barrier is the absence of tooling that makes it operationally manageable to maintain that understanding across thousands of requirements and hundreds of verification activities over a multi-year program.

Flow Engineering’s focus is narrower than full lifecycle ALM platforms like Polarion or Jama Connect — it does not attempt to be a full project management or configuration management system. That deliberate focus means its requirements model is more expressive for this specific task: regulatory tagging, filtered traceability views, and jurisdictional baseline management are structural features, not workarounds.


A Decision Framework for Structuring Your Baseline

Before writing the first shared baseline requirement, answer these questions in order:

1. What markets are in scope for this program’s full lifecycle? Include likely future markets, not just launch markets. A program entering FAA today that has a credible path to EASA in three years should be structured for EASA from the start.

2. For each pair of regulatory frameworks in scope, which is stricter on each design-relevant dimension? Run the crosswalk analysis. The output is a per-clause resolution table. This document is itself a compliance artifact — it shows regulators that you made the framework-resolution decision deliberately.

3. What jurisdiction-specific derived requirements exist in each market, and what shared baseline requirements do they derive from? Map them explicitly (a mechanical check is sketched after this list). If a derived requirement cannot be traced to a shared baseline requirement, either the baseline is incomplete or the derived requirement is not actually derived — it is a new requirement that belongs in the baseline.

4. At what level of granularity will you tag verification evidence? Requirement-level tagging (each piece of evidence is tagged to specific requirements) is the minimum. For complex programs, method-level tagging (the test method used, the acceptance criteria applied, the framework reference invoked) enables more precise filtered packaging.

5. What tool or model will maintain these tags and traceability relationships over the program lifecycle? If the answer is “spreadsheets,” revisit the question. The maintenance burden of a multi-jurisdictional compliance architecture in spreadsheets grows nonlinearly with program size. At 500+ requirements, it becomes a full-time job with high error rates.
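The traceability rule in question 3 can be checked mechanically. A minimal sketch over the layered structure described earlier, with hypothetical field names and IDs:

    def orphaned_derived(requirements: list[dict]) -> list[str]:
        """Return derived requirement IDs that do not trace to a baseline parent."""
        baseline_ids = {r["id"] for r in requirements
                        if r["layer"] == "shared-baseline"}
        return [r["id"] for r in requirements
                if r["layer"] == "derived"
                and r.get("parent_id") not in baseline_ids]

    reqs = [
        {"id": "REQ-100", "layer": "shared-baseline"},
        {"id": "REQ-200-FDA-1", "layer": "derived", "parent_id": None},
    ]
    print(orphaned_derived(reqs))  # ['REQ-200-FDA-1']: fix the baseline, or promote it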


Honest Assessment

Multi-jurisdictional compliance is expensive regardless of how well you structure it. Two regulators means two sets of review fees, two sets of submission timelines, two sets of auditor relationships, and two sets of post-market reporting obligations. No tooling or methodology eliminates that cost.

What good structure eliminates is the unnecessary cost: the re-run tests, the manually reconstructed evidence packages, the requirement changes that ripple through the wrong baseline layer, the auditor questions that take weeks to answer because no one can trace a specific claim back to its source.

Programs that invest in jurisdictional structure early — crosswalk analysis, layered baselines, tagged evidence — consistently report faster second-market submissions, because the first submission generates a compliance artifact set that was already organized for extraction, not organized for the first regulator and manually reorganized for the second.

The engineering work is the same. The compliance overhead does not have to be.