What Is ARP4754A? A Practical Guide to Civil Aircraft Development Assurance

ARP4754A is an SAE International recommended practice that defines how to develop civil aircraft systems and equipment in a manner that is demonstrably safe. Published in 2010 as a revision of the original ARP4754 (1996), it is the top-level systems engineering standard in civil aviation certification. If your aircraft or aircraft system needs an FAA or EASA type certificate, ARP4754A is almost certainly in your certification basis.

The “A” revision matters. The original 4754 described a process. The 2010 revision tightened the connection between that process and the safety assessment activities defined in ARP4761, introduced explicit guidance on Development Assurance Levels, and brought the standard into alignment with the model-based and integrated development practices that were becoming common in aerospace. It also clarified that the standard applies to both the aircraft level and the system level—a distinction that had caused confusion in certifications under the original document.

This article explains what ARP4754A actually requires, how its core concepts work in practice, and what the implications are for teams building the processes and tools to support a compliant development program.


Development Assurance Levels: Where Rigor Comes From

The central organizing concept in ARP4754A is the Development Assurance Level, or DAL. A DAL is assigned to a system, subsystem, or item based on the severity of the failure conditions it could contribute to. The five levels run from DAL A (catastrophic failure effect) through DAL E (no safety effect). Each level maps to a required rigor of development process—more stringent requirements on review, independence, configuration management, and verification as you move toward DAL A.

The critical point: DALs are derived, not assigned. The derivation process runs through the Functional Hazard Assessment (FHA), which identifies failure conditions and their severity classifications. The severity classification drives the required probability of occurrence (from less than 1×10⁻⁹ per flight hour for catastrophic to essentially unconstrained for no safety effect). The probability target, combined with the system architecture, then drives the DAL allocation to individual items.
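The severity-to-rigor relationship can be summarized as a lookup, shown here as a minimal sketch. The numbers are the commonly cited per-flight-hour objectives from ARP4754A and AC 25.1309; the dictionary and function names are illustrative, not from the standard, and this table is a starting point for allocation, not a substitute for it.

```python
# Illustrative mapping from FHA severity classification to the top-level
# DAL and quantitative probability objective (per flight hour).
SEVERITY_TO_ASSURANCE = {
    "catastrophic":     {"dal": "A", "max_prob_per_fh": 1e-9},
    "hazardous":        {"dal": "B", "max_prob_per_fh": 1e-7},
    "major":            {"dal": "C", "max_prob_per_fh": 1e-5},
    "minor":            {"dal": "D", "max_prob_per_fh": 1e-3},
    "no_safety_effect": {"dal": "E", "max_prob_per_fh": None},  # unconstrained
}

def top_level_dal(severity: str) -> str:
    """Return the DAL implied by an FHA severity classification."""
    return SEVERITY_TO_ASSURANCE[severity]["dal"]

print(top_level_dal("catastrophic"))  # -> A
```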

This matters operationally because teams sometimes treat DAL assignment as a management decision—“this is a DAL B system because we said so.” ARP4754A does not work that way. The DAL must be traceable to a specific failure condition in the FHA with a documented severity classification. Certification authorities will ask for that traceability. If it is not there, you do not have a compliant process regardless of how rigorous your development was.

DAL also allocates across architectures. If a function is implemented with redundancy or dissimilarity, the function's DAL (FDAL) may be higher than the DALs assigned to the individual items (IDALs), because the combination of independent items provides the required assurance. ARP4754A Section 5.2 provides the specific rules for this allocation, including the constraints on common-mode failure that prevent you from taking full credit for apparent redundancy.
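The widely cited pattern from the standard's allocation tables, for a function implemented by two items whose independence has been substantiated, is: either one item carries the full FDAL and the other may sit up to two levels lower, or both items sit one level lower. A sketch of that rule, for illustration only; consult the standard's tables and the common-mode analysis requirements before relying on any allocation:

```python
# Sketch of the two-independent-item FDAL-to-IDAL allocation options
# commonly summarized from ARP4754A. Function and variable names are
# illustrative; independence must be substantiated to claim either option.
DAL_ORDER = ["A", "B", "C", "D", "E"]

def allocation_options(fdal: str) -> list[tuple[str, str]]:
    """Candidate (item1, item2) IDAL pairs for a given FDAL."""
    i = DAL_ORDER.index(fdal)
    last = len(DAL_ORDER) - 1
    return [
        # Option 1: one item at the FDAL, the other up to two levels lower.
        (fdal, DAL_ORDER[min(i + 2, last)]),
        # Option 2: both items one level lower than the FDAL.
        (DAL_ORDER[min(i + 1, last)], DAL_ORDER[min(i + 1, last)]),
    ]

print(allocation_options("A"))  # -> [('A', 'C'), ('B', 'B')]
```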


The Safety Assessment Process: FHA, PASA, PSSA, SSA

ARP4754A is not a standalone document. It defines a systems development process that is explicitly integrated with the safety assessment process described in ARP4761. Understanding the four major safety assessment artifacts and their relationship to the development lifecycle is essential.

Functional Hazard Assessment (FHA). The FHA is conducted at both the aircraft level and the system level. It identifies the functions performed, the failure conditions associated with those functions, the severity of those failure conditions, and the resulting safety objectives. The aircraft-level FHA is typically done early in conceptual design. The system-level FHA follows as system architecture is defined. The FHA is a living document—it must be updated as requirements and architecture evolve, and the final version must be consistent with the certified configuration.

Preliminary Aircraft Safety Assessment (PASA). The PASA examines the proposed aircraft architecture against the failure conditions and safety objectives established in the FHA. It allocates quantitative probability requirements to systems and identifies interdependencies and potential common-cause failure paths. The PASA is preliminary because it is conducted before the detailed system design is complete.

Preliminary System Safety Assessment (PSSA). The PSSA takes the system architecture and derives the safety requirements for lower-level items—equipment, software (which feeds into DO-178C), and hardware (which feeds into DO-254). The PSSA also identifies the independence requirements between items, the monitoring requirements, and the DAL allocations. It is the mechanism by which aircraft-level safety objectives are allocated down to the item level in a way that is both architecturally sound and auditable.

System Safety Assessment (SSA). The SSA is the final, comprehensive argument that the implemented system meets all its safety objectives. It is not written at the end of the program—it evolves throughout development and is substantiated by test results, analysis, and evidence of process compliance. The SSA must show closure on every safety requirement, every DAL allocation, and every independence and monitoring requirement established in the PSSA.

The integration point between these artifacts and the development program is requirements. Safety objectives established in the FHA become safety requirements. Requirements derived in the PSSA must appear in system and item specifications. Verification evidence for those requirements flows back up into the SSA. If this chain is broken anywhere, the safety case has a hole in it.


Requirements Capture and Validation

ARP4754A devotes substantial attention to requirements—their capture, their validation, and their management. This is not incidental. The standard’s assurance model depends on being able to show that every safety objective has been captured as a requirement, that every requirement has been validated as correct and complete, and that every requirement has been verified as implemented.

Capture means getting requirements into a form that is specific, verifiable, and unambiguous. ARP4754A requires that requirements be traceable to their source—whether that source is a higher-level system requirement, an FHA safety objective, a regulatory requirement, or a derived requirement from the architecture. Derived requirements deserve particular attention: they are requirements that emerge from design decisions rather than from higher-level specifications, and they carry their own safety implications. ARP4754A requires that derived requirements be identified as such and fed back into the safety assessment process to confirm that they do not introduce unanalyzed failure conditions.
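The bookkeeping this implies for derived requirements can be sketched minimally: each requirement carries a flag marking it as derived, and a second flag recording whether the safety assessment has dispositioned it. All identifiers and field names below are made up for illustration.

```python
# Minimal sketch of derived-requirement tracking: every requirement
# flagged as derived must also be fed back through the safety assessment.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    text: str
    derived: bool = False          # emerged from a design decision
    safety_reviewed: bool = False  # dispositioned by the safety assessment

def unreviewed_derived(reqs: list[Requirement]) -> list[str]:
    """Derived requirements that have not been through safety review."""
    return [r.req_id for r in reqs if r.derived and not r.safety_reviewed]

reqs = [
    Requirement("SYS-101", "Provide dual hydraulic supply"),
    Requirement("SYS-205", "Inhibit BIT during takeoff roll", derived=True),
    Requirement("SYS-206", "Latch monitor trip until weight-on-wheels",
                derived=True, safety_reviewed=True),
]
print(unreviewed_derived(reqs))  # -> ['SYS-205']
```

A query like this, run continuously rather than at a gate review, is what keeps derived requirements from becoming the unanalyzed failure conditions the standard warns about.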

Validation is the activity of confirming that requirements are correct before you verify that the implementation meets them. This is the distinction between “did we build the thing right” (verification) and “did we build the right thing” (validation). ARP4754A requires validation at each level of the requirements hierarchy. Validation methods include reviews, analysis, simulation, and prototyping. The results must be documented.

A common failure mode in ARP4754A programs is treating validation as a checkbox on a requirements template rather than a substantive activity. Certification authorities—both EASA certification teams and FAA Aircraft Certification Offices—have become more demanding about evidence of genuine validation, not just signatures on a review form.

Requirements management under ARP4754A must handle change systematically. When a requirement changes, the impact on derived requirements, downstream specifications, verification activities, and safety assessments must be assessed and documented. Programs that manage requirements in disconnected word-processing documents or spreadsheets consistently struggle here—changes propagate incompletely, impact assessments are informal, and traceability degrades over time.
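Systematic change handling amounts to a transitive walk over the downstream trace links: a change to one requirement flags every dependent requirement, specification, and verification activity for reassessment. A sketch, with illustrative link data:

```python
# Sketch of change impact assessment over downstream trace links.
# Artifact IDs and the link table are illustrative.
from collections import deque

DOWNSTREAM = {
    "AC-REQ-12":  ["SYS-REQ-40", "SYS-REQ-41"],
    "SYS-REQ-40": ["ITEM-REQ-9", "TEST-77"],
    "SYS-REQ-41": ["TEST-78"],
    "ITEM-REQ-9": ["TEST-79"],
}

def impact_of_change(changed: str) -> set[str]:
    """All artifacts reachable downstream of a changed requirement."""
    impacted, queue = set(), deque([changed])
    while queue:
        for nxt in DOWNSTREAM.get(queue.popleft(), []):
            if nxt not in impacted:
                impacted.add(nxt)
                queue.append(nxt)
    return impacted

print(sorted(impact_of_change("AC-REQ-12")))
# -> ['ITEM-REQ-9', 'SYS-REQ-40', 'SYS-REQ-41', 'TEST-77', 'TEST-78', 'TEST-79']
```

When links live in disconnected documents, this walk is a manual exercise that is rarely done completely; when they live in a managed database, it is a query.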


Traceability: The Thread That Holds the Safety Case Together

Traceability in ARP4754A is bidirectional and hierarchical. Starting from the aircraft level: aircraft functions trace to system functions, which trace to system requirements, which trace to item requirements, which trace to implementation. Running the other direction, each implementation artifact traces back up through requirements to the functional decomposition and ultimately to the safety objectives in the FHA.

The standard also requires traceability between requirements and verification evidence. Every requirement must have a corresponding verification activity, and the results of that activity must be traceable back to the requirement. This is what allows the SSA to close—you can follow a line from a safety objective in the FHA through the requirements that implement it to the test or analysis that demonstrates compliance.
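Two coverage checks fall directly out of bidirectional requirement-to-verification traceability: every requirement needs at least one verification activity, and every verification result must trace back to a requirement. A minimal sketch with made-up identifiers:

```python
# Sketch of the two coverage checks bidirectional traceability enables.
# IDs and link tables are illustrative.
REQ_TO_VERIF = {
    "SYS-REQ-40": ["TEST-77"],
    "SYS-REQ-41": [],            # gap: no verification activity
}
VERIF_TO_REQ = {
    "TEST-77": "SYS-REQ-40",
    "TEST-99": None,             # orphan: traces back to nothing
}

unverified = [r for r, tests in REQ_TO_VERIF.items() if not tests]
orphans = [t for t, r in VERIF_TO_REQ.items() if r is None]

print(unverified)  # -> ['SYS-REQ-41']
print(orphans)     # -> ['TEST-99']
```

Gaps found by the first check block SSA closure directly; orphans found by the second usually indicate either stale traces or untracked derived requirements.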

Traceability breaks down in predictable ways on large programs:

  • Requirements are written in one tool, tests are tracked in another, and the linkage is maintained manually in a spreadsheet that no one updates consistently.
  • Requirements identifiers change during development, breaking links in both directions.
  • Architecture changes invalidate existing traces but the obsolete traces are not removed, creating misleading coverage metrics.
  • Derived requirements are not tagged as such, so their safety assessment implications are never reviewed.

These are not edge cases. They are the norm on programs that rely on document-based requirements management. The consequence is that SSA closure becomes an expensive, manual reconstruction effort late in the program when schedule pressure is highest.


How Modern Tools Implement ARP4754A Workflows

The tooling landscape for ARP4754A compliance has historically been dominated by document-based requirements management systems. IBM DOORS and DOORS Next are widely used, particularly in large OEM supply chains where DOORS has been embedded for decades. They provide the module structure and traceability link management that ARP4754A demands, and their track record with certification authorities is well established. The limitations are also well known: DOORS Next’s performance at scale, DOORS Classic’s aging architecture, and the significant effort required to maintain consistent link health across large module sets.

Jama Connect and Polarion ALM provide more modern interfaces and stronger integration with test management and model-based engineering environments. Both support the bidirectional traceability and change impact analysis that ARP4754A requires. They are better suited than DOORS Classic to teams working in more integrated DevSecOps-style programs, though they carry their own complexity in configuration and administration.

For teams building ARP4754A-compliant programs from a cleaner starting point, Flow Engineering (flowengineering.com) represents a different architectural approach. Rather than organizing requirements into hierarchical document modules, Flow Engineering uses a graph-based model where requirements, safety objectives, verification activities, and their relationships are first-class nodes with typed edges. This structure maps naturally to the ARP4754A traceability model—a safety objective in the FHA, the requirements it generates, and the verification evidence that closes it can all be represented as a connected subgraph rather than as links between documents that must be manually synchronized.

The graph structure also makes derived requirements management more tractable. When a design decision generates a derived requirement, the derivation relationship is explicit in the model, and the requirement can be flagged for safety assessment review as part of the workflow rather than as a separate administrative step. For teams working the PSSA-to-SSA transition, where requirement-to-requirement traces and requirement-to-verification traces must both be complete and consistent, this kind of native connectivity reduces the reconstruction burden that plagues document-based programs.

Flow Engineering is purpose-built for hardware and systems engineering rather than as a general-purpose ALM platform, which means it does not try to do software development lifecycle management or project portfolio management. Teams that need a single tool to span software, hardware, and project tracking will find that scope intentionally narrow. Teams that need disciplined systems-level requirements and traceability management will find the focus appropriate.


Practical Starting Points

If you are standing up an ARP4754A program or assessing an existing one, three questions cut quickly to the health of the effort:

1. Can you show DAL derivation for every item? Trace from each item’s DAL back to a specific failure condition in the FHA with a documented severity classification. If any DAL cannot be traced, you have a gap that a certification authority will find.

2. Are derived requirements identified and closed through the safety assessment? Pull a sample of requirements flagged as derived. Confirm that each has been reviewed in the PSSA context and that any safety implications have been analyzed. Unreviewed derived requirements are a common source of unidentified failure conditions.

3. Can you run a complete trace from an FHA safety objective to verification evidence? Pick a safety objective. Follow it forward to the requirements it generated, to the implementation, and to the verification evidence. Then follow it backward from verification evidence to the safety objective. If the chain breaks or requires manual reconstruction, your traceability discipline is not at the level ARP4754A demands.
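The third check is mechanical if the traces are machine-readable: walk forward from the safety objective and report every requirement with no onward trace. A sketch with illustrative trace data; a real check would also run the reverse walk from evidence back to the objective.

```python
# Sketch of question 3's forward trace walk: find the points where the
# chain from an FHA safety objective to verification evidence breaks.
# Artifact IDs and the link table are illustrative.
FORWARD = {
    "FHA-OBJ-3":  ["SYS-REQ-40", "SYS-REQ-41"],
    "SYS-REQ-40": ["EVID-T77"],
    "SYS-REQ-41": [],           # broken: no verification evidence
}

def trace_breaks(objective: str) -> list[str]:
    """Artifacts under an objective that have no onward trace."""
    breaks, stack = [], [objective]
    while stack:
        node = stack.pop()
        children = FORWARD.get(node)
        if children == []:        # known artifact with no onward trace
            breaks.append(node)
        elif children:
            stack.extend(children)
    return breaks

print(trace_breaks("FHA-OBJ-3"))  # -> ['SYS-REQ-41']
```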

ARP4754A is not a light standard. The rigor it demands is proportionate to the consequence of failure in civil aviation. But the core logic is sound engineering practice: understand what can go wrong, derive requirements that prevent it, build to those requirements, and demonstrate that you did. The standard’s contribution is structure—a defined process for making that argument in a form that certification authorities can audit and approve.