The Part Most Startups Get Wrong

Founding teams that come to aviation software from web, embedded, or defense backgrounds share a common pattern: they understand that DO-178C is rigorous, they budget time for documentation, and they still significantly underestimate what the standard requires at the requirements level specifically.

The assumption is usually that requirements are the easy part — you write down what the software is supposed to do, then you build it. The real work is in the code and the testing.

That framing is wrong, and it commonly costs programs six to eighteen months of rework when a DER (Designated Engineering Representative) first reviews their artifacts.

DO-178C treats software requirements as a primary engineered artifact, not a precursor to one. Requirements must be derived, verified, baselined, and traced with the same rigor as source code. Understanding this early is the difference between a compliant program and a retrofit.


What DO-178C Actually Requires at the Requirements Level

DO-178C is structured around software life cycle processes, and the Software Requirements Process is one of the foundational ones. Annex A Table A-3 (Verification of Outputs of Software Requirements Process) maps its objectives to Design Assurance Levels (DAL A through E). Before you can interpret what those objectives require, you need to understand the two-tier architecture of software requirements that DO-178C mandates.

High-Level Requirements (HLRs)

High-Level Requirements are derived from the system requirements and the system safety assessment. They describe what the software must do — functional behavior, performance constraints, safety constraints, and interfaces — at a level abstract enough that multiple valid architectures could satisfy them.

The system requirements come from the aircraft or system-level design process, typically governed by ARP4754A. Your HLRs must be traceable back to those system requirements. Every HLR should have a parent, and every safety-relevant system requirement should have at least one child HLR.

Derived HLRs — requirements that have no parent system requirement but arise from software design decisions — are not forbidden, but they require special handling. You must document them as derived, and they must be fed back into the safety assessment process because they may affect system-level safety conclusions.
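
To make that concrete, here is a minimal sketch of an HLR record in which derivation is a structural property rather than a comment. This is illustrative Python with hypothetical field names, not the schema of any particular tool:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Requirement:
        req_id: str            # stable unique identifier, e.g. "HLR-0042"
        text: str              # the "shall" statement itself
        parents: tuple = ()    # parent system-requirement IDs
        derived: bool = False  # a derived HLR has no parent link

        def __post_init__(self):
            # Exactly one state is valid: derived with no parents, or
            # non-derived with at least one parent.
            if self.derived == bool(self.parents):
                raise ValueError(f"{self.req_id}: derived flag inconsistent with parents")

The point of the consistency check is that an unflagged derived requirement cannot enter the corpus silently; it fails at authoring time instead of surfacing during a safety assessment review.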

Low-Level Requirements (LLRs)

Low-Level Requirements are derived from the HLRs and define the software in enough detail that a developer can write source code directly from them without additional design decisions. They describe how the software implements the HLRs: data structures, algorithms, control flow constraints, memory allocation behavior, timing margins, error handling sequences. As a purely illustrative example, an HLR might state that the software shall annunciate loss of an air data input within 100 milliseconds, while the corresponding LLRs specify the sampling rate, the missed-sample count that declares the loss, and the exact fault flag and annunciation path.

The distinction matters because the two tiers have different verification activities. You verify HLRs for correctness against system requirements. You verify LLRs for correctness against HLRs and for the absence of unintended functions. Source code is verified against LLRs, not HLRs.

If you skip the LLR tier and write source code directly from HLRs, you have a compliance gap that no amount of testing will close.


The Software Requirements Standards Document

The Software Requirements Standards (SRS), not to be confused with a Software Requirements Specification, is a process document that defines the rules your team will follow when writing requirements. It is called out explicitly in DO-178C section 11.6, and it is one of the standards documents a DER evaluates early, typically at the first Stage of Involvement (SOI) audit.

A compliant SRS defines:

Syntax and format rules. How requirements are written: sentence structure, use of “shall” versus “should,” and prohibition of ambiguous terms like “fast,” “robust,” or “appropriate.” The goal is to make requirements testable by inspection (a minimal lint sketch follows this list).

Uniqueness rules. Every requirement has a stable, unique identifier. This is not a recommendation. Without it, traceability is structurally impossible.

Partitioning between HLRs and LLRs. The document should define what belongs at each level so that reviewers can make consistent decisions across the team and across time.

Handling of derived requirements. How derived HLRs and derived LLRs are flagged, documented, and fed back to the safety process.

Measurability and verifiability criteria. A requirement that cannot be verified is noncompliant. The SRS should define what “verifiable” means for your domain: testable via dynamic test, reviewable via inspection, or analyzable, including by formal methods under the DO-333 supplement.

Configuration management. Requirements are controlled artifacts. The SRS should specify baselining, change control, and impact analysis procedures.
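
As a sketch of how the first two rule categories can be enforced mechanically at authoring time (the banned-term list, rules, and field layout here are illustrative, not normative):

    import re

    AMBIGUOUS = ("fast", "robust", "appropriate", "adequate", "as required")

    def lint(requirements: list) -> list:
        # `requirements` is a list of (req_id, text) pairs; returns findings.
        findings = []
        seen = set()
        for req_id, text in requirements:
            if req_id in seen:
                findings.append(f"{req_id}: duplicate identifier")
            seen.add(req_id)
            if not re.search(r"\bshall\b", text):
                findings.append(f"{req_id}: no binding 'shall' statement")
            for term in AMBIGUOUS:
                if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
                    findings.append(f"{req_id}: ambiguous term '{term}'")
        return findings

    # Example: lint([("HLR-0001", "The software shall respond fast")])
    #   -> ["HLR-0001: ambiguous term 'fast'"]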

If your SRS is a two-page boilerplate document copied from a template with the company name swapped in, a competent DER will identify it quickly. Write it as if it governs real work, because it does.


DAL Objectives: What Changes at A, B, and C

DO-178C scales its objectives by Design Assurance Level. For the requirements process, the core objectives (requirements must be correct, consistent, unambiguous, verifiable, traceable, and conformant to the SRS) apply at all levels from DAL A through C. DAL D drops many objectives outright rather than merely relaxing them; low-level requirements, for example, are not required at all. DAL E has no software objectives.

Between DAL A, B, and C, the main variable is independence in verification.

At DAL C, the requirements verification objectives carry no independence requirement: strictly speaking, the engineer who wrote a requirement may also review it. Most programs still assign a second reviewer as basic hygiene, but the standard does not demand the separation.

At DAL B, several verification objectives acquire an independence requirement. A review or analysis earns verification credit only if it was performed by someone other than the developer of the artifact, and the DER will scrutinize whether your records actually demonstrate that separation.

At DAL A, the independence requirement extends to still more verification objectives across requirements, design, and code. Note what independence means in DO-178C: the verification is performed by a person other than the developer of the item being verified, and a qualified tool may substitute for the human activity. It is defined at the level of the person and the artifact, not the organization. Tool qualification, likewise, is not a DAL A peculiarity: any tool that automates or eliminates a verification activity without its output being verified must be qualified at every level, with the required Tool Qualification Level set under DO-330 by the tool’s role and the software level.

For a startup, the practical implication is this: at DAL A and B, design your verification assignments to support independence before you start writing requirements. A founding team of four engineers can satisfy the requirement by ensuring that nobody formally verifies an artifact they developed, but that separation must be deliberate, recorded per artifact, and visible in your verification records rather than reconstructed when the DER asks.


Traceability: The Architecture of Compliance

Traceability in DO-178C is not a matrix you generate at the end of the program. It is a continuous architectural property of your requirements corpus.

The required chain is:

System Requirements → High-Level Requirements → Low-Level Requirements → Source Code → Tests

And it must be bidirectional. You must be able to:

  • Trace forward: given a system requirement, show every HLR, LLR, source module, and test case that satisfies it.
  • Trace backward: given a source code module or test case, show the LLR it implements, the HLR it derives from, and the system requirement it satisfies.

The backward direction is often neglected. It is the mechanism for detecting unintended functions — source code that does something not required by any LLR. Unintended functions are a compliance finding, not just a code quality issue. At DAL A, their absence must be demonstrated.

Derived requirements at every level must be flagged in the traceability data and must be visible to the safety assessment process. This means your traceability system needs a field — not just a comment in a document — that identifies derivation.

A traceability gap is any requirement, source module, or test case that cannot be fully traced in both directions. DO-178C expects zero traceability gaps in a compliant baseline. In practice, DERs understand that programs have issues; what they are evaluating is whether your process reliably detects and resolves gaps.
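
As a concrete sketch of traceability as a data structure rather than a report, here is a minimal bidirectional trace graph in Python. The class and method names are illustrative, not any particular tool’s model; a real system would also carry link types, versions, and derivation flags:

    from collections import defaultdict

    class TraceGraph:
        def __init__(self):
            self.down = defaultdict(set)  # parent -> children (forward trace)
            self.up = defaultdict(set)    # child -> parents (backward trace)

        def link(self, parent: str, child: str) -> None:
            self.down[parent].add(child)
            self.up[child].add(parent)

        def forward(self, node: str) -> set:
            # Transitive closure downward: every HLR, LLR, source module,
            # and test case that traces to `node`.
            out, stack = set(), [node]
            while stack:
                for child in self.down[stack.pop()]:
                    if child not in out:
                        out.add(child)
                        stack.append(child)
            return out

        def orphans(self, nodes: list) -> list:
            # Artifacts with no parent link: either a trace gap or a
            # derived requirement that was never flagged.
            return [n for n in nodes if not self.up[n]]

Forward tracing is the transitive closure downward, backward tracing reads the same edges in reverse, and a gap audit is a query over the whole corpus, which is what makes continuous detection cheap.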


Verification Activities Required for Each Level

Verification in DO-178C is not synonymous with testing. The standard recognizes three verification methods: review, analysis, and test. All three produce evidence. Evidence must be configuration-controlled.

Verifying HLRs against system requirements:

  • Review of HLRs for accuracy, consistency, traceability, algorithm correctness, and absence of unintended functions
  • Review for compliance with the SRS
  • Analysis of derived HLRs to confirm safety assessment coverage

Verifying LLRs against HLRs:

  • Review of LLRs for accuracy, consistency, verifiability, and traceability to HLRs
  • Review for algorithm correctness and timing/sizing constraints
  • Review for compliance with the SRS

Verifying source code against LLRs:

  • Code reviews (with independence at DAL A/B)
  • Structural coverage analysis: statement coverage at DAL C, adding decision coverage at DAL B, and modified condition/decision coverage (MC/DC) at DAL A. Coverage analysis supplements requirements-based testing; it is not a substitute for it (see the sketch after this list)
  • Dynamic test execution against LLR-derived test cases
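
For readers new to MC/DC, here is a minimal illustration (not taken from the standard’s text) of what the criterion demands for a single two-condition decision:

    def gate(a: bool, b: bool) -> bool:
        # One decision containing two conditions.
        return a and b

    # MC/DC requires showing that each condition independently affects the
    # decision's outcome:
    #   (a=True,  b=True)  -> True    baseline
    #   (a=False, b=True)  -> False   only `a` changed, so `a` flips the outcome
    #   (a=True,  b=False) -> False   only `b` changed, so `b` flips the outcome
    # Three cases achieve MC/DC for this decision (exhaustive testing would
    # need four), and each case must still originate from a requirements-based
    # test rather than being invented to satisfy the coverage tool.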

Each verification activity produces a record. Records identify the artifact version reviewed, the reviewer, the date, the method, the findings, and their resolution. A review that leaves no record is not a verification activity in the DO-178C sense, regardless of how thorough it was.
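
A minimal sketch of such a record as a data structure, using the fields named above plus an author field (an assumption, added so independence is checkable); the layout is illustrative, not a format DO-178C prescribes:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class VerificationRecord:
        artifact_id: str        # what was verified, e.g. "HLR-0042"
        artifact_version: str   # the exact revision or baseline reviewed
        method: str             # "review", "analysis", or "test"
        author: str             # developer of the artifact (assumed field)
        verifier: str           # person who performed the activity
        date: str               # ISO 8601 date of the activity
        findings: tuple = ()
        resolutions: tuple = ()

        def independent(self) -> bool:
            # DO-178C independence at its core: the verifier is not the
            # developer of the item being verified.
            return self.verifier != self.author

Carrying author and verifier on every record is what turns the DAL A and B independence argument into something auditable rather than anecdotal.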


Building a Compliant Process from Day One

The structural mistake first-time programs make is building requirements in a convenient tool — Confluence, Notion, a shared Google Doc, even a well-organized Word file — with the intention of migrating to a compliant system later. This almost never works.

The migration cost is not the import. It is the retroactive assignment of unique identifiers, the reconstruction of parent-child relationships, the identification of derived requirements that were never flagged, the creation of missing traceability links, and the production of configuration management records for artifacts that were never baselined. On a medium-complexity DAL B program, this process can consume three to six engineer-months and still leave traceability gaps that require rework.

The correct approach is to design your requirements infrastructure for compliance before the first requirement is written. That means:

  1. Your SRS is written and baselined before requirements authoring begins.
  2. Every requirement is captured in a system that enforces unique identifiers and supports parent-child linkage.
  3. Derived requirements are flagged at authoring time, not identified in a retrospective audit.
  4. Traceability data is a live artifact, not a report generated for a milestone review.
  5. Configuration management is applied to requirements with the same discipline as source code, including change impact analysis (a sketch follows this list).
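
As a sketch of what item 5’s impact analysis can look like mechanically, assuming forward-trace links are available as a parent-to-children map like the graph shown earlier (function and field names are illustrative):

    def impacted(down: dict, changed: str) -> set:
        # Everything downstream of a changed requirement loses its
        # verification credit and must be re-verified.
        hit, stack = set(), [changed]
        while stack:
            for child in down.get(stack.pop(), ()):
                if child not in hit:
                    hit.add(child)
                    stack.append(child)
        return hit

    # Example: if HLR-42 changes, impacted(down, "HLR-42") returns every LLR,
    # source module, and test case whose verification must be re-run against
    # the new baseline.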

Tools that were not designed for this workflow impose that discipline imperfectly, and the gaps accumulate faster than most teams anticipate.

Flow Engineering (flowengineering.com) is one of the tools built specifically for this problem. It structures requirements capture around the HLR/LLR distinction natively, enforces traceability links as a data model property rather than a manual annotation, and maintains the bidirectional trace chain from system requirements through to test coverage in a live graph rather than a static matrix. For startups entering their first DO-178C program, the practical value is that the system guides compliant authoring behavior from the first requirement — derived requirements are flagged structurally, orphaned requirements surface automatically, and traceability gaps are visible in real time rather than discovered during a DER review.

The focused scope of a tool like Flow Engineering, purpose-built for requirements and traceability on certified programs, reflects a different design philosophy from legacy requirements platforms that attempt to serve every project management function. For a startup that needs to build a credible, DER-reviewable requirements baseline without a legacy of institutional process, that focus is an advantage.


Honest Summary

DO-178C does not ask you to write good requirements. It asks you to demonstrate, through configuration-controlled artifacts and traceable verification records, that your requirements are correct, consistent, unambiguous, verifiable, and complete relative to the system requirements they derive from — and that this was verified by qualified personnel with appropriate independence for your DAL.

That is a different problem than writing good requirements, and it requires different infrastructure.

Startups that build that infrastructure before their first requirement is written will move faster, spend less on DER review cycles, and have a defensible compliance baseline when they reach certification. Startups that plan to add it later are planning a retrofit, and retrofits in certified programs are expensive by definition.

The standard is not hostile to startups. It is indifferent to schedule pressure. Build the process first.