How to Prepare for a DO-178C Certification Audit

DO-178C audits fail teams that think software certification is a documentation exercise. It isn’t. The standard demands evidence that your development process produced a specific outcome — that every safety requirement was implemented and verified, and that you can prove it. Auditors are trained to find gaps between what your plans say you do and what your records show you actually did.

This guide is written for software leads, systems engineers, and DER liaisons preparing for a Stage of Involvement (SOI) review or final certification audit. It assumes you’ve read DO-178C and its supplements and are looking for operational guidance, not a summary of the standard itself.


What Auditors Actually Look For

Before building a preparation checklist, it helps to understand the auditor’s mental model. DO-178C auditors — whether they’re DERs, ACO representatives, or EASA equivalents — are working through a structured process against your Software Accomplishment Summary (SAS) and its supporting plans. They are asking four core questions:

1. Did you plan what the standard requires? Your Plan for Software Aspects of Certification (PSAC), Software Development Plan (SDP), Software Verification Plan (SVP), Software Configuration Management Plan (SCMP), and Software Quality Assurance Plan (SQAP) must exist, be approved, and be internally consistent. Auditors read these against each other. A verification plan that doesn’t reference the independence requirements stated in your SDP is a finding.

2. Did you follow your plans? Deviation between your stated process and your actual artifacts is the most common source of major findings. If your SVP says all requirements will be reviewed before code is written, but your commit history shows code preceding requirements for three modules, expect questions.

3. Is your traceability complete and bidirectional? Every high-level requirement (HLR) traces to one or more low-level requirements (LLR). Every LLR traces to code. Every HLR and LLR traces to a test case. Every test case traces to a test result. Orphaned requirements — those with no test — and orphaned tests — those with no requirement — are both findings.

4. Can you demonstrate independence where required? For Level A and B software, independence between development and verification is structural, not just procedural. Auditors will look at org charts, review sign-offs, and tool access logs. A developer approving their own test results is a finding regardless of the quality of those results.


The Traceability Evidence Problem

Bidirectional traceability is where most teams underestimate their exposure. Building a Requirements Traceability Matrix (RTM) in a spreadsheet for a system with several hundred requirements is achievable. Maintaining it through design iterations, requirement changes, and code reviews — without letting gaps accumulate — is where spreadsheet-based approaches break down.

Auditors will not simply accept your RTM at face value. They will sample. They will pick a test result, trace it back through the test case to the requirement, then ask to see the design artifact and the code module that implements that requirement. If those links don’t survive random sampling, the RTM is a finding.

What complete traceability evidence looks like:

  • Each HLR has an assigned identifier, a status (approved, under review, draft), and links to the LLRs derived from it
  • Each LLR has links to: the HLR(s) it derives from, the design artifact(s) it informs, the code module(s) that implement it, and the test case(s) that verify it
  • Each test case links to: the requirement(s) it covers, the test procedure, and the test result record
  • No requirement exists without at least one test case (coverage gap)
  • No test case exists without a requirement (a test-to-requirement orphan, which implies either a missing requirement or unapproved test scope)
  • All artifacts have configuration-controlled version identifiers
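The checks above can be automated as a self-audit script. A minimal sketch, assuming a flat export of requirement and test records; the field names and IDs here are hypothetical, so adapt them to whatever your requirements and test tools actually emit:

```python
# Minimal sketch of a bidirectional traceability check. The record
# shapes (dicts keyed by requirement/test ID) are an assumption, not
# a DO-178C-prescribed format.

def find_orphans(hlrs, llrs, tests):
    """Return (requirements with no test, tests with no valid requirement)."""
    covered = set()
    for t in tests.values():
        covered.update(t["covers"])          # requirement IDs this test claims
    all_reqs = set(hlrs) | set(llrs)
    untested = all_reqs - covered            # coverage gaps (a finding)
    orphan_tests = {tid for tid, t in tests.items()
                    if not set(t["covers"]) & all_reqs}  # broken upstream links
    return untested, orphan_tests

# Hypothetical sample data illustrating both finding types.
hlrs = {"HLR-1": {"status": "approved", "children": ["LLR-1", "LLR-2"]}}
llrs = {"LLR-1": {"parent": "HLR-1", "code": ["mod_a.c"]},
        "LLR-2": {"parent": "HLR-1", "code": ["mod_b.c"]}}
tests = {"TC-1": {"covers": ["LLR-1"], "result": "pass"},
         "TC-9": {"covers": ["LLR-99"], "result": "pass"}}  # stale link

untested, orphans = find_orphans(hlrs, llrs, tests)
print(sorted(untested))   # requirements lacking any test case
print(sorted(orphans))    # tests whose requirement link is broken
```

Running a script like this continuously, rather than once before the audit, is what keeps gaps from accumulating between reviews.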

For Level A software, structural coverage analysis (MC/DC) must also trace back to requirements. Coverage holes are not just a testing problem — they’re a requirements problem if requirements don’t exist to justify the missed branches.


Common Audit Findings: Where Teams Get Caught

Knowing what auditors actually cite lets you prioritize preparation effort. These findings recur across programs:

Traceability gaps: The most frequent finding category. Typically: requirements added late in the cycle without corresponding tests, or test cases written to the implementation rather than to requirements (meaning tests exist but the upstream requirement link is missing or was added retroactively).

Undocumented deviations from plans: A team changes its review process mid-program — reasonable, even necessary — but doesn’t update the SVP or log the change through the change management process. The artifact history now contradicts the plan. This is a finding even if the new process is better than the old one.

Independence violations: Developers reviewing their own work, or verification performed by someone who was also a code author for that module. In small teams, this is a structural challenge. The solution is documented, formal role separation — not just informally assigning different people. Your SQAP should define independence criteria explicitly.

Inadequate tool qualification: If you used a tool to perform or support a verification activity — a static analysis tool, a test coverage tool, a model-based design tool — DO-330 requires that tool be qualified at the appropriate Tool Qualification Level (TQL). Teams frequently use tools without completing or documenting qualification. Auditors will ask about every tool in your toolchain.

Configuration baseline problems: Artifacts that can’t be reconstructed from a specific baseline — where the SAS points to a configuration label that doesn’t exactly reproduce the reviewed artifacts — are a critical finding. Your SCMP must define and enforce baseline procedures, and your records must demonstrate they were followed.

Missing or incomplete problem report closure: Every problem report (PR) opened against the software must be closed, with documented resolution and re-test evidence. Open PRs at certification time — especially if the software is claimed ready — require disposition. PRs closed without documented re-verification are also a finding.
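The baseline problem in particular lends itself to automation: record a hash manifest when the baseline is cut, then re-verify the artifact set before the SAS points at it. A minimal sketch, assuming a simple path-to-digest manifest; the format is illustrative, not anything DO-178C or your SCM tool prescribes:

```python
# Hedged sketch: verify that artifacts on disk still match the hashes
# recorded when a configuration baseline was cut. The manifest format
# (path -> sha256 hex digest) is an assumption for illustration.
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_baseline(manifest):
    """Return {path: problem} for artifacts missing or changed since baseline."""
    problems = {}
    for path, recorded in manifest.items():
        p = Path(path)
        if not p.exists():
            problems[path] = "missing"
        elif sha256_of(p) != recorded:
            problems[path] = "hash differs"
    return problems

# Demonstration against a throwaway directory standing in for the vault.
with tempfile.TemporaryDirectory() as vault:
    srs = Path(vault) / "srs_v2.doc"
    srs.write_text("HLR-1: the system shall ...")
    manifest = {str(srs): sha256_of(srs)}           # cut the baseline
    srs.write_text("HLR-1: edited after baseline")  # uncontrolled change
    findings = verify_baseline(manifest)
print(findings)   # flags the artifact whose hash no longer matches
```

A check like this is a supplement to, not a substitute for, reconstructing the build from the configuration label itself.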


Building Your Pre-Audit Checklist

Structure your internal audit preparation in phases. Run this process at least 60 days before your scheduled SOI or certification audit.

Phase 1: Plans and Process Alignment (Weeks 1–2)

  • Collect all approved plans (PSAC, SDP, SVP, SQAP, SCMP) plus the SAS, and confirm current revision status
  • Cross-reference each plan for internal consistency: do they reference the same lifecycle, the same tools, the same review criteria?
  • Identify every deviation from the plans that occurred during the program and verify each is documented in the configuration management system with a rationale
  • Confirm all plan changes went through your formal change control process with appropriate review and approval signatures

Phase 2: Traceability Coverage Audit (Weeks 2–4)

  • Generate a complete RTM covering HLRs → LLRs → design → code → test cases → test results
  • Run a coverage analysis: what percentage of HLRs have at least one associated test case?
  • Run an orphan analysis: are there test cases with no upstream requirement?
  • For Level A: confirm MC/DC coverage data exists for every code unit and traces to requirements (decision coverage for Level B, statement coverage for Level C)
  • Identify every untested requirement and document a formal disposition (deferred, N/A with rationale, or open gap requiring remediation)
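The Phase 2 steps above can also be rehearsed the way an auditor will run them: pick a test, walk the chain back to a requirement and forward to code, and note where it breaks. A sketch with hypothetical record shapes; adapt the field names to your own RTM export:

```python
# Auditor-style spot check: walk one chain TC -> LLR -> HLR/code and
# report either the complete chain or the first broken link.

def trace_chain(test_id, tests, llrs):
    """Return the resolved chain for a test, or the point where it breaks."""
    test = tests.get(test_id)
    if test is None or test.get("result") is None:
        return {"broken_at": "test result"}
    llr = llrs.get(test.get("llr"))
    if llr is None:
        return {"broken_at": "LLR link"}
    if not llr.get("code"):
        return {"broken_at": "code link"}
    return {"hlr": llr["hlr"], "code": llr["code"], "result": test["result"]}

# Hypothetical sample: one sound chain, one dangling LLR reference.
llrs = {"LLR-7": {"hlr": "HLR-2", "code": ["nav/filter.c"]}}
tests = {"TC-31": {"llr": "LLR-7", "result": "pass"},
         "TC-32": {"llr": "LLR-8", "result": "pass"}}

print(trace_chain("TC-31", tests, llrs))  # complete chain
print(trace_chain("TC-32", tests, llrs))  # breaks at the LLR link
```

Sampling a few dozen random chains this way before the audit tells you whether the RTM will survive the auditor doing the same thing.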

Phase 3: Independence and Records Review (Weeks 3–5)

  • Map all review and verification records to reviewers; flag any instance where author and verifier are the same person for a given artifact
  • Confirm tool qualification records exist for every verification tool in the toolchain
  • Verify all problem reports are closed with documented resolution and re-verification evidence
  • Confirm configuration baselines can reproduce the exact artifact set referenced in the SAS
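The first Phase 3 item reduces to a simple screen once review records are exported. A sketch, assuming each record lists the artifact, its authors, and its verifier; the field names are hypothetical:

```python
# Phase 3 independence screen: flag every verification record whose
# verifier also appears among the authors of the artifact. Record
# layout is an assumption; pull the real data from your review tool.

def independence_violations(records):
    """records: list of {artifact, authors, verifier}; return flagged artifacts."""
    return [r["artifact"] for r in records if r["verifier"] in r["authors"]]

records = [
    {"artifact": "mod_a.c", "authors": {"kim"},        "verifier": "lee"},
    {"artifact": "mod_b.c", "authors": {"lee", "kim"}, "verifier": "lee"},
]
print(independence_violations(records))  # artifacts needing re-verification
```

Anything this screen flags needs independent re-verification before the audit, not just an explanation.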

Phase 4: DER Coordination (Weeks 4–6)

  • Brief your DER on the results of your internal audit before the formal review
  • Provide your DER with the RTM, coverage summary, and any open findings from your self-audit
  • Agree on the sampling approach the DER will use so there are no surprises
  • Document all DER feedback from prior SOIs and confirm each item was addressed

Structuring Your DER Relationship

DERs (Designated Engineering Representatives) are authorized by the FAA to perform certain certification functions on the agency’s behalf. The teams that have the smoothest audits treat their DER as an engineering partner throughout the program, not as a gatekeeper at the end.

Establish review points early. SOI-1 (planning review) is your chance to get DER alignment on your plans before you’re committed to executing against them. Problems caught at SOI-1 cost a fraction of what they cost when discovered at SOI-4 (final review).

Provide readable artifacts. Your DER will sample your traceability. If your RTM is a 3,000-row spreadsheet with no filtering and no visual coverage summary, you’re making their job harder and creating conditions for misunderstandings. A clear, navigable RTM — with coverage percentages by requirement category and quick access to linked artifacts — makes sampling efficient and demonstrates process maturity.

Brief openly on findings. Before any formal review, tell your DER what you found in your internal audit and what you did about it. DERs expect to find imperfections. What creates problems is discovering gaps the team was aware of but didn’t disclose or address.

Document every DER communication. Meeting notes, email decisions, and verbal agreements should all be captured in a log tied to your configuration management system. Informal agreements that aren’t recorded become disputed facts months later.


How Modern Tooling Reduces Preparation Time

Manual RTM construction and coverage analysis at scale is where preparation timelines inflate. A team with 500 HLRs and 2,000 test cases, maintaining traceability in spreadsheets, can spend three to four weeks preparing audit-ready evidence. That’s engineering time that isn’t going into the product.

Teams using graph-based requirements management tools — where traceability is maintained as live links rather than manually copied cell references — can generate coverage reports on demand throughout the program rather than rebuilding them for each audit.
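The "live links" idea is easy to illustrate without reference to any particular product. In the sketch below (illustrative only, not any vendor's API), traceability is a small directed graph and coverage is a reachability query, so the answer is always current rather than rebuilt per audit:

```python
# Illustrative sketch of graph-based traceability: edges are the only
# stored links, and coverage is computed on demand by reachability.
from collections import defaultdict

class TraceGraph:
    def __init__(self):
        self.links = defaultdict(set)      # forward edges, e.g. HLR -> LLRs

    def link(self, src, dst):
        self.links[src].add(dst)

    def downstream(self, node):
        """All artifacts reachable from node (LLRs, code, tests, results)."""
        seen, stack = set(), [node]
        while stack:
            for nxt in self.links[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

g = TraceGraph()
g.link("HLR-1", "LLR-1"); g.link("LLR-1", "TC-5"); g.link("TC-5", "RESULT-5")
g.link("HLR-2", "LLR-2")                   # no test yet: a live coverage gap

covered = {h for h in ("HLR-1", "HLR-2")
           if any(n.startswith("TC-") for n in g.downstream(h))}
print(covered)   # only the HLR with a reachable test case
```

Because the gap on HLR-2 is visible the moment the query runs, it surfaces during development instead of during pre-audit evidence assembly.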

Flow Engineering (flowengineering.com) is one tool built specifically for this workflow. Its graph-based data model maintains bidirectional traceability as a persistent structure, so coverage gaps surface continuously rather than appearing as a pre-audit surprise. The platform generates traceability coverage reports directly, structured around the requirement hierarchy, with drill-down to individual link status. For teams under DO-178C, that means the audit evidence package is largely assembled as a byproduct of normal development workflow rather than as a separate preparation effort.

Flow Engineering’s focus is on requirements and traceability management — it doesn’t replace your test management system or your configuration management system, but it integrates with them. The practical benefit is that your RTM is always current, which is exactly what auditors sample against.

For teams already deep into a program with legacy artifacts in Word documents or DOORS, the calculus is different. Migration has its own cost and risk. But for programs at the planning stage, building traceability in a graph-native tool from the start eliminates the most labor-intensive part of audit preparation.


Honest Assessment

DO-178C certification is achievable for teams that treat process discipline as a continuous activity rather than a pre-audit sprint. The standard is demanding but knowable. Auditors are not adversaries — they’re looking for evidence of a mature engineering process, and they know what it looks like.

The teams that struggle are those that write plans they don’t follow, maintain traceability they don’t trust, and engage their DER too late to course-correct. The teams that succeed run internal audits with the same rigor they’d expect from an FAA representative, fix what they find, and arrive at formal review with records that speak for themselves.

Your audit preparation checklist is only as good as the process it reflects. If the process has been rigorous, preparation is mostly evidence assembly. If the process has had gaps, no amount of document polish will close them before an experienced DER finds them.

Start the internal audit early. Fix what you find. Keep your DER informed. Make your traceability readable. Those four things account for most of the variance between smooth certifications and expensive delays.