How Do You Know If Your Requirements Are Good Enough to Start Design?
The honest answer to this question is that most teams don’t actually know. They start design because the schedule says design starts Monday, then spend the next six months reworking subsystems because the requirements they were working from were incomplete, ambiguous, or contradictory. The cost shows up later, attributed to “integration issues” or “late-breaking customer changes”—but the root cause was almost always requirements that weren’t ready.
This article gives you a concrete checklist you can run against your requirements set before committing to design. Not vague criteria like “requirements should be clear and complete.” Specific, checkable conditions. If you can check every box, you’re ready. If you can’t, you know exactly what to fix.
The Readiness Checklist
Work through each criterion in order. They’re sequenced intentionally—earlier items expose gaps that make later items impossible to assess.
1. Every requirement has a verification method assigned
Before design starts, every requirement needs to have one of the four standard verification methods assigned: test, analysis, inspection, or demonstration. Not “TBD.” Not “test or analysis, TBD which.”
This is the most mechanically skipped step in requirements development, and it’s skipped because it’s uncomfortable. Assigning a verification method forces you to confront whether a requirement is actually verifiable. If you write “The system shall provide an excellent user experience” and then try to assign a verification method, the exercise immediately surfaces that this isn’t a requirement—it’s a wish. A requirement that can’t be verified can’t be satisfied.
How to check this: Run a query against your requirements database. Count requirements with no verification method assigned. The target is zero. If you have more than a handful, stop and resolve them before moving forward. Each unverified requirement is a contract term you can’t close out at acceptance.
Practical note: For derived requirements at the subsystem level, it’s acceptable to assign provisional methods that get refined after architecture decisions. But those must be flagged explicitly, not silently left blank.
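As a concrete illustration, the coverage query can be sketched in a few lines. The schema here is hypothetical (a requirement as a dict with an `id`, an optional `verification_method`, and a `provisional_flagged` marker for derived requirements); adapt the field names to whatever your requirements tool actually exports.

```python
# Sketch of the zero-unassigned check against a hypothetical export.
VALID_METHODS = {"test", "analysis", "inspection", "demonstration"}

def unverified(requirements):
    """Return IDs of requirements with no valid verification method.

    Provisional methods are tolerated only on requirements explicitly
    flagged as provisional (per the practical note above).
    """
    missing = []
    for req in requirements:
        method = (req.get("verification_method") or "").strip().lower()
        if method in VALID_METHODS:
            continue
        if method == "provisional" and req.get("provisional_flagged"):
            continue  # acceptable for derived subsystem requirements
        missing.append(req["id"])
    return missing

reqs = [
    {"id": "SYS-001", "verification_method": "test"},
    {"id": "SYS-002", "verification_method": "TBD"},
    {"id": "SUB-010", "verification_method": "provisional",
     "provisional_flagged": True},
]
print(unverified(reqs))  # → ['SYS-002']
```

Run it as a gate: a non-empty result means item 1 fails, and each listed ID is a contract term you can't close out at acceptance.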
2. No TBDs in safety-critical requirements
TBDs anywhere in a requirements set are a problem. TBDs in safety-critical requirements are a program risk. The distinction matters because safety-critical requirements drive qualification activities, design margins, and hazard analyses. A TBD in a safety requirement means someone is designing a safety-critical function to an undefined specification.
This isn’t a quality standard argument. It’s a systems engineering argument: you cannot perform a complete hazard analysis on a requirement that isn’t fully defined. Fault trees have holes. FMEA results are provisional. Design margins get arbitrarily set because no one wants to be the person who holds the program for a TBD resolution.
How to check this: Every requirement carrying a safety-critical tag or hazard association must have zero TBDs, open actions, or deferred rationale. This includes referenced documents—if a requirement says “per TBD-1234,” the reference counts as a TBD.
The standard to apply: If you wouldn’t sign a safety case with this requirement as written, it’s not ready.
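The TBD scan is also automatable. This sketch assumes hypothetical `safety_critical` and `text` fields; the regex deliberately catches TBDs hiding inside document references like "per TBD-1234", which count per the check above.

```python
import re

# Flag safety-critical requirements containing any TBD, including
# TBDs embedded in referenced-document identifiers.
TBD_PATTERN = re.compile(r"\bTBD(-\w+)?\b", re.IGNORECASE)

def safety_tbds(requirements):
    """Return IDs of safety-critical requirements that contain a TBD."""
    return [
        req["id"]
        for req in requirements
        if req.get("safety_critical")
        and TBD_PATTERN.search(req.get("text", ""))
    ]

reqs = [
    {"id": "SAF-001", "safety_critical": True,
     "text": "The valve shall vent per TBD-1234."},
    {"id": "SAF-002", "safety_critical": True,
     "text": "The tank pressure shall not exceed 350 kPa."},
    {"id": "SYS-010", "safety_critical": False,
     "text": "Display brightness is TBD."},
]
print(safety_tbds(reqs))  # → ['SAF-001']
```

SYS-010 is still a problem worth tracking, but only the safety-tagged hit is a hard stop under this criterion.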
3. Every system requirement traces to a stakeholder need
Traceability from system requirements up to stakeholder needs (customer requirements, operational concepts, regulatory mandates) does two things: it tells you why a requirement exists, and it tells you when you’re gold-plating.
Any system requirement with no parent trace is either an undocumented assumption, a design decision masquerading as a requirement, or a leftover from a previous program that got copied in. All three of those cause downstream problems.
How to check this: Generate an orphan requirements report. Every system-level “shall” with no upward trace needs to be either justified and linked to a stakeholder need, or deleted. Requirements without parents are requirements without owners, and requirements without owners don’t get resolved when they conflict with something else.
Common mistake: Teams trace to internal documents (design specs, interface control documents) and call it complete. Upward traceability means tracing to what a stakeholder actually needs, not to what your organization wrote down.
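An orphan report is a set-intersection check. In this sketch the schema is again hypothetical: each requirement carries a `traces_to` list of parent IDs, and only IDs in the known stakeholder-need set count as upward traces, so links to internal documents fall out automatically.

```python
# Orphan report: system requirements with no upward trace to a
# stakeholder need. Traces to internal documents do not count.
def orphan_report(requirements, stakeholder_ids):
    """Return IDs of requirements lacking any stakeholder-level parent."""
    return [
        req["id"]
        for req in requirements
        if not set(req.get("traces_to", [])) & stakeholder_ids
    ]

reqs = [
    {"id": "SYS-001", "traces_to": ["STK-007"]},   # properly traced
    {"id": "SYS-002", "traces_to": ["ICD-100"]},   # internal doc only
    {"id": "SYS-003", "traces_to": []},            # no trace at all
]
print(orphan_report(reqs, {"STK-007", "STK-008"}))  # → ['SYS-002', 'SYS-003']
```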
4. Interface requirements are defined for all external interfaces
Interface requirements are the first thing that breaks when two subsystems hit integration, and they’re the last thing to get written because everyone assumes the other team is handling them.
Before design starts, every external interface your system has—mechanical, electrical, data, thermal, RF, human—needs requirements covering: what the interface transmits or transfers, the protocol or standard governing it, the environmental or operating conditions at the interface, and who owns each side.
How to check this: Walk your system’s context diagram (if you don’t have one, that’s a separate problem). For every line crossing the system boundary, verify there are requirements covering that interface in your requirements set. Then verify those requirements have verification methods assigned (see item 1). Interface requirements with no verification method are almost universally “we’ll figure it out at integration.”
The ICD question: Interface Control Documents are not a substitute for interface requirements in your requirements database. ICDs are the reference artifact. Requirements need to be traceable, verifiable, and allocated. A link to a PDF is not a requirement.
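The context-diagram walk can be sketched as two nested checks: coverage first, then verification methods on the covering requirements. Interface names and the requirements schema here are hypothetical placeholders.

```python
# For each external interface from the context diagram, report whether
# any requirement covers it, and whether every covering requirement has
# a verification method assigned (ties back to item 1).
def interface_gaps(boundary_interfaces, requirements):
    """Map each uncovered or under-specified interface to its gap."""
    gaps = {}
    for iface in boundary_interfaces:
        covering = [r for r in requirements
                    if iface in r.get("interfaces", [])]
        if not covering:
            gaps[iface] = "no requirements"
        elif any(not r.get("verification_method") for r in covering):
            gaps[iface] = "requirement(s) missing verification method"
    return gaps

reqs = [
    {"id": "IRD-001", "interfaces": ["power-bus"],
     "verification_method": "test"},
    {"id": "IRD-002", "interfaces": ["power-bus"]},  # no method assigned
]
print(interface_gaps(["power-bus", "telemetry-link"], reqs))
# → {'power-bus': 'requirement(s) missing verification method',
#    'telemetry-link': 'no requirements'}
```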
5. No compound requirements
A compound requirement is any single requirement that contains more than one independently verifiable condition. The most common form: “The system shall achieve X and shall also achieve Y.” These requirements look harmless but are catastrophic in practice.
The problems with compound requirements are well-documented but still common:
- You can’t verify them independently. If you close out the requirement with a test, which condition did you satisfy?
- They can’t be partially allocated. If X belongs to Subsystem A and Y belongs to Subsystem B, neither team has a clean requirement.
- They falsely inflate coverage metrics. One closed-out requirement hides two conditions, one of which might be unmet.
How to check this: Scan for requirements with multiple “shall” clauses, requirements connected with “and” or “as well as,” and requirements with enumerated sub-conditions that are independently testable. Each condition becomes its own requirement. If they’re always tested together, note that in the verification approach—but keep them as separate requirements.
The word count tell: Requirements longer than three sentences almost always contain compound conditions. Flag them for review.
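The scan described above can be approximated with a few heuristics. To be clear, this flags suspects for human review rather than deciding anything; the regexes and the three-sentence threshold are rules of thumb from this article, not standards.

```python
import re

# Crude compound-requirement scanner: multiple "shall" clauses,
# condition-joining conjunctions, or more than three sentences.
def compound_suspects(requirements, max_sentences=3):
    """Return IDs of requirements that look compound and need review."""
    suspects = []
    for req in requirements:
        text = req.get("text", "")
        shall_count = len(re.findall(r"\bshall\b", text, re.IGNORECASE))
        conjunction = re.search(r"\b(and shall|as well as)\b",
                                text, re.IGNORECASE)
        sentences = len(re.findall(r"[.!?]", text))
        if shall_count > 1 or conjunction or sentences > max_sentences:
            suspects.append(req["id"])
    return suspects

reqs = [
    {"id": "SYS-001",
     "text": "The system shall achieve X and shall also achieve Y."},
    {"id": "SYS-002",
     "text": "The system shall operate from a 28 V supply."},
]
print(compound_suspects(reqs))  # → ['SYS-001']
```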
6. A peer review has been completed by someone who didn’t write the requirements
Every previous item on this checklist can be performed by automated tooling or by the author running self-checks. This one cannot. The author who wrote the requirement is the worst person to review it for completeness and ambiguity—not because they’re incompetent, but because they know what they meant and will unconsciously read that meaning into ambiguous text.
Peer review by an independent reviewer—a systems engineer on a different subsystem, a safety engineer, a test engineer—consistently surfaces assumptions the author didn’t know they were making. Test engineers are particularly valuable reviewers because they will immediately ask “how would I verify this?” for every requirement they read.
How to check this: You need a documented review record, not a “yes, someone looked at it” attestation. The review should record: who reviewed it, what version they reviewed, what issues were raised, and how those issues were dispositioned. Reviews with no issues raised are a yellow flag—either the requirements are unusually good, or the reviewer wasn’t actually reviewing.
The scope question: Peer review doesn’t mean every engineer reviews every requirement. It means requirements are organized into reviewable sets, each set is reviewed by at least one person who didn’t write it, and all review-identified issues are resolved before design starts.
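Even the review criterion has a machine-checkable core. This sketch encodes the four record fields named above; the record shape is hypothetical, and independence here means only "reviewer is not an author", which is the minimum bar, not the whole story.

```python
# A review record is acceptable only if the reviewer is independent,
# the reviewed version is recorded, and every issue is dispositioned.
def review_complete(record, authors):
    """Return True if the review record meets the documented criteria."""
    reviewer = record.get("reviewer")
    return (
        reviewer is not None
        and reviewer not in authors
        and record.get("version") is not None
        and all(issue.get("disposition")
                for issue in record.get("issues", []))
    )

record = {
    "reviewer": "test-eng-1",
    "version": "Rev B",
    "issues": [{"id": 1, "text": "ambiguous tolerance",
                "disposition": "reworded"}],
}
print(review_complete(record, authors={"sys-eng-4"}))   # → True
print(review_complete(record, authors={"test-eng-1"}))  # → False (self-review)
```

The "no issues raised" yellow flag from above stays a human judgment; a script can confirm the record exists, not that the review was real.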
Why Teams Start Design Before Requirements Are Ready
The answer is almost always schedule pressure, and the logic is almost always wrong.
The reasoning goes: “We’re behind, we need to start making design decisions, we’ll finalize requirements in parallel.” This feels like managing schedule risk. It is actually creating technical risk and transferring it forward in time, where it will be more expensive.
Here is what actually happens when you start design with unready requirements:
Rework at the most expensive phase. Design decisions made against incomplete requirements get invalidated when requirements mature. Rework during design is expensive. Rework during integration—when you discover two subsystems made incompatible assumptions about an undefined interface—costs multiples of what the upstream requirements gap would have cost to resolve.
Assumption proliferation. Every designer who encounters a TBD or an ambiguous requirement makes a local assumption to keep moving. Those assumptions are usually undocumented. By integration, you have an undocumented assumption layer sitting between your requirements and your design that nobody can fully reconstruct.
Verification deferred to impossibility. Requirements that never got verification methods assigned end up verified by analysis at the end of the program—because analysis is the only method left that doesn’t require scheduling a test. That analysis often becomes engineering judgment dressed up in report format.
The actual schedule cost of starting design with unready requirements is well-established: studies across aerospace and defense programs consistently show that requirements defects found during integration cost 10x to 100x more to resolve than defects found during requirements review. “Parallel development” rarely saves the schedule time it promises.
Making Requirements Readiness Visible
The checklist above can be run entirely by hand. It’s also tedious enough that, under schedule pressure, it gets abbreviated, delegated incompletely, or skipped in favor of a subjective confidence vote from the chief engineer.
Tools like Flow Engineering address this by making requirements readiness a continuous, automatically computed metric rather than a periodic manual audit. Verification method coverage, orphan requirements, TBD counts, and traceability completeness are surfaced as live metrics against the requirements set—so readiness isn’t assessed once at a gate review, but is visible throughout requirements development. When a new requirement is added without a verification method, the coverage metric drops immediately. That’s a different forcing function than “we’ll check completeness at CDR.”
The practical effect is that requirements readiness becomes a dashboard metric rather than a checklist that lives in a spreadsheet and gets updated monthly if someone remembers.
The Honest Summary
Requirements readiness isn’t a philosophical condition. It’s a checklist. Run the checklist:
- Every requirement has a verification method assigned.
- No TBDs in safety-critical requirements.
- Every system requirement traces to a stakeholder need.
- Interface requirements are defined for all external interfaces.
- No compound requirements.
- Peer review completed by someone who didn’t write them.
If you can check all six, you’re ready to start design. If you can’t, the question isn’t whether to start design—it’s which of these gaps you’re accepting as a known risk, and whether you’ve documented that decision explicitly enough that someone can hold you accountable for it later.
Most teams that skip this checklist don’t do it because they don’t know better. They do it because the schedule is uncomfortable and the consequences are deferred. The consequences always arrive eventually.