What Is a Requirements Smell and How Do You Find Them?
In 1999, Martin Fowler popularized the term “code smell” (coined by his collaborator Kent Beck) in Refactoring — the idea that certain patterns in code aren’t bugs, exactly, but they signal that something is probably wrong underneath. A function that’s 400 lines long isn’t necessarily broken. It just smells.
Requirements have the same problem. A requirement can be grammatically correct, traceable in your tool, and signed off by a stakeholder — and still carry a smell that will cause real trouble in design, verification, or acceptance. The smell doesn’t mean the requirement is wrong. It means you should look harder before it becomes expensive.
This article defines what requirements smells are, names the six most common ones, and explains how to find them systematically — before your verification team finds them for you in the worst possible way.
What Is a Requirements Smell?
A requirements smell is a surface-level characteristic of a requirement that indicates a likely underlying problem — ambiguity, untestability, scope creep, or missing rationale — without necessarily proving the requirement is incorrect.
The analogy to code smells is intentional and useful. Just as a developer can scan a codebase for long methods, excessive coupling, or magic numbers without fully executing the code, a requirements engineer can scan a requirements set for linguistic and structural patterns that predict problems downstream.
The critical distinction: a smell is a heuristic, not a rule. “Shall ensure” is suspicious. It might be fixable with a single word change. Or it might reveal that no one actually knows what the system is supposed to do. The smell prompts investigation. The investigation tells you which.
The Six Requirements Smells You’ll Encounter Most
1. Ambiguous Verbs
The most common smell. Certain verbs appear authoritative but are structurally meaningless for verification purposes.
The offenders: ensure, support, facilitate, maximize, minimize, optimize, handle, consider, adequate
Why they smell: Each of these verbs describes a desired posture rather than a measurable outcome. “The system shall ensure network availability” could mean anything from 99.9% uptime to “try not to break it.”
What it often signals: The requirement was written before the stakeholder had defined what success actually looks like. The word “ensure” is doing the work that a performance criterion should be doing.
How to spot it: A simple word-search pass on your requirements document. Build a list of forbidden verbs and run it. You’ll be uncomfortable with how many hits you get.
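That word-search pass is a few lines of scripting. Here is a minimal sketch; the verb list and requirement IDs are illustrative, and a real pass would also stem variants like “ensuring” or “optimized”:

```python
import re

# Starter forbidden-verb list from this article; extend with your program's own offenders.
FORBIDDEN_VERBS = {
    "ensure", "support", "facilitate", "maximize", "minimize",
    "optimize", "handle", "consider", "adequate",
}

def flag_ambiguous_verbs(requirements):
    """Return (req_id, matched_words) for requirements containing forbidden verbs."""
    hits = []
    for req_id, text in requirements.items():
        words = set(re.findall(r"[a-z]+", text.lower()))
        matched = sorted(words & FORBIDDEN_VERBS)
        if matched:
            hits.append((req_id, matched))
    return hits

reqs = {
    "SYS-001": "The system shall ensure network availability.",
    "SYS-002": "The system shall report position within 500 ms.",
}
print(flag_ambiguous_verbs(reqs))  # → [('SYS-001', ['ensure'])]
```

Every hit is a candidate for investigation, not an automatic rejection — the smell prompts the question, the human answers it.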
2. Compound Requirements
The “and/or” smell. A single shall statement that covers two or more distinct functions.
Example: The system shall acquire GPS position data and transmit it to the ground station within 500 ms.
Why it smells: Acquisition and transmission are two different functions with potentially different failure modes, different responsible subsystems, and different verification methods. If acquisition works but transmission doesn’t, has the requirement been met?
What it signals: The requirement was written for readability or brevity, not for traceability. Compound requirements create ambiguity in verification closure and make gap analysis unreliable.
How to spot it: Search for “and” and “or” in shall statements. Not every occurrence is a compound requirement — “the system shall acquire X and store it locally” may be a single atomic function — but every occurrence deserves a look.
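The same scripted approach works here. This sketch (IDs hypothetical) deliberately surfaces candidates rather than rendering verdicts, since “and” inside a single atomic function is legitimate:

```python
import re

def flag_compound_candidates(requirements):
    """Flag shall statements containing 'and'/'or' for manual review."""
    candidates = []
    for req_id, text in requirements.items():
        lowered = text.lower()
        if "shall" not in lowered:
            continue  # only inspect binding requirement statements
        if re.search(r"\b(and|or)\b", lowered):
            candidates.append(req_id)
    return candidates

reqs = {
    "SYS-010": ("The system shall acquire GPS position data and "
                "transmit it to the ground station within 500 ms."),
    "SYS-011": "The system shall log each command receipt.",
}
print(flag_compound_candidates(reqs))  # → ['SYS-010']
```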
3. Implementation-Prescriptive Requirements
The smell where a requirement tells you how to build something rather than what the system must do.
Example: The thermal management subsystem shall use a copper heat spreader with a minimum thickness of 3 mm.
Why it smells: Requirements should define behavior and constraints at the system boundary. When they specify implementation, they eliminate design freedom the engineer needs, often without a justified reason. If there’s a good reason to specify copper, that reason should be documented separately as a constraint with a rationale.
What it signals: Either the design is leaking into the requirements layer, a supplier is writing requirements to lock in their solution, or a past failure drove a knee-jerk constraint that was never formally rationalized.
How to spot it: Look for material specifications, component names, named technologies, and specific architectural choices inside shall statements. Anything that reads like a design decision is a candidate.
4. Untestable Performance Criteria
A close relative of the ambiguous verb smell, appearing in performance requirements whose criteria can’t actually be measured the way they’re written: either the value is missing entirely, or it’s present but unqualified.
Example: The system shall respond quickly to operator inputs.
Or its subtler cousin: The system shall achieve a Probability of Detection of 0.95 under all operating conditions.
The first is obviously untestable — “quickly” specifies no measurable value. The second sounds quantified but fails because “all operating conditions” is undefined.
What it signals: Either the performance value was never derived from a real budget or analysis, or the operating conditions were too hard to enumerate, so someone left them vague. Either way, verification is impossible.
How to spot it: Two passes. First pass: find requirements with no numeric value where one should exist. Second pass: find requirements with numeric values where the measurement method or operating conditions are unspecified.
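Both passes can be roughed out with heuristics. This sketch assumes the input set has already been filtered to performance requirements (so a missing number is genuinely suspicious), and the condition-marker list is a hypothetical starting point to tune against your document’s style:

```python
import re

NUMERIC = re.compile(r"\d")
# Words that often introduce a measurement method or operating condition.
CONDITIONS = re.compile(r"\b(under|when|during|while|measured)\b", re.IGNORECASE)

def two_pass_check(requirements):
    """Pass 1: no numeric value at all. Pass 2: a value, but no stated conditions."""
    no_value, no_conditions = [], []
    for req_id, text in requirements.items():
        if not NUMERIC.search(text):
            no_value.append(req_id)
        elif not CONDITIONS.search(text):
            no_conditions.append(req_id)
    return no_value, no_conditions

reqs = {
    "PERF-001": "The system shall respond quickly to operator inputs.",
    "PERF-002": "The system shall achieve a Probability of Detection of 0.95.",
    "PERF-003": ("The system shall achieve a Probability of Detection of 0.95 "
                 "when measured per test procedure TP-12 in sea state 4."),
}
print(two_pass_check(reqs))  # → (['PERF-001'], ['PERF-002'])
```

Note that passing both checks proves nothing about whether the value was derived from a real budget — that judgment still belongs to a human reviewer.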
5. Orphaned Requirements
Requirements with no parent in the hierarchy — no stakeholder need, no higher-level system requirement, no design input they trace to.
Why they smell: A requirement without a parent has no derivation. That means no one can explain why it exists, which means no one can confidently say whether it should be allocated, verified, or deleted. Orphaned requirements accumulate during change management when parent requirements are deleted and children are left behind.
What it signals: Traceability was maintained inconsistently, or the change control process didn’t enforce bidirectional link management. In the worst case, the requirement was added informally and never reviewed for scope alignment.
How to spot it: This is a structural smell, not a linguistic one. It requires a traceability analysis — finding requirements that have no parent links. This is exactly the kind of check that should be automated.
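In its simplest form, the automated check is a pass over the trace links. A minimal sketch, with hypothetical IDs and a `roots` set for top-level stakeholder needs that legitimately have no parent:

```python
def find_orphans(requirements, parent_links, roots):
    """Return requirement IDs with no parent link.

    `parent_links` maps child ID -> set of parent IDs;
    `roots` are top-level needs exempt from the check.
    """
    return sorted(
        req_id for req_id in requirements
        if req_id not in roots and not parent_links.get(req_id)
    )

reqs = {"STK-001", "SYS-001", "SYS-099"}
links = {"SYS-001": {"STK-001"}}
print(find_orphans(reqs, links, roots={"STK-001"}))  # → ['SYS-099']
```

Because orphans accumulate as parents are deleted during change management, this check is worth running on every baseline, not once per review gate.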
6. Gold-Plated Requirements
Requirements that exist, are traceable, and are testable — but have no identifiable stakeholder who actually needs them.
Example: A reliability requirement of 99.999% availability on a non-safety-critical subsystem where the stakeholder is satisfied with 99.9%.
Why they smell: Gold-plated requirements drive real cost. They require real verification. They constrain real design decisions. And if no stakeholder actually needs the capability, that cost is pure waste. The smell is often invisible because the requirement looks legitimate in the database.
What it signals: The requirement was added by an engineer who thought it was a good idea, inherited from a previous program without review, or generated by a model without validation against actual need.
How to spot it: Requires traceability to stakeholder needs, not just parent requirements. Ask: “Which customer or operational need does this serve?” If the answer requires several hops through internal documents and no one can name a real human or real use case, it’s a candidate for challenge.
How to Find Them Systematically
Finding one smell in isolation is easy. Finding all the smells in a 2,000-requirement document before PDR takes a deliberate process.
Manual Review Checklists
The baseline. Build a checklist that covers each smell type and run it against requirements during authoring. The checklist should include:
- Forbidden verb list (ensure, support, adequate, etc.)
- Structural rules (one function per shall statement)
- Mandatory fields (performance criterion, measurement method, operating condition)
- Traceability completeness (parent link required)
- Stakeholder attribution (origin field required and populated)
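Several of these checklist items are mechanical enough to encode directly. A minimal sketch of a checklist runner — the field names on the requirement record (`parent`, `origin`, and the three mandatory fields) are hypothetical, and each rule is a simplification of the checks discussed above:

```python
import re

MANDATORY_FIELDS = ("performance_criterion", "measurement_method", "operating_condition")

def check_requirement(req):
    """Run each checklist rule against one requirement record; return failed rule names."""
    failures = []
    text = req.get("text", "").lower()
    if any(re.search(rf"\b{v}\b", text) for v in ("ensure", "support", "adequate")):
        failures.append("forbidden-verb")
    if re.search(r"\bshall\b.*\b(and|or)\b", text):
        failures.append("possible-compound")
    for field in MANDATORY_FIELDS:
        if not req.get(field):
            failures.append(f"missing-{field}")
    if not req.get("parent"):
        failures.append("no-parent-link")
    if not req.get("origin"):
        failures.append("no-origin")
    return failures

req = {
    "id": "SYS-042",
    "text": "The system shall ensure adequate cooling.",
    "parent": "SYS-007",
    "origin": "Thermal IPT",
    "performance_criterion": None,
    "measurement_method": None,
    "operating_condition": None,
}
print(check_requirement(req))
```

A runner like this makes the checklist enforceable at authoring time rather than depending on reviewer discipline — which is exactly when the fixes are cheapest.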
Checklists are effective when enforced consistently and updated when new smells emerge. Their limitation: they catch linguistic smells but miss semantic ones. A checklist can flag “shall ensure” but can’t tell you whether the stated performance criterion was derived from a real budget.
Structured Peer Review
The most powerful smell-detection tool, because it brings context that no checklist can encode. A reviewer who knows the system architecture can identify an implementation-prescriptive requirement that wouldn’t trigger any automated rule. A reviewer from the verification team will immediately spot untestable criteria that passed linguistic review.
Structure the peer review with roles: one reviewer focused on testability, one on traceability, one on stakeholder alignment. Split the smell types across roles so each reviewer goes deep on a smaller surface area.
Peer review is expensive relative to automated checking. Reserve it for high-risk requirements sets — interface requirements, safety requirements, derived requirements from complex allocations — and let automated tools handle the volume.
AI-Assisted Quality Checking
The scale problem with requirements is real. Manually reviewing 2,000 requirements for all six smell types across a full program is not feasible in the time most programs allocate. AI-assisted tools address this by running quality analysis continuously against defined smell patterns, flagging candidates for human review rather than replacing human judgment.
Flow Engineering has built requirements quality analysis directly into the authoring workflow. As requirements are written or imported, the tool flags common smells — ambiguous verbs, compound shall statements, missing performance criteria, orphaned nodes in the model — and surfaces them as quality issues before they enter formal review cycles. The value isn’t that it eliminates the need for human review; it’s that it ensures human review time is spent on the hard judgment calls, not the mechanical ones.
This is the appropriate role for AI in requirements quality: pattern recognition at scale, surfacing candidates, not rendering verdicts. A requirement flagged for “ambiguous verb” still needs a human to determine whether the ambiguity is fixable with a word change or reveals a fundamental gap in stakeholder definition.
Finding a Smell Is Not Fixing It
Each smell has a different remediation path. Ambiguous verbs usually require stakeholder clarification to replace the verb with a testable criterion. Compound requirements require decomposition. Orphaned requirements require either a parent to be identified or the requirement to be formally deleted. Gold-plated requirements require a challenge meeting with a stakeholder — the most politically difficult remediation in requirements engineering.
Tracking smells as a quality metric across reviews is more useful than treating them as binary pass/fail. A program that enters CDR with 20% fewer ambiguous verb occurrences than it had at PDR is making measurable progress. A program that signs off requirements without smell analysis and discovers the problems in acceptance testing is not.
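The metric itself is trivial to compute once smell counts are captured at each gate. A minimal sketch with made-up numbers, where a negative percentage means improvement:

```python
def smell_trend(counts_at_pdr, counts_at_cdr):
    """Percent change per smell type between two review gates (negative = fewer smells)."""
    return {
        smell: round(100 * (counts_at_cdr[smell] - counts_at_pdr[smell]) / counts_at_pdr[smell], 1)
        for smell in counts_at_pdr
    }

pdr = {"ambiguous_verb": 120, "compound": 45, "orphan": 12}
cdr = {"ambiguous_verb": 96, "compound": 30, "orphan": 15}
print(smell_trend(pdr, cdr))
# → {'ambiguous_verb': -20.0, 'compound': -33.3, 'orphan': 25.0}
```

A report like this also shows where remediation is going backwards — here the orphan count grew, which is exactly the change-management failure described under smell 5.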
The Honest Answer
Requirements smells are not a new problem. They exist in every program to some degree. The question is whether you find them when they’re cheap to fix — during authoring and early review — or when they’re expensive to fix, during verification or, worse, during customer acceptance.
The code smell metaphor is useful precisely because it reframes the goal: you’re not trying to write perfect requirements on the first draft. You’re trying to build a process that surfaces the problems early, names them specifically, and routes them to the right remediation. That’s what good tooling supports. That’s what structured review enforces. And that’s what smell analysis gives you: a vocabulary for the problems you already knew were there.