Are Edge Cases Requirements, Test Cases, or Something Else?
The question seems academic until a hardware verification campaign turns up a failure mode nobody wrote a requirement for. Then it becomes urgent. Was that scenario supposed to be covered by an existing requirement? Was it a test gap? Or did the requirement never exist?
Systems engineers use the terms “edge case” and “corner case” loosely. That looseness has a cost. When teams don’t agree on what category an edge case belongs to, it falls through the cracks between requirements management and test planning — captured nowhere, owned by nobody.
This article gives you a working definition and a decision procedure. The short answer: edge cases are neither requirements nor test cases. They are scenarios that probe the completeness of your requirements. What you do with them depends on what you find.
Defining the Terms Precisely
Edge case: A scenario that operates at or near the boundary of defined operating conditions. The system is still technically within scope, but inputs, states, or environmental conditions are at extremes where behavior may become difficult to specify or verify.
Corner case: A scenario that involves multiple boundary conditions simultaneously. If an edge case is one variable at its limit, a corner case is two or more variables at their limits at the same time. Corner cases are combinatorially harder to enumerate and almost always harder to test.
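The combinatorial difference between edge and corner cases can be made concrete with a short sketch. The parameter names and ranges below are hypothetical, purely for illustration: edge cases put one parameter at a limit while the others sit at nominal values; corner cases put every parameter at a limit simultaneously.

```python
from itertools import product

# Hypothetical parameter boundaries (min, max), for illustration only.
boundaries = {
    "load_current_A": (0.0, 10.0),
    "ambient_temp_C": (-40.0, 85.0),
    "supply_voltage_V": (10.8, 13.2),
}

# Nominal operating point: midpoint of each range.
nominal = {k: (lo + hi) / 2 for k, (lo, hi) in boundaries.items()}

# Edge cases: one parameter at a limit, the rest at nominal.
edge_cases = [
    {**nominal, name: limit}
    for name, limits in boundaries.items()
    for limit in limits
]

# Corner cases: every parameter at a limit at the same time.
corner_cases = [
    dict(zip(boundaries, combo))
    for combo in product(*boundaries.values())
]

print(len(edge_cases))    # 3 parameters x 2 limits = 6
print(len(corner_cases))  # 2^3 = 8
```

With only three parameters the corner-case count (2^n) already exceeds the edge-case count (2n), and real systems have far more than three parameters, which is why corner cases resist exhaustive enumeration.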
Requirements define expected behavior across the system’s intended operating envelope. A well-written requirement states what the system shall do (or shall not do) under specified conditions. It does not enumerate every possible input combination — that would be infinite. It abstracts.
Test cases verify specific behaviors against specific requirements. A test case operationalizes a requirement into a concrete, repeatable, pass/fail procedure. A test case without a traced requirement is a test in search of a purpose.
Edge cases fit neither category cleanly. They are scenarios — hypothetical or observed — where the adequacy of existing requirements becomes uncertain. They are probes, not products.
The Ambiguous Zone Edge Cases Occupy
Consider a power supply with a requirement: The output voltage shall remain within ±2% of nominal under all load conditions within the rated operating range. That requirement covers the interior of the operating space well. But what happens at exactly the rated maximum load? At 0.1% above it? During a rapid load transient from minimum to maximum in under 1 ms?
These are edge cases. Each one exposes a question: does the existing requirement cover this scenario, or not?
There are two possible answers, and they lead to fundamentally different actions:
Answer 1: The edge case is an instance of an existing requirement. The rapid load transient is covered by “all load conditions within the rated operating range.” The requirement is adequate; you need a test case that exercises this specific condition. The edge case has done its job — it pointed you toward a test you hadn’t written yet.
Answer 2: The edge case exposes a gap in the requirements. The existing requirement says nothing about transient behavior, only steady-state conditions. No amount of creative test case writing will cover a behavior the system was never required to exhibit. You need a new requirement first, then a test case.
Treating answer 2 as answer 1 — writing a test case when the underlying requirement is missing — is one of the most common and costly errors in hardware verification. You can pass every test and still ship a product with undefined behavior in real operating conditions.
Techniques That Surface Edge Cases Systematically
Informal brainstorming surfaces some edge cases. It misses more. Systematic techniques are not optional for safety-relevant or high-reliability systems.
Boundary Value Analysis
Boundary value analysis (BVA) is the most direct technique. For every parameter with a defined range, you examine behavior at the minimum, maximum, and values just inside and outside each boundary. The “just outside” cases are particularly valuable — they test whether boundary conditions are correctly specified, not just whether the system works in the middle of the range.
BVA applied to requirements also reveals specification ambiguity. If you can’t determine from the requirement what the system should do at the boundary, the requirement is underspecified. That is a requirements problem, not a test design problem.
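The BVA point set is mechanical enough to generate. A minimal sketch, using an assumed rated load range for the power-supply example above (the 0.5 A to 10 A range and the epsilon are illustrative, not from any real specification):

```python
def boundary_values(lo, hi, epsilon):
    """Classic BVA points for a [lo, hi] range: each limit,
    just inside each limit, and just outside each limit."""
    return [
        lo - epsilon,  # just below minimum: probes out-of-range handling
        lo,            # minimum
        lo + epsilon,  # just inside minimum
        hi - epsilon,  # just inside maximum
        hi,            # maximum
        hi + epsilon,  # just above maximum: probes boundary specification
    ]

# Hypothetical rated load range in amperes.
for value in boundary_values(0.5, 10.0, 0.01):
    print(value)
```

The two "just outside" points are the ones most likely to expose an underspecified requirement: if the requirement gives no answer for 10.01 A, that is a specification gap, not a test design problem.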
Fault Tree Analysis
Fault tree analysis (FTA) works backward from undesired outcomes. Starting with a top-level failure event — loss of control, exceeding a safety threshold, erroneous output — you decompose the contributing causes through AND/OR gate logic until you reach basic events: component failures, software faults, environmental conditions, human errors.
The value of FTA for edge case identification is that it surfaces failure paths that cut across normal requirement boundaries. A specific combination of a sensor degradation, a software state, and an environmental condition may represent a corner case that no single requirement addresses because no single stakeholder thought about all three simultaneously. FTA finds these intersections structurally.
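The AND/OR decomposition at the heart of FTA can be sketched as a small tree evaluation. The tree below is a toy example invented for illustration (the event names and structure are assumptions, not from any real analysis); the point is that the top event fires only for a specific conjunction of conditions that no single requirement may mention together.

```python
from dataclasses import dataclass, field


@dataclass
class Basic:
    """A basic event: component failure, software state, environment."""
    name: str


@dataclass
class Gate:
    """An AND/OR gate combining child events or sub-gates."""
    kind: str  # "AND" or "OR"
    children: list = field(default_factory=list)


def occurs(node, active_events):
    """Does this node's event occur, given the set of basic events present?"""
    if isinstance(node, Basic):
        return node.name in active_events
    results = [occurs(child, active_events) for child in node.children]
    return all(results) if node.kind == "AND" else any(results)


# Hypothetical tree: loss of regulation requires a degraded sensor AND
# (firmware in a calibration state OR an extreme-temperature condition).
top = Gate("AND", [
    Basic("sensor_degraded"),
    Gate("OR", [Basic("fw_calibrating"), Basic("temp_extreme")]),
])

print(occurs(top, {"sensor_degraded", "temp_extreme"}))  # True
print(occurs(top, {"temp_extreme"}))                     # False
```

Each combination of basic events that makes the top event true is a candidate corner case; the combinations with no covering requirement are the gaps FTA exists to find.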
Hazard Analysis and Risk Assessment (HARA)
For automotive systems under ISO 26262 and related standards, HARA systematically identifies hazardous events and their consequences. Hazardous events are, in effect, edge cases with safety implications. HARA makes the identification process auditable and links it directly to safety goals — which are requirements at the highest level of abstraction.
How SOTIF Treats Unknown Unsafe Scenarios
ISO 21448 — the Safety of the Intended Functionality (SOTIF) standard — introduces a category of edge case that conventional requirements management struggles with: the unknown unsafe scenario.
SOTIF partitions the scenario space into four quadrants based on two axes: known/unknown and safe/unsafe. The dangerous quadrant is unknown unsafe scenarios — situations that are unsafe but that the development team has not yet identified. By definition, you cannot write a requirement for a scenario you haven’t identified, and you cannot write a test case for a scenario you don’t know to test.
SOTIF’s response to this problem is deliberate: unknown unsafe scenarios cannot be addressed through test case generation alone. They require a design and validation strategy that:
- Expands the known scenario space through exposure to real-world operational data, simulation, and systematic triggering analysis — the process of asking what inputs or conditions could cause the system to behave unsafely.
- Moves scenarios from unknown to known so they can be evaluated. A scenario that has been identified and analyzed is no longer unknown, even if it remains dangerous. Once known, it can be addressed through a requirement or a design change.
- Demonstrates sufficiency of coverage — not by exhaustive testing, but by showing that the process for identifying unknown scenarios was thorough enough, and that residual risk is acceptable.
This is a fundamentally different relationship between edge cases and requirements. Under SOTIF, the edge case identification process is itself a design activity. The output is not just test cases — it is new requirements, design constraints, and monitoring strategies for scenarios that were previously undefined.
Practically, this means SOTIF-compliant programs need a mechanism to track the lifecycle of identified scenarios: from “found during analysis” through “traced to existing requirement” or “generated new requirement” to “verified by test or other means.” Teams that manage this in spreadsheets typically lose traceability under audit pressure.
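That lifecycle is a small state machine, and modeling it as one is what makes illegal shortcuts (such as marking a scenario verified before it traces to any requirement) detectable. A minimal sketch with assumed state names and transitions, not any standard's or vendor's schema:

```python
from enum import Enum


class ScenarioState(Enum):
    FOUND = "found during analysis"
    TRACED = "traced to existing requirement"
    NEW_REQUIREMENT = "generated new requirement"
    VERIFIED = "verified by test or other means"


# Allowed lifecycle transitions (an assumption, for illustration).
TRANSITIONS = {
    ScenarioState.FOUND: {ScenarioState.TRACED, ScenarioState.NEW_REQUIREMENT},
    ScenarioState.TRACED: {ScenarioState.VERIFIED},
    ScenarioState.NEW_REQUIREMENT: {ScenarioState.VERIFIED},
    ScenarioState.VERIFIED: set(),  # terminal
}


class Scenario:
    def __init__(self, name):
        self.name = name
        self.state = ScenarioState.FOUND
        self.history = [self.state]

    def advance(self, new_state):
        """Move to a new state, rejecting transitions the lifecycle forbids."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)


s = Scenario("rapid load transient at rated maximum load")
s.advance(ScenarioState.NEW_REQUIREMENT)
s.advance(ScenarioState.VERIFIED)
print([state.name for state in s.history])
```

The `history` list is the audit trail: every identified scenario should be able to show a path from FOUND to VERIFIED, and a scenario that cannot is an open finding.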
The Decision Procedure: What to Do With an Edge Case You’ve Found
When an edge case is identified — through analysis, simulation, testing, or field feedback — apply this sequence:
Step 1: Can you trace it to an existing requirement? Read the requirement carefully. Not optimistically — carefully. Does the requirement’s scope language cover this scenario, either explicitly or by clear implication? If yes, the edge case is an instance of that requirement. Proceed to test case design and ensure the test specifically exercises the edge condition.
Step 2: If not, is the scenario within the system’s intended operating envelope? If the scenario is outside the defined operational design domain, the correct response may be a boundary requirement (the system shall detect and respond to out-of-envelope conditions) rather than a functional requirement covering the scenario itself.
Step 3: If the scenario is within envelope and uncovered, generate a requirement. The edge case has identified a gap. Write a requirement that specifies expected behavior. Get it reviewed and baselined before writing the test case. The test case verifies the requirement; it does not substitute for it.
Step 4: Trace everything. Whether the edge case traces to an existing requirement or generates a new one, the trace should be explicit and documented. This is what makes a verification campaign auditable.
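The branching logic of steps 1 through 3 can be sketched as a single function. The two inputs are engineering judgments, not automated checks; the function only makes the decision order explicit:

```python
def classify_edge_case(traces_to_requirement: bool, within_envelope: bool) -> str:
    """Apply the decision procedure to an identified edge case.

    traces_to_requirement: does an existing requirement clearly cover it? (Step 1)
    within_envelope: is it inside the intended operating envelope? (Step 2)
    """
    if traces_to_requirement:
        # Step 1: an instance of an existing requirement.
        return "instance: write a test case exercising the edge condition"
    if not within_envelope:
        # Step 2: outside the operational design domain.
        return "out of envelope: consider a boundary requirement"
    # Step 3: within envelope and uncovered.
    return "gap: write and baseline a new requirement, then the test case"


print(classify_edge_case(traces_to_requirement=False, within_envelope=True))
```

Whatever branch is taken, step 4 still applies: the outcome and its trace get recorded, which is what the next section is about.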
How Modern Tools Implement This Connection
The techniques above are well understood. The failure point is almost always operational: teams run an FTA or a SOTIF triggering analysis, produce a list of scenarios, and then lose the connection between those scenarios and the requirements database. The scenarios live in a separate document. The requirements live in a separate tool. The trace between them exists in someone’s memory.
This is where tool architecture matters. Tools built around documents — where requirements are paragraphs and traceability is a manual cross-reference — make the edge case–to–requirement connection expensive to maintain. The work of linking a scenario to a requirement, or flagging a scenario as a requirements gap, involves navigating between documents and maintaining consistency by hand.
Flow Engineering approaches this differently. The platform models requirements, scenarios, and their relationships as nodes in a graph, where the connection between an identified scenario and its parent requirement (or its status as an open gap) is a first-class relationship in the data model. When a SOTIF analysis or FTA produces a scenario, teams can bring it directly into the same environment where requirements live and immediately establish whether it’s covered or whether it drives a new requirement.
That structural connection matters during verification. When an auditor asks whether every identified edge case is either traced to a requirement or documented as a requirements gap, the answer should come from a query on a model, not from someone manually reconciling two spreadsheets the night before the review.
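The audit question reduces to one query over the model. A minimal sketch of that query against a toy in-memory structure — the scenario names, fields, and requirement IDs are invented for illustration and are not Flow Engineering’s actual data model or API:

```python
# Toy model: each scenario either traces to a requirement ID or is
# explicitly flagged as an open requirements gap.
scenarios = {
    "S1: load transient at rated maximum": {"traces_to": "REQ-042", "gap": False},
    "S2: sensor dropout at -40 C":         {"traces_to": None,      "gap": True},
    "S3: brownout during boot":            {"traces_to": None,      "gap": False},
}


def audit_findings(scenarios: dict) -> list:
    """Scenarios that are neither traced to a requirement nor
    documented as a gap — exactly what an auditor will ask about."""
    return [
        name
        for name, record in scenarios.items()
        if record["traces_to"] is None and not record["gap"]
    ]


print(audit_findings(scenarios))  # only S3 is unaccounted for
```

In a graph-based model this check runs continuously instead of the night before the review; any scenario node without an outgoing trace edge or a gap flag is an immediate finding.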
Flow Engineering is specifically built for hardware and systems engineering teams running safety-relevant programs where this kind of scenario-to-requirement traceability is not optional. It doesn’t try to be a general-purpose requirements tool for every industry — that focus is what makes the implementation coherent.
The Honest Summary
Edge cases are neither requirements nor test cases. They are analytical products — outputs of a structured effort to probe the completeness of your requirements before the system encounters those conditions in the field.
An edge case that traces cleanly to an existing requirement is a signal to write a test case. An edge case that doesn’t trace cleanly to anything is a signal to write a requirement. The distinction is not subtle: it determines whether you have a verification gap or a specification gap. Those are different problems with different owners and different remedies.
SOTIF makes this even more explicit by defining unknown unsafe scenarios as a class of edge case that design and validation strategy must address — not test case generation. For systems operating under SOTIF, the edge case identification process is a requirements-generating activity, and the tools you use need to support that connection structurally, not as an afterthought.
Get the categorization right, and your requirements stay complete. Get it wrong, and you write tests for behaviors you never actually specified — and miss the ones you didn’t know to look for.