What Is a Use Case in Systems Engineering?
A use case is a structured description of how an actor — a user, operator, external system, or other entity — interacts with a system to achieve a specific goal. It answers a deceptively simple question: what does this system need to do for someone, in what sequence, and under what conditions?
That framing matters. Use cases are behavioral and goal-oriented. They describe the system from the outside, as experienced by the people or systems depending on it, not from the inside. This makes them fundamentally different from functional requirements, and understanding that difference is what separates teams that use use cases well from teams that treat them as redundant paperwork.
The Core Structure of a Use Case
A well-formed use case contains a small number of mandatory elements:
Actor. The external entity initiating or participating in the interaction. In an autonomous ground vehicle, this might be the mission operator, the vehicle itself (acting as an actor relative to a subsystem), or a ground control system. Identifying actors forces explicit thinking about system boundaries.
Goal. The outcome the actor is trying to achieve. “Initiate emergency stop” is a goal. “The system shall decelerate at no less than 0.8g” is not — that’s a functional requirement derived from the goal. The distinction matters for how you write the rest of the document.
Preconditions. The state the system must be in before the use case can begin. These feed directly into test setup conditions.
Main success scenario. The step-by-step interaction sequence that achieves the goal under normal conditions. Each step is either an actor action or a system response.
Extensions (alternate flows). Named branches that handle exceptions, errors, or alternative paths — the vehicle is already stopped, the communication link is degraded, the operator lacks authorization. These are where most of the real engineering complexity lives.
Postconditions. The guaranteed system state after the use case completes, whether through the main scenario or an extension.
This structure isn’t ceremonial. Each section translates directly into something useful downstream: preconditions become test setup steps, the main scenario becomes the nominal test procedure, extensions become negative and edge-case tests, and postconditions become verification criteria.
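The structure above is concrete enough to sketch as a small data model. The following is an illustrative sketch only (the class and field names follow the element list above, not any particular tool's schema), using the emergency-stop example from earlier:

```python
from dataclasses import dataclass, field

@dataclass
class Extension:
    """An alternate flow: a named branch off the main scenario."""
    name: str        # e.g. "Vehicle already stopped"
    trigger: str     # condition that diverts from the main flow
    steps: list[str] # the alternate interaction sequence

@dataclass
class UseCase:
    actor: str                 # external entity initiating the interaction
    goal: str                  # outcome the actor is trying to achieve
    preconditions: list[str]   # required system state before the scenario starts
    main_scenario: list[str]   # alternating actor actions and system responses
    extensions: list[Extension] = field(default_factory=list)
    postconditions: list[str] = field(default_factory=list)

# A fragment of the emergency-stop example (content invented for illustration):
estop = UseCase(
    actor="Mission operator",
    goal="Initiate emergency stop",
    preconditions=["Vehicle is in motion", "Command link is active"],
    main_scenario=[
        "Operator issues the emergency-stop command",
        "System commands maximum safe deceleration",
        "System reports the stopped state to the operator",
    ],
    extensions=[Extension(
        name="Vehicle already stopped",
        trigger="Vehicle speed is zero when the command arrives",
        steps=["System acknowledges the command and confirms the stopped state"],
    )],
    postconditions=["Vehicle is stationary", "Stop event is logged"],
)
```

Writing the elements as typed fields makes the downstream mapping mechanical: anything consuming a `UseCase` can read setup from `preconditions`, nominal steps from `main_scenario`, and pass/fail criteria from `postconditions`.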
How Use Cases Differ From Functional Requirements
The most common confusion in requirements teams is treating use cases and functional requirements as interchangeable. They’re not. They operate at different levels of abstraction and serve different analytical purposes.
A functional requirement specifies what a system shall do, typically as a single verifiable statement. “The system shall transmit a status message within 500ms of receiving a query.” This is testable, traceable, and allocatable to a specific function or component.
A use case specifies why that requirement exists and in what context it matters. The 500ms requirement might trace back to a use case called “Ground operator monitors vehicle health during mission,” where a step reads: “The system displays updated vehicle status in response to each operator query.” The operator’s need to maintain situational awareness is what makes 500ms the right threshold — not 5 seconds, not 50ms.
That parent-child relationship is the point. Use cases hold stakeholder intent. Functional requirements hold the engineering response to that intent. When a functional requirement changes — when 500ms becomes 200ms — the use case tells you why it existed, which helps you evaluate whether the change is valid or whether it’s scope creep in disguise.
The practical consequence: use cases belong in your requirements hierarchy above functional requirements. They’re not an alternative; they’re a layer. Stakeholder needs → use cases → functional requirements → design constraints.
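That layering can be made explicit as a chain of parent links, where every artifact traces upward to the layer above it. A minimal sketch, with all identifiers (SN-1, UC-3, FR-12, DC-4) invented for illustration:

```python
# child -> the parent artifact it traces to, one link per layer:
# design constraint -> functional requirement -> use case -> stakeholder need
hierarchy = {
    "UC-3":  "SN-1",   # use case traces up to a stakeholder need
    "FR-12": "UC-3",   # functional requirement traces up to the use case
    "DC-4":  "FR-12",  # design constraint traces up to the requirement
}

def trace_to_root(item: str) -> list[str]:
    """Walk the parent links upward until the top of the hierarchy."""
    chain = [item]
    while chain[-1] in hierarchy:
        chain.append(hierarchy[chain[-1]])
    return chain
```

Here `trace_to_root("DC-4")` walks the full chain back to the stakeholder need, which is exactly the "why does this exist" question the 500ms example raises.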
When to Write Use Cases
Use cases earn their keep in specific situations. They’re not always the right tool.
Write use cases when the behavior is interaction-heavy. Any scenario where the sequence of actions matters — where the system must respond to inputs in a particular order, or where timing between actor actions and system responses is critical — is a good candidate. Embedded control systems, human-machine interfaces, communication protocols, and autonomous decision loops all qualify.
Write use cases when multiple actors share the same function. If both an automated scheduler and a human operator can trigger a system mode change, a use case that covers both actors — with separate extensions — reveals integration requirements that a flat functional requirement list would bury.
Write use cases during early requirements development, before functional decomposition. They’re most valuable as a bridge between what stakeholders describe in natural language and the structured functional requirements that engineers eventually write. Use them to validate scope with non-technical stakeholders before locking down the functional baseline.
Skip use cases when the behavior is purely computational or non-interactive. A signal processing chain with no external actors, or a thermal management algorithm running in the background — these are better specified directly as functional and performance requirements. Adding a use case wrapper would be form without function.

Use Cases and System Test Scenarios
The traceability link from use cases to test scenarios is underused and undervalued. When it’s implemented correctly, it eliminates a significant category of verification gaps.
Each main success scenario in a use case maps to a nominal test case. Each extension maps to one or more edge-case or failure-mode test cases. The preconditions define test setup. The postconditions define pass/fail criteria. If you’ve written a thorough use case, the test engineer has most of what they need to write a test procedure — they’re not reconstructing the intent from scratch.
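This mapping is mechanical enough to sketch in code. The function below is a hypothetical illustration (the field names mirror the use case structure described earlier, not a real tool's API): one nominal test from the main scenario, one edge-case test per extension, setup drawn from preconditions, pass/fail criteria from postconditions.

```python
def derive_test_cases(use_case: dict) -> list[dict]:
    """Derive test-case skeletons from a use case:
    main scenario -> nominal test, each extension -> edge/failure test,
    preconditions -> test setup, postconditions -> pass/fail criteria."""
    setup = use_case["preconditions"]
    criteria = use_case["postconditions"]
    tests = [{
        "name": f"Nominal: {use_case['goal']}",
        "setup": setup,
        "steps": use_case["main_scenario"],
        "pass_criteria": criteria,
    }]
    for ext in use_case.get("extensions", []):
        tests.append({
            "name": f"Edge case: {ext['name']}",
            "setup": setup + [ext["trigger"]],  # extension trigger joins the setup
            "steps": ext["steps"],
            "pass_criteria": criteria,
        })
    return tests

# Invented example content, echoing the emergency-stop scenario:
example = {
    "goal": "Initiate emergency stop",
    "preconditions": ["Vehicle is in motion"],
    "main_scenario": ["Operator issues the stop command",
                      "System decelerates and reports the stopped state"],
    "postconditions": ["Vehicle is stationary"],
    "extensions": [{"name": "Vehicle already stopped",
                    "trigger": "Vehicle speed is zero",
                    "steps": ["System confirms the stopped state"]}],
}
```

Running `derive_test_cases(example)` yields two test skeletons: the nominal procedure and one edge case, each already carrying its setup and verification criteria.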
The reverse is also true: if a test case can’t be traced to a use case or a functional requirement, it’s either testing something not in scope or it’s revealing a requirements gap. That bidirectional discipline — which is what “requirements traceability” actually means in practice — is much easier to enforce when use cases are explicit in the model.
Teams that skip use cases often discover verification gaps late, during system integration testing, when an operator interaction that was assumed during design turns out not to be covered by any requirement. By then, finding the gap is expensive. Use cases make those gaps visible at requirements time.
Connecting Use Cases to the Broader Requirements Model
A use case doesn’t exist in isolation. It connects upward to stakeholder needs and operational concepts, and downward to functional requirements, interface requirements, and eventually verification methods. This network of relationships is what gives a use case its value — a use case that sits in a document with no live connections to the rest of the requirements model is nearly as useless as no use case at all.
The challenge in practice is maintaining those connections. In document-based workflows, the links are manual — a column in a spreadsheet, a tag in a Word document, a row in an RTM. They degrade over time. Requirements change, test cases are added, use cases are revised, and the manual links fall out of sync. The result is a traceability matrix that represents the state of the model months ago, not today.
This is the problem that graph-based requirements tools are designed to solve. By treating use cases, actors, functional requirements, interfaces, and test cases as nodes in a live model, changes propagate automatically and gaps surface as broken links rather than silent omissions.
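A toy version of such a graph makes the gap-detection idea concrete. This is a deliberately minimal sketch (all node identifiers and link choices are invented), not a depiction of any particular tool's data model:

```python
# Nodes are requirements artifacts; traces are directed trace links.
# All identifiers are invented for illustration.
nodes = {"SN-1", "UC-3", "FR-12", "TC-7", "TC-9"}
traces = [
    ("UC-3", "SN-1"),    # use case -> stakeholder need
    ("FR-12", "UC-3"),   # functional requirement -> use case
    ("TC-7", "FR-12"),   # test case -> functional requirement
]                        # note: TC-9 traces to nothing

def untraced(kind_prefix: str) -> set[str]:
    """Nodes of a given kind with no outgoing trace link --
    in a live model these surface as gaps, not silent omissions."""
    sources = {src for src, _ in traces}
    return {n for n in nodes if n.startswith(kind_prefix) and n not in sources}
```

Querying `untraced("TC")` flags the orphan test case TC-9 — the same bidirectional discipline described above, enforced by the structure of the model rather than by manual matrix upkeep.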
Flow Engineering (flowengineering.com) implements use cases as first-class nodes in its requirements graph, connected directly to the actors, operational scenarios, and functional requirements they relate to. When a use case is updated, the affected downstream requirements are immediately visible — engineers can see which functional requirements are at risk and which test cases need review. The platform’s AI assistance also supports drafting use case extensions, which is typically the most labor-intensive part of use case development. For teams building complex hardware systems where behavioral coverage and test traceability are both important, that kind of live connectivity matters in ways that a well-structured document cannot replicate.
Practical Starting Points
If your team is adopting use cases for the first time, or trying to improve how you use them, a few practices reduce the learning curve:
Start with actor identification, not scenario writing. List every entity that interacts with the system — operators, maintenance technicians, adjacent systems, regulatory bodies. This establishes your system boundary before you write a single scenario, and it forces early conversations about scope.
Write the main success scenario first, then add extensions. Teams that try to write comprehensive use cases in one pass often produce overloaded documents that no one reads. Get the nominal path right. Then systematically ask: what if the actor lacks permission? What if the system is in the wrong state? What if the communication link fails? Each question becomes an extension.
Keep use cases at the system level. A use case that describes internal subsystem behavior is a design artifact, not a requirements artifact. If you find yourself writing steps that reference internal components — “the sensor fusion module computes…” — you’ve descended into design. Pull back.
Trace every use case to at least one stakeholder need and at least one functional requirement. If you can’t trace it upward, you may be specifying behavior no stakeholder asked for. If you can’t trace it downward, you have a requirements gap.
Link use cases to test scenarios explicitly. Don’t leave it implicit. Document the link, maintain the link, and treat a use case with no test coverage as an open action item.
Use cases are not a silver bullet, and they’re not appropriate for every part of a system specification. But when behavioral coverage, stakeholder alignment, and test traceability are priorities — which they are on most complex hardware programs — use cases provide a structural layer that functional requirements alone cannot supply.