How to Write Operational Requirements for Systems with Variable User Training Levels
A requirements document that says “the system shall be operable by trained personnel” has told you almost nothing. Trained how? To what standard? Under what conditions? With what cognitive load from concurrent tasks? The word “trained” is doing enormous work in that sentence, and it is doing none of it correctly.
This is not a minor stylistic complaint. When the user population spans from a military operator who has spent 400 hours on a specific platform to a first-time patient using a continuous glucose monitor after a 10-minute tutorial, the requirements that govern human interaction with the system must be built from entirely different foundations. Writing operational requirements that actually constrain system design and support verification requires you to treat the human operator as a system element — with defined inputs, outputs, error modes, and environmental tolerances — rather than an assumed constant.
This guide covers how to do that across three domains where the stakes are highest and the user variance is most extreme.
What “Operational Requirements” Actually Means Here
Operational requirements describe what the system must do, under what conditions, and who must be able to do it. They are distinct from functional requirements (which describe system behavior in isolation) because they encode context: the environment, the user, the time pressure, the failure states the user must manage, and the boundaries outside which the system’s safe operation cannot be guaranteed.
The distinction matters because operational requirements are the specification layer where human factors engineering and systems engineering intersect. A functional requirement might state: “The system shall display current dosage rate on the primary interface.” An operational requirement adds the human: “A user with no biomedical training, following a single-session onboarding protocol, shall be able to correctly identify the current dosage rate within 5 seconds under normal indoor lighting conditions, with an error rate not exceeding 1% across the target user population.”
The second form is testable. The first is not — not for the user scenario that actually matters.
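To make the contrast concrete, here is a minimal sketch of what checking the second requirement could look like, assuming trial data from a summative usability test. The field names and the simple point-estimate check are illustrative assumptions; a real protocol would also need a sampling argument about the population error rate, not just the observed rate in one cohort.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    participant_id: str
    seconds_to_identify: float   # time to identify the current dosage rate
    correct: bool                # whether the identification was correct

def meets_requirement(trials: list[Trial],
                      max_seconds: float = 5.0,
                      max_error_rate: float = 0.01) -> bool:
    """Point-estimate check against the example operational requirement above.

    Illustrative only: a summative evaluation also needs a statistical
    argument that the population error rate is below the bound.
    """
    if not trials:
        return False
    failures = sum(1 for t in trials
                   if not t.correct or t.seconds_to_identify > max_seconds)
    return failures / len(trials) <= max_error_rate
```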
Defining the User Population as a System Input
The most common failure in writing operational requirements is treating the user as homogeneous. In practice, every fielded system operates across a distribution of users. The question is whether that distribution is explicitly bounded in the requirements or implicitly assumed by whoever writes them.
User profile decomposition is the engineering activity that makes the distribution explicit. For each distinct user role, you define:
- Training baseline: What formal qualification, certification, or experience is assumed? Is this verified at the point of use, or only at initial deployment?
- Domain familiarity: Does the user understand the underlying system or only the interface to it? A military vehicle commander may be expert in tactics but naïve about the sensor fusion algorithm presenting information on their display.
- Task context: What else is the user doing simultaneously? Cognitive load at the moment of interaction is a design constraint, not a soft consideration.
- Error recovery capability: If the user receives a misleading indication, what is their capacity to recognize and recover from it? An expert operator may catch ambiguous feedback; a novice user will act on it.
- Physical and environmental constraints: Low-light conditions, gloves, noise, motion, time pressure, emotional stress — these modulate effective user capability in ways that must be captured in requirements.
Once profiles are defined, they become boundary conditions on the requirements themselves. A requirement written for a Level 1 user (no specialized training) must constrain interface complexity, error tolerance, and recovery path in ways that a requirement written for a Level 3 certified operator does not. The certification regime for your domain tells you whether you can rely on a training floor — and how firm that floor actually is.
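A minimal sketch of what a bounded user profile might look like as an engineering artifact follows. The field names, the three-level training scale, and the example values are assumptions chosen for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class TrainingLevel(Enum):
    LEVEL_1 = "no specialized training"
    LEVEL_2 = "onboarded / familiarized"
    LEVEL_3 = "certified operator"

@dataclass
class UserProfile:
    role: str
    training_baseline: TrainingLevel
    training_verified_at_point_of_use: bool
    domain_familiarity: str          # "interface only" vs. "underlying system"
    concurrent_tasks: list[str] = field(default_factory=list)
    error_recovery: str = "unknown"  # expected capacity to catch misleading feedback
    environmental_constraints: list[str] = field(default_factory=list)

# Example: one of the two extremes named earlier in this article
novice_patient = UserProfile(
    role="patient, first use",
    training_baseline=TrainingLevel.LEVEL_1,
    training_verified_at_point_of_use=False,
    domain_familiarity="interface only",
    concurrent_tasks=["daily life", "emotional stress"],
    error_recovery="will act on misleading indications",
    environmental_constraints=["home lighting", "possible impaired dexterity"],
)
```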
The ODD as a Human Factors Tool
The Operational Design Domain (ODD) originated in autonomous vehicle development to define the conditions under which an automated system can safely operate. The concept translates directly to any system where the human operator is part of the control loop — which is most systems.
Your ODD for human-operated systems captures the envelope of conditions within which the user population, as defined, can be expected to perform reliably. This includes:
- Environmental bounds: Temperature, lighting, noise, vibration, time of day if relevant to fatigue.
- Operational state bounds: Normal operations, degraded mode, emergency. Does the user population have the training to manage each state?
- Information state bounds: What information is the user assumed to have at any decision point? What happens if that information is absent or incorrect?
- Interaction frequency bounds: A quarterly-use device presents different human factors challenges than one used daily. Skill decay is an ODD parameter.
Requirements derived from ODD analysis have a different character than requirements derived from pure functional decomposition. They specify not just what the system must do, but what conditions must exist for the user to do their part safely. This leads directly to requirements on interface design, alert structure, mode indication, and error prevention — all of which must be verifiable.
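As an illustration, an ODD for a human-operated system could be captured as a structured artifact along these lines. The parameter names and the glucose-monitor example values are assumptions made for the sketch, not figures from any standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Range:
    low: float
    high: float

@dataclass
class OperationalDesignDomain:
    """Envelope of conditions within which the defined user population
    is expected to perform reliably. Field names are illustrative."""
    ambient_temp_c: Range
    illuminance_lux: Range
    ambient_noise_dba: Range
    operational_states: tuple[str, ...]    # e.g. normal, degraded, emergency
    assumed_information: tuple[str, ...]   # what the user is assumed to know at decision points
    min_uses_per_month: float              # skill-decay parameter

glucose_monitor_odd = OperationalDesignDomain(
    ambient_temp_c=Range(5, 40),
    illuminance_lux=Range(50, 100_000),
    ambient_noise_dba=Range(0, 85),
    operational_states=("normal", "sensor fault", "low battery"),
    assumed_information=("current reading visible", "alert meanings from onboarding"),
    min_uses_per_month=30,
)
```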
Use case analysis is the method for exploring the ODD systematically. Each use case defines a user, a goal, a starting state, environmental conditions, and the sequence of interactions required. Edge cases and misuse scenarios are where the human factors requirements are found, not in the nominal flow. Explicitly analyze: What does a minimally qualified user do when the nominal path fails? What does the system provide to support recovery?
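A use case captured this way might look like the following sketch, with off-nominal scenarios carried as first-class data rather than an afterthought. The identifiers and scenario content are hypothetical, filled in from the dosage-rate example used earlier.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    trigger: str
    expected_user_action: str
    system_support_required: str   # what the design must provide for recovery

@dataclass
class UseCase:
    actor_profile: str             # reference to a bounded user profile
    goal: str
    starting_state: str
    environment: str
    nominal_flow: list[str]
    off_nominal: list[Scenario] = field(default_factory=list)

confirm_dosage = UseCase(
    actor_profile="patient, first use (Level 1)",
    goal="confirm current dosage rate",
    starting_state="device active, screen locked",
    environment="home, normal indoor lighting",
    nominal_flow=["wake display", "read dosage rate field"],
    off_nominal=[
        Scenario(
            name="stale reading",
            trigger="sensor data older than the update interval",
            expected_user_action="recognize the reading as stale, not current",
            system_support_required="explicit staleness indication, not a silent last value",
        ),
    ],
)
```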
What the Standards Actually Require You to Do
Three standards are central to this work across the domains in question. None of them are optional decoration — each imposes specific engineering obligations.
MIL-STD-1472 (Military Systems)
MIL-STD-1472 is the U.S. Department of Defense standard for human engineering. It is remarkable in that it goes far beyond general principles: it specifies numeric constraints on display legibility, control force, label typography, warning signal levels, and workspace dimensions. These are requirements constraints, not design guidelines.
For operational requirements in military systems, MIL-STD-1472 does two important things. First, it provides a floor on interface requirements that you can cite directly — minimum character height, maximum force required for control actuation, minimum luminance contrast for critical displays. Second, it forces you to consider the full operator population the military actually fields, which spans a wide range of physical characteristics, training levels, and operational roles. The standard’s tables on population anthropometrics exist precisely because “the operator” is not a single person.
The requirement-writing implication: cite the MIL-STD-1472 parameter in the requirement itself and make the verification method explicit. “The primary status display shall meet MIL-STD-1472G Section 5.2.3.3 minimum character height requirements for operator viewing distances of 45–75 cm” is a requirement. “The display shall be readable” is not.
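The geometry behind legibility requirements of this kind is simple enough to carry into early trade studies: a character must subtend a minimum visual angle at the worst-case viewing distance. The sketch below shows the calculation; the arcminute threshold used is a placeholder assumption, and the real value must be taken from the cited MIL-STD-1472 clause for the display class in question.

```python
import math

def required_character_height_mm(viewing_distance_mm: float,
                                 visual_angle_arcmin: float) -> float:
    """Character height that subtends a given visual angle at a given distance.

    The arcminute threshold must come from the applicable MIL-STD-1472
    clause; the value used below is a placeholder for illustration only.
    """
    theta_rad = math.radians(visual_angle_arcmin / 60.0)
    return 2.0 * viewing_distance_mm * math.tan(theta_rad / 2.0)

# Worst case for a 45-75 cm viewing envelope is the farthest eye position.
ASSUMED_MIN_ARCMIN = 16.0   # placeholder threshold, not the standard's figure
print(round(required_character_height_mm(750, ASSUMED_MIN_ARCMIN), 1), "mm")
```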
DO-333 and the Aviation Human Factors Framework
Aviation is unusual in that its user population is tightly qualified through type ratings, recurrent training, and regulatory oversight. This does not eliminate the human factors problem — it changes its character. The question in aviation shifts from “can the user operate this?” to “can the user operate this correctly under abnormal conditions with high workload, with the specific cognitive biases that high-time pilots develop, and without introducing new error modes when integrated into an existing flight deck?”
DO-333, the Formal Methods Supplement to DO-178C and DO-278A, is a methods standard rather than a human factors standard per se, but the broader DO-178C/DO-254 assurance ecosystem it belongs to imposes rigorous requirements on how human-machine interaction in safety-critical systems is analyzed and verified. The aviation domain’s operational requirements must address crew alerting hierarchy, mode awareness and mode confusion prevention, and the specific task allocation between pilot, co-pilot, and automation that the aircraft’s operational concept defines.
For systems being certified under 14 CFR Part 25 or EASA CS-25, human factors requirements derived from §25.1302 and its associated guidance (AC 25.1302-1, AMC 25.1302) govern. The certification basis forces the requirements to be explicit about: which crew member performs which task, under what conditions, with what information available, and how the system supports error detection and correction. This is use case analysis mandated by regulation, and it must produce traceable requirements, not just analysis artifacts.
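One way to keep these certification-driven distinctions explicit in the requirements set is to model the alert categories and task allocation directly, as in the simplified sketch below. The category semantics are paraphrased from the general warning/caution/advisory scheme of §25.1322; the exact definitions and presentation rules come from the regulation and its guidance, and the example content is hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class AlertLevel(Enum):
    WARNING = 1    # immediate crew awareness and immediate response
    CAUTION = 2    # immediate awareness, subsequent response
    ADVISORY = 3   # awareness required; response may be deferred

@dataclass
class AlertRequirement:
    condition: str
    level: AlertLevel
    responsible_crew_member: str   # task allocation from the operational concept
    required_indication: str       # what must be unambiguous on the flight deck

engine_fire = AlertRequirement(
    condition="engine fire detected",
    level=AlertLevel.WARNING,
    responsible_crew_member="pilot flying / pilot monitoring split per ops concept",
    required_indication="aural alert plus master warning plus dedicated annunciation",
)
```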
IEC 60601-1-6 (Medical Devices)
IEC 60601-1-6 is the usability engineering standard for medical electrical equipment, and it operates on the premise that you cannot assume user competence. Consumer-facing medical devices — glucose monitors, insulin pumps, cardiac rhythm monitors — may be operated by people with no clinical training, under emotional stress, possibly with impaired cognition or dexterity, often without another trained person available.
The standard requires a formal usability engineering process that includes: user needs analysis, use error analysis (not just hazard analysis — specifically what errors users are likely to make and why), formative usability evaluation, and summative validation with representative users. Each of these feeds requirements.
The critical obligation for requirements writers is that IEC 60601-1-6 does not let you invent requirements from assumption and then test against them: it requires you to derive requirements from empirical user data. Observed use errors become design constraints. A user population that consistently misreads a particular display state generates a requirement on that display, not a training recommendation. The standard explicitly rejects “the user will be trained not to do this” as a risk control for foreseeable misuse.
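A sketch of what “observed use errors become design constraints” could look like as a mechanical step is below. The data shape and the derived-requirement stub are illustrative assumptions; the point is that the output is a constraint on the design, with training-only controls excluded for foreseeable misuse.

```python
from dataclasses import dataclass

@dataclass
class UseErrorObservation:
    task: str
    error: str
    occurrences: int
    participants: int
    foreseeable: bool   # judged foreseeable misuse per the use error analysis

def derive_constraint(obs: UseErrorObservation) -> dict:
    """Turn an observed use error into a derived design requirement stub.

    Illustrative shape only: the constraint lands on the system design,
    never on a training recommendation, when the misuse is foreseeable.
    """
    rate = obs.occurrences / obs.participants
    return {
        "source": f"formative evaluation: {obs.task}",
        "observed_error_rate": rate,
        "constraint": f"The design shall prevent or make detectable: {obs.error}",
        "acceptable_controls": ["design change", "protective measure"],
        "prohibited_controls": ["training only"] if obs.foreseeable else [],
    }
```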
This is the most demanding standard of the three for operational requirements, and it is the right model for thinking about any system where the user population is heterogeneous and training cannot be enforced or verified at point of use.
Writing Requirements That Actually Constrain Design
Given the above, the structure of an effective operational requirement for variable-user systems looks like this:
- Actor: Identify the specific user profile. Not “the operator” — the specific bounded profile with its training assumptions.
- Condition: State the environmental and operational context, drawn from ODD analysis. This is where you encode the edge cases that matter.
- Action/State: What the user must be able to do, or what the system must support the user in doing.
- Criterion: A measurable performance boundary. Task completion rate, error rate, time-to-correct-recognition, acceptable false activation rate. This must be verifiable against the stated user population.
- Standard reference: Where applicable, cite the relevant clause of MIL-STD-1472, IEC 60601-1-6, or the applicable aviation regulation. This is not just for compliance — it tells the verification team what methodology applies.
One additional practice that catches failures early: write the verification method at the same time you write the requirement. If you cannot describe a test or analysis that would confirm the requirement is met, the requirement is not finished. Human factors requirements are frequently left in a state where they feel complete but are not verifiable — “the interface shall be intuitive” being the canonical example of a requirement that should never ship.
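Pulling the structure together, an operational requirement could be carried as a record like the one sketched below, with the verification method as a mandatory field rather than an afterthought. The requirement ID, field names, and content are hypothetical, filled in from the glucose-monitor example used earlier.

```python
from dataclasses import dataclass

@dataclass
class OperationalRequirement:
    req_id: str
    actor_profile: str        # bounded user profile, not "the operator"
    condition: str            # environment and operational state from ODD analysis
    action_or_state: str
    criterion: str            # measurable, verifiable against the stated population
    standard_reference: str
    verification_method: str  # written at the same time as the requirement

REQ_HMI_014 = OperationalRequirement(
    req_id="REQ-HMI-014",
    actor_profile="patient, first use, single-session onboarding (Level 1)",
    condition="normal indoor lighting, device in normal operating mode",
    action_or_state="identify the current dosage rate on the primary interface",
    criterion="correct identification within 5 s; error rate not exceeding 1% across the target population",
    standard_reference="IEC 60601-1-6 usability engineering process",
    verification_method="summative usability test with representative Level 1 users",
)
```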
Keeping Operational Context Alive Through Development
The problem most programs face is not writing good operational requirements at program start. The problem is that those requirements lose their connection to the human factors analysis that generated them as the program moves from definition through design to verification.
A requirement captured in a document with no link to its parent use case, no link to the user profile analysis that bounded it, and no link to the test protocol that verifies it is operationally vulnerable. Someone will modify it during a design review without understanding its origin. Someone will accept a partial verification result without knowing what population the test was conducted against.
This is where tooling becomes a genuine engineering decision rather than an administrative one. Flow Engineering is built around graph-based traceability — the kind that maintains live connections between user analysis artifacts, derived requirements, design decisions, and verification evidence. For human factors requirements in particular, where the chain from user observation to system test is long and passes through multiple engineering disciplines, that connected structure prevents the requirement from becoming an orphan. When a use case changes because of updated ODD analysis, Flow Engineering’s model surfaces which requirements are affected and which downstream design decisions need to be revisited. That is not a convenience feature — it is the mechanism that keeps operational context from being lost in translation between the people who understand the user and the people who build the system.
Teams working under IEC 60601-1-6 benefit directly from this structure: the standard’s requirement that use error analysis drive design constraints is only enforceable if you can trace from observed error to derived requirement to implemented control to validation evidence. That chain must be maintained through the entire development lifecycle.
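The underlying idea is tool-agnostic and small enough to sketch: represent artifacts as nodes, derivation as directed edges, and impact analysis as a reachability query. The snippet below is a generic illustration of that idea, not Flow Engineering’s data model or API, and the artifact identifiers are invented.

```python
from collections import defaultdict, deque

# Directed edges: source artifact -> artifacts derived from it.
# Identifiers are illustrative, not any particular tool's schema.
trace = defaultdict(set)
trace["UC-07 degraded-mode recovery"] |= {"REQ-HMI-014", "REQ-HMI-015"}
trace["OBS-USE-ERR-032"]             |= {"REQ-HMI-015"}
trace["REQ-HMI-014"]                 |= {"DES-DISPLAY-STALE-FLAG"}
trace["REQ-HMI-015"]                 |= {"DES-ALERT-TONE", "VAL-SUMMATIVE-003"}
trace["DES-DISPLAY-STALE-FLAG"]      |= {"VAL-SUMMATIVE-003"}

def downstream(artifact: str) -> set[str]:
    """Everything derived directly or transitively from an artifact,
    i.e. what must be revisited when that artifact changes."""
    seen, queue = set(), deque([artifact])
    while queue:
        for nxt in trace[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(downstream("UC-07 degraded-mode recovery"))
```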
Practical Starting Points
If you are beginning a program where user population variance is a known challenge, start here:
- Define your user profiles before writing any operational requirements. Do not let functional requirements drive the structure. User analysis is an engineering input.
- Conduct use case analysis for the edges, not just the nominal. The nominal flow rarely reveals where human factors requirements come from. Degraded mode, first use with minimal training, and time-pressure scenarios are where the constraints live.
- Map your domain standard to specific requirement parameters early. MIL-STD-1472 tables, IEC 60601-1-6 use error categories, and aviation crew alerting hierarchy requirements can be pre-mapped to requirement templates before any specific system design exists.
- Write verification methods in parallel with requirements. If you cannot describe the test, you do not yet have a requirement.
- Maintain traceability from user analysis to test, not just from requirement to test. The user analysis is the source. If it is not in the traceability graph, it will not survive the program.
The user with no technical background and the expert operator with 10,000 hours are both legitimate system users. Writing requirements that serve both — or that explicitly define where the system’s responsibility ends and the training program’s begins — is engineering work. It is not soft.