Sarcos Technology and Robotics: Engineering Safety Into the Human-Machine Interface
Most industrial robots operate in caged environments, separated from humans by physical barriers and procedural controls. When they fail, they stop. The hazard is bounded by distance. Sarcos builds robots that strap onto human beings and share the load path. When a Sarcos Guardian XO exoskeleton fails, the wearer is already inside it.
That single architectural fact — the human is not a bystander, the human is a component — restructures every assumption that functional safety engineers normally rely on.
What Sarcos Actually Builds
The Guardian XO is a full-body, powered exoskeleton designed for industrial workers performing repetitive heavy lifting: logistics, aerospace assembly, manufacturing floor operations. The system provides up to 200 pounds of lift assistance, with the goal of making the load feel nearly weightless to the wearer. The Guardian XT is a variant designed for dexterous manipulation at height, used in utility, energy, and defense applications.
These are not passive exoskeletons with spring return mechanisms. They are actively powered systems with onboard compute, hydraulic or electromechanical actuation, intent detection algorithms, and real-time control loops running at millisecond cycle times. The system reads the wearer’s motion intent and amplifies it. The distinction matters because active amplification creates active hazard potential in a way that a passive brace does not.
The Regulatory Gap Sarcos Has to Bridge
Every Sarcos safety engineer operates in territory that the standards bodies have not fully mapped. Two standards are relevant, and neither fits cleanly.
ISO 10218 covers industrial robot safety — manipulators, collaborative robots, fixed and mobile systems operating in manufacturing environments. It was written with the assumption that the robot and the human are spatially separated or at minimum distinguishable. Its risk reduction strategies include guarding, restricted workspaces, and speed-and-separation monitoring. None of those apply when the human and the robot occupy the same physical envelope.
ISO 13482 covers personal care robots — robots that assist humans in daily life, including wearable types. It explicitly addresses the class of robot that contacts or attaches to a human body. But it was designed around mobility assistance and elder care applications: lower speeds, lower forces, lower dynamic demands. Industrial augmentation operates at forces and velocities that far exceed the implicit assumptions of ISO 13482’s hazard models.
Sarcos has to satisfy the intent of both standards simultaneously. Where 10218 demands controlled force limits, they must demonstrate compliance against industrial load cycles. Where 13482 demands that contact forces remain within human injury thresholds, they must do so while the device is actively assisting with 200-pound lifts. There is no single standard that tells them how to do both at once. The requirements engineering challenge is to construct a safety case that bridges them — and to document that bridge in a way that regulators and customers can audit.
Decomposing “Amplify Force Without Injuring the Wearer”
The headline requirement — amplify the worker’s force without causing injury — sounds like one requirement. It is not. When Sarcos systems engineers trace that requirement into its functional decomposition, they encounter a hierarchy that spans mechanical, control, sensor, and software domains simultaneously.
At the mechanical level: joint range-of-motion limits must not exceed human anatomical limits under any actuator failure mode. That means mechanical hard stops, not just software limits. Software can fail. Mechanical hard stops do not.
At the actuation level: force output must be controlled with enough fidelity that the system cannot inadvertently overpower the wearer. A hydraulic actuator with a failed position sensor is not a safe actuator that happens to lack feedback — it is a potential injury mechanism. The safety architecture must detect sensor loss and transition to a safe state before the actuator acts on stale or absent data.
At the intent detection level: the system’s algorithms for inferring what motion the wearer wants must be conservative in ambiguous cases. The failure mode for overconfident intent detection is that the system initiates a motion the wearer did not want. At industrial load levels, that is a soft tissue injury waiting to happen. The requirement is not just “detect intent accurately” — it is “fail safe when intent is ambiguous,” and the definition of ambiguous has to be specified precisely enough to be testable.
At the human-machine interface level: the wearer needs to understand the system’s state at all times. If the system is in a degraded mode, the wearer must know before they attempt a lift that would exceed the degraded system’s safe operating envelope. Haptic feedback, visual indicators, and audible alerts all become safety-relevant outputs, not just usability features.
Each of these decomposes further. The coupling between them is dense. A change to the intent detection algorithm changes the acceptable sensor latency. A change to the mechanical joint limits changes the software limits that serve as the first line of defense. Managing those dependencies without losing coherence across the requirements set is the operational challenge.
Intent Detection as a Safety-Critical Function
The Guardian XO does not require the wearer to press a button to initiate a lift. It reads motion intent from body kinematics — posture, load distribution, acceleration vectors — and pre-stages the actuators to provide assistance before the lift completes. That anticipatory behavior is what makes the product feel natural and reduces fatigue. It is also the system’s most demanding safety engineering problem.
Intent detection is a classification problem operating in real time on noisy sensor data from a moving human body. Classification problems have false positive and false negative rates. In this context: a false positive is the system amplifying a motion the wearer did not intend. A false negative is the system failing to assist a motion the wearer did intend, which may cause the wearer to overexert against the system’s inertia.
Both failure modes cause harm. The safety requirements for intent detection must therefore specify not just accuracy targets but asymmetric failure mode tolerances — which type of error is more acceptable in which operating context — and the response behavior when confidence falls below threshold. Those are not trivial requirements to write. They require the requirements engineers and the algorithm developers to share a precise vocabulary for expressing probabilistic behavior as deterministic safety boundaries.
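One way to express such a requirement semi-formally is as a context-indexed confidence floor: the probabilistic estimate is mapped to a deterministic boundary, and the boundary is higher where a false positive is the costlier error. The contexts and numbers below are hypothetical, chosen only to show the asymmetry, not to represent Sarcos's actual thresholds.

```python
from enum import Enum, auto

class Decision(Enum):
    ASSIST = auto()
    HOLD = auto()   # fail-safe: provide no new amplification

# Hypothetical per-context confidence floors. In a loaded context,
# amplifying an unintended motion (a false positive) is the costlier
# error, so the bar for acting is higher.
CONFIDENCE_FLOOR = {
    "unloaded": 0.80,
    "loaded": 0.95,
}

def decide(intent_confidence: float, context: str) -> Decision:
    """Map a probabilistic intent estimate to a deterministic safety
    boundary: below the context's floor, the system holds rather
    than acts."""
    return (Decision.ASSIST
            if intent_confidence >= CONFIDENCE_FLOOR[context]
            else Decision.HOLD)
```

Written this way, the requirement is directly testable: a controls engineer, a safety engineer, and an auditor all read the same floor values and the same fail-safe default.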
This is exactly the kind of requirement that breaks document-based requirements management. A natural language description of intent detection confidence thresholds will be interpreted differently by a controls engineer, a safety engineer, and a certification auditor. The requirement needs to be expressed in a form that is unambiguous across all three readers — ideally with formal or semi-formal notation — and it needs to be traced explicitly to the test procedures that validate it.
Emergency Stop Is Not What You Think It Is
In a conventional industrial robot, emergency stop means: cut power to the actuators, apply mechanical brakes, and stop all motion immediately. The robot was holding something. Now it drops it. That is acceptable because the robot and the hazard are spatially separated from the human.
In a wearable exoskeleton, the wearer may be in mid-lift when the E-stop is triggered. The exoskeleton’s structural frame may be carrying load that the wearer’s unaugmented body cannot safely support. An immediate power cut in that state could drop the wearer — or drop the load onto the wearer.
Sarcos therefore cannot implement a simple E-stop. Their emergency stop behavior has to be state-aware. If the wearer is upright and unloaded, an immediate controlled stop is appropriate. If the wearer is mid-lift with significant load, the stop must include a controlled set-down sequence that safely lowers the load before removing powered assistance. That set-down sequence itself has to be fail-safe — it must execute even if the primary compute is faulted, which means it lives in a separate, safety-rated execution environment.
The requirements for E-stop behavior at Sarcos are therefore not a single requirement. They are a state machine specification describing the transition behaviors from each operational state to each safe state, with explicit conditions on what constitutes each state and what sensor failures can corrupt state detection. That state machine is a requirements artifact, and it has to stay consistent with the control software implementation through every revision cycle.
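A minimal sketch of such a state-aware E-stop might look like the following. The state names, the transition table, and the conservative default are all invented for illustration; a real specification would enumerate far more states and the sensor-failure conditions that can corrupt state detection.

```python
from enum import Enum, auto

class OpState(Enum):
    UPRIGHT_UNLOADED = auto()
    MID_LIFT_LOADED = auto()
    SET_DOWN = auto()    # controlled lowering under power
    SAFE_STOP = auto()   # powered assistance removed

# Transition table on an E-stop trigger: the safe state reached
# depends on the operational state at the moment of the trigger.
ESTOP_TRANSITIONS = {
    OpState.UPRIGHT_UNLOADED: OpState.SAFE_STOP,
    OpState.MID_LIFT_LOADED: OpState.SET_DOWN,
    OpState.SET_DOWN: OpState.SET_DOWN,   # finish the set-down first
}

def on_estop(state: OpState) -> OpState:
    """If state detection itself is suspect (an unrecognized state),
    the conservative default is the set-down sequence, not an
    immediate power cut."""
    return ESTOP_TRANSITIONS.get(state, OpState.SET_DOWN)

def on_setdown_complete(state: OpState) -> OpState:
    """Only a completed set-down may transition to full stop."""
    return OpState.SAFE_STOP if state is OpState.SET_DOWN else state
```

The table itself is the requirements artifact: every cell is a reviewable, testable claim about what the system does from a given state.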
The Coupling Density Problem
What distinguishes Sarcos’s requirements engineering challenge from a conventional industrial robot program is not volume. A large aerospace system has more requirements. What distinguishes it is coupling density.
In a conventional system, requirements tend to cluster by subsystem with relatively sparse interfaces between clusters. A hydraulic actuator’s requirements mostly reference the hydraulic actuator’s behavior. At Sarcos, the human body is an uncontrolled, variable, unpredictable subsystem that interfaces with every other subsystem simultaneously. A requirement on joint torque limits has a dependency on human biomechanics data. A requirement on software fault response has a dependency on how long the mechanical system can remain static without injuring the wearer. A requirement on the haptic alert system has a dependency on whether the wearer’s attention is likely to be on the device or on the external task.
Every one of those dependencies is a traceability link that has to be actively managed. When a design decision changes — and design decisions change constantly in a hardware system still in active development — the impact has to propagate through the requirements graph to identify what was invalidated. Systems that manage requirements in documents or spreadsheets lose that propagation. The link exists in the engineer’s head, not in the tool.
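The propagation the tooling has to perform amounts to a reachability query over the dependency graph: given a changed artifact, walk every outgoing "depends on" edge transitively and flag what may have been invalidated. The requirement IDs and edges below are invented for illustration.

```python
from collections import deque

# Hypothetical edges: DEPENDS_ON_ME[A] lists the artifacts that
# depend on A, so a change to A may invalidate each of them.
DEPENDS_ON_ME = {
    "biomech-torque-data": ["REQ-joint-torque"],
    "REQ-joint-torque": ["REQ-sw-torque-limit", "TEST-torque-cycle"],
    "REQ-sw-torque-limit": ["TEST-sw-limits"],
}

def impacted(changed: str) -> set[str]:
    """Breadth-first walk of the requirements graph, collecting every
    downstream artifact a change may have invalidated."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for dep in DEPENDS_ON_ME.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

In a document or spreadsheet, this walk happens in an engineer's head, link by link; in a graph-based tool it is a query that cannot forget an edge.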
Honest Assessment
Sarcos is attempting something that the industrial robot and personal care robot industries have never had to do together: build a system that is simultaneously strong enough to be useful in industrial work and safe enough to be attached to a human body at all times. The engineering is sophisticated. The product is real and deployed.
The systems engineering challenge they face — bridging two standards that do not overlap, managing requirements with dense cross-domain coupling, specifying AI-adjacent behaviors like intent detection in testable terms — represents some of the hardest requirements work in the robotics industry today.
Whether their current requirements management practices are equal to that challenge is not visible from outside the organization. What is visible is that the nature of the problem demands tooling and process that can represent dependency graphs, enforce traceability at the link level, and propagate change impact across domain boundaries. Organizations working on similarly coupled human-machine systems are increasingly finding that tools like Flow Engineering — built around graph-based requirements models rather than document hierarchies — offer a better fit for this class of problem than traditional requirement databases where traceability is manual and adjacency is implicit.
The stakes at Sarcos are direct. A missed dependency in a requirements document does not produce a late program or a cost overrun. It produces an injury. That is a different kind of motivation to get the architecture of your requirements process right.