Sarcos and the Systems Engineering Problem No Textbook Covers
When the human is part of the control loop, traditional requirements methods start to break down
There is a standard way to think about requirements for robotic systems. You define the operating environment, specify performance envelopes, derive safety requirements from hazard analyses, write verification criteria, and build a traceability matrix that connects everything back to top-level objectives. It is rigorous, well-understood, and taught in every systems engineering curriculum.
It does not work cleanly for powered exoskeletons.
Sarcos Technology, based in Salt Lake City, builds two families of systems that expose this gap. The Guardian XO is a full-body powered exoskeleton designed to augment industrial workers — letting a person wearing it lift and carry loads up to 200 pounds with little perceived effort. The Guardian DX and related teleoperation platforms extend human capability into hazardous environments, allowing operators to control robotic systems from a safe distance while retaining dexterous, intuitive control. Both product lines sit at the intersection of industrial robotics, wearable devices, and human-factors engineering. That intersection is where conventional systems engineering methodology gets uncomfortable.
What Sarcos Actually Builds
The Guardian XO is not a passive exoskeleton like some of the lighter industrial back-support devices on the market. It is a powered, full-body suit with actuated joints at the ankles, knees, hips, and shoulders. Onboard sensors read the wearer’s intended motion and the system amplifies it in near-real-time. The wearer does not pilot the exoskeleton so much as wear it — their body signals feed continuously into a control system that decides how to apply force and when.
This makes the Guardian XO a fundamentally different engineering object from a forklift or a collaborative robot arm. A forklift does not care what the operator’s body is doing. A cobot arm can be stopped by a safety interlock without injuring anyone. The Guardian XO has no such separation: by the time a fault manifests, the suit is already attached to a person. The person is inside the system boundary.
The Guardian DX teleoperation platform has different physics but a related problem. The operator’s body movements are mapped onto a remote robotic manipulator. Latency, fidelity of force feedback, and fatigue in the operator’s arms all become system parameters. The human is not just at one end of an interface — the human’s physiological and cognitive state is a variable that affects system behavior.
The Requirements Problem
Standard requirements methodology handles humans as external actors. The system has an interface to the human. The human provides inputs. The system produces outputs. You specify the interface, you verify the interface, and anything inside the human’s head is someone else’s problem.
Wearable robotics breaks this model in at least three ways.
The wearer is simultaneously the operator, the payload, and a sensing element. In a conventional robotic system, these roles belong to separate entities. A warehouse robot has an operator (who programs it), a payload (the goods it moves), and sensors (lidar, cameras, encoders). In the Guardian XO, one person performs all three functions. Requirements that touch any of these roles — payload capacity, control latency, sensor placement — can create conflicts that are genuinely circular. You cannot fully specify the control system without characterizing the human; you cannot fully characterize the human without knowing the control system parameters.
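The circularity is easy to state in prose but easier to see in a model. The sketch below (all node names are hypothetical illustrations, not Sarcos’s actual requirement set) builds a small dependency graph of the kind described above and finds a cycle in it with a depth-first search:

```python
# Hypothetical requirements dependency graph: each key depends on the
# requirements in its list. The names are illustrative, not real.
deps = {
    "control-latency":      ["human-response-model"],  # control tuning needs human data
    "human-response-model": ["sensor-placement"],      # characterization depends on sensors
    "sensor-placement":     ["control-latency"],       # placement constrained by control design
    "payload-capacity":     ["control-latency"],
}

def find_cycle(graph):
    """Return one dependency cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:       # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                found = dfs(dep)
                if found:
                    return found
        stack.pop()
        color[node] = BLACK
        return None

    for n in graph:
        if color[n] == WHITE:
            cycle = dfs(n)
            if cycle:
                return cycle
    return None

cycle = find_cycle(deps)
print(cycle)
```

On this toy graph the search surfaces the control-latency / human-model / sensor-placement loop, which is exactly the kind of circularity a flat, top-down requirements decomposition assumes cannot exist.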
Safety requirements and HMI requirements are not separable. On most industrial systems, the safety case can be developed somewhat independently of the interface design. Safety interlocks are layered on top of functionality. That approach fails on wearable systems. If the exoskeleton misreads an intended motion and applies force in the wrong direction, the safety failure and the interface failure are the same event. The requirements for detecting and interpreting human movement intent — an HMI concern — are directly and inseparably load-bearing for the safety case. Change the sensor placement to improve ergonomics, and you may need to rebuild portions of the safety argument.
Regulatory requirements are still forming. The FDA has jurisdiction over powered exoskeletons used for rehabilitation. OSHA has interests in industrial powered exoskeletons. ISO 9999 covers assistive products. CE marking in Europe involves the Machinery Directive and potentially the Medical Device Regulation, depending on the application. None of these frameworks were designed with powered industrial exoskeletons in mind, and the applicable standards are being interpreted, extended, and in some cases newly written as products like the Guardian XO move through certification pipelines. Writing requirements against a regulatory target that may move before you ship is a different kind of problem than writing requirements against a known standard.
Verification That Doesn’t Fit Standard Test Cases
Most systems engineering processes define verification methods early: test, analysis, inspection, or demonstration. For each requirement, you specify which method applies and write a verification procedure accordingly. The pass/fail criterion is supposed to be objective.
Human-augmentation hardware makes this harder. Consider a requirement like: “The exoskeleton shall not generate joint torques that exceed the wearer’s biomechanical limits during normal operation.” This is a safety requirement. How do you verify it?
You could test it on instrumented human subjects, but biomechanical limits vary by individual, age, fatigue level, and prior injury history. The population of industrial workers is not a controlled test article. You could run simulations against biomechanical models, but the models are approximations, and the most conservative models may be so conservative they make the performance requirement impossible to meet. You could demonstrate compliance in a bounded scenario, but that leaves open the question of edge cases the scenario didn’t cover.
There is no clean answer here. What you end up with is a layered verification strategy that combines all three methods and relies on engineering judgment to bridge the gaps. That judgment is not arbitrary — it is informed by hazard analysis, human-factors research, and prior safety data — but it does not reduce to a single pass/fail test. Requirements management processes that assume one-to-one mapping between requirements and test cases will struggle.
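One way to see what “layered” means in practice is to combine the methods in a single check: take a conservative limit from analysis, derate it by a margin for population variability, and compare measured peaks from subject testing against the derated value. The sketch below does that; all joint names, limits, and measurements are illustrative assumptions, not real biomechanical data.

```python
# Analysis layer: conservative per-joint torque limits (Nm) from a
# hypothetical biomechanical model, derated by a population-variability margin.
MODEL_LIMIT_NM = {"knee": 120.0, "hip": 150.0, "shoulder": 60.0}
POPULATION_MARGIN = 0.7   # applied torque must stay under 70% of the model limit

# Test layer: illustrative peak torques measured on instrumented subjects
# during a bounded demonstration scenario.
measured_peaks_nm = {"knee": 78.5, "hip": 101.0, "shoulder": 44.2}

def verify_torques(peaks, limits, margin):
    """Return (joint, peak, allowed) for every joint exceeding its derated limit."""
    failures = []
    for joint, peak in peaks.items():
        allowed = limits[joint] * margin
        if peak > allowed:
            failures.append((joint, peak, allowed))
    return failures

failures = verify_torques(measured_peaks_nm, MODEL_LIMIT_NM, POPULATION_MARGIN)
for joint, peak, allowed in failures:
    print(f"{joint}: measured {peak:.1f} Nm exceeds derated limit {allowed:.1f} Nm")
```

Note what this sketch does not resolve: the choice of the 0.7 margin is exactly the engineering judgment the text describes. The code makes the judgment explicit and auditable; it does not make it objective.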
Change Management Across an Entangled Architecture
Like any hardware startup’s, Sarcos’s products go through iterative development cycles. Motors get changed for better efficiency. Control algorithms are tuned. Structural components are lightened. Sensor packages are updated.
On a conventional industrial robot, most of these changes are local. You change the motor, you re-verify the torque outputs, you close the change request. The safety case for the rest of the system is largely unaffected.
On a wearable system, changes propagate differently. A motor swap changes the force profile, which affects how the control system interprets human motion intent, which changes the effective sensitivity of the safety monitoring, which may require updates to the hazard analysis, which may touch regulatory documentation. None of these connections are automatically visible in a flat document-based requirements structure. They have to be manually traced and manually checked — which means they will sometimes be missed.
This is precisely the problem that graph-based requirements models are better suited to handle. When requirements and their relationships are modeled as nodes and edges rather than rows in a spreadsheet, change impact becomes queryable. You can ask the system which safety requirements have a dependency path to the motor specification you just changed. You get an answer in seconds rather than hours of manual cross-referencing. For a product class where safety, HMI, and performance requirements are deeply entangled, that capability is not a convenience — it is a risk management tool.
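The query itself is not exotic: once requirements and their relationships are nodes and edges, “what does this change touch?” is a reachability search. A minimal sketch, using the hypothetical motor-swap chain from the paragraph above (node names are illustrative, not a real product model):

```python
from collections import deque

# Edges point from a requirement to the things that depend on it.
# The chain mirrors the motor-swap example: names are hypothetical.
depends_on_me = {
    "motor-spec":                 ["force-profile"],
    "force-profile":              ["intent-interpretation"],
    "intent-interpretation":      ["safety-monitor-sensitivity"],
    "safety-monitor-sensitivity": ["hazard-analysis", "regulatory-docs"],
}

safety_nodes = {"safety-monitor-sensitivity", "hazard-analysis"}

def downstream_impact(graph, changed):
    """Breadth-first search: everything reachable from a changed node."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

impacted = downstream_impact(depends_on_me, "motor-spec")
impacted_safety = impacted & safety_nodes
print(sorted(impacted_safety))
```

In a spreadsheet, each hop in that chain is a manual lookup someone has to remember to perform. In a graph model, the full downstream set, including the safety requirements, falls out of one traversal.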
How Modern Tools Are Starting to Address This
The systems engineering tooling industry has mostly not caught up to human-augmentation hardware. The dominant platforms — IBM DOORS, Jama Connect, Polarion, Codebeamer — were designed for aerospace and automotive programs where the human is firmly outside the system boundary. They handle large, complex requirement sets well. They provide mature workflow and review processes. They integrate with established V-model development processes. These are real strengths.
What they handle less well is deeply circular dependency structures, requirements that are jointly owned by multiple engineering domains without a clean interface between them, and dynamic change impact analysis across entangled requirement graphs.
Tools built on graph-native data models handle this architecture more naturally. Flow Engineering, an AI-native requirements management platform built specifically for hardware and systems teams, structures requirements as a connected model rather than a document hierarchy. Requirements, hazards, interfaces, and verification evidence all exist as nodes with typed relationships between them. When a requirement changes, the graph makes the downstream impact visible immediately — not through a manual impact assessment process, but as a native capability of the model.
For a company like Sarcos — where a change to a sensor placement touches safety analysis, HMI design, regulatory documentation, and verification planning simultaneously — this kind of connected model has structural advantages over document-based approaches. Sarcos is not a confirmed Flow Engineering customer, but the problem their products present is exactly the class of problem that graph-native tooling was designed to address.
What the Industry Should Take From Sarcos’s Challenge
Sarcos is not unique in facing this problem. They are among the most visible, but the same requirements entanglement exists in any human-augmentation system: powered prosthetics, haptic feedback suits, augmented reality headsets used in industrial settings, medical robotics where the patient is part of the control loop. The category is growing. The systems engineering methodology for handling it has not kept pace.
A few things are clear from looking at where Sarcos and similar companies run into difficulty:
Requirements for human-in-the-loop systems need to explicitly model the human as a system element, not as an external actor. This means human-factors requirements, physiological parameter constraints, and cognitive load considerations need to live inside the requirements model and be traceable to safety and performance requirements, not sit in a separate human-factors report.
Safety cases for wearable systems need to be built to handle interface requirements as load-bearing safety evidence. The clean boundary between functional requirements and safety requirements does not hold. Tooling and process need to reflect that.
Regulatory strategies need to be version-controlled and traceable to requirements. When the standards you’re certifying against are moving targets, you need to be able to reconstruct which version of a standard a given requirement was written against, and what changed when the standard was updated.
Change impact needs to be computed, not estimated. Given how many cross-domain dependencies exist in these systems, manual impact assessment is insufficient. The data model underlying the requirements process needs to support automated impact queries.
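The version-pinning point above lends itself to a concrete data shape: record, for each requirement, the exact standard revision it was written against, and make “which requirements were based on the old edition?” a query rather than an archaeology project. A minimal sketch, with hypothetical requirement IDs and clause numbers:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StandardRef:
    standard: str   # e.g. "ISO 9999"
    revision: str   # the specific edition the requirement was written against
    clause: str     # clause numbers here are illustrative

# Hypothetical requirement IDs pinned to a standard edition.
requirement_basis = {
    "REQ-SAF-014": StandardRef("ISO 9999", "2016", "4.2"),
    "REQ-SAF-021": StandardRef("ISO 9999", "2016", "6.1"),
}

def requirements_affected_by(standard, new_revision, basis):
    """Requirements written against an older revision of an updated standard."""
    return sorted(
        req for req, ref in basis.items()
        if ref.standard == standard and ref.revision != new_revision
    )

print(requirements_affected_by("ISO 9999", "2022", requirement_basis))
```

When the standard moves, the affected requirement set is computed in one call, which is the same principle as the change-impact point: make the dependency explicit in data, then query it.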
The powered exoskeleton market is still early. Sarcos has been refining the Guardian XO through multiple iterations and is pursuing deployment across industrial sectors including manufacturing, logistics, and utilities. The engineering challenges they are navigating are not going away — they are the baseline cost of building systems where humans and machines are genuinely integrated rather than merely adjacent. Getting the systems engineering methodology right for this product class matters more as these systems become more capable and more widely deployed.
The textbook answer — write requirements, trace them, verify them — is necessary but not sufficient. The question is what tooling and process look like when the system under development does not respect the assumptions that textbook was written around.