Collaborative Robots and the Functional Safety Reckoning
Current State: Deployment Is Outrunning Engineering Discipline
Collaborative robot shipments crossed 80,000 units annually in 2025 and show no sign of plateauing. The pitch is straightforward: deploy a cobot next to a human worker, eliminate fixed guarding costs, gain flexibility, and reassign the worker rather than eliminating the role. The business case writes itself.
The safety engineering case is harder to write. And in many facilities, nobody is writing it at all.
The core challenge isn’t that cobots are inherently unsafe. Universal Robots, FANUC, ABB, KUKA, and the rest of the major platforms have invested heavily in certified safety-rated monitored stops, force/torque limiting, and speed monitoring. The hardware is often genuinely capable. The problem is that “the robot passed certification” is not the same as “the deployment is safe.” Every installation is a new system. Every application creates new hazards. And the organizational model for collaborative robot deployments — where an integrator configures a robot platform they didn’t design, installs it in a facility they don’t own, to do a task that will evolve — was not built with rigorous functional safety case development in mind.
The collision between rapid commercial deployment and serious safety engineering discipline is what the industrial automation industry is now reckoning with.
The Standards Landscape: What ISO 10218 and ISO TS 15066 Actually Require
Understanding where the engineering challenge comes from requires understanding what the standards do and don’t specify.
ISO 10218-1 covers robot manufacturers. It defines requirements for industrial robot design — safety-rated control functions, stop categories, force and power limits, and the technical documentation OEMs must provide. When a robot platform ships with a “PL d” or “SIL 2” rated safety function, that rating is grounded in ISO 10218-1 and the underlying functional safety standard, IEC 62061 or ISO 13849-1.
ISO 10218-2 covers integrators. It requires that the integrator conduct a risk assessment for the complete robot system — not just the robot — and implement risk reduction measures sufficient to achieve acceptable residual risk. The integrator is responsible for the system, including tooling, fixtures, workpieces, the facility environment, and the humans who will work near it.
ISO TS 15066 is a technical specification — not a full standard — that addresses collaborative operation specifically. It defines four collaborative operation modes: Safety-Rated Monitored Stop (SRMS), Hand Guiding, Speed and Separation Monitoring (SSM), and Power and Force Limiting (PFL). For PFL — the mode most commonly associated with “cobot” deployments — ISO TS 15066 provides a table of biomechanical limits: maximum allowable transient and quasi-static contact forces and pressures for 29 specific body areas, from the skull down to the lower legs.
These limits are the most operationally specific guidance the standard provides. They are also the most commonly misapplied.
The biomechanical limits in ISO TS 15066 Annex A were derived from pain threshold research conducted on test subjects under controlled conditions. They represent the force levels at which contact becomes painful, not the level at which injury occurs, and the research methodology has legitimate critics. More practically: the limits require that the actual contact forces generated during an inadvertent collision in a given application, at the configured payload and speed, stay below the table values. This requires measurement, not just assertion. Most deployments do not include formal contact force measurement campaigns. Many integrators configure force thresholds on the robot controller to values that seem conservative and move on.
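As a sketch of what a measurement-backed check looks like, the comparison can be made explicit in a few lines. The limit values and the factor-of-two transient multiplier below are illustrative stand-ins, not a reproduction of the standard; a real analysis must use the actual Annex A tables for the body areas exposed in the specific application.

```python
# Sketch: comparing measured worst-case contact forces against
# ISO/TS 15066-style body-area limits. The numbers below are
# ILLUSTRATIVE placeholders -- real work must use the Annex A tables.

QUASI_STATIC_FORCE_LIMITS_N = {
    "hand_finger": 140.0,  # illustrative value, verify against Annex A
    "chest": 140.0,        # illustrative value
    "face": 65.0,          # illustrative value
}

# Annex A treats transient (dynamic) limits as a multiple of the
# quasi-static values; a factor of 2 is commonly cited.
TRANSIENT_MULTIPLIER = 2.0

def contact_within_limit(body_area: str, measured_force_n: float,
                         transient: bool) -> bool:
    """True if a measured worst-case contact force stays within the
    (illustrative) limit for the given body area."""
    limit = QUASI_STATIC_FORCE_LIMITS_N[body_area]
    if transient:
        limit *= TRANSIENT_MULTIPLIER
    return measured_force_n <= limit

# Each worst-case value from the measurement campaign must pass,
# not just a nominal configuration:
assert contact_within_limit("hand_finger", 120.0, transient=False)
assert not contact_within_limit("face", 80.0, transient=False)
```

The point of the sketch is the input: `measured_force_n` comes from an instrumented contact measurement at the configured payload and speed, not from the controller's configured threshold.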
This is the gap between standards compliance and safety case development, and it runs through almost every cobot deployment in the field.
Fixed Guarding to Dynamic Safety Functions: Who Owns the Complexity Now?
Traditional industrial robot cells are engineered around a simple safety concept: the robot operates in a physically separated space, and interlocked guarding prevents human entry while the robot is in motion. The risk assessment is real work, but the safety concept is structurally clean. Contact between human and robot is prevented. The primary hazard — struck by a fast-moving, high-inertia robot — is addressed through separation.
Collaborative deployments eliminate or reduce that physical separation by design. The safety concept shifts from “prevent contact” to “permit safe contact” or “monitor the space and respond dynamically.” This change has profound implications for what needs to be engineered, verified, and documented.
In a fixed-guard cell, the safety function is binary: guard closed, robot enabled; guard open, robot stopped. The safety integrity requirement is met by the interlocking device and the robot’s safety-rated stop function. Functional safety engineers have decades of experience with this architecture.
In a collaborative cell using SSM, the safety function is continuous and conditional: a sensor (typically a laser scanner or 3D vision system) monitors the human-robot shared workspace, and the robot’s speed and separation are adjusted in real time based on human position. The safety integrity requirement now spans the sensor, the safety controller, the robot’s safety-rated speed monitoring function, and the logic that ties them together. The sensor has detection probability characteristics that must be validated against the hazard geometry. The response time of the complete system — sensor scan time, controller processing, robot deceleration — must be shorter than the separation distance divided by the approach speed.
This is systems engineering. It requires modeling the safety function as an integrated architecture, allocating safety integrity requirements across components, and verifying the complete loop. It is not the kind of analysis that comes naturally to mechanical engineers who have always worked with fixed guarding, and it is not what most automation integrators were built to deliver.
The result is a skills gap that shows up in incident investigations: deployments where the robot was certified, the scanner was certified, and the safety relay was certified — but no one had formally analyzed whether the complete system, as installed, achieved the intended safety function under all foreseeable operating conditions, including maintenance, process changeover, and abnormal operation.
The Integrator-OEM Accountability Divide
The division of responsibility between robot OEMs and system integrators is codified in ISO 10218, but in practice it generates accountability gaps that neither party is fully positioned to close.
OEMs provide robots with certified safety functions and documentation describing what those functions do under what conditions. That documentation is legitimately technical and increasingly detailed — UR’s safety system whitepapers, for example, run to dozens of pages covering PL/SIL ratings, stop times, force and torque monitoring behavior, and validation methodology. The OEM cannot certify the system because they don’t build the system. They certify the robot.
Integrators build the system. They select and configure the robot, design the tooling, define the application cycle, choose and position any additional safety devices, and are responsible for the complete risk assessment under ISO 10218-2. The integrator generates the Declaration of Conformity that says the system meets the Machinery Directive (in Europe) or equivalent requirements.
The gap lives in the intersection: the integrator’s risk assessment depends on understanding the robot’s safety function behavior in detail — its actual force output at a given speed and payload, its stop times under different load conditions, its scanner integration response time. This data exists in OEM technical documentation, but the format is not standardized, the completeness varies significantly between manufacturers, and the mapping between OEM data and integrator analysis is left to the integrator to construct.
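One small example of that mapping: turning a discrete OEM stop-time table into a value usable at the integrator's actual operating point. The table values here are hypothetical placeholders; rounding up to the next tabulated point, rather than interpolating, is one defensible conservative choice when the curve between published points is uncharacterized.

```python
from bisect import bisect_left

# Hypothetical OEM stop-time data at rated payload, published at a few
# discrete TCP speeds. Real values come from the functional safety
# datasheet, and vary with payload and pose as well as speed.
SPEEDS_MPS = [0.25, 0.5, 1.0, 1.5]
STOP_TIMES_S = [0.10, 0.15, 0.25, 0.40]

def conservative_stop_time(v_tcp: float) -> float:
    """Stop time at TCP speed v_tcp, rounded up to the next tabulated
    speed so the estimate stays on the safe side."""
    if v_tcp > SPEEDS_MPS[-1]:
        raise ValueError("speed outside characterized range; ask the OEM")
    i = bisect_left(SPEEDS_MPS, v_tcp)  # first tabulated speed >= v_tcp
    return STOP_TIMES_S[i]

# 0.6 m/s is not tabulated, so the 1.0 m/s figure is used:
assert conservative_stop_time(0.6) == 0.25
```

The `ValueError` branch matters as much as the lookup: operating points outside the OEM's characterized envelope are a gap in the safety case, not something to extrapolate past.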
In complex deployments — high payload cobots, multi-robot collaborative cells, human-robot collaborative assembly with variable parts and processes — this is genuinely hard technical work. The integrator needs engineers who can read the OEM’s functional safety datasheet and translate it into system-level safety arguments. Many don’t have them.
The practical consequence is safety cases that are assembled from OEM certificates and generic risk assessment templates, with limited traceability between the identified hazards, the risk reduction measures, and the verified residual risk. This satisfies audit processes that check for the presence of documentation. It does not constitute a robust safety case.
The liability question, when incidents occur, gets complicated fast. OEMs point to the integrator’s system-level responsibility. Integrators point to the robot’s certified safety functions. End users, who often lack the engineering depth to evaluate either claim, are left holding the residual risk they didn’t fully understand when they signed the purchase order.
What’s Actually Happening vs. the Hype
The automation press tends to cover cobot safety in one of two modes: either announcing that cobots are now so safe that formal analysis is almost unnecessary, or warning that the industry is sleepwalking into a liability crisis. Both framings overstate their case.
The technology is genuinely capable. Modern cobot platforms with properly configured PFL can operate in shared workspaces with humans and generate contact forces well within ISO TS 15066 limits — if the application is designed for it. Low-payload, low-speed applications doing simple tasks near humans are substantially different from high-payload collaborative assembly tasks where the robot’s kinetic energy during a worst-case collision is significant.
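The payload-and-speed sensitivity can be made concrete with the simplified two-body contact model ISO TS 15066 Annex A uses for transient contact, F = v·√(μ·k), where μ is the reduced mass of the human body part and the moving robot. The masses and spring constant below are illustrative assumptions, not measured values.

```python
import math

# Simplified two-body transient contact model in the spirit of
# ISO/TS 15066 Annex A:  F = v_rel * sqrt(mu * k),
# with reduced mass  mu = 1 / (1/m_human + 1/m_robot).
# All numeric values below are illustrative assumptions.

def transient_contact_force(v_rel: float, m_human: float,
                            m_robot_effective: float, k_body: float) -> float:
    mu = 1.0 / (1.0 / m_human + 1.0 / m_robot_effective)  # reduced mass, kg
    return v_rel * math.sqrt(mu * k_body)

# Assumed: 0.5 m/s relative speed, chest effective mass 40 kg,
# robot effective mass 15 kg (moving mass fraction plus payload),
# chest spring constant 25 kN/m.
f = transient_contact_force(v_rel=0.5, m_human=40.0,
                            m_robot_effective=15.0, k_body=25_000.0)
# f comes out around 260 N -- already well above a ~140 N chest-style
# quasi-static limit, even at this modest speed and effective mass.
```

Because force scales linearly with speed and with the square root of effective mass, a higher-payload robot at production speed blows through the limits far faster than intuition suggests.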
The engineering discipline is genuinely lagging. The ISO standards framework is functional but demanding. ISO TS 15066 remains a technical specification, not a full standard, and the effort to fold its requirements into the revised ISO 10218 parts has moved slowly. The biomechanical limit data is still being refined. Scanner validation for SSM applications doesn’t have a single agreed methodology. These are real gaps that thoughtful standards bodies are aware of and working on, but they create headroom for inconsistent practice.
The organizational model is genuinely mismatched to the technical requirement. The integrator ecosystem that deploys cobots was built for machine building, not safety case development. The systems engineering discipline that functional safety analysis requires — structured hazard analysis, safety function architecture, verification against requirements, traceable documentation — is available in the industry, but concentrated in aerospace, defense, and automotive OEM contexts, not in SME automation integrators deploying 10 cobots per year.
The Systems Engineer Shortage Is the Actual Constraint
If there is a single practical constraint on the industry’s ability to deploy collaborative robots safely at scale, it is the shortage of engineers who can bridge robot application knowledge and functional safety case development.
This is a narrow but specific skill profile. The engineer needs to understand enough about robot kinematics, payload dynamics, and safety-rated control function architecture to evaluate what the robot actually does during collaborative operation. They need to understand ISO 13849-1 or IEC 62061 well enough to assess the performance level or SIL of a safety function architecture. They need to be able to conduct a structured risk assessment — not just fill in a template — and construct a traceable safety case that connects hazards to risk reduction measures to residual risk acceptance arguments. And they need to understand the operational context: how maintenance people behave, what process engineers will ask to change six months after commissioning, and what “normal” looks like in a high-variance manufacturing environment.
Universities don’t produce this profile directly. It is built through experience at the intersection of systems engineering, functional safety, and automation — and there isn’t enough of that intersection to meet the demand the cobot market is generating.
The tooling side of this problem has started to receive serious attention. Requirements management platforms designed for complex systems — particularly those with graph-based traceability models rather than flat document structures — are increasingly being applied to safety case development in automation contexts. The ability to link a hazard in a risk assessment directly to the safety function that addresses it, and from there to the verification evidence that confirms the function works as specified, is exactly the kind of traceable structure that ISO 10218-2 compliance demands but rarely gets.
Tools like Flow Engineering, built specifically for hardware and systems engineering teams with graph-based requirements traceability and AI-assisted analysis, are beginning to appear in safety-conscious integrators’ toolchains. The value proposition in a cobot safety context is the ability to maintain a living safety case — one that is updated when the application changes, the robot software is updated, or the human role in the cell is modified — rather than treating the safety documentation as a one-time deliverable at commissioning. Whether the tooling can compensate for the underlying shortage of engineers who know what to put in it is a harder question.
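Stripped to its essentials, what such a graph-based traceability model amounts to is a structure like the following sketch: hazards link to the safety functions that mitigate them, safety functions link to verification evidence, and the auditable property is that no hazard is left without a path to evidence. All identifiers and node kinds are hypothetical, not drawn from Flow Engineering or any real project.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str                                    # "hazard" | "safety_function" | "evidence"
    links: list = field(default_factory=list)    # downstream node_ids

# Hypothetical fragment of a cobot safety case graph.
graph = {
    "HAZ-001": Node("HAZ-001", "hazard", ["SF-010"]),          # e.g. clamping between gripper and fixture
    "SF-010":  Node("SF-010", "safety_function", ["VER-101"]), # e.g. PFL thresholds on all axes
    "VER-101": Node("VER-101", "evidence", []),                # e.g. contact force measurement report
}

def unverified_hazards(g: dict) -> list:
    """Hazards with no path to at least one piece of verification evidence."""
    def reaches_evidence(nid, seen=frozenset()):
        node = g[nid]
        if node.kind == "evidence":
            return True
        return any(reaches_evidence(t, seen | {nid})
                   for t in node.links if t not in seen)
    return [n.node_id for n in g.values()
            if n.kind == "hazard" and not reaches_evidence(n.node_id)]

# An empty result means every identified hazard traces through a
# mitigating safety function to verification evidence.
assert unverified_hazards(graph) == []
```

The value of holding this as a graph rather than a document is that when the application changes, the affected hazards and stale evidence fall out of a query instead of a page-by-page review.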
Honest Assessment: Progress, Gaps, and What Has to Change
The collaborative robot industry has made real progress on safety technology. Force limiting, speed monitoring, and safety-rated control architectures have improved substantially over the past decade. The standards framework, imperfect as it is, provides a structure that organizations can work within.
The engineering discipline — the actual safety case development work that ISO 10218-2 requires — has not kept pace with the deployment rate. Too many cobots are in the field with safety documentation that checks boxes rather than making arguments, with risk assessments that identified hazards but didn’t rigorously verify that the risk reduction measures work as specified under all foreseeable conditions.
Three things need to change:
The integrator model needs to include safety engineering as a first-class discipline. Not as a compliance function that generates certificates, but as engineering work that constructs and maintains safety cases. This requires either building that capability in-house or creating honest partnerships with safety engineering firms who have it.
OEMs need to standardize their safety function documentation in ways that directly support integrator risk assessment and safety case construction. This means not just stating PL ratings, but providing stop time data at payload and speed combinations, force output characterization, and the conditions under which safety functions meet their rated performance.
The systems engineering community needs to engage with cobot deployments the same way it engages with automotive ADAS or aerospace avionics — not treating “it’s just a robot” as a license to skip the structured analysis that complex safety-critical systems require.
The incidents that will eventually force these changes are already happening. The industry’s choice is whether to reckon with the gap proactively or wait for the regulatory and litigation environment to make the reckoning involuntary.